
Why brand authority beats topical authority in AI search

4 May 2026 at 18:00

There’s a fundamental battle happening in search right now.

  • On one side is topical authority — the darling phrase of every SEO consultant who needs to sell more content.
  • On the other is brand authority — something marketers have talked about for decades, while much of search treated it as optional, vague, or something the brand team could handle after the sitemap was fixed.

Now AI has walked into the room, kicked over the furniture, eaten half the traffic, and exposed the real problem.

Search still matters. The global economy runs on people looking, comparing, buying, and solving problems through it. But the industry has a marketing problem.

And it shows. Too many SEOs have lost the plot on why people choose, remember, trust, search for, recommend, and buy from brands. AI search is making that ignorance harder to hide. That’s why brand authority wins — but not in the way most SEO dashboards suggest.

Topical authority was never supposed to mean content landfill

Before we get to AI, we need to define what topical authority was meant to be. At its best, it’s simple. 

You publish useful work, create evidence, and share expertise. Others cite you, journalists mention you, communities discuss you, and customers search for you. Over time, your brand becomes associated with the topic. That’s authority. It’s also brand building.

The problem is that much of the SEO industry hasn’t sold it that way. In practice, topical authority became a convenient commercial wrapper for content production.

SEO retainers were built around three pillars: technical, content, and links. Technical SEO became more specialized. Links were outsourced, packaged, renamed, earned through digital PR, or bought in one way or another. 

Content, meanwhile, remained the dependable agency engine — easy to sell, scope, and report. Think 4-8 blog posts a month, a topical map, a content hub, a cluster, a pillar page, and another 2,000 words on something nobody asked to read.

This wasn’t always wrong. In the pre-AI search world, content had real labor behind it. A decent article required research, writing, editing, optimization, internal linking, and promotion. That work had value. Good content could rank, attract links, build email lists, support commercial pages, and create some advertising effect through exposure.

Back in the day, we built what were often called power pages — strategic assets designed to earn links, rank, get shared, and pass equity to commercial pages. They had a purpose. They weren’t created just because the spreadsheet had another empty cell.

Topical authority changed that logic. It turned “let’s create something worth citing” into “let’s cover every possible keyword in the topic map and hope Google mistakes volume for expertise.” That was the original sin.


Authority is what others say about you

Authority isn’t created by what you publish on your own site. It’s created when you become a recognized source.

Former Google engineer Jun Wu described this in terms of “mention information” — how search engines analyze natural language, identify topic phrases and sources, cluster related terms, and map associations between sources and topics. 

In plain English, they can recognize when certain brands, people, domains, and entities are repeatedly mentioned in relation to specific topics.

Today, SEOs call that brand co-occurrence. The idea isn’t new. When authoritative sites, journalists, communities, reviewers, experts, and customers consistently mention your brand in relation to a topic, you become associated with it — not because you published hundreds of near-identical articles, but because the wider web treats you as relevant.
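As a toy illustration only (nothing like the systems search engines actually run, and the brands, topic terms, and documents below are invented), brand-topic co-occurrence can be counted like this:

```python
from collections import Counter

def co_occurrence_counts(documents, brands, topic_terms):
    """Count how often each brand appears in the same document as any
    topic term -- a crude proxy for brand-topic association."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        # Only documents that mention the topic at all can contribute.
        if not any(term in text for term in topic_terms):
            continue
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    return counts

docs = [
    "Acme Grills tops our smash burger equipment roundup.",
    "For smash burger patties, chefs recommend an 80/20 blend.",
    "Acme Grills released its annual smash burger industry report.",
    "Latest phone reviews: nothing to do with burgers here.",
]
counts = co_occurrence_counts(docs, ["Acme Grills", "BurgerCo"], ["smash burger"])
```

The point of the sketch: "Acme Grills" earns associations because other documents mention it alongside the topic, while "BurgerCo" earns none no matter what it publishes on its own site.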

Topical coverage is what you say about yourself. Authority is what the market says about you. AI search makes that difference hard to ignore.

The smash burger test

Suppose you want to become an authority in the smash burger industry. You probably don’t, but some topical authority consultant calling themselves a “semantic SEO” is likely pitching it to a fast food brand right now.

An SEO version of topical authority would probably begin with a map:

  • What is a smash burger?
  • Best meat for smash burgers.
  • History of smash burgers.
  • Smash burger recipes.
  • Smash burger toppings.
  • Smash burger glossary.
  • Best smash burger restaurants.
  • How to make a smash burger at home.

There’s nothing inherently wrong with that. If you run a serious smash burger publication, restaurant group, food brand, or equipment business, some of it might be useful. But authority doesn’t come from publishing those pages.

Real authority looks different. You create original data on the fastest-growing smash burger chains. You publish an index of the best-rated smash burger restaurants in the U.S. and U.K. You interview chefs, test meat blends, and produce videos people actually watch. 

You become a source journalists use when covering the category. Food creators reference your data. Restaurant owners subscribe to your newsletter. People search for your brand plus “smash burger report.”

That’s topical authority. It’s also brand authority.

The thin SEO version is publishing thousands of keyword pages and internally linking them until your CMS starts begging for death. The real version is becoming known.

AI has broken the old content economics

The old commercial defense of topical authority was traffic.

Brands didn’t hire search marketers because they had a deep spiritual yearning to become encyclopedias. They hired them for organic revenue growth — to appear when customers searched, and to drive clicks, leads, and sales.

Informational content was sold, in part, as advertising. Someone searches a question, lands on your article, and sees your brand. Maybe they join your email list, return later, or buy.

That model was always more fragile than the industry admitted. Most users don’t sit around thinking about your B2B SaaS platform, your dog food brand, or your running shoe category page. 

Ask someone to name ten toothpaste brands, and they’ll struggle, despite a lifetime of exposure. Ask them to recall the last ten TikToks they watched, and watch their face collapse.

Advertising works through memory structures, distinctive assets, repeated exposure, and relevance. A single accidental visit to a generic “what is” article was never the brand-building miracle some content marketers claimed.

Now AI has made the economics worse. For many informational searches, answers are increasingly synthesized before the click. From the user’s point of view, that’s often a better experience.

My dad is in his 70s. He loves AI Overviews. He doesn’t want to click through three ad-infested recipe pages, dodge newsletter popups, reject cookies, scroll past a life story, and finally find how long to boil an egg. He wants the answer.

Users aren’t mourning your lost organic session. They’re getting on with their lives. That’s the uncomfortable truth.

If the click disappears, much of the supposed advertising effect of informational content disappears with it — no logo exposure, no distinctive assets, no remarketing pixel, no email capture, and no carefully designed journey. Just your content absorbed into a synthesized answer, and maybe a small source link on the side.

AI citations aren’t the same as human citations

This brings us to another emerging industry obsession: AI citations. 

The small source boxes in ChatGPT, Gemini, Perplexity, AI Overviews, and other AI search experiences are being treated as the new holy metric. Agencies, tools, and consultants are already building around it.

The SEO industry loves a single metric — domain authority, traffic, keyword positions, share of voice, and now AI visibility. The problem is that an AI citation isn’t the same as a human citation.

An AI citation is often a helpful link — a reference, a retrieval artifact. It’s directionally useful. It can show what sources a system uses to support an answer, and whether your content is accessible, relevant, and being surfaced in certain contexts.

But it’s not the same as:

  • A journalist choosing to cite your research. 
  • A customer recommending you in a forum.
  • A creator reviewing your product.
  • A trade publication naming your brand as an expert source.

Human citations are evidence of market recognition. AI citations are evidence of machine retrieval. Don’t confuse the two.

The goal isn’t to be scraped. It’s to be recommended.

Brand search is the cleaner signal

If you want a better proxy for whether your authority is growing, look at brand search.

People search for brands they know, are considering, have bought from, or were recommended. Brand search isn’t perfect, but it’s much closer to commercial reality than counting how often a chatbot footnotes your blog post.

That’s why share of search matters. It gives you a directional view of market demand and mental availability. If more people are searching for your brand relative to competitors, something is happening. Your advertising, PR, product, reviews, word of mouth, content, partnerships, social presence, and customer experience are creating demand.
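Share of search is simple arithmetic over brand search volumes. A minimal sketch, with made-up brand names and volumes:

```python
def share_of_search(brand_volumes):
    """Each brand's search volume as a fraction of total brand-search
    volume in the category (a directional proxy for mental availability)."""
    total = sum(brand_volumes.values())
    return {brand: volume / total for brand, volume in brand_volumes.items()}

# Hypothetical monthly brand-search volumes for one category.
shares = share_of_search({"YourBrand": 12_000, "RivalA": 30_000, "RivalB": 18_000})
```

Tracked over time, a rising share relative to competitors is the signal that something (advertising, PR, product, word of mouth) is creating demand.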

This is where the “but this is just SEO” crowd starts clearing its throat.

It’s not “just SEO.” Or rather, it’s only SEO if you define it so broadly that it includes every activity that might influence a search result. That’s strategic ambiguity. It lets everyone claim they were doing the future all along.

Most SEO retainers weren’t building brand fame. They were producing content, fixing technical issues, buying or earning links, and reporting rankings. Sometimes it worked — sometimes very well. But the average topical authority strategy wasn’t a sophisticated brand visibility program.

Traditional SEO still matters

None of this means you abandon traditional SEO. Buyer-intent rankings, category pages, product pages, local pages, technical SEO, internal linking, structured data, reviews, and crawlability matter. 

Search still works as a shelf. Many brands are discovered for the first time in supermarkets. The same is true in Google. If someone searches “emergency locksmith near me,” “best trail running shoes,” or “meeting intelligence software,” you want to appear.

Being found still matters, but it’s not the same as being recommended. Traditional SEO helps you get found, while brand authority drives recommendation. 

AI search shifts the balance toward the latter, synthesizing options, reducing uncertainty, and often naming brands, products, and solutions directly.

The new job is meaningful visibility

Semrush accidentally said the quiet part out loud with its April Fools’ “Brand Visibility Expert” stunt, where employees changed their titles on LinkedIn. It was a joke, but not entirely. 

The company later described AI visibility tools that track brand visibility, mentions, prompts, perception, and competitor presence in AI search. That’s where the market is going.

The future of search marketing isn’t just search engine optimization. It’s brand visibility across the network.

That means increasing meaningful visibility in the places where humans and AI systems encounter information: 

  • Search engines.
  • AI answers.
  • Review sites.
  • Communities.
  • YouTube. 
  • Reddit.
  • Trade media.
  • News sites.
  • Podcasts.
  • Influencers.
  • Comparison pages.
  • Customer reviews.
  • Social platforms.
  • Partner ecosystems.
  • Your own site.

The web is now the surface, and your website is just one part of it. This is the shift many SEOs don’t want to face. Many are used to optimizing owned pages for search engines. 

The next era is about optimizing a brand’s presence across the web. That requires different work.

Start with positioning

If you want to build brand authority in AI, start with positioning.

  • Who are you for?
  • What problem do you solve?
  • How do you solve it better?
  • What should the market associate with you?
  • What proof supports that claim?

These aren’t fluffy brand questions. They’re search questions now.

  • A locksmith isn’t only an emergency locksmith. They may install commercial locks, repair window locks, replace garage locks, secure doors, and provide security advice. 
  • A running shoe retailer may want to be known for trail running expertise, fast delivery, wide range, gait analysis, competitive pricing, or specialist advice. 
  • A SaaS platform may want to be known for extracting meeting intelligence that helps sales teams improve conversion.

These are performance attributes — the reasons people choose you. Your search strategy should reinforce them.

If your pet food brand specializes in sensitive stomachs, you need to be visible around dog dietary problems — not just on your blog, but in vet commentary, buyer guides, reviews, creator content, journalist coverage, customer stories, comparison pages, and data studies. 

These are the places where humans and AI systems learn what’s credible. That’s brand authority.

Create things worth being cited by humans

The rule for AI-era content is simple. Every piece of content should have real-world marketing value at publish.

If one person encounters it, they should understand your brand better, feel more positively about it, remember something useful, or be more likely to trust you.

If content only makes sense as an SEO asset after it ranks, it’s probably weak.

This means you stop creating “dead” content. Instead:

  • Create original research. 
  • Publish category data. 
  • Build useful tools. 
  • Share expert commentary. 
  • Produce strong product comparisons. 
  • Release reports journalists can cite. 
  • Create opinionated guides. 
  • Review products properly. 
  • Explain problems better than competitors. 
  • Make videos people want to watch. 
  • Turn internal data into public insight. 
  • Build assets that earn links and mentions.

Do fewer things. Make them better. Promote them harder.

Brands have limited budgets — smaller ones have even less room for waste. Spending thousands on a content library that repeats known information may be less effective than using the same budget to create one excellent data study, seed it with journalists, get creators talking, earn reviews, improve product pages, and run ads that make people search for your brand.

Ask yourself, “What use of this budget is most likely to increase brand search, links, mentions, reviews, and recommendations?”

Fitness times visibility equals success

A useful idea from network science applies here: success is driven by fitness multiplied by visibility.

  • Fitness is your ability to outperform alternatives — product, service, price, expertise, speed, range, design, convenience, proof, reviews, and customer experience.
  • Visibility is how often and how meaningfully the market encounters those signals.

Fitness without visibility is a brilliant brand nobody knows. Visibility without fitness is hype — and it usually collapses. 
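The multiplicative relationship is the whole point: zero on either axis zeroes out the result. A deliberately simplistic sketch, with invented scores:

```python
def success_score(fitness, visibility):
    """Toy version of the network-science heuristic: success scales with
    fitness (how good you are) times visibility (how often the market
    encounters you). Zero on either axis means zero success."""
    return fitness * visibility

# A brilliant brand nobody knows vs. a decent brand everyone sees:
unknown_genius = success_score(fitness=0.9, visibility=0.05)
visible_decent = success_score(fitness=0.6, visibility=0.5)
```

The numbers are arbitrary; the lesson is that the merely decent but widely seen brand outscores the brilliant but invisible one.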

That’s how preferential attachment starts. Brands that are talked about get talked about more. Brands that are searched get searched more. Brands that earn links earn more links. Brands that become default sources are cited more often. Brands that sell more get more reviews, more mentions, more data, and more presence.

AI accelerates this dynamic, consuming the web faster than humans and reinforcing those signals at scale. If your brand has dense, consistent, and credible associations with the problems you solve, you reduce uncertainty that you’re a good recommendation.


What actually wins in AI search

Brand authority wins in AI — because real topical authority was always brand authority.

The version of topical authority that deserves to survive is the one where a brand becomes a genuine source in its category — creating useful information, earning mentions, building demand, getting searched, getting cited, and becoming associated with the problems it solves.

The version that deserves to die is the one where a brand publishes endless keyword-targeted sludge and calls the result authority.

AI hasn’t killed SEO. It’s killed the illusion that mediocrity deserves traffic.

The search marketers who win next won’t be the ones who publish the most. They’ll be the ones who make brands more meaningfully visible across the internet. They’ll understand positioning, PR, content, technical SEO, reviews, creators, category demand, links, mentions, and brand search as one connected system.

The goal isn’t to optimize for search engines, but for the network they use to understand the world.

Build the brand. Make it visible. Make it worth recommending. Everything else is just content with delusions of grandeur.

7 tools for doing AEO right now

4 May 2026 at 17:00

The other day, I was putting together my version of a Lumascape of answer engine optimization (AEO) tools — I’m kidding, my computer doesn’t have that kind of bandwidth.

Instead of mapping every tool — which would be outdated in minutes — I’m focusing on the ones I actually use to grow clients’ AI search presence.

This is a deliberately short list: four tools I rely on, plus three I’m testing before adding them to my team’s stack.

1. AI assistants (ChatGPT, Claude, Perplexity)

Used thoughtfully, large language model (LLM) assistants are research and analysis tools in their own right. For AEO work specifically, they serve several distinct purposes: 

  • Competitive landscape research.
  • Content gap analysis.
  • Prompt testing.
  • Entity and topical coverage audits.
  • Structured content drafting. 

The key distinction from passive use is intentionality — using these tools with a defined AEO research methodology rather than ad hoc.

Why they’re essential

AEO requires a fundamental understanding of how AI systems process and represent information. The most direct way to develop that understanding is to work regularly and analytically within those systems. 

Querying AI assistants with the same prompts your target audience uses — and carefully analyzing what they return, what sources they cite, what entities they associate, and how they structure answers — gives you peerless ground-level intelligence.

Competitive strengths

Each platform has its own strengths worth noting:

  • ChatGPT is widely used and offers broad general knowledge synthesis, making it useful for understanding how mainstream AI handles queries in your category.
  • Claude tends toward more nuanced, caveated responses and is strong for analytical tasks.
  • Perplexity is citation-heavy by design and particularly valuable for AEO research precisely because it surfaces its sources explicitly. You can see in real time which domains are being pulled and why.

What you can’t do without them

Firsthand research on your brand’s current AEO status, which includes:

  • Manual prompt testing: See how your brand and content are being represented.
  • Competitive research: Query AI systems with category-level questions to see which competitors appear and how they are framed.
  • Topical gap analysis: Identify questions AI systems answer where your brand is absent.
  • Structural content analysis: Understand the answer formats (lists, definitions, comparisons, how-tos) that AI systems prefer for your query types.

Caveats

AI assistant outputs are non-deterministic and vary by platform, model version, session context, and even time of day. Manual prompt testing is qualitative and difficult to scale. These tools are best used to build intuition and generate hypotheses, which should then be validated with quantitative data from platforms like Profound. 

Also worth noting: querying AI systems for competitive research can quickly become a rabbit hole, so before you truly dig in, build a structured testing framework and stick to it.
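A minimal version of such a framework might just log each prompt run and compute a brand mention rate. The answers below are hard-coded stand-ins; in practice you would paste in real assistant responses or wire in an API client:

```python
from dataclasses import dataclass, field

@dataclass
class PromptRun:
    prompt: str
    answer: str
    brand_mentioned: bool

@dataclass
class PromptTestLog:
    """Structured log for manual prompt testing against one brand."""
    brand: str
    runs: list = field(default_factory=list)

    def record(self, prompt, answer):
        mentioned = self.brand.lower() in answer.lower()
        self.runs.append(PromptRun(prompt, answer, mentioned))

    def mention_rate(self):
        if not self.runs:
            return 0.0
        return sum(r.brand_mentioned for r in self.runs) / len(self.runs)

log = PromptTestLog(brand="Acme Grills")  # hypothetical brand
log.record("best smash burger equipment",
           "Many chefs recommend Acme Grills presses.")
log.record("how to make a smash burger",
           "Press the patty hard on a hot griddle.")
```

Even this crude structure beats ad hoc querying: the same prompts, run on a schedule, give you a trendable number instead of a vibe.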


2. Profound 

Profound is purpose-built AEO intelligence that monitors how AI platforms (ChatGPT, Perplexity, Google AI Overviews, Claude, etc.) discover, surface, and cite your brand and content. 

It also tracks brand mention frequency and sentiment, competitors’ share of voice, and the specific prompts or query types that trigger your content to appear in AI-generated answers.

Why it’s essential

If you want to understand where your brand stands in the AI answer ecosystem, it’s currently the most direct way to get that data. It shifts the question from “where do we rank?” to “when AI answers a question in our category, are we in the answer?”

Competitive strengths

The cross-platform coverage is the tool’s most distinctive feature. Rather than measuring a single AI engine in isolation, it provides a comparative view across the major platforms simultaneously. The competitive benchmarking functionality is particularly useful: you can see both your own AI citation share and how it stacks up against named competitors. It’s the kind of context that transforms data into strategy.

What you can’t do without it

Some fundamental capabilities, like:

  • Quantifying your brand’s presence in AI-generated answers at scale.
  • Tracking citation share over time and across platforms.
  • Identifying which content types and topics drive AI mentions — and which competitors are winning the queries you’re losing.

It’s a pretty expensive tool. If you want to justify the expense to your C-suite, tell them, “This will show us exactly where we’re losing to {most hated competitor}.”

Caveats

The tool is evolving quickly, which it needs to do as the AEO landscape morphs in real time. The data it surfaces reflects AI outputs at the time the query is made. Outputs are inherently variable because AI systems don’t return the same answer to the same prompt every time.

Treat metrics as directional signals and trend data rather than precise, static rankings. It also won’t tell you why you’re being cited or not. That’s on you and your team to analyze.

3. Google Trends and Google Keyword Planner

Google Trends tracks the relative search interest for queries over time, across geographies, and in comparison to related terms. Google Keyword Planner provides search volume estimates and demand forecasting, originally designed for paid search planning but equally useful for organic and AEO strategy.

Why they’re essential

AEO strategy lives and dies by understanding demand signals. Before optimizing content to appear in AI answers, you need to know what questions people are actually asking, how that demand is trending, and whether the topic has enough volume to warrant investment. 

Google’s tools remain the most reliable source of this data at scale — and crucially, they reflect the same underlying search behavior that feeds into AI engine training data and query patterns.

Competitive strengths

Google Trends is uniquely powerful for directional trend analysis. It doesn’t give you absolute volume, but it gives you relative momentum — which is often more strategically valuable when you’re trying to anticipate where audience interest is heading rather than just where it has been.

For AEO specifically, rising query trends can signal emerging answer opportunities for you to address before your competitors do. 

In my experience, Keyword Planner’s forecasting features are underused. They can help you prioritize content investment based on projected demand rather than historical data alone.

What you can’t do without them

Build a truly dynamic AEO strategy in which you:

  • Understand whether demand for a topic is growing, stable, or declining before building content around it.
  • Identify seasonal patterns that should shape content publishing calendars.
  • Surface related queries and rising breakout terms that expand your AEO content coverage.
  • Validate whether a topic has enough search demand to justify the content investment.
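Because exported Google Trends data is a relative 0-100 series, a simple least-squares slope is enough to classify direction. A sketch, with an invented weekly interest series and an arbitrary threshold:

```python
def demand_trend(weekly_interest, threshold=0.5):
    """Classify a Trends interest series as growing, stable, or declining
    via a least-squares slope. The series is relative (0-100), so only
    the direction is meaningful, not the magnitude."""
    n = len(weekly_interest)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(weekly_interest) / n
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, weekly_interest))
        / sum((x - x_mean) ** 2 for x in xs)
    )
    if slope > threshold:
        return "growing"
    if slope < -threshold:
        return "declining"
    return "stable"

trend = demand_trend([20, 24, 31, 38, 47, 55, 61, 70])
```

Run this over the exported series for a candidate topic before committing budget to it, and again quarterly to catch momentum shifts.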

Caveats

As you probably noticed when I recommended those tools, neither reflects AI-native query behavior directly. They measure traditional search, not prompts submitted to ChatGPT or Perplexity. 

As information-seeking behavior shifts toward AI interfaces, these tools will increasingly undercount true demand. Use them as a strong proxy and directional guide, not as a complete picture.

Worth noting: Keyword Planner also requires an active Google Ads account, and volume estimates in low-competition or niche categories can be imprecise.

4. Google Search Console and Google Analytics

Google Search Console (GSC) provides direct data on how your site performs in Google Search: which queries trigger impressions, click-through rates, average positions, and indexing status. 

Google Analytics 4 (GA4) tracks on-site behavior — how users arrive, what they do, how long they stay, and where they exit — including referral traffic sources that reveal whether visitors are arriving from AI-adjacent platforms.

Why they’re essential

For AEO practitioners, these tools serve critical diagnostic functions.

GSC tells you whether the content you’re optimizing for AI citation is also performing in traditional search, which matters because Google AI Overviews and traditional organic results draw from overlapping content pools.

GA4’s referral traffic data is increasingly important for detecting direct traffic from AI platforms: as users click through citations in tools like Perplexity or ChatGPT’s browsing mode, that activity shows up as referral or direct traffic. That’s worth segmenting and monitoring, even if, given the scorching rise of zero-click activity, it paints a very incomplete picture of your AEO impact.
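One hedged way to start that segmentation: classify sessions by referrer domain against a list of AI platforms. The domain list below is illustrative, not exhaustive; verify it against the referrers actually appearing in your GA4 reports, since AI platforms change their referral domains over time:

```python
from urllib.parse import urlparse

# Illustrative list -- confirm against your own GA4 referrer data.
AI_REFERRERS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def is_ai_referral(referrer_url):
    """Rough check: does a session's referrer come from a known AI platform?"""
    host = urlparse(referrer_url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return host in AI_REFERRERS

sessions = [
    "https://www.perplexity.ai/search?q=best+trail+shoes",
    "https://www.google.com/",
    "https://chatgpt.com/",
]
ai_sessions = [s for s in sessions if is_ai_referral(s)]
```

In GA4 itself, the equivalent is a custom channel group or segment built on the session source dimension, but a script like this is handy for classifying exported data.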

Competitive strengths 

GSC’s query data is irreplaceable. No third-party tool has access to the same level of Google-sourced search performance data. The ability to see exactly which queries are driving impressions (even without clicks) is foundational for identifying content that has topical authority but may not be converting visibility into AI citations. 

GA4’s cross-channel attribution and audience analysis capabilities help you understand where AEO-driven traffic comes from and what that traffic does when it arrives — which is the commercial case for the discipline.

What you can’t do without them

Develop a true understanding of AEO business impact — and AEO blockers — by:

  • Measuring whether your AEO content investments translate into actual traffic and engagement.
  • Identifying content with high impression share but low CTR — a common signal of AI Overview cannibalization.
  • Monitoring referral traffic from AI platforms as that ecosystem matures.
  • Diagnosing indexing or crawlability issues that prevent AI systems from accessing your content.
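Flagging the high-impression, low-CTR pattern from a GSC export is straightforward. A sketch with invented rows and arbitrary thresholds (tune both to your site's baselines):

```python
def flag_possible_cannibalization(rows, min_impressions=1000, max_ctr=0.01):
    """Flag queries with many impressions but almost no clicks -- one
    possible signal that an AI Overview is absorbing the click."""
    flagged = []
    for row in rows:
        impressions = row["impressions"]
        ctr = row["clicks"] / impressions if impressions else 0.0
        if impressions >= min_impressions and ctr <= max_ctr:
            flagged.append(row["query"])
    return flagged

# Hypothetical rows from a GSC performance export.
rows = [
    {"query": "how long to boil an egg", "impressions": 50_000, "clicks": 120},
    {"query": "buy egg timer", "impressions": 3_000, "clicks": 240},
]
flagged = flag_possible_cannibalization(rows)
```

Low CTR alone proves nothing; treat the flagged queries as a shortlist to check manually against the live SERP.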

Caveats

GSC data has well-documented limitations: it samples at scale, attribution can be murky, and data is typically available with a 48-72 hour lag. Critically, it only reflects Google. It tells you nothing about how you perform in Bing-powered AI search or standalone AI platforms. 

GA4 still has UX rough edges, so you’ll need to confirm that your event tracking and conversion configuration is solid before drawing strategic conclusions from the data.

Rapid-fire roundup 

That shortlist leaves, oh, thousands of tools still to consider. I recommend putting these on your radar and testing them to gauge their value as the AEO ecosystem develops.

5. AI Trust Signals

AI Trust Signals focuses on the credibility and trustworthiness signals that influence whether AI systems choose to cite a source.

This is an emerging and underexplored dimension of AEO: it goes upstream from content relevance and helps brands understand whether an AI system “trusts” a domain enough to surface it as an authoritative reference. It’s worth monitoring as the understanding of AI citation mechanics matures.

6. Ahrefs

Ahrefs is a mature SEO platform with deep backlink analysis, content gap tooling, site auditing, and keyword research capabilities. 

Its relevance to AEO is primarily indirect, but it’s significant: authority signals, including referring domain quality and topical authority depth, are widely believed to influence AI citation likelihood. Ahrefs is a benchmark tool for understanding and building that authority infrastructure.

Its Content Explorer is also a practical tool for identifying high-performing content in your category that AI systems are likely to draw from.

7. Roadway AI

Roadway AI positions itself as an AI-native platform focused on scaling growth marketing activities. Where it helps is in building agents that attribute AEO signals to revenue, so you can better understand impact. 

As a newer entrant, it’s worth evaluating as part of a toolkit audit, especially if you’re looking for tooling built specifically for AEO use cases. The category is moving fast, and platforms like Roadway AI may gain significant mindshare within 12 months, which also means more competitors are coming soon.


The reality of AEO tools: Fast-moving and imperfect

AEO tooling is still catching up to AEO as a discipline, which will likely be the dynamic for the next few years, at least.

Everything is changing so fast, and AI-driven discovery is evolving as users adopt new behaviors that vary by vertical. What matters is consistently applied measurement, strong analysis, and testing that lead to actionable insights.

You won’t get your setup perfect. Like much of marketing, solidly directional is probably as good as you’re going to get. With any tool, if you can explain and measure how it improves your AEO efforts, that’s a great start. 

Before you sign any contracts, see if you can find an industry colleague with real-life experience using the tool, and ask them for their take. Unless they’re staunch advocates, chances are you can either find an alternative that does the same thing better or cheaper, or you can wait another month for one to emerge.

Apparently, OSSAA thinks sports is only a man's world | Opinion

What the heck were leaders of the Oklahoma Secondary School Activities Association thinking when they proudly announced they had chosen only men to serve on a powerful new committee to review the governing body’s rules?

Do they think we still live in 1911, when the body was first created and women weren’t even constitutionally guaranteed the right to vote?

Because the alternative is that the leaders of this body have apparently been hanging out in the men’s locker rooms imbibing so much testosterone-fueled Kool Aid that it has addled their brains and made them forget that women can lead, too.

I’m beginning to get a sense of why lawmakers are so fed up with the Oklahoma Secondary School Activities Association, better known as the OSSAA, and why Republican Gov. Kevin Stitt called for the group to be abolished during his State of the State address in February.

POINT: OSSAA should be subject to accountability | Opinion

OSSAA supporters say the body is a scapegoat for making difficult regulatory oversight decisions. But critics say it is an archaic board that metaphorically lives in the Stone Age and is failing to evolve with the times.

The fact that this body could not even bother to appoint a single woman to serve on their latest committee is only going to add fuel to that latter argument, because I’m not the only one who has taken notice.

Here’s a news flash: These days, tens of thousands of women and girls compete in sports, band, speech and all sorts of other high school activities that this association is entrusted to govern.

An OSSAA logo is pictured at OG&E Coliseum in Oklahoma City on Tuesday, March 3, 2026.

Interest in women’s sports is growing in a big way. The WNBA has seen its attendance numbers explode. The value of National Women’s Soccer League teams is up.

Oklahoma City is home to a professional women’s softball team, the Spark. Our state capital also hosts the Women's College World Series annually, and will welcome Olympic softball in 2028.

After years of being forced to use equipment designed for men, now there’s even a booming industry that sells items such as shoes and bicycles to address the unique needs of athletic women.

In the sporting world, women are no longer an afterthought.

COUNTERPOINT: OSSAA elevates local voices, school-led decisions | Opinion

That’s why it’s so puzzling that the OSSAA apparently doesn’t believe women deserve to have representation on this new, 15-member committee that will meet once a month between May and December to, as The Oklahoman put it, “analyze rules and pitch changes” to the entity’s 15-member board of directors.

The membership list of that board of directors, coincidentally, is also predominantly male. An internet search shows just two women serving on it.

This is not an inconsequential nonprofit.

The OSSAA’s survival is intertwined with that of the 482 public and private schools that make up its membership. It also operates on a budget of about $8 million, which is largely paid for by playoff ticket sales.

The nonprofit was founded over a century ago to bring fairness to high school sports and serve as the regulatory body not just for sporting competitions, but also nonsporting contests like band, drama and debate. 

It determines scheduling, trains over 11,000 judges and referees, enforces regulations and — more controversially of late — has the high-stakes responsibility of determining whether students are eligible to compete after transferring schools and ensuring no recruiting violations have occurred.

The Oklahoman first reported in April that the OSSAA was creating a committee to dig into the body’s 24 rules and was “accepting nominations” from all its members, but most particularly looking for athletic directors, principals and superintendents.

It took just days for OSSAA to announce that it had wrapped its search. That zest for manliness produced, of course, all men from a crop of school superintendents, assistant superintendents, athletic directors and assistant athletic directors.

Besides women, also noticeably missing from its ranks: leaders of nonsport competitions in the fine arts or drama, not to mention debate teachers or band directors.

“We know this group of people who work with students every day will be able to truly understand the impact of these rules, find gaps and improve them for the betterment of our students and teams,” said David Jackson, OSSAA’s executive director, who is also a man.

He told Fox 23 News that they received over 150 applications and “selected 15 who bring a wide range of ideas and perspectives from across the state.”

I find it hard to believe that there was not a woman as equally qualified as these men out of over 150 applicants.

Jackson also told The Oklahoman that OSSAA wants everyone to “know these decisions were made with students and families in mind.”

Which students? Just boys?

Look, diversity matters.

Women bring different perspectives and lived experiences. They’re traditionally the primary caregivers for children, make up the majority of educators, and are often the de facto stay-at-home parent when one parent must give up a career.

In general, studies have found that women typically hold fewer leadership positions and are less likely to apply for employment and other career advancement opportunities if they don’t meet every single requirement. That’s due in part to persistent gender stereotypes (like perhaps the perception that sports are for men).

Plus, women’s sports are generally given short shrift compared to men’s. Just look at FBI Director Kash Patel’s decision to celebrate the U.S. men’s Olympic hockey team’s gold medal victory. Just days earlier, he was nowhere to be found when the women also took home gold.

OSSAA should be doing everything they can to build girls' sports, not tear them down. 

One can’t help but wonder why, when already under the legislative microscope, OSSAA seems hellbent on inflicting more self-harm. Do they actually want to go the way of the dodo?

Because even if this lack of diversity was an accidental oversight, it raises some serious questions about why nobody internally noticed it or was emboldened to flag it.

And it raises some confounding questions about what OSSAA wants to be. 

Do they think women deserve a voice in their decision-making?

And if they do, who is going to have the gonads to hit pause on this debacle? Who among them will demonstrate the intestinal fortitude needed to insist OSSAA head back to the drawing board and actually conduct a search to find women who are ready to speak up?

Janelle Stecklein

Janelle Stecklein is editor of Oklahoma Voice. An award-winning journalist, Stecklein has been covering Oklahoma government and politics since moving to the state in 2014. Oklahoma Voice is part of States Newsroom, the nation’s largest state-focused nonprofit news organization.

This article originally appeared on Oklahoman: OSSAA should have given women a voice on overhaul board | Opinion

Why AI visibility starts before search and ends with citations

4 May 2026 at 16:00

The conversation has shifted. We’re spending less time optimizing for clicks and more time trying to fix the AI ROI story. AI now sits at the center of discovery, shaping what gets seen, summarized, and cited.

Here’s what’s working right now, what your peers are doing, and why SMX Advanced will feel different this year.

The SparkToro wake-up call: Influence happens everywhere

The foundation of any serious 2026 content strategy has to start with Rand Fishkin’s landmark March 2026 study, “Influence Happens Everywhere,” an analysis of the 5,000 most-visited sites on both mobile and desktop.

The finding that rattled the industry: while Google still commands 73% of search traffic, search itself is merely a response to influence created elsewhere.

People don’t wake up and search for a brand in a vacuum. They read, watch, and listen across a fragmented web of news, social media, and niche communities before they ever hit a search bar.

AI tools, despite their rapid growth, still account for a fraction of total web visits compared to the “big incumbents.” But the trajectory is unmistakable.

The fundamental problem with attribution in 2026 is that search gets over-credited because it captures demand at the finish line, while the fragmented channels — email, news, specialized content — get under-credited for creating that demand in the first place. 

When creating content, your job is to win the influence phase so thoroughly that when a user eventually turns to an AI assistant or a search bar, your brand is the only logical answer.

That framing is the strategic backbone behind sessions at the upcoming SMX Advanced in Boston, June 3-5, and the lens through which your entire editorial calendar should be rewritten.


What your Search Engine Land colleagues are already doing

Before we discuss tactics, it’s worth pausing to note that this publication’s own contributor base has been sounding the alarm in complementary ways. Read them together and a clear picture emerges.

Dave Davies, principal SEO manager at Weights & Biases and a regular SMX Advanced speaker, published a rigorous piece in December 2025, “Mentions, citations, and clicks: Your 2026 content strategy.” 

Drawing on Siege Media’s two-year content performance study covering more than 7.2 million sessions, Grow and Convert’s conversion research, and Seer Interactive’s AI Overview findings, Davies made the case that the metrics we’ve lived by — impressions, sessions, CTR — “no longer tell the full story.” 

Mentions, citations, and structured visibility signals, he argued, are becoming the new levers of trust and the path to revenue.

Carolyn Shelby, who appeared in a recent SMX Munich 2026 recap for her session “Inside Google’s Head,” crystallized what many of us have only half-articulated: AI doesn’t discover new brands — it selects from known entities. 

The implications are stark. If you haven’t built entity recognition across the web’s key reference points — Wikipedia, Reddit, LinkedIn, authoritative press coverage — you don’t get selected. 

My own October 2025 piece for this publication compared how ChatGPT, Perplexity, Gemini, Claude, and DeepSeek differ in their data sources, live web use, and citation rules. The conclusion I reached then is truer today: a single-platform AI strategy isn’t a strategy. Each model has different retrieval logic, different trust signals, and different recency weighting. 

Jordan Koene made the same point in January 2026, noting that different LLMs win different jobs. This heterogeneity is the fundamental reason why “write good content” is both correct and insufficient as advice.

What ‘full-stack content’ actually means

In 2024, we were impressed if an AI tool could write a decent 500-word blog post. Today, writing is the least interesting thing AI does.

Jasper’s 2026 Enterprise Suite is a useful illustration. It doesn’t just draft text; it:

  • Pulls real-time performance data from Google Search Console.
  • Identifies content gaps where competitors are gaining ground.
  • Generates a multimodal package: a 1,500-word deep dive, three vertical videos for YouTube Shorts, and custom infographics, all calibrated to a brand-voice model trained on your last five years of successful campaigns.

We have moved from “Help me write this” to “Help me dominate this topic.”

But tools don’t solve strategy problems. The harder question is “what should the content actually say?” AI can’t produce the original research, the proprietary case study, or the hard-won perspective that makes an LLM choose you over a dozen lookalike alternatives.

This is why the most interesting SMX Advanced session on content this year may be the one by Purna Virji of LinkedIn, who opens the conference with a keynote on fixing the broken AI ROI story before budgets get cut.

Her argument — that AI investment must generate measurable business outcomes “at the P&L level,” not just activity, efficiency, or content volume — is a direct challenge to teams that have been celebrating output metrics while their revenue dashboards flatline.

Google Vids and the democratization of video: A genuine inflection point

Perhaps the most significant platform shift for content creators in 2026 was Google moving Google Vids out of its Workspace-only silo. You can now create, edit, and share videos at no cost directly within the Google ecosystem, powered by the Veo 3 generative model.

For years, video production was protected by a high barrier to entry: expensive tools, specialist skills, and days of editing time. Google Vids collapses that barrier. Drop a Google Doc or a URL into the “Help me create” prompt, and you get a full-motion storyboard with AI-generated voiceovers, licensed music, and transitions in minutes.

The practical consequences are arriving fast:

  • Small agencies are now producing video-first content calendars that previously required five-figure budgets. The “if only we had video” excuse has expired.
  • Hyper-localization is becoming a baseline expectation. Using Vids’ automated dubbing and visual swapping, a single “hero” video can be localized for 20 different markets in an afternoon.
  • AI-generated summaries are already threatening video metadata. YouTube recently tested swapping video titles for AI-generated summaries. Brands that have not invested in clear entity signals and structured descriptions may soon find their video content renamed by an algorithm — not a person.

The strategic implication is the same as it was for text: AI tools lower the floor but raise the bar. Every competitor now has access to cheap video. But who has something worth saying in that video?

GEO, AEO, and the language problem

Depending on which Search Engine Land article you read in the past few weeks, the dominant framework for surviving this shift is either generative engine optimization (GEO) or answer engine optimization (AEO).

A growing number of contributors argue these terms are marketing noise for what is, at bottom, just good search everywhere optimization plus structured data plus earned media.

That debate is genuinely worth having, and it will be had at SMX Advanced. But for the practitioner who needs to make decisions next week, here’s what the evidence actually supports:

  • eMarketer’s Nate Elliott put it plainly in a recent FAQ: “Almost every GEO response is different from every other GEO response.” Between 40% and 60% of cited sources change month-to-month across Google AI Mode and ChatGPT, making AI visibility far less stable than organic search rankings. That volatility is the real risk, not the terminology debate.
  • Similarweb’s 2026 GenAI Brand Visibility Index, reported by Digiday, found that major publishers like Reuters and The Guardian receive less than 1% of referral traffic from AI platforms despite being frequently cited. Yet, The Washington Post found that visitors arriving from AI platforms convert to subscriptions at four to five times the rate of traditional search visitors. The volume-versus-value tension has never been more acute.

The practical translation of all of this:

  • In 2006, we optimized press releases for keyword density. In 2026, optimize for entity association: linking your brand to specific solutions in the AI’s knowledge graph.
  • Long-form blogs become modular content: Snippets, FAQs, and data tables designed for “chunk-level” ingestion by fetcher bots.
  • Gated white papers become open data: Making unique research crawlable so AI credits you as the source in an overview, not a competitor who summarized your findings.
  • Your robots.txt file now has strategic consequences: Allowing OAI-SearchBot but blocking GPTBot is a choice — one that determines whether you show up in real-time AI search citations versus model training data.
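
To make that choice concrete, here is what such a split can look like. OAI-SearchBot and GPTBot are OpenAI’s documented crawler names; whether to allow or block each one is a policy decision, and this sketch shows one possible stance, not a recommendation:

```
# Allow OpenAI's live search crawler so pages can be cited in real time
User-agent: OAI-SearchBot
Allow: /

# Opt out of model-training data collection
User-agent: GPTBot
Disallow: /

# Everyone else: default behavior
User-agent: *
Allow: /
```

A crawler follows the most specific group that matches its user-agent, so the catch-all `User-agent: *` rules don’t override the bot-specific ones.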


The human premium isn’t a platitude

As AI-generated content reaches its peak volume, the value of the human voice has skyrocketed — but not for the reasons most think-piece writers suggest.

The standard argument runs like this: 

  • Audiences can smell AI slop.
  • Authentic human writing wins. 

That’s partially true, but it understates the mechanism. The deeper reason human-authored content is winning in AI-mediated search is structural. 

Human authors who’ve built genuine reputations across years of bylined, cited, and cross-referenced work have, in effect, built entity graphs that AI systems can navigate. That isn’t something a prompt can replicate.

The classic example: an AI-generated 2026 review of a new electric vehicle might be factually flawless, listing every spec and battery range. But it loses to a human-authored piece that says, “I drove this through a New England blizzard and the door handle froze shut.” 

AI can’t freeze. It can’t feel frustration. It can’t have a bad morning. Those human frictions are now genuinely valuable SEO assets — not because they’re charming, but because no language model can fabricate them with any credibility.

Readers, trained by years of exposure to AI content, have developed a reliable instinct for the difference.

The Siege Media data Davies cited adds a quantitative dimension: across 7.2 million sessions, the content that earned sustained citations and conversions shared a consistent profile — original data, expert voice, and clear structure that an AI system could extract and attribute. Volume without those properties is, as the headline puts it, just noise.

What to watch at SMX Advanced 2026 — and what it tells us about where this is going

The SMX Advanced agenda is the clearest available signal of where the practitioner community thinks the critical problems are right now. A few sessions deserve particular attention from anyone focused on content creation.

Virji’s keynote, “Your AI ROI story is broken: How to fix it before budgets get cut,” opens Day 2. Virji isn’t arguing that AI investment is wrong. She’s arguing that almost every organization is measuring it incorrectly — and that the correction required is organizational, not tactical.

Davies’ session, “Predicting and influencing AI citations with retrieval signals,” on June 4, is the direct technical counterpart to the strategic framing above. If Virji is asking “what does success mean,” Davies is asking “how do you engineer it.” 

SMX Master Classes ran in April, and SMX Next follows in November. If there’s a throughline across the entire 2026 SMX calendar, it’s this: the search marketing community has collectively decided that the era of isolated channel optimization is over. Content, paid, technical, and brand are now one discipline, or they are failing disciplines.

What you need to actually do in the second half of 2026

Broad strategic advice is easy to nod at and ignore. Here is the specific and uncomfortable version:

  • Audit your AI visibility before you touch your content: Query ChatGPT, Claude, Copilot, Gemini, and Perplexity with the prompts your customers actually use. Note which brands appear. Note which sources get cited. If you’re not among them, adding more content isn’t the first fix — fixing your entity signals is.
  • Stop treating your unique research as a lead-generation gate: Crawlable, citable original data earns AI attribution. A PDF behind a form wall earns nothing except a diminishing number of direct downloads as discovery migrates to AI interfaces.
  • Invest in community platforms as a first-party strategy, not an afterthought: LLMs pull heavily from Reddit, YouTube, and Wikipedia. eMarketer’s Max Willens has noted that Reddit alone has 100 million daily active users generating brand conversations. Your brand’s absence from those conversations isn’t neutral. It creates a vacuum that your competitors or your critics will fill.
  • Optimize for citability, not just rankability: The new KPI isn’t the visit — it’s the attribution. If an AI Overview uses your data but doesn’t name your brand, you’ve been mined, not cited. Use clear entity markup, structured FAQ sections, and “quotable” conclusions that make it easy for an LLM to attribute rather than anonymize.
  • Diversify your robots.txt strategy intentionally: Different bots serve different purposes. Allowing OAI-SearchBot (real-time citation) while blocking GPTBot (model training) is a legitimate strategic choice. Most organizations have not made it deliberately. Make it deliberately.
  • Measure differently: The eMarketer-recommended framework allocates 40% of your optimization budget to core SEO fundamentals, 25% to digital PR, 20% to data and reporting, 10% to training, and 5% to experimentation. If your current allocation looks nothing like that, the gap explains more about your AI visibility struggles than any content audit will. So, combining SEO and PR is even more important today than it was back in the old days when I started speaking and writing about search.
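
To illustrate the entity markup and structured FAQ points above, here is a minimal FAQPage JSON-LD sketch. The schema.org types are standard; the question and answer text are placeholders you would replace with your own research and wording:

```html
<!-- Hypothetical FAQPage markup; swap in your own questions and data. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Placeholder question your customers actually ask",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A quotable answer that names your brand and cites your own research, so an extracting model can attribute rather than anonymize."
    }
  }]
}
</script>
```

Keeping the brand name inside the answer text itself helps attribution survive chunk-level extraction.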

The bots are crawling: Are you worth citing?

The age of the proxy is over. You can no longer hide behind a ghostwriter or a simple prompt and expect to build a brand. But the deeper truth — the one that doesn’t make it into most AI content trend pieces — is that this transformation benefits people who’ve been doing the hard work all along.

If you’ve been building genuine expertise, publishing original data, earning bylines in authoritative publications, and cultivating real presence in the communities where your customers actually talk — then you already have most of what you need. The AI infrastructure of 2026 is, in many ways, a system that rewards exactly the things good content has always required.

The difference is that the competition is now generating plausible-sounding content on a scale that would have been impossible to imagine four years ago. Being good isn’t enough to stand out. 

You have to be citable, structured, and present in all the right places at precisely the right time — which is a harder, more interesting, and ultimately more durable strategic problem than keyword density ever was.

See you in Boston.


I’ve never seen iPhone users waiting for software updates like Samsung: Opinion

3 May 2026 at 10:39

When we talk about mobile software updates, two names come to mind: Apple and Samsung. Why is that? Apple is consistent with iPhone software development and rollout, two metrics that Samsung follows but cannot match. However, the scale of deployment matters, which is why Samsung comes second.

Let me help you understand the scenario and the reason you are here.

Apple

Apple has a systematic channel for software update development and release. The chain starts at its developer conference, WWDC, where all of the new software features are announced. Simultaneously, it releases a developer beta for all eligible devices so developers can participate. The same goes for the public beta program; all eligible iPhones can enter, with no restrictions on the number of models.

After months of testing, Apple releases the new iOS software update soon after the release of the latest iPhones. Users have become familiar with this pattern for years. They know what’s heading their way and when. Importantly, Apple keeps this process consistent.

The same goes for Pixel phones: Google releases updates for all Pixels at once, and the rollout is just as consistent. Unlike Apple, Google shifts its release date based on how development is going. That means it has to coordinate with its Android ecosystem partners and ensure their experience doesn’t lag.

On the positive side, Google keeps everyone posted about the development roadmap to the final release date.

What’s important?

It’s about approach: Apple has unmatched consistency, and everything is transparent and familiar to users. The same goes for Pixel phone users.

What about Samsung?

Samsung used to act like a leader in this segment, but not anymore. The company had an annual developer conference, but that is no longer the case. It now announces a beta program with three models, prioritizes new software for new launches, and delays the rollout for old devices.

Unlike Apple and Google, Samsung publishes no prior information about its software development roadmap, shares no estimated launch date, and says nothing else about the final rollout. So, basically, existing Galaxy device owners don’t know when they will receive the next update, because there’s no pattern in the software development or the rollout.

I’ve also seen many people taking Samsung’s side on this matter, saying it launches more devices at once, so it can’t release the firmware update for all devices simultaneously. Let’s grant that for a moment. But why can’t it be transparent, share a development roadmap, and commit to a final release date? What’s the loss in sharing a timeline and abiding by it?

That’s the consumer-friendly answer, but here’s a more telling fact: Apple allows all eligible iPhones to test the latest iOS software update. For those who don’t know, Apple sells almost as many iPhones each quarter as Samsung sells Galaxy devices combined. And Samsung cannot even open the beta program for all S-series models at once.

In the Apple ecosystem, users don’t wait endlessly for an update or protest about it in online forums. The iPhone maker gives them the after-sales service they deserve. Samsung has become the opposite: you buy an expensive Ultra model and get new software pre-installed, then the next year you have to protest to get the newest Ultra model’s features, all while wondering when the final release will arrive.

One UI 8.5

The latest update has become a topic of discussion, largely because of a lack of transparency. Users first protested Samsung’s decision to deny the latest AI features to the previous flagship. Now that those features are confirmed, they are waiting for the final release.

Basically, the beta program opened in December 2025, and testing has continued through April 2026. In between, Samsung launched the Galaxy S26 series as the first phones with One UI 8.5. And the beta is still open as of May 3.

Galaxy smartphone users don’t know when this update will drop on their devices; there’s no announcement in this regard.

Conclusion

Yes, you may have a different opinion on this, but when it comes to consumer satisfaction, transparency plays a leading role. This element is completely lacking in Samsung’s software ecosystem. Consumers want the best after-sales service, and they should get it, because that’s what they’re paying for. Unfortunately, Samsung is taking consumers for granted, offering flashy hardware upgrades with new models while overlooking the after-sales service.

The post I’ve never seen iPhone users waiting for software updates like Samsung: Opinion appeared first on Sammy Fans.

How to build SEO agent skills that actually work

1 May 2026 at 18:00

I’ve built 10+ SEO agent skills in 34 days. Six worked on the first try. The other four taught me everything I’m about to show you about the folder structure most LinkedIn posts about AI SEO skills gloss over.

What makes these agents reliable isn’t better prompts. It’s the architecture behind them. Here’s how to build an agent from scratch, test it, fix it, and ship it with confidence.

Why most AI SEO skills fail

Here’s what a typical “AI SEO prompt” looks like on LinkedIn:

You are an SEO expert. Analyze the following website and provide a comprehensive audit with recommendations.

That’s it. One prompt. Maybe some formatting instructions. The person posts a screenshot of the output, gets 500 likes, and moves on. The output looks professional. It reads well. It’s also 40% wrong.

I know because I tried this exact approach. Early in the build, I pointed an agent at a website and said, “find SEO issues.” It came back with 20 findings. Eight didn’t exist. The agent had never visited some of the URLs it was reporting on.

Three problems kill single-prompt skills:

  • No tools: The agent has no way to actually check the website. It’s working from training data and guessing. When you ask, “Does this site have canonical tags?” the agent imagines what the site probably looks like rather than fetching the HTML and parsing it.
  • No verification: Nobody checks if the output is true. The agent says, “missing meta descriptions on 15 pages.” Which 15? Are those pages even indexed? Are they noindexed on purpose? No one asks. No one verifies.
  • No memory: Run the same skill twice, you get different output. Different structure. Different severity labels. Sometimes different findings entirely. There’s no consistency because there’s no template, no schema, no record of past runs.

If your skill is a prompt in a single file, you don’t have a skill. You have a coin flip.


Build SEO agent skills as workspaces

Every agent in our system has a workspace. Think of it like a new hire’s desk, stocked with everything they need. Here’s what the workspace looks like for the agent that crawls websites and maps their architecture:

agent-workspace/
  AGENTS.md          instructions, rules, output format
  SOUL.md            personality, principles, quality bar
  scripts/
    crawl_site.js    tool the agent calls to crawl
    parse_sitemap.sh tool to read XML sitemaps
  references/
    criteria.md      what counts as an issue vs noise
    gotchas.md       known false positives to watch for
  memory/
    runs.log         past execution history
  templates/
    output.md        expected output structure

Six components. One prompt file would cover maybe 20% of this.

AGENTS.md is the instruction manual 

I wrote thousands of words of methodology into AGENTS.md. Instead of “crawl the site,” I laid out the steps: “Start with the sitemap. If no sitemap exists, check /sitemap.xml, /sitemap_index.xml, and robots.txt for sitemap references. 

Respect crawl-delay. Use a browser user-agent string, never a bare request. If you get 403s, note the pattern and try with different headers before reporting it as a block.”
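
That fallback order is easy to sketch in code. The helpers below are illustrative, with made-up names, not the production crawler:

```javascript
// Sketch of the sitemap-discovery fallback described above.

// Pull "Sitemap:" lines out of a robots.txt body
// (the field name is matched case-insensitively).
function sitemapsFromRobots(robotsTxt) {
  return robotsTxt
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => /^sitemap:/i.test(line))
    .map((line) => line.slice(line.indexOf(":") + 1).trim());
}

// Candidate sitemap locations to try, in order, when none is known up front:
// robots.txt references first, then the two common default paths.
function candidateSitemapUrls(origin, robotsTxt = "") {
  const fromRobots = sitemapsFromRobots(robotsTxt);
  const defaults = [`${origin}/sitemap.xml`, `${origin}/sitemap_index.xml`];
  return [...new Set([...fromRobots, ...defaults])]; // de-dupe, keep order
}

const robots = "User-agent: *\nSitemap: https://example.com/custom-map.xml";
console.log(candidateSitemapUrls("https://example.com", robots));
// prints the robots.txt sitemap first, then the two default paths
```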

Scripts are the agent’s tools

The agent calls node crawl_site.js --url to analyze website data. It doesn’t write curl commands from scratch every time. That’s the difference between giving someone a toolbox and telling them to forge their own wrench.

References are the judgment calls

This contains criteria for what counts as an issue. Known false positives to watch for. Edge cases that took me 20 years to learn. The agent reads these when it encounters something ambiguous.

Memory is institutional knowledge

Here I keep a log of past runs:

  • What it found last time. 
  • How long the crawl took. 
  • What broke. 

The next execution benefits from the last.

Templates enforce consistency 

This is where I get specific about the output I want: “Use this exact structure. These exact fields. This severity scale.” Output templates are the difference between getting the same quality in run 14 as you did in run 1.

Walkthrough: Building the crawler from scratch

Let me show you exactly how I built the crawler. It maps a site’s architecture, discovers every page, and reports what it finds.

Version 1: The naive approach

I provided the instruction: “Crawl this website and list all pages.”

The agent wrote its own HTTP requests, used bare curl, and got blocked by the first site it touched. Every modern CDN blocks requests without a browser user-agent string, so it was dead on arrival.

Version 2: Added a script

I built crawl_site.js using Playwright. This version used a headless browser and a real user-agent. The agent calls the script instead of writing its own requests.

This worked on small sites, but it crashed on anything over 200 pages. Because there was no rate limiting and no resume capability, it hammered servers until they blocked us.

Version 3: Introducing rate limiting and resume

I added throttling with a default of two requests per second, dropping to one request every two seconds for CDN-protected sites. The agent reads robots.txt and adjusts its speed without asking permission. I also added checkpoint files so a crashed crawl can resume from where it stopped.
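
A simplified sketch of those two additions looks like this. The names and the checkpoint shape are illustrative; the real script persists the checkpoint to disk between runs:

```javascript
// Pick a delay from robots.txt's Crawl-delay if present, otherwise fall
// back to 500ms (2 req/s) normally and 2000ms for CDN-protected sites.
function crawlDelayMs(robotsTxt, cdnProtected) {
  const match = /crawl-delay:\s*(\d+(?:\.\d+)?)/i.exec(robotsTxt);
  if (match) return Number(match[1]) * 1000; // robots.txt wins
  return cdnProtected ? 2000 : 500;
}

// Minimal resumable queue: the checkpoint records which URLs are done,
// so a crashed crawl can continue from where it stopped.
class Checkpoint {
  constructor(done = []) { this.done = new Set(done); }
  markDone(url) { this.done.add(url); }
  pending(urls) { return urls.filter((u) => !this.done.has(u)); }
  toJSON() { return [...this.done]; } // persist this between runs
}

const cp = new Checkpoint(["https://example.com/"]);
const queue = cp.pending(["https://example.com/", "https://example.com/a"]);
console.log(queue); // only the not-yet-crawled URL remains
```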

This worked on most sites, but it failed on sites that require JavaScript rendering.

Version 4: JavaScript rendering

This time, I added a browser rendering mode. The agent detects whether a site is a single-page app (React, Next.js, Angular) and automatically switches to full browser rendering.

It also compares rendered HTML against source HTML, and I found real issues this way: sites where the source HTML was an empty shell but the rendered page was full of content. Google might or might not render it properly. Now we check both.
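
The shell check can be approximated with a crude text-ratio heuristic. This is a sketch with an arbitrary threshold, not the production logic:

```javascript
// Rough visible-text length: strip scripts, then all tags, then collapse
// whitespace. Good enough for a shell-vs-content comparison.
function visibleTextLength(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim().length;
}

// Flag a page as a client-rendered "empty shell" when the raw source
// carries far less text than the browser-rendered version. The 0.2
// threshold is an illustrative cutoff, not a tuned value.
function looksLikeEmptyShell(sourceHtml, renderedHtml, threshold = 0.2) {
  const rendered = visibleTextLength(renderedHtml);
  if (rendered === 0) return false; // nothing rendered either; different problem
  return visibleTextLength(sourceHtml) / rendered < threshold;
}

const demoSource = "<html><body><div id='root'></div></body></html>";
const demoRendered =
  "<html><body><p>lots of client-rendered article text here</p></body></html>";
console.log(looksLikeEmptyShell(demoSource, demoRendered)); // true
```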

This version worked on everything, but the output was inconsistent between runs.

Version 5: Time for templates and memory

For this version, I added templates/output.md with exact fields: URL count, sitemap coverage, blocked paths, response code distribution, render mode used, and issues found. This way every run produces the same structure.

I also added memory/runs.log. The agent appends a summary after every execution. Next time it runs, it reads the log and can compare results, like “Last crawl found 485 pages. This crawl found 487. Two new pages added.”
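The append-then-compare loop can be sketched like this. A hedged Python illustration: the one-line log format and function names are hypothetical, not the actual memory/runs.log schema.

```python
import os
import re

def log_run(path, site, pages):
    """Append a one-line summary after every execution."""
    with open(path, "a") as f:
        f.write(f"site={site} pages={pages}\n")

def compare_to_last(path, site, pages):
    """Read the log and compare this crawl to the previous one for the same site."""
    last = None
    if os.path.exists(path):
        for line in open(path):
            m = re.match(r"site=(\S+) pages=(\d+)", line)
            if m and m.group(1) == site:
                last = int(m.group(2))  # keep the most recent entry
    if last is None:
        return f"First crawl: {pages} pages."
    return f"Last crawl found {last} pages. This crawl found {pages}. Delta: {pages - last}."
```

The agent calls `compare_to_last` before reporting, then `log_run` after, so every execution both consumes and extends the memory.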

Version 5 is what we run today. Five iterations in one day of building.

THE CRAWLER'S EVOLUTION

  v1: Raw curl           → blocked everywhere
  v2: Playwright script  → crashed on large sites
  v3: Rate limiting      → couldn't handle JS sites
  v4: Browser rendering  → inconsistent output
  v5: Templates + memory → stable, consistent, reliable

  Time: 1 day. Lesson: the first version never works.

The pattern is always the same: Start small, hit a wall, fix the wall, hit the next wall.

Five versions in one day doesn’t mean five failures. It means five lessons that are now permanently encoded. I’ve rebuilt delivery systems four times over 20 years. The process doesn’t change. You start with what’s elegant, then reality hits, and you end up with what works.

Tip: Don’t try to build the perfect skill on the first attempt. Build the simplest thing that could possibly work. Run it on real data and watch it fail. The failures tell you exactly what to add next. Every version of our crawler was a direct response to a specific failure. Not a feature we imagined. A problem we hit.

Equip agents with the right tools

This is the most important architectural decision I made.

When you write “use curl to fetch the sitemap” in your instructions, the agent generates a curl command from scratch every time. Sometimes it adds the right headers. Sometimes it doesn’t. Sometimes it follows redirects. Sometimes it forgets.

When you give the agent a script called parse_sitemap.sh, it calls the script. The script always has the right headers, always follows redirects, and always handles edge cases. The agent’s judgment goes into WHEN to call the tool and WHAT to do with the results. The tool handles HOW.
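The same principle in miniature, sketched in Python (the real tools are shell and Playwright scripts; `build_request` and the user-agent string here are hypothetical): every fetch goes through one helper, so the headers can never be forgotten.

```python
import urllib.request

BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36")

def build_request(url):
    """Shared fetch helper: the agent decides WHEN to call it and WHAT to do
    with the result; the helper decides HOW, with headers baked in."""
    return urllib.request.Request(url, headers={
        "User-Agent": BROWSER_UA,  # avoids CDN blocks on bare requests
        "Accept": "text/html,application/xhtml+xml",
    })
```

An agent that calls `build_request` can't "forget" the user-agent the way it can when generating raw curl commands from scratch.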

Our agents have tools for everything:

  • crawl_site.js: Playwright-based crawler with rate limiting, resume, and rendering
  • parse_sitemap.sh: Fetches and parses XML sitemaps, counts URLs, detects nested indexes
  • check_status.sh: Tests HTTP response codes with proper user-agent strings
  • extract_links.sh: Pulls internal and external links from page HTML

The agent decides which tools to use and what parameters to set. The crawler chooses its own crawl speed based on what it encounters. It reads robots.txt and adjusts. It has judgment within guardrails.

Think of it this way: You give a new hire a CRM, not instructions on how to build a database. The tools are the CRM. The instructions are the process for using them.

Progressive disclosure: Don’t dump everything at once

Here’s a mistake I made early: I put everything in AGENTS.md. Every rule. Every edge case. Every gotcha. Thousands of words.

The agent got confused. It had too much context and it started prioritizing obscure edge cases over common tasks. It would spend time checking for hash routing issues on a WordPress blog.

The fix: progressive disclosure.

Core rules that affect the 80% case go in AGENTS.md. This is what the agent needs to know for every single run.

Edge cases go in references/gotchas.md. The agent reads this file when it encounters something ambiguous. Not before every task. Only when it needs it.

Criteria for severity scoring go in references/criteria.md. The agent checks this when it finds an issue and needs to decide how bad it is. Not upfront.

This is the same way a skilled employee operates. They know the core process by heart. They check the handbook when something weird comes up. They don’t re-read the entire handbook before answering every email.

If your agent output is inconsistent but your instructions are detailed, the problem is usually too much context. Agents, like new hires, perform better with clear priorities and a reference shelf than with a 50-page manual they have to digest before every task.
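Progressive disclosure is easy to express as code. A minimal sketch, assuming the file layout described above (AGENTS.md plus a references/ folder); the function name and flags are illustrative:

```python
from pathlib import Path

def build_context(workspace, ambiguous=False, scoring=False):
    """Always load the core rules; pull in reference files only when needed."""
    ws = Path(workspace)
    parts = [(ws / "AGENTS.md").read_text()]  # the 80% case: always loaded
    if ambiguous:
        parts.append((ws / "references/gotchas.md").read_text())
    if scoring:
        parts.append((ws / "references/criteria.md").read_text())
    return "\n\n".join(parts)
```

Most runs carry only the core rules; the reference shelf enters the context window only when the task actually calls for it.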

The 10 gotchas: Failure modes that will burn you

Every one of these lessons cost me hours. They’re now encoded in our agents’ references/gotchas.md files so they can’t happen again.

Agents hallucinate data they can’t verify 

I asked the research agent to find law firms and count their attorneys. It made every number up. It had never visited any of their websites.

Only ask agents to produce data they can actually fetch and verify. Separate what they know (training data) from what they can prove (fetched data).

Knowledge doesn’t transfer between agents

A fix I figured out on day one (use a browser user-agent string to avoid CDN blocks) had to be re-taught to every new agent. On day 34, a brand-new agent hit the exact same problem.

Agents don’t share memories. Encode shared lessons in a common gotchas file that multiple agents can reference.

Output format drifts between runs

The same prompt can result in different field names: “note” vs. “assessment.” “lead_score” vs. “qualification_rating.” Run it twice, and you get two different schemas.

The fix: Create strict output templates with exact field names. Not “write a report.” “Use this exact template with these exact fields.”
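A strict template can be enforced mechanically before any finding reaches a report. A hedged sketch; the field names and severity scale below are plausible examples, not the actual template:

```python
REQUIRED_FIELDS = {"url", "severity", "description"}  # exact names, locked
SEVERITY_SCALE = {"critical", "high", "medium", "low"}

def validate_finding(finding):
    """Reject output that drifts from the template before it ships."""
    errors = []
    missing = REQUIRED_FIELDS - finding.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    extra = finding.keys() - REQUIRED_FIELDS
    if extra:
        errors.append(f"unexpected fields: {sorted(extra)}")
    if finding.get("severity") not in SEVERITY_SCALE:
        errors.append("severity not on the agreed scale")
    return errors
```

Any nonempty error list sends the finding back to the agent instead of into the report, which is what keeps run 14 on the same schema as run 1.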

Agents confidently report issues that don’t exist

The first three audits delivered false positives with total confidence.

The fix wasn’t a better prompt. It was a better boss. A dedicated reviewer agent whose only job is to verify everyone else’s work. The same reason code review exists for human developers.

Bare HTTP requests get blocked everywhere

Every modern CDN blocks requests without a browser user-agent string. The crawler learned this on audit number two when an entire site returned 403s.

All it required was a one-line fix, and now it’s in the gotchas file. Every new agent reads it on day one.

Don’t guess URL paths

Agents love to construct URLs they think should exist: /about-us, /blog, /contact. Half the time, those URLs 404.

My rule is: Fetch the homepage first, read the navigation, follow real links. Never guess.

‘Done’ vs. ‘in review’ matters 

Agents marked tasks as “done” when posting their findings. Wrong. “Done” means approved. “In review” means waiting for human verification.

This small distinction has a huge impact on workflow clarity when you have 10 agents posting work simultaneously.

Categories must be hyper-specific

“Fintech” is useless for prospecting because it’s too broad. “PI law firms in Houston” works. Every company in a category should directly compete with every other company.

My first attempt at sales categories was “Personal finance & fintech.” A crypto exchange doesn’t compete with a budgeting app. Lesson learned in 20 minutes.

Never ask an LLM to compile data

Unless you want fabricated results. I asked an agent to summarize findings from five separate reports into one document. It invented findings that weren’t in any of the source reports.

Always build data compilations programmatically. Script it. Never prompt it.
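"Script it" can be this small. A sketch of a programmatic merge, assuming findings live in JSON report files (the file schema here is hypothetical): nothing can be invented, because every output row comes from a source file.

```python
import json

def compile_reports(report_paths):
    """Merge findings from several report files mechanically."""
    combined = []
    for path in report_paths:
        data = json.load(open(path))
        for finding in data["findings"]:
            finding["source_report"] = path  # traceable back to its origin
            combined.append(finding)
    return combined
```

Every merged finding carries a `source_report` pointer, so a reviewer (human or agent) can verify it against the original in seconds.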

Agents will try things you never planned

The research agent tried to call an API we never set up. It assumed we had access because it knew the API existed.

The fix: Be explicit about what tools are available. If a script doesn’t exist in the scripts folder, the agent can’t use it. Boundaries prevent creative failures.

Build the reviewer first

This is counterintuitive. When you’re excited about building, you want to build the workers. The crawler. The analyzers. The fun parts.

Build the reviewer first. Without a review layer, you have no way to measure quality. You ship the first audit and it looks great. But 40% of the findings are wrong. You don’t know that until a client or a colleague spots it.

Our review agent reads every finding from every specialist agent. It checks:

  • Does the evidence support the claim?
  • Is the severity appropriate for the actual impact?
  • Are there duplicates across different specialists?
  • Did the agent check what it says it checked?
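Two of those checks, missing evidence and cross-specialist duplicates, are mechanical enough to sketch. An illustrative Python pass (the real review agent does more, including judgment calls this code can't make; the finding schema is assumed):

```python
def review(findings):
    """Flag findings with no evidence, and duplicates across specialists."""
    flagged = []
    seen = {}
    for f in findings:
        if not f.get("evidence"):
            flagged.append((f["id"], "no evidence supports the claim"))
        key = (f["url"], f["issue"])  # same issue on the same URL = duplicate
        if key in seen:
            flagged.append((f["id"], f"duplicate of {seen[key]}"))
        else:
            seen[key] = f["id"]
    return flagged
```

The mechanical pass clears the floor so the reviewer agent's judgment is spent on the hard questions: severity and whether the evidence actually supports the claim.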

That single agent was the biggest quality improvement I made. Bigger than any prompt tweak. Bigger than any new tool.

The human approval rate across 270 internal linking recommendations: 99.6%. That number exists because a reviewer verifies every single one.

I’ve seen the same pattern with human SEO teams for 20 years. The teams that produce great work aren’t the ones with the best analysts. They’re the ones with the best review process. The analysis is table stakes. The review is the product.

BUILD ORDER (WHAT I LEARNED THE HARD WAY)

  What I did first:     Build workers → Ship output → Discover quality problems → Build reviewer
  What I should have done: Build reviewer → Build workers → Ship reviewed output → Iterate both

  The reviewer defines quality. Build it first. Everything else gets measured against it.

Tip: If you’re building multiple agents, the reviewer should be the first agent you build. Define what “good output” looks like before you build the thing that produces output. Otherwise, you’re shipping hallucinations with formatting. I learned this across three audits that were embarrassing in hindsight.

The validation standard (Our unfair advantage)

The reviewer catches technical errors. But there’s a higher bar than “technically correct.”

We have a real SEO agency with real clients and a team with 50 years of combined experience. Every agent finding gets validated against one question: “Would we stake our reputation on this?”

Would we actually send this to a client, put our name on the report, and tell the developer to build it?

Below are four tests we use for every finding:

  • The Google engineer test: If this client’s cousin works at Google, would they read this finding and nod? Would they say, “Yes, this is a real issue, this makes sense”? If the answer is no, it doesn’t ship.
  • The developer test: Can a developer reproduce this without asking a single follow-up question? “Fix your canonicals” fails. “Change CANONICAL_BASE_URL from http to https in your production .env” passes.
  • The agency reputation test: Would we defend this finding in a client meeting? If I’d be embarrassed explaining it to a technical CMO, it gets cut.
  • The implementation test: Is this specific enough to actually fix? Not “improve your page speed” but “your hero video is 3.4MB, which is 72% of total page weight. Serve a compressed version to mobile. Here’s the file.”

This is our unfair advantage. We’re not building agents in a vacuum. Most people building AI SEO tools have never run a real audit. They don’t know what “good” looks like. We do. We’ve been delivering it for 20 years with real clients. That’s why our approval rate is 99.6%.

Sandbox testing: Train on planted bugs

You don’t train an agent on real client sites. You build a test environment where you KNOW the answers. We built two sandbox websites with SEO issues we planted on purpose:

  • A WordPress-style site with 27+ planted issues: missing canonicals, redirect chains, orphan pages, duplicate content, broken schema markup.
  • A Node.js site simulating React/Next.js/Angular patterns with ~90 planted issues: empty SPA shells, hash routing, stale cached pages, hydration mismatches, cloaking.

The training loop:

  • Run agent against sandbox.
  • Compare agent’s findings to known issues.
  • Agent missed something? Fix the instructions.
  • Agent reported a false positive? Add it to gotchas.md.
  • Re-run. Compare again.
  • Only when it passes the sandbox consistently does it touch real data.
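Because the planted issues are known, scoring a sandbox run is pure set arithmetic. A minimal sketch (function name and string-label representation of issues are assumptions):

```python
def score_against_sandbox(reported, planted):
    """Compare the agent's findings to the known planted issues."""
    reported, planted = set(reported), set(planted)
    return {
        "caught": sorted(reported & planted),
        "missed": sorted(planted - reported),           # -> fix the instructions
        "false_positives": sorted(reported - planted),  # -> add to gotchas.md
    }
```

Each bucket maps to a step in the loop above: misses drive instruction fixes, false positives feed gotchas.md, and an empty `missed`/`false_positives` pair is the bar for touching real data.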

Think of it like a driving test course. Every accident on real roads becomes a new obstacle on the course. New drivers face every known challenge before they hit the highway.

The sandbox is a living test suite. Every verified issue from a real audit gets baked back in. It only gets harder. The agents only get better.

Consistency: The unsexy secret

Nobody writes about this because it’s boring. But consistency is what separates a demo from a product.

Three things that make output consistent:

  • Templates: Every agent has an output template in templates/output.md: Exact fields, structure, and severity scale. If the output looks different every run, you don’t need a better prompt. You need a template file.
  • Run logs: After every execution, the agent appends a summary to memory/runs.log. Timestamp, site, pages crawled, issues found, duration. The next run reads this log. It knows what happened last time. It can compare and provide outputs like, “Found 14 issues last run. Found 16 this run. 2 new issues identified.”
  • Schema enforcement: Field names are locked: “severity” not “priority,” “url” not “page_url,” “description” not “summary.” When you let field names drift, downstream tooling breaks. Templates solve this permanently.
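As a defensive backstop to the template, drifted names can also be normalized before they hit downstream tooling. A sketch using the exact renames named above (the mapping-table approach itself is my illustration, not the article's tooling):

```python
CANONICAL = {
    "priority": "severity",   # locked: "severity", not "priority"
    "page_url": "url",        # locked: "url", not "page_url"
    "summary": "description", # locked: "description", not "summary"
}

def normalize(finding):
    """Map drifted field names back to the locked schema."""
    return {CANONICAL.get(k, k): v for k, v in finding.items()}
```

The template should make this a no-op; the normalizer just guarantees a stray rename breaks nothing downstream.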

If your agent output looks different every run, you need a template file, not a better prompt. I cannot stress this enough. The single fastest way to improve quality for any agent is a strict output template.

The stack that makes it work

A quick note on infrastructure, because the tools matter.

Our agents run on OpenClaw. It’s the runtime that handles wake-ups, sessions, memory, and tool routing. Think of it as the operating system the agents run on. When an agent finishes one task and needs to pick up the next, OpenClaw handles that transition. When an agent needs to remember what it did last session, OpenClaw provides that memory.

Paperclip is the company OS. Org charts, goals, issue tracking, task assignments. It’s where agents coordinate. When the crawler finishes mapping a site and needs to hand off to the specialist agents, Paperclip manages that handoff through its issue system. Agents create tasks for each other. Auto-wake on assignment.

Claude Code is the builder. Every script, every agent instruction file, every tool was built with Claude Code running Opus 4.6. I’m a vibe coder with 20 years of SEO expertise and zero traditional programming training. Claude Code turns domain knowledge into working software.

The combination: OpenClaw runs the agents. Paperclip coordinates them. Claude Code builds everything.


The result

This process resulted in 14+ audits completed with 12 to 20 developer-ready tickets per audit, including exact URLs and fix instructions. All produced in hours, not weeks.

We have a 99.6% approval rate on internal linking recommendations on 270 links across two sites, verified by a dedicated review process. 

We completed more than 80 SEO checks mapped across seven specialist agents. Each check has expected outcomes, evidence requirements, and false positive rules. Every finding is specific (e.g., “the main app JavaScript bundle is 78% unused. Here are the exact files to fix”).

That level of specificity comes from the skill architecture. The folder structure. The tools. The references. The templates. The review layer. Not the prompt.

If you want to build SEO agent skills that actually work, stop writing prompts and start building workspaces. Give your agents tools, not instructions. Test on sandboxes, not clients.

Build the reviewer first. Enforce templates. Log everything. The first version will fail. The fifth version will surprise you.

This is how you turn agent output into something repeatable. The same system produces the same quality — whether it’s the first audit or the 14th — because every step is structured, verified, and encoded.

Not because the AI is smarter. Because the architecture is.

Performance Max for B2B: 5 best practices

1 May 2026 at 17:45

Over the past few years, Performance Max has gone from an opaque experiment to a more capable — though still imperfect — campaign type for B2B marketers.

The fundamentals haven’t changed: skepticism still matters, first-party data is critical, experimentation is non-negotiable, and actionable reporting drives optimization. What has changed is how much better Google has gotten at operationalizing those inputs.

That means your Performance Max strategy needs to adapt. Here are five best practices for running more effective PMax campaigns for B2B today.

1. Guide AI with the right inputs

In 2022, given the automated nature of PMax campaigns and the aggressive way Google reps were pushing them, I predicted we’d see an accelerated move toward AI integration. That’s certainly played out, probably in part because of competitive pressures introduced by ChatGPT and the like. 

AI Max for Search (launched in 2025) and PMax are both being prioritized by Google, and that’s not necessarily a bad thing, since Google hasn’t deprecated standard Search campaigns and has provided a slew of helpful updates that make PMax more viable for B2B. 

Three updates worth using: 

  • Search themes, which are useful for more precise targeting.
  • Brand exclusions, which help minimize CPC inflation and over-investment on less-incremental queries.
  • Account-level channel reporting, which gives you a single dashboard look at performance across campaigns. For this feature, segment by conversion metrics to drill down on ROI by channel. You’ll quickly see overperformers where you can increase investment and underperformers that cry out for further optimization or reduced budget.  

2. Address persistent lead quality issues

B2B lead quality in search campaigns has always been a challenge, and PMax’s relative lack of advertiser control makes that challenge tougher. I’ve pushed offline conversion tracking (OCT) since we’ve had that capability, but it’s an absolute non-negotiable for B2B campaigns.

Along with OCT, leverage a relatively new functionality, enhanced conversions for leads, and work around the edges by incorporating reCAPTCHA and testing other mechanisms to reduce PMax spam leads.

Dig deeper: The parts of Performance Max you can actually control

3. Build stronger audience signals

Citing the phase-out of third-party cookies that still hasn’t happened (!), Google officially sunsetted Similar Audiences in 2023, a big loss for advertisers.

To compensate, understand and adapt according to the nature of PMax targeting, which is based on audience signals. Feed the AI high-quality first-party data (CRM lists) and let the algorithm find “lookalikes” through its own internal signals.

CRM lists for B2B are obviously critical, which should give you even more incentive to clean up and segment your CRM data. Audience lists closest to the point of revenue (e.g., SQLs, or revenue if you don’t have enough closed-won data to send strong signals) are especially valuable for finding high-value new users.

4. Make creative a performance lever

Creative is an important part of the puzzle for PMax. Good creative can prompt the right audience to engage, and great creative can deter the wrong audience from engaging.

Because YouTube is now a massive part of PMax campaigns, video — which has never been a B2B strength — should be prioritized more than ever for performance marketing.

Google has made this easier by adding the ability to build AI-generated assets right in the Google Ads interface. Just recently, Google launched an important complementary feature in beta: PMax A/B creative testing to help advertisers understand which creatives are actually driving performance, and to use test-and-control structures to surface winning (and losing) elements.

Dig deeper: Is Google Ads Asset Studio a game changer? Not so fast

5. Use reporting to drive decisions

A major source of frustration with PMax has been a lack of transparency into results. Over the last few years, Google has introduced reporting updates to address some of those concerns.

Search term insights and auction insights in the Insights tab provide more visibility into performance. Search term insights show how your ads perform for the queries users actually type, including how those ads are being matched and served. This added nuance makes optimization more precise.

Auction insights add competitive context, showing how your campaigns perform against others in the same auctions through metrics like impression share and outranking share.

Finally, asset-level reporting brings visibility to creative performance, with data on impressions, clicks, cost, and conversions for each asset.

Together, these updates give you a clearer view into what’s driving performance — and where to focus optimization efforts.


Make Performance Max work for you

Taken together, recent updates make PMax more viable for B2B marketers than it used to be, especially for those with strong first-party data to train bidding algorithms and a need to find new customer pockets.

After more than 10 years in marketing, I still prefer having controllable levers — and I’m not willing to fully trust Google to act more in my (or my clients’) best interests than its own. Use everything at your disposal to make PMax campaigns work for you, and keep an eye out for new features Google releases that can give you more visibility and control over your account performance.

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

A blueprint for semantic programmatic SEO

1 May 2026 at 16:00

Programmatic SEO (pSEO) has been viewed with suspicion by the market. For many SEOs, the term is synonymous with low-quality pages, duplicate content, and the old tactic of “find and replace” city names in static templates.

Google’s spam policies on scaled content abuse are clear: generating vast amounts of unoriginal content primarily to manipulate search rankings is a violation.

Modern pSEO replaces mass page generation with an infrastructure that answers thousands of specific search intents with local nuance and semantic depth at a scale that isn’t possible manually.

This blueprint shows how to evolve from syntax-based pSEO (swapping keywords) to semantics-based pSEO (meaning and context), using a methodology we’ve applied to major players in Brazil.

The fallacy of the static template vs. semantic granularity

The most common mistake in a pSEO project is starting with the template, not the data. The old mindset said: “I have a template for ‘Best Hotel in [City].’ I’ll replicate this for 500 cities.”

The problem? The search intent for “Best Hotel in [Las Vegas]” (focused on nightlife, casinos, and luxury) can be radically different from the intent for “Best Hotel in [Orlando]” (focused on family suites, park shuttles, and pools). The user priorities, amenities sought, and decision-making criteria change completely.

The semantic approach requires us to use AI to granularize content. Instead of just swapping the {{City}} variable, we use LLMs to rewrite entire sections of the page based on the specific travel intent of that destination.

We don’t want to create 1,000 pages that say the same thing. We want 1,000 pages that answer 1,000 unique travel needs while maintaining a scalable technical structure.


Strategy before scale: The authority map

Before writing a single line of content, you must answer a critical question: Where do I have permission to rank?

Many pSEO projects fail because they try to cover topics where the domain lacks historical authority. The solution we developed involves a deep analysis of topic clusters based on real Google Search Console (GSC) data, not just third-party search volume.

The authority map methodology works in three stages:

  • Cluster audit: Identify which topics the domain already dominates, which are opportunities, and where semantic gaps exist.
  • Priority definition: pSEO should be used surgically to fill these gaps and strengthen topical authority, not to shoot in all directions.
  • Connection with the calendar: The pSEO strategy must be born from this data. If GSC shows you have growing authority in a topic like “Mortgage Credit,” that is where scale should be applied first.

From there, AI suggests themes and direction, taking into account seasonality and brand guide specifications. This approach transforms pSEO from a “gamble” into a tactic of territorial defense and expansion based on proprietary data.

Solving ‘brand hallucination’: Context governance

The biggest barrier to AI adoption in enterprise companies is brand consistency. How do you ensure that 500 AI-generated articles don’t sound generic or, even worse, hallucinate information outside the company’s tone of voice?

The answer lies in context governance. Instead of relying on isolated prompts, the pSEO architecture must include a brand guidelines layer that acts as a guardian before text generation. This means systematically injecting:

  • Brand persona: (e.g., “We are technical, but accessible”).
  • Negative constraints: (e.g., “Never use the word ‘cheap,’ use ‘affordable’”).
  • Proprietary data: Institutional information that AI doesn’t have in its training data.

By centralizing these guidelines in a digital brand guide that feeds all AI agents, we ensure that multiple sites within the same corporate group (such as a retail conglomerate) maintain their distinct verbal identities, even when producing content on the same topic (like Black Friday) simultaneously. 

The AI stops being a “junior copywriter” and starts acting as a specialist trained in the company’s culture.

The architecture: The semantic mesh (internal linking)

You’ve created 1,000 excellent pages. How do you ensure Google finds and values all of them? The answer isn’t using “related posts” plugins that only look for matching tags. You need to create a strategy based on real data.

The end of the ‘dead end’

You don’t want the user to land on a page and leave. You want to offer the next logical step. Cross-reference search intent with the destination:

  • The practical example: If a user lands on the site searching for “What is a CRM,” they are in the discovery phase. If that page doesn’t link semantically to “Advantages of [your company’s] CRM,” the user journey “dies” there. The semantic mesh connects the question to the solution.

Strategic reasoning in practice

Instead of randomness, our analysis works based on semantic meaning. The AI identifies: 

  • “I noticed you are about to write about ‘customer retention.’ We have an older article about ‘churn rate’ that complements this topic perfectly. Insert a link to it.”

The tool suggests links between these pages because the context is relevant, strengthening the site’s Topical Mesh.

In programmatic SEO projects, where site depth can grow rapidly, this automation via vectors is the only way to ensure no good page gets forgotten at the bottom of the index.
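The vector mechanics behind those suggestions reduce to cosine similarity between page embeddings. A pure-Python sketch with hand-made 2D vectors standing in for real embeddings (the article doesn't specify the tooling; function names and the 0.8 threshold are assumptions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def suggest_links(new_page, corpus, threshold=0.8):
    """Suggest internal links from existing pages whose embedding sits close
    to the new page's embedding."""
    vec = new_page["embedding"]
    return [url for url, emb in corpus.items() if cosine(vec, emb) >= threshold]
```

In production the embeddings come from a model and the corpus from an index, but the decision rule is the same: link when meaning, not tags, overlaps.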

This closes the loop of topical authority, ensuring no page generated at scale becomes an orphan page.

Case study: Regionalization and seasonality at scale

Theory is nice, but seeing it in practice is even better. Let’s analyze the case of Ânima Educação, one of the largest private education players in Brazil, with about 310,000 students and 18 higher education institutions.

The challenge

The National High School Exam (ENEM) is the “Black Friday” of Brazilian education. Search volume explodes in a short period, competition is brutal, and search intents shift rapidly (from “how to study” to “what is my score good for”). Furthermore, Brazil has continental dimensions; the questions of a student in the Northeast are different from those of a student in the extreme South.

The execution

Using the semantic pSEO methodology and the brand governance mentioned above, it was possible to structure complete coverage of the candidate journey — from exam preparation to the release of grades. 

We ensured that all 18 brands were positioned to answer student questions at the exact moment of the search, respecting local nuances.

The results

  • Scale with precision: Over five months, hundreds of undergraduate course pages and articles were optimized or created with granular local relevance.
  • Business impact: Surpassed the organic revenue target by 110% during the critical ENEM season.
  • Omnichannel dominance: Visibility across Google Search, Google Discover, AI Overviews, and LLMs like Gemini and ChatGPT.
  • Strategic shift: The SEO team transitioned from repetitive manual tasks to high-level strategic oversight.

The technical guardian: Conversational monitoring

Scaling content without scaling technical monitoring is a recipe for disaster. Publishing 500 pages that result in 404 errors, redirect loops, or poor Core Web Vitals (CWV) can destroy the site’s crawl budget.

Modern pSEO requires a layer of real-time technical SEO. It isn’t enough to wait for the monthly report. You need to connect data to the workflow. 

The trend now is the use of technical SEO agents — conversational interfaces that allow the professional to ask the data: “Of the 200 pages published today, which ones have indexing issues?” or “Which clusters are suffering from high LCP?”

This closes the cycle:

  • Planning (authority map).
  • Execution (pSEO with brand governance and semantic linking).
  • Monitoring (technical agent).

Putting semantic pSEO into practice

Programmatic SEO has ceased to be about volume to become about relevance. Success won’t come from publishing 10,000 pages tomorrow, but from building an infrastructure that delivers genuine value at scale.

You can use this semantic pSEO roadmap to start your transformation:

  • Start with data, not templates: Use your authority map (GSC) to identify where you already have permission to grow. Don’t waste resources attacking territories where your brand has no history.
  • Implement context governance: Before scaling, create the “rules of the game.” Inject your brand guidelines and proprietary data into prompts to avoid generic content and hallucinations. The AI should sound like your best expert.
  • Build bridges, not islands: Ensure every new page is integrated into a robust semantic mesh. Use internal linking to transfer authority and guide the user toward conversion, avoiding dead ends.
  • Monitor with AI: Abandon sporadic manual audits. Adopt technical agents that monitor your site’s health in real time as you scale.

The future of SEO isn’t about who creates the most content. It’s about who can unite the scale of the machine with the sensitivity of the human to deliver the best answer, at the right moment, for each individual user.

Inside ChatGPT ads: What the data tells us and what’s coming next by Adthena

1 May 2026 at 15:00

The trial is live, limited to the U.S. for now, and moving faster than you likely expected. ChatGPT ads launched Feb. 9 for logged-in users on Free and Go tiers, with 600+ advertisers already in. 

With 800 million weekly active users, a global rollout of ChatGPT ads is a matter of when, not if. 

OpenAI has confirmed the next expansion to Australia, New Zealand, and Canada. The latest update from Adthena trialists suggests the UK could see ads as early as mid-May.

We’ve tracked ChatGPT ad placements since rollout. With an index of 50,000+ daily placements across B2B software, ecommerce, fintech, and consumer verticals, we’ve had a front-row view of how this format is evolving. Here’s what we’ve found.

What ChatGPT ads actually look like

ChatGPT ads appear inline within conversation responses. When you ask something with commercial intent like “best weekend getaway” or “top running shoes under $100,” a sponsored result can appear alongside the AI’s answer, clearly labeled “Sponsored.”

This isn’t a search bar. It’s a conversation. Users arrive already engaged, already researching, often close to a decision. 

The format is tighter than traditional search: no sitelinks or extensions — just a headline, short body copy, and a destination.

But here’s what we didn’t expect. Our data shows what we’re calling the Adthena “Double Parked” phenomenon: a single brand appearing twice in the same response.

We spotted New Balance with two separate sponsored placements in one ChatGPT answer. This raises a key question around visibility, frequency, and what it means to own a conversation on this platform.

10 things we’ve learned from 50,000+ daily placements

If you move fast, this is a rare moment: a new format, an uncontested landscape, and data most competitors don’t have yet. Here’s what it shows.

  1. Headlines follow a “Brand: Benefit” formula. A name, a colon, a value claim. Think “Betterment: 5.25% APY Cash Account.” Dominant across top performers.
  2. Almost every ad leads with the brand name. Awareness thinking for a format where users are already deep in a conversation, not just entering a search bar.
  3. Headlines average just 30 characters, with a ceiling around 36. The constraint forces hyper-concise messaging, and every word earns its place.
  4. Body copy runs around 19 words, structured as two tight sentences. One lead proof point, one offer or nudge. One reason to click.
  5. Context mirroring is a defining feature. The strongest ads echo the user’s query directly. A running shoe ad referencing “the transition from 5k to 21.1k” isn’t a coincidence.
  6. The $ symbol drives conversion. Specific dollar figures, precise APY rates, credit amounts. Concrete claims consistently outperform vague promises in intent-heavy environments.
  7. Numbers dominate body copy. Specs, trial lengths, rates. Hard numbers feel more native and trustworthy than soft superlatives in a research-led environment.
  8. “Free” is the most common conversion lever. It removes friction for users already in research mode and close to a decision.
  9. CTAs are action-specific; generic “Learn More” is virtually absent. “Open Account,” “Shop Cell Phones,” “Claim Credits.” Every CTA names the brand, offer, or next step.
  10. Tone is confident and measured. Exclamation marks are rare. The best ads mirror ChatGPT’s calm register; hype punctuation kills trust here.
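Findings 1 and 3 amount to a checkable style rule. As a quick sketch — the pattern and the 36-character ceiling come from the sample data above, but the checker itself is hypothetical tooling, not an Adthena or OpenAI feature:

```python
import re

MAX_HEADLINE_CHARS = 36  # observed ceiling in the placement sample

def check_headline(headline):
    """Flag departures from the observed 'Brand: Benefit' headline pattern."""
    issues = []
    if len(headline) > MAX_HEADLINE_CHARS:
        issues.append(f"too long ({len(headline)} > {MAX_HEADLINE_CHARS})")
    # 'Brand: Benefit' = some text, a colon, a space, then the value claim
    if not re.match(r"^[^:]+:\s\S", headline):
        issues.append("missing 'Brand: Benefit' structure")
    return issues

print(check_headline("Betterment: 5.25% APY Cash Account"))  # [] (passes)
print(check_headline("Learn more about our product"))        # structure flag
```

A lint step like this makes the pattern enforceable across a large creative pipeline rather than a rule of thumb in a brief.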

What this means for your paid search strategy

Top-performing brands in ChatGPT don’t repurpose Google ad copy and hope for the best. They write for a conversational, intent-rich environment where users are already halfway through a decision before the ad appears.

Lead with your brand name. Anchor value in specifics. Make low-friction offers central to your creative. If you’re not thinking about context mirroring, you’re leaving performance on the table.

The bigger question is visibility. If your competitors show up in ChatGPT conversations and you don’t, you’re not just missing clicks — you’re missing the conversation.

See exactly what’s happening with Adthena’s ChatGPT Ads Intelligence

Knowing the trends is one thing. Knowing what your competitors are doing on your exact prompts is another. That’s the problem we set out to solve.

Right now, ChatGPT ads give you impressions and clicks — nothing more. No competitive context, no prompt-level visibility, no insight into who else appears in the same conversations or where you’re missing coverage. You’re optimizing blind.

Adthena’s ChatGPT Ads Intelligence changes that. Here’s what you get.

Your performance, in context

The Ads Performance tab gives you a live snapshot of your ChatGPT activity: ad presence rate, top-performing intent group, total impressions, average CTR, and unique competitors detected. The trend chart shows your presence over time so you can clearly see whether you’re gaining or losing momentum.

Know which topics you’re winning and where to close the gap

The Topics and Keywords Analysis view breaks down performance by intent group, showing your ad presence rate against the competitor average. Each group includes a built-in tactical recommendation, so you always know your next move.

See your own ads as users see them

The Ads Sampling tab shows all your ChatGPT creatives with their headline, description, image, and format. The insight panel highlights your top-performing creative and surfaces optimization opportunities, like pairing a price anchor with a time-limited offer.

Understand exactly what competitors are running

The Competitor Creative Analysis panel breaks down rival ads across your tracked prompts: the images they use, the dominant copy themes, and their format mix. No more guessing what your competition is doing.

Never miss a shift in the competitive landscape

The Ads Benchmarking tab shows who’s advertising on your prompts and how their presence changes week to week. The “What changed this week?” feed flags new entrants and share shifts in plain language before your next campaign review.

Find the gaps before your competitors do

The Competitor Gap Analysis table shows every prompt where competitors have presence and you don’t, flagged by intent group and competitor count. A clear, prioritized view of where to expand your ChatGPT coverage.

The first prompt is the new first click

We’re tracking early-stage data from a platform still in limited rollout. As OpenAI expands to new countries and the advertiser base grows, the competitive landscape will shift fast. Brands building their ChatGPT presence now — learning the format, testing creative, mapping competitive gaps — will have a meaningful head start over those who wait.

Don’t let competitors win the first prompt. Join the product waitlist to uncover your ChatGPT ads landscape. 

In the meantime, get your ads ready with Adthena’s free ChatGPT AdBridge. Connect your Google Ads account and we’ll build your ChatGPT ads setup with AI-enriched campaigns and smarter negative keywords — delivered to your inbox, ready to import.

Once a Samsung Fold fanboy, now I warn everyone: Never Again

23 April 2026 at 20:31

Hey Sammy Fans, I need to get this off my chest. I was a true Samsung Fold fanboy. The first time I unfolded my Galaxy Z Fold (it was the Fold3), it felt like I was holding the future of smartphones. The reason was obvious – a big screen for videos, split screen, and cool folding tricks. I showed it off to everyone: “This is the best phone ever.”

After using my first foldable daily for almost a year, the problems started coming up. The center crease got deeper and more annoying. Then one day, when I unfolded my phone, the inner screen was dead. There were no drops, no scratches, nothing crazy. I took it to the service center, hoping for warranty support. The service guy looked at it, called it “user damage,” and quoted me $600-700 for the inner screen replacement. That’s more than half the price of a normal flagship phone. I was shocked.

This isn’t just my bad luck; the same thing happened with my Fold4. I have seen many people in the Samsung community and on X with the same stories: hinges getting loose or making noise, screens failing after 8-12 months, and dust getting inside somehow.

Here’s another annoying issue. The protective film on the inner screen started peeling off by itself in less than 6 months. I didn’t misuse the phone; the film just lifted from the middle and started bubbling and peeling on its own. It looked ugly and felt terrible to use. I tried pushing it back, but it kept coming off worse. I ended up replacing the film once or twice on both my Fold5 and Fold6, but the same problem kept returning.

Samsung now promises “200,000 folds” or even more, but in real life (and my experience has been horrible), the phones don’t stand up to that promise. If anything goes wrong, the repair cost is brutal. Often, instead of repairing your current Fold, it feels like the company is pushing you to just buy a new one.

There’s another downside – battery life. The foldables have average battery life, which isn’t great for such an expensive device. The cameras are fine, but nothing compared to regular Samsung S series phones. The phone feels thick even after Samsung made the Fold7 thinner. Multitasking is cool at first, but while watching videos, you notice little bugs and that distracting crease.

Look, I still love the idea of foldables. With a foldable, you can have a big screen when you need it and a small one when you don’t. But right now, Samsung’s foldable devices are not ready to be a primary phone.


If you really want a foldable, wait a few years for a more refined Fold. Or honestly? Just get a normal flagship like the Galaxy S26 Ultra. It’s cheaper, tougher, and you won’t stress every time you open your phone. Even the repair costs are lower.

I stopped using my Samsung Fold (Fold3 after 11 months, Fold4 after 6 months, Fold5 in just 2 months, Fold6 in 1 month, and Fold7 after 10 days) and went back to a regular phone. No regrets. The excitement disappeared quickly once I started worrying about the next repair bill.

If you are thinking about buying a Samsung Fold, please read this first. I love the concept, but I hate the experience. It is “never again” for me. What about you? Drop your experiences on X at @thesammyfans.

The post Once a Samsung Fold fanboy, now I warn everyone: Never Again appeared first on Sammy Fans.
