
Yesterday — 11 April 2026

Don't stop the Olympic flame at the Rockies. Come to New York. | Opinion

In the past two years, the world has fallen back in love with the Olympics. And how could you blame us? 

We watched athletes glide down the Seine during a breathtaking Opening Ceremony in Paris, then compete against some of the most iconic backdrops on earth. Just weeks ago, athletes split their events in Italy between chic Milan and the soaring peaks of Cortina, proving that a global city and a historic mountain town can together deliver a spectacular Winter Games. 

The Olympics feel vibrant, accessible and global in a way they have not in years. Soon, it will be America’s turn. 

Los Angeles will host the Summer Games in 2028, and no place on earth embodies endless summer quite like LA. From Hollywood to Venice Beach, from surfboards to skateboards, from the Beach Boys to Snoop Dogg, Los Angeles will offer a celebration of sport and culture unlike any other. 

Then, in 2034, the Winter Olympics head to Utah, where the “Greatest Snow on Earth” will once again take center stage, building on the legacy of the highly successful 2002 Salt Lake City Games. 

That is all worth celebrating. But it also reveals something important: In a country as vast and diverse as the United States, the Olympics spotlight is increasingly concentrated in one region: the West.

It does not have to be that way. 

A New York City-Lake Placid Winter Games makes sense

That is why New York City and Lake Placid should bid to jointly host the 2042 Winter Olympics. Unlike a Summer Olympics, a Winter Games here would require little to no new sports infrastructure – the venues already exist.

And because the Winter Games are less than a third the size of the Summer Olympics, the logistics are far more manageable. 

The Olympic Rings in Lake Placid, New York, on Jan. 9, 2023.

Milan-Cortina proved that a two-region model works. France, which hosted the 2024 Summer Games in Paris, will host the Winter Games in the Alps just six years later, in 2030. And if Switzerland secures the 2038 Games, as many expect, the Alps will have hosted three of four consecutive Winter Olympics.

There is no reason the United States, with its continental scale, world-class infrastructure and global reach, cannot host three Olympics within a 14-year span. In fact, we are uniquely positioned to do so. And nowhere is better suited than New York. 

New York City is home to people from every nation in the world and a center of global media, finance and culture, and yet it has never hosted an Olympic Games. Meanwhile, London, Los Angeles, Paris and Tokyo have each hosted multiple times.


What New York offers is unmatched: a concentration of world-class sports venues unlike anywhere else on earth. Madison Square Garden, Barclays Center, UBS Arena, Yankee Stadium, Citi Field and the USTA Billie Jean King National Tennis Center already host global events every year. In fact, New York could host more Winter Olympics events in existing venues right now than Milan did. 

Those arenas could stage every indoor ice event, while iconic outdoor settings could push the boundaries of what a Winter Games looks like – Big Air at Citi Field or Yankee Stadium, or even a cross-country sprint through Central Park. That is how you bring winter sports to the center of global attention. 

Five hours north sits Lake Placid, the birthplace of American winter sports and host of the 1932 and 1980 Winter Olympic Games. Anchored by Whiteface Mountain, with the greatest vertical drop east of the Rockies, and home to world-class skiing, sliding, ski jumping and Nordic facilities, Lake Placid is prepared right now to host a full slate of Olympic winter events. 

That did not happen by accident. For years, New York state, under the leadership of Gov. Kathy Hochul, has made a deliberate choice to modernize and maintain these Olympic venues rather than let them fade into history. Today, they regularly host World Cups and international competitions. That investment has preserved a winter sports tradition that stretches back more than a century and shows why Lake Placid was America’s original capital of winter sport.  

In fact, during the recent Milan-Cortina Games, Lake Placid was designated the official backup host for sliding events because of construction delays in Italy – a powerful testament to how competition-ready these facilities remain. 

New York and Lake Placid could easily best Milan-Cortina 

Just as important are the logistics, and Milan-Cortina showed they are more feasible than many people realize. Milan and Cortina are roughly the same distance apart as New York City and Lake Placid. Fans in Italy traveled by train and bus through the mountains.

New York could do the same – and do it better.  


Spectators could travel by rail from Penn Station along the Empire Corridor to Albany or Saratoga Springs and continue by shuttle to Lake Placid. With the right planning, a future Olympics bid could even accelerate long-overdue investment in one of the busiest rail corridors in the country. 

Housing presents a similar opportunity for smart, long-term investment. Milan built a permanent Olympic Village that will become student housing, while Cortina relied on temporary accommodations that will be dismantled after the Winter Games.

New York City could replicate this model by partnering with universities or converting underused office space into a lasting Olympic Village that later becomes student or affordable housing. In Lake Placid, a mix of permanent and temporary structures could support both the Winter Games and the region’s long-term workforce and housing needs. 

But the strongest argument for a New York City-Lake Placid Games is an intangible one. 

New York would bring the Winter Olympics to the largest, most diverse, most creative and most connected metropolitan region in the world – the opposite of a vision of winter sports as geographically and economically exclusive, confined to distant mountain enclaves. The possibilities that would emerge are impossible to fully predict: new audiences, new athletes, new traditions and new expressions of winter sports. 

Winter sports on the doorstep of more than 60 million people would transform who follows these events and who participates in them for a generation. A child from Brooklyn, Philadelphia, New Haven or Baltimore could see these sports not as something distant, but as something that belongs to them, too. 

That matters. 


In Italy, tickets were accessible. Fans from Germany, Japan, the Netherlands, Canada, South Korea, Brazil, Italy and the United States sat side by side cheering together. Volunteers and locals embraced the Winter Games as their own. From bartenders trading Olympic pins to Alpini soldiers guiding skiers through the Dolomites, the Olympics felt like a civic experience. 

New York knows how to create that energy. 

We have hosted Super Bowls, World Series, All-Star Games, the Grammys, political conventions, and every year the U.S. Open, the world’s largest marathon and the United Nations General Assembly. We know how to welcome the world – and by thoughtfully splitting events between New York City and Lake Placid, we could do it responsibly and sustainably. 

It is time to think big again. No credible future host can represent and welcome the world quite like New York City and Lake Placid. Together, we can honor Lake Placid’s Olympic legacy while charting a new course for Olympism in the world’s most international city, where nothing is impossible.  

A New York City-Lake Placid Winter Olympics in 2042 would not just be a sporting event. It would be a statement: that our communities are strongest when we dream big and welcome the world as neighbors. 

The Olympic flame should not stop at the Rockies. It is time to bring it to New York.

Robert Carroll

Robert Carroll represents Brooklyn in the New York State Assembly. This column originally appeared in The Journal News.


This article originally appeared on Rockland/Westchester Journal News: Who hosts the next Olympics? It could've been New York | Opinion

Angel Reese’s Trade To The Atlanta Dream Is Best For All: Here’s Why

Angel Reese’s trade to the Atlanta Dream from the Chicago Sky made huge waves this past week. The two-time WNBA All-Star was traded for draft capital in 2027 and 2028. However, at least to me, the writing has been on the wall for this moment for at least a year now.

The Chicago Sky fired WNBA legend Teresa Weatherspoon after her first season as their head coach. Her first season was the 2024-25 campaign, which also coincided with the rookie season of the “Bayou Baddie.” The two took to each other, and I could see great potential for Reese being coached up by “T-Spoon.” The Sky had other ideas after a dismal first season together that produced a 10-34 record. But for a team in rebuild mode, this was to be expected.

A promising beginning for Angel Reese

As we entered the second season for Reese, Weatherspoon was dismissed in the off-season, which, to me, was a slap in the WNBA newcomer’s face. As far as I’m concerned, Weatherspoon was installing a culture in Chicago. She was cultivating a gritty identity for the squad, which included Reese’s voracious rebounding and energy. But the team’s brass letting Weatherspoon go after one season, I believe, fractured Reese’s trust in the Sky long-term.

With this trade taking place going into Reese’s third season, she lands not only on a talented Atlanta Dream team, but in a city that’s ready to welcome her. Atlanta loves its Black people and Black stars. With Reese’s profile soaring through brand partnerships like Reese’s and Victoria’s Secret, her fanbase can grow even more in a city like Atlanta.

With the wealth of talent around her, her role can be even more defined. As Reese continues to broaden her offensive repertoire, she’ll also be able to offload many of those responsibilities at times. But her rebounding will be so key and featured that I think it can only help grow her confidence even more as a player.

Hindsight is looking very 20/20

The beauty in this trade, as far as I can see it, is that it just feels like Reese has a home now. It felt that way initially in Chicago when she had a figure like T-Spoon, whom she’d run through a wall for. Last season in Chicago felt more like purgatory. What would be the next shoe to drop? It turned out to be a respectful parting of ways that allows Reese to truly realize all of her potential with a team that accepts her for who she is on and off the floor.

Make no mistake, there’s so much to be excited about heading into this WNBA season. There are two new teams. The ladies are going to see substantial raises. They have a new collective bargaining agreement that includes equity for the players. And now we get to see Reese in her third season in a healthy team environment, going out and really seeing what she’s made of against the league’s best.

The Sky can start from scratch and hopefully learn from their past woes. But the “W” continues to look as healthy as ever, and you don’t just have to take my word for it.



The post Angel Reese’s Trade To The Atlanta Dream Is Best For All: Here’s Why appeared first on Blavity.

Before yesterday

Healthcare reviews: How to stay compliant and win in local SEO

10 April 2026 at 19:00

There’s a broad consensus that online reviews — especially Google reviews — should be a top priority for businesses that rely on local customers. 

Four of the top 15 ranking factors in Google Maps were related to reviews (quantity, quality, recency, and consistency), according to a recent Whitespark survey. Other surveys report that more than 80% of consumers use Google reviews to evaluate local businesses.

For most of these businesses, the solution is straightforward: ask more customers for reviews, and then reply to those reviews. However, if you work in healthcare, you’ll inevitably find that things aren’t that simple. 

From soliciting reviews to responding to them to reporting fake engagement, medical facilities face unique dilemmas due to ethical standards and federal laws that limit review-related activities. That said, if you understand the obstacles and your options, there’s no reason you can’t be both competitive and compliant in the arena of healthcare reviews.

After working in healthcare for over a decade, I’ll share the biggest obstacles I’ve faced, along with unique solutions.

The catch-22 in mental health

Years ago, I was assisting a therapist’s small private practice with local SEO. He only had a couple of reviews, so I pointed that out. That’s when he told me he wasn’t even allowed to ask for reviews.

At the time, I was certain he must be mistaken. To my surprise, it was actually part of the code of ethics from the American Psychological Association (APA), which explicitly states therapists and psychologists can’t solicit testimonials from their clients (due to concerns of undue influence). 

With that in mind, the lack of reviews was certainly understandable, but it was still a problem for local SEO. And Google doesn’t seem to make any exceptions for the mental health field. 

After working with many more clients and employers in the mental health space since, this has proven to be a recurring obstacle. Mental health professionals need visibility on Google the same as any other local business, but one of the best ways to achieve that visibility isn’t even allowed in their field. 

The result, unfortunately, is that the practitioners who follow their ethics rules are often those with the least visibility on Google. 

The good news is that there are still ways to get reviews without crossing those ethical boundaries — although it might require utilizing some outside-the-box solutions.

A case study in mental healthcare reviews

A few years ago, I started working with an addiction treatment center that had been doing well with reviews until a new local competitor opened and exceeded both the number of reviews and the average rating in less than one year (despite the client’s nearly 10 years in business).

This competitor was increasingly outperforming them in local search, so something had to be done. However, my client wasn’t sure how they could have received so many reviews without crossing ethical boundaries. 

To keep up with and ultimately outpace this competitor’s reviews, we needed to secure 50 to 100 reviews and maintain a rate of at least one review per week. The problem was that the client hadn’t received consent from former patients for marketing texts or emails, and they also knew they couldn’t make soliciting reviews a day-to-day part of the clinical staff’s work.

The solution

Since the APA ethics rules primarily govern psychologists and clinicians, and because the reasoning behind the APA guidance relates to the risk of undue influence over current patients, we determined that individuals who opted into alumni engagement and who were no longer in active treatment could be asked for a review (and only by non-clinical staff).

We decided it made more sense to focus on expanding the alumni program rather than facing the review dilemma head-on or in a vacuum because:

  • An alumni program would improve the overall patient experience and success rate, and it would be the best way to offer non-clinical experiences and interactions with other staff.
  • We would designate the non-clinical alumni coordinator as the person responsible for requesting reviews, and only from alumni (no ethical concerns).
  • The alumni coordinator would have an in-person rapport with these patients (better for review conversion).

So, we enacted the following:

  • Tasked the alumni coordinator with review generation
    • We didn’t create an incentive for the employee when they got reviews (I’ve never seen much success with that tactic anyway). Instead, we simply made it part of the job description and set the expectation that getting reviews every week was part of the gig. 
    • Now, we didn’t truly “enforce” this rule per se, but we did track it. When more than two weeks went by without any reviews, we would follow up with the alumni coordinator to see how things were going. Over time, the need for these check-ins decreased, and requesting reviews became part of the job.
  • We made an online alumni group and QR code cards
    • When someone graduated from the program, they would be encouraged to stay involved with the alumni community. The patient would be given a QR code to join a private online group to stay current on upcoming events. 
    • We also included a QR code for finding the phone number and driving directions to the facility (via a link to the Google Business Profile), making it easy to find where to leave a review if they felt inclined.
  • When an alum verbally said they would leave a review, we texted them a link
    • In my experience, most people will leave a review if you ask and make it easy to do. Many clients will agree to leave reviews, but unless you explicitly show them how, there’s rarely follow-through. It just might not be a priority for them, so they forget or put it on the back burner.
    • Simplifying the review process worked well. A direct link sent via text drove higher completion rates — no questionnaires, no review gates, just a straight path to the Google Business Profile. (A minimal sketch of how that link and QR code can be generated follows this list.)
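A minimal sketch of that link-and-QR setup, assuming the open-source qrcode Python package and a placeholder Place ID. The URL shown is Google's standard direct-review link format; swap in your own Place ID from your Google Business Profile:

```python
# Minimal sketch: build a direct Google review link from a Place ID and
# render it as a QR code for printed alumni cards. Assumes the open-source
# "qrcode" package (pip install qrcode[pil]) and a placeholder Place ID.
import qrcode

PLACE_ID = "ChIJ_PLACEHOLDER_ID"  # hypothetical Place ID for your Google Business Profile

# Google's standard direct-review URL format
review_url = f"https://search.google.com/local/writereview?placeid={PLACE_ID}"

# The same URL can be sent by text when an alum agrees to leave a review
print("Text this link:", review_url)

# Generate a QR code image for print materials (alumni cards, checkout signage)
img = qrcode.make(review_url)
img.save("review_qr.png")
```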

The result

In less than a year, we were able to generate 100 new reviews, outpacing the competitor. The average rating also improved from 4.6 to 4.8, which was better than the competitor’s.

In the second year, an additional 100 reviews were gathered, which meant we generated more reviews in two years than in the first nine years of business combined.

Healthcare reviews - Case study results after alumni check-in
Healthcare reviews - Case study results as of January 2026

As of February 2026, the facility is just shy of 500 reviews, still averaging at least one review per week — without crossing any ethical boundaries.

If you want to duplicate this review strategy, here’s the summary:

  • Review owner: Designate a non-clinical staff member responsible for reviews, such as alumni coordinator (and make a review count goal part of their role).
  • Review trigger: Alumni event attendance or joining the alumni community.
  • Request methods: In person.
  • Request delivery: Print materials with QR codes for patients to stay in touch, find the Google Business Profile, and consent to communications, followed by a direct link via text to leave a review.
  • Tracking: Weekly review count. Follow up with the review owner when the weekly goal isn’t achieved.

For third-party agencies and freelancers: If you help a healthcare client with an SMS service or share information about patient identities in any way between a “covered entity” and a third party, there should be a business associate agreement with those third-party vendors.

What not to do when generating reviews:

  • Don’t ask current mental health patients for reviews.
  • Don’t “gate” reviews (it is against Google guidelines, and it reduces conversion).
  • Don’t pressure or coerce clients or patients to leave a review.
  • Don’t incentivize staff or clients to leave reviews.

What if you’re a solo mental health practitioner?

If you’re a therapist or psychologist who can’t rely on non-clinical staff to request reviews, you aren’t without options. Some other things I’ve had success with include:

  • Reducing friction: Instead of an explicit “ask,” you can provide a QR code at checkout or a link in your follow-up emails that directs patients to your Google Business Profile for “Directions and Information,” making it easier for patients to leave a review if they are inclined to do so.
  • Leveraging aggregate data: If you are in a high-sensitivity field (like behavioral health), you can also publish aggregate client satisfaction scores or patient outcome reports on your website and review platforms. While it may not have the same ranking impact as reviews for local search, it will provide similar social proof without the ethical questions.

Review replies and HIPAA compliance

In addition to getting reviews, replying to them is also important. While medical businesses can post replies to reviews, the subject matter in their response is regulated.

Merely acknowledging that a reviewer was a patient could be a risk under the Health Insurance Portability and Accountability Act (HIPAA) — even if the patient had already revealed as much in their review. That’s because HIPAA only regulates what providers share.

Patients are free to share whatever they like about themselves online, but that doesn’t change the provider’s legal responsibility to protect health information. (One California hospital learned that the hard way in 2013 with a $275,000 settlement after a spokesman commented to the media, stating that a patient’s medical records contradicted their own accusatory Yelp review.) 

Generally, you should avoid acknowledging that the reviewer was a patient to remain compliant under HIPAA. Instead:

  • Focus on policy, not the person: Keep the response focused on general facility policies and practices around the complaint rather than the reviewer’s exact situation.
  • Move the conversation offline: Provide a direct line to a patient advocate or office manager.
  • Avoid confirming status: Even if a patient says, “I was there yesterday,” your reply should never say, “We enjoyed seeing you.”

While not legal advice, here are some example templates I often use when replying to reviews:

Negative review reply template:

  • “Privacy laws prevent us from confirming or denying whether any individual is a patient at our facility. However, we take all feedback seriously. Our policy regarding [insert issue] is [insert general policy]. If you would like to discuss a specific experience, please contact [insert contact instruction].”

Positive review reply template:

  • “Thank you for your kind words. We appreciate you taking the time to share feedback.”

Why these work:

  • These avoid patient status confirmation.
  • For negative reviews, it explains why you can’t respond directly and offers an alternative way to discuss their concern in detail.

Reporting reviews and HIPAA compliance

You also can’t tell Google whether someone was a patient. This applies when reporting a review as fake engagement — claiming someone “wasn’t a customer” can be risky if you’re a covered entity under HIPAA.

Instead, focus on other types of review violations. One of Google’s review policies regarding “misinformation” can be helpful in the healthcare industry.

  • For example, I once had a client who received a review claiming the medication they were prescribed wasn’t safe. This was totally false and easy to prove since it was FDA-approved. Google ultimately removed the review when this was pointed out.

Some of the other Google policies that can lead to the successful reporting of healthcare reviews include: 

  • Offensive content, such as unsubstantiated allegations of unethical behavior or criminal wrongdoing.
  • Personally identifiable information (PII), such as the use of the first and last names of staff in the review.
  • Off-topic, such as leaving a review for a different facility or location.
  • Repetitive content, such as posting the same review from multiple accounts or the same review on multiple locations.

When you report reviews to Google, be sure to:

  • Correctly identify and list the policy category.
  • Quote the exact offending line from the review.
  • Provide evidence and explicitly explain why it violates the policy.
  • Avoid reference to the reviewer’s relationship to the facility.

Building a compliant and effective review engine in healthcare

Healthcare review management can be a compliance exercise, but the good news is you don’t have to choose between compliance and local SEO. You just have to build a review system designed for this industry’s reality:

  • Build a compliant, consistent process rather than a “one-off” push. Assign ownership, set expectations, and track consistently.
  • Reduce friction by making it easy to leave reviews via print materials and text messages, but without coercion, incentives, or asking current patients.
  • Stay neutral when replying to reviews (or reporting them), and never confirm patient status in public. When reporting reviews, focus on other Google categories that don’t require patient status.
  • Involve compliance leads in the review process. Unlike other fields, there are real liability risks with healthcare reviews.

Done right, you can grow local visibility, protect patient privacy, and sustain review consistency — just like any other industry.

LLM nudges: The hidden force behind AI-driven journeys

10 April 2026 at 18:00

LLMs have become a starting point for nearly everything — work, play, consumerism, health, and more.

But one thing gets overlooked: how they finish answering prompts. They don’t — and that matters.

They operate with a “no, you hang up first” mentality. The prompts we enter don’t just end. LLMs “nudge” us to continue the conversation, offering to take the next step.

“Would you like me to create that travel itinerary for you?” “Would you like me to compare the Nike and New Balance running shoes and tell you which is best for a marathon?”

These nudges make it easy to keep going. Most of the time, I enter “sure” or “sounds good. Thank you,” and move to the next step to see what it provides.

These nudges drive consumer behavior. Where LLMs take us matters.

If you’re a premium brand and the LLM suggests a price comparison, you may not like it, but you need to understand it so you can react.

We analyzed how different LLMs use these nudges across prompts and platforms to understand the patterns shaping user behavior — and what they signal for brands trying to stay in control of the journey.

What LLM nudges actually look like across platforms

Budget and deals dominate

LLMs provide different types of follow-up suggestions. Overall, 45% of mentions are budget- and deal-related. While not evenly distributed, budgets and deals are treated as the default of what consumers want to see. 

Perplexity and ChatGPT are over 60% budgets and deals. Meta is the only one that doesn’t make that assumption at the same level.

Comparisons drive the next step

The second biggest recommendation type is product comparisons. LLMs offer to compare various products, including financial services products, health treatments, and retail products. All industries see suggestions for comparisons.

Specs play a minor role

Another key point: much of the current thinking urges you to provide LLMs with detailed technical specs. But those make up a small share of these suggestions. That doesn’t mean spec content lacks ranking value (it still has it), but it’s not how LLMs usually extend conversations with users.

How each platform uses nudges differently

We also analyzed the dominant nudge style across platforms. Each LLM uses a distinct tone when continuing the conversation. How these systems guide users forward reflects the personalities they present.

Platform | Dominant nudge style | Key characteristic
ChatGPT | “If you want…” | Heavy commerce focus: Primarily nudges toward deals and product comparisons.
Microsoft Copilot | “If you tell me…” | Interactive/clarifying: Frequently asks for more user data to refine its recommendation.
Google Gemini | “Would you like me…” | Polite and permission-based: Exclusively uses this formal invitation to continue helping.
Perplexity | “I can help…” / “If you’d like…” | Service-oriented: Uses more varied phrasing to offer utility and assistance.
Meta AI | “Let me know…” | Casual and passive: Primarily nudges toward product comparisons and specs with a less aggressive, “standing by” tone.
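A rough illustration of how exported responses could be bucketed into these styles with simple pattern matching. The patterns and category names below are assumptions drawn from the table, not the methodology behind the analysis above:

```python
# Illustrative sketch: bucket LLM follow-up "nudges" by phrasing pattern.
# The patterns mirror the table above; extend them as you collect more responses.
import re
from collections import Counter

NUDGE_PATTERNS = {
    "permission-based": r"\bwould you like me\b",
    "conditional offer": r"\bif you(?:'d like|’d like| want)",
    "clarifying": r"\bif you tell me\b",
    "service-oriented": r"\bi can help\b",
    "passive/standing by": r"\blet me know\b",
}

def classify_nudge(response_text: str) -> str:
    """Return the first matching nudge style, or 'other' if none match."""
    text = response_text.lower()
    for style, pattern in NUDGE_PATTERNS.items():
        if re.search(pattern, text):
            return style
    return "other"

# Example: tally nudge styles across a list of exported LLM responses
responses = [
    "Would you like me to create that travel itinerary for you?",
    "If you want, I can compare the Nike and New Balance running shoes.",
    "Let me know if you want a side-by-side spec sheet.",
]
print(Counter(classify_nudge(r) for r in responses))
```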

What actions to take based on AI nudges

These nudges are designed to keep the conversation going and push users to explore further. They drive consumer behavior and shape the customer journey.

Over time, we’ll be able to better optimize for them as more data becomes available. For now, insights are limited to individual responses, with no way to connect conversations.

The actions to take fall into three buckets, mostly tied to the content you create across onsite and offsite channels:

  • Capitalize on the “support” gap
    • Proactive nudges for troubleshooting and support are significantly lower than commerce-driven themes. 
    • Own the post-purchase “how-to” and technical support space to build long-term authority where AI is currently less aggressive.
  • Prioritize the “comparison” hook
    • LLMs consistently nudge users toward comparative analysis. 
    • Double down on “Product A vs. Product B” guides to capture the AI’s primary next step.
  • Maximize the “budget and deals” opportunity
    • Pricing and discounts are the No. 1 driver of AI nudges (48% of all triggers).
    • Maintain structured, real-time deal data to ensure your site is the preferred destination for AI commerce referrals.

The LLM landscape will keep evolving quickly as these platforms become the primary interface for consumer research and decision-making. Your priority now is to understand how LLMs talk about your brand and how those conversational nudges affect users.

By analyzing these automated suggestions across platforms like Gemini, ChatGPT, and Perplexity, organizations can see how consumers are being directed — whether toward budget-friendly alternatives, product comparisons, or technical specifications.

Recognizing these patterns lets you move from passive observation to action, keeping your value proposition clear even when an LLM reframes the conversation around price or competitors.

Tracking this shift is key to maintaining brand authority as AI-driven interactions shape the customer journey.

Paid media efficiency: How to cut waste and improve ROAS

10 April 2026 at 17:00

We’re being pushed harder than ever — expected to hit bigger revenue targets with the same or smaller PPC budgets. Even with flat budgets, rising platform costs mean we’re effectively facing a budget cut.

  • Average CPCs have risen by as much as 40% in some industries, and by 3.74% on average, per Wordstream. Certain periods, such as Black Friday, see much higher increases.
  • Teams are experiencing budget cuts, with average marketing budgets flatlining at 7.7% of company revenue, according to Gartner.
  • Our own account audits show that 20-30% of most accounts’ spend is quietly underperforming.

This is the reality of paid media in 2026. But it isn’t all bad news. Efficiency isn’t just about spending less; it’s about spending smarter. Here’s how to find the waste, fix the fundamentals, and get maximum return from every dollar you invest.

Why efficiency has become the priority

Paid media has shifted dramatically over the last few years, with a greater focus on automation that hides more of the underlying data from advertisers. In parallel, businesses are freezing or reducing budgets while expanding revenue targets, and we’re seeing inflation hit CPCs across most industries, with accounts across our portfolio averaging 10% increases year on year, depending on the industry.

The expansion of AI-driven automation has pushed us further into smart bidding strategies, meaning that where CPCs are rising, you have to be clever with the levers you pull to curtail or minimize those increases.

Meanwhile, customers are spreading their attention across more platforms than ever before, switching between screens and devices, and frequently double-screening.

The question for many businesses is no longer “how do we spend more?” but “how do we get maximum return from every dollar we spend?” Getting that answer right starts with an honest look at where money is being lost.

Auditing for waste: The 20-30% rule

One of the most important (and uncomfortable) truths in paid media is that aggregate metrics hide wasted spend in plain sight.

  • A campaign with a 600% ROAS average might have a single product consuming 20% of the budget at just 300%.
  • An untouched search term report can contain dozens of irrelevant queries burning through spend, especially when broad match keywords or Performance Max campaigns are in play.
  • Settings or targeting that made sense when you first launched your campaigns may not do so now. Consumer behavior shifts, and business objectives develop and change over time. Are your ROAS targets still reflective of your current goals, for example?

Common waste zones to investigate include:

  • Zero-conversion products or keywords.
  • Low ROAS/CPL outliers.
  • High spend, low ROAS/CPL.

Zero-conversion products or keywords

Products or keywords that receive spend but generate no conversions are generally loss-making. Before drawing this conclusion, apply impression, click, and spend thresholds to ensure sufficient data. 

If a product or keyword has surpassed your target, look to stop spend in these areas. You also want to assess for seasonality and review other contributing factors such as:

  • Search term relevance.
  • Checkout funnels.
  • Competitive advantage.

Low ROAS/CPL outliers

Products consistently below your viable ROAS/CPL threshold are often hidden within blended campaign performance. Use performance bucketing, and set more aggressive targets to control spend and CPCs for these areas.

High spend, low ROAS/CPL

High visibility with low return is a common and costly pattern. Optimize your product feed, and apply more aggressive targets to bring these in line. Again, these products will benefit from implementing product bucketing.
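As one way to script these checks, here is a minimal pandas sketch that flags the three waste zones from an exported product or keyword report. The column names and thresholds are assumptions, so substitute your own export schema and viability targets:

```python
# Minimal waste-audit sketch over an exported product/keyword report.
# Column names ("cost", "conversions", "conversion_value") and thresholds
# are assumptions; adjust to your own export and ROAS targets.
import pandas as pd

df = pd.read_csv("product_report.csv")  # hypothetical export

MIN_COST = 100.0         # only judge rows with enough spend behind them
TARGET_ROAS = 4.0        # your viable ROAS threshold
HIGH_SPEND_SHARE = 0.05  # rows taking 5%+ of total spend get extra scrutiny

df["roas"] = df["conversion_value"] / df["cost"]
df["spend_share"] = df["cost"] / df["cost"].sum()

# 1. Zero-conversion rows that have passed the spend threshold
zero_conv = df[(df["conversions"] == 0) & (df["cost"] >= MIN_COST)]

# 2. Low-ROAS outliers hidden inside blended campaign averages
low_roas = df[(df["cost"] >= MIN_COST) & (df["roas"] < TARGET_ROAS)]

# 3. High spend, low return: the costliest pattern to leave unchecked
high_spend_low_roas = low_roas[low_roas["spend_share"] >= HIGH_SPEND_SHARE]

wasted = pd.concat([zero_conv, high_spend_low_roas]).drop_duplicates()
share = wasted["cost"].sum() / df["cost"].sum()
print(f"Flagged spend: {wasted['cost'].sum():,.0f} ({share:.0%} of total)")
```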

Wastage breakdown by category

Beyond products, a thorough audit should cover:

  • Account-level settings (such as content suitability, scheduling, landing page quality, and device performance).
  • Campaign-level detail (including search term reports, cannibalization, negative keyword coverage, bid strategy alignment, and asset performance). 

AI tools can significantly accelerate this analysis. Feeding your data into a well-prompted model can surface patterns that would take hours to identify manually. AI can also help visualize data more clearly and break it down into manageable, easy-to-understand segments.

Full-funnel thinking: Where should your budget sit?

When budgets are tight, funnel prioritization becomes critical. Not all spend is equal, and the hierarchy matters.

Conversion (retargeting, branded terms, exact match)

This is where the highest intent and highest return live. Protect this budget as much as you can, but also assess whether other channels can pick up some of this slack. For example:

  • Do you need to spend on brand searches, or can you capture this organically? 
  • Can you re-engage better through email?

Consideration (generic search, shopping, social)

For established brands, this is where the majority of the budget will sit, supporting the pipeline. These users have an active need for your product, and you should prioritize appearing for these searches/users. Again, consider the need for paid ads. 

  • If you are strong organically, with low competition, can you cut back? 
  • Which keywords and products is your budget best spent promoting?

Awareness (social, display, video, audio)

Valuable for long-term brand building, but is usually the first area to be trimmed when budgets are under pressure. 

You should try to maintain a level of branding, or you end up passing the issues down the road, as you are unable to build a future pipeline. In Google Ads, campaign types like Performance Max allow full-funnel targeting.

Creative is a must-have, not a nice-to-have

Creative is no longer just a brand awareness nice-to-have. It’s directly correlated to campaign success.

Google and Meta campaigns rely heavily on creative variation to test and optimize. Without sufficient variants, the system runs out of testing capability, and performance plateaus over time as frequency increases.

Campaign types such as Performance Max (Google Ads), GMV Max (TikTok), and Advantage+ (Meta) are heavily restricted without sufficient creative. This results in inefficient spending.

  • Variety is a system requirement: Platforms need multiple creative variations to identify what works for each auction, audience, and placement. If you don’t supply enough variety, you risk performance decline.
  • Fatigue is accelerating: With AI-generated content flooding the digital landscape, audiences are tiring of ads faster than ever. For most categories, refreshing creative at least every four to six weeks is now the baseline.
  • Quality beats quantity: Variation is valuable, but one clear, well-crafted message will outperform ten low-quality ones. Know the purpose of each ad, and who it’s for, before you create it.

AI can support creative production, but strong messaging and strategic clarity still matter most.

Attribution and measurement: Getting honest about what works

Platform attribution has become more fragmented and broken over the years, but many advertisers are unsure how to address this and move forward.

Cross-device behavior, iOS privacy changes, consent mode and GDPR, modeled data, and each platform’s bias toward claiming conversion credit all mean that in-platform numbers should be treated as optimization signals, not sources of truth.

Using blended metrics gives a cleaner picture of actual efficiency and can help you establish how your paid media efforts are working (a short worked example follows this list):

  • Marketing efficiency ratio (MER): Total revenue divided by total ad spend. A single, honest view of overall paid media efficiency.
  • New customer acquisition cost (nCAC): Total spend divided by the number of new customers acquired. Shifts focus from retention to business growth.
  • CLV:CAC ratio: Sets a strategic ceiling on customer acquisition costs. A ratio of 3:1 or above is the benchmark to aim for.
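A short worked example of those three blended metrics, with invented figures purely for illustration:

```python
# Worked example of the blended efficiency metrics above.
# All figures are invented for illustration.
total_revenue = 500_000.0      # all revenue in the period
total_ad_spend = 100_000.0     # all paid media spend in the period
new_customers = 800            # customers acquired in the period
avg_customer_lifetime_value = 600.0

mer = total_revenue / total_ad_spend              # marketing efficiency ratio
ncac = total_ad_spend / new_customers             # new customer acquisition cost
clv_to_cac = avg_customer_lifetime_value / ncac   # CLV:CAC ratio

print(f"MER: {mer:.1f}")               # 5.0
print(f"nCAC: ${ncac:.2f}")            # $125.00
print(f"CLV:CAC: {clv_to_cac:.1f}:1")  # 4.8:1, above the 3:1 benchmark
```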

Building a reliable measurement framework follows a clear sequence: fix your base tracking first, build a blended view of performance, use in-platform data for optimization signals only, and apply incrementality testing when making significant budget decisions.

Incrementality testing allows you to use treatment and holdout groups to clearly establish whether a new campaign or platform launch, for example, has added incremental value.

Automation and AI: Efficiency with guardrails

AI and automation offer real efficiency gains, but only when applied with thought and control. The biggest mistake is automating decisions that require strategic judgment, or removing human oversight from areas where context matters.

Safe to automate:

  • Bidding strategies.
  • Budget pacing alerts.
  • Data-backed budget adjustments.
  • Product labeling and exclusions.
  • Scheduled reporting and data visualization.
  • Competitor ad monitoring.

Keep human oversight:

  • Channel strategy.
  • Audience targeting.
  • Creative strategy.
  • Targets and KPIs.
  • Campaign launches.
  • Interpreting significant performance changes.

Scripts for product bucketing are a particularly high-value area of automation. Automatically labeling products based on performance criteria allows for continuous, data-driven management without manual intervention.


Performance Max: When to use it (and when not to)

PMax works well when you have a strong product feed, sufficient conversion volume, high-quality assets, clear audience signals, an appropriate budget, and effective conversion measurement in place.

Without these conditions, the risks can be high, and PMax can hide troublesome metrics among the averages. These risks include:

  • Cannibalization of brand search.
  • Over-indexing on existing customers.
  • Loss of product-level control.

Get the foundations right before leaning into automation.

Getting the most from AI bidding strategies

Choosing the right bidding strategy matters as much as setting it up correctly:

Strategy | When to use | Watch out for
Target ROAS | 30+ conversions/month with a clear ROAS target | Too high throttles spend; too low creates wasted traffic
Target CPA | Lead generation, where dynamic revenue isn’t tracked | Works best with consistent CPA; wrong targets cause delivery to spiral
Maximize Conversion Value | When you lack sufficient data to set a ROAS target | No bid ceiling; monitor CPCs and budget closely
Maximize Clicks | Upper funnel only, where traffic volume is the goal | Ignores the bottom of the funnel entirely

The highest-leverage moves for paid media efficiency

If your paid media budget is under pressure, the highest-leverage moves are:

  • Run a waste audit: Find the 20-30% that’s underperforming.
  • Protect lower-funnel spend: Conversion-focused campaigns should be the last to be cut.
  • Refresh creative more frequently: Creative fatigue is costing performance in ways that aren’t always visible in the numbers.
  • Move to blended measurement: Get honest about what’s working across channels, not just within platform dashboards.
  • Automate selectively: Use AI for what it does well, and keep human judgment where it counts.

Done well, efficiency can give you a competitive advantage, and it’s available to any team willing to look honestly at where their spend is actually going.

How to take your marketing measurement from crawl to sprint

10 April 2026 at 16:00

Measurement is the foundation for everything we do in performance marketing. Without accurate measurement, what we recommend, implement, and optimize is, at best, guesswork. Maintaining accurate measurement is more challenging than ever — and getting harder. 

Regulatory crackdowns and increased privacy concerns, alongside longer multi-touch journeys, are compounding to create a measurement crisis. Brands still using decade-old tactics won’t be able to overcome modern measurement challenges.

If your brand falls into this category, it’s time to rebuild your measurement foundation — from integrating first-party data (crawl), to creating cross-channel reporting for actionable insights (walk), to advanced media mix modeling (MMM) and incrementality testing for true incremental media lift (run).

The crawl: Building a first-party data foundation

Without integration of first-party data into your performance marketing channels, you’re fully reliant on third-party signals. While these metrics can be helpful, they’re surface-level signals and don’t show how channels impact your business goals.

Audience integration

The first step is integrating your customer relationship management (CRM) data into your paid media platforms. This includes:

  • Remarketing to abandoners.
  • Creating exclusion lists for current subscribers or recent purchasers.
  • Compiling priority contact lists. 

You might be uploading lists manually today, but a direct integration improves targeting by keeping those audience lists up to date in the media platforms.

Offline-conversion tracking

For lead gen businesses, the next recommended step is setting up offline conversion tracking (OCT). It shows the bottom-line impact of your media on sales. The integration passes sales data back to the platforms for campaign attribution. 

With OCT in place, you can optimize for lower-funnel, higher-quality conversion steps in the sales cycle or even begin optimizing toward revenue to improve your return on ad spend.

Setup is simple. You add a click ID to your form and then pass it from the platform to your CRM. Most of the top CRMs today, like Salesforce, integrate directly with platforms for easy implementation.
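To make the mechanics concrete, below is a minimal sketch of the two halves of that flow: storing the click ID when a lead is created, then exporting closed deals keyed by that click ID for upload. The field and column names are illustrative, not any specific CRM's or ad platform's schema, so check your platform's own import template.

```python
# Minimal sketch of offline conversion tracking (OCT):
# 1) store the ad platform's click ID (e.g., gclid) with each new lead;
# 2) when a lead closes, export a row keyed by that click ID for upload.
# Field/column names are illustrative; check your CRM and ad platform's
# own import template before using.
import csv
from datetime import datetime, timezone

def capture_lead(form_data: dict, crm: list) -> None:
    """Save a lead along with the click ID passed through a hidden form field."""
    crm.append({
        "email": form_data.get("email"),
        "gclid": form_data.get("gclid"),  # click ID captured from the landing page URL
        "created": datetime.now(timezone.utc).isoformat(),
        "status": "open",
        "deal_value": None,
    })

def export_closed_deals(crm: list, path: str) -> None:
    """Write closed, click-ID-attributed deals to a CSV for offline conversion upload."""
    rows = [lead for lead in crm if lead["status"] == "closed_won" and lead["gclid"]]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["click_id", "conversion_name", "conversion_time", "conversion_value"])
        for lead in rows:
            writer.writerow([lead["gclid"], "closed_deal",
                             datetime.now(timezone.utc).isoformat(), lead["deal_value"]])

# Example usage
crm: list = []
capture_lead({"email": "lead@example.com", "gclid": "EXAMPLE_CLICK_ID"}, crm)
crm[0].update(status="closed_won", deal_value=2500)
export_closed_deals(crm, "offline_conversions.csv")
```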

Server-side tracking and consent mode

To gain momentum from crawl to walk requires a heavier uplift, shifting from client-side tracking to server-side tracking.

Client-side tracking is the default process for passing conversion signals from your website to your media platforms. The user’s web browser sends that signal, allowing cookie loss, ad blockers, or strict browsers like Safari to muddy the signals and reduce data accuracy.

With server-side tracking, instead of relying on the user’s browser, you use a dedicated tagging server to capture signals from your website and send them directly to the platforms. This bypasses browser-based tracking and relies on your first-party data. It keeps your data accurate and resilient as privacy restrictions increase and cookies disappear.

You have two main integration methods:

  • Partner integration is the simpler option, as it uses pre-built connectors for setup through partners like Shopify, Tealium, Google Tag Manager, or similar platforms.
  • Direct API is code-heavy and for complex data or custom backends, and requires a developer team to build it. 

How you set up server-side tracking depends on the paid media channels you use, your tech stack, and your integration method. Both options require a dedicated cloud hosting server, which adds cost, but it’s worth it to better understand your media investment.

The walk: Cross-channel reporting integration

With a stronger measurement foundation in place, the next step is to break down platform silos and see the full ecosystem.

Going beyond last click

With server-side tracking in place, you’ve created a clean data pipeline. But last-click and first-click attribution give full credit to a single touch, the final or the first, ignoring the full-funnel path a user takes.

Platforms offer advanced attribution models, like Google’s data-driven attribution, but they still favor and silo data within their own platforms. For example, a user clicks a Meta ad, then searches and converts on a Google ad. In this case, each platform claims the conversion.

The solution is to use a data warehouse, such as BigQuery or Snowflake, to centralize your data from your website, CRM, and other platforms. From there, you can apply custom logic to build a multi-touch attribution model that stitches your data together using your first-party identifiers to see the full journey and attribute across the ecosystem.
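That custom logic can start simple. Below is an illustrative sketch of a linear (even-credit) multi-touch model over a stitched touchpoint table; the table and column names are assumptions, and a production model would typically live in the warehouse itself rather than a script.

```python
# Illustrative sketch: linear multi-touch attribution over a stitched
# touchpoint table. Assumes columns user_id, channel, timestamp, converted,
# revenue; in practice this table would come out of BigQuery/Snowflake.
import pandas as pd

touches = pd.read_csv("touchpoints.csv", parse_dates=["timestamp"])

def linear_attribution(df: pd.DataFrame) -> pd.Series:
    """Split each converting user's revenue evenly across their touchpoints."""
    credited = []
    for user_id, journey in df.sort_values("timestamp").groupby("user_id"):
        if not journey["converted"].any():
            continue  # skip non-converting journeys
        revenue = journey["revenue"].max()  # revenue recorded on the conversion
        share = revenue / len(journey)      # even credit per touch
        for channel in journey["channel"]:
            credited.append((channel, share))
    out = pd.DataFrame(credited, columns=["channel", "credit"])
    return out.groupby("channel")["credit"].sum().sort_values(ascending=False)

print(linear_attribution(touches))
```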

Unified reporting dashboards

With evolved attribution, a unified reporting dashboard will merge the platform performance data (views, clicks, impressions, etc.) with your integrated first-party conversion data (using server-side tracking and advanced attribution). There are many dashboard builders — the easiest being Looker Studio, as it integrates directly with BigQuery and Snowflake, making it effectively plug-and-play. 

With a dashboard in place, you can now visualize the data across the funnel to gain actionable insights into which platforms are driving volume, converting, and impacting your bottom line. 

The run: Media mix modeling and incrementality testing

You now have a detailed, day-to-day view of performance built on user-level events and insights. But key questions remain.

  • How do you know if a channel has room for more growth? 
  • How do you measure offline performance like a TV ad? 
  • How do you know if a tactic is working? 

Understanding the full impact of your media investment and tactics at a macro level requires media mix modeling and incrementality testing.

The holistic view through MMM

Think of MMM as your compass guiding strategy. It provides a holistic, mathematical source of truth for your paid media investments by measuring the relationship between your media inputs and business outcomes (revenue or leads) over time.

This isn’t a day-to-day tool. You typically use it on a 3-, 6-, or 12-month cycle, depending on your data volume, and it requires 2+ years of data to account for seasonality and promotions. The model then runs a regression analysis to determine the relationship between your inputs and business outcomes.

With MMM, you get channel-agnostic insights that remove platform bias. It helps you answer key questions about diminishing returns, budget allocation, and the impact of upper-funnel investment on revenue. That clarity helps you make smarter decisions for the next quarter, half, or year so your marketing dollars drive maximum impact.
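Stripped of adstock, saturation curves, seasonality, and promotion effects, the core of that regression looks something like the sketch below. The column names are assumptions, and a real MMM is considerably more involved.

```python
# Stripped-down MMM sketch: regress weekly revenue on weekly spend by channel.
# Real models add adstock/carryover, diminishing returns, seasonality, and
# promotion flags; column names here are assumptions.
import pandas as pd
import statsmodels.api as sm

weekly = pd.read_csv("weekly_marketing.csv")  # one row per week
channels = ["paid_search_spend", "paid_social_spend", "tv_spend"]

X = sm.add_constant(weekly[channels])  # intercept captures baseline (non-media) revenue
y = weekly["revenue"]

model = sm.OLS(y, X).fit()
print(model.summary())  # channel coefficients approximate revenue per extra unit of spend
```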

Pulse checks with incrementality testing

Incrementality testing validates both MMM and your marketing efforts. It measures a single tactic or channel by splitting your audience into two groups: a test group that sees the tactic and a control group that does not. It compares results between the groups, with the difference representing incremental lift.

You can split test and control groups using user-level holdouts, individual-level tracking, or geo-level holdouts when individual tracking isn’t possible. It answers a core question: if a user didn’t see the ad, would they have converted anyway?

This shows the true lift of a specific platform or tactic and helps you decide whether to stop bidding on brand terms for existing customers. It can also calibrate your MMM.

For example, if MMM reports paid social drives $1 million in revenue, but an incrementality test shows lift closer to $500,000, you can feed that back into the MMM to improve future forecasts.
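In its simplest form, the lift calculation behind a holdout test is a comparison of conversion rates between the two groups, as in this sketch with invented figures:

```python
# Minimal incrementality sketch: compare test (exposed) vs. control (holdout)
# conversion rates and express the difference as incremental lift.
# Figures are invented for illustration.
test_users, test_conversions = 50_000, 1_500        # group that saw the tactic
control_users, control_conversions = 50_000, 1_200  # holdout group

test_rate = test_conversions / test_users            # 3.0%
control_rate = control_conversions / control_users   # 2.4%

incremental_conversions = (test_rate - control_rate) * test_users  # 300
relative_lift = (test_rate - control_rate) / control_rate          # 25%

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Relative lift: {relative_lift:.0%}")
```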

The sprint: Clean, integrated, and validated first-party data

With first-party data integrated through server-side tracking, cross-channel reporting, and custom attribution, you’ve built a strong measurement foundation.

Guided by MMM and validated with incrementality testing, you’re ready to sprint — with a system that helps you make better decisions and clearly show the impact of every investment.

Samsung keeps dragging One UI 8.5 Beta instead of starting One UI 9

10 April 2026 at 10:49

Hey Sammy fans, let’s talk about something that’s been bothering me lately. Samsung’s One UI 8.5 beta program is still going strong in April 2026, expanding to more devices like the S24, S23 series, older foldables, and some A-series phones. But here’s the thing: this minor update is starting to hold back the major software update.

The One UI 8.5 beta program for the Galaxy S25 series started back in December 2025. By March 2026, Samsung had expanded the beta program to more flagship S series and foldable phones, and as of yesterday, we received beta 9 on the Galaxy S25.

It’s an incremental update; you can expect small refinements to AI features, performance tweaks, and everyday usability improvements. The stable version of One UI 8.5 already launched with the Galaxy S26 series in February, and it’s supposed to roll out to older flagships soon.

But the beta testing has stretched longer than everyone expected. As per rumors, we could see even more beta builds this month for the S25, S24, and foldables.

Galaxy users are now getting impatient, and I get it. As the beta program drags on, it feels like Samsung is being extra careful after some buggy rollouts in the past.

Now, compare this to how things used to be. Remember the One UI 7 rollout, the last real major update? It had a long beta program and tons of bugs, pushing the stable release into April 2025 for many devices. One UI 8 followed a better schedule in 2025: Samsung started the beta in May, the stable build hit foldables in July, and the wider rollout began in September last year. Samsung learned from the One UI 7 mess and successfully moved faster with the One UI 8 rollout.


One UI 8.5 was meant to be that smoother, quicker follow-up, since it is a minor upgrade. So what is the real issue now? Samsung has already started internal testing of Android 17-based One UI 9.

We have seen the early One UI 9 beta builds for the S26 series in March 2026, around the same time Android 17 beta 3 was released.

Google has now moved forward with platform stability, which means Samsung can ramp up its custom One UI 9 software. Yet here we are, still busy with One UI 8.5 betas in April. If Samsung doesn’t finish this long One UI 8.5 beta and start the One UI 9 beta soon, it will definitely disappoint millions of Galaxy users.

I know why starting the One UI 9 beta program early is important. The beta programs are where real users like us spot bugs, test new AI tools, and give feedback that makes the stable software better. If the One UI 8.5 beta keeps taking too much time on small changes, the One UI 9 beta will get delayed. In turn, you will see a rushed or delayed public testing program, and ultimately, a delayed stable rollout for everyone.

We have seen this pattern before. When Samsung took extra time polishing One UI 7, the whole cycle was delayed, leaving some older devices waiting longer than promised. I am afraid the same thing could happen again. Samsung promises up to 7 years of software support, but that promise only holds if updates actually come on time.

In my opinion, this isn’t a great strategy. Samsung should focus on completing the One UI 8.5 beta program quickly. It is the right time to start the stable rollout and kick off the One UI 9 beta program by May.

The Galaxy S26 users deserve to try Android 17 features early. And the rest of us want to see Samsung moving forward, not falling behind. It’s not about releasing buggy software. It’s about smart planning.

Here’s my suggestion: Finish one update properly, listen to feedback quickly, and then move to the next one quickly. It helps Samsung stay competitive with Google, Apple, and others.

Most importantly, it makes sure the stable One UI 9 doesn’t get delayed into late 2026 or later. What do you think? If you are using the One UI 8.5 beta right now, how is it running on your phone? Share your experience with us on X at @thesammyfans.

The post Samsung keeps dragging One UI 8.5 Beta instead of starting One UI 9 appeared first on Sammy Fans.

How to measure intent gaps using Google Search Console data

9 April 2026 at 19:00

There’s often a disconnect between what a webpage says it’s about and what its audience is actually searching for.

This mismatch has always existed. But the stakes are higher now.

If your page fails to match user intent, it won’t show up in AI-powered search surfaces. Search engines will find a page that delivers.

You can see the mismatch, but it’s hard to quantify. The data to measure it is already in your Google Search Console account. Below, you can analyze your own pages to see how closely your content aligns with what your audience is searching for.

Measuring the gap between positioning and demand

Most web content today is designed to accommodate multiple target audiences, tens or hundreds of keywords, and brand positioning. As a result, it drifts away from the problems people are trying to solve.

I’ve had this argument many times and learned that observations create interesting conversations, but numbers create urgency and action. In this case, the numbers you need are already in your data, and the intent gap analysis tool uses that data to measure them.

Google Search Console captures what your audience searches for when they find each page. The meta description captures what the page says it’s about. One is demand. The other is positioning.

Intent gap analysis scores the distance between your meta description and your audience’s queries. Vector embeddings make that score possible by measuring meaning rather than just matching words. The result is a single intent gap score (0-100) that shows how well your page aligns with what your audience is searching for. 
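The underlying calculation can be approximated in a few lines. The sketch below assumes the open-source sentence-transformers library and a Search Console query export with impressions; the impression weighting and 0-100 scaling are illustrative choices, not necessarily the tool's exact formula.

```python
# Approximate intent gap score: impression-weighted cosine similarity between
# the page's meta description and its Search Console queries, scaled to 0-100.
# The weighting and scaling are illustrative, not the tool's exact formula.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

meta_description = "Workforce harmony, refined. LumonHR elevates every team."  # hypothetical
queries = ["hr software for small business", "employee onboarding software",
           "work-life balance policy template"]                                # from a GSC export
impressions = np.array([5400, 3100, 900], dtype=float)                         # from the same export

query_vecs = model.encode(queries, normalize_embeddings=True)
meta_vec = model.encode([meta_description], normalize_embeddings=True)[0]

similarities = query_vecs @ meta_vec                      # cosine similarity per query
weighted = np.average(similarities, weights=impressions)  # weight alignment by demand
intent_gap_score = round(float(np.clip(weighted, 0, 1)) * 100)

print(f"Intent gap score: {intent_gap_score} / 100")
```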

Connecting positioning to demand

Google’s Search Central documentation describes the meta description as “a pitch that convinces the user that the page is exactly what they’re looking for.”

The meta description also functions as a machine-readable signal. LLMs and generative engines consume it as a compact summary of what the page claims to deliver.

Achieving “durable visibility in AI ecosystems” requires “consistent metadata, provenance, and trust signals that can be interpreted by search crawlers and generative engines,” IDC’s December 2025 Market Note on brand visibility found.

Scoring a page’s meta description requires an anchor in audience behavior. Google Search Console provides that anchor — the queries where Google chose to surface your page, regardless of whether the page was built for that intent.

The intent gap analysis tool expresses the gap as a score. In the sample analysis below of LumonHR, a fictional SaaS platform inspired by Severance, the homepage scores a 32.

The meta description uses vague aspirational language that doesn’t match the functional, software-focused queries driving traffic. The page isn’t attracting the audience it targeted.

LumonHR’s homepage scores a 32 out of 100. The colored bar shows how impressions distribute across topic clusters.

Dig deeper: How to use AI to diagnose and improve search intent alignment

Why intent is measurable now

Search engines now use vector embeddings as a core part of how they match content to queries. Intent matching runs on meaning, not just keywords. When a user searches, the engine embeds the query and compares it against content candidates in a shared vector space. 

Semantic similarity is one of the signals that determines whether your page gets surfaced, cited, or used to generate an answer, alongside authority, trust, freshness, and other ranking factors.

Vector embeddings let you see your page the way a search engine does.

Where existing tools stop

N-gram analysis and TF-IDF have been the standard tools for analyzing search queries. N-grams surface recurring phrases, revealing the vocabulary your audience uses. TF-IDF highlights which terms matter most in your query set. 

These approaches match words, not meaning. “Setting boundaries between office and personal time” and “maintaining employee work-life balance” share zero words. To a word-matching tool, they’re separate topics. To a search engine running on embeddings, they express the same intent. 

When brands match words and search engines match intent, you’re working at a disadvantage.

Measuring meaning, not words

Vector embeddings encode meaning. An embedding converts text into numbers, allowing you to create a map of relationships rather than a list of terms. When two pieces of text mean similar things, their vectors land close together in a shared mathematical space.

Once your meta description and your audience’s queries are plotted in the same space, the distance between them is measurable.

Queries close to the meta description align with the page’s positioning. Queries far from it represent demand the page wasn’t built for. That distance is the intent gap score.
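
The tool’s exact scoring method isn’t published, but the core idea is easy to approximate. The sketch below is a minimal illustration, not the tool’s implementation: it assumes you have the page’s meta description, a list of queries with impressions (for example, from a Search Console export), and the open-source sentence-transformers package, which is my choice for the example rather than anything the tool prescribes. It embeds everything, takes the impression-weighted cosine similarity between each query and the meta description, and scales the result to 0-100, where lower means a wider gap.

```python
# Minimal, illustrative approximation of an intent gap score.
# Assumptions: sentence-transformers is installed, queries and impressions come
# from a Search Console export, and the real tool's scoring may differ.
import numpy as np
from sentence_transformers import SentenceTransformer

def intent_gap_score(meta_description, queries, impressions):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
    vectors = model.encode([meta_description] + list(queries), normalize_embeddings=True)
    meta_vec, query_vecs = vectors[0], vectors[1:]
    sims = query_vecs @ meta_vec  # cosine similarity, since vectors are unit-normalized
    weighted = np.average(sims, weights=np.asarray(impressions, dtype=float))
    return round(float(max(0.0, weighted)) * 100, 1)  # 0-100, lower = wider gap

print(intent_gap_score(
    "Reimagine work with the HR platform your teams will love.",  # vague positioning
    ["hr software for shift scheduling", "employee onboarding checklist tool"],
    [1200, 300],
))
```

A fuller analysis would first cluster the queries (for example, with k-means over the same embeddings) and then score each cluster, which is essentially what the cluster map and quadrant views described below are built on.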

The map below breaks the intent gap into clusters, showing where your page aligns with audience demand and where it doesn’t.

LumonHR’s query clusters mapped by the relationship between positioning and demand.

Dig deeper: SEO gap analysis: How to find content and keyword gaps

What the intent gap reveals

Clustering your queries into topics reveals which audiences the page is reaching and which it’s missing. Each cluster has two properties: 

  • How closely it aligns with the meta description.
  • How much search demand it carries. 

Those two dimensions place every cluster into one of four quadrants: defend, create, optimize, or monitor.

Defend

High alignment, high demand. The audience is finding your page for the reasons you built it, and in volume. This is where your topical authority lives.

Protect and reinforce. Keep the content current, and update the meta description if the language has drifted from how the audience phrases their searches.

Create

Low alignment, high demand. The audience is arriving with intent the page was never built to serve. This is demand you’re visible for but not capturing.

Create new content for the clusters that fit your strategy, using the language your audience is already using. Ignore the ones that don’t. Each cluster that passes the filter is a signal for new content.

Optimize

High alignment, low demand. The page matches what these searchers need, but few are finding it. The content is right. The visibility isn’t.

Investigate the constraint. The alignment is there, but the audience is small. Rankings may be too low, the positioning too narrow, or the topic may need supporting content to grow.

Monitor

Low alignment, low demand. Some clusters may grow into Create or Optimize territory over time.

Watch for growth. This is often where emerging topics are first detected. If demand increases, re-evaluate.
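
If you want to reproduce the quadrant routing on your own cluster data, it is a simple two-threshold check. In the sketch below, the 50-point alignment cutoff and the 10% impression-share cutoff are arbitrary illustrative values, not thresholds the tool actually uses.

```python
# Toy quadrant assignment for query clusters.
# The alignment and demand cutoffs are illustrative assumptions only.
def classify_cluster(alignment, impression_share):
    high_alignment = alignment >= 50         # 0-100 alignment score for the cluster
    high_demand = impression_share >= 0.10   # cluster's share of the page's impressions
    if high_alignment and high_demand:
        return "defend"
    if high_demand:
        return "create"
    if high_alignment:
        return "optimize"
    return "monitor"

for name, alignment, share in [("work-life balance tips", 72, 0.34),
                               ("hr compliance software", 28, 0.41),
                               ("onboarding templates", 66, 0.04)]:
    print(f"{name}: {classify_cluster(alignment, share)}")
```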

Query clusters analyzed, scored, and assigned a recommended action.

Dig deeper: How and why to ‘be the primary source’ for organic search

Your data, your score: Running the intent gap analysis

Here’s the tool and how to run the analysis on your own pages.



Step 1: Export your page data

In Google Search Console, navigate to Performance > Search results, filter by a single page, and export as a .zip file. 

Step 2: Upload and score

Upload the .zip file to the tool (your data is not stored) to get your intent gap score. The tool scrapes the meta description, scores every query against it, and clusters the results. 

Step 3: Explore the map

Each cluster is plotted by alignment and demand. Click any bubble to see the individual queries with clicks, impressions, CTR, and position.

Step 4: Review the breakdown

Every cluster in one view with its quadrant, alignment score, and performance metrics.

Step 5: Get rewrite recommendations

The tool generates recommended changes to your page’s title and meta description, grounded in the search language from your highest-demand clusters.

Step 6: Share your results

Download the table as CSV or use the “Copy as Image” buttons to share individual views with your team.

Sample suggested title and meta description revisions based on intent gap findings.

Dig deeper: How to master user intent with SEO personas

Turning the score into a decision

The intent gap score assigns a number to the disconnect, and that number gives it traction. It turns observations into actions you can take in stakeholder conversations, whether that means changing a page or defending it.

Your audience is already telling you what they need. That signal is always shifting. Now you can monitor it, measure it, and close the gap.

The tool featured in this article was created by Robin Tully, co-founder at Forecast.ing.

Audit your agency: 6 questions to find a true growth partner

8 April 2026 at 19:00
Audit your agency: 6 questions to find a true growth partner

Most agencies present prospective clients with an account audit as part of their sales process. The purpose is twofold: 

  • To provide immediate value (usually without strings attached).
  • To demonstrate that they know their stuff.

But how often do brand marketers turn the tables and audit their agencies in their RFP?

I’m the head of performance marketing at a marketing agency, so I’m clearly writing from a biased perspective. However, over my decade-plus in the industry, I’ve seen too many brands settle for “good enough” because they didn’t know which questions would reveal the cracks in a potential partner’s strategy and approach.

If I were a brand looking for a true growth partner, here are the specific questions I’d ask to separate the top performers from the rest.

1. What are your key services, and what percentage of your clients utilize each?

A lot of agencies claim to be “full service,” but rarely are they “full excellence.” I’d be looking for where an agency truly spends its time versus where they’re just trying to upsell me.

It’s less about the channels in question (although if, say, LinkedIn is a key growth driver for your brand, they’d better demonstrate proficiency there), and more about how their strengths align with your needs.

If an agency claims expertise in SEO, creative strategy, and paid media, but 90% of its client base only uses it for paid search, that’s a red flag. You want a partner whose core competencies align with your primary needs.

If you need high-volume creative testing, you want an agency where 80%+ of clients use its creative production frameworks, not one that treats creative as an add-on service.

Dig deeper: Confessions of a PPC-only agency: Why we finally embraced SEO

2. How are you approaching AI-driven account optimization and platform automation?

I miss the days when knowledge of the manual controls at your disposal could set you apart as a high-performing marketer. But those days have been gone for a while.

In 2026, there’s a real danger of over-optimization with the controls we have left. Over-optimizing can reset algorithmic learnings and prevent the algorithms from fine-tuning in service of your goals. Agency teams that strike the right balance between trusting automation and intervening manually have a healthier approach than those who either blindly trust algorithms or can’t help tinkering excessively.

One control you can and must be diligent about using is first-party data for enhanced conversions and offline conversion tracking. Part of the job of a great marketer is training the algorithms on which leads and which conversions to target, and first-party data is a huge lever to pull in that regard.

3. What is your reporting process and what KPIs do you focus on for the majority of your clients?

Don’t just ask for a sample report. Anyone can make a PDF look pretty. You need to understand their philosophy on data.

You’re looking for an agency that’s willing to move upstream. If the majority of their clients are measuring success on clicks, traffic, or even MQLs, run the other way.

A performance-driven agency should be obsessed with revenue, ROAS, and pipeline velocity. Ask them how they handle attribution. If they rely solely on in-platform metrics, which often over-claim credit, they aren’t looking at the full picture. 

Dig deeper: What successful brand-agency partnerships look like in 2026

4. What’s the average industry tenure of the team on my account?

This is actually a pretty common question and has been for years. Too many marketers know the pain of integrating rotating sets of agency teams because the agency can’t hold onto top employees, and you should be evaluating the answer from this perspective.

There’s another factor to consider. Generally speaking, the more experienced a marketing team is, the more effectively it uses AI tools.

While junior marketers might be more avid proponents of AI and quicker to adopt its functionality, they’re also far more likely to use it for things like creative ideation and strategy. Both are areas where high-quality human thought is a true differentiator.

For this answer specifically, remember that you have some great research tools like Glassdoor that you can and should access. Employee tenure is one thing, but a Glassdoor profile with a bunch of red flags is an indicator that the agency might struggle to keep the talent it really wants to retain.

5. How is your team using AI on client accounts?

Again, you’re looking for a balance here. Agency teams that don’t use AI at all are almost certainly burning resources on manual tasks, but agency teams that overuse it to replace perspective, critical thinking, and creativity are commoditizing their own client service.

Two follow-up questions to ask:

  • What is your governance structure for AI use?
  • What’s your process for QAing AI output?

You’re looking for firm answers and redundant layers for each of these questions — at the very least, someone relatively senior should approve any output before it goes live.

Dig deeper: Why PPC teams are becoming data teams

6. When you take over an account, what are the first things you do to save budget without affecting growth?

This is the ultimate litmus test for technical proficiency. A great performance marketer knows where the ad platforms hide the waste buttons. If I were a brand marketer, I’d want to hear about:

  • Any harmful default settings that need to be turned off.
  • What inputs are driving wasted spend (audiences, networks, keywords, etc.).
  • A plan to prioritize budget around what’s driving business outcomes.

If an agency can’t rattle off these specific checks, they’re likely missing the “low-hanging fruit” of budget efficiency. Fixing some of these takes seconds, but missing them costs thousands.

What separates a true growth partner from the rest

Remember: when you’re choosing an agency partner, it’s every agency’s job to sound as good as it possibly can, but what an agency considers a great answer might not be a great fit for your brand.

By focusing on utilization rates of services, strategic application of AI, and approaches to budget efficiency, you’ll find a partner capable of driving actual performance, not just spending your budget.

Dig deeper: How to find your next PPC agency: 12 top tips

Why product feeds need an organic strategy for AI search

8 April 2026 at 18:00
Why product feeds need an organic strategy for AI search

Ask most ecommerce brands who owns their product feed, and the answer is almost always the same: the paid media team.

Maybe a feed management tool sits under PPC. Maybe the shopping team built the feed years ago, and nobody’s touched the titles since. Either way, SEO rarely has a seat at the table, and it’s often forgotten as part of the broader feed management strategy.

Whether you’re worried about AI search or traditional clicks, you’re missing out on opportunities by excluding SEO from your feed management strategy.

AI shopping results are grounded in Google Shopping data

Up to 83% of ChatGPT carousel products match Google Shopping’s organic results, according to a recent Peec AI study analyzing more than 43,000 listings. And 60% of those matches came from Shopping positions 1-10.

Data shows how ChatGPT’s product carousel matches Google Shopping’s organic results, with Google dominating over Bing.

On Google’s side, the Shopping Graph now contains more than 50 billion product listings and feeds directly into AI Overviews, AI Mode, and Gemini. AI Overviews appear in roughly 14% of shopping queries, up from about 2% in late 2024. Like many other things we’ve discovered about AI search, the generative results are informed by traditional SERPs.

SEO needs to be the strategic quarterback for brand authority. This is a highly valuable opportunity to work cross-channel toward a common goal of improving visibility across search surfaces. It really requires SEOs, commerce, and paid media teams to get in the same room.


The case for a dedicated organic feed

Typically, brands run a single product feed optimized for Google paid shopping campaigns. Titles are written for bid relevance, descriptions are built for Quality Score, and the feed exists to win auctions, with less consideration for user search behaviors.

As user behavior shifts, search surfaces favor stronger semantic alignment between queries and product data. A title stuffed with paid-friendly modifiers or branded terms isn’t the same as a title that mirrors how someone conversationally searches for a product.

We tested this with a large ecommerce brand. Our agency’s AI SEO team partnered with the commerce team to launch a dedicated product feed for free organic listings, with titles and descriptions optimized specifically for organic visibility, rather than replicating what was already running in the paid feed.

After the organic feed was pushed live:

  • Organic listing CTR increased 10% month over month, alongside a 4% lift in purchasing rate.
  • A product-level test saw a 92% increase in revenue for free listings, with visibility up 83%, and add-to-cart up 14%.
  • The organic optimization changes alone drove 35,000 impressions at a 1.4% CTR, 55% higher than the CTR seen in paid for the same time period.

Rather than replacing our paid feed strategy, we recognized that organic and paid shopping solve different problems and have different needs that require optimizing accordingly.

Organic feed titles should reflect how your customers actually search, not how your bidding strategy is structured.

Dig deeper: How AI-driven shopping discovery changes product page optimization

What to prioritize in an organic feed strategy

Not every feed attribute carries equal weight. If you’re building a dedicated organic feed or just auditing your existing feed for gaps, here’s where you could start.

Titles are the highest-impact lever

Google’s algorithm heavily favors feed titles when matching products to queries, and its own documentation emphasizes including important attributes to “better match search queries and drive performance lift.” Consider how a customer might describe what they’re looking for in a conversational way, and how that aligns with product attributes.

Google’s Merchant Center documentation reinforces the point that your feed strategy should map to how your customer actually shops to help improve their search journey.

Global Trade Item Numbers (GTINs) are non-negotiable

Google’s GTIN documentation makes clear that products with correct GTINs receive significantly more visibility. Industry data has consistently shown that properly matched products can drive up to 40% more clicks. They’re also the primary signal for aggregating product reviews across sources.

Don’t overlook images

They’re still the most common source of Merchant Center disapprovals. Products with both standard and lifestyle images typically see significantly higher engagement. 

If budget or bandwidth has kept better product images on the back burner, Google’s Product Studio can help handle some of the editing, so you can test and improve creative at scale without a full reshoot. It’s also a way for SEO and creative teams to collaborate on feed-specific assets and testing.

Optimize key product attributes: product_highlight and product_detail 

  • product_highlight lets you add scannable benefit statements that appear in expanded Shopping views. For instance, “water-resistant for light rain commutes” is doing more work than “high-quality material” for both the shopper and the AI. 
  • product_detail provides structured specifications that power Google’s faceted filters in organic product grids.

The same semantic work SEOs are doing to optimize product detail pages (PDPs) for conversational search — like defining ideal buyers, naming use cases, and articulating compatibility — should inform feed attributes. 

Product and content teams already understand what drives someone to buy. That context should be in the feed, not just on a brand’s PDPs.
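
As a rough illustration of how these attributes carry that context, here is what a single item might look like if you manage feed data programmatically. The attribute names mirror Merchant Center’s product_highlight and product_detail fields; the item and every value in it are invented for the example.

```python
# Hypothetical feed item illustrating product_highlight and product_detail.
# Attribute names follow the Merchant Center product data specification;
# the product and its values are invented.
feed_item = {
    "id": "JKT-0042",
    "title": "Men's Waterproof Commuter Cycling Jacket - Reflective, Packable",
    "product_highlight": [
        "Water-resistant for light rain commutes",
        "Packs into its own pocket for storage at the office",
        "360-degree reflective trim for low-light visibility",
    ],
    "product_detail": [
        {"section_name": "Fit", "attribute_name": "Cut",
         "attribute_value": "Relaxed enough to layer over work clothes"},
        {"section_name": "Materials", "attribute_name": "Shell",
         "attribute_value": "2.5-layer ripstop nylon"},
    ],
}
```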

Dig deeper: How to make ecommerce product pages work in an AI-first world

Your feed is also your agentic commerce foundation

Here’s what makes this investment compound: the feed optimization work done today for organic shopping visibility will also help build brand readiness for agentic commerce standards and applications.

Google’s Universal Commerce Protocol, announced in January, is a framework that enables AI agents to discover products, build carts, and complete transactions directly inside AI Mode and Gemini. The shopper may never land on the brand website to make a purchase. UCP isn’t a replacement for Google Merchant Center, because it’s built directly on top of GMC data.

Feeds are how products enter the Shopping Graph. The Shopping Graph is the dataset AI agents query when processing a shopping request. The new native_commerce attribute added to feeds is what signals that a product is eligible for the UCP-powered “Buy” button in traditional and AI-driven Google services.

Google has also announced the eventual rollout of several new Merchant Center attributes designed specifically for conversational commerce: 

  • Product FAQs.
  • Use cases.
  • Compatible accessories.
  • Product substitutes. 

These are additions to an existing GMC feed that give AI agents the contextual understanding they need to match products to natural-language queries like “what’s a good waterproof jacket for bike commuting?” These new conversational attributes are rolling out to a small group of retailers first.

This is where feed data and on-page content need to stay tightly aligned. Search surfaces cross-reference a brand’s feed against:

  • Structured data. 
  • PDP content.
  • Other sources to validate findings. 

When those layers contradict each other, trust erodes at the domain level. 

Dig deeper: 7 organic content investments that drive ecommerce ROI

Building a cross-channel strategy for AI search

Product feed strategy and optimization is an opportunity for genuine cross-team collaboration to test, execute, and measure visibility. A holistic approach to managing product details across every surface will benefit brands in both traditional and AI-driven search.

  • SEOs bring the keyword intelligence, semantic understanding, and knowledge of how AI systems match queries to content. 
  • Commerce and marketplace teams own the product data, product information management, and relationships with retailers. 
  • Paid teams have the feed infrastructure, the tools, and years of experience managing feed health at scale.

These teams must work together to coordinate their insights and effectively establish an AI SEO operating system. The product feed sits at that intersection as it’s an owned asset managed by commerce infrastructure that directly feeds AI-powered visibility.

The first step is to pull a current feed and compare organic titles to paid titles. The second step is getting the right people in the room to build something better. SEO is most successful when more channels align toward the same goal: better brand visibility.

How AI search defines market relevance beyond hreflang

8 April 2026 at 17:00
How AI search defines market relevance beyond hreflang

Hreflang has long been a core mechanism in international SEO, directing users to the right regional version of a page. That approach worked when search engines primarily returned static results. 

AI-driven synthesis changes that. Instead of returning lists of links, AI systems construct answers. They neither need nor want your perfectly implemented hreflang tags. They aren’t looking for instructions on which page to serve. They’re trying to determine which answer is best supported across sources.

Your content has to hold up when the model compares it against everything it’s seen, regardless of language or origin. If it doesn’t, it won’t be used.

What hreflang does and doesn’t do

We need to address a fundamental misunderstanding of the hreflang attribute. Hreflang has always been a switcher, not a booster. 

If your brand lacked organic authority in Australia before implementing the tag, adding the en-au attribute wouldn’t magically improve your rankings in Sydney. Its only function was to ensure that if you did rank, the user saw the correct regional version.

In AI search, this “you vs. you” dynamic (your global site competing with your own regional sites) has become a liability. While traditional search still relies on these tags to organize traffic, AI models often bypass them during the synthesis phase. If a brand’s U.S.-based .com site possesses decades of authority, the AI’s internal logic may determine that the U.S. site is the true source of information.

Consequently, even when a user in Berlin searches in German, the AI may synthesize an answer based on the U.S. data and simply translate it on the fly, effectively ghosting the brand’s localized German site despite perfectly implemented hreflang tags.

The double-blind: Query fan-out vs. entity compression

AI models don’t just answer the query you see. They expand it into dozens of hidden checks, comparing sources, validating claims, and pulling in information across languages to see what aligns.

ChatGPT often translates and evaluates queries in English even when the user searches in another language, research from Peec AI shows. This reinforces how query fan-out operates across markets. If your local entity doesn’t hold up in that broader comparison, it doesn’t get used.

A second issue happens before retrieval even begins. During training, LLMs compress what they see so it can be stored and reused at scale.

When multiple regional pages look too similar, they don’t stay separate. They’re folded into a single representation, also known as canonical tokenization.

Local details — phone numbers, office locations, and market-specific references — don’t always survive that process. They’re treated as minor variations rather than meaningful signals.

By the time the model is asked a question, your local site is often no longer competing. In many cases, it’s already been absorbed into the global one.

Dig deeper: What the ‘Global Spanish’ problem means for AI search visibility

7 ways to build AI-first relevancy

To compete globally, expand your strategy to include signals that resonate with AI’s data supply chain.

1. Build locally aligned infrastructure

Meta tags tell systems what you intend. Infrastructure often tells them what to believe. Datasets like Common Crawl use geographic heuristics, IP location, and domain structure to make sense of content at scale. That happens early in the process, before anything resembling ranking.

This means your content may already be placed in a market before the model ever evaluates it. If your regional domains aren’t supported by local infrastructure or delivery, you’re sending mixed signals. Those are hard to recover from later.

2. Break the compression threshold

To break the semantic gravity that leads to entity compression, you need what I would call a clear “knowledge delta.” Most global teams fail here because they think localization means translation. It doesn’t. 

There’s no universally accepted magic number for unique content. From a semantic vector perspective, I speculate that a divergence threshold of at least 20% of the content on a local page must be unique to prevent the model from collapsing your local identity into your global one.

To address this, front-load market-specific data, such as regional shipping logistics, local tax identifiers, and native case studies, into the first 30% of your page. This gives the model the quantifiable evidence it needs to cite your local URL as a distinct authority.
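
There’s no official way to measure that divergence, but you can get a first-pass signal with nothing more than Python’s standard library. The sketch below is a crude illustration: it uses difflib to estimate surface-level text overlap (not semantic distance) between a global page and a local page, and applies the speculative 20% uniqueness threshold described above. The file names are placeholders for your own exported page copy.

```python
# Crude divergence check between a global page and a local-market page.
# difflib measures surface text overlap, not meaning, so treat this as a
# first-pass heuristic; the 20% threshold is speculative, as noted above.
from difflib import SequenceMatcher

def uniqueness_ratio(global_text, local_text):
    overlap = SequenceMatcher(None, global_text, local_text).ratio()
    return 1.0 - overlap  # share of the local page not shared with the global page

global_page = open("global_home.txt", encoding="utf-8").read()  # placeholder file
local_page = open("de_home.txt", encoding="utf-8").read()       # placeholder file

if uniqueness_ratio(global_page, local_page) < 0.20:
    print("Local page risks being compressed into the global entity.")
```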

3. Anchor your entity in semantic neighborhoods

AI models interpret market relevance by looking at the company you keep in the text. Incorporate geographic anchoring by referencing local neighborhoods, regional landmarks, or specific transit hubs (e.g., “located near the Alexanderplatz station” in Berlin). 

These co-occurrence signals pull your brand’s vector embedding toward the specific local coordinate in the model’s training data, creating a geographic fence that helps the AI disambiguate your local office from your global headquarters.

Dig deeper: How to craft an international SEO approach that balances tech, translation and trust

4. Prioritize local link sources

The origin of your links is a primary signal of market authority. During the fan-out phase, AI models look for regional consensus.

This is one of the areas where traditional link building logic starts to break. It’s not just about getting links. Consider where those links originate, along with their authority and contextual relevance.

If your Australian page has backlinks primarily from U.S.-based websites, the model has little evidence that you actually belong in or are relevant to the Australian market. Local sources, including high local trust and location-specific news outlets, change that. Without them, you’re often treated more like a visitor than a participant.

5. Incorporate linguistic and authoritative nuances

LLMs pick up on regional language nuances far more than most teams expect. This is where simple translation starts to break down. Market-specific and colloquial terms, local formatting, and even small legal references signal whether something actually belongs in a market.

Use the terms people in that market actually use — things like “incl. GST,” local identifiers like ABN, and even spelling differences. Without these signals, the page may be technically and linguistically correct, but it won’t register as truly local.

6. Capture the invisible long-tail

As mentioned, LLMs often generate multiple incremental queries during their research phase. These invisible queries may focus on local friction points, such as “How does this product comply with [name of local regulation]?” 

By incorporating local FAQ clusters that address these nuances, you ensure your local URL survives the fan-out check, where your global .com is too generic to be cited in a localized answer.

Dig deeper: Why AI optimization is just long-tail SEO done right

7. Run AI citation audits

Expand your SEO reporting beyond traditional rank tracking. Incorporate AI citation audits by using a local VPN to query the most popular generative engines in your target markets. 

If the AI consistently pulls from your global .com domain for a local query, it’s a clear signal that your local domain lacks the necessary evidence chain. Identify where this market drift is occurring and reinforce those specific pages with more unique local data and infrastructure signals.

The new international standard

Hreflang and traditional technical signals still shape how search engines organize and deliver content, but they don’t determine what AI systems use.

AI models evaluate which sources to use based on evidence of local relevance. Without a distinct presence in each market, they default to the version of your brand they trust most, which often isn’t the one you intended.

Translation alone doesn’t establish that presence. Your content needs to demonstrate that it belongs in the market it’s meant to serve.

Dig deeper: Multilingual and international SEO: 5 mistakes to watch out for

How AI decides what your content means and why it gets you wrong

7 April 2026 at 19:00
How AI decides what your content means and why it gets you wrong

Google once attributed two of Barry Schwartz’s Search Engine Land articles to me — a misclassification at the annotation layer that briefly rewrote authorship in Google’s systems.

For a few days, when you searched for certain Search Engine Land articles Schwartz had written, Google listed me as the author. The articles appeared in my entity’s publication list and were connected to my Knowledge Panel.

What happened illustrates something the SEO industry has almost entirely overlooked: that annotation — not the content itself — is the key to what users see and thus your success.

How Google annotated the page and got the author wrong

Googlebot crawled those pages, found my name prominently displayed below the article (my author bio appeared as the first recognized entity name beneath the content), and the algorithm at the annotation gate added the “Post-It” that classified me as the author with high confidence.

This is the most important point to bear in mind: the bot can misclassify as it annotates, and that annotation defines everything the algorithms do downstream (in recruitment, grounding, display, and won). In this case, the issue was authorship, which isn’t going to kill my business or Schwartz’s.

But imagine it were a product, a price, an attribute, or anything else that matters to the intent of a search query where your brand should be an obvious candidate. When any aspect of your content is inaccurately annotated, you’ve lost the “ranking game” before you even start competing.

Annotation is the single most important gate in taking your brand from discover to won, whatever query, intent, or engine you’re optimizing for.

What annotation is and why it isn’t indexing

Indexing (Gate 4) breaks your content into semantic chunks, converts it, and stores it in a proprietary format. Annotation (Gate 5) then labels those chunks with a confidence-driven “Post-It” classification system.

It’s a pragmatic labeler and attaches classifications to each chunk, describing:

  • What that chunk contains factually.
  • In what circumstances it might be useful.
  • The trustworthiness of the information.

Importantly, it’s mostly unopinionated when labeling facts, context, and trustworthiness. Microsoft’s Fabrice Canel confirmed the principle that the bot tags without judging, and that filtering happens at query time.

What does that mean? The bot annotates neutrally at crawl time, classifying your content without knowing what query will eventually trigger retrieval. 

Annotation carries no intent at all. That insight has completely changed my approach to “crawl and index.”

That clearly shows you that indexing isn’t the ultimate goal. Getting your page indexed is table stakes. Full, correct, and confident annotation is where the action happens: an indexed page that is poorly annotated is invisible to each of the algorithmic trinity.

The annotation system analyzes each chunk using one or more language models, cross-referenced against the web index, the knowledge graph, and the models’ own parametric knowledge. But it analyzes each chunk in the context of the page wrapper.

The page-level topic, entity associations, and intent provide the frame for classifying each chunk. If the page-level understanding is confused (unclear topic, ambiguous entity, mixed intent), every chunk annotation inherits that confusion. Even more importantly, it assigns confidence to every piece of information it adds to the “Post-Its.”

The choices happen downstream: each of the algorithmic trinity (LLMs, search engines, and knowledge graphs) uses the annotation to decide whether to absorb your content at recruitment (Gate 6). Each has different criteria, so you need to assess your own content for its “annotatability” in the context of all three.

And a small but telling detail: Back in 2020, Martin Splitt suggested that Google compares your meta description to its own LLM-generated summary of the page. When they match, the system’s confidence in its page-level understanding increases, and that confidence cascades into better annotation scores for every chunk — one of thousands of tiny signals that accumulate.

Annotation is the key midpoint of the 10-gate pipeline, where the scoreboard turns on. Everything before it is infrastructure: “Can the system access and store your content?” Everything after it is competition.

Annotation is where you simply cannot afford to fail

When you consider what happens at the annotation gate and its depth, links and keywords become the wrong lens entirely. They describe how you tried to influence a ranking system, whereas annotation is the mechanism behind how the algorithmic trinity chooses the content that builds its understanding of what you are.

The frame has to shift. You’re educating algorithms. They behave like children, learning from what you consistently, clearly, and coherently put in front of them. With consistent, corroborated information, they build an accurate understanding.

Given inconsistent or ambiguous signals, they learn incorrectly and then confidently repeat those errors over time. Building confidence in the machine’s understanding of you is the most important variable in this work, whether you call it SEO or AAO.

“Confiance” (confidence) is the signal that drives how systems understand content. Slide from my SEOCamp Lyon 2017 presentation.

In 2026, every AI assistive engine and agent is that same child, operating at a greater scale and with higher stakes than Google ever had. Educating the algorithms isn’t a metaphor. It’s the operational model for everything that follows.

For a more academic perspective, see: “Annotation Cascading: Hierarchical Model Routing, Topical Authority, and Inter-Page Context Propagation in Large-Scale Web Content Classification.”

5 levels of annotation: 24+ dimensions classifying your content at Gate 5

When mapping the annotation dimensions, I identified 24, organized across five functional categories. After presenting this to Canel, his response was: “Oh, there is definitely more.”

Of course there are. This taxonomy is built through observation first, then naming what consistently appears. The [know/guess] distinctions follow the same logic: test hypotheses, eliminate what doesn’t hold up, and keep what remains.

The five functional categories form the foundation of the model. They are simple by design — once you understand the categories, the dimensions follow naturally. There are likely additional dimensions beyond those mapped here.

What follows is the taxonomy: the categories are directionally sound (as confirmed by Canel), while the specific dimension assignments reflect observed behavior and remain incomplete.

Level 1: Gatekeepers (eliminate)

  • Temporal scope, geographic scope, language, and entity resolution. Binary: pass or fail. 
  • If your content fails a gatekeeper (wrong language, wrong geography, or ambiguous entity), it is eliminated from that query’s candidate pool instantly. The other dimensions don’t come into play.

Level 2: Core identity (define)

  • Entities, attributes, relationships, sentiment. 
  • This is where the system decides what your content means:
    • Who is being discussed.
    • What facts are stated.
    • How entities relate.
    • What the tone is. 
  • Without clear core identity annotations, a chunk carries no semantic weight in any downstream gate.

Level 3: Selection filters (route) 

  • Intent category, expertise level, claim structure, and actionability. 
  • These determine which competition pool your content enters.
    • Is this informational or transactional? 
    • Beginner or expert? 
  • Wrong pool placement means competing against content that is a better match for the query, and you’ve lost before recruitment or ranking begins.

Level 4: Confidence multipliers (rank)

  • Verifiability, provenance, corroboration count, specificity, evidence type, controversy level, and consensus alignment. These scale your ranking within the pool. 
  • This is where validated, corroborated, and specific content outranks accurate but unvalidated content. 
  • The multipliers explain why a well-sourced third-party article about you often outperforms your own claims: provenance and corroboration scores are higher.
  • Confidence has a multiplier effect on everything else and is the most powerful of all signals. Full stop.

Level 5: Extraction quality (deploy)

  • Sufficiency, dependency, standalone score, entity salience, and entity role. These determine how your content appears in the final output. 
  • Is this chunk a complete answer, or does it need context? Is your entity the subject, the authority cited, or a passing mention? 
  • Extraction quality determines whether AI quotes you, summarizes you, or ignores you.
Five levels of annotation

Across all five levels, a confidence score is attached to every individual annotation. Not just what the system thinks your content means, but how certain it is.

Clarity drives confidence. Ambiguity kills it.

Canel also confirmed additional dimensions I had not initially mapped: audience suitability, ingestion fidelity, and freshness delta. These sit across the existing categories rather than forming a sixth level.

In 2022, Splitt named three annotation behaviors in a Duda webinar that map directly onto the five-level model. The centerpiece annotation is Level 2 in direct operation: 

  • “We have a thing called the centerpiece annotation,” Splitt confirmed, a classification that identifies which content on the page is the primary subject and routes everything else — supplementary, peripheral, and boilerplate — relative to it. 
  • “There’s a few other annotations” of this type, he noted. 

Annotation runs before recruitment, which means a chunk classified as non-centerpiece carries that verdict into every gate that follows. Boilerplate detection is Level 3: content that appears consistently across pages — headers, footers, navigation, and repeated blocks — enters a different competition pool based on its structural role alone. 

  • “We figure out what looks like boilerplate and then that gets weighted differently,” Splitt said.

Off-topic routing closes the picture. A page classified around a primary topic annotates every chunk relative to that centerpiece, and content peripheral to the primary topic starts its own competition pool at a disadvantage before recruitment begins.

Splitt’s example: a page with 10,000 words on dog food and a thousand on bikes is “probably not good content for bikes.” The system isn’t ignoring the bike content. It’s annotating it as peripheral, and that annotation is the routing decision.

The multiplicative destruction effect: When one near-zero kills everything

In Sydney in 2019, I was at a conference with Gary Illyes and Brent Payne. Illyes explained that Google’s quality assessment across annotation dimensions was multiplicative, not additive. 

Illyes asked us not to film, so I grabbed a beer mat and noted a simple calculation: if you score 0.9 across each of 10 dimensions, 0.9 to the power of 10 is 0.35. You survive at 35% of your original signal. If you score 0.8 across 10 dimensions, you survive at 11%. If one dimension scores close to zero, the multiplication produces a result close to zero, regardless of how well you score on every other dimension.

Payne’s phrasing of the practical implication was better than mine: “Better to be a straight C student than three As and an F.”
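
That beer-mat calculation is easy to reproduce. The sketch below runs the same multiplicative math; the dimension scores are invented purely to illustrate the C-student principle.

```python
# Beer-mat math: multiplicative quality across annotation dimensions.
# All scores are invented for illustration.
from math import prod

print(round(prod([0.9] * 10), 2))   # 0.35 -> 35% of the signal survives
print(round(prod([0.8] * 10), 2))   # 0.11 -> 11% survives

straight_a_with_one_f = [0.95] * 9 + [0.05]  # brilliant on nine dimensions, near-zero on one
straight_c_student = [0.75] * 10             # merely adequate everywhere

print(round(prod(straight_a_with_one_f), 3))  # ~0.032 -> the single F drags everything down
print(round(prod(straight_c_student), 3))     # ~0.056 -> the straight C student wins
```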

The beer mat went into my bag. The principle became central to everything I’ve built since.

The multiplicative destruction effect

The multiplicative destruction effect has a direct consequence for annotation strategy: the C-student principle is your guide. 

  • A brand with consistently adequate signals across all 24+ dimensions outperforms a brand with brilliant signals on most dimensions and a near-zero on one. The near-zero cascades. 
  • A gatekeeper failure (Level 1) eliminates the content entirely. 
  • A core identity failure (Level 2) misclassifies it so badly that high confidence multipliers at Level 4 are applied to the wrong entity. 
  • An extraction quality failure (Level 5) produces a chunk that the system can retrieve but can’t deploy usefully. The failure doesn’t have to be dramatic to be fatal.

At the annotation stage, misclassification, low confidence, or near-zero on one dimension will kill your content and take it out of the race.

Nathan Chalmers, who works at Bing on quality, told me something that puts this in a different light entirely. Bing’s internal quality algorithm, the one making these multiplicative assessments across annotation dimensions, is literally called Darwin.

Natural selection is the explicit model: content with near-zero on any fitness dimension is selected against. The annotations are the fitness test. The multiplicative destruction effect is the selection mechanism.

How annotation routes content to specialist language models

The system doesn’t use one giant language model to classify all content. It routes content to specialized small language models (SLMs): domain-specific models that are cheaper, faster, and paradoxically more accurate than general LLMs for niche content. 

A medical SLM classifies medical content better than GPT-4 would, because it has been trained specifically on medical literature and knows the entities, the relationships, the standard claims, and the red flags in that domain.

What follows is my model of how the routing works, reconstructed from observable behavior and confirmed principles. The existence of specialist models is confirmed. The specific cascade mechanism is my reconstruction.

The routing follows what I call the annotation cascade. The choice of SLM cascades like this:

  • Site level (What kind of site is this?)
  • Refined by category level (What section?)
  • Refined by page level (What specific topic?)
  • Applied at chunk level (What does this paragraph claim?)

Each level narrows the SLM selection, and each level either confirms or overrides the routing from above. This maps directly to the wrapper hierarchy from the fourth piece: the site wrapper, category wrapper, and page wrapper each provide context that influences which specialist model the system selects.

How annotation routes content to SLMs

The system deploys three types of SLM simultaneously for each topic. This is my model, derived from the behavior I have observed: annotation errors cluster into patterns that suggest three distinct classification axes. 

  • The subject SLM classifies by subject matter — what is this about? — routing content into the right topical domain. 
  • The entity SLM resolves entities and assesses centrality and authority: who are the key players, is this entity the subject, an authority cited, or a passing mention? 
  • The concept SLM maps claims to established concepts and evaluates novelty, checking whether what the content asserts aligns with consensus or contradicts it.

When all three return high confidence on the same entity for the same content, annotation cost is minimal, and the confidence score is very high. When they disagree (i.e., the subject SLM says “marketing,” but the entity SLM can’t resolve the entity, and the concept SLM flags the claims as novel), confidence drops, and the system falls back to a more general, less accurate model.

The key insight? Generalist LLM annotation is the failure mode. The system wants to use a specialist. It defaults to a generalist only when it can’t route to a specialist. Generalist annotation produces lower confidence across all dimensions.

The practical implication 

Content that’s category-clear within its first 100 words, uses standard industry terminology, follows structural conventions for its content type, and references well-known entities in its domain triggers SLM routing. 

Content that’s topically ambiguous or terminologically creative gets the generalist. Lower confidence propagates through every downstream gate.

Now, this may not be the exact way the SLMs are applied as a triad (and it might not even be a trio). However, two things strike me:

  • Observed outputs act that way.
  • If it doesn’t function this way already, it logically should.

First-impression persistence: Why the initial annotation is the hardest to correct

Here is something I’ve observed over years of tracking annotation behavior. It aligns with a principle Canel confirmed explicitly for URL status changes (404s and 301 redirects): the system’s initial classification tends to stick.

When the bot first crawls a page, it selects an SLM, runs the annotation, assigns confidence scores, and saves the classification. The next time it crawls the same page, it logically starts with the previously assigned model and annotations. I call this first-impression persistence. 

The initial annotation is the baseline against which all subsequent signals are measured. The system doesn’t re-evaluate from scratch. It checks whether the new crawl is consistent with the existing classification, and if it is, the classification is reinforced.

Canel confirmed a related mechanism: when a URL returns a 404 or is redirected with a 301, the system allows a grace period (very roughly a week for a page, and between one and three months for content, in my observation) during which it assumes the change might revert. After the grace period, the new state becomes persistent. I believe the same principle applies to content classification: a window of fluidity after first publication, then crystallization.

I have direct evidence for the correction side from the evolution of my own terminologies. When I first described the algorithmic trinity, I used the phrase “knowledge graphs, large language models, and web index.” Google, ChatGPT, and Perplexity all picked up on the new term and defined it correctly.

A month later, I changed the last one to “search engine” because it occurred to me that the web index is what all three systems feed off, not just the search system itself. At the point of correction, I had published roughly 10 articles using the original terminology. 

I went back and invested the time to change every single one, updating every reference, leaving zero traces. A month later, AI assistive engines were consistently using “search engine” in place of “web index.”

The lesson is that change is possible, but you need to be thorough: any residual contradictory signal (one old article, one unchanged social post, or one cached version) maintains inertia proportionally. Thoroughness, rather than time, is the unlock.

First-impression persistence

A rebrand, career pivot, or repositioning is the practical example. You can change the AI model’s understanding and representation of your corporate or personal brand, but it requires thoroughly and consistently pivoting your digital footprint to the new reality.

In my experience, the pivot can turn “on a sixpence,” within a week. I’ve done this with my podcast several times. Facebook achieved the ultimate rebrand from an algorithmic perspective when it changed its name to Meta.

The practical implication

Get your annotation right before you publish. The first crawl sets the baseline. A page published prematurely (with an unclear topic or ambiguous entity signals) crystallizes into a low-confidence annotation, and changing it later requires significantly more effort than getting it right the first time.

Annotation-time grounding: The bot cross-references three sources while classifying your content

The system doesn’t annotate in a vacuum. When the bot classifies your content at Gate 5, it cross-references against at least three sources simultaneously. This is my model of the mechanism. The observable effect — that annotation confidence correlates with entity presence across multiple systems — is confirmed from our tracking data.

The bot carries prioritized access to the web index during crawling, checking your content against what it already knows: 

  • Who links to you.
  • What context those links provide.
  • How your claims relate to claims on other pages. 

Against the knowledge graph, it checks annotated entities during classification — an entity already in the graph with high confidence means annotation inherits that confidence, while absence starts from a much lower baseline. 

The SLM’s own parametric knowledge provides the third cross-reference: each SLM compares encountered claims against its training data, granting higher confidence to claims that align, flagging contradictions, and giving lower confidence to novel claims until corroboration accumulates.

This means annotation quality isn’t just about how well your content is written. It’s about how well your entity is already represented across all three of the algorithmic trinity. An entity with strong knowledge graph presence, authoritative web index links, and consistent SLM-domain representation gets higher annotation confidence on new content automatically. 

The flywheel: better presence leads to better annotation, which leads to better recruitment, which strengthens presence, which in turn improves future annotation.

Once again, better to have an average presence in all three than to have a dominant presence in two and no presence in one.

The annotation flywheel

And this is why knowledge graph optimization (what I’ve been advocating for over a decade) isn’t separate from content optimization. They are the same pipeline. Your knowledge graph presence directly improves how accurately, verbosely, and confidently the system annotates every new piece of content you publish.

If you’re thinking “Knowledge graph? That’s just Google,” think again.

In November 2025, Andrea Volpini intercepted ChatGPT’s internal data streams and found an operational entity layer running beneath every conversation: structured entity resolution connected to what amounts to a product graph mirroring Google Shopping feeds. 

OpenAI is building its own knowledge graph inside the LLM. My bet is that they will externalize it, for several reasons: a knowledge graph inside an LLM doesn’t scale; an LLM will self-confirm, so the value is limited; and a standalone knowledge graph can be updated in real time without retraining the model, which matters because it’s only useful at scale when it stays current.

The algorithmic trinity isn’t a Google phenomenon. It’s the architectural pattern every AI assistive engine and agent converges on, because you can’t generate reliable recommendations without a concept graph, structured entity data, and up-to-date search results to ground them.

Why Google and Bing annotate differently from engines that rent their index

Google and Bing own their crawling infrastructure, indexes, and knowledge graphs. They can afford grace periods, schedule rechecks, and maintain temporal state for URLs and entities over months.

OpenAI, Perplexity, and every engine that rents index access from Google or Bing operate on a fundamentally different model. They have two speeds: 

  • A slow Boolean gate (Does this content exist in the index I have access to?)
  • A fast display layer (What does the content say right now when I fetch it for grounding?)

The Boolean gate inherits Google’s and Bing’s annotations. Whether your content appears at all depends on whether it was recruited from the index those engines draw from, and that recruitment depends on annotation and selection decisions made by the algorithmic trinity. But what these engines show when they cite you is fetched in real time.

The practical implication

For Google and Bing, you’re optimizing for annotation quality with the benefit of grace periods and gradual reclassification. For engines that don’t own their index, the Boolean presence is inherited from the rented index and is slow to change, but the surface-level display changes every time they re-fetch.

That means what you are seeing in the results is not a direct measure of your annotation quality. It’s a snapshot of your page at the moment of fetch, and those two things may have nothing to do with each other.

How to optimize for annotation quality: The six practical principles

The SEO industry has spent two decades optimizing for search and assistive results — what happens after the system has already decided what your content means. We should be optimizing for annotation. 

If the annotation is wrong, everything downstream suffers. When the annotation is accurate, verbose, and confident, your content has a significant advantage in recruitment, grounding, display, and, ultimately, won.

1. Trigger SLM routing

Make your topic category obvious within the first 100 words. Use standard industry terminology. Follow structural conventions. Reference well-known entities. The goal: specialist model, not generalist.

2. Write for all three SLMs

Clear signals for subject (what is this about?), entity (who is the authority?), and concept (what established ideas does this connect to?). Ambiguity on any axis reduces confidence.

3. Get it right before publishing

First-impression persistence means the initial annotation is the hardest to change. Publish only when topic, entity signals, and claims are unambiguous.

4. Build the flywheel

Knowledge graph presence, web index centrality, LLM parameter strengthening, and correct SLM-domain representation all feed annotation confidence for new content. Invest in entity foundation, and every future piece benefits from inherited credibility.

5. Eliminate noise when correcting

Change every reference. Leave zero contradictory signals. Noise maintains inertia proportionally.

6. Audit for annotation, not just indexing

A page can be indexed and still misannotated. If the AI response is wrong about you, the problem is almost certainly at Gate 5, not Gate 8.

How to optimize for annotation quality

Annotation is the gate where most brands silently lose. The SEO industry doesn’t yet have a vocabulary for it. That needs to change, because the gap between brands that get annotation right and brands that don’t is the gap between consistent AI visibility and permanent algorithmic obscurity.

Why annotation matters so much and why it should be your main focus

You’ve done everything within your power to create the best possible content that maps to the intent of your ideal customer profile. You have methodically optimized your digital footprint. Your data feeds every entry mode simultaneously (pull, push discovery, push data, MCP, and ambient), so they all draw from the same clean, consistent source.

So, content about your brand has passed through the DSCRI infrastructure phase, survived the rendering and conversion fidelity boundaries, and arrived in the index (Gate 4) intact. Phew!

Now it gets classified. Annotation is the last moment in the pipeline where you have the field to yourself. Every decision in DSCRI was absolute: you vs. the machine, with no competitor in the frame. 

Annotation is still absolute. The system classifies your content based on your signals alone, independently of what any competitor has done. Nobody else’s data changes how your entity is annotated.

But this is the last time you aren’t competing. From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in the competitive race you have to win.

That means: 

  • Get annotation right, and you start ahead, with confidence that compounds through every downstream gate in RGDW. 
  • Get it wrong, and the multiplicative destruction effect does its work — a near-zero on one annotation dimension cascades through recruitment, grounding, display, and won. No amount of excellent content, structural signals, or entry-mode advantage recovers it.

Warning: First-impression persistence (remember, the first time you are annotated is the baseline) means you don’t get a clean retry. Changing the baseline requires thoroughness, time, and more effort than getting it right on the first crawl.

Annotation isn’t the gate that most brands focus on. It’s the gate where most brands silently lose.

This is the eighth piece in my AI authority series. 

5 priorities for lead gen in AI-driven advertising

7 April 2026 at 18:00

Many of today’s PPC tools were designed to be easily accessible to ecommerce. That doesn’t mean lead gen can’t take advantage of them, but it does mean more intentional application is required.

Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply — but not always in the same way.

Here are the priorities that matter most for succeeding with lead gen using AI.

Disclosure: I’m a Microsoft employee. While this guidance is platform-agnostic, I’ll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.

1. Fix your conversion data first

This is the single most important thing you can do as AI becomes more embedded in media buying.

Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it’s reasonable to ask whether your data is still telling an accurate story.

Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.

In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:

  • Confirm conversions are firing consistently.
  • Regularly review conversion goal diagnostics.
  • Validate that lead status updates and downstream signals are actually flowing back.

If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.
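
A lightweight way to sanity-check that feedback loop is to diff your CRM export against the conversions you send back to the platform. The sketch below is illustrative only; the file names and column headers are hypothetical and should be adapted to your own exports.

```python
# Compare leads recorded in the CRM against the conversions sent back to the ad platform.
# File names and column headers are hypothetical; adapt them to your own exports.

import csv

def load_ids(path: str, id_column: str) -> set[str]:
    with open(path, newline="") as f:
        return {row[id_column] for row in csv.DictReader(f)}

crm_leads = load_ids("crm_leads_export.csv", "lead_id")
uploaded_conversions = load_ids("platform_offline_conversions.csv", "lead_id")

missing = crm_leads - uploaded_conversions
print(f"{len(missing)} leads never made it back to the platform as conversions")
```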

Dig deeper: How to make automation work for lead gen PPC

2. Make landing pages easy to ingest and easy to understand

Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.

Your landing pages should make it clear:

  • What action you want the user to take.
  • What happens after action is taken.
  • Which conversions matter most.

Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that the outcomes your pages promise are inconsistent with what actually happens, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.

Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.

A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.

You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you’re in a good place. If it doesn’t, that’s a signal to refine your content.
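
If you want to make that test repeatable, a small sketch like the one below can pull a page’s visible text and assemble the prompt for you. The URL is a placeholder, and it assumes the requests and beautifulsoup4 packages are installed.

```python
# Build the "describe my business" audit prompt from a page's visible text, then
# paste the output into any AI assistant. Assumes the requests and beautifulsoup4 packages.

import requests
from bs4 import BeautifulSoup

def build_audit_prompt(url: str, max_chars: int = 6000) -> str:
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
    return (
        "Based only on the page content below, describe what this business does, "
        "who it serves, and which services it offers:\n\n" + text[:max_chars]
    )

print(build_audit_prompt("https://www.example.com"))  # replace with your own landing page
```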

Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling it.

Dig deeper: AI tools for PPC, AI search, and social campaigns: What’s worth using now

3. Budget across the entire funnel

Lead gen has always struggled with long conversion cycles. That challenge doesn’t go away, and in some ways, it becomes more pronounced.

AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.

That means:

  • Budgeting intentionally across awareness, consideration, and conversion.
  • Applying the right metrics at each stage.
  • Looking beyond traffic as the primary success indicator.

In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.

Dig deeper: Lead gen PPC: How to optimize for conversions and drive results

4. Clean up your feeds and map data

You may not think you have a “feed” in your lead gen setup, but that absence can put you at a disadvantage.

Feeds help AI systems understand your business structure, services, and site architecture. Even if you don’t have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.

Example of a feed for lead gen

Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
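
A minimal hygiene check might look like the sketch below. The column names and file name are illustrative; the required fields for a real feed come from the platform’s own spec.

```python
# Minimal feed hygiene check. The required column names are illustrative; the real
# set depends on the ad platform's feed specification.

import csv

REQUIRED_COLUMNS = ["Business name", "Service category", "Description", "Final URL", "Image URL"]

def check_feed(path: str) -> list[str]:
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
        if missing:
            problems.append(f"Missing columns: {missing}")
        for i, row in enumerate(reader, start=2):
            empties = [c for c in REQUIRED_COLUMNS if c in row and not (row[c] or "").strip()]
            if empties:
                problems.append(f"Row {i}: empty values in {empties}")
    return problems

for issue in check_feed("lead_gen_feed.csv"):  # hypothetical feed exported as CSV
    print(issue)
```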

On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.

Account for potential AI-driven inflation in reporting, whether you’re looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.

5. Pressure-test your creative for clarity

Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.

If your value proposition requires three headlines, or a headline plus a description, to make sense, that’s a risk.

Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:

  • What you do
  • Who you help
  • Why it matters

If that clarity isn’t there, AI-driven placements can quickly become confusing.
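
One rough way to pressure-test this is a quick pass over your existing headline assets, as sketched below. The character limit reflects the common responsive search ad headline cap, and the headline and term lists are purely illustrative.

```python
# Flag headlines that can't stand alone: over a single headline slot's character cap,
# or missing a clear offering term. Limits and term lists are illustrative.

HEADLINE_CHAR_LIMIT = 30  # common cap for a responsive search ad headline
OFFERING_TERMS = ["payroll", "hvac repair", "tax prep"]  # replace with your own services

headlines = [
    "Payroll Made Simple for SMBs",
    "Trusted by Teams Everywhere",         # says nothing about what you do
    "Get a Free HVAC Repair Quote Today",  # clear, but too long for one slot
]

for h in headlines:
    issues = []
    if len(h) > HEADLINE_CHAR_LIMIT:
        issues.append(f"{len(h)} chars, over the {HEADLINE_CHAR_LIMIT}-char slot")
    if not any(term in h.lower() for term in OFFERING_TERMS):
        issues.append("no clear offering term")
    print(f"{h!r}: {'OK' if not issues else '; '.join(issues)}")
```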

Dig deeper: Why creative, not bidding, is limiting PPC performance

The fundamentals that still move the needle

Lead gen today doesn’t need to be complicated.

Most of the actions that matter are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution shifts and how much weight systems place on different signals.

The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.

If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business — and that’s where sustainable performance comes from.
