Ironclad Super Bowl 60 picks you can bet your Bitcoin on! | Opinion

With the joyous arrival of Super Bowl LX – which I’m told translates from Roman to “Super Bowl 60” – readers the world over are again turning to me for sure-thing prognostications and sound ways to wager.

As most of you know, I’m a 12-time Super Bowl watcher with eight rings, each from atop frosted cupcakes purchased for the big-game party. My NFL predictions, to the best of my knowledge, have never been wrong, as I do not believe in criticism and delete all angry emails without reading them.

My Super Bowl picks have been called “probably safer than Bitcoin, maybe” and “a small tick above setting money on fire.”

So with that, let’s get to my ironclad forecasts as the Seattle Superb-hawks take on the New England Patriots in America’s favorite mix of capitalism and violence – the Super Bowl.

Large men will collide in pursuit of an oblong ball that isn't really a ball

The Seattle Seahawks defeated the Los Angeles Rams on Jan. 25, 2026, in Seattle.

I have it on good authority that this year’s game at Levi’s Stadium in Santa Clara, California, will involve literally dozens of large human men racing around a field after a ball that – in a real twist from one’s standard sense of “ball” – is oblong, more an air-filled egg than a spherical bouncy thing.

Some betting lines would have you believe this is the year the Super Bowl “football” is replaced with a proper spherical model, but my money is on the ball remaining a prolate spheroid. Bank on it.

A Budweiser Clydesdale will actually show up at your house

The Budweiser Clydesdales run around the warning track before the opening day game between the St. Louis Cardinals and the Minnesota Twins at Busch Stadium on March 27, 2025.

Every Super Bowl seems to involve a beer commercial from Budweiser featuring Clydesdales, a type of horse that, to the best of my knowledge, knows literally nothing about brewing beer.

These commercials tend to grab at the heart, either via endearing visual scenes or the music the Clydesdales tromp along to. Having exhausted all charming horse-related scenarios, this is the year I predict Budweiser will dispatch roughly 135 million Clydesdales, one to each U.S. household, each accompanied by a band playing music that will make everyone cry.

It will still have nothing to do with beer, but everyone will get drunk, so the horses don’t feel they’ve wasted their time.

Someone at your Super Bowl party will cite a stupid fact about salsa

This is one of the best bets you can place for this or any Super Bowl. At some point during the game, regardless of what is happening or what has happened and for no logical reason, one attendee will blurt out, without anyone asking, that Americans buy more salsa than ketchup each year.

This has been a fact for more than three decades, but that won’t stop your neighbor Phil from acting like he’s the king of breaking news. SHUT UP, PHIL! THIS WASN’T INTERESTING IN THE EARLY 1990s AND IT AIN’T INTERESTING NOW!

People will have opinions about the Super Bowl halftime show

Bad Bunny speaks at a news conference on Feb. 5, 2026, in San Francisco about his Super Bowl halftime show three days later.

This is where I’m putting all my money this year. The Super Bowl halftime show was actually invented in 1967 so football fans would have an additional dumb thing to argue about. This year, I expect the stupidity to reach a fever pitch. Take the over.

Bad Bunny, one of the biggest pop stars on the planet, will perform his Spanish-language hits at the half. A good parlay would be how many times someone complains, “I can’t understand what he’s saying!” or the number of people who learn, for the first time, that people like Bad Bunny, who were born in Puerto Rico, are U.S. citizens. Again, take the over.

President Trump will post something stupid about the game

President Donald Trump speaks with reporters on board Air Force One en route to Palm Beach, Florida, on Feb. 6, 2026.

If there’s one thing President Donald Trump hates, it’s not being the center of attention, so anytime something significant like the Super Bowl happens, he finds a way to draw the spotlight to himself, usually by saying or doing something horrible.

This year will be no different, and might be far worse than usual. Expect that by the end of halftime, the leader of the free world will have posted something uncouth about the game itself (TOO WOKE!) or about Bad Bunny’s performance (RADICAL LEFTIST PERFORMANCE! SAD!).

Kid Rock's alternative halftime show will be viewed by tens of people

Singer Kid Rock testifies on the cost of concert tickets before a Senate committee in Washington, DC, on Jan. 28, 2026.

The MAGA response to Bad Bunny’s utterly noncontroversial halftime show is an alternative halftime event featuring Kid Rock and some other people who can apparently sing, or something. While many doubt anyone will watch this sad display of xenophobia, I’m predicting an upset. There will be literally tens of people, possibly reaching into the low dozens, watching this streaming nonevent.

A favorable parlay here is that technical difficulties will make it nearly impossible to hear that one song you’ve never heard by that singer you’ve never seen before.

And that’s it. Enjoy your winnings, folks. And enjoy the big game, which I’m betting will be won by one of the two teams competing, by a score that is slightly higher than the other team’s score.

Follow USA TODAY columnist Rex Huppke on Bluesky at @rexhuppke.bsky.social and on Facebook at facebook.com/RexIsAJerk

This article originally appeared on USA TODAY: Super Bowl 60 is here and I have guaranteed predictions | Opinion

Samsung Galaxy S25 is still a great buy over the S26 for many

Samsung is preparing to launch the Galaxy S26 series in late February 2026. In the meantime, if you are shopping for a premium phone, the Galaxy S25 is still a very good option, and some buyers may even find it a better choice than the new Galaxy S26. Here’s why.

Here’s why the Samsung Galaxy S25 is still a great buy over the S26

The main reason to choose the Galaxy S25 is the processor. The phone ships with the Snapdragon 8 Elite chip worldwide, and Snapdragon chips are fast, smooth, and easy on the battery.

The Galaxy S26, on the other hand, will reportedly use Exynos chips in Asia, Africa, and Europe. Exynos chips have often been slower and harder on the battery. Samsung claims better performance this time, but if you want a phone already proven to be fast and long-lasting, the Galaxy S25 is the safer choice.

Also, the Galaxy S25 is cheaper than the Galaxy S26. You can get a premium phone without paying a lot of money, as it is last year’s model, and ahead of the new Galaxy S series launch, Samsung is offering great deals on Galaxy S25 series phones.

The Galaxy S25’s cameras are very good and take nice photos and videos, just like the S26’s. In fact, the upcoming Galaxy S26 is reported to feature the same camera specs as the Galaxy S25.

As for design, the two phones look quite similar apart from the rear camera island; the Galaxy S26 series is expected to adopt a camera design resembling Samsung’s latest foldables. For most people, the Galaxy S25 has almost everything they need in a premium phone.

The Galaxy S25 is still a smart buy. It has a fast Snapdragon processor, good cameras, a familiar design, and a lower price. The Galaxy S26 may have some new features, but the S25 is better for people who want a reliable, easy-to-use, and affordable phone.

The post Samsung Galaxy S25 is still a great buy over the S26 for many appeared first on Sammy Fans.

Samsung Galaxy S26 Ultra is coming but is it enough? Check wishlist

I have been in the Samsung club for a long time. Every Ultra since the S22, every Foldable from the Fold4 up to the current 7, I have owned them all. I use these things for everything from long work trips to late-night report preparation. I love the tech, but I am getting tired. Spending $1,000 on a phone should buy you big changes. Minor yearly upgrades just don’t justify that high price tag.

With Unpacked coming up on Feb 25, the leaks are all over the place. Faster 60W charging, rounder corners, a new privacy screen… okay, cool. But it feels like more of the same. If Samsung wants me to hit that “buy” button without thinking twice, the company needs to actually listen to what users like me are asking for.

1. Give us a real battery

Samsung should stop with the 5,000mAh cap. It’s been the same for years – seven in a row now. Between the massive screens, multitasking, and all the AI stuff running in the background, I start looking for a charger by 7 PM. Chinese smartphone brands are already hitting 6,000 or 7,000mAh by using denser silicon-carbon cells. Why is Samsung still stuck in 2022? I’ll take the slightly faster 60W charging, sure, but I’d trade it for a bigger battery that actually lasts two full days. There’s even a rumor that Apple’s 2026 iPhone will include a 5,500mAh battery.

2. A “Pro” size for normal humans

The 6.9-inch screen is amazing for movies, but it’s a total brick in every pocket. I wish Samsung would do what Apple does: give us a smaller 6.5 or 6.7-inch Ultra with the same cameras and chip, more like Pro and Pro Max. Honestly? I barely use the S Pen anymore, except for drawing. I’d gladly trade away all those extra features if it meant I could finally use my phone comfortably with just one hand.


3. Better sensors > More Megapixels

I am done with the 200MP gimmick. Give me a 1-inch sensor instead. I want natural bokeh and less noise in the dark, not that weird “AI-smoothed” look we keep getting. If Xiaomi can fit a massive sensor in a flagship, why can’t the industry leader? Also, let us share custom camera profiles.

I’d rather download a pro’s “night mode” settings than adjust everything myself. Manually changing every setting every time is just too much work.

4. Magnets

The rumor that the S26 won’t get built-in Qi2 magnets is just… disappointing. Apple’s MagSafe changed the game for car mounts and stands. Having to buy a special case just to use magnets on a $1,300 flagship in 2026 feels idiotic. Just build the coils into the body and get it over with. You can do this, Sammy.

5. Colors

The “Titanium” grey/black stuff is classy, I get it, but it’s boring (for me). Bring back the fun. Give me a bright metallic orange like the S22 Ultra had, or a deep purple that is actually beautiful.

6. Overheating

The new Snapdragon chips are fast, but the Ultra still throttles and gets hot if you game or shoot 8K for more than 15 or 20 minutes. We need better cooling. If Samsung is making the phone slimmer this year, I really hope the company didn’t sacrifice the vapor chamber to do it.

Look, as a Samsung fan, I love the S Pen, and the displays are second to none. But I want to feel that “wow” factor again. If the Galaxy S26 Ultra is just a slightly thinner S25, I might actually skip a generation for the first time.

When you are paying a premium price, small tweaks just don’t cut it anymore. Are you in or out this year? Drop your thoughts on @thesammyfans X Account, and let’s get into it.

The post Samsung Galaxy S26 Ultra is coming but is it enough? Check wishlist appeared first on Sammy Fans.

Apple is finally playing Samsung’s game and they might be winning

I am a Samsung guy. I love the S Pen, I love the ridiculous screens, and I usually think Apple is about three years late to every hardware party. But the 2026 leaks are starting to look… different. If the iPhone 18 Pro Max actually hits the marks we are hearing about, Samsung might face a real fight with Apple in 2026. Let’s talk about the stuff that actually matters.

The Battery Gap:

We have all been making fun of Apple’s “efficient” (tiny) batteries for a decade, but the tables are turning. Rumors suggest the iPhone 18 Pro Max is jumping to a 5,200mAh cell. Meanwhile, the S26 Ultra looks like it’s sticking to 5,000mAh for what feels like the 10th year (actually 7th) in a row.

I know capacity isn’t everything, but when you pair that bigger battery with Apple’s new 2nm A20 chipset, the gains are going to be stupid. We are talking “forgot to charge it last night and it doesn’t matter” levels of battery life.

Samsung is supposedly increasing the S26 Ultra up to 60W charging, which is great, but I’d honestly rather have a phone that just stays alive longer in the first place. You want it too, right?

Specs vs. Real Life Cameras:

Samsung always wins the megapixel war – 200MP is a cool number to put on a box – but Apple is finally joining the variable aperture game for 2026. This is the tech that actually makes photos look like they came from a real camera. The advantage: better natural blur, better low-light shots, and none of that weird, fake-smooth look that Samsung’s AI sometimes overdoes.

Also, can we talk about the under-display Face ID? Apple is likely to add just a tiny punch-hole in 2026, as per rumors. Samsung has been doing under-display cameras on the Fold for a while, but let’s be honest: it usually looks like a screen door to me. If Apple manages to hide the sensors without ruining the display quality, the S26 Ultra is going to look a bit dated with its centered hole-punch.

Software Update Problems:

Samsung made a big deal about seven years of updates, and then Apple is like “hold my beer” and pushed a software update for an iPhone older than 11 years. For most of us, that doesn’t matter because we trade in every two or three years anyway. But for resale value? It’s huge. It makes an old iPhone worth way more than an old Galaxy.

More:

Apple’s in-house C2 modem is basically the chip that handles your phone’s internet connection (like 5G). Apple is making this one itself instead of buying it from Qualcomm (the company that usually supplies these chips for iPhones). This C2 modem should make the 5G faster and more stable. It will support super-quick mmWave 5G (the ultra-fast type you get in big cities, stadiums, or busy spots).

Rumors say it will also use less battery power and work more smoothly with the rest of the iPhone’s parts because Apple designed everything together. This could mean better signal in tough spots, quicker downloads, and your phone lasting longer on the same charge compared to older Qualcomm modems.

It’s like Apple finally building its own “engine” for the internet instead of using someone else’s, and early leaks suggest it could be more efficient and reliable.

My Take:

I am not switching to iOS anytime soon. I need my customization, but Samsung can’t just trick me with “big screen and a stylus” anymore. The S26 Ultra needs to be more than a spec bump. If Apple delivers a 2nm chip, a bigger battery, and a camera that actually rivals a DSLR, the “iPhone is boring” era might officially be over.

What do you think? Is 60W charging enough to keep you on Team Galaxy, or is that 5,200mAh iPhone actually tempting you? Drop a comment on the @thesammyfans X account.

The post Apple is finally playing Samsung’s game and they might be winning appeared first on Sammy Fans.

Blue Jay girls basketball back in the win column

Feb. 6—JAMESTOWN — Bismarck Legacy was up 10 points on the Blue Jay girls basketball team in the second half.

The Sabers couldn't hold onto the lead.

"We put up a strong team win, showing resilience and composure after falling behind .... in the second half," said JHS head coach Andy Skunberg.

The Blue Jays not only came back but ended up defeating Legacy 61-54 Friday night. Final stats were not available when The Jamestown Sun went to press.

Jamestown is now 6-6 in the West Region standings. The next game for the Jays is scheduled for Feb. 10 against Minot. Tipoff is set for 7 p.m. at Jerry Meyer Arena.

"Facing a well-coached Legacy squad that played with relentless effort, we responded with grit on both ends of the floor," Skunberg said. "The performance was one as coaches we are really proud of, while also serving as a reminder that continued improvement is needed in all areas as the team prepares for a challenging week ahead."

Column: The hardest part of the Irish goodbye is the 'goodbye' part

Feb. 6—The Irish goodbye is the most effective strategy I've found for exiting a social situation.

For those who are unaware, the explanation is simple — you just leave. It's that easy.

Maybe it's ducking out the side door of a building where everyone's gathered near the front, maybe it's dialing up an Uber to secretly take you away from an afterparty, maybe it's just deciding enough is enough and it's time to leave.

It works wonders, but there is one potential hazard. There's always the danger of that one person seeing you leave, who then announces your departure to everyone else and thus spoils the perfect escape plan.

In this case, I am both people.

By the time you read this, my time as the Aiken Standard sports editor will be over. After more than 11 years on staff full-time, not to mention my years as a part-timer before that, it's finally time to step aside. And I owe entirely too many thanks to too many people to silently walk out the side door one last time.

It was not a decision that was made lightly. When I took this job in December of 2014, I swore this was the only place I'd ever cover local sports. That's not because I thought I wouldn't do a good job elsewhere, but more so because I knew there was no other place in the world I'd be more invested in my coverage than here in Aiken.

I grew up here. I went to school here. I played sports here. I hoped my coaches wouldn't report my scores to this very paper. I grabbed the sports section every day when I came home from school, noted aloud how the paper favored our rival and hated us and mustered every bit of creative energy I had to call it the Sub-Standard. As it turns out, those ideas are both timeless.

I remember coming home from football games on Friday nights and rushing to turn on one of the local news channels to see how all of the other games went — back when you couldn't just get real-time updates live on your phone — and then reading the recaps in the next morning's paper. As it turns out, guys like Rob Gantt and Kenton Makin were teaching me how to do the job before I even knew it.

After tooling around on various hot-take blogs, thinking I was going to be the next Rick Reilly, I answered an ad in the paper heading into the 2009 season because the Standard was in dire need of help — longtime readers may remember that was the season we sent teams of two out to every single local football game, netbooks in hand, to provide live online stat updates from kickoff to the final whistle.

I couldn't have gotten luckier with my assignment from the great Cam Huffman. I was paired with the late legend Rob Novit, my first mentor in this business that became my career, and our team to cover was Williston-Elko. He drove and took photos, and I kept the stats and wrote the stories, most of them from his passenger seat as we drove back much earlier than the others after a running clock in the second half because the Blue Devils were so dominant during a 14-0 season that culminated in the Class A, Division II state championship.

Maybe it's only fitting that my 16th season — I missed 2010 due to illness — covering high school football ended with Strom Thurmond bringing home the Class AA crown.

So much happened in between. Coaches became friends. Friends and classmates became coaches. Kids I covered as high-schoolers became friends as adults. More recently, in a jarring "this is 40" wake-up call, my friends' kids started showing up on varsity rosters.

Some of the best stories to write were about our state champions — shoutout to the North Augusta girls' basketball team for giving me plenty of practice on that front. I've been lucky enough to have written about state champions, national champions, world champions, current professionals and future ones, games with 30 spectators and ones with closer to 100,000.

One of the biggest perks for a sports writer working in this part of the world is the opportunity to cover the Masters Tournament, and the adrenaline rush that comes from writing that final-round story on Sunday in the Augusta National media building is hard to beat — but the same can be said for absolute madhouses like the state semifinals between the Ridge Spring-Monetta and Wagener-Salley football teams in 2019, the Barnwell and Silver Bluff football teams in 2021 and the Aiken and South Aiken volleyball teams in 2022. And that's just a small sample.

My favorite stories to write, though, were the hundreds of college signing stories I've written over the years. I think it's fair to say I've seen more dreams come true than a ticket-taker at Disney World, and that's the entire reason to get into this business on the local level. Sure, the high-level events carry a different type of prestige, but getting to document someone from your community that may be the first from their family to go to college, one carrying on a family legacy, maybe thought they'd never go to college, maybe thought they'd never get to play again — that carries a hell of a lot more weight than someone we don't know winning a title that happened to be awarded in our area.

I owe countless thanks to countless people. News outlets only stay in business, and they only continue to employ their writers, if people read. I cannot say enough how much I appreciate every single eyeball.

To Cam Huffman, Noah Feit, Jeremy Timmerman, Eric Russell and Nick Terry, it was an honor and a privilege to sit alongside you in that newsroom working all of those late nights that never really felt like work because we got along so well and shared so many laughs.

To Melissa Hanna, I could not have asked for a more welcoming first boss in the newsroom. To Larry Taylor, there's no one I'd rather stress out with late on a Saturday when everything decides to go wrong at the worst possible time.

To John Boyette, there's no one I'd rather work for — period, paragraph, end of story. I look forward to seeing you again at Palmetto Golf Club, at Whiskey Alley and at Nacho Mama's — preferably sooner rather than later.

To Mike Dawson, most of this is your fault. Keeping that column I wrote bashing Notre Dame football 20 years ago folded up in your glovebox to show to your friends only encouraged me to keep doing this. Go figure, I got it from the best storyteller I know.

Anyone who ever trusted me to tell your story, called in a score, texted in stats, emailed a tip, tagged a tweet, forwarded a Facebook post, shared a story elsewhere and, yes, even reached out to complain — thank you. Our coverage area is a big one, especially during the years I was operating as a one-man shop and greatly struggled to figure out how to divvy up my time between three counties' worth of high schools and a Division II university, so every little bit of information I received to help me do my job better and shine a brighter light on our local student-athletes was greatly appreciated — even if I didn't sound so enthusiastic on some of those less-than-happy calls.

I'm not going anywhere. I'll still be here in my hometown, just not as a full-time sports writer anymore. I'm looking forward to enjoying nights, weekends and holidays with my wife and our dogs, and I'll always be glad to see our readers out in public.

Feel free to say hello. And I promise I won't leave without saying goodbye.

Why content that ranks can still fail AI retrieval

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations. 

In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.

This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals. 

AI systems don’t. 

They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.

When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings. 

At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.

The visibility gap: Ranking vs. retrieval

Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.

AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.

This difference is where the visibility gap forms. 

A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.

Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.

Structural failure 1: When content never reaches AI

One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.

This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.

In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.

How to tell if your content is returned in the initial HTML

The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.

Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.

To do this, open a terminal (Command Prompt on Windows) and issue a plain curl request for the page.
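A minimal sketch of such a request, assuming curl is available; https://example.com/ is a placeholder for your own URL, "GPTBot" is one commonly cited AI user agent, and "your key phrase" stands in for a sentence from your primary content:

```shell
# Fetch only the initial HTML response, the way a non-rendering crawler does.
# https://example.com/ is a placeholder; substitute the page you want to test.
curl -s -A "GPTBot" "https://example.com/" -o initial.html

# If a phrase from your primary content is missing from this file, it is
# invisible to crawlers that never execute JavaScript.
grep -qi "your key phrase" initial.html \
  && echo "present in initial HTML" \
  || echo "NOT present in initial HTML"
```

If the check reports the phrase missing while the rendered page clearly shows it, that content is being injected client-side after the initial response.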

Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.

From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.

This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.

If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.

Why heavy code still hurts retrieval, even when content is present

Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.

AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.

This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.
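As a rough illustration of signal-to-noise, you can compare the bytes of readable text against the total markup. This is only a sketch: the sed tag-stripping is a crude approximation, and both sample files are invented for the comparison:

```shell
# Two copies of the same sentence: one buried in framework scaffolding,
# one delivered as clean HTML. Both files are invented examples.
cat > bloated.html <<'EOF'
<div class="wrap"><div data-v-1a2b class="grid"><div class="cell">
<span class="t">AI crawlers embed text, not markup.</span></div></div></div>
EOF
cat > clean.html <<'EOF'
<p>AI crawlers embed text, not markup.</p>
EOF

# Rough text-to-markup ratio; stripping tags with sed is an approximation.
ratio() {
  text=$(sed -e 's/<[^>]*>//g' "$1" | tr -d '[:space:]' | wc -c | tr -d ' ')
  total=$(wc -c < "$1" | tr -d ' ')
  echo "$1: $text bytes of text out of $total total"
}
ratio bloated.html
ratio clean.html
```

The sentence itself is identical in both files, but the bloated version forces an extractor to dig through layers of wrappers to find it; at page scale, that noise adds up.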

What actually fixes retrieval failures

The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time. 

In practice, this can usually be achieved in one of two ways: 

  • Pre-rendering the page.
  • Ensuring clean and complete content delivery in the initial HTML response.

Pre-rendered HTML

Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.

This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.

AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.

The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion. 

This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.

From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
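In sketch form, the edge decision reduces to a user-agent check. The crawler tokens below (GPTBot, ClaudeBot, PerplexityBot, CCBot) are illustrative examples only; a real deployment would maintain a curated, up-to-date list:

```shell
# Decide which cached variant the edge should serve for a request.
# The token list is illustrative, not exhaustive.
pick_variant() {
  ua=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$ua" in
    *gptbot*|*claudebot*|*perplexitybot*|*ccbot*)
      echo "prerendered"   # static, fully rendered HTML snapshot
      ;;
    *)
      echo "dynamic"       # normal client-side experience for humans
      ;;
  esac
}

pick_variant "Mozilla/5.0 AppleWebKit/537.36; GPTBot/1.2"   # prerendered
pick_variant "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"    # dynamic
```

The design point is that the routing happens before the origin server is involved, so humans and crawlers each get the representation that serves them best.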

Clean initial content delivery

Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.

Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML. 

Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.

From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether. 

These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.

Structural failure 2: When content is optimized for keywords, not entities

Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.

While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.

AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.

When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse. The content may rank for a query, but its meaning remains ambiguous at the vector level.

This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.

Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.

Without explicit definition, entity signals weaken and associations fragment.

Get the newsletter search marketers rely on.


Structural failure 3: When structure can’t carry meaning

AI systems don’t consume content as complete pages.

Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.

Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.

Detailed header tags

Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.

Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.
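The point about isolated sections can be sketched in a few lines of Python. A retrieval pipeline typically splits a page into (heading, body) chunks before embedding each one independently; the markdown-style `## ` convention below is an assumption for illustration. Each chunk ends up only as self-describing as its heading.

```python
def split_into_sections(document: str) -> list[tuple[str, str]]:
    """Split a document into (heading, body) chunks, the way retrieval
    pipelines often do before embedding each section on its own."""
    sections, heading, body = [], "", []
    for line in document.splitlines():
        if line.startswith("## "):
            if heading or body:
                sections.append((heading, " ".join(body).strip()))
            heading, body = line[3:].strip(), []
        else:
            body.append(line.strip())
    if heading or body:
        sections.append((heading, " ".join(body).strip()))
    return sections
```

A chunk headed "Pricing for Acme CRM" carries its own context; a chunk headed "The good stuff" depends entirely on body text to recover what it is about.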

Dig deeper: The most important HTML tags to use for SEO success

Single-purpose sections

Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.

Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.

Structural failure 4: When conflicting signals dilute meaning

Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.

Common sources include:

Conflicting canonicals

When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions. 

The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.

Inconsistent metadata

Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.

Duplicated or lightly repeated sections

Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.

Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.
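A simple audit for the canonical conflicts described above can be sketched in Python. The page/topic data model here is hypothetical; the idea is just to flag topics whose near-duplicate URLs declare more than one canonical.

```python
from collections import defaultdict

def find_canonical_conflicts(pages: dict[str, dict]) -> list[str]:
    """Group near-duplicate pages by topic and flag topics whose
    variants point at more than one canonical URL."""
    canonicals_by_topic = defaultdict(set)
    for url, meta in pages.items():
        # Fall back to the page's own URL if no canonical is declared.
        canonicals_by_topic[meta["topic"]].add(meta.get("canonical", url))
    return sorted(t for t, c in canonicals_by_topic.items() if len(c) > 1)
```

Any topic this flags is a candidate for semantic dilution: its meaning is being spread across multiple competing representations instead of reinforced in one.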

Complete visibility requires ranking and retrieval

SEO has always been about visibility, but visibility is no longer a single condition.

Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.

Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.

The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.

Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.

Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

How PR teams can measure real impact with SEO, PPC, and GEO

How to incorporate SEO and GEO into PR measurement

PR measurement often breaks down in practice.

Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.

That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.

Working together, these teams can help PR do three things that are hard to accomplish alone:

  • Show the connection between media outreach and customer action.
  • Incorporate SEO – and now generative engine optimization (GEO) – into measurement programs.
  • Select tools that match the metrics that actually matter.

This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

Digital communication isn’t linear – and measurement shouldn’t be either

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.

In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.

That’s why measurement must start by defining the response sought, not by counting outputs.

SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.

PR measurement becomes dramatically more actionable when it adopts the same mindset.

Step 1: Show the connection between media outreach and customer action

PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”

The answer usually exists in the data. It’s just spread across systems owned by different teams.

SEO and paid media teams already track:

  • Branded and non-branded search demand.
  • Landing-page behavior.
  • Conversion paths.
  • Assisted conversions across channels.

By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.

Practical examples

  • Spikes in branded search following major media placements.
  • Referral traffic from earned links and how those visitors behave compared to other sources.
  • Increases in conversions or sign-ups after coverage appears in authoritative publications.
  • Assisted conversions where media exposure precedes search or paid clicks.

Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.

This reframes PR from a cost center to a demand-creation channel.
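One of the simplest versions of this analysis, comparing mean daily branded-search volume before and after a placement, can be sketched in Python. The numbers and the single-cutoff design are illustrative; a real analysis would control for seasonality and concurrent campaigns.

```python
from statistics import mean

def branded_search_lift(daily_counts: list[int], coverage_day: int) -> float:
    """Compare mean daily branded-search volume before vs. after a
    media placement. Returns relative lift (e.g., 0.5 means +50%)."""
    before = daily_counts[:coverage_day]
    after = daily_counts[coverage_day:]
    baseline = mean(before)
    return (mean(after) - baseline) / baseline

# Hypothetical daily branded-search counts, with coverage on day 3.
lift = branded_search_lift([100, 100, 100, 150, 150, 150], coverage_day=3)
```

Even a crude before/after comparison like this gives executives a number tied to a specific placement, which is more persuasive than a coverage count.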

Matt Bailey, a digital marketing author, professor, and instructor, said:

  • “The value of PR has been well-known by SEOs for some time. A great article pickup can influence rankings almost immediately. This was the golden link – high domain popularity, ranking impact, and incoming visitors – of which PR activities were the predominant influence.”

Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma

Step 2: Incorporate SEO into PR measurement – then go one step further with GEO

Most communications professionals now accept that SEO matters. 

What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.

Traditional PR metrics focus on:

  • Volume of coverage.
  • Share of voice.
  • Sentiment.

SEO-informed PR adds new outcome-level indicators:

  • Authority of linking domains, not just link counts.
  • Visibility for priority topics, not just brand mentions.
  • Search demand growth tied to campaigns or announcements.

These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”

Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.

Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.

For PR and communications teams, this is a natural extension of credibility building:

  • Is your organization cited by AI systems as an authoritative source?
  • Do AI-generated summaries reflect your key messages accurately?
  • Are competitors shaping the narrative instead?

Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.

The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.

David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:

  • “Real-time content creation has always been an effective way of communicating online. But now, in the age of AI-powered search, it has become even more important. The organizations that monitor continually, act decisively, and publish quickly will become the ones people turn to for clarity. And because AI tools increasingly mediate how people experience the world, those same organizations will also become the voices that artificial intelligence amplifies.”

Dig deeper: A 90-day SEO playbook for AI-driven search visibility

Step 3: Select tools based on the response sought – not on what’s fashionable

One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.

A useful framework is to work backward from the action you want audiences to take.

If the response sought is awareness or understanding:

  • Brand lift studies (from Google, Meta, and Nielsen) measure changes in awareness, favorability, and message association.
  • These tools help PR teams demonstrate impact beyond raw reach.

If the response sought is engagement or behavior:

  • Web and campaign analytics track key events such as downloads, sign-ups, or visits to priority pages.
  • User behavior tools like heatmaps and session recordings reveal whether content actually helps users accomplish tasks.

If the response sought is long-term influence:

  • SEO visibility metrics show whether coverage improves authority and topic ownership.
  • GEO tools reveal whether AI systems recognize and reuse your content.

The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.

Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said: 

  • “If PR professionals want to prove their impact, they need to go beyond tracking SEO to also understand their visibility in GEO as well. Search is where today’s purchasing and other decision making starts, and we’ve known for a while that good (or bad) press coverage drives searches for a brand. Which is why we’ve been advising PR professionals who want to prove their impact on the brand to ‘bake cookies and befriend’ the SEO folks within their companies. Today as more and more people rely on AI search for their answers, the value of traditional blue SEO links is declining faster than the value of a Tesla. As a result, understanding and ultimately quantifying how and where your brand is showing up in AI search (aka GEO) is critical.”

Dig deeper: 7 hard truths about measuring AI visibility and GEO performance

Why collaboration beats reinvention

PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.

What’s required is shared ownership of outcomes.

When these groups collaborate:

  • PR informs SEO about narrative priorities and upcoming campaigns.
  • SEO provides PR with data on audience demand and search behavior.
  • PPC teams validate messaging by testing what actually drives action.
  • Measurement becomes cumulative, not competitive.

This reduces duplication, saves budget, and produces insights that no single team could generate alone.

Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.

Today, tools are cheaper – or free – but the rule still holds.

The most valuable asset isn’t software. It’s professionals who can:

  • Ask the right questions.
  • Interpret data responsibly.
  • Translate insights into decisions.

Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.

Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.” 

I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Measurement isn’t about proving value – it’s about improving it

The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.

By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.

The theory hasn’t changed.

The opportunity to measure what matters is finally catching up.

Why most B2B buying decisions happen on Day 1 – and what video has to do with it

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:

  • The “viral” top-of-funnel asset that gets views but no leads.
  • The dry bottom-of-funnel product demo that gets leads but no views.

This binary thinking is breaking your pipeline.

In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.

When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving up to 1.4x more leads.

Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.

The reality: The ‘first impression rose’

The window to influence a deal closes much earlier than most marketers realize.

LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.

Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.

If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.

That requires a three-play strategy.

Play 1: Reach and prime the ‘hidden’ buying committee

The goal: Reach the people who can say ‘no’

Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.

Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign. 

But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”

In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be bought when the entire buying group – not just the user – knows you on Day 1.

The strategic shift: Cut-through creative

To reach that broader group, you can’t just be present. You have to be memorable. You need both reach and recall.

LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:

  • Be bold: Video ads featuring bold, distinctive colors see a 15% increase in engagement.
  • Be process-oriented: Messaging broken down into clear, visual steps drives 13% higher dwell times.
  • The “Goldilocks” length: Short videos between 7-15 seconds are the sweet spot for driving brand lift – outperforming both very short (under 6 seconds) and long-form ads.
  • The “Silent Movie” rule: Design for the eye, not the ear. 79% of LinkedIn’s audience scrolls with sound off. If your video relies on a talking head to explain the value prop in the first 5 seconds, you’ve lost 80% of the room. Use visual hooks and hard-coded captions to earn attention instantly.

Dig deeper: 5 tips to make your B2B content more human

Play 2: Educate and nudge by selling ‘buyability’

The goal: Mitigate personal and professional risk

This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).

When a B2B buyer is shortlisting vendors, they’re navigating career risk. 

Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

LinkedIn, Bain & Company - Mitigate personal and professional risk

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”

The strategic shift: Market the safety net

To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?

Momentum is safety (the “buzz” effect)

Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”

You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement. 

When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.

Authority builds trust (the “expert” effect)

If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.

Video ads featuring executive experts see 53% higher engagement.

When those experts are filmed on a conference stage, engagement lifts by 70%.

Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”

Consistency is credibility

You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.

Dig deeper: The future of B2B authority building in the AI search era

Play 3: Convert and capture by removing friction

The goal: Stop convincing, start helping

By this stage, the buyer knows you (Play 1) and trusts you (Play 2). 

Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.

Buyers at this stage feel three specific types of risk:

  • Execution risk: “Will this actually work for us?”
  • Decision risk: “What if I’m choosing wrong?”
  • Effort risk: “How much work is implementation?”

That’s why recommendations, relationships, and being relatable help close deals.

LinkedIn, Bain & Company - Number of buyability drivers influenced

The strategic shift: Answer the anxiety

Your creative should directly answer those anxieties.

Scale social proof – kill execution risk

90% of buyers say social proof is influential information. But don’t just post a logo. 

Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.

Activate your employees – kill decision risk

People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.

The stat that surprises most leaders: just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data.

Show the humans who’ll answer the phone when things break.

The conversion combo – kill effort risk

Don’t leave them hanging with a generic “Learn More” button.

We see 3x higher lead gen open rates when video ads are combined directly with lead gen forms. 

The video explains the value, the form captures the intent instantly.

  • Short sales cycle (under 30 days): Use video and lead gen forms for speed.
  • Long sales cycle: Retarget video viewers with message ads from a thought leader. Don’t ask for a sale; start a conversation.

Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing

It’s a flywheel, not a funnel

If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.

In most organizations, “brand” teams and “demand” teams operate in silos. 

  • Brand owns the top of the funnel (Play 1). 
  • Demand owns the bottom (Play 3). 

They fight over budget and rarely coordinate creative.

This fragmentation kills the multiplier effect.

When you break down those silos and run these plays as a single system, the data changes.

Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.

It creates a flywheel:

  • Your broad reach (Play 1) builds the retargeting pools.
  • Your educational content (Play 2) warms up those audiences, lifting CTRs.
  • Your conversion offers (Play 3) capture demand from buyers who are already sold, lowering your CPL.

The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.

And the ones on that list are the ones that win the revenue.

Google Ads no longer runs on keywords. It runs on intent.

Why Google Ads auctions now run on intent, not keywords

Most PPC teams still build campaigns the same way: pull a keyword list, set match types, and organize ad groups around search terms. It’s muscle memory.

But Google’s auction no longer works that way.

Search now behaves more like a conversation than a lookup. In AI Mode, users ask follow-up questions and refine what they’re trying to solve. AI Overviews reason through an answer first, then determine which ads support that answer.

In Google Ads, the auction isn’t triggered by a keyword anymore – it’s triggered by inferred intent.

If you’re still structuring campaigns around exact and phrase match, you’re planning for a system that no longer exists. The new foundation is intent: not the words people type, but the goals behind them.

An intent-first approach gives you a more durable way to design campaigns, creative, and measurement as Google introduces new AI-driven formats.

Keywords aren’t dead, but they’re no longer the blueprint.

The mechanics under the hood have changed

Here’s what’s actually happening when someone searches now.

Google’s AI uses a technique called “query fan out,” splitting a complex question into subtopics and running multiple concurrent searches to build a comprehensive response.

The auction happens before the user even finishes typing.

And crucially, the AI infers commercial intent from purely informational queries.

For instance, someone asks, “Why is my pool green?” They’re not shopping. They’re troubleshooting.

But Google’s reasoning layer detects a problem that products can solve and serves ads for pool-cleaning supplies alongside the explanation. While the user didn’t search for a product, the AI knew they would need one.

This auction logic is fundamentally different from what we’re accustomed to. It’s not matching your keyword to the query. It’s matching your offering to the user’s inferred need state, based on conversational context. 

If your campaign structure still assumes people search in isolated, transactional moments, you’re missing the journey entirely.
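The "query fan-out" behavior described above can be approximated with a toy Python sketch: one complex question is decomposed into subtopic searches that run concurrently and are then merged. The hard-coded subtopics and the stubbed `search` call are assumptions; real systems derive subqueries with a reasoning model.

```python
import asyncio

async def search(subquery: str) -> str:
    """Stand-in for a real search call; returns a placeholder result."""
    await asyncio.sleep(0)
    return f"results for: {subquery}"

async def fan_out(query: str, subtopics: list[str]) -> list[str]:
    """Run one concurrent search per subtopic, then merge the results."""
    tasks = [search(f"{query} {topic}") for topic in subtopics]
    return await asyncio.gather(*tasks)

# Illustrative decomposition of a troubleshooting query.
results = asyncio.run(
    fan_out("why is my pool green", ["causes", "treatment", "products"])
)
```

Notice that "products" appears as a subtopic even though the original query is purely informational; that is the step where commercial intent gets inferred and an ad auction can attach.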

Anatomy of a Google AI search query

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

What ‘intent-first’ actually means

An intent-first strategy doesn’t mean you stop doing keyword research. It means you stop treating keywords as the organizing principle.

Instead, you map campaigns to the why behind the search.

  • What problem is the user trying to solve?
  • What stage of decision-making are they in?
  • What job are they hiring your product to do?

The same intent can surface through dozens of different queries, and the same query can reflect multiple intents depending on context.

“Best CRM” could mean either “I need feature comparisons” or “I’m ready to buy and want validation.” Google’s AI now reads that difference, and your campaign structure should, too.

This is more of a mental model shift than a tactical one.

You’re still building keyword lists, but you’re grouping them by intent state rather than match type.

You’re still writing ad copy, but you’re speaking to user goals instead of echoing search terms back at them.

What changes in practice

Once campaigns are organized around intent instead of keywords, the downstream implications show up quickly – in eligibility, landing pages, and how the system learns.

Campaign eligibility

If you want to show up inside AI Overviews or AI Mode, you need broad match keywords, Performance Max, or the newer AI Max for Search campaigns.

Exact and phrase match still work for brand defense and high-visibility placements above the AI summaries, but they won’t get you into the conversational layer where exploration happens.

Landing page evolution

It’s not enough to list product features anymore. If your page explains why and how someone should use your product (not just what it is), you’re more likely to win the auction.

Google’s reasoning layer rewards contextual alignment. If the AI built an answer about solving a problem, and your page directly addresses that problem, you’re in.

Asset volume and training data

The algorithm prioritizes rich metadata, multiple high-quality images, and optimized shopping feeds with every relevant attribute filled in.

Using Customer Match lists to feed the system first-party data teaches the AI which user segments represent the highest value.

That training affects how aggressively it bids for similar users.

Dig deeper: In Google Ads automation, everything is a signal in 2026

The gaps worth knowing about

Even as intent-first campaigns unlock new reach, there are still blind spots in reporting, budget constraints, and performance expectations you need to plan around.

No reporting segmentation

Google doesn’t provide visibility into how ads perform specifically in AI Mode versus traditional search.

You’re monitoring overall cost-per-conversion and hoping high-funnel clicks convert downstream, but you can’t isolate which placements are actually driving results.

The budget barrier

AI-powered campaigns like Performance Max and AI Max need meaningful conversion volume to scale effectively, often 30 conversions in 30 days at a minimum.

Smaller advertisers with limited budgets or longer sales cycles face what some call a “scissors gap,” in which they lack the data needed to train algorithms and compete in automated auctions.

Funnel position matters

AI Mode attracts exploratory, high-funnel behavior. Conversion rates won’t match bottom-of-the-funnel branded searches. That’s expected if you’re planning for it.

It becomes a problem when you’re chasing immediate ROAS without adjusting how you define success for these placements.

Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro

Where to start

You don’t need to rebuild everything overnight.

Pick one campaign where you suspect intent is more complex than the keywords suggest. Map it to user goal states instead of search term buckets.

Test broad match in a limited way. Rewrite one landing page to answer the “why” instead of just listing specs.

The shift to intent-first is not a tactic – it’s a lens. And it’s the most durable way to plan as Google keeps introducing new AI-driven formats.

How AI is reshaping local search and what enterprises must do now

Local search in the AI-first era: From rankings to recommendations in 2026

AI is no longer an experimental layer in search. It’s actively mediating how customers discover, evaluate, and choose local businesses, increasingly without a traditional search interaction. 

The real risk is data stagnation. As AI systems act on local data for users, brands that fail to adapt risk declining visibility, data inconsistencies, and loss of control over how locations are represented across AI surfaces.

Learn how AI is changing local search and what you can do to stay visible in this new landscape. 

How AI search is different from traditional search

Traditional search vs. AI search

We are experiencing a platform shift where machine inference, not database retrieval, drives decisions. At the same time, AI is moving beyond screens into real-world execution.

AI now powers navigation systems, in-car assistants, logistics platforms, and autonomous decision-making.

In this environment, incorrect or fragmented location data does not just degrade search.

It leads to missed turns, failed deliveries, inaccurate recommendations, and lost revenue. Brands don’t simply lose visibility. They get bypassed.

Business implications in an AI-first, zero-click decision layer 

Local search has become an AI-first, zero-click decision layer.

Multi-location brands now win or lose based on whether AI systems can confidently recommend a location as the safest, most relevant answer.

That confidence is driven by structured data quality, Google Business Profile excellence, reviews, engagement, and real-world signals such as availability and proximity.

For 2026, the enterprise risk is not experimentation. It’s inertia.

Brands that fail to industrialize and centralize local data, content, and reputation operations will see declining AI visibility, fragmented brand representation, and lost conversion opportunities without knowing why.

Paradigm shifts to understand 

Here are four key ways the growth in AI search is changing the local journey:

  • AI answers are the new front door: Local discovery increasingly starts and ends inside AI answers and Google surfaces, where users select a business directly.
  • Context beats rankings: AI weighs conversation history, user intent, location context, citations, and engagement signals, not just position.
  • Zero-click journeys dominate: Most local actions now happen on-SERP (GBP, AI Overviews, service features), making on-platform optimization mission-critical.
  • Local search in 2026 is about being chosen, not clicked: Enterprises that combine entity intelligence, operational rigor by centralizing data and creating consistency, and on-SERP conversion discipline will remain visible and preferred as AI becomes the primary decision-maker.

Businesses that don’t grasp these changes quickly won’t fall behind quietly. They’ll be algorithmically bypassed.

Dig deeper: The enterprise blueprint for winning visibility in AI search

How AI composes local results (and why it matters)

AI systems build memory through entity and context graphs. Brands with clean, connected location, service, and review data become default answers.

Local queries increasingly fall into two intent categories: objective and subjective. 

  • Objective queries focus on verifiable facts:
    • “Is the downtown branch open right now?”
    • “Do you offer same-day service?”
    • “Is this product in stock nearby?”
  • Subjective queries rely on interpretation and sentiment:
    • “Best Italian restaurant near me”
    • “Top-rated bank in Denver”
    • “Most family-friendly hotel”

This distinction matters because AI systems treat risk differently depending on intent.

For objective queries, AI models prioritize first-party sources and structured data to reduce hallucination risk. These answers often drive direct actions like calls, visits, and bookings without a traditional website visit ever occurring.

For subjective queries, AI relies more heavily on reviews, third-party commentary, and editorial consensus. That data typically comes from other channels, such as UGC and review sites.

Dig deeper: How to deploy advanced schema at scale

Source authority matters

Industry research has shown that for objective local queries, brand websites and location-level pages act as primary “truth anchors.”

When an AI system needs to confirm hours, services, amenities, or availability, it prioritizes explicit, structured core data over inferred mentions.

Consider a simple example. If a user asks, “Find a coffee shop near me that serves oat milk and is open until 9,” the AI must reason across location, inventory, and hours simultaneously.

If those facts are not clearly linked and machine-readable, the brand cannot be confidently recommended.

This is why freshness, relevance, and machine clarity, powered by entity-rich structured data, help AI systems interpret the right response. 
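To make the coffee-shop example concrete, here is a minimal sketch of the kind of machine-readable markup that links location, hours, and amenities in one entity. The schema.org types and properties are real; the helper function, business name, and values are illustrative assumptions, not a prescribed implementation.

```python
import json

def coffee_shop_jsonld(name, street, locality, opens, closes, amenities):
    """Build a minimal schema.org CafeOrCoffeeShop JSON-LD blob.

    The oat-milk flag is modeled here as an amenityFeature for illustration;
    your own schema strategy might use menu or product markup instead.
    """
    return {
        "@context": "https://schema.org",
        "@type": "CafeOrCoffeeShop",
        "name": name,
        "address": {
            "@type": "PostalAddress",
            "streetAddress": street,
            "addressLocality": locality,
        },
        "openingHoursSpecification": [{
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "opens": opens,
            "closes": closes,
        }],
        "amenityFeature": [
            {"@type": "LocationFeatureSpecification", "name": a, "value": True}
            for a in amenities
        ],
    }

# Hypothetical shop: open until 9 p.m. and serving oat milk -- exactly the
# facts the AI needed to reason across in the query above.
markup = coffee_shop_jsonld(
    "Example Roasters", "123 Main St", "Denver",
    opens="07:00", closes="21:00", amenities=["Oat milk"],
)
print(json.dumps(markup, indent=2))
```

Because hours, location, and the amenity live in one connected entity, an AI system can confirm all three facts in a single lookup instead of inferring them from scattered mentions.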

Set yourself up for success

Ensure your data is fresh, relevant, and clear with these tips:

  • Build a centralized entity and context graph and syndicate it consistently across GBP, listings, schema, and content.
  • Industrialize local data and entities by developing one source of truth for locations, services, attributes, and inventory – continuously audited and AI-normalized.
  • Make content AI-readable and hyper-local with structured FAQs, services, and how-to content by location, optimized for conversational and multimodal queries.
  • Treat GBP as a product surface with standardized photos, services, offers, and attributes — localized and continuously optimized.
  • Operationalize reviews and reputation by implementing always-on review generation, AI-assisted responses, and sentiment intelligence feeding CX and operations.
  • Adopt AI-first measurement and governance to track AI visibility, local answer share, and on-SERP conversions — not just rankings and traffic.

Dig deeper: From search to answer engines: How to optimize for the next era of discovery

The evolution of local search from listings management to an enterprise local journey

Historically, local search was managed as a collection of disconnected tactics: listings accuracy, review monitoring, and periodic updates to location pages.

That operating model is increasingly misaligned with how local discovery now works.

Local discovery has evolved into an end-to-end enterprise journey – one that spans data integrity, experience delivery, governance, and measurement across AI-driven surfaces.

Listings, location pages, structured data, reviews, and operational workflows now work together to determine whether a brand is trusted, cited, and repeatedly surfaced by AI systems.

Introducing Local 4.0

Local 4.0 is a practical operating model for AI-first local discovery at enterprise scale. The framework’s focus is ensuring your brand is understandable, verifiable, and safe for AI systems to recommend.

To understand why this matters, it helps to look at how local has evolved:

The evolution of local
  • Local 1.0 – Listings and basic NAP consistency: The goal was presence – being indexed and included.
  • Local 2.0 – Map pack optimization and reviews: Visibility was driven by proximity, profile completeness, and reputation.
  • Local 3.0 – Location pages, content, and ROI: Local became a traffic and conversion driver tied to websites.
  • Local 4.0 – AI-mediated discovery and recommendation: Local becomes decision infrastructure, not a channel.

In practice, that means your brand must be:

  • Understandable by AI systems (clean, structured, connected data).
  • Verifiable across platforms (consistent facts, citations, reviews).
  • Safe to recommend in real-world decision contexts.

In an AI-mediated environment, brands are no longer merely present. They are selected, reused, or ignored – often without a click. This is the core transformation enterprise leaders must internalize as they plan for 2026.

Dig deeper: AI and local search: The new rules of visibility and ROI

The Local 4.0 journey for enterprise brands

[Image: The four-step enterprise local journey]

Step 1: Discovery, consistency, and control

Discovery in an AI-driven environment is fundamentally about trust. When data is inconsistent or noisy, AI systems treat it as a risk signal and deprioritize it.

Core elements include:

  • Consistency across websites, profiles, directories, and attributes.
  • Listings as verification infrastructure.
  • Location pages as primary AI data sources.
  • Structured data and indexing as the machine clarity layer.

[Image: Ensuring consistency across owned channels]

Why ‘legacy’ sources still matter

Listings act as verification infrastructure. Interestingly, research suggests that LLMs often cross-reference data against highly structured legacy directories (such as MapQuest or the Yellow Pages).

While human traffic to these sites has waned, AI systems utilize them as “truth anchors” because their data is rigidly structured and verified.

If your hours are wrong on MapQuest, an AI agent may downgrade its confidence in your Google Business Profile, viewing the discrepancy as a risk.

Discovery is no longer about being crawled. It’s about being trusted and reused. Governance matters because ownership, workflows, and data quality now directly affect brand risk.
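A trust audit like this can start as a simple field-by-field comparison across every source that publishes your location data. A hedged Python sketch, with fabricated listing records and source names:

```python
# Hypothetical records as pulled from each platform's API or export.
listings = {
    "google_business_profile": {"name": "Acme Dental", "phone": "+1-303-555-0100",
                                "address": "12 Oak Ave, Denver, CO", "hours": "Mo-Fr 08:00-17:00"},
    "mapquest":                {"name": "Acme Dental", "phone": "+1-303-555-0100",
                                "address": "12 Oak Ave, Denver, CO", "hours": "Mo-Fr 09:00-17:00"},
    "yellow_pages":            {"name": "Acme Dental LLC", "phone": "+1-303-555-0100",
                                "address": "12 Oak Ave, Denver, CO", "hours": "Mo-Fr 08:00-17:00"},
}

def audit_consistency(listings):
    """Return {field: {value: [sources]}} for every field whose value
    differs across sources -- each entry is a discrepancy to resolve."""
    fields = {f for rec in listings.values() for f in rec}
    report = {}
    for field in fields:
        seen = {}
        for source, rec in listings.items():
            seen.setdefault(rec.get(field), []).append(source)
        if len(seen) > 1:  # more than one distinct value = inconsistency
            report[field] = seen
    return report

discrepancies = audit_consistency(listings)
for field, values in sorted(discrepancies.items()):
    print(f"{field}: {values}")
```

In this toy data, the hours conflict on MapQuest and the legal-name variant on Yellow Pages would both surface as risk signals to fix before an AI agent discounts the whole entity.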

Dig deeper: 4 pillars of an effective enterprise AI strategy 

Step 2: Engagement and freshness 

AI systems increasingly reward data that is current, efficiently crawled, and easy to validate.

Stale content is no longer neutral. When an AI system encounters outdated information – such as incorrect hours, closed locations, or unavailable services – it may deprioritize or avoid that entity in future recommendations.

For enterprises, freshness must be operationalized, not managed manually. This requires tightly connecting the CMS with protocols like IndexNow, so updates are discovered and reflected by AI systems in near real time.
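The IndexNow handshake itself is a simple HTTP POST. Below is a minimal Python sketch of the payload the public IndexNow spec describes – a host, a verification key, and the changed URLs. The domain, key, and URL here are placeholders, and the key file must actually be served at the stated location for engines to accept the ping.

```python
import json
from urllib import request

INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"  # shared endpoint from the IndexNow spec

def build_indexnow_payload(host, key, urls):
    """Assemble the JSON body IndexNow expects for a batch of changed URLs.

    The key must also be reachable at https://<host>/<key>.txt so search
    engines can verify the submission is authorized."""
    return {
        "host": host,
        "key": key,
        "keyLocation": f"https://{host}/{key}.txt",
        "urlList": urls,
    }

def ping_indexnow(payload):
    """Submit the batch; a 2xx status means the URLs were accepted for crawling."""
    req = request.Request(
        INDEXNOW_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    with request.urlopen(req) as resp:
        return resp.status

payload = build_indexnow_payload(
    "www.example.com",
    "your-indexnow-key",
    ["https://www.example.com/locations/denver/"],  # e.g., a page whose hours just changed
)
# ping_indexnow(payload)  # uncomment to actually notify the engines
```

Wired into the CMS publish hook, a call like this turns every location-page update into a near-real-time freshness signal instead of waiting for a recrawl.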

Beyond updates, enterprises must deliberately design for local-level engagement and signal velocity. Fresh, locally relevant content – such as events, offers, service updates, and community activity – should be surfaced on location pages, structured with schema, and distributed across platforms.

In an AI-first environment, freshness is trust, and trust determines whether a location is surfaced, reused, or skipped entirely.

Unlocking ‘trapped’ data

A major challenge for enterprise brands is “trapped” data – vital information locked behind PDFs, menu images, or static event calendars.

For example, a restaurant group may upload a PDF of their monthly live music schedule. To a human, this is visible. To a search crawler, it’s often opaque. In an AI-first era, this data must be extracted and structured.

If an agent cannot read the text inside the PDF, it cannot answer the query: “Find a bar with live jazz tonight.”
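Once the schedule has been extracted from the PDF (with whatever text-extraction tooling you use), the recovered rows can be republished as structured data. A hedged Python sketch – the acts, dates, and venue are invented, and the parsing step is assumed to have happened upstream:

```python
import json

# Suppose the PDF schedule has already been parsed into plain rows like
# these -- the extraction itself is out of scope for this sketch.
parsed_rows = [
    {"act": "The Blue Note Trio", "date": "2026-03-06", "start": "20:00"},
    {"act": "Ella's Quartet",     "date": "2026-03-13", "start": "20:00"},
]

def rows_to_event_jsonld(venue_name, venue_address, rows):
    """Convert parsed schedule rows into schema.org Event markup so
    crawlers and AI agents can answer 'live jazz tonight' queries."""
    return [{
        "@context": "https://schema.org",
        "@type": "Event",
        "name": f"Live jazz: {row['act']}",
        "startDate": f"{row['date']}T{row['start']}",
        "location": {
            "@type": "BarOrPub",
            "name": venue_name,
            "address": venue_address,
        },
    } for row in rows]

events = rows_to_event_jsonld("Example Lounge", "45 Pine St, Denver, CO", parsed_rows)
print(json.dumps(events, indent=2))
```

The same data that was opaque inside the PDF is now individually addressable events an agent can match against a date-scoped query.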

Key focus areas include:

  • Continuous content freshness.
  • Efficient indexing and crawl pathways.
  • Dynamic local updates such as events, availability, and offerings.

At enterprise scale, manual workflows break. Freshness is no longer tactical. It’s a competitive requirement.

Dig deeper: Chunk, cite, clarify, build: A content framework for AI search

Step 3: Experience and local relevance

AI does not select the best brand. It selects the location that best resolves intent.

Generic brand messaging consistently loses out to locally curated content. AI retrieval is context-driven and prioritizes specific attributes such as parking availability, accessibility, accepted insurance, or local services.

This exposes a structural problem for many enterprises: information is fragmented across systems and teams.

Solving AI-driven relevance requires organizing data as a context graph. This means connecting services, attributes, FAQs, policies, and location details into a coherent, machine-readable system that maps to customer intent rather than departmental ownership.
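One way to picture a context graph is a per-location entity record that any facet of intent can query, regardless of which team owns the underlying data. A toy Python sketch with invented locations and attributes:

```python
# A toy context graph: every fact hangs off the location entity instead of
# living in a department-owned silo. Field names are illustrative.
context_graph = {
    "clinic-denver-01": {
        "services": {"teeth cleaning", "invisalign"},
        "attributes": {"parking": "free lot", "wheelchair_accessible": True},
        "insurance": {"Delta Dental", "Cigna"},
        "faqs": {"Do you take walk-ins?": "Yes, before 3 p.m. on weekdays."},
    },
    "clinic-boulder-02": {
        "services": {"teeth cleaning"},
        "attributes": {"parking": "street only", "wheelchair_accessible": True},
        "insurance": {"Cigna"},
        "faqs": {},
    },
}

def resolve_intent(graph, service=None, insurance=None, attribute=None):
    """Return location IDs that satisfy every requested facet of the intent."""
    matches = []
    for loc_id, node in graph.items():
        if service and service not in node["services"]:
            continue
        if insurance and insurance not in node["insurance"]:
            continue
        if attribute and not node["attributes"].get(attribute):
            continue
        matches.append(loc_id)
    return matches

# "Which location offers Invisalign and takes Delta Dental?"
print(resolve_intent(context_graph, service="invisalign", insurance="Delta Dental"))
```

The point is not this data structure itself but the shape of the question it answers: intent arrives as a combination of facets, and only a connected record can resolve all of them at once.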

Enterprises should also consider omnichannel marketing approaches to achieve consistency.   

Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement

Step 4: Measurement that executives can trust

As AI-driven and zero-click journeys increase, traditional SEO metrics lose relevance. Attribution becomes fragmented across search, maps, AI interfaces, and third-party platforms.

Precision tracking gives way to directional confidence.

Executive-level KPIs should focus on:

  • AI visibility and recommendation presence.
  • Citation accuracy and consistency.
  • Location-level actions (calls, directions, bookings).
  • Incremental revenue or lead quality lift.

The goal is not perfect attribution. It’s confidence that local discovery is working and revenue risk is being mitigated.

Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026

Why Local 4.0 needs to be the enterprise response

Fragmentation is a material revenue risk. When local data is inconsistent or disconnected, AI systems have lower confidence in it and are less likely to reuse or recommend those locations.

Treating local data as a living, governed asset and establishing a single, authoritative source of truth early prevents incorrect information from propagating across AI-driven ecosystems and avoids the costly remediation required to fix issues after they scale.

Dig deeper: How to select a CMS that powers SEO, personalization and growth

Local 4.0 is integral to the localized AI discovery flywheel

[Image: The AI discovery flywheel]

AI-mediated discovery is becoming the default interface between customers and local brands.

Local 4.0 provides a framework for control, confidence, and competitiveness in that environment. It aligns data, experience, and governance around how AI systems actually operate through reasoning, verification, and reuse.

This is not about chasing AI trends. It’s about ensuring your brand is correctly represented and confidently recommended wherever customers discover you next.

In Google Ads automation, everything is a signal in 2026

In 2015, PPC was a game of direct control. You told Google exactly which keywords to target, set manual bids at the keyword level, and capped spend with a daily budget. If you were good with spreadsheets and understood match types, you could build and manage 30,000-keyword accounts all day long.

Those days are gone.

In 2026, platform automation is no longer a helpful assistant. It’s the primary driver of performance. Fighting that reality is a losing battle. 

Automation has leveled the playing field and, in many cases, given PPC marketers back their time. But staying effective now requires a different skill set: understanding how automated systems learn and how your data shapes their decisions.

This article breaks down how signals actually work inside Google Ads, how to identify and protect high-quality signals, and how to prevent automation from drifting into the wrong pockets of performance.

Automation runs on signals, not settings

Google’s automation isn’t a black box where you drop in a budget and hope for the best. It’s a learning system that gets smarter based on the signals you provide. 

Feed it strong, accurate signals, and it will outperform any manual approach.

Feed it poor or misleading data, and it will efficiently automate failure.

That’s the real dividing line in modern PPC. AI and automation run on signals. If a system can observe, measure, or infer something, it can use it to guide bidding and targeting.

Google’s official documentation still frames “audience signals” primarily as the segments advertisers manually add to products like Performance Max or Demand Gen. 

That definition isn’t wrong, but it’s incomplete. It reflects a legacy, surface-level view of inputs and not how automation actually learns at scale.

Dig deeper: Google Ads PMax: The truth about audience signals and search themes

What actually qualifies as a signal?

In practice, every element inside a Google Ads account functions as a signal. 

Structure, assets, budgets, pacing, conversion quality, landing page behavior, feed health, and real-time query patterns all shape how the AI interprets intent and decides where your money goes. 

Nothing is neutral. Everything contributes to the model’s understanding of who you want, who you don’t, and what outcomes you value.

So when we talk about “signals,” we’re not just talking about first-party data or demographic targeting. 

We’re talking about the full ecosystem of behavioral, structural, and quality indicators that guide the algorithm’s decision-making.

Here’s what actually matters:

  • Conversion actions and values: These are 100% necessary. They tell Google Ads what defines success for your specific business and which outcomes carry the most weight for your bottom line.
  • Keyword signals: These indicate search intent. Based on research shared by Brad Geddes at a recent Paid Search Association webinar, even “low-volume” keywords serve as vital signals. They help the system understand the semantic neighborhood of your target audience.
  • Ad creative signals: This goes beyond RSA word choice. I believe the platform now analyzes the environment within your images. If you show a luxury kitchen, the algorithm identifies those visual cues to find high-end customers. I base this hypothesis on my experience running a YouTube channel. I’ve watched how the algorithm serves content based on visual environments, not just metadata.
  • Landing page signals: Beyond copy, elements like color palettes, imagery, and engagement metrics signal how well your destination aligns with the user’s initial intent. This creates a feedback loop that tells Google whether the promise of the ad was kept.
  • Bid strategies and budgets: Your bidding strategy is another core signal for the AI. It tells the system whether you’re prioritizing efficiency, volume, or raw profit. Your budget signals your level of market commitment. It tells the system how much permission it has to explore and test.

In 2026, we’ve moved beyond the daily cap mindset. With the expansion of campaign total budgets to Search and Shopping, we are now signaling a total commitment window to Google.

In the announcement, UK retailer Escentual.com used this approach to signal a fixed promotional budget, resulting in a 16% traffic lift because the AI was given permission to pace spend based on real-time demand rather than arbitrary 24-hour cycles.

All of these elements function as signals because they actively shape the ad account’s learning environment.

Anything the ad platform can observe, measure, or infer becomes part of how it predicts intent, evaluates quality, and allocates budget. 

If a component influences who sees your ads, how they behave, or what outcomes the algorithm optimizes toward, it functions as a signal.

The auction-time reality: Finding the pockets

To understand why signal quality has become critical, you need to understand what’s actually happening every time someone searches.

Google’s auction-time bidding doesn’t set one bid for “mobile users in New York.” 

It calculates a unique bid for every single auction based on billions of signal combinations at that precise millisecond. This considers the user, not simply the keyword.

We are no longer looking for “black-and-white” performance.

We are finding pockets of performance and users who are predicted to take the outcomes we define as our goals in the platform.

The AI evaluates the specific intersection of a user on iOS 17, using Chrome, in London, at 8 p.m., who previously visited your pricing page. 

Because the bidding algorithm cross-references these attributes, it generates a precise bid. This level of granularity is impossible for humans to replicate. 

But this is also the “garbage in, garbage out” reality. Without quality signals, the system is forced to guess.

Dig deeper: How to build a modern Google Ads targeting strategy like a pro

The signal hierarchy: What Google actually listens to

If every element in a Google Ads account functions as a signal, we also have to acknowledge that not all signals carry equal weight.

Some signals shape the core of the model’s learning. Others simply refine it.

Based on my experience managing accounts spending six and seven figures monthly, this is the hierarchy that actually matters.

Conversion signals reign supreme

Your tracking is the most important data point. The algorithm needs a baseline of 30 to 50 conversions per month to recognize patterns. For B2B advertisers, this often requires shifting from high-funnel form fills to down-funnel CRM data.

As Andrea Cruz noted in her deep dive on Performance Max for B2B, optimizing for a “qualified lead” or “appointment booked” is the only way to ensure the AI doesn’t just chase cheap, irrelevant clicks.

Enhanced conversions and first-party data

We are witnessing a “death by a thousand cuts,” where browser restrictions from Safari and Firefox, coupled with aggressive global regulations, have dismantled the third-party cookie. 

Without enhanced conversions or server-side tracking, you are essentially flying blind, because the invisible trackers of the past are being replaced by a model where data must be earned through transparent value exchanges.

First-party audience signals

Your customer lists tell Google, “Here is who converted. Now go find more people like this.” 

Quality trumps quantity here. A stale or tiny list won’t be as effective as a list that is updated in real time.

Custom segments provide context

Using keywords and URLs to build segments creates a digital footprint of your ideal customer. 

This is especially critical in niche industries where Google’s prebuilt audiences are too broad or too generic.

These segments help the system understand the neighborhood your best prospects live in online.

To simplify this hierarchy, I’ve mapped out the most common signals used in 2026 by their actual weight in the bidding engine:

| Signal category | Specific input (the “what”) | Weight/impact | Why it matters in 2026 |
| --- | --- | --- | --- |
| Primary (Truth) | Offline conversion imports (CRM) | Critical | Trains the AI on profit, not just “leads.” |
| Primary (Truth) | Value-based bidding (tROAS) | Critical | Signals which products actually drive margin. |
| Secondary (Context) | First-party customer match lists | High | Provides a “Seed Audience” for the AI to model. |
| Secondary (Context) | Visual environment (images/video) | High | AI scans images to infer user “lifestyle” and price tier. |
| Tertiary (Intent) | Low-volume/long-tail keywords | Medium | Defines the “semantic neighborhood” of the search. |
| Tertiary (Intent) | Landing page color and speed | Medium | Signals trust and relevance feedback loops. |
| Pollutant (Noise) | “Soft” conversions (scrolls/clicks) | Negative | Dilutes intent. Trains AI to find “cheap clickers.” |

Dig deeper: Auditing and optimizing Google Ads in an age of limited data

Beware of signal pollution

Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google’s AI uses to learn. 

It’s what happens when the system receives signals that don’t accurately represent your ideal client, your real conversion quality, or the true intent you want to attract in your ad campaigns.

Signal pollution doesn’t just “confuse” the bidding algorithm. It actively trains it in the wrong direction. 

It dilutes your high-value signals, expands your reach into low-intent audiences, and forces the model to optimize toward outcomes you don’t actually want.

Common sources include:

  • Bad conversion data, including junk leads, unqualified form fills, and misfires.
  • Overly broad structures that blend high- and low-intent traffic.
  • Creative that attracts the wrong people.
  • Landing page behavior that signals low relevance or low trust.
  • Budget or pacing patterns that imply you’re willing to pay for volume over quality.
  • Feed issues that distort product relevance.
  • Audience segments that don’t match your real buyer.

These sources create the initial pollution. But when marketers try to compensate for underperformance by feeding the machine more data, the root cause never gets addressed. 

That’s when soft conversions like scrolls or downloads get added as primary signals, and none of them correlate to revenue.

Like humans, algorithms focus on the metrics they are fed.

If you mix soft signals with high-intent revenue data, you dilute the profile of your ideal customer. 

You end up winning thousands of cheap, low-value auctions that look great in a report but fail to move the needle on the P&L. 

Your job is to be the gatekeeper, ensuring only the most profitable signals reach the bidding engine.
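A simple way to play that gatekeeper role is to audit which conversion actions actually map to revenue before they feed bidding, and demote the rest to observation. An illustrative Python sketch – the action names and the revenue-linked flag are invented stand-ins for your own conversion audit:

```python
# Hypothetical conversion-action audit: only actions that map to revenue
# stay "primary" (fed to bidding); the rest are demoted to reporting only.
conversion_actions = [
    {"name": "Purchase",             "revenue_linked": True},
    {"name": "Qualified lead (CRM)", "revenue_linked": True},
    {"name": "Scroll 75%",           "revenue_linked": False},
    {"name": "PDF download",         "revenue_linked": False},
]

def classify_actions(actions):
    """Split actions into primary (bidding) vs. secondary (reporting only)."""
    primary = [a["name"] for a in actions if a["revenue_linked"]]
    secondary = [a["name"] for a in actions if not a["revenue_linked"]]
    return primary, secondary

primary, secondary = classify_actions(conversion_actions)
print("Feed to bidding:", primary)
print("Observe only:  ", secondary)
```

Running an audit like this quarterly keeps soft signals from quietly creeping back into the pool the bidding engine learns from.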

When signal pollution takes hold, the algorithm doesn’t just underperform. The ads start drifting toward the wrong users, and performance begins to decline. 

Before you can build a strong signal strategy, you have to understand how to spot that drift early and correct it before it compounds.

How to detect and correct algorithm drift

Algorithm drift happens when Google’s automation starts optimizing toward the wrong outcomes because the signals it’s receiving no longer match your real advertising goals. 

Drift doesn’t show up as a dramatic crash. It shows up as a slow shift in who you reach, what queries you win, and which conversions the system prioritizes. It looks like a gradual deterioration of lead quality.

To stay in control, you need a simple way to spot drift early and correct it before the machine locks in the wrong pattern.

Early warning signs of drift include:

  • A sudden rise in cheap conversions that don’t correlate with revenue.
  • A shift in search terms toward lower-intent or irrelevant queries.
  • A drop in average order value or lead quality.
  • A spike in new-user volume with no matching lift in sales.
  • A campaign that looks healthy in-platform but feels wrong in the CRM or P&L.

These are all indicators that the system is optimizing toward the wrong signals.
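Those warning signs can be monitored mechanically. A hedged Python sketch that flags drift when revenue per conversion sags below a baseline window – the weekly figures and the 25% threshold are invented, so tune both to your own volatility:

```python
# Weekly snapshots: conversions as counted in-platform vs. revenue from the
# CRM. Drift shows up as conversions rising while revenue per conversion falls.
weekly = [
    {"week": "W1", "conversions": 100, "revenue": 50000},
    {"week": "W2", "conversions": 110, "revenue": 52000},
    {"week": "W3", "conversions": 160, "revenue": 51000},
    {"week": "W4", "conversions": 210, "revenue": 49000},
]

def detect_drift(snapshots, baseline_weeks=2, drop_threshold=0.25):
    """Flag drift when revenue-per-conversion in the latest week falls more
    than `drop_threshold` below the baseline average."""
    baseline = snapshots[:baseline_weeks]
    base_rpc = sum(s["revenue"] for s in baseline) / sum(s["conversions"] for s in baseline)
    latest = snapshots[-1]
    latest_rpc = latest["revenue"] / latest["conversions"]
    drop = 1 - latest_rpc / base_rpc
    return drop > drop_threshold, round(drop, 2)

drifting, drop = detect_drift(weekly)
print(f"Drift detected: {drifting} (revenue/conversion down {drop:.0%})")
```

In this toy data the campaign would look healthy in-platform – conversions doubled – while revenue per conversion fell by more than half, which is exactly the in-platform-vs.-P&L gap described above.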

To correct drift without resetting learning:

  • Tighten your conversion signals: Remove soft conversions, misfires, or anything that doesn’t map to revenue. The machine can’t unlearn bad data, but you can stop feeding it.
  • Reinforce the right audience patterns: Upload fresh customer lists, refresh custom segments, and remove stale data. Drift often comes from outdated or diluted audience signals.
  • Adjust structure to isolate intent: If a campaign blends high- and low-intent traffic, split it. Give the ad platform a cleaner environment to relearn the right patterns.
  • Refresh creative to repel the wrong users: Creative is a signal. If the wrong people are clicking, your ads are attracting them. Update imagery, language, and value props to realign intent.
  • Let the system stabilize before making another change: After a correction, give the campaign 5-10 days to settle. Overcorrecting creates more drift.

Your job isn’t to fight automation in Google Ads – it’s to guide it. 

Drift happens when the machine is left unsupervised with weak or conflicting signals. Strong signal hygiene keeps the system aligned with your real business outcomes.

Once you can detect drift and correct it quickly, you’re finally in a position to build a signal strategy that compounds over time instead of constantly resetting.

The next step is structuring your ad account so every signal reinforces the outcomes you actually want.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

Building a strategy that actually works in 2026 with signals

If you want to build a signal strategy that becomes a competitive advantage, you have to start with the foundations.

For lead gen

Implement offline conversion imports. The difference between optimizing for a “form fill” and a “$50K closed deal” is the difference between wasting budget and growing a business. 

When “journey-aware bidding” eventually rolls out, it will be a game-changer because we can feed more data about the individual steps of a sale.

For ecommerce

Use value-based bidding. Don’t just count conversions. Differentiate between a customer buying a $20 accessory and one buying a $500 hero product.

Segment your data

Don’t just dump everyone into one list. A list of 5,000 recent purchasers is worth far more than 50,000 people who visited your homepage two years ago. 

Stale data hurts performance by teaching the algorithm to find people who matched your business 18 months ago, not today.

Separate brand and nonbrand campaigns

Brand traffic carries radically different intent and conversion rates than nonbrand. 

Mixing these campaigns forces the algorithm to average two incompatible behaviors, which muddies your signals and inflates your ROAS expectations. 

Brand should be isolated so it doesn’t subsidize poor nonbrand performance or distort bidding decisions in the ad platform.

Don’t mix high-ticket and low-ticket products under one ROAS target

A $600 product and a $20 product do not behave the same in auction-time bidding. 

When you put them in the same campaign with a single 4x ROAS target, the algorithm resolves the conflict by chasing whichever sales hit that target most easily – usually the cheap product. 

This trains the system away from your hero products and toward low-value volume.

Centralize campaigns for data density, but only when the data belongs together

Google’s automation performs best when it has enough data to be consistent and high-quality data to recognize patterns. That means fewer, stronger campaigns are better as long as the signals inside them are aligned. 

Centralize campaigns when products share similar price points, margins, audiences, and intent. Decentralize campaigns when mixing them would pollute the signal pool.
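The centralize-or-split decision can be encoded as a simple assignment rule on the catalog. An illustrative Python sketch with a made-up product list and an arbitrary $100 ticket boundary – in practice the boundary should come from your own margin and auction-behavior analysis:

```python
# Toy catalog; the price-tier boundary is an illustrative judgment call.
products = [
    {"sku": "hero-cooktop", "price": 620, "margin": 0.45},
    {"sku": "hero-oven",    "price": 580, "margin": 0.40},
    {"sku": "spatula",      "price": 18,  "margin": 0.60},
    {"sku": "oven-mitt",    "price": 22,  "margin": 0.55},
]

def assign_campaign(product, ticket_boundary=100):
    """Keep high- and low-ticket products in separate campaigns so one
    ROAS target never has to average two incompatible behaviors."""
    tier = "high_ticket" if product["price"] >= ticket_boundary else "low_ticket"
    return f"shopping_{tier}"

campaigns = {}
for p in products:
    campaigns.setdefault(assign_campaign(p), []).append(p["sku"])
print(campaigns)
```

The output groups the two hero products away from the accessories, so each campaign's signal pool stays internally consistent while still being dense enough to learn from.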

The competitive advantage of 2026

When everyone has access to the same automation, the only real advantage left is the quality of the signals you feed it. 

Your job is to protect those signals, diagnose pollution early, and correct drift before the system locks onto the wrong patterns.

Once you build a deliberate signal strategy, Google’s automation stops being a constraint and becomes leverage. You stay in the loop, and the machine does the heavy lifting.
