Super Bowl 60 is here and I have guaranteed predictions | Opinion
With the joyous arrival of Super Bowl LX – which I’m told translates from Roman to “Super Bowl 60” – readers the world over are again turning to me for sure-thing prognostications and sound ways to wager.
As most of you know, I’m a 12-time Super Bowl watcher with eight rings, each from atop frosted cupcakes purchased for the big-game party. My NFL predictions, to the best of my knowledge, have never been wrong, as I do not believe in criticism and delete all angry emails without reading them.
My Super Bowl picks have been called “probably safer than Bitcoin, maybe” and “a small tick above setting money on fire.”
So with that, let’s get to my ironclad forecasts as the Seattle Superb-hawks take on the New England Patriots in America’s favorite mix of capitalism and violence – the Super Bowl.
I have it on good authority that this year’s game at Levi’s Stadium in Santa Clara, California, will involve literally dozens of large human men racing around a field after a ball that – in a real twist from one’s standard sense of “ball” – is oblong, more an air-filled egg than a spherical bouncy thing.
Some betting lines would have you believe this is the year the Super Bowl “football” is replaced with a proper spherical model, but my money is on the ball remaining a prolate spheroid. Bank on it.
Every Super Bowl seems to involve a beer commercial from Budweiser featuring Clydesdales, a type of horse that, to the best of my knowledge, knows literally nothing about brewing beer.
These commercials tend to grab at the heart, either via endearing visual scenes or the music the Clydesdales tromp along to. Having exhausted all charming horse-related scenarios, this is the year I predict Budweiser will dispatch roughly 135 million Clydesdales, one to each U.S. household, each accompanied by a band playing music that will make everyone cry.
It will still have nothing to do with beer, but everyone will get drunk, so the horses don’t feel they’ve wasted their time.
This is one of the best bets you can place for this or any Super Bowl. At some point during the game, regardless of what is happening or what has happened and for no logical reason, one attendee will blurt out, without anyone asking, that Americans buy more salsa than ketchup each year.
This has been a fact for more than three decades, but that won’t stop your neighbor Phil from acting like he’s the king of breaking news. SHUT UP, PHIL! THIS WASN’T INTERESTING IN THE EARLY 1990s AND IT AIN’T INTERESTING NOW!
This is where I’m putting all my money this year. The Super Bowl halftime show was actually invented in 1967 so football fans would have an additional dumb thing to argue about. This year, I expect the stupidity to reach a fever pitch. Take the over.
Opinion: WARNING ‒ Bad Bunny's Super Bowl show will turn you woke
Bad Bunny, one of the biggest pop stars on the planet, will perform his Spanish-language hits at the half. A good parlay would be how many times someone complains, “I can’t understand what he’s saying!” or the number of people who learn, for the first time, that people like Bad Bunny, who were born in Puerto Rico, are U.S. citizens. Again, take the over.
If there’s one thing President Donald Trump hates, it’s not being the center of attention, so anytime something significant like the Super Bowl happens, he finds a way to draw the spotlight to himself, usually by saying or doing something horrible.
This year will be no different, and might be far worse than usual. Expect that by the end of halftime, the leader of the free world will have posted something uncouth about the game itself (TOO WOKE!) or about Bad Bunny’s performance (RADICAL LEFTIST PERFORMANCE! SAD!).
The MAGA response to Bad Bunny’s utterly noncontroversial halftime show is an alternative halftime event featuring Kid Rock and some other people who can apparently sing, or something. While many doubt anyone will watch this sad display of xenophobia, I’m predicting an upset. There will be literally tens of people, possibly reaching into the low dozens, watching this streaming nonevent.
A favorable parlay here is that technical difficulties will make it nearly impossible to hear that one song you’ve never heard by that singer you’ve never seen before.
And that’s it. Enjoy your winnings, folks. And enjoy the big game, which I’m betting will be won by one of the two teams competing, by a score that is slightly higher than the other team’s score.
Follow USA TODAY columnist Rex Huppke on Bluesky at @rexhuppke.bsky.social and on Facebook at facebook.com/RexIsAJerk
This article originally appeared on USA TODAY: Super Bowl 60 is here and I have guaranteed predictions | Opinion
Samsung is preparing to launch the Galaxy S26 series in late February 2026. Meanwhile, if you are thinking of buying a premium phone, the Galaxy S25 is still a very good option, and many buyers may find it a better choice than the new Galaxy S26. Here’s why.
The main reason to choose the Galaxy S25 is the processor. The phone ships with the Snapdragon 8 Elite chip worldwide, and Snapdragon chips are fast, smooth, and easy on the battery.
The Galaxy S26, on the other hand, will reportedly use Exynos chips in Asia, Africa, and Europe. Exynos chips are often slower and use more battery. Samsung claims better performance this time, but if you want a phone that works fast and lasts long, the Galaxy S25 is the safer choice.

Also, the Galaxy S25 is cheaper than the Galaxy S26 will be. Because it is last year’s model, you can get a premium phone without paying a lot of money, and ahead of the new Galaxy S series launch, Samsung is offering great deals on Galaxy S25 series phones.
The Galaxy S25’s cameras are very good and take nice photos and videos, just like the S26’s will. In fact, the upcoming Galaxy S26 is reported to feature the same camera specs as the Galaxy S25.
As for design, the Galaxy S26 looks quite familiar except for the rear camera island, which is expected to adopt a look similar to Samsung’s latest foldables. For most people, the Galaxy S25 has almost everything they need in a premium phone.
The Galaxy S25 is still a smart buy. It has a fast Snapdragon processor, good cameras, a familiar design, and a lower price. The Galaxy S26 may have some new features, but the S25 is better for people who want a reliable, easy-to-use, and affordable phone.
The post Samsung Galaxy S25 is still a great buy over the S26 for many appeared first on Sammy Fans.
I have been in the Samsung club for a long time. Every Ultra since the S22, every Foldable from the Fold4 up to the current 7, I have owned them all. I use these things for everything from long work trips to late-night report preparation. I love the tech, but I am getting tired. Spending $1,000 on a phone should buy you big changes. Minor yearly upgrades just don’t justify that high price tag.
With Unpacked coming up on Feb 25, the leaks are all over the place. Faster 60W charging, rounder corners, a new privacy screen… okay, cool. But it feels like more of the same. If Samsung wants me to hit that “buy” button without thinking twice, the company needs to actually listen to what users like me are asking for.
1. Give us a real battery
Samsung should stop with the 5,000mAh cap. It’s been the same for years – seven in a row now. Between the massive screens, multitasking, and all the AI stuff running in the background, I’m reaching for a charger by 7 PM. Chinese smartphone brands are already hitting 6,000 or 7,000mAh by using denser silicon-carbon cells. Why is Samsung still stuck in 2022? I’ll take the slightly faster 60W charging, sure, but I’d trade it for a bigger battery that actually lasts two full days. There’s even a rumor that Apple’s 2026 iPhone will include a 5,500mAh battery.
2. A “Pro” size for normal humans
The 6.9-inch screen is amazing for movies, but it’s a total brick in every pocket. I wish Samsung would do what Apple does: give us a smaller 6.5- or 6.7-inch Ultra with the same cameras and chip, more like a Pro and a Pro Max. Honestly? I barely use the S Pen anymore, except for drawing. I’d gladly trade away all those extra features if it meant I could finally use my phone comfortably with just one hand.

Source – Samsung Mobile Press
3. Better sensors > More Megapixels
I am done with the 200MP gimmick. Give me a 1-inch sensor instead. I am looking for a natural bokeh and less noise in the dark, not that weird “AI-smoothed” look we keep getting. If Xiaomi can fit a massive sensor in a flagship, why can’t the industry leader? Also, let us share custom camera profiles.
I’d rather download a pro’s “night mode” settings than adjust everything myself. Manually changing every setting every time is just too much work.
4. Magnets
The rumors saying there will be no built-in Qi2 magnets in the S26 are just… disappointing. Apple’s MagSafe changed the game for car mounts and stands. Having to buy a special case just to use magnets on a $1,300 flagship in 2026 feels idiotic. Just build the coils into the body and get it over with. You can do this, Sammy.
5. Colors
The “Titanium” grey/black stuff is classy, I get it, but it’s boring (for me). Bring back the fun. Give me a bright metallic orange like the S22 Ultra had, or a deep purple that is actually beautiful.
6. Overheating
The new Snapdragon chips are fast, but the Ultra still throttles and gets hot if you’re gaming or shooting 8K for more than 15 or 20 minutes. We need better cooling. If Samsung is making the phone slimmer this year, I really hope the company didn’t sacrifice the vapor chamber to do it.
Look, as a Samsung fan, I love the S Pen, and the displays are second to none. But I want to feel that “wow” factor again. If the Galaxy S26 Ultra is just a slightly thinner S25, I might actually skip a generation for the first time.

When you are paying a premium price, small tweaks just don’t cut it anymore. Are you in or out this year? Drop your thoughts on @thesammyfans X Account, and let’s get into it.
The post Samsung Galaxy S26 Ultra is coming but is it enough? Check wishlist appeared first on Sammy Fans.
I am a Samsung guy. I love the S Pen, I love the ridiculous screens, and I usually think Apple is about three years late to every hardware party. But the 2026 leaks are starting to look… different. If the iPhone 18 Pro Max actually hits the marks we are hearing about, Samsung might face a real fight with Apple in 2026. Let’s talk about the stuff that actually matters.
The Battery Gap:
We have all been making fun of Apple’s “efficient” (tiny) batteries for a decade, but the tables are turning. Rumors suggest the iPhone 18 Pro Max is jumping to a 5,200mAh cell. Meanwhile, the S26 Ultra looks like it’s sticking to 5,000mAh for what feels like the 10th year (actually 7th) in a row.
I know capacity isn’t everything, but when you pair that bigger battery with Apple’s new 2nm A20 chipset, the gains are going to be stupid. We are talking “forgot to charge it last night and it doesn’t matter” levels of battery life.
Samsung is supposedly bumping the S26 Ultra up to 60W charging, which is great, but I’d honestly rather have a phone that just stays alive longer in the first place. You want that too, right?
Specs vs. Real Life Cameras:
Samsung always wins the megapixel war – 200MP is a cool number to put on a box – but Apple is finally joining the variable aperture game for 2026. This is the tech that actually makes photos look like they came from a real camera or DSLR: better natural blur, better low-light shots, and none of that weird, fake-smooth look that Samsung’s AI sometimes overdoes.
Also, can we talk about the under-display Face ID? Rumors say Apple will tuck Face ID under the display in 2026, leaving just a tiny punch-hole for the camera. Samsung has been doing under-display cameras on the Fold for a while, but let’s be honest: it usually looks like a screen door to me. If Apple manages to hide the sensors without ruining the display quality, the S26 Ultra is going to look a bit dated with its centered hole-punch.
Software Update Problems:
Samsung made a big deal about seven years of updates, and then Apple is like “hold my beer” and pushed a software update for an iPhone older than 11 years. For most of us, that doesn’t matter because we trade in every two or three years anyway. But for resale value? It’s huge. It makes an old iPhone worth way more than an old Galaxy.
More:
Apple’s in-house C2 modem is basically the chip that handles your phone’s internet connection (like 5G). Apple is making this one itself instead of buying it from Qualcomm (the company that usually supplies these chips for iPhones). This C2 modem should make the 5G faster and more stable. It will support super-quick mmWave 5G (the ultra-fast type you get in big cities, stadiums, or busy spots).
Rumors say it will also use less battery power and work more smoothly with the rest of the iPhone’s parts because Apple designed everything together. This could mean better signal in tough spots, quicker downloads, and your phone lasting longer on the same charge compared to older Qualcomm modems.
It’s like Apple finally building its own “engine” for the internet instead of using someone else’s, and early leaks suggest it could be more efficient and reliable.

My Take:
I am not switching to iOS anytime soon. I need my customization, but Samsung can’t just trick me with “big screen and a stylus” anymore. The S26 Ultra needs to be more than a spec bump. If Apple delivers a 2nm chip, a bigger battery, and a camera that actually rivals a DSLR, the “iPhone is boring” era might officially be over.
What do you think? Is 60W charging enough to keep you on Team Galaxy, or is that 5,200mAh iPhone actually attracting you? Drop a comment on @thesammyfans X account.
The post Apple is finally playing Samsung’s game and they might be winning appeared first on Sammy Fans.
Feb. 6—JAMESTOWN — Bismarck Legacy was up 10 points on the Blue Jay girls basketball team in the second half.
The Sabers couldn't hold onto the lead.
"We put up a strong team win, showing resilience and composure after falling behind .... in the second half," said JHS head coach Andy Skunberg.
The Blue Jays not only came back but ended up defeating Legacy 61-54 Friday night. Final stats were not available when The Jamestown Sun went to press.
Jamestown is now 6-6 in the West Region standings. The next game for the Jays is scheduled for Feb. 10 against Minot. Tipoff is set for 7 p.m. at Jerry Meyer Arena.
"Facing a well-coached Legacy squad that played with relentless effort, we responded with grit on both ends of the floor," Skunberg said. "The performance was one as coaches we are really proud of, while also serving as a reminder that continued improvement is needed in all areas as the team prepares for a challenging week ahead."
Feb. 6—The Irish goodbye is the most effective strategy I've found for exiting a social situation.
For those who are unaware, the explanation is simple — you just leave. It's that easy.
Maybe it's ducking out the side door of a building where everyone's gathered near the front, maybe it's dialing up an Uber to secretly take you away from an afterparty, maybe it's just deciding enough is enough and it's time to leave.
It works wonders, but there is one potential hazard. There's always the danger of that one person seeing you leave, who then announces your departure to everyone else and thus spoils the perfect escape plan.
In this case, I am both people.
By the time you read this, my time as the Aiken Standard sports editor will be over. After more than 11 years on staff full-time, not to mention my years as a part-timer before that, it's finally time to step aside. And I owe entirely too many thanks to too many people to silently walk out the side door one last time.
It was not a decision that was made lightly. When I took this job in December of 2014, I swore this was the only place I'd ever cover local sports. That's not because I thought I wouldn't do a good job elsewhere, but more so because I knew there was no other place in the world I'd be more invested in my coverage than here in Aiken.
I grew up here. I went to school here. I played sports here. I hoped my coaches wouldn't report my scores to this very paper. I grabbed the sports section every day when I came home from school, noted aloud how the paper favored our rival and hated us and mustered every bit of creative energy I had to call it the Sub-Standard. As it turns out, those ideas are both timeless.
I remember coming home from football games on Friday nights and rushing to turn on one of the local news channels to see how all of the other games went — back when you couldn't just get real-time updates live on your phone — and then reading the recaps in the next morning's paper. As it turns out, guys like Rob Gantt and Kenton Makin were teaching me how to do the job before I even knew it.
After tooling around on various hot-take blogs, thinking I was going to be the next Rick Reilly, I answered an ad in the paper heading into the 2009 season because the Standard was in dire need of help — longtime readers may remember that was the season we sent teams of two out to every single local football game, netbooks in hand, to provide live online stat updates from kickoff to the final whistle.
I couldn't have gotten luckier with my assignment from the great Cam Huffman. I was paired with the late legend Rob Novit, my first mentor in this business that became my career, and our team to cover was Williston-Elko. He drove and took photos, and I kept the stats and wrote the stories, most of them from his passenger seat as we drove back much earlier than the others after a running clock in the second half because the Blue Devils were so dominant during a 14-0 season that culminated in the Class A, Division II state championship.
Maybe it's only fitting that my 16th season — I missed 2010 due to illness — covering high school football ended with Strom Thurmond bringing home the Class AA crown.
So much happened in between. Coaches became friends. Friends and classmates became coaches. Kids I covered as high-schoolers became friends as adults. More recently, in a jarring "this is 40" wake-up call, my friends' kids started showing up on varsity rosters.
Some of the best stories to write were about our state champions — shoutout to the North Augusta girls' basketball team for giving me plenty of practice on that front. I've been lucky enough to have written about state champions, national champions, world champions, current professionals and future ones, games with 30 spectators and ones with closer to 100,000.
One of the biggest perks for a sports writer working in this part of the world is the opportunity to cover the Masters Tournament, and the adrenaline rush that comes from writing that final-round story on Sunday in the Augusta National media building is hard to beat — but the same can be said for absolute madhouses like the state semifinals between the Ridge Spring-Monetta and Wagener-Salley football teams in 2019, the Barnwell and Silver Bluff football teams in 2021 and the Aiken and South Aiken volleyball teams in 2022. And that's just a small sample.
My favorite stories to write, though, were the hundreds of college signing stories I've written over the years. I think it's fair to say I've seen more dreams come true than a ticket-taker at Disney World, and that's the entire reason to get into this business on the local level. Sure, the high-level events carry a different type of prestige, but getting to document someone from your community who may be the first from their family to go to college, who is carrying on a family legacy, who maybe thought they'd never go to college or never get to play again — that carries a hell of a lot more weight than someone we don't know winning a title that happened to be awarded in our area.
I owe countless thanks to countless people. News outlets only stay in business, and they only continue to employ their writers, if people read. I cannot say enough how much I appreciate every single eyeball.
To Cam Huffman, Noah Feit, Jeremy Timmerman, Eric Russell and Nick Terry, it was an honor and a privilege to sit alongside you in that newsroom working all of those late nights that never really felt like work because we got along so well and shared so many laughs.
To Melissa Hanna, I could not have asked for a more welcoming first boss in the newsroom. To Larry Taylor, there's no one I'd rather stress out with late on a Saturday when everything decides to go wrong at the worst possible time.
To John Boyette, there's no one I'd rather work for — period, paragraph, end of story. I look forward to seeing you again at Palmetto Golf Club, at Whiskey Alley and at Nacho Mama's — preferably sooner rather than later.
To Mike Dawson, most of this is your fault. Keeping that column I wrote bashing Notre Dame football 20 years ago folded up in your glovebox to show to your friends only encouraged me to keep doing this. Go figure, I got it from the best storyteller I know.
Anyone who ever trusted me to tell your story, called in a score, texted in stats, emailed a tip, tagged a tweet, forwarded a Facebook post, shared a story elsewhere and, yes, even reached out to complain — thank you. Our coverage area is a big one, especially during the years I was operating as a one-man shop and greatly struggled to figure out how to divvy up my time between three counties' worth of high schools and a Division II university, so every little bit of information I received to help me do my job better and shine a brighter light on our local student-athletes was greatly appreciated — even if I didn't sound so enthusiastic on some of those less-than-happy calls.
I'm not going anywhere. I'll still be here in my hometown, just not as a full-time sports writer anymore. I'm looking forward to enjoying nights, weekends and holidays with my wife and our dogs, and I'll always be glad to see our readers out in public.
Feel free to say hello. And I promise I won't leave without saying goodbye.

Traditional ranking performance no longer guarantees that content can be surfaced or reused by AI systems. A page can rank well, satisfy search intent, and follow established SEO best practices, yet still fail to appear in AI-generated answers or citations.
In most cases, the issue isn’t content quality. It’s that the information can’t reliably be extracted once it’s parsed, segmented, and embedded by AI retrieval systems.
This is an increasingly common challenge in AI search. Search engines evaluate pages as complete documents and can compensate for structural ambiguity through link context, historical performance, and other ranking signals.
AI systems don’t.
They operate on raw HTML, convert sections of content into embeddings, and retrieve meaning at the fragment level rather than the page level.
When key information is buried, inconsistently structured, or dependent on rendering or inference, it may rank successfully while producing weak or incomplete embeddings.
At that point, visibility in search and visibility in AI diverges. The page exists in the index, but its meaning doesn’t survive retrieval.
Traditional search operates on a ranking system that selects pages. Google can evaluate a URL using a broad set of signals – content quality, E-E-A-T proxies, link authority, historical performance, and query satisfaction – and reward that page even when its underlying structure is imperfect.
AI systems often operate on a different representation of the same content. Before information can be reused in a generated response, it’s extracted from the page, segmented, and converted into embeddings. Retrieval doesn’t select pages – it selects fragments of meaning that appear relevant and reliable in vector space.
This difference is where the visibility gap forms.
A page may perform well in rankings while the embedded representation of its content is incomplete, noisy, or semantically weak due to structure, rendering, or unclear entity definition.
Retrieval should be treated as a separate visibility layer. It’s not a ranking factor, and it doesn’t replace SEO. But it increasingly determines whether content can be surfaced, summarized, or cited once AI systems sit between users and traditional search results.
Dig deeper: What is GEO (generative engine optimization)?
One of the most common AI retrieval failures happens before content is ever evaluated for meaning. Many AI crawlers parse raw HTML only. They don’t execute JavaScript, wait for hydration, or render client-side content after the initial response.
This creates a structural blind spot for modern websites built around JavaScript-heavy frameworks. Core content can be visible to users and even indexable by Google, while remaining invisible to AI systems that rely on the initial HTML payload to generate embeddings.
In these cases, ranking performance becomes irrelevant. If content never embeds, it can’t be retrieved.
The simplest way to test whether content is available to AI crawlers is to inspect the initial HTML response, not the rendered page in a browser.
Using a basic curl request allows you to see exactly what a crawler receives at fetch time. If the primary content doesn’t appear in the response body, it won’t be embedded by systems that don’t execute JavaScript.
To do this, open a terminal (or Command Prompt on Windows) and run the command shown below:
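A minimal check looks like this, with GPTBot used as the example user agent and example.com standing in for your own domain:

curl -A "GPTBot" https://www.example.com/

Swapping the -A value for a standard browser user agent string makes it easy to compare what human visitors receive against what AI crawlers see.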


Running a request with an AI user agent (like “GPTBot”) often exposes this gap. Pages that appear fully populated to users can return nearly empty HTML when fetched directly.
From a retrieval standpoint, content that doesn’t appear in the initial response effectively doesn’t exist.
This can also be validated at scale using tools like Screaming Frog. Crawling with JavaScript rendering disabled surfaces the raw HTML delivered by the server.
If primary content only appears when JavaScript rendering is enabled, it may be indexable by Google while remaining invisible to AI retrieval systems.
Visibility issues don’t stop at “Is the content returned?” Even when content is technically present in the initial HTML, excessive markup, scripts, and framework noise can interfere with extraction.
AI crawlers don’t parse pages the way browsers do. They skim quickly, segment aggressively, and may truncate or deprioritize content buried deep within bloated HTML. The more code surrounding meaningful text, the harder it is for retrieval systems to isolate and embed that meaning cleanly.
This is why cleaner HTML matters. The clearer the signal-to-noise ratio, the stronger and more reliable the resulting embeddings. Heavy code does not just slow performance. It dilutes meaning.
The most reliable way to address rendering-related retrieval failures is to ensure that core content is delivered as fully rendered HTML at fetch time.
In practice, this can usually be achieved in one of two ways:
Pre-rendered HTML
Pre-rendering is the process of generating a fully rendered HTML version of a page ahead of time, so that when AI crawlers arrive, the content is already present in the initial response. No JavaScript execution is required, and no client-side hydration is needed for core content to be visible.
This ensures that primary information – value propositions, services, product details, and supporting context – is immediately accessible for extraction and embedding.
AI systems don’t wait for content to load, and they don’t resolve delays caused by script execution. If meaning isn’t present at fetch time, it’s skipped.
The most effective way to deliver pre-rendered HTML is at the edge layer. The edge is a globally distributed network that sits between the requester and the origin server. Every request reaches the edge first, making it the fastest and most reliable point to serve pre-rendered content.

When pre-rendered HTML is delivered from the edge, AI crawlers receive a complete, readable version of the page instantly. Human users can still be served the fully dynamic experience intended for interaction and conversion.
This approach doesn’t require sacrificing UX in favor of AI visibility. It simply delivers the appropriate version of content based on how it’s being accessed.
From a retrieval standpoint, this tactic removes guesswork, delays, and structural risk. The crawler sees real content immediately, and embeddings are generated from a clean, complete representation of meaning.
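As a rough illustration, the routing decision at the edge can be as simple as checking the requesting user agent and proxying known AI crawlers to a pre-rendered snapshot. The TypeScript sketch below assumes a generic, Workers-style edge runtime; the crawler list, snapshot host, and paths are placeholder assumptions, not a prescribed implementation.

// Edge function sketch: serve pre-rendered HTML to AI crawlers, the dynamic app to everyone else.
// The crawler patterns and snapshot host below are illustrative assumptions.
const AI_CRAWLERS = [/GPTBot/i, /ClaudeBot/i, /PerplexityBot/i, /Google-Extended/i];

export default {
  async fetch(request: Request): Promise<Response> {
    const userAgent = request.headers.get("user-agent") ?? "";
    const isAICrawler = AI_CRAWLERS.some((pattern) => pattern.test(userAgent));

    if (isAICrawler) {
      // AI crawlers get the pre-rendered snapshot so core content exists in the initial HTML.
      const url = new URL(request.url);
      return fetch(`https://prerender.example.com${url.pathname}`);
    }

    // Humans (and rendering crawlers) still get the fully dynamic experience from the origin.
    return fetch(request);
  },
};

In practice the snapshot would come from a pre-rendering service or a build-time static export, but the decision point is the same: the crawler receives complete, readable HTML on the first response.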
Clean initial content delivery
Pre-rendering isn’t always feasible, particularly for complex applications or legacy architectures. In those cases, the priority shifts to ensuring that essential content is available in the initial HTML response and delivered as cleanly as possible.
Even when content technically exists at fetch time, excessive markup, script-heavy scaffolding, and deeply nested DOM structures can interfere with extraction. AI systems segment content aggressively and may truncate or deprioritize text buried within bloated HTML.
Reducing noise around primary content improves signal isolation and results in stronger, more reliable embeddings.
From a visibility standpoint, the impact is asymmetric. As rendering complexity increases, SEO may lose efficiency. Retrieval loses existence altogether.
These approaches don’t replace SEO fundamentals, but they restore the baseline requirement for AI visibility: content that can be seen, extracted, and embedded in the first place.
Many pages fail AI retrieval not because content is missing, but because meaning is underspecified. Traditional SEO has long relied on keywords as proxies for relevance.
While that approach can support rankings, it doesn’t guarantee that content will embed clearly or consistently.
AI systems don’t retrieve keywords. They retrieve entities and the relationships between them.
When language is vague, overgeneralized, or loosely defined, the resulting embeddings lack the specificity needed for confident reuse. The content may rank for a query, but its meaning remains ambiguous at the vector level.
This issue commonly appears in pages that rely on broad claims, generic descriptors, or assumed context.
Statements that perform well in search can still fail retrieval when they don’t clearly establish who or what’s being discussed, where it applies, or why it matters.
Without explicit definition, entity signals weaken and associations fragment.
AI systems don’t consume content as complete pages.
Once extracted, sections are evaluated independently, often without the surrounding context that makes them coherent to a human reader. When structure is weak, meaning degrades quickly.
Strong content can underperform in AI retrieval, not because it lacks substance, but because its architecture doesn’t preserve meaning once the page is separated into parts.
Headers do more than organize content visually. They signal what a section represents. When heading hierarchy is inconsistent, vague, or driven by clever phrasing rather than clarity, sections lose definition once they’re isolated from the page.
Entity-rich, descriptive headers provide immediate context. They establish what the section is about before the body text is evaluated, reducing ambiguity during extraction. Weak headers produce weak signals, even when the underlying content is solid.
Dig deeper: The most important HTML tags to use for SEO success
Sections that try to do too much embed poorly. Mixing multiple ideas, intents, or audiences into a single block of content blurs semantic boundaries and makes it harder for AI systems to determine what the section actually represents.
Clear sections with a single, well-defined purpose are more resilient. When meaning is explicit and contained, it survives separation. When it depends on what came before or after, it often doesn’t.
Even when content is visible, well-defined, and structurally sound, conflicting signals can still undermine AI retrieval. This typically appears as embedding noise – situations where multiple, slightly different representations of the same information compete during extraction.
Common sources include:
When multiple URLs expose highly similar content with inconsistent or competing canonical signals, AI systems may encounter and embed more than one version. Unlike Google, which reconciles canonicals at the index level, retrieval systems may not consolidate meaning across versions.
The result is semantic dilution, where meaning is spread across multiple weaker embeddings instead of reinforced in one.
Variations in titles, descriptions, or contextual signals across similar pages introduce ambiguity about what the content represents. These meta tag inconsistencies can lead to multiple, slightly different embeddings for the same topic, reducing confidence during retrieval and making the content less likely to be selected or cited.
Reused content blocks, even when only slightly modified, fragment meaning across pages or sections. Instead of reinforcing a single, strong representation, repeated content competes with itself, producing multiple partial embeddings that weaken overall retrieval strength.
Google is designed to reconcile these inconsistencies over time. AI retrieval systems aren’t. When signals conflict, meaning is averaged rather than resolved, resulting in diluted embeddings, lower confidence, and reduced reuse in AI-generated responses.
SEO has always been about visibility, but visibility is no longer a single condition.
Ranking determines whether content can be surfaced in search results. Retrieval determines whether that content can be extracted, interpreted, and reused or cited by AI systems. Both matter.
Optimizing for one without the other creates blind spots that traditional SEO metrics don’t reveal.
The visibility gap occurs when content ranks and performs well yet fails to appear in AI-generated answers because it can’t be accessed, parsed, or understood with sufficient confidence to be reused. In those cases, the issue is rarely relevance or authority. It’s structural.
Complete visibility now requires more than competitive rankings. Content must be reachable, explicit, and durable once it’s separated from the page and evaluated on its own terms. When meaning survives that process, retrieval follows.
Visibility today isn’t a choice between ranking or retrieval. It requires both – and structure is what makes that possible.

PR measurement often breaks down in practice.
Limited budgets, no dedicated analytics staff, siloed teams, and competing priorities make it difficult to connect media outreach to real outcomes.
That’s where collaboration with SEO, PPC, and digital marketing teams becomes essential.
Working together, these teams can help PR do three things that are hard to accomplish alone:
This article lays out a practical way to do exactly that, without an enterprise budget or a data science team.

One of the biggest reasons PR measurement breaks down is the lingering assumption that communication follows a straight line: message → media → coverage → impact.
In reality, modern digital communication behaves more like a loop. Audiences discover content through search, social, AI-generated answers, and media coverage – often in unpredictable sequences. They move back and forth between channels before taking action, if they take action at all.
That’s why measurement must start by defining the response sought, not by counting outputs.
SEO and PPC professionals are already fluent in this way of thinking. Their work is judged not by impressions alone, but by what users do after exposure: search, click, subscribe, download, convert.
PR measurement becomes dramatically more actionable when it adopts the same mindset.
PR teams are often asked a frustrating question by executives: “That’s great coverage – but what did it actually do?”
The answer usually exists in the data. It’s just spread across systems owned by different teams.
SEO and paid media teams already track:
By integrating PR activity into this measurement ecosystem, teams can connect earned media to downstream behavior.
Practical examples
Tools like Google Analytics 4, Adobe Analytics, and Piwik PRO make this feasible – even for small teams – by allowing PR touchpoints to be analyzed alongside SEO and PPC data.
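One lightweight way to do this – offered purely as an illustrative convention, not a prescribed taxonomy – is to tag the links used in press releases and other owned PR distribution so earned-media referrals show up in GA4 alongside paid and organic traffic:

https://www.example.com/report?utm_source=press-release&utm_medium=earned&utm_campaign=q1-product-launch

Any consistent source/medium/campaign scheme works, as long as the PR, SEO, and PPC teams agree on it before coverage goes live.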
This reframes PR from a cost center to a demand-creation channel.
Matt Bailey, a digital marketing author, professor, and instructor, said:
Dig deeper: SEO vs. PPC vs. AI: The visibility dilemma
Most communications professionals now accept that SEO matters.
What’s less widely understood is how it should be measured in a PR context – and how that measurement is changing.
Traditional PR metrics focus on:
SEO-informed PR adds new outcome-level indicators:
These metrics answer a more strategic question: “Did this coverage improve our long-term discoverability?”
Enter GEO. As audiences shift from blue-link search results to conversational AI platforms, measurement must evolve again.
Generative engine optimization (also called answer engine optimization) focuses on whether your content becomes a source for AI-generated answers – not just a ranked result.
For PR and communications teams, this is a natural extension of credibility building:
Tools like Profound, the Semrush AI Visibility Toolkit, and Conductor’s AI Visibility Snapshot now provide early visibility into this emerging layer of search measurement.
The implication is clear: PR measurement is no longer just about visibility – it’s about influence over machine-mediated narratives.
David Meerman Scott, the best-selling author of “The New Rules of Marketing and PR,” shared:
Dig deeper: A 90-day SEO playbook for AI-driven search visibility
One reason measurement feels overwhelming is tool overload. The solution isn’t more software – it’s better alignment between goals and tools.
A useful framework is to work backward from the action you want audiences to take.
If the response sought is awareness or understanding:
If the response sought is engagement or behavior:
If the response sought is long-term influence:
The key is resisting the temptation to measure everything. Measure what aligns with strategy – and ignore the rest.
Katie Delahaye Paine, the CEO of Paine Publishing, publisher of The Measurement Advisor, and “Queen of Measurement,” said:
Dig deeper: 7 hard truths about measuring AI visibility and GEO performance
PR teams don’t need to become SEO experts overnight. And SEO teams don’t need to master media relations.
What’s required is shared ownership of outcomes.
When these groups collaborate:
This reduces duplication, saves budget, and produces insights that no single team could generate alone.
Nearly 20 years ago, Avinash Kaushik proposed the 10/90 rule: spend 10% of your analytics budget on tools and 90% on people.
Today, tools are cheaper – or free – but the rule still holds.
The most valuable asset isn’t software. It’s professionals who can:
Teams that begin experimenting now – especially with SEO-driven PR measurement and GEO – will have a measurable advantage.
Those who wait for “perfect” frameworks or universal standards may find they need to explain why they’re making a “career transition” or “exploring new opportunities.”
I’d rather learn how to effectively measure, evaluate, and report on my communications results than try to learn euphemisms for being a victim of rightsizing, restructuring, or a reduction in force.
Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts
The purpose of PR measurement isn’t to justify budgets after the fact. It’s to make smarter decisions before the next campaign launches.
By integrating SEO and GEO into PR measurement programs, communications professionals can finally close the loop between media outreach and real-world impact – without abandoning the principles they already know.
The theory hasn’t changed.
The opportunity to measure what matters is finally catching up.

There’s a dangerous misconception in B2B marketing that video is just a “brand awareness” play. We tend to bucket video into two extremes:
This binary thinking is breaking your pipeline.
In my role at LinkedIn, I have access to a unique view of the B2B buying ecosystem. What the data shows is that the most successful companies don’t treat video as a tactic for one stage of the funnel. They treat it as a multiplier.
When you integrate video strategy across the entire buying journey – connecting brand to demand – effectiveness multiplies, driving as many as 1.4x more leads.
Here’s the strategic framework for building that system, backed by new data on how B2B buyers actually make decisions.
The window to influence a deal closes much earlier than most marketers realize.
LinkedIn’s B2B Institute calls this the “first impression rose.” Like the reality TV show “The Bachelor,” if you don’t get a rose in the first ceremony, you’re unlikely to make it to the finale.
Research from LinkedIn and Bain & Company found 86% of buyers already have their choices predetermined on “Day 1” of a buying cycle. Even more critically, 81% ultimately purchase from a vendor on that Day 1 list.
If your video strategy waits until the buyer is “in-market” or “ready to buy” to show up, you’re fighting over the remaining 19% of the market. To win, you need to be on the shortlist before the RFP is even written.
That requires a three-play strategy.
Most video strategies target the “champion,” the person who uses the tool or service. But in B2B, the champion rarely holds the checkbook.
Consider this scenario. You’ve spent months courting the VP of marketing. They love your solution. They’re ready to sign.
But when they bring the contract to the procurement meeting, the CFO looks up and asks: “Who are they? Why haven’t I heard of them?”
In that moment, the deal stalls. You’re suddenly competing on price because you have zero brand equity with the person controlling the budget.

Our data shows you’re more than 20 times more likely to be chosen when the entire buying group – not just the user – knows you on Day 1.
To reach that broader group, you can’t just be present. You have to be memorable. You need reach and recall, both.
LinkedIn data reveals exactly what “cut-through creative” looks like in the feed:
Dig deeper: 5 tips to make your B2B content more human
This is where most B2B content fails. We focus on selling capability (features, specs, speeds, feeds) and rarely focus on buyability (how safe it is to buy us).
When a B2B buyer is shortlisting vendors, they’re navigating career risk.
Our research with Bain & Company found the top five “emotional jobs” a buyer needs to fulfill. Only two were about product capability.

The No. 1 emotional job (at 34%) was simply, “I felt I could defend the decision if it went wrong.”
To drive consideration, your video content shouldn’t be a feature dump. It should be a safety net. What does that actually look like?
Momentum is safety (the “buzz” effect)
Buyers want to bet on a winner. Our data shows brands generate 10% more leads when they build momentum through “buzz.”
You can manufacture this buzz through cultural coding. When brands reference pop culture, we see a 41% lift in engagement.
When they leverage memes (yes, even in B2B), engagement can jump by 111%. It signals you’re relevant, human, and part of the current conversation.
Authority builds trust (the “expert” effect)
If momentum catches their eye, expertise wins their trust. But how you present that expertise matters.
Video ads featuring executive experts see 53% higher engagement.
When those experts are filmed on a conference stage, engagement lifts by 70%.
Why? The setting implies authority. It signals, “This person is smart enough that other people paid to listen to them.”
Consistency is credibility
You can’t “burst” your way to trust. Brands that maintain an always-on presence see 10% more conversions than those that stop and start. Trust is a cumulative metric.
Dig deeper: The future of B2B authority building in the AI search era
By this stage, the buyer knows you (Play 1) and trusts you (Play 2).
Don’t use your bottom-funnel video to “hard sell” them. Use it to remove the friction of the next step.
Buyers at this stage feel three specific types of risk: execution risk, decision risk, and effort risk.
That’s why recommendations, relationships, and being relatable help close deals.

Your creative should directly answer those anxieties.
Scale social proof – kill execution risk
90% of buyers say social proof is influential information. But don’t just post a logo.
Use video to show the peer. When a buyer sees someone with their exact job title succeeding, decision risk evaporates.
Activate your employees – kill decision risk
People trust people more than logos. Startups that activate their employees see massive returns because it humanizes the brand.
The stat that surprises most leaders: just 3% of employees posting regularly can drive 20% more leads, per LinkedIn data.
Show the humans who’ll answer the phone when things break.
The conversion combo – kill effort risk
Don’t leave them hanging with a generic “Learn More” button.
We see 3x higher lead gen open rates when video ads are combined directly with lead gen forms.
The video explains the value, the form captures the intent instantly.
Dig deeper: LinkedIn’s new playbook taps creators as the future of B2B marketing
If this strategy is so effective, why isn’t everyone doing it? The problem isn’t usually budget or talent. It’s structure.
In most organizations, “brand” teams and “demand” teams operate in silos.
They fight over budget and rarely coordinate creative.
This fragmentation kills the multiplier effect.
When you break down those silos and run these plays as a single system, the data changes.
Our modeling shows an integrated strategy drives 1.4x more leads than running brand and demand in isolation.
It creates a flywheel:
The brands that balance the funnel – investing in memory and action – are the ones that make the “Day 1” list.
And the ones on that list are the ones that win the revenue.
The 2026 boxing calendar has kicked off in style, as Shakur Stevenson thrust his name into pound-for-pound contention, whilst both Dalton Smith and Josh Kelly have registered upset title wins to become Great Britain’s latest world champions. Now, the year is in full flow and we are set for another month of twists, turns and […]
The post A closer look at February’s world title fights appeared first on Boxing News.


Most PPC teams still build campaigns the same way: pull a keyword list, set match types, and organize ad groups around search terms. It’s muscle memory.
But Google’s auction no longer works that way.
Search now behaves more like a conversation than a lookup. In AI Mode, users ask follow-up questions and refine what they’re trying to solve. AI Overviews reason through an answer first, then determine which ads support that answer.
In Google Ads, the auction isn’t triggered by a keyword anymore – it’s triggered by inferred intent.
If you’re still structuring campaigns around exact and phrase match, you’re planning for a system that no longer exists. The new foundation is intent: not the words people type, but the goals behind them.
An intent-first approach gives you a more durable way to design campaigns, creative, and measurement as Google introduces new AI-driven formats.
Keywords aren’t dead, but they’re no longer the blueprint.
Here’s what’s actually happening when someone searches now.
Google’s AI uses a technique called “query fan out,” splitting a complex question into subtopics and running multiple concurrent searches to build a comprehensive response.
The auction happens before the user even finishes typing.
And crucially, the AI infers commercial intent from purely informational queries.
For instance, someone asks, “Why is my pool green?” They’re not shopping. They’re troubleshooting.
But Google’s reasoning layer detects a problem that products can solve and serves ads for pool-cleaning supplies alongside the explanation. While the user didn’t search for a product, the AI knew they would need one.
This auction logic is fundamentally different from what we’re accustomed to. It’s not matching your keyword to the query. It’s matching your offering to the user’s inferred need state, based on conversational context.
If your campaign structure still assumes people search in isolated, transactional moments, you’re missing the journey entirely.

Dig deeper: How to build a modern Google Ads targeting strategy like a pro
An intent-first strategy doesn’t mean you stop doing keyword research. It means you stop treating keywords as the organizing principle.
Instead, you map campaigns to the why behind the search.
The same intent can surface through dozens of different queries, and the same query can reflect multiple intents depending on context.
“Best CRM” could mean either “I need feature comparisons” or “I’m ready to buy and want validation.” Google’s AI now reads that difference, and your campaign structure should, too.
This is more of a mental model shift than a tactical one.
You’re still building keyword lists, but you’re grouping them by intent state rather than match type.
You’re still writing ad copy, but you’re speaking to user goals instead of echoing search terms back at them.
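One way to make the shift concrete is to let intent states, not match types, act as the top-level grouping in the campaign plan. The TypeScript sketch below is purely illustrative; the intent labels, example queries, and messaging angles are hypothetical placeholders, not Google Ads API objects.

// Illustrative planning structure: keywords and messaging grouped by inferred intent state.
// The intent names, queries, and angles below are hypothetical examples.
type IntentState = "troubleshooting" | "comparison" | "purchase-validation";

interface IntentGroup {
  intent: IntentState;
  userGoal: string;         // the "why" behind the search
  exampleQueries: string[]; // many phrasings can map to one intent
  messagingAngle: string;   // what the ad and landing page should answer
}

const campaignPlan: IntentGroup[] = [
  {
    intent: "troubleshooting",
    userGoal: "Fix a problem the product can solve",
    exampleQueries: ["why is my pool green", "pool water cloudy after rain"],
    messagingAngle: "Explain the fix first, then position the product as the shortcut",
  },
  {
    intent: "comparison",
    userGoal: "Understand which option fits",
    exampleQueries: ["best CRM for small teams", "crm vs spreadsheet"],
    messagingAngle: "Lead with differentiators and fit, not a feature dump",
  },
  {
    intent: "purchase-validation",
    userGoal: "Confirm a decision that is already half-made",
    exampleQueries: ["best CRM", "is [brand] worth it"],
    messagingAngle: "Reduce risk: reviews, guarantees, onboarding support",
  },
];

The structure itself matters less than the discipline it enforces: every keyword, ad, and landing page hangs off a goal state rather than a match type.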
Once campaigns are organized around intent instead of keywords, the downstream implications show up quickly – in eligibility, landing pages, and how the system learns.
If you want to show up inside AI Overviews or AI Mode, you need broad match keywords, Performance Max, or the newer AI Max for Search campaigns.
Exact and phrase match still work for brand defense and high-visibility placements above the AI summaries, but they won’t get you into the conversational layer where exploration happens.
It’s not enough to list product features anymore. If your page explains why and how someone should use your product (not just what it is), you’re more likely to win the auction.
Google’s reasoning layer rewards contextual alignment. If the AI built an answer about solving a problem, and your page directly addresses that problem, you’re in.
The algorithm prioritizes rich metadata, multiple high-quality images, and optimized shopping feeds with every relevant attribute filled in.
Using Customer Match lists to feed the system first-party data teaches the AI which user segments represent the highest value.
That training affects how aggressively it bids for similar users.
Dig deeper: In Google Ads automation, everything is a signal in 2026
Even as intent-first campaigns unlock new reach, there are still blind spots in reporting, budget constraints, and performance expectations you need to plan around.
Google doesn’t provide visibility into how ads perform specifically in AI Mode versus traditional search.
You’re monitoring overall cost-per-conversion and hoping high-funnel clicks convert downstream, but you can’t isolate which placements are actually driving results.
AI-powered campaigns like Performance Max and AI Max need meaningful conversion volume to scale effectively, often 30 conversions in 30 days at a minimum.
Smaller advertisers with limited budgets or longer sales cycles face what some call a “scissors gap,” in which they lack the data needed to train algorithms and compete in automated auctions.
AI Mode attracts exploratory, high-funnel behavior. Conversion rates won’t match bottom-of-the-funnel branded searches. That’s expected if you’re planning for it.
It becomes a problem when you’re chasing immediate ROAS without adjusting how you define success for these placements.
Dig deeper: Outsmarting Google Ads: Insider strategies to navigate changes like a pro
You don’t need to rebuild everything overnight.
Pick one campaign where you suspect intent is more complex than the keywords suggest. Map it to user goal states instead of search term buckets.
Test broad match in a limited way. Rewrite one landing page to answer the “why” instead of just listing specs.
The shift to intent-first is not a tactic – it’s a lens. And it’s the most durable way to plan as Google keeps introducing new AI-driven formats.

AI is no longer an experimental layer in search. It’s actively mediating how customers discover, evaluate, and choose local businesses, increasingly without a traditional search interaction.
The real risk is data stagnation. As AI systems act on local data for users, brands that fail to adapt risk declining visibility, data inconsistencies, and loss of control over how locations are represented across AI surfaces.
Learn how AI is changing local search and what you can do to stay visible in this new landscape.

We are experiencing a platform shift where machine inference, not database retrieval, drives decisions. At the same time, AI is moving beyond screens into real-world execution.
AI now powers navigation systems, in-car assistants, logistics platforms, and autonomous decision-making.
In this environment, incorrect or fragmented location data does not just degrade search.
It leads to missed turns, failed deliveries, inaccurate recommendations, and lost revenue. Brands don’t simply lose visibility. They get bypassed.
Local search has become an AI-first, zero-click decision layer.
Multi-location brands now win or lose based on whether AI systems can confidently recommend a location as the safest, most relevant answer.
That confidence is driven by structured data quality, Google Business Profile excellence, reviews, engagement, and real-world signals such as availability and proximity.
For 2026, the enterprise risk is not experimentation. It’s inertia.
Brands that fail to industrialize and centralize local data, content, and reputation operations will see declining AI visibility, fragmented brand representation, and lost conversion opportunities without knowing why.
Here are four key ways the growth in AI search is changing the local journey:
Businesses that don’t grasp these changes quickly won’t fall behind quietly. They’ll be algorithmically bypassed.
Dig deeper: The enterprise blueprint for winning visibility in AI search
AI systems build memory through entity and context graphs. Brands with clean, connected location, service, and review data become default answers.
Local queries increasingly fall into two intent categories: objective and subjective.
This distinction matters because AI systems treat risk differently depending on intent.
For objective queries, AI models prioritize first-party sources and structured data to reduce hallucination risk. These answers often drive direct actions like calls, visits, and bookings without a traditional website visit ever occurring.
For subjective queries, AI relies more heavily on reviews, third-party commentary, and editorial consensus. This data normally comes from various other channels, such as UGC sites.
Dig deeper: How to deploy advanced schema at scale
Industry research has shown that for objective local queries, brand websites and location-level pages act as primary “truth anchors.”
When an AI system needs to confirm hours, services, amenities, or availability, it prioritizes explicit, structured core data over inferred mentions.
Consider a simple example. If a user asks, “Find a coffee shop near me that serves oat milk and is open until 9,” the AI must reason across location, inventory, and hours simultaneously.
If those facts are not clearly linked and machine-readable, the brand cannot be confidently recommended.
This is why freshness, relevance, and machine clarity, powered by entity-rich structured data, help AI systems interpret the right response.
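To make that concrete, here is a minimal JSON-LD sketch for the hypothetical coffee shop above. Every value is a placeholder, and real markup would carry the full set of location attributes, but it shows hours, location, and a specific amenity expressed as linked, machine-readable facts:

{
  "@context": "https://schema.org",
  "@type": "CafeOrCoffeeShop",
  "name": "Example Coffee Co.",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Springfield",
    "addressRegion": "IL",
    "postalCode": "62701"
  },
  "openingHoursSpecification": {
    "@type": "OpeningHoursSpecification",
    "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"],
    "opens": "07:00",
    "closes": "21:00"
  },
  "amenityFeature": {
    "@type": "LocationFeatureSpecification",
    "name": "Oat milk available",
    "value": true
  }
}

With facts like these explicitly stated, an AI system can resolve the "oat milk, open until 9" query without having to infer anything.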
Ensure your data is fresh, relevant, and clear with these tips:
Dig deeper: From search to answer engines: How to optimize for the next era of discovery
Historically, local search was managed as a collection of disconnected tactics: listings accuracy, review monitoring, and periodic updates to location pages.
That operating model is increasingly misaligned with how local discovery now works.
Local discovery has evolved into an end-to-end enterprise journey – one that spans data integrity, experience delivery, governance, and measurement across AI-driven surfaces.
Listings, location pages, structured data, reviews, and operational workflows now work together to determine whether a brand is trusted, cited, and repeatedly surfaced by AI systems.
Local 4.0 is a practical operating model for AI-first local discovery at enterprise scale. The focus of this framework is to ensure your brand is callable, verifiable, and safe for AI systems to recommend – that consumer intent can be understood, verified, and matched to your locations.
To understand why this matters, it helps to remember how local has evolved: from disconnected tactics like listings accuracy and review monitoring into the end-to-end, AI-mediated journey described above.
In an AI-mediated environment, brands are no longer merely present. They are selected, reused, or ignored – often without a click. This is the core transformation enterprise leaders must internalize as they plan for 2026.
Dig deeper: AI and local search: The new rules of visibility and ROI

Discovery in an AI-driven environment is fundamentally about trust. When data is inconsistent or noisy, AI systems treat it as a risk signal and deprioritize it.
Core elements include:

Why ‘legacy’ sources still matter
Listings act as verification infrastructure. Interestingly, research suggests that LLMs often cross-reference data against highly structured legacy directories (such as MapQuest or the Yellow Pages).
While human traffic to these sites has waned, AI systems utilize them as “truth anchors” because their data is rigidly structured and verified.
If your hours are wrong on MapQuest, an AI agent may downgrade its confidence in your Google Business Profile, viewing the discrepancy as a risk.
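A simple way to operationalize this is a cross-source consistency audit. The sketch below uses hypothetical hours and phone data from a few directories to flag discrepancies; in practice, these values would come from your listings management platform or its APIs.

```python
# Hypothetical hours and phone numbers for one location as reported by different sources.
listings = {
    "google_business_profile": {"mon_fri_close": "21:00", "phone": "+1-555-0100"},
    "mapquest":                {"mon_fri_close": "20:00", "phone": "+1-555-0100"},
    "yellow_pages":            {"mon_fri_close": "21:00", "phone": "+1-555-0199"},
}

def find_discrepancies(listings: dict) -> list[str]:
    """Compare every field across sources and report any that disagree."""
    issues = []
    fields = {field for data in listings.values() for field in data}
    for field in sorted(fields):
        values = {source: data.get(field) for source, data in listings.items()}
        if len(set(values.values())) > 1:  # more than one distinct value = conflict
            issues.append(f"{field} is inconsistent: {values}")
    return issues

for issue in find_discrepancies(listings):
    print("FLAG:", issue)
```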
Discovery is no longer about being crawled. It’s about being trusted and reused. Governance matters because ownership, workflows, and data quality now directly affect brand risk.
Dig deeper: 4 pillars of an effective enterprise AI strategy
AI systems increasingly reward data that is current, efficiently crawled, and easy to validate.
Stale content is no longer neutral. When an AI system encounters outdated information – such as incorrect hours, closed locations, or unavailable services – it may deprioritize or avoid that entity in future recommendations.
For enterprises, freshness must be operationalized, not managed manually. This requires tightly connecting the CMS with protocols like IndexNow, so updates are discovered and reflected by AI systems in near real time.
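As a rough illustration, here is a minimal sketch of an IndexNow submission that a CMS publish hook could trigger whenever a location's hours or services change. The host, key, and URLs are placeholders, and you should verify the endpoint and key-hosting requirements against the current IndexNow documentation.

```python
import requests  # third-party HTTP client; pip install requests

# Hypothetical values: the key must be a file you actually host at KEY_LOCATION.
INDEXNOW_ENDPOINT = "https://api.indexnow.org/indexnow"
HOST = "www.example.com"
KEY = "your-indexnow-key"
KEY_LOCATION = f"https://{HOST}/{KEY}.txt"

def notify_indexnow(updated_urls: list[str]) -> int:
    """Submit freshly updated location-page URLs so crawlers can re-fetch them quickly."""
    payload = {
        "host": HOST,
        "key": KEY,
        "keyLocation": KEY_LOCATION,
        "urlList": updated_urls,
    }
    response = requests.post(INDEXNOW_ENDPOINT, json=payload, timeout=10)
    return response.status_code  # 200/202 generally indicate the submission was accepted

# Example: call this from your CMS publish hook whenever hours or services change.
status = notify_indexnow(["https://www.example.com/locations/downtown"])
print("IndexNow response:", status)
```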
Beyond updates, enterprises must deliberately design for local-level engagement and signal velocity. Fresh, locally relevant content – such as events, offers, service updates, and community activity – should be surfaced on location pages, structured with schema, and distributed across platforms.
In an AI-first environment, freshness is trust, and trust determines whether a location is surfaced, reused, or skipped entirely.
Unlocking ‘trapped’ data
A major challenge for enterprise brands is “trapped” data: vital information locked inside PDFs, menu images, or static event calendars.
For example, a restaurant group may upload a PDF of their monthly live music schedule. To a human, this is visible. To a search crawler, it’s often opaque. In an AI-first era, this data must be extracted and structured.
If an agent cannot read the text inside the PDF, it cannot answer the query: “Find a bar with live jazz tonight.”
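Here is a minimal sketch of freeing that trapped data, assuming the PDF contains selectable text in a simple "date - artist" line format (a scanned flyer would need OCR first). The pypdf library and the parsing rule are illustrative choices, not a prescribed stack; the output is schema.org Event markup that an agent can actually read.

```python
import json
import re
from pypdf import PdfReader  # third-party PDF library; pip install pypdf

def extract_events(pdf_path: str, venue_name: str) -> list[dict]:
    """Pull 'YYYY-MM-DD - Artist' lines out of a schedule PDF and emit schema.org Events."""
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    events = []
    for match in re.finditer(r"(\d{4}-\d{2}-\d{2})\s*-\s*(.+)", text):
        date, artist = match.group(1), match.group(2).strip()
        events.append({
            "@context": "https://schema.org",
            "@type": "Event",
            "name": f"Live jazz: {artist}",
            "startDate": date,
            "location": {"@type": "BarOrPub", "name": venue_name},
        })
    return events

# Once structured like this, "Find a bar with live jazz tonight" becomes answerable.
events = extract_events("live_music_schedule.pdf", "Example Tap Room")
print(json.dumps(events, indent=2))
```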
Key focus areas include:
At enterprise scale, manual workflows break. Freshness is no longer tactical. It’s a competitive requirement.
Dig deeper: Chunk, cite, clarify, build: A content framework for AI search
AI does not select the best brand. It selects the location that best resolves intent.
Generic brand messaging consistently loses out to locally curated content. AI retrieval is context-driven and prioritizes specific attributes such as parking availability, accessibility, accepted insurance, or local services.
This exposes a structural problem for many enterprises: information is fragmented across systems and teams.
Solving AI-driven relevance requires organizing data as a context graph. This means connecting services, attributes, FAQs, policies, and location details into a coherent, machine-readable system that maps to customer intent rather than departmental ownership.
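To show what "organizing data as a context graph" might look like in practice, here is a minimal sketch using hypothetical entities and attributes. The relations shown (insurance, parking, accessibility, FAQs) stand in for whatever facts your customers' intents actually depend on.

```python
from collections import defaultdict

class ContextGraph:
    """A tiny entity/attribute graph keyed by location, traversable by customer intent."""

    def __init__(self):
        self.edges = defaultdict(dict)  # entity -> {relation: value}

    def add(self, entity: str, relation: str, value):
        self.edges[entity][relation] = value

    def resolve(self, entity: str, needed_relations: list[str]) -> dict:
        """Return only the facts a given intent needs, if the graph actually has them."""
        facts = self.edges.get(entity, {})
        return {rel: facts[rel] for rel in needed_relations if rel in facts}

graph = ContextGraph()
graph.add("clinic:downtown", "accepted_insurance", ["Acme Health", "Example Mutual"])
graph.add("clinic:downtown", "parking", "Free lot, 20 spaces")
graph.add("clinic:downtown", "accessibility", "Wheelchair accessible entrance")
graph.add("clinic:downtown", "faq:walk_ins", "Walk-ins accepted until 4 p.m.")

# Intent: "clinic near me that takes Acme Health and has parking"
print(graph.resolve("clinic:downtown", ["accepted_insurance", "parking"]))
```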
Enterprises should also consider omnichannel marketing approaches to achieve consistency.
Dig deeper: Integrating SEO into omnichannel marketing for seamless engagement
As AI-driven and zero-click journeys increase, traditional SEO metrics lose relevance. Attribution becomes fragmented across search, maps, AI interfaces, and third-party platforms.
Precision tracking gives way to directional confidence.
Executive-level KPIs should focus on:
The goal is not perfect attribution. It’s confidence that local discovery is working and revenue risk is being mitigated.
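One way to operationalize directional confidence is a simple composite index of zero-click actions tracked over time. The metrics, weights, and figures below are entirely hypothetical; the point is to watch the trend rather than chase perfect attribution.

```python
# Hypothetical weekly location-level actions pulled from GBP insights, call tracking,
# and booking systems. Exact sources will vary by stack.
weekly_actions = [
    {"week": "2026-W02", "calls": 140, "direction_requests": 310, "bookings": 55},
    {"week": "2026-W03", "calls": 152, "direction_requests": 335, "bookings": 61},
    {"week": "2026-W04", "calls": 149, "direction_requests": 360, "bookings": 66},
]

def directional_index(row: dict) -> int:
    """Collapse several zero-click action counts into one comparable number."""
    return row["calls"] + row["direction_requests"] + 2 * row["bookings"]  # bookings weighted higher

baseline = directional_index(weekly_actions[0])
for row in weekly_actions[1:]:
    change = (directional_index(row) - baseline) / baseline * 100
    print(f'{row["week"]}: {change:+.1f}% vs. baseline week')
```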
Dig deeper: 7 focus areas as AI transforms search and the customer journey in 2026
Fragmentation is a material revenue risk. When local data is inconsistent or disconnected, AI systems have lower confidence in it and are less likely to reuse or recommend those locations.
Treating local data as a living, governed asset and establishing a single, authoritative source of truth early prevents incorrect information from propagating across AI-driven ecosystems and avoids the costly remediation required to fix issues after they scale.
Dig deeper: How to select a CMS that powers SEO, personalization and growth

AI-mediated discovery is becoming the default interface between customers and local brands.
Local 4.0 provides a framework for control, confidence, and competitiveness in that environment. It aligns data, experience, and governance around how AI systems actually operate through reasoning, verification, and reuse.
This is not about chasing AI trends. It’s about ensuring your brand is correctly represented and confidently recommended wherever customers discover you next.

In 2015, PPC was a game of direct control. You told Google exactly which keywords to target, set manual bids at the keyword level, and capped spend with a daily budget. If you were good with spreadsheets and understood match types, you could build and manage 30,000-keyword accounts all day long.
Those days are gone.
In 2026, platform automation is no longer a helpful assistant. It’s the primary driver of performance. Fighting that reality is a losing battle.
Automation has leveled the playing field and, in many cases, given PPC marketers back their time. But staying effective now requires a different skill set: understanding how automated systems learn and how your data shapes their decisions.
This article breaks down how signals actually work inside Google Ads, how to identify and protect high-quality signals, and how to prevent automation from drifting into the wrong pockets of performance.
Google’s automation isn’t a black box where you drop in a budget and hope for the best. It’s a learning system that gets smarter based on the signals you provide.
Feed it strong, accurate signals, and it will outperform any manual approach.
Feed it poor or misleading data, and it will efficiently automate failure.
That’s the real dividing line in modern PPC. AI and automation run on signals. If a system can observe, measure, or infer something, it can use it to guide bidding and targeting.
Google’s official documentation still frames “audience signals” primarily as the segments advertisers manually add to products like Performance Max or Demand Gen.
That definition isn’t wrong, but it’s incomplete. It reflects a legacy, surface-level view of inputs and not how automation actually learns at scale.
Dig deeper: Google Ads PMax: The truth about audience signals and search themes
In practice, every element inside a Google Ads account functions as a signal.
Structure, assets, budgets, pacing, conversion quality, landing page behavior, feed health, and real-time query patterns all shape how the AI interprets intent and decides where your money goes.
Nothing is neutral. Everything contributes to the model’s understanding of who you want, who you don’t, and what outcomes you value.
So when we talk about “signals,” we’re not just talking about first-party data or demographic targeting.
We’re talking about the full ecosystem of behavioral, structural, and quality indicators that guide the algorithm’s decision-making.
Here’s what actually matters:
In 2026, we’ve moved beyond the daily cap mindset. With the expansion of campaign total budgets to Search and Shopping, we are now signaling a total commitment window to Google.
In Google’s announcement of the feature, UK retailer Escentual.com used this approach to signal a fixed promotional budget, resulting in a 16% traffic lift because the AI was given permission to pace spend based on real-time demand rather than arbitrary 24-hour cycles.
All of these elements function as signals because they actively shape the ad account’s learning environment.
Anything the ad platform can observe, measure, or infer becomes part of how it predicts intent, evaluates quality, and allocates budget.
If a component influences who sees your ads, how they behave, or what outcomes the algorithm optimizes toward, it functions as a signal.
To understand why signal quality has become critical, you need to understand what’s actually happening every time someone searches.
Google’s auction-time bidding doesn’t set one bid for “mobile users in New York.”
It calculates a unique bid for every single auction based on billions of signal combinations at that precise millisecond. This considers the user, not simply the keyword.
We are no longer looking for “black-and-white” performance.
Instead, we are finding pockets of performance – users who are predicted to complete the outcomes we define as goals in the platform.
The AI evaluates the specific intersection of a user on iOS 17, using Chrome, in London, at 8 p.m., who previously visited your pricing page.
Because the bidding algorithm cross-references these attributes, it generates a precise bid. This level of granularity is impossible for humans to replicate.
But this is also the “garbage in, garbage out” reality. Without quality signals, the system is forced to guess.
Dig deeper: How to build a modern Google Ads targeting strategy like a pro
If every element in a Google Ads account functions as a signal, we also have to acknowledge that not all signals carry equal weight.
Some signals shape the core of the model’s learning. Others simply refine it.
Based on my experience managing accounts spending six and seven figures monthly, this is the hierarchy that actually matters.
Your conversion tracking is the most important signal you control. The algorithm needs a baseline of roughly 30 to 50 conversions per month to recognize patterns. For B2B advertisers, this often requires shifting from upper-funnel form fills to down-funnel CRM data.
As Andrea Cruz noted in her deep dive on Performance Max for B2B, optimizing for a “qualified lead” or “appointment booked” is the only way to ensure the AI doesn’t just chase cheap, irrelevant clicks.
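As a rough sketch of what that shift looks like operationally, here is how a CRM export might be filtered down to closed deals and written into an offline conversion upload file. The CRM field names are hypothetical, and the column headers should be checked against the import template your account actually uses.

```python
import csv

# Hypothetical CRM export: only closed, qualified deals should train the bidding engine.
crm_deals = [
    {"gclid": "Cj0KCQiA-example1", "stage": "Closed Won",  "value": 50000, "closed_at": "2026-01-15 14:32:00"},
    {"gclid": "Cj0KCQiA-example2", "stage": "Unqualified", "value": 0,     "closed_at": "2026-01-16 09:10:00"},
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header names here mirror a common offline conversion upload template;
    # confirm the exact headers and time-zone format your account expects.
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for deal in crm_deals:
        if deal["stage"] != "Closed Won":  # gatekeeping: soft or unqualified leads stay out
            continue
        writer.writerow([deal["gclid"], "Closed Deal", deal["closed_at"],
                         deal["value"], "USD"])
```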
We are witnessing a “death by a thousand cuts,” where browser restrictions from Safari and Firefox, coupled with aggressive global regulations, have dismantled the third-party cookie.
Without enhanced conversions or server-side tracking, you are essentially flying blind, because the invisible trackers of the past are being replaced by a model where data must be earned through transparent value exchanges.
Your customer lists tell Google, “Here is who converted. Now go find more people like this.”
Quality trumps quantity here. A stale or tiny list won’t be as effective as a list that is updated in real time.
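Here is a minimal sketch of keeping a customer list fresh before upload: filter to recent purchasers, then normalize and hash emails. The 180-day window and field names are arbitrary illustrations, and you should confirm the current hashing and formatting requirements for Customer Match before relying on this.

```python
import hashlib
from datetime import datetime, timedelta

# Hypothetical CRM rows; only recent purchasers make the list.
customers = [
    {"email": " Jane.Doe@Example.com ", "last_purchase": "2026-01-20"},
    {"email": "old.customer@example.com", "last_purchase": "2024-03-02"},
]

def normalize_and_hash(email: str) -> str:
    """Lowercase, trim, then SHA-256 hash: the usual prep for hashed customer lists."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

cutoff = datetime.now() - timedelta(days=180)  # arbitrary freshness window for illustration
fresh_list = [
    normalize_and_hash(c["email"])
    for c in customers
    if datetime.strptime(c["last_purchase"], "%Y-%m-%d") >= cutoff
]
print(f"{len(fresh_list)} hashed emails ready for upload")
```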
Using keywords and URLs to build segments creates a digital footprint of your ideal customer.
This is especially critical in niche industries where Google’s prebuilt audiences are too broad or too generic.
These segments help the system understand the neighborhood your best prospects live in online.
To simplify this hierarchy, I’ve mapped out the most common signals used in 2026 by their actual weight in the bidding engine:
| Signal category | Specific input (The “what”) | Weight/impact | Why it matters in 2026 |
| --- | --- | --- | --- |
| Primary (Truth) | Offline conversion imports (CRM) | Critical | Trains the AI on profit, not just “leads.” |
| Primary (Truth) | Value-based bidding (tROAS) | Critical | Signals which products actually drive margin. |
| Secondary (Context) | First-party customer match lists | High | Provides a “Seed Audience” for the AI to model. |
| Secondary (Context) | Visual environment (images/video) | High | AI scans images to infer user “lifestyle” and price tier. |
| Tertiary (Intent) | Low-volume/long-tail keywords | Medium | Defines the “semantic neighborhood” of the search. |
| Tertiary (Intent) | Landing page color and speed | Medium | Signals trust and relevance feedback loops. |
| Pollutant (Noise) | “Soft” conversions (scrolls/clicks) | Negative | Dilutes intent. Trains AI to find “cheap clickers.” |
Dig deeper: Auditing and optimizing Google Ads in an age of limited data
Signal pollution occurs when low-quality, conflicting, or misleading signals contaminate the data Google’s AI uses to learn.
It’s what happens when the system receives signals that don’t accurately represent your ideal client, your real conversion quality, or the true intent you want to attract in your ad campaigns.
Signal pollution doesn’t just “confuse” the bidding algorithm. It actively trains it in the wrong direction.
It dilutes your high-value signals, expands your reach into low-intent audiences, and forces the model to optimize toward outcomes you don’t actually want.
Common sources include misleading or incomplete conversion data, brand and nonbrand traffic mixed in the same campaigns, and products with very different price points lumped under a single target.
These sources create the initial pollution. But when marketers try to compensate for underperformance by feeding the machine more data, the root cause never gets addressed.
That’s when soft conversions like scrolls or downloads get added as primary signals, and none of them correlate to revenue.
Like humans, algorithms focus on the metrics they are fed.
If you mix soft signals with high-intent revenue data, you dilute the profile of your ideal customer.
You end up winning thousands of cheap, low-value auctions that look great in a report but fail to move the needle on the P&L.
Your job is to be the gatekeeper, ensuring only the most profitable signals reach the bidding engine.
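A lightweight way to play gatekeeper is to audit conversion actions against how well they correlate with revenue before deciding which ones the bidding engine gets to optimize toward. The action names, correlation figures, and threshold below are hypothetical.

```python
# Hypothetical conversion actions with how strongly each correlates to closed revenue.
conversion_actions = [
    {"name": "Purchase",             "revenue_correlation": 0.95, "monthly_volume": 420},
    {"name": "Qualified lead (CRM)", "revenue_correlation": 0.80, "monthly_volume": 90},
    {"name": "Newsletter signup",    "revenue_correlation": 0.10, "monthly_volume": 1500},
    {"name": "90% page scroll",      "revenue_correlation": 0.02, "monthly_volume": 8000},
]

PRIMARY_THRESHOLD = 0.5  # arbitrary cut-off for illustration

for action in conversion_actions:
    # Only revenue-correlated actions are worth bidding against; the rest are observational.
    role = "PRIMARY (bid against)" if action["revenue_correlation"] >= PRIMARY_THRESHOLD \
           else "SECONDARY (observe only)"
    print(f'{action["name"]:<22} -> {role}')
```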
When signal pollution takes hold, the algorithm doesn’t just underperform. The ads start drifting toward the wrong users, and performance begins to decline.
Before you can build a strong signal strategy, you have to understand how to spot that drift early and correct it before it compounds.
Algorithm drift happens when Google’s automation starts optimizing toward the wrong outcomes because the signals it’s receiving no longer match your real advertising goals.
Drift doesn’t show up as a dramatic crash. It shows up as a slow shift in who you reach, what queries you win, and which conversions the system prioritizes. It looks like a gradual deterioration of lead quality.
To stay in control, you need a simple way to spot drift early and correct it before the machine locks in the wrong pattern.
Early warning signs of drift include a shift in the search terms you’re winning, changes in who your ads reach, and a gradual decline in lead or conversion quality.
These are all indicators that the system is optimizing toward the wrong signals.
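A simple drift monitor can make those warning signs visible before they compound. The sketch below compares a recent snapshot against a baseline using hypothetical metrics and an arbitrary tolerance; swap in whatever lead-quality and search-term measures you actually track.

```python
# Hypothetical snapshots: share of leads marked "qualified" in the CRM and
# share of spend landing on search terms you consider on-target.
baseline = {"qualified_lead_rate": 0.42, "on_target_term_share": 0.78}
latest   = {"qualified_lead_rate": 0.29, "on_target_term_share": 0.61}

DRIFT_TOLERANCE = 0.15  # flag anything that slips more than 15% relative to baseline

def check_drift(baseline: dict, latest: dict) -> list[str]:
    """Return an alert for every metric that has dropped beyond the tolerance."""
    alerts = []
    for metric, base_value in baseline.items():
        drop = (base_value - latest[metric]) / base_value
        if drop > DRIFT_TOLERANCE:
            alerts.append(f"{metric} down {drop:.0%} vs. baseline, investigate signals")
    return alerts

for alert in check_drift(baseline, latest):
    print("DRIFT:", alert)
```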
To correct drift without resetting learning, go back to the signal foundations covered below: cleaner conversion data, fresher lists, and tighter campaign separation.
Your job isn’t to fight automation in Google Ads; it’s to guide it.
Drift happens when the machine is left unsupervised with weak or conflicting signals. Strong signal hygiene keeps the system aligned with your real business outcomes.
Once you can detect drift and correct it quickly, you’re finally in a position to build a signal strategy that compounds over time instead of constantly resetting.
The next step is structuring your ad account so every signal reinforces the outcomes you actually want.
Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns
If you want to build a signal strategy that becomes a competitive advantage, you have to start with the foundations.
Implement offline conversion imports. The difference between optimizing for a “form fill” and a “$50K closed deal” is the difference between wasting budget and growing a business.
When “journey-aware bidding” eventually rolls out, it will be a game-changer because it will let us feed the system data about the individual steps of a sale.
Use value-based bidding. Don’t just count conversions. Differentiate between a customer buying a $20 accessory and one buying a $500 hero product.
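As a small illustration of value-based signals, here is a sketch that reports estimated margin as the conversion value instead of a flat count. The SKUs and margin rates are hypothetical.

```python
# Hypothetical catalog: report margin (not just "a conversion happened") so value-based
# bidding can tell a hero product apart from a low-value accessory.
products = {
    "SKU-ACCESSORY-20": {"price": 20,  "margin_rate": 0.30},
    "SKU-HERO-500":     {"price": 500, "margin_rate": 0.45},
}

def conversion_value(sku: str, quantity: int = 1) -> float:
    """Value to report with the conversion: estimated margin, not a raw count."""
    p = products[sku]
    return round(p["price"] * p["margin_rate"] * quantity, 2)

for sku in products:
    print(sku, "-> report value:", conversion_value(sku))
```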
Segment your customer lists. Don’t just dump everyone into one list. A list of 5,000 recent purchasers is worth far more than 50,000 people who visited your homepage two years ago.
Stale data hurts performance by teaching the algorithm to find people who matched your business 18 months ago, not today.
Separate brand from nonbrand. Brand traffic carries radically different intent and conversion rates than nonbrand.
Mixing these campaigns forces the algorithm to average two incompatible behaviors, which muddies your signals and inflates your ROAS expectations.
Brand should be isolated so it doesn’t subsidize poor nonbrand performance or distort bidding decisions in the ad platform.
Split campaigns by price point. A $600 product and a $20 product do not behave the same in auction-time bidding.
When you put them in the same campaign with a single 4x ROAS target, the algorithm will get confused.
This trains the system away from your hero products and toward low-value volume.
Google’s automation performs best when it has enough consistent, high-quality data to recognize patterns. That means fewer, stronger campaigns are better, as long as the signals inside them are aligned.
Centralize campaigns when products share similar price points, margins, audiences, and intent. Decentralize campaigns when mixing them would pollute the signal pool.
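Here is a toy sketch of that grouping logic: bucket SKUs into campaigns by price tier so incompatible price points never share a single ROAS target. The tier boundaries are arbitrary and should be tuned to your own margins and intent data.

```python
from collections import defaultdict

# Hypothetical SKUs with average order values.
skus = [
    {"sku": "SKU-HERO-600",     "aov": 600},
    {"sku": "SKU-MID-120",      "aov": 120},
    {"sku": "SKU-ACCESSORY-20", "aov": 20},
    {"sku": "SKU-ACCESSORY-35", "aov": 35},
]

def price_tier(aov: float) -> str:
    """Arbitrary tier boundaries for illustration; tune to your own margins."""
    if aov >= 300:
        return "hero"
    if aov >= 75:
        return "mid"
    return "accessory"

campaigns = defaultdict(list)
for item in skus:
    campaigns[f"shopping_{price_tier(item['aov'])}"].append(item["sku"])

# Similar price points share a campaign (and a realistic ROAS target);
# incompatible ones are kept apart so they don't pollute each other's signals.
for campaign, grouped in campaigns.items():
    print(campaign, "->", grouped)
```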
When everyone has access to the same automation, the only real advantage left is the quality of the signals you feed it.
Your job is to protect those signals, diagnose pollution early, and correct drift before the system locks onto the wrong patterns.
Once you build a deliberate signal strategy, Google’s automation stops being a constraint and becomes leverage. You stay in the loop, and the machine does the heavy lifting.
