For years, I told bloggers the same thing: make your content easy enough for toddlers and drunk adults to understand.
That was my rule of thumb.
If a five-year-old can follow what you’ve written and someone paying half-attention can still find what they need on your site, you’re doing something right.
But the game has changed. It’s no longer just about toddlers and drunk adults.
You’re now writing for large language models (LLMs) quietly scanning, interpreting, and summarizing your work inside AI search results.
I used to believe that great writing and solid SEO were all it took to succeed. What I see now:
Clarity beats everything.
The blogs winning today aren’t simply well-written or packed with keywords. They’re clean, consistent, and instantly understandable to readers and machines alike.
Blogging isn’t dying. It’s moving from being a simple publishing tool to a real brand platform that supports off-site efforts more than ever before.
You can’t just drop a recipe or travel guide online and expect it to rank using the SEO tactics of the past.
Bloggers must now think of their site as an ecosystem where everything connects – posts, internal links, author bios, and signals of external authority all reinforce each other.
When I audit sites, the difference between those that thrive and those that struggle almost always comes down to focus.
The successful ones treat their blogs like living systems that grow smarter, clearer, and more intentional with time.
But if content creators want to survive what’s coming, they need to build their sites for toddlers, drunk adults, and LLMs.
In this article, bloggers will learn how to do the following:
Let’s be honest: the blogging world feels a little shaky right now.
One day, traffic is steady, and the next day, it’s down 40% after an update no one saw coming.
Bloggers are watching AI Overviews and “AI Mode” swallow up clicks that used to come straight to their sites. Pinterest doesn’t drive what it once did, and social media traffic in general is unpredictable.
It’s not your imagination. The rules of discovery have changed.
We’ve entered a stage where Google volatility is the norm, not the exception.
Core updates hit harder, AI summaries are doing the talking, and creators are realizing that search is no longer just about keywords and backlinks. It’s about context, clarity, and credibility.
But here’s the good news: the traffic that matters is still out there. It just presents differently.
The strongest blogs I work with are seeing direct traffic and returning visitors climb.
People remember them, type their names into search, open their newsletters, and click through from saved bookmarks. That’s not an accident – that’s the result of clarity and consistency.
If your site clearly explains who you are, what you offer, and how your content fits together, you’re building what I call resilient visibility.
It’s the kind of presence that lasts through algorithm swings, because your audience and Google both understand your purpose.
Think of it this way: the era of chasing random keyword wins is over.
The bloggers who’ll still be standing in five years are the ones who organize their sites like smart libraries: easy to navigate, full of expertise, and built for readers who come back again and again.
AI systems reward that same clarity.
They want content that’s connected, consistent, and confident about its subject matter.
That’s how you show up in AI Overviews, People Also Ask carousels, or Gemini-generated results.
In short, confusion costs you clicks, but clarity earns you staying power.
Dig deeper: Chunk, cite, clarify, build: A content framework for AI search
A few years ago, SEO was all about chasing rankings.
You picked your keywords, wrote your post, built some links, and hoped to land on page one.
Simple enough. But that world doesn’t exist anymore.
Today, we’re in what can best be called the retrieval era.
AI systems like ChatGPT, Gemini, and Perplexity don’t list links. They retrieve answers from the brands, authors, and sites they trust most.
Duane Forrester said it best – search is shifting from “ranking” to “retrieval.”
Instead of asking, “Where do I rank?” creators should be asking, “Am I retrievable?”
That mindset shift changes everything about how we create content.
Mike King expanded on this idea, introducing the concept of relevance engineering.
Search engines and LLMs now use context to understand relevance, not just keywords. They look at:
This is where structure and clarity start paying off.
AI systems want to understand who you are and where you stand.
They learn that from your internal links, schema, author bios, and consistent topical focus.
When everything aligns, you’re no longer just ranking in search – you’re becoming a known entity that AI can pull from.
I’ve seen this firsthand during site audits. Blogs with strong internal structures and clear topical authority are far more likely to be cited as sources in AI Overviews and LLM results.
You’re removing confusion and teaching both users and models to associate your brand with specific areas of expertise.
Here’s something I see a lot in my audits: two posts covering the same topic, both written by experienced bloggers, both technically sound. Yet one consistently outperforms the other.
The difference? One shows a clear “Last updated” date, and the other doesn’t.
That tiny detail matters more than most people realize.
Research from Metehan Yesilyurt confirms what many SEOs have suspected for a while: LLMs and AI-driven search results favor recency, and it’s already being exploited in the name of research.
It’s built into their design. When AI models have multiple possible answers to choose from, they often prefer newer or recently refreshed content.
This is recency bias, and it’s reshaping both AI search and Google’s click-through behavior.
We see the same pattern inside the traditional SERPs.
Posts that display visible “Last updated” dates tend to earn higher click-through rates.
People – and algorithms – trust fresh information.
That’s why one of the first things I check in an audit is how Google is interpreting the date structure on a blog.
Is it recognizing the correct updated date, or is it stuck on the original publish date?
Sometimes the fix is simple: remove the old “published on” markup and make sure the updated timestamp is clearly visible and crawlable.
Other times, the page’s HTML or schema sends conflicting signals that confuse Google, and those need to be cleaned up.
When Google or an LLM can’t identify the freshness of your content, you’re handing visibility to someone else who communicates that freshness better.
How do you prevent this? Don’t hide your updates. Celebrate them.
When you update recipes, add new travel information, or test a product, update your post and make the date obvious.
This will tell readers and AI systems, “This content is alive and relevant.”
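If you're unsure how your dates are being exposed, one option is to spell out both date fields in your Article schema so crawlers don't have to guess. Below is a minimal, hypothetical sketch in Python that builds that JSON-LD; datePublished and dateModified are standard schema.org properties, but the headline, author, and dates are placeholders, not a recommendation from the audit framework itself.

```python
import json
from datetime import date

# Hypothetical example: expose both the original publish date and the
# most recent meaningful update so crawlers see consistent signals.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How to Freeze Sourdough Starter",  # placeholder headline
    "datePublished": "2021-03-02",                   # original publish date
    "dateModified": str(date.today()),               # last meaningful update
    "author": {"@type": "Person", "name": "Example Author"},
}

# Emit the JSON-LD block you would place in the page <head>.
print('<script type="application/ld+json">')
print(json.dumps(article_schema, indent=2))
print("</script>")
```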
Now, that being said, Google does keep a history of document versions.
The average post may have dozens of copies stored, and Google can easily compare the recently changed version to its repository of past versions.
Avoid making small changes that do not add value to users or republishing to a new date years later to fake relevancy. Google specifically calls that out in its guidelines.
Let’s talk about what really gets remembered in this new AI-driven world.
When you ask ChatGPT, Gemini, or Perplexity a question, it thinks in entities – people, brands, and concepts it already knows.
The more clearly those models recognize who you are and what you stand for, the more likely you are to be retrieved when it’s time to generate an answer.
That’s where brand SEO comes in.
Harry Clarkson-Bennett in “How to Build a Brand (with SEO) in a Post AI World” makes a great point: LLMs reward brand reinforcement.
They want to connect names, authors, and websites with a clear area of expertise. And they remember consistency.
If your name, site, and author profiles all align across the web (same logo, same tone, same expertise), you start training these models to trust you.
I tell bloggers all the time: AI learns the same way humans do. It remembers patterns, tone, and repetition. So make those patterns easy to see.
I originally discussed these AI buttons in my last article, “AI isn’t the enemy: How bloggers can thrive in a generative search world,” and provided a visual example.
These are simple on-site prompts encouraging readers to save or summarize your content using AI tools like ChatGPT or Gemini.
When users do that, those models start seeing your site as a trusted example. Over time, that can influence what those systems recall and recommend.
Think of this as reputation-building for the AI era. It’s not about trying to game the system. It’s about making sure your brand is memorable, consistent, and worth retrieving.
Fortunately, these buttons are becoming more mainstream, with theme designers like Feast including them as custom blocks.
And the buttons work – I’ve seen creators turn their blogs into small but powerful brands that LLMs now cite regularly.
They did it by reinforcing who they were, everywhere, and then using AI buttons to encourage their existing traffic to save their sites as high-quality examples to reference in the future.
Blogging has never been easy, but it’s never been harder than it is right now.
Between core updates, AI Overviews, and shifting algorithms, creators are expected to keep up with changes that even seasoned SEOs struggle to track.
And that’s the problem – too many bloggers are still trying to figure it all out alone.
If there’s one thing I’ve learned after doing more than 160 site audits this year, it’s this: almost every struggling blogger is closer to success than they think. They’re just missing clarity.
A good SEO audit does more than point out broken links or slow-loading pages. It shows you why your content isn’t connecting with Google, readers, and now LLMs.
My audits are built around what I call the “Toddlers, Drunk Adults, and LLMs” framework.
If your site works for those three audiences, you’re in great shape.
For toddlers
For drunk adults
For LLMs
When bloggers follow this approach, the numbers speak for themselves.
In 2025 alone, my audit clients have seen an average increase of 47% in Google traffic and RPM improvements of 21-33% within a few months of implementing recommendations.
This isn’t just about ranking better. Every audit is a roadmap to help bloggers position their sites for long-term visibility across traditional search and AI-powered discovery.
That means optimizing for things like:
You can’t control Google’s volatility, but you can control how clear, crawlable, and connected your site is. That’s what gets rewarded.
And while I’ll always advocate for professional audits, this isn’t about selling a service.
You need someone who can give you an honest, technical, and strategic look under the hood.
Why?
Because the difference between “doing fine” and “thriving in AI search” often comes down to a single, well-executed audit.
So where does all this lead? What does blogging even look like five years from now?
Here’s what I see coming.
We’re heading toward an increasingly agentic web, where AI systems do the searching, summarizing, and recommending for us.
Instead of typing a query into Google, people will ask their personal AI for a dinner idea, a travel itinerary, or a product recommendation.
And those systems will pull from a short list of trusted sources they already “know.”
That’s why what you’re doing today matters so much.
Every time you publish a post, refine your site structure, or strengthen your brand signals, you’re teaching AI who you are.
You’re building a long-term relationship with the systems that will decide what gets shown and what gets skipped.
Here’s how I expect the next few years to unfold:
The creators who will win in this next chapter are the ones who stop trying to outsmart Google and start building systems that AI can easily understand and humans genuinely connect with.
It’s not about chasing trends or reinventing your site every time an update hits. It’s about getting the fundamentals right and letting clarity, trust, and originality carry you forward.
Because the truth is, Google’s not the gatekeeper anymore. You are.
Your brand, expertise, and ability to communicate clearly will decide how visible you’ll be in search and AI-driven discovery.
If there’s one thing I want bloggers to take away from all this, it’s that clarity always wins.
We’re living through the fastest transformation in the history of search.
AI is rewriting how content is discovered, ranked, and retrieved.
Yes, that’s scary. But it’s also full of opportunity for those willing to adapt.
I’ve seen it hundreds of times in audits this year.
Bloggers who simplify their sites, clean up their data, and focus on authority signals see measurable results.
They show up in AI Overviews. They regain lost rankings. They build audiences that keep coming back, even when algorithms shift again.
This isn’t about fighting AI – it’s about working with it. The goal is to show the system who you are and why your content matters.
Here’s my advice, regardless of the professional you choose:
It’s never been harder to be a content creator, but it’s never been more possible to build something that lasts.
The blogs that survive the next five years will be organized, human, and clear.
The future of blogging belongs to the creators who embrace clarity over chaos. AI won’t erase the human voice – it’ll amplify the ones that are worth hearing.
Here’s to raised voices and future success. Good luck out there.
Dig deeper: Organizing content for AI search: A 3-level framework

Regex is a powerful – yet overlooked – tool in search and data analysis.
With just a single line, you can automate what would otherwise take dozens of lines of code.
Short for “regular expression,” regex is a sequence of characters used to define a pattern for matching text.
It’s what allows you to find, extract, or replace specific strings of data with precision.
In SEO, regex helps you extract and filter information efficiently – from analyzing keyword variations to cleaning messy query data.
But its value extends well beyond SEO.
Regex is also fundamental to natural language processing (NLP), offering insight into how machines read, parse, and process text – even how large language models (LLMs) tokenize language behind the scenes.
Before getting started with regex basics, I want to highlight some of its uses in our daily workflows.
Google Search Console has a regex filter functionality to isolate specific query types.
One of the simplest and most commonly used patterns is the brand regex brandname1|brandname2|brandname3, which is very useful when users write your brand name in different ways.

Google Analytics also supports regex for defining filters, key events, segments, audiences, and content groups.
Looker Studio allows you to use regex to create filters, calculated fields, and validation rules.
Screaming Frog supports the use of regex to filter and extract data during a crawl and also to exclude specific URLs from your crawl.

Google Sheets enables you to test whether a cell matches a specific regex. Simply use the function REGEXMATCH(text, regular_expression).
In SEO, we’re surrounded by tools and features just waiting for a well-written regex to unlock their full potential.
If you’re building SEO tools, especially those that involve content processing, regex is your secret weapon.
It gives you the power to search, validate, and replace text based on advanced, customizable patterns.
Here’s a Google Colab notebook with an example of a Python script that takes a list of queries and extracts different variations of my brand name.
You can easily customize this code by plugging it into ChatGPT or Claude alongside your brand name.
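For readers who want the gist without opening the notebook, here's a minimal sketch of the same idea, not the notebook's actual code: a Python script that uses a brand regex to pull brand-name variations out of a query list. The brand variants and queries below are made-up placeholders.

```python
import re

# Placeholder brand variants - swap in your own spellings and misspellings.
brand_pattern = re.compile(r"\b(searchenginveland|search engine land|se land)\b".replace("engineveland", "engineland"), re.IGNORECASE)
brand_pattern = re.compile(r"\b(searchengineland|search engine land|se land)\b", re.IGNORECASE)

queries = [
    "search engine land newsletter",
    "SearchEngineLand regex guide",
    "how to write title tags",
    "se land ppc tips",
]

# Keep only queries that mention the brand, and note which variant matched.
for query in queries:
    match = brand_pattern.search(query)
    if match:
        print(f"{query!r} -> matched variant: {match.group(0)}")
```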

I’m a fan of vibe coding – but not the kind where you skip the basics and rely entirely on LLMs.
After all, you can’t use a calculator properly if you don’t understand numbers or how addition, multiplication, division, and subtraction work.
I support the kind of vibe coding that builds on a little coding knowledge – enough to use LLMs effectively, test what they produce, and troubleshoot when needed.
Likewise, learning the basics of regex helps you use LLMs to create more advanced expressions.
| Symbol | Meaning | 
| . | Matches any single character. | 
| ^ | Matches the start of a string. | 
| $ | Matches the end of a string. | 
| * | Matches 0 or more of the preceding character. | 
| + | Matches 1 or more of the preceding character. | 
| ? | Makes the preceding character optional (0 or 1 time). | 
| {} | Matches the preceding character a specific number of times. | 
| [] | Matches any one character inside the brackets. | 
| \ | Escapes special characters or signals special sequences like \d. | 
| \| | Acts as OR, matching the pattern on either side of it (alternation). | 
| () | Groups characters together (for operators or capturing). | 
Here’s a list of 10 long-tail keywords. Let’s explore how different regex patterns filter them using the Regex101 tool.
Example 1: Extract any two-character sequence that starts with an “a.” The second character can be anything (i.e., a, then anything).
a.
Example 2: Extract any string that starts with the letter “a” (i.e., a is the start of the string, then followed by anything).
^a.
Example 3: Extract any string that starts with an “a” and ends with an “e” (i.e., any line that starts with a, followed by anything, then ends with an e).
^a.*e$
Example 4: Extract any string that contains two consecutive “s” characters (i.e., “ss”).
s{2}
Example 5: Extract any string that contains “for” or “with.”
for|with
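If you'd rather test these in Python than in Regex101, here's a small sketch that runs the same five patterns against a few made-up long-tail keywords (the keyword list is a placeholder, not the one from the example above).

```python
import re

# Placeholder keywords - the article's actual list isn't reproduced here.
keywords = ["air fryer recipes for one", "apple pie with less sugar", "best espresso machine"]

patterns = {
    "a.": "an 'a' followed by any character",
    "^a.": "starts with 'a', then anything",
    "^a.*e$": "starts with 'a' and ends with 'e'",
    "s{2}": "contains two consecutive 's' characters",
    "for|with": "contains 'for' or 'with'",
}

for pattern, meaning in patterns.items():
    matches = [kw for kw in keywords if re.search(pattern, kw)]
    print(f"{pattern:10} ({meaning}): {matches}")
```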
I’ve also built a sample regex Google Sheet so you can play around, test, and experience the feature in Google Sheets, too. Check it out here.

Note: Cells in the Extracted Text column showing #N/A indicate that the regex didn’t find a matching pattern.
By exploring regex, you’ll open new doors for analyzing and organizing search data.
It’s one of those skills that quietly makes you faster and more precise – whether you’re segmenting keywords, cleaning messy queries, or setting up advanced filters in Search Console or Looker Studio.
Once you’re comfortable with the basics, start spotting where regex can save you time.
Use it to identify branded versus nonbranded searches, group URLs by pattern, or validate large text datasets before they reach your reports.
Experiment with different expressions in tools like Regex101 or Google Sheets to see how small syntax changes affect results.
The more you practice, the easier it becomes to recognize patterns in both data and problem-solving.
That’s where regex truly earns its place in your SEO toolkit.

		 
	


Most marketing teams still treat SEO and PPC as budget rivals, not as complementary systems facing the same performance challenges.
In practice, these relationships fall into three types:
Only mutualism creates sustainable performance gains – and it’s the shift marketing teams need to make next.
One glaring problem unites online marketers: we’re getting less traffic for the same budget.
Navigating the coming years requires more than the coexistence many teams mistake for collaboration.
We need mutualism – shared technical standards that optimize for both organic visibility and paid performance.
Shared accountability drives lower acquisition costs, faster market response, and sustainable gains that neither channel can achieve alone.
Here’s what it looks like in practice:
During SEO penalties and core updates, PPC can maintain traffic until recovery.
Core updates cause fluctuations in organic rankings and user behavior, which, in turn, can affect ad relevance and placements.
PPC-only landing pages affect the Core Web Vitals of entire sites, influencing Google’s default assumptions for URLs without enough traffic to calculate individual scores.
Paid pages are penalized for slow loading just as much as organic ones, impacting Quality Score and, ultimately, bids.
PPC should answer a simple question: Are we getting the types of results we expect and want?
Setting clear PPC baselines by market and country provides valuable, real-time keyword and conversion data that SEO teams can use to strengthen organic strategies.
By analyzing which PPC clicks drive signups or demo requests, SEO teams can prioritize content and keyword targets with proven high intent.
Sharing PPC insights enables organic search teams to make smarter decisions, improve rankings, and drive better-qualified traffic.

Dig deeper: The end of SEO-PPC silos: Building a unified search strategy for the AI era
One key question to ask is: how do we measure incrementality?
We need to quantify the true, additional contribution PPC and SEO drive above the baseline.
Guerrilla testing offers a lo-fi way to do this – turning campaigns on or off in specific markets to see whether organic conversions are affected.
A more targeted test involves turning off branded campaigns.
PPC ads on branded terms can capture conversions that would have occurred organically, making paid results appear stronger and SEO weaker.
That’s exactly what Arturs Cavniss’ company did – and here are the results.

For teams ready to operate in a more sophisticated way, several options are available.
One worth exploring is Robyn, an open-source, AI/ML-powered marketing mix modeling (MMM) package.
Core Web Vitals and related lab metrics measure layout stability, rendering efficiency, and server response times – key factors influencing search visibility and overall performance.
These are the weights Google’s Lighthouse applies when calculating a page’s performance score.
| Lighthouse performance metric | Weight in performance score | 
| First Contentful Paint | 10% | 
| Speed Index | 10% | 
| Largest Contentful Paint | 25% | 
| Total Blocking Time | 30% | 
| Cumulative Layout Shift | 25% | 
Core Web Vitals:
You can create a modified weighted system to reflect a combined SEO and PPC baseline. (Here’s a quick MVP spreadsheet to get started.)
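As a rough sketch of what that modified weighting could look like (not the linked MVP spreadsheet itself), here's a Python version that starts from the Lighthouse weights in the table above and blends in a conversion-oriented sub-score. The 80/20 blend, the per-metric scores, and the conversion-readiness number are assumptions you would tune with your own SEO and PPC teams.

```python
# Lighthouse performance-score weights from the table above.
lighthouse_weights = {
    "first_contentful_paint": 0.10,
    "speed_index": 0.10,
    "largest_contentful_paint": 0.25,
    "total_blocking_time": 0.30,
    "cumulative_layout_shift": 0.25,
}

# Hypothetical per-metric scores for one landing page (0-100, higher is better).
page_scores = {
    "first_contentful_paint": 85,
    "speed_index": 78,
    "largest_contentful_paint": 62,
    "total_blocking_time": 55,
    "cumulative_layout_shift": 95,
}

def weighted_score(scores, weights):
    """Blend metric scores into a single 0-100 number using the given weights."""
    return sum(scores[metric] * weight for metric, weight in weights.items())

seo_score = weighted_score(page_scores, lighthouse_weights)

# Assumption: give paid landing pages extra weight on conversion readiness,
# e.g., 80% technical score + 20% hypothetical form/interaction sub-score.
conversion_readiness = 70  # placeholder sub-score agreed between SEO and PPC
combined = 0.8 * seo_score + 0.2 * conversion_readiness

print(f"SEO-weighted score:     {seo_score:.1f}")
print(f"Combined SEO+PPC score: {combined:.1f}")
```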
However, SEO-focused weightings don’t capture PPC’s Quality Score requirements or conversion optimization needs.
Clicking an ad link can be slower than an organic one because Google’s ad network introduces extra processes – additional data handling and script execution – before the page loads.
The hypothesis is that ad clicks may consistently load slower than organic ones due to these extra steps in the ad-serving process.
This suggests that performance standards designed for organic results may not fully represent the experience of paid users.
Microsoft Ads Liaison Navah Hopkins notes that paid pages are penalized for slow loading just as severely as organic ones – a factor that directly affects Quality Score and bids.

SEOs should also take responsibility for improving PPC-only landing pages, even when they aren’t asked to. As Jono Alderson explains:
PPC-only landing pages influence the Core Web Vitals of entire sites, shaping Google’s assumptions for low-traffic URLs.
Agentic AI’s sensitivity to interaction delays has made Interaction to Next Paint (INP) a critical performance metric.
INP measures how quickly a website responds when a human or AI agent interacts with a page – clicking, scrolling, or filling out forms while completing tasks.
When response times lag, agents fail tasks, abandon the site, and may turn to competitors.
INP doesn’t appear in synthetic lab reports like Chrome Lighthouse because those tests don’t simulate real interactions.
Real user monitoring helps reveal what’s happening in practice, but it still can’t capture the full picture for AI-driven interactions.
PPC practitioners have long relied on Quality Score – a 1-10 scale based on expected CTR, ad relevance, and landing page experience – to optimize landing pages and reduce costs.
SEO lacks an equivalent unified metric, leaving teams to juggle separate signals like Core Web Vitals, keyword relevance, and user engagement without a clear prioritization framework.
You can create a company-wide quality score for pages to incentivize optimization and align teams while maintaining channel-specific goals.
This score can account for page type, with sub-scores for trial, demo, or usage pages – adaptable to the content that drives the most business value.
The system should account for overlapping metrics across subscores yet remain simple enough for all teams – SEO, PPC, engineering, and product – to understand and act on.
A unified scoring model gives everyone a common language and turns distributed accountability into daily practice.
When both channels share quality standards, teams can prioritize fixes that strengthen organic rankings and paid performance simultaneously.
Display advertising and SEO rarely share performance metrics, yet both pursue the same goal – converting impressions into engaged users.
Click-per-thousand impressions (CPTI) measures the number of clicks generated per 1,000 impressions, creating a shared language for evaluating content effectiveness across paid display and organic search.
For display teams, CPTI reveals which creative and targeting combinations drive engagement beyond vanity metrics like reach.
For SEO teams, applying CPTI to search impressions (via Google Search Console) shows which pages and queries convert visibility into traffic – exposing content that ranks well but fails to earn clicks.
This shared metric allows teams to compare efficiency directly: if a blog post drives 50 clicks per 1,000 organic impressions while a display campaign with similar visibility generates only 15 clicks, the performance gap warrants investigation.
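The math is deliberately simple. Here's a quick sketch using illustrative figures that match the 50-versus-15 comparison above (they are not from a real campaign):

```python
def cpti(clicks, impressions):
    """Clicks per 1,000 impressions - a shared efficiency metric for SEO and display."""
    return clicks / impressions * 1000

# Illustrative numbers only.
blog_cpti = cpti(clicks=600, impressions=12_000)     # -> 50 clicks per 1,000 impressions
display_cpti = cpti(clicks=180, impressions=12_000)  # -> 15 clicks per 1,000 impressions

print(f"Blog post CPTI:        {blog_cpti:.0f}")
print(f"Display campaign CPTI: {display_cpti:.0f}")
print(f"Efficiency gap:        {blog_cpti / display_cpti:.1f}x")  # ~3.3x
```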

Reverse CPM offers another useful lens. It measures how long content takes to “pay for itself” – the point where it reaches ROI.
For example, if an article earns 1 million impressions in a month, it should deliver roughly 1,000 clicks.
As generative AI continues to reshape traffic patterns, this metric will need refinement.
The most valuable insights emerge when SEO and PPC teams share operational intelligence rather than compete for credit.
PPC provides quick keyword performance data to respond to market trends faster, while SEO uncovers emerging search intent that PPC can immediately act on.
Together, these feedback loops create compound advantages.
SEO signals PPC should act on:
PPC signals SEO should act on:

When both channels share intelligence, insights extend beyond marketing performance into product and business strategy.
These feedback loops don’t require expensive tools – only an organizational commitment to regular cross-channel reviews in which teams share what’s working, what’s failing, and what deserves coordinated testing.
Treat technical performance as shared infrastructure, not channel-specific optimization.
Teams that implement unified Core Web Vitals standards, cross-channel attribution models, and distributed accountability systems will capture opportunities that siloed operations miss.
As agentic AI adoption accelerates and digital marketing grows more complex, symbiotic SEO-PPC operations become a competitive advantage rather than a luxury.

Something’s shifting in how SEO services are being marketed, and if you’ve been shopping for help with search lately, you’ve probably noticed it.
Over the past few months, “AI SEO” has emerged as a distinct service offering.
Browse service provider websites, scroll through Fiverr, or sit through sales presentations, and you’ll see it positioned as something fundamentally new and separate from traditional SEO.
Some are packaging it as “GEO” (generative engine optimization) or “AEO” (answer engine optimization), with separate pricing, distinct deliverables, and the implication that you need both this and traditional SEO to compete.
The pitch goes like this:
The data helps explain why the industry is moving so quickly.
AI-sourced traffic jumped 527% year-over-year from early 2024 to early 2025.
Service providers are responding to genuine market demand for AI search optimization.
But here’s what I’ve observed after evaluating what these AI SEO services actually deliver.
Many of these so-called new tactics are the same SEO fundamentals – just repackaged under a different name.
As a marketer responsible for budget and results, understanding this distinction matters.
It affects how you allocate resources, evaluate agency partners, and structure your search strategy.
Let’s dig into what’s really happening so you can make smarter decisions about where to invest.
The typical AI SEO sales deck has become pretty standardized.
Here are the most common claims I’m hearing.
They’ll show you how ChatGPT, Perplexity, and Claude are changing search behavior, and they’re not wrong about that.
Research shows that 82% of consumers agree that “AI-powered search is more helpful than traditional search engines,” signaling how search behavior is evolving.
The pitch emphasizes passage-level optimization, structured data, and Q&A formatting specifically for AI retrieval.
They’ll discuss how AI values mentions and citations differently than backlinks and how entity recognition matters more than keywords.
This creates urgency around a supposedly new practice that requires immediate investment.
The urgency is real. Only 22% of marketers have set up LLM brand visibility monitoring, but the question is whether this requires a separate “AI SEO” service or an expansion of your existing search strategy.
To be clear, the AI capabilities are real. What’s new is the positioning – familiar SEO practices rebranded to sound more revolutionary than they are.
When you examine what’s actually being recommended (passage-level content structure, semantic clarity, Q&A formatting, earning citations and mentions), you will find that these practices have been core to SEO for years.
Google introduced passage ranking in 2020 and featured snippets back in 2014.
Research from Fractl, Search Engine Land, and MFour found that generative engine optimization “is based on similar value systems that advanced SEOs, content marketers, and digital PR teams are already experts in.”
Let me show you what I mean.
What you’re hearing: “AI-powered semantic analysis and predictive keyword intelligence.”
What you’re hearing: “Machine learning content optimization that aligns with AI algorithms.”
What you’re hearing: “Entity-based authority building for AI platforms.”
Dig deeper: AI search is booming, but SEO is still not dead
I want to be fair here. There’s genuine debate in the SEO community about whether optimizing for AI-powered search represents a distinct discipline or an evolution of existing practices.
The differences are real.
These differences affect execution, but the strategic foundation remains consistent.
You still need to:
And here’s something that reinforces the overlap.
SEO professionals recently discovered that ChatGPT’s Atlas browser directly uses Google search results.
Even AI-powered search platforms are relying on traditional search infrastructure.
So yes, there are platform-specific tactics that matter.
The question for you as a marketer isn’t whether differences exist (they do).
The real question is whether those differences justify treating this as an entirely separate service with its own strategy and budget.
Or are they simply tactical adaptations of the same fundamental approach?
Dig deeper: GEO and SEO: How to invest your time and efforts wisely
The “separate AI SEO service” approach comes with a real risk.
It can shift focus toward short-term, platform-specific tactics at the expense of long-term fundamentals.
I’m seeing recommendations that feel remarkably similar to the blackhat SEO tactics we saw a decade ago:
These tactics might work today, but they’re playing a dangerous game.
Dig deeper: Black hat GEO is real – Here’s why you should pay attention
AI platforms are still in their infancy. Their spam detection systems aren’t yet as mature as Google’s or Bing’s, but that will change, likely faster than many expect.
AI platforms like Perplexity are building their own search indexes (hundreds of billions of documents).
They’ll need to develop the same core systems traditional search engines have:
They’re supposedly buying link data from third-party providers, recognizing that understanding authority requires signals beyond just content analysis.
We’ve seen this with Google.
In the early days, keyword stuffing and link schemes worked great.
Then, Google developed Panda and Penguin updates that devastated sites relying on those tactics.
Overnight, sites lost 50-90% of their traffic.
The same thing will likely happen with AI platforms.
Sites gaming visibility now with spammy tactics will face serious problems when these platforms implement stronger quality and spam detection.
As one SEO veteran put it, “It works until it doesn’t.”
Building around platform-specific tactics is like building on sand.
Focus instead on fundamentals – creating valuable content, earning authority, demonstrating expertise, and optimizing for intent – and you’ll have something sustainable across platforms.
I’m not anti-AI. Used well, it meaningfully improves SEO workflows and results.
AI excels at large-scale research and ideation – analyzing competitor content, spotting gaps, and mapping topic clusters in minutes.
For one client, it surfaced 73 subtopics we hadn’t fully considered.
But human expertise was still essential to align those ideas with business goals and strategic priorities.
AI also transforms data analysis and workflow automation – from reporting and rank tracking to technical monitoring – freeing more time for strategy.
AI clearly helps. The real question is whether these AI offerings bring truly new strategies or familiar ones powered by better tools.
After working with clients to evaluate various service models, I’ve seen consistent patterns in proposals that overpromise and underdeliver.
When evaluating any service provider, ask:
After working in SEO for 20 years, through multiple algorithm updates and trend cycles, I keep coming back to the same fundamentals:
Dig deeper: Thriving in AI search starts with SEO fundamentals
AI is genuinely changing how we work in search marketing – and that’s mostly positive.
The tools make us more efficient and enable analysis that wasn’t previously practical.
But AI only enhances good strategy. It doesn’t replace it.
Fundamentals still matter – along with audience understanding, quality, and expertise.
Search behavior is fragmenting across Google, ChatGPT, Perplexity, and social platforms, but the principles that drive visibility and trust remain consistent.
Real advantage doesn’t come from the newest tools or the flashiest “GEO” tactics.
It comes from a clear strategy, deep market understanding, strong execution of fundamentals, and smart use of technology to strengthen human expertise.
Don’t get distracted by hype or dismiss innovation. The balance lies in thoughtful AI integration within a solid strategic framework focused on business goals.
That’s what delivers sustainable results – whether people find you through Google, ChatGPT, or whatever comes next.
		 
	


Marketing, technology, and business leaders today are asking an important question: how do you optimize for large language models (LLMs) like ChatGPT, Gemini, and Claude?
LLM optimization is taking shape as a new discipline focused on how brands surface in AI-generated results and what can be measured today.
For decision makers, the challenge is separating signal from noise – identifying the technologies worth tracking and the efforts that lead to tangible outcomes.
The discussion comes down to two core areas – and the timeline and work required to act on them:
Just as SEO evolved through better tracking and measurement, LLM optimization will only mature once visibility becomes measurable.
We’re still in a pre-Semrush/Moz/Ahrefs era for LLMs.
Tracking is the foundation of identifying what truly works and building strategies that drive brand growth.
Without it, everyone is shooting in the dark, hoping great content alone will deliver results.
The core challenges are threefold:
Why LLM queries are different
Traditional search behavior is repetitive – millions of identical phrases drive stable volume metrics. LLM interactions are conversational and variable.
People rephrase questions in different ways, often within a single session. That makes pattern recognition harder with small datasets but feasible at scale.
These structural differences explain why LLM visibility demands a different measurement model.
This variability requires a different tracking approach than traditional SEO or marketing analytics.
The leading method uses a polling-based model inspired by election forecasting.
A representative sample of 250–500 high-intent queries is defined for your brand or category, functioning as your population proxy.
These queries are run daily or weekly to capture repeated samples from the underlying distribution of LLM responses.

Tracking tools record when your brand and competitors appear as citations (linked sources) or mentions (text references), enabling share of voice calculations across all competitors.
Over time, aggregate sampling produces statistically stable estimates of your brand visibility within LLM-generated content.
Early tools providing this capability include:

Consistent sampling at scale transforms apparent randomness into interpretable signals.
Over time, aggregate sampling provides a stable estimate of your brand’s visibility in LLM-generated responses – much like how political polls deliver reliable forecasts despite individual variations.
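A minimal sketch of that polling approach is below, assuming you already have some way to run a query against an LLM (or a tracking tool's API) and get back the brands it cited. The run_query function and the sample queries are hypothetical placeholders, not a real integration.

```python
from collections import Counter

SAMPLE_QUERIES = [
    "best project management software for small teams",
    "project management tools with free plans",
    # ...expand to the full 250-500 high-intent queries for your category
]

def run_query(query: str) -> list[str]:
    # Hypothetical placeholder: in practice, call an LLM or a tracking tool
    # and parse out which brands were cited or mentioned in the response.
    # Canned output here so the sketch runs end to end.
    return ["BrandA", "BrandB"] if "free" in query else ["BrandA", "BrandC"]

def share_of_voice(queries, runs_per_query=3):
    """Count how often each brand appears across repeated samples."""
    citations = Counter()
    for query in queries:
        for _ in range(runs_per_query):
            for brand in run_query(query):
                citations[brand] += 1
    total = sum(citations.values()) or 1
    return {brand: count / total for brand, count in citations.items()}

for brand, share in sorted(share_of_voice(SAMPLE_QUERIES).items(), key=lambda x: -x[1]):
    print(f"{brand}: {share:.0%}")
```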
While share of voice paints a picture of your presence in the LLM landscape, it doesn’t tell the complete story.
Just as keyword rankings show visibility but not clicks, LLM presence doesn’t automatically translate to user engagement.
Brands need to understand how people interact with their content to build a compelling business case.
Because no single tool captures the entire picture, the best current approach layers multiple tracking signals:
Nobody has complete visibility into LLM impact on their business today, but these methods cover all the bases you can currently measure.
Be wary of any vendor or consultant promising complete visibility. That simply isn’t possible yet.
Understanding these limitations is just as important as implementing the tracking itself.
Because no perfect models exist yet, treat current tracking data as directional – useful for decisions, but not definitive.

Dig deeper: In GEO, brand mentions do what links alone can’t
Measuring LLM impact is one thing. Identifying which queries and topics matter most is another.
Compared to SEO or PPC, marketers have far less visibility. While no direct search volume exists, new tools and methods are beginning to close the gap.
The key shift is moving from tracking individual queries – which vary widely – to analyzing broader themes and topics.
The real question becomes: which areas is your site missing, and where should your content strategy focus?
To approximate relative volume, consider three approaches:
Correlate with SEO search volume
Start with your top-performing SEO keywords.
If a keyword drives organic traffic and has commercial intent, similar questions are likely being asked within LLMs. Use this as your baseline.
Layer in industry adoption of AI
Estimate what percentage of your target audience uses LLMs for research or purchasing decisions:
Apply these percentages to your existing SEO keyword volume. For example, a keyword with 25,000 monthly searches could translate to 1,250-6,250 LLM-based queries in your category.
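Here's that arithmetic spelled out; the 5-25% adoption band is the assumption you would replace with your own industry estimate.

```python
def estimate_llm_queries(monthly_searches: int, adoption_low: float, adoption_high: float):
    """Rough band of monthly LLM queries implied by an SEO keyword's search volume."""
    return monthly_searches * adoption_low, monthly_searches * adoption_high

# Example from above: 25,000 monthly searches, 5-25% assumed LLM adoption.
low, high = estimate_llm_queries(25_000, adoption_low=0.05, adoption_high=0.25)
print(f"Estimated LLM-based queries: {low:,.0f} - {high:,.0f} per month")  # 1,250 - 6,250
```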
Use emerging inferential tools
New platforms are beginning to track query data through API-level monitoring and machine learning models.
Accuracy isn’t perfect yet, but these tools are improving quickly. Expect major advancements in inferential LLM query modeling within the next year or two.
The technologies that help companies identify what to improve are evolving quickly.
While still imperfect, they’re beginning to form a framework that parallels early SEO development, where better tracking and data gradually turned intuition into science.
Optimization breaks down into two main questions:
One of the most effective ways to assess your current position is to take a representative sample of high-intent queries that people might ask an LLM and see how your brand shows up relative to competitors. This is where the Share of Voice tracking tools we discussed earlier become invaluable.
These same tools can help answer your optimization questions:


From this data, several key insights emerge:
LLMs may be reshaping discovery, but SEO remains the foundation of digital visibility.
Across five competitive categories, brands ranking on Google’s first page appeared in ChatGPT answers 62% of the time – a clear but incomplete overlap between search and AI results.
That correlation isn’t accidental.
Many retrieval-augmented generation (RAG) systems pull data from search results and expand it with additional context.
The more often your content appears in those results, the more likely it is to be cited by LLMs.
Brands with the strongest share of voice in LLM responses are typically those that invested in SEO first.
Strong technical health, structured data, and authority signals remain the bedrock for AI visibility.
What this means for marketers:
Just as SEO has both on-page and off-page elements, LLM optimization follows the same logic – but with different tactics and priorities.
Off-page: The new link building
Most industries show a consistent pattern in the types of resources LLMs cite:
Citation patterns across ChatGPT, Gemini, Perplexity, and Google’s AI Overviews show consistent trends, though each engine favors different sources.
This means that traditional link acquisition strategies, guest posts, PR placements, or brand mentions in review content will likely evolve.
Instead of chasing links anywhere, brands should increasingly target:
The core principle holds: brands gain the most visibility by appearing in sources LLMs already trust – and identifying those sources requires consistent tracking.
On-page: What your own content reveals
The same technologies that analyze third-party mentions can also reveal which first-party assets, content on your own website, are being cited by LLMs.
This provides valuable insight into what type of content performs well in your space.
For example, these tools can identify:
From there, three key opportunities emerge:
The next major evolution in LLM optimization will likely come from tools that connect insight to action.
Early solutions already use vector embeddings of your website content to compare it against LLM queries and responses. This allows you to:
Current tools mostly generate outlines or recommendations.
The next frontier is automation – systems that turn data into actionable content aligned with business goals.
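A simplified sketch of that embedding comparison follows, assuming you have an embedding model available. The embed helper below is a crude, hypothetical stand-in (a bag-of-characters vector) so the example runs end to end; the pages, queries, and similarity threshold are all placeholders.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in for a real embedding model or API call.
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))

site_pages = {
    "/blog/crm-pricing": "How much does a CRM cost? Pricing tiers explained.",
    "/blog/crm-setup": "Step-by-step guide to setting up your first CRM.",
}

llm_queries = [
    "best crm for small nonprofits",
    "how much should a small business pay for a crm",
]

THRESHOLD = 0.8  # arbitrary assumption; tune against labeled examples

# Flag queries where no page is even loosely similar - likely content gaps.
for query in llm_queries:
    q_vec = embed(query)
    best_url, best_score = max(
        ((url, cosine(q_vec, embed(text))) for url, text in site_pages.items()),
        key=lambda pair: pair[1],
    )
    status = "covered" if best_score >= THRESHOLD else "possible gap"
    print(f"{query!r}: closest page {best_url} (similarity {best_score:.2f}) -> {status}")
```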
While comprehensive LLM visibility typically builds over 6-12 months, early results can emerge faster than traditional SEO.
The advantage: LLMs can incorporate new content within days rather than waiting months for Google’s crawl and ranking cycles.
However, the fundamentals remain unchanged.
Quality content creation, securing third-party mentions, and building authority still require sustained effort and resources.
Think of LLM optimization as having a faster feedback loop than SEO, but requiring the same strategic commitment to content excellence and relationship building that has always driven digital visibility.
LLM traffic remains small compared to traditional search, but it’s growing fast.
A major shift in resources would be premature, but ignoring LLMs would be shortsighted.
The smartest path is balance: maintain focus on SEO while layering in LLM strategies that address new ranking mechanisms.
Like early SEO, LLM optimization is still imperfect and experimental – but full of opportunity.
Brands that begin tracking citations, analyzing third-party mentions, and aligning SEO with LLM visibility now will gain a measurable advantage as these systems mature.
In short:
Approach LLM optimization as both research and brand-building.
Don’t abandon proven SEO fundamentals. Rather, extend them to how AI systems discover, interpret, and cite information.

AI tools can help teams move faster than ever – but speed alone isn’t a strategy.
As more marketers rely on LLMs to help create and optimize content, credibility becomes the true differentiator.
And as AI systems decide which information to trust, quality signals like accuracy, expertise, and authority matter more than ever.
It’s not just what you write but how you structure it. AI-driven search rewards clear answers, strong organization, and content it can easily interpret.
This article highlights key strategies for smarter AI workflows – from governance and training to editorial oversight – so your content remains accurate, authoritative, and unmistakably human.
More than half of marketers are using AI for creative endeavors like content creation, IAB reports.
Still, AI policies are not always the norm.
Your organization will benefit from clear boundaries and expectations. Creating policies for AI use ensures consistency and accountability.
Only 7% of companies using genAI in marketing have a full-blown governance framework, according to SAS.
However, 63% invest in creating policies that govern how generative AI is used across the organization.

Even a simple, one-page policy can prevent major mistakes and unify efforts across teams that may be doing things differently.
As Cathy McPhillips, chief growth officer at the Marketing Artificial Intelligence Institute, puts it:
So drafting an internal policy sets expectations for AI use in the organization (or at least the creative teams).
When creating a policy, consider the following guidelines:
Logically, the policy will evolve as the technology and regulations change.
It can be easy to fall into the trap of believing AI-generated content is good because it reads well.
LLMs are great at predicting the next best sentence and making it sound convincing.
But reviewing each sentence, paragraph, and the overall structure with a critical eye is absolutely necessary.
Think: Would an expert say it like that? Would you normally write like that? Does it offer the depth of human experience that it should?
“People-first content,” as Google puts it, is really just thinking about the end user and whether what you are putting into the world is adding value.
Any LLM can create mediocre content, and any marketer can publish it. And that’s the problem.
People-first content aligns with Google’s E-E-A-T framework, which outlines the characteristics of high-quality, trustworthy content.
E-E-A-T isn’t a novel idea, but it’s increasingly relevant in a world where AI systems need to determine if your content is good enough to be included in search.
According to evidence in U.S. v. Google LLC, we see quality remains central to ranking:

It suggests that the same quality factors reflected in E-E-A-T likely influence how AI systems assess which pages are trustworthy enough to ground their answers.
So what does E-E-A-T look like practically when working with AI content? You can:

Dig deeper: Writing people-first content: A process and template
LLMs are trained on vast amounts of data – but they’re not trained on your data.
Put in the work to train the LLM, and you can get better results and more efficient workflows.
Here are some ideas.
If you already have a corporate style guide, great – you can use that to train the model. If not, create a simple one-pager that covers things like:
You can refresh this as needed and use it to further train the model over time.
Put together a packet of instructions that prompts the LLM. Here are some ideas to start with:
With that in mind, you can put together a prompt checklist that includes:
Dig deeper: Advanced AI prompt engineering strategies for SEO
A custom GPT is a personalized version of ChatGPT configured with your own instructions and reference materials so it can better create in your brand voice and follow brand rules.
It mostly remembers tone and format, but that doesn’t guarantee the accuracy of output beyond what’s uploaded.
Some companies are exploring RAG (retrieval-augmented generation) to give the LLM direct access to the company’s own knowledge base.
RAG connects an LLM to a private knowledge base, retrieving relevant documents at query time so the model can ground its responses in approved information.
While custom GPTs are easy, no-code setups, RAG implementation is more technical – but there are companies/technologies out there that can make it easier to implement.
That’s why GPTs tend to work best for small or medium-scale projects or for non-technical teams focused on maintaining brand consistency.

RAG, on the other hand, is an option for enterprise-level content generation in industries where accuracy is critical and information changes frequently.
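To make the distinction concrete, here's a heavily simplified RAG-style sketch in Python: retrieve the most relevant approved documents, then ground the prompt in them. The knowledge base snippets, the keyword-overlap retrieval, and the call_llm function are hypothetical placeholders, not a production setup (a real system would use embeddings and your actual LLM of choice).

```python
# Approved, company-owned reference material (placeholder snippets).
KNOWLEDGE_BASE = {
    "pricing-2025": "The Pro plan is $49/month and includes unlimited seats.",
    "brand-voice": "We write in plain language, second person, no jargon.",
    "returns-policy": "Customers can return products within 30 days for a full refund.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval - a real system would use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.items(),
        key=lambda item: len(q_terms & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for whichever LLM you use.
    return f"[LLM draft grounded in]: {prompt[:120]}..."

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using ONLY the approved context below. "
        "If the context doesn't cover it, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How much does the Pro plan cost per month?"))
```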
Create parameters so the model can self-assess the content before further editorial review. You can create a checklist of things to prompt it.
For example:
Even the best AI workflow still depends on trained editors and fact-checkers. This human layer of quality assurance protects accuracy, tone, and credibility.
About 33% of content writers and 24% of marketing managers added AI skills to their LinkedIn profiles in 2024.
Writers and editors need to continue to upskill in the coming year, and, according to the Microsoft 2025 annual Work Trend Index, AI skilling is the top priority.

Professional training creates baseline knowledge so your team gets up to speed faster and can confidently handle outputs consistently.
This includes training on how to effectively use LLMs and how to best create and edit AI content.
In addition, training content teams on SEO helps them build best practices into prompts and drafts.
Ground your AI-assisted content creation in editorial best practices to ensure the highest quality.
This might include:

Build a checklist to use during the review process for quality assurance. Here are some ideas to get you started:
AI is transforming how we create, but it doesn’t change why we create.
Every policy, workflow, and prompt should ultimately support one mission: to deliver accurate, helpful, and human-centered content that strengthens your brand’s authority and improves your visibility in search.
Dig deeper: An AI-assisted content process that outperforms human-only copy
		 
	

Many PPC advertisers obsess over click-through rates, using them as a quick measure of ad performance.
But CTR alone doesn’t tell the whole story – what matters most is what happens after the click. That’s where many campaigns go wrong.
Most advertisers assume the ad with the highest CTR is the best one: it should have a high Quality Score and attract lots of clicks.
In reality, lower-CTR ads often outperform higher-CTR ads in total conversions and revenue.
If all I cared about was CTR, then I could write an ad:
That ad would get an impressive CTR for many keywords, and I’d go out of business pretty quickly, giving away free money.
When creating ads, we must consider:
I can take my free money ad and refine it:
I’ve now:
If you focus solely on CTR and don’t consider attracting the right audience, your advertising will suffer.
While this sentiment applies to both B2C and B2B companies, B2B companies must be exceptionally aware of how their ads appear to consumers versus business searchers.
If you are advertising for a B2B company, you’ll often notice that CTR and conversion rates have an inverse relationship. As CTR increases, conversion rates decrease.
The most common reason for this phenomenon is that consumers and businesses can search for many B2B keywords.
B2B companies must try to show that their products are for businesses, not consumers.
For instance, “safety gates” is a common search term.
The majority of people looking to buy a safety gate are consumers who want to keep pets or babies out of rooms or away from stairs.
However, safety gates and railings are important for businesses with factories, plants, or industrial sites.
These two ads are both for companies that sell safety gates. The first ad’s headlines for Uline could be for a consumer or a business.
It’s not until you look at the description that you realize this is for mezzanines and catwalks, which is something consumers don’t have in their homes.
As many searchers do not read descriptions, this ad will attract both B2B and B2C searchers.

The second ad mentions Industrial in the headline and follows that up with a mention of OSHA compliance in the description and the sitelinks.
While both ads promote similar products, the second one will achieve a better conversion rate because it speaks to a single audience.
We have a client who specializes in factory parts, and when we graph their conversion rates by Quality Score, we can see that as their Quality Score increases, their conversion rates decrease.
They will review their keywords and ads whenever they have a 5+ Quality Score on any B2B or B2C terms.

This same logic does not apply to B2B-only search terms.
Those terms often contain jargon or qualifying statements that only someone looking for B2B services and products would use.
For those keywords, B2B advertisers don’t have to spend ad copy characters weeding out B2C consumers and can focus their ads solely on B2B searchers.
As you are testing various ads to find your best pre-qualifying statements, it can be tricky to examine the metrics. Which one of these would be your best ad?
When CTR and conversion rate send mixed signals, we can use additional metrics to identify our best ads. My favorite two are:
You can also multiply the results by 1,000 to make the numbers easier to digest instead of working with many decimal points. So, we might write:
By using impression metrics, you can find the opportunity for a given set of impressions.
| CTR | Conversion rate | Impressions | Clicks | Conversions | CPI (conversions per 1,000 impressions) | 
| 15% | 3% | 5,000 | 750 | 22.5 | 4.5 | 
| 10% | 7% | 4,000 | 400 | 28 | 7 | 
| 5% | 11% | 4,500 | 225 | 24.75 | 5.5 | 
By doing some simple math, we can see that option 2, with a 10% CTR and a 7% conversion rate, gives us the most total conversions.
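Here's that math spelled out as a quick sanity check, using the illustrative figures from the table above:

```python
# (CTR, conversion rate, impressions) for each ad option from the table above.
ads = {
    "Option 1": (0.15, 0.03, 5_000),
    "Option 2": (0.10, 0.07, 4_000),
    "Option 3": (0.05, 0.11, 4_500),
}

for name, (ctr, conv_rate, impressions) in ads.items():
    clicks = impressions * ctr
    conversions = clicks * conv_rate
    cpi = conversions / impressions * 1_000  # conversions per 1,000 impressions
    print(f"{name}: {clicks:.0f} clicks, {conversions:.2f} conversions, CPI {cpi:.1f}")
```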
Dig deeper: CRO for PPC: Key areas to optimize beyond landing pages
A good CTR helps bring more people to your website, improves your audience size, and can influence your Quality Scores.
However, high CTR ads can easily attract the wrong audience, leading you to waste your budget.
As you are creating headlines, consider your audience.
By considering each of these questions as you create ads, you can find ads that speak to the type of users you want to attract to your site.
These are rarely your highest-CTR ads. Instead, they balance the appeal of a strong CTR with pre-qualifying statements that ensure the clicks you receive have the potential to turn into your next customer.

The web’s purpose is shifting. Once a link graph – a network of pages for users and crawlers to navigate – it’s rapidly becoming a queryable knowledge graph.
For technical SEOs, that means the goal has evolved from optimizing for clicks to optimizing for visibility and even direct machine interaction.
At the forefront of this evolution is NLWeb (Natural Language Web), an open-source project developed by Microsoft.
NLWeb simplifies the creation of natural language interfaces for any website, allowing publishers to transform existing sites into AI-powered applications where users and intelligent agents can query content conversationally – much like interacting with an AI assistant.
Developers suggest NLWeb could play a role similar to HTML in the emerging agentic web.
Its open-source, standards-based design makes it technology-agnostic, ensuring compatibility across vendors and large language models (LLMs).
This positions NLWeb as a foundational framework for long-term digital visibility.
NLWeb proves that structured data isn’t just an SEO best practice for rich results – it’s the foundation of AI readiness.
Its architecture is designed to convert a site’s existing structured data into a semantic, actionable interface for AI systems.
In the age of NLWeb, a website is no longer just a destination. It’s a source of information that AI agents can query programmatically.
The technical requirements confirm that a high-quality schema.org implementation is the primary key to entry.
The NLWeb toolkit begins by crawling the site and extracting the schema markup.
The schema.org JSON-LD format is the preferred and most effective input for the system.
This means the protocol consumes every detail, relationship, and property defined in your schema, from product types to organization entities.
For any data not in JSON-LD, such as RSS feeds, NLWeb is engineered to convert it into schema.org types for effective use.
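As a rough illustration of this first ingestion step – not NLWeb’s actual code – the sketch below pulls JSON-LD blocks out of a page’s HTML, which is the raw material the toolkit works from. The URL is a placeholder.

```python
# Rough sketch of the ingestion step described above: extract schema.org
# JSON-LD blocks from a page's HTML. Illustrative only – not NLWeb's code.
import json
import re
import urllib.request

def extract_json_ld(url):
    html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
    pattern = r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>'
    blocks = []
    for raw in re.findall(pattern, html, flags=re.DOTALL | re.IGNORECASE):
        try:
            blocks.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # skip malformed markup rather than failing the whole crawl
    return blocks

for item in extract_json_ld("https://example.com/"):  # placeholder URL
    if isinstance(item, dict):  # a block can also be a list or @graph; keep it simple here
        print(item.get("@type"), "-", item.get("name"))
```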
Once collected, this structured data is stored in a vector database. This element is critical because it moves the interaction beyond traditional keyword matching.
Vector databases represent text as mathematical vectors, allowing the AI to search based on semantic similarity and meaning.
For example, the system can understand that a query using the term “structured data” is conceptually the same as content marked up with “schema markup.”
This capacity for conceptual understanding is absolutely essential for enabling authentic conversational functionality.
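Here is a toy illustration of that idea, separate from whatever embedding model and vector store a given NLWeb deployment actually uses (the model name below is just a common open-source choice): two documents are scored against a query by meaning rather than by shared keywords.

```python
# Toy illustration of vector similarity: "structured data" and "schema markup"
# land close together in embedding space even with no keyword overlap.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example open-source embedding model

docs = [
    "How to add schema markup to a recipe page",
    "Factory parts catalog and bulk ordering",
]
query = "structured data for recipes"

doc_vecs = model.encode(docs)
query_vec = model.encode([query])[0]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for doc, vec in zip(docs, doc_vecs):
    print(f"{cosine(query_vec, vec):.2f}  {doc}")
# The schema-markup document scores far higher, despite sharing no keywords with the query.
```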
The final layer is the connectivity provided by the Model Context Protocol (MCP).
Every NLWeb instance operates as an MCP server, an emerging standard for packaging and consistently exchanging data between various AI systems and agents.
MCP is currently the most promising path forward for ensuring interoperability in the highly fragmented AI ecosystem.
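Conceptually, the result is that a site can be queried like an API: a client – whether a chat UI or an AI agent – sends a natural-language question and gets structured, schema.org-shaped JSON back. The sketch below assumes a locally running NLWeb instance with an ask-style endpoint; the exact path, parameter names, and response shape are assumptions for illustration, so check the project’s documentation for the real interface.

```python
# Hypothetical sketch of querying a locally running NLWeb instance.
# The endpoint path ("/ask") and parameter name ("query") are assumptions
# for illustration; verify the real interface in the project's docs.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000/ask"  # assumed local NLWeb endpoint

def ask(question):
    url = f"{BASE_URL}?{urllib.parse.urlencode({'query': question})}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

answer = ask("Which of your articles cover schema markup for recipes?")
# The response is expected to be schema.org-shaped JSON that a calling agent can reason over.
print(json.dumps(answer, indent=2))
```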
Since NLWeb relies entirely on crawling and extracting schema markup, the precision, completeness, and interconnectedness of your site’s content knowledge graph determine success.
The key challenge for SEO teams is addressing technical debt.
Custom, in-house solutions to manage AI ingestion are often high-cost, slow to adopt, and create systems that are difficult to scale or incompatible with future standards like MCP.
NLWeb addresses the protocol’s complexity, but it cannot fix faulty data.
If your structured data is poorly maintained, inaccurate, or missing critical entity relationships, the resulting vector database will store flawed semantic information.
This leads inevitably to suboptimal outputs, potentially resulting in inaccurate conversational responses or “hallucinations” by the AI interface.
Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the price of entry for the agentic web.
By leveraging the structured data you already have, NLWeb allows you to unlock new value without starting from scratch, thereby future-proofing your digital strategy.
The need for AI crawlers to process web content efficiently has led to multiple proposed standards.
A comparison between NLWeb and the proposed llms.txt file illustrates a clear divergence between dynamic interaction and passive guidance.
The llms.txt file is a proposed static standard designed to improve the efficiency of AI crawlers by pointing them to a curated, markdown-formatted index of a site’s most important content.
In sharp contrast, NLWeb is a dynamic protocol that establishes a conversational API endpoint.
Its purpose is not just to point to content, but to actively receive natural language queries, process the site’s knowledge graph, and return structured JSON responses using schema.org.
NLWeb fundamentally changes the relationship from “AI reads the site” to “AI queries the site.”
| Attribute | NLWeb | llms.txt |
| --- | --- | --- |
| Primary goal | Enables dynamic, conversational interaction and structured data output | Improves crawler efficiency and guides static content ingestion |
| Operational model | API/protocol (active endpoint) | Static text file (passive guidance) |
| Data format used | Schema.org JSON-LD | Markdown |
| Adoption status | Open project; connectors available for major LLMs, including Gemini, OpenAI, and Anthropic | Proposed standard; not adopted by Google, OpenAI, or other major LLMs |
| Strategic advantage | Unlocks existing schema investment for transactional AI uses, future-proofing content | Reduces computational cost for LLM training/crawling |
The market’s preference for dynamic utility is clear. Despite addressing a real technical challenge for crawlers, llms.txt has failed to gain traction so far.
NLWeb’s functional superiority stems from its ability to enable richer, transactional AI interactions.
It allows AI agents to dynamically reason about and execute complex data queries using structured schema output.
While NLWeb is still an emerging open standard, its value is clear.
It maximizes the utility and discoverability of specialized content that often sits deep in archives or databases.
This value is realized through operational efficiency and stronger brand authority, rather than immediate traffic metrics.
Several organizations are already exploring how NLWeb could let users ask complex questions and receive intelligent answers that synthesize information from multiple resources – something traditional search struggles to deliver.
The ROI comes from reducing user friction and reinforcing the brand as an authoritative, queryable knowledge source.
For website owners and digital marketing professionals, the path forward is undeniable: mandate an entity-first schema audit.
Because NLWeb depends on schema markup, technical SEO teams must prioritize auditing existing JSON-LD for integrity, completeness, and interconnectedness.
Minimalist schema is no longer enough – optimization must be entity-first.
Publishers should ensure their schema accurately reflects the relationships among all entities, products, services, locations, and personnel to provide the context necessary for precise semantic querying.
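Part of that audit can be automated. The sketch below is a deliberately simplistic pass over extracted JSON-LD nodes – the required-relationship list and the input file name are assumptions for illustration, not an official schema.org or NLWeb checklist.

```python
# Simplistic entity-first audit sketch: flag JSON-LD nodes that lack the
# @id anchors and relationships conversational systems rely on.
# REQUIRED_LINKS and the input file name are illustrative assumptions.
import json

REQUIRED_LINKS = {
    "Product": ["brand", "offers"],
    "Article": ["author", "publisher"],
    "LocalBusiness": ["address", "parentOrganization"],
}

def audit(nodes):
    issues = []
    for node in nodes:
        node_type = node.get("@type", "Unknown")
        if "@id" not in node:
            issues.append(f"{node_type}: missing @id (cannot be referenced by other entities)")
        for prop in REQUIRED_LINKS.get(node_type, []):
            if prop not in node:
                issues.append(f"{node_type}: missing '{prop}' relationship")
    return issues

with open("site_jsonld.json") as f:  # hypothetical file of extracted JSON-LD nodes
    nodes = json.load(f)

for issue in audit(nodes):
    print(issue)
```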
The transition to the agentic web is already underway, and NLWeb offers the most viable open-source path to long-term visibility and utility.
It’s a strategic necessity to ensure your organization can communicate effectively as AI agents and LLMs begin integrating conversational protocols for third-party content interaction.

Samsung is reportedly preparing for One UI 8.5, which could debut alongside the Galaxy S26 series early next year. However, recent reports suggest the company might be running late with the Galaxy S26 launch, possibly pushing the event beyond January 2026.
The delay appears to be connected to changes in Samsung’s phone lineup. Earlier rumors suggested the regular Galaxy S26 might be renamed “Pro” and that a slim “Edge” model would replace the “Plus”.
Now, those plans are reportedly canceled. Samsung is going back to the familiar lineup – Galaxy S26, Galaxy S26 Plus, and Galaxy S26 Ultra. The Plus model is back, while the Edge and Pro names are gone.
This could also affect One UI 8.5. Since Galaxy S26 Plus development is running late, the One UI 8.5 beta may also be delayed. And if the Galaxy Unpacked event is postponed to late February or early March, users will have to wait even longer for the next major update.

Pictured: Galaxy S25 Ultra, Galaxy S25 Plus, and the vanilla Galaxy S25
The One UI 8.5 Beta program might still start in late November, giving users an early look at new features. But if the phone launch is postponed, the beta could run for several months before the official release – or Samsung might simply delay the beta program as well.
Despite these delays, the changes could be beneficial. Samsung seems focused on improving hardware and software, with performance and camera upgrades expected in the next series. Going back to a simple naming system also makes it easier for people to understand the lineup.
While fans might be disappointed by the delay, it could mean a more polished experience when new phones and software finally launch. Samsung has not confirmed any dates yet, so users will have to wait for official announcements.
The return of Galaxy S26 Plus and the lineup reshuffle may push back the One UI 8.5 beta, but it could result in better phones and a smoother software update for users in 2026. Stay tuned.

The death of an ad, like the end of the world, doesn’t happen with a bang but with a whimper.
If you’re paying attention, you’ll notice the warning signs: click-through rate (CTR) slips, engagement falls, and cost-per-click (CPC) creeps up.
If you’re not, one day your former top performer is suddenly costing you money.
Creative fatigue – the decline in ad performance caused by overexposure or audience saturation – is often the culprit.
It’s been around as long as advertising itself, but in an era where platforms control targeting, bidding, and even creative testing, it’s become one of the few variables marketers can still influence.
This article explains how to spot early signs of fatigue across PPC platforms before your ROI turns sour, and how to manually refresh your creative in the age of AI-driven optimization.
We’ll look at four key factors:
1. Ad quality – how relevant and resonant the creative is.
2. Creative freshness – every ad, and every platform, has a natural lifespan.
3. Audience saturation – what happens when budget outpaces audience size and the same people see your ad repeatedly.
4. Algorithmic response – how platforms throttle delivery once performance slips.

Low-quality ads burn out much faster than high-quality ones.
To stand the test of time, your creative needs to be both relevant and resonant – it has to connect with the viewer.
But it’s important to remember that creative fatigue isn’t the same as bad creative. Even a brilliant ad will wear out if it’s shown too often or for too long.
Think of it like a joke – no matter how good it is, it stops landing once the audience has heard it a dozen times.
To track ad quality, monitor how your key metrics trend over time – especially CTR, CPC, and conversion rate (CVR).
A high initial CTR followed by a gradual decline usually signals a strong performer reaching the end of its natural run.
Because every campaign operates in a different context, it’s best to compare an ad’s results against your own historical benchmarks rather than rigid KPI targets.
Factor in elements like seasonality and placement to avoid overgeneralizing performance trends.
And to read the data accurately, make sure you’re analyzing results by creative ID, not just by campaign or ad set.
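If your platform exports stats per creative, a short script can surface that trend before it shows up in revenue. A minimal sketch, assuming a weekly export with placeholder column names you’d map to your own report:

```python
# Sketch: week-over-week CTR per creative ID from a platform export.
# Column names (creative_id, week, impressions, clicks) are placeholders.
import pandas as pd

df = pd.read_csv("creative_report.csv")  # hypothetical export

weekly = (
    df.groupby(["creative_id", "week"], as_index=False)[["impressions", "clicks"]].sum()
      .sort_values(["creative_id", "week"])
)
weekly["ctr"] = weekly["clicks"] / weekly["impressions"]
weekly["wow_change"] = weekly.groupby("creative_id")["ctr"].pct_change()

# Flag creatives sliding against their own recent history (the -20% cutoff
# is an arbitrary starting point – tune it to your historical benchmarks).
fading = weekly[weekly["wow_change"] < -0.20]
print(fading[["creative_id", "week", "ctr", "wow_change"]])
```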
Dig deeper: How Google Ads’ AI tools fix creative bottlenecks, streamline asset creation
Every ad has a natural lifespan – and every platform its own life expectancy.
No matter how timely or novel your ad was at launch, your audience will eventually acclimate to its visuals or message.
Keeping your creative fresh helps reset the clock on fatigue.
Refreshing doesn’t have to mean reinventing.
Sometimes a new headline, a different opening shot, or an updated call to action is enough to restore performance. (See the table below for rule-of-thumb refresh guidelines by platform.)

To distinguish a normal lifecycle from an accelerated one that signals deeper issues, track declining performance metrics like CTR and frequency – how many times a user sees your ad.
A high-performing ad typically follows a predictable curve: engagement drops about 20-30% week over week as the ad nears the end of its run. Any faster, and something else needs fixing.
Your refresh rate should also match your spend. Bigger budgets drive higher frequency, which naturally shortens a creative’s lifespan.

You’ve got your “cool ad” – engaging visuals, a catchy hook, and a refresh cadence all mapped out.
You put a big budget behind it, only to watch performance drop like a stone after a single day. Ouch.
You’re likely running into the third factor of creative fatigue: audience saturation – when the same people see your ad again and again, driving performance steadily downward.
Failing to balance budget and audience size leads even the strongest creative to overexposure and a shorter lifespan.
To spot early signs of saturation, track frequency and reach together.
Frequency measures how many times each person sees your ad, while reach counts the number of unique people who’ve seen it.
When frequency rises but reach plateaus, your ad hits the same people repeatedly instead of expanding to new audiences.
Ideally, both numbers should climb in tandem.
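A quick way to operationalize that check – again assuming a weekly export with placeholder column names – is to compare growth in frequency against growth in reach and flag the weeks where one keeps climbing while the other flattens:

```python
# Sketch: flag audience saturation – frequency rising while reach plateaus.
# Columns (week, reach, frequency) are placeholders for your platform export.
import pandas as pd

df = pd.read_csv("ad_set_report.csv").sort_values("week")  # hypothetical export
df["reach_growth"] = df["reach"].pct_change()
df["freq_growth"] = df["frequency"].pct_change()

# Saturation signal: frequency still climbing while reach barely moves.
# The 5% / 1% thresholds are rough starting points, not benchmarks.
saturated = df[(df["freq_growth"] > 0.05) & (df["reach_growth"] < 0.01)]
print(saturated[["week", "reach", "frequency"]])
```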
Some platforms – including Google, Microsoft, LinkedIn, and DSP providers – offer frequency caps to control exposure.
Others, like Meta, Amazon, and TikTok, don’t.
Dig deeper: How to beat audience saturation in PPC: KPIs, methodology and case studies
These days, algorithms don’t just reflect performance – they shape it.
Once an ad starts to underperform, a feedback loop kicks in.
Automated systems reduce delivery, which further hurts performance, which leads to even less delivery.
How each platform evaluates creative health – and how quickly you respond before your ad is demoted – is the fourth and final factor in understanding creative fatigue.
Every platform has its own system for grading creative performance, but the clearest sign of algorithmic demotion is declining impressions or spend despite stable budgets and targeting.
The tricky part is that this kind of underdelivery can look a lot like normal lifecycle decline or audience saturation. In reality, it’s often a machine-level penalty.
To spot it, monitor impression share and spend velocity week over week, at the creative level (not by campaign or ad set).
When impressions or spend drop despite a stable budget and consistent targeting, your ad has likely been demoted by the platform.
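One way to catch this early, with the same caveat that the column names are placeholders, is to compare each creative’s delivery against its own recent trend while confirming the budget hasn’t moved:

```python
# Sketch: flag likely algorithmic demotion – impressions and spend dropping
# at the creative level while the budget stays flat. Columns are placeholders.
import pandas as pd

df = pd.read_csv("creative_delivery.csv").sort_values(["creative_id", "week"])
g = df.groupby("creative_id")

df["impr_change"] = g["impressions"].pct_change()
df["spend_change"] = g["spend"].pct_change()
df["budget_change"] = g["budget"].pct_change()

# Demotion signal: delivery down sharply week over week with an unchanged budget.
demoted = df[
    (df["impr_change"] < -0.30)
    & (df["spend_change"] < -0.30)
    & (df["budget_change"].abs() < 0.01)
]
print(demoted[["creative_id", "week", "impressions", "spend"]])
```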
That doesn’t necessarily mean it’s poor quality.
This usually means the algorithm has lost “confidence” in its ability to achieve your chosen goal, such as engagement or conversions.
Here’s how to recover:
When the algorithm cools your ad, don’t panic.
Act quickly to identify whether the issue lies in quality, freshness, audience, or budget – and make deliberate adjustments, not hasty ones.
Creative fatigue, like death and taxes, is inevitable. Every ad has a beginning, middle, and end.
The key is recognizing those stages early through vigilant data monitoring, so you can extend performance instead of waiting for the crash.
While automation may be taking over much of marketing, ad creative and copy remain one arena where humans still outperform machines.
Great marketers today don’t just make good ads. They know how to sustain them through smart refreshes, rotations, and timely retirements.
Because when you can see the whimper coming, you can make sure your next ad lands with a bang.
Dig deeper: 7 best AI ad creative tools, for beginners to pros

Q4 is here – and for ecommerce brands, that means the biggest sales opportunities of the year are just ahead.
Black Friday, Cyber Monday, and Christmas are just around the corner. Preparation is key to hitting your targets – it’s not too late to act, and the opportunities ahead are huge.
Use this checklist to get up to speed quickly and set your account up for success.
Start with a website audit to identify any red flags. Tools like PageSpeed Insights can help diagnose technical issues.
Encourage clients to review key pages and the checkout process on multiple devices to ensure there are no bottlenecks.
If resources allow, use heatmap or session analysis tools such as Microsoft Clarity or Hotjar to better understand user behavior and improve the on-site experience.
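If you want to script the performance side of the audit, the PageSpeed Insights API can return a Lighthouse performance score per URL. A minimal sketch – the store URLs are placeholders, and the response field names should be verified against Google’s API reference:

```python
# Sketch: pull a performance score from the PageSpeed Insights API for key pages.
# An API key is optional for light use; field names reflect the v5 API but
# verify them against Google's reference before relying on this.
import json
import urllib.parse
import urllib.request

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_score(page_url, strategy="mobile"):
    params = urllib.parse.urlencode({"url": page_url, "strategy": strategy})
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        data = json.load(resp)
    return data["lighthouseResult"]["categories"]["performance"]["score"]

for page in ["https://example-store.com/", "https://example-store.com/checkout"]:  # placeholder URLs
    print(page, psi_score(page))
```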
Double-check that all tracking is configured correctly across platforms.
Don’t just verify that tags are firing – make sure all events are set up to their fullest potential.
For example, confirm high match rates in Meta and ensure Enhanced Conversions is fully configured.
Before the sales period begins, encourage users to join a VIP list for Black Friday or holiday promotions.
This can give them early access or exclusive deals. Set up a separate automated email flow to follow up with these subscribers.
Publish your sale page as soon as possible so Google can crawl and index it for SEO.
The page doesn’t need to be accessible from your site navigation or populated with products right away – the key is to get it live early.
If possible, reuse the same URL from previous years to build on existing SEO equity.
You can also add a data capture form to collect VIP sign-ups until the page goes live with products.
If shipping cutoff dates aren’t clear, many users won’t risk placing an order close to the deadline.
Clearly display both standard and express delivery cutoff dates on your website.
Don’t rely solely on a homepage carousel to promote your sale.
Add a banner or header across all pages so users know a sale is happening, no matter where they land.
Dig deeper: Holiday ecommerce to hit record $253 billion – here’s what’s driving it
As with the on-site VIP sign-ups mentioned earlier, supplementing that strategy with lead generation ads can help grow your email list and build early buzz around your upcoming sale.
Your core creative will be the Black Friday or holiday sale ads that run for most of the campaign.
Keep the messaging and promotion straightforward. Any confusion in a crowded feed will make users scroll past.
Use strong branding, put the offer front and center, and include a clear CTA. On Meta, this often works best as a simple image ad.
Many brands simply extend their Black Friday sale rather than creating Cyber Monday-specific ads and web banners.
Take advantage of the opportunity to give your campaign a fresh angle – both in messaging and offer.
Since it’s often the final day of your sale, you can go bigger on discounts for one day or add a free gift with purchases over a certain amount.
It’s also a great way to move slower-selling inventory left over from Black Friday.
Add urgency to your messaging as the sale nears its end by including countdowns or end dates.
This tactic works especially well for longer campaigns where ad fatigue can set in.
November and December are busy months for ad builds and platform reviews.
Make sure all sale assets are ready several weeks before launch to avoid rushed builds and delays from longer approval times.
Make sure item disapprovals and limited products are kept to a minimum. Double-check that your setup is current.
For example, if your return window has changed, update that information in Google Merchant Center.
Update any lists you plan to use this season.
If you don’t have direct integrations, upload new or revised lists manually.
Review your integrations and confirm that data is flowing correctly.
Start building audiences as soon as your first-party and remarketing lists are refreshed.
Create Meta Lookalike Audiences, Performance Max audience signals, and Custom Audiences.
If you run into volume issues, you’ll have time to adjust or explore alternatives.
Agree on budgets early so you know your spending limits. Don’t plan just by month. Map out weekly spend, too.
You’ll likely want to invest more heavily in the final week of November than in the first.
Updating search ad copy can be tedious and time-consuming.
Tools such as ad customizers let you control and update copy dynamically without editing every RSA manually – saving hours in campaign builds.
Enable sale-related sitelinks, callouts, and promotion extensions across search campaigns so your offers appear everywhere.
In Shopping, set up Google Merchant Center promotions to highlight deals and incentives in your Shopping ad annotations.
Add a dynamic countdown timer to search ads to show exactly when your sale ends.
This feature helps your ads stand out and adds urgency as the sale nears its close.
Bid on generic keywords you wouldn’t normally target, but limit them to remarketing or first-party data audiences.
For example, people searching for “Black Friday deals” who have purchased from your site in the past 30 days already know your brand and are primed to buy again.
If you use Google Ads or Microsoft Ads with a target ROAS strategy, apply seasonality adjustments to prepare the algorithm for higher conversion rates during the sale period.
Remember to apply a negative adjustment once the sale ends to prevent unnecessary spend spikes.
Dig deeper: Seasonal PPC: Your guide to boosting holiday ad performance
Not every tactic will fit your business or resources – and that’s OK.
The key is to focus on what will have the biggest impact on your store.
By addressing most of the points in this checklist, you’ll build a solid foundation for a strong Q4 and set yourself up to capture more sales during the busiest shopping season of the year.
Preparation is everything. The earlier you audit, test, and launch, the smoother your campaigns will run when traffic – and competition – start to surge.


In the early days of SEO, ranking algorithms were easy to game with simple tactics that became known as “black hat” SEO – white text on a white background, hidden links, keyword stuffing, and paid link farms.
Early algorithms weren’t sophisticated enough to detect these schemes, and sites that used them often ranked higher.
Today, large language models power the next generation of search, and a new wave of black hat techniques is emerging to manipulate rankings and prompt results for advantage.
Up to 21% of U.S. users access AI tools like ChatGPT, Claude, Gemini, Copilot, Perplexity, and DeepSeek more than 10 times per month, according to SparkToro.
Overall adoption has jumped from 8% in 2023 to 38% in 2025.

It’s no surprise that brands are chasing visibility – especially while standards and best practices are still taking shape.
One clear sign of this shift is the surge in AI-generated content. Graphite.io and Axios report that AI-written articles now outnumber those created by humans.
Two years ago, Sports Illustrated was caught publishing AI-generated articles under fake writer profiles – a well-intentioned shortcut that backfired.
The move damaged the brand’s credibility without driving additional traffic.
Its authoritativeness, one of the pillars of Google’s E-E-A-T (experience, expertise, authoritativeness, and trustworthiness) framework, was compromised.
While Google continues to emphasize E-E-A-T as the North Star for quality, some brands are testing the limits.
With powerful AI tools now able to execute these tactics faster and at scale, a new wave of black hat practices is emerging.
As black hat GEO gains traction, several distinct tactics are emerging – each designed to exploit how AI models interpret and rank content.
LLMs are being used to automatically produce thousands of low-quality, keyword-stuffed articles, blog posts, or entire websites – often to build private blog networks (PBNs).
The goal is sheer volume, which artificially boosts link authority and keyword rankings without human oversight or original insight.
Search engines still prioritize experience, expertise, authoritativeness, and trustworthiness.
Black hat GEO now uses AI to fabricate these signals – for example, by generating fake author profiles, invented credentials, and synthetic reviews designed to mimic genuine expertise.
Another tactic is a more advanced form of cloaking: serving one version of content to AI crawlers – packed with hidden prompts, keywords, or deceptive schema markup – and another to human users.
The goal is to trick the AI into citing or ranking the content more prominently.
Structured data helps AI understand context, but black hat users can inject misleading or irrelevant schema to misrepresent the page’s true purpose, forcing it into AI-generated answers or rich snippets for unrelated, high-value searches.
AI can quickly generate high volumes of misleading or harmful content targeting competitor brands or industry terms.
The aim is to damage reputations, manipulate rankings, and push legitimate content down in search results.
Dig deeper: Hidden prompt injection: The black hat trick AI outgrew
Even Google surfaces YouTube videos that explain how these tactics work. But just because they’re easy to find doesn’t mean they’re worth trying.
The risks of engaging in – or being targeted by – black hat GEO are significant and far-reaching, threatening a brand’s visibility, revenue, and reputation.
Search engines like Google are deploying increasingly advanced AI-powered detection systems (such as SpamBrain) to identify and penalize these tactics.
Black hat tactics inherently prioritize manipulation over user value, leading to poor user experience, spammy content, and deceptive practices.
The growth of AI-driven platforms is remarkable – but history tends to repeat itself.
Black hat SEO in the age of LLMs is no different.
While the tools have evolved, the principle remains the same: best practices win.
Google has made that clear, and brands that stay focused on quality and authenticity will continue to rise above the noise.
