When we talk about mobile software updates, two names come to mind – Apple and Samsung. Why is that? Apple is consistent with iPhone software development and rollout, two areas where Samsung follows but falls short. Still, the scale of deployment matters, which is why Samsung comes second.
Let me help you understand the scenario and the reason you are here.
Apple
Apple has a systematic channel for software update development and release. The chain starts at its developer conference – WWDC, where all of the new software features are announced. Simultaneously, it releases a developer beta to all eligible devices so that they can participate. The same goes for the public beta campaign; all eligible iPhones can enter without restrictions on the number of models.
After months of testing, Apple releases the new iOS update soon after the latest iPhones launch. Users have grown familiar with this pattern over the years; they know what’s heading their way and when. Importantly, Apple keeps this process consistent.
The same goes for Pixel phones: Google releases updates for all Pixels at once, and the rollout works the same way. Unlike Apple, Google adjusts its release date based on how development is going, which means coordinating with its Android ecosystem partners so their experience doesn’t lag.
On the positive side, Google keeps everyone posted about the development roadmap to the final release date.
What’s important?
It’s about approach. Apple has unmatched consistency; everything is transparent and familiar to users. The same goes for Pixel phone owners.
What about Samsung?
Samsung used to act like a leader in this segment, but not anymore. The company had an annual developer conference, but that is no longer the case. It now announces a beta program with three models, prioritizes new software for new launches, and delays the rollout for old devices.
Unlike Apple and Google, Samsung publishes no prior information about its software development roadmap, no estimated launch date, nor anything else regarding the final rollout. Existing Galaxy owners simply don’t know when they will receive the next update, because there is no pattern to the development or the rollout.
I’ve also seen many people siding with Samsung on this, arguing that it launches far more devices at once, so it can’t release firmware updates for all of them simultaneously. Fine, let’s grant that for a moment. But why can’t it be transparent, share a development roadmap and a final release date? What’s the harm in publishing a timeline and sticking to it?
That’s the consumer-friendly answer, but here’s a more telling fact. Apple lets all eligible iPhones test the latest iOS beta. For those who don’t know, Apple sells roughly as many iPhones each quarter as Samsung sells Galaxy devices combined. And yet Samsung cannot even open its beta program to all S-series models at once.
In the Apple ecosystem, users don’t wait endlessly for an update or protest about it in online forums. The iPhone maker gives them the after-sales service they deserve. Samsung has become the opposite: you buy an expensive Ultra model with the newest software pre-installed, and a year later you have to protest to get the features shipping on the next Ultra, all while wondering when the final release will ever arrive.
One UI 8.5
The latest update has become a topic of discussion, but only because of a lack of transparency. Users first protested Samsung’s decision to deny the latest AI features to the previous flagship. Now that the features are confirmed, they are waiting for the final release.
The beta program opened in December 2025, and testing continues through April 2026. In between, Samsung launched the Galaxy S26 series as the first phones with One UI 8.5. And the beta is still open as of May 3rd.
Galaxy smartphone users don’t know when this update will drop on their devices; there’s no announcement in this regard.
Conclusion
Yes, you may have a different opinion on this, but when it comes to consumer satisfaction, transparency plays the lead role. That element is completely missing from Samsung’s software ecosystem. Consumers want the best after-sales service, and they should get it, because that’s what they’re paying for. Unfortunately, Samsung is taking consumers for granted, offering flashy hardware upgrades with new models while overlooking after-sales service.
I’ve built 10+ SEO agent skills in 34 days. Six worked on the first try. The other four taught me everything I’m about to show you about the folder structure most LinkedIn posts about AI SEO skills gloss over.
What makes these agents reliable isn’t better prompts. It’s the architecture behind them. Here’s how to build an agent from scratch, test it, fix it, and ship it with confidence.
Why most AI SEO skills fail
Here’s what a typical “AI SEO prompt” looks like on LinkedIn:
You are an SEO expert. Analyze the following website and provide a comprehensive audit with recommendations.
That’s it. One prompt. Maybe some formatting instructions. The person posts a screenshot of the output, gets 500 likes, and moves on. The output looks professional. It reads well. It’s also 40% wrong.
I know because I tried this exact approach. Early in the build, I pointed an agent at a website and said, “find SEO issues.” It came back with 20 findings. Eight didn’t exist. The agent had never visited some of the URLs it was reporting on.
Three problems kill single-prompt skills:
No tools: The agent has no way to actually check the website. It’s working from training data and guessing. When you ask, “Does this site have canonical tags?” the agent imagines what the site probably looks like rather than fetching the HTML and parsing it.
No verification: Nobody checks if the output is true. The agent says, “missing meta descriptions on 15 pages.” Which 15? Are those pages even indexed? Are they noindexed on purpose? No one asks. No one verifies.
No memory: Run the same skill twice, you get different output. Different structure. Different severity labels. Sometimes different findings entirely. There’s no consistency because there’s no template, no schema, no record of past runs.
If your skill is a prompt in a single file, you don’t have a skill. You have a coin flip.
Every agent in our system has a workspace. Think of it like a new hire’s desk, stocked with everything they need. Here’s what the workspace looks like for the agent that crawls websites and maps their architecture:
agent-workspace/
    AGENTS.md - instructions, rules, output format
    SOUL.md - personality, principles, quality bar
    scripts/
        crawl_site.js - tool the agent calls to crawl
        parse_sitemap.sh - tool to read XML sitemaps
    references/
        criteria.md - what counts as an issue vs noise
        gotchas.md - known false positives to watch for
    memory/
        runs.log - past execution history
    templates/
        output.md - expected output structure
Six components. One prompt file would cover maybe 20% of this.
AGENTS.md is the instruction manual
I wrote thousands of words of methodology into AGENTS.md. Instead of “crawl the site,” I laid out the steps: “Start with the sitemap. If no sitemap exists, check /sitemap.xml, /sitemap_index.xml, and robots.txt for sitemap references.
Respect crawl-delay. Use a browser user-agent string, never a bare request. If you get 403s, note the pattern and try with different headers before reporting it as a block.”
Scripts are the agent’s tools
The agent calls node crawl_site.js --url to analyze website data. It doesn’t write curl commands from scratch every time. That’s the difference between giving someone a toolbox and telling them to forge their own wrench.
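To make that concrete, here’s a rough sketch of what one of these tools can look like: a small Node helper that encodes the sitemap-discovery steps from AGENTS.md. This is not the exact production script; the user-agent string and fallback order are illustrative.

```javascript
// sitemap_discovery.js - illustrative sketch, not the production tool.
// Follows the AGENTS.md order: /sitemap.xml, /sitemap_index.xml, then robots.txt.
const BROWSER_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36";

async function findSitemaps(origin) {
  const candidates = [`${origin}/sitemap.xml`, `${origin}/sitemap_index.xml`];
  const found = [];

  for (const url of candidates) {
    const res = await fetch(url, { headers: { "User-Agent": BROWSER_UA }, redirect: "follow" });
    if (res.ok) found.push(url);
  }

  // Fall back to robots.txt "Sitemap:" directives if nothing answered above.
  if (found.length === 0) {
    const res = await fetch(`${origin}/robots.txt`, { headers: { "User-Agent": BROWSER_UA } });
    if (res.ok) {
      for (const line of (await res.text()).split("\n")) {
        const match = line.match(/^sitemap:\s*(\S+)/i);
        if (match) found.push(match[1]);
      }
    }
  }
  return found;
}

findSitemaps("https://example.com").then((urls) => console.log(urls));
```

The agent’s judgment is limited to deciding when to run this and what to do with the result; the fetching details never vary from run to run.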
References are the judgment calls
This contains criteria for what counts as an issue. Known false positives to watch for. Edge cases that took me 20 years to learn. The agent reads these when it encounters something ambiguous.
Memory is institutional knowledge
Here I keep a log of past runs:
What it found last time.
How long the crawl took.
What broke.
The next execution benefits from the last.
Templates enforce consistency
This is where I get specific about the output I want: “Use this exact structure. These exact fields. This severity scale.” Output templates are the difference between getting the same quality in run 14 as you did in run 1.
Walkthrough: Building the crawler from scratch
Let me show you exactly how I built the crawler. It maps a site’s architecture, discovers every page, and reports what it finds.
Version 1: The naive approach
I provided the instruction: “Crawl this website and list all pages.”
The agent wrote its own HTTP requests, used bare curl, and got blocked by the first site it touched. Every modern CDN blocks requests without a browser user-agent string, so it was dead on arrival.
Version 2: Added a script
I built crawl_site.js using Playwright. This version used a headless browser and a real user-agent. The agent calls the script instead of writing its own requests.
This worked on small sites, but it crashed on anything over 200 pages. Because there was no rate limiting and no resume capability, it hammered servers until they blocked us.
Version 3: Introducing rate limiting and resume
I added throttling: two requests per second by default, and no more than one request every two seconds for CDN-protected sites. The agent reads robots.txt and adjusts its speed without asking permission. I also added checkpoint files so a crashed crawl can resume from where it stopped.
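Here’s a minimal sketch of what that throttling and resume logic can look like. The checkpoint file name, the defaults, and the fetchPage placeholder are assumptions for illustration, not the production crawler.

```javascript
// Illustrative throttle + resume sketch; names and defaults are assumptions.
const fs = require("fs");

const CHECKPOINT = "memory/crawl_checkpoint.json";
const BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0 Safari/537.36";

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

function loadCheckpoint() {
  // Resume from where the last crawl stopped, if a checkpoint exists.
  return fs.existsSync(CHECKPOINT)
    ? JSON.parse(fs.readFileSync(CHECKPOINT, "utf8"))
    : { queue: [], visited: [] };
}

async function fetchPage(url) {
  // Placeholder fetch; the real tool also extracts links and pushes them onto the queue.
  const res = await fetch(url, { headers: { "User-Agent": BROWSER_UA }, redirect: "follow" });
  return res.status;
}

async function crawl(startUrls, { requestsPerSecond = 2 } = {}) {
  const state = loadCheckpoint();
  if (state.queue.length === 0) state.queue = [...startUrls];
  const visited = new Set(state.visited);

  while (state.queue.length > 0) {
    const url = state.queue.shift();
    if (visited.has(url)) continue;

    await fetchPage(url);
    visited.add(url);

    // Persist progress after every page so a crash can resume instead of restarting.
    state.visited = [...visited];
    fs.writeFileSync(CHECKPOINT, JSON.stringify(state));

    // Throttle: 2 req/s by default; back off further for CDN-protected sites.
    await sleep(1000 / requestsPerSecond);
  }
}

crawl(["https://example.com/"]);
```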
This worked on most sites, but it failed on sites that require JavaScript rendering.
Version 4: JavaScript rendering
This time, I added a browser rendering mode. The agent detects whether a site is a single-page app (React, Next.js, Angular) and automatically switches to full browser rendering.
It also compares rendered HTML against source HTML, and I found real issues this way: Sites where the source HTML was an empty shell but the rendered page was full of content. Google might or might not render it properly. Now we check both.
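A rough sketch of that source-versus-rendered check, assuming Playwright is installed; the SPA markers and the “empty shell” threshold below are illustrative guesses, not the exact heuristics we ship.

```javascript
// Illustrative source-vs-rendered comparison; markers and threshold are assumptions.
const { chromium } = require("playwright");

async function checkRendering(url) {
  // Raw source HTML, as a crawler without JavaScript would see it.
  const source = await (await fetch(url)).text();

  // Fully rendered HTML, as a real browser sees it.
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle" });
  const rendered = await page.content();
  await browser.close();

  // Crude SPA signal: framework markers plus a near-empty body in the source.
  const looksLikeSpa = /id="root"|id="__next"|ng-version/.test(source);
  const sourceText = source.replace(/<[^>]+>/g, "").trim().length;
  const renderedText = rendered.replace(/<[^>]+>/g, "").trim().length;

  return {
    looksLikeSpa,
    emptyShell: looksLikeSpa && sourceText < renderedText * 0.2,
  };
}

checkRendering("https://example.com").then((result) => console.log(result));
```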
This version worked on everything, but the output was inconsistent between runs.
Version 5: Time for templates and memory
For this version, I added templates/output.md with exact fields: URL count, sitemap coverage, blocked paths, response code distribution, render mode used, and issues found. This way every run produces the same structure.
I also added memory/runs.log. The agent appends a summary after every execution. Next time it runs, it reads the log and can compare results, like “Last crawl found 485 pages. This crawl found 487. Two new pages added.”
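The append step itself is tiny. Here’s an illustrative version; the field names below are an example shape, not the exact schema of our runs.log.

```javascript
// Illustrative run-log append; field names are an example, not the exact schema.
const fs = require("fs");

const LOG_PATH = "memory/runs.log";

function logRun({ site, pagesCrawled, issuesFound, durationSeconds }) {
  const entry = {
    timestamp: new Date().toISOString(),
    site,
    pagesCrawled,
    issuesFound,
    durationSeconds,
  };
  // One JSON object per line keeps the log easy to append and easy to diff.
  fs.appendFileSync(LOG_PATH, JSON.stringify(entry) + "\n");
}

function lastRun() {
  // The agent reads the final line before a new run so it can compare results.
  if (!fs.existsSync(LOG_PATH)) return null;
  const lines = fs.readFileSync(LOG_PATH, "utf8").trim().split("\n");
  return lines.length ? JSON.parse(lines[lines.length - 1]) : null;
}

logRun({ site: "example.com", pagesCrawled: 487, issuesFound: 16, durationSeconds: 312 });
console.log(lastRun());
```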
Version 5 is what we run today. Five iterations in one day of building.
THE CRAWLER'S EVOLUTION
v1: Raw curl → blocked everywhere
v2: Playwright script → crashed on large sites
v3: Rate limiting → couldn't handle JS sites
v4: Browser rendering → inconsistent output
v5: Templates + memory → stable, consistent, reliable
Time: 1 day. Lesson: the first version never works.
The pattern is always the same: Start small, hit a wall, fix the wall, hit the next wall.
Five versions in one day doesn’t mean five failures. It means five lessons that are now permanently encoded. I’ve rebuilt delivery systems four times over 20 years. The process doesn’t change. You start with what’s elegant, then reality hits, and you end up with what works.
Tip: Don’t try to build the perfect skill on the first attempt. Build the simplest thing that could possibly work. Run it on real data and watch it fail. The failures tell you exactly what to add next. Every version of our crawler was a direct response to a specific failure. Not a feature we imagined. A problem we hit.
This is the most important architectural decision I made.
When you write “use curl to fetch the sitemap” in your instructions, the agent generates a curl command from scratch every time. Sometimes it adds the right headers. Sometimes it doesn’t. Sometimes it follows redirects. Sometimes it forgets.
When you give the agent a script called parse_sitemap.sh, it calls the script. The script always has the right headers, always follows redirects, and always handles edge cases. The agent’s judgment goes into WHEN to call the tool and WHAT to do with the results. The tool handles HOW.
Our agents have tools for everything:
crawl_site.js: Playwright-based crawler with rate limiting, resume, and rendering
parse_sitemap.sh: Fetches and parses XML sitemaps, counts URLs, detects nested indexes
check_status.sh: Tests HTTP response codes with proper user-agent strings
extract_links.sh: Pulls internal and external links from page HTML
The agent decides which tools to use and what parameters to set. The crawler chooses its own crawl speed based on what it encounters. It reads robots.txt and adjusts. It has judgment within guardrails.
Think of it this way: You give a new hire a CRM, not instructions on how to build a database. The tools are the CRM. The instructions are the process for using them.
Progressive disclosure: Don’t dump everything at once
Here’s a mistake I made early: I put everything in AGENTS.md. Every rule. Every edge case. Every gotcha. Thousands of words.
The agent got confused. It had too much context and it started prioritizing obscure edge cases over common tasks. It would spend time checking for hash routing issues on a WordPress blog.
The fix: progressive disclosure.
Core rules that affect the 80% case go in AGENTS.md. This is what the agent needs to know for every single run.
Edge cases go in references/gotchas.md. The agent reads this file when it encounters something ambiguous. Not before every task. Only when it needs it.
Criteria for severity scoring go in references/criteria.md. The agent checks this when it finds an issue and needs to decide how bad it is. Not upfront.
This is the same way a skilled employee operates. They know the core process by heart. They check the handbook when something weird comes up. They don’t re-read the entire handbook before answering every email.
If your agent output is inconsistent but your instructions are detailed, the problem is usually too much context. Agents, like new hires, perform better with clear priorities and a reference shelf than with a 50-page manual they have to digest before every task.
The 10 gotchas: Failure modes that will burn you
Every one of these lessons cost me hours. They’re now encoded in our agents’ references/gotchas.md files so they can’t happen again.
Agents hallucinate data they can’t verify
I asked the research agent to find law firms and count their attorneys. It made every number up. It had never visited any of their websites.
Only ask agents to produce data they can actually fetch and verify. Separate what they know (training data) from what they can prove (fetched data).
Knowledge doesn’t transfer between agents
A fix I figured out on day one (use a browser user-agent string to avoid CDN blocks) had to be re-taught to every new agent. On day 34, a brand-new agent hit the exact same problem.
Agents don’t share memories. Encode shared lessons in a common gotchas file that multiple agents can reference.
Output format drifts between runs
The same prompt can produce different field names: “note” vs. “assessment,” “lead_score” vs. “qualification_rating.” Run it twice and you get two different schemas.
The fix: Create strict output templates with exact field names. Not “write a report.” “Use this exact template with these exact fields.”
Agents confidently report issues that don’t exist
The first three audits delivered false positives with total confidence.
The fix wasn’t a better prompt. It was a better boss. A dedicated reviewer agent whose only job is to verify everyone else’s work. The same reason code review exists for human developers.
Bare HTTP requests get blocked everywhere
Every modern CDN blocks requests without a browser user-agent string. The crawler learned this on audit number two when an entire site returned 403s.
All it required was a one-line fix, and now it’s in the gotchas file. Every new agent reads it on day one.
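For reference, the fix really is one line: send a browser-style User-Agent header instead of a bare request. Something like this, where the exact UA string is just an example:

```javascript
// The one-line fix: identify as a browser instead of sending a bare request,
// which many CDNs answer with a 403.
async function fetchWithBrowserUA(url) {
  return fetch(url, {
    headers: {
      "User-Agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
    },
  });
}
```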
Don’t guess URL paths
Agents love to construct URLs they think should exist: /about-us, /blog, /contact. Half the time, those URLs 404.
My rule is: Fetch the homepage first, read the navigation, follow real links. Never guess.
‘Done’ vs. ‘in review’ matters
Agents marked tasks as “done” when posting their findings. Wrong. “Done” means approved. “In review” means waiting for human verification.
This small distinction has a huge impact on workflow clarity when you have 10 agents posting work simultaneously.
Categories must be hyper-specific
“Fintech” is useless for prospecting because it’s too broad. “PI law firms in Houston” works. Every company in a category should directly compete with every other company.
My first attempt at sales categories was “Personal finance & fintech.” A crypto exchange doesn’t compete with a budgeting app. Lesson learned in 20 minutes.
Never ask an LLM to compile data
Unless you want fabricated results. I asked an agent to summarize findings from five separate reports into one document. It invented findings that weren’t in any of the source reports.
Always build data compilations programmatically. Script it. Never prompt it.
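Here’s what “script it, never prompt it” can look like in practice. This is an illustrative sketch that assumes each report is a JSON file with a findings array; a merge like this can only output findings that exist in the source files, because code doesn’t hallucinate.

```javascript
// Illustrative programmatic merge of separate report files into one compilation.
const fs = require("fs");
const path = require("path");

function compileReports(reportDir) {
  const findings = [];
  for (const file of fs.readdirSync(reportDir)) {
    if (!file.endsWith(".json")) continue;
    const report = JSON.parse(fs.readFileSync(path.join(reportDir, file), "utf8"));
    for (const finding of report.findings || []) {
      findings.push({ ...finding, source: file }); // keep provenance for the reviewer
    }
  }
  return findings;
}

fs.writeFileSync(
  "compiled_findings.json",
  JSON.stringify(compileReports("./reports"), null, 2)
);
```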
Agents will try things you never planned
The research agent tried to call an API we never set up. It assumed we had access because it knew the API existed.
The fix: Be explicit about what tools are available. If a script doesn’t exist in the scripts folder, the agent can’t use it. Boundaries prevent creative failures.
Build the reviewer first
This is counterintuitive. When you’re excited about building, you want to build the workers. The crawler. The analyzers. The fun parts.
Build the reviewer first. Without a review layer, you have no way to measure quality. You ship the first audit and it looks great. But 40% of the findings are wrong. You don’t know that until a client or a colleague spots it.
Our review agent reads every finding from every specialist agent. It checks:
Does the evidence support the claim?
Is the severity appropriate for the actual impact?
Are there duplicates across different specialists?
Did the agent check what it says it checked?
That single agent was the biggest quality improvement I made. Bigger than any prompt tweak. Bigger than any new tool.
The human approval rate across 270 internal linking recommendations: 99.6%. That number exists because a reviewer verifies every single one.
I’ve seen the same pattern with human SEO teams for 20 years. The teams that produce great work aren’t the ones with the best analysts. They’re the ones with the best review process. The analysis is table stakes. The review is the product.
BUILD ORDER (WHAT I LEARNED THE HARD WAY)
What I did first: Build workers → Ship output → Discover quality problems → Build reviewer
What I should have done: Build reviewer → Build workers → Ship reviewed output → Iterate both
The reviewer defines quality. Build it first. Everything else gets measured against it.
Tip: If you’re building multiple agents, the reviewer should be the first agent you build. Define what “good output” looks like before you build the thing that produces output. Otherwise, you’re shipping hallucinations with formatting. I learned this across three audits that were embarrassing in hindsight.
The validation standard (Our unfair advantage)
The reviewer catches technical errors. But there’s a higher bar than “technically correct.”
We have a real SEO agency with real clients and a team with 50 years of combined experience. Every agent finding gets validated against one question: “Would we stake our reputation on this?”
Would we actually send this to a client, put our name on the report, and tell the developer to build it?
Below are four tests we use for every finding:
The Google engineer test: If this client’s cousin works at Google, would they read this finding and nod? Would they say, “Yes, this is a real issue, this makes sense”? If the answer is no, it doesn’t ship.
The developer test: Can a developer reproduce this without asking a single follow-up question? “Fix your canonicals” fails. “Change CANONICAL_BASE_URL from http to https in your production .env” passes.
The agency reputation test: Would we defend this finding in a client meeting? If I’d be embarrassed explaining it to a technical CMO, it gets cut.
The implementation test: Is this specific enough to actually fix? Not “improve your page speed” but “your hero video is 3.4MB, which is 72% of total page weight. Serve a compressed version to mobile. Here’s the file.”
This is our unfair advantage. We’re not building agents in a vacuum. Most people building AI SEO tools have never run a real audit. They don’t know what “good” looks like. We do. We’ve been delivering it for 20 years with real clients. That’s why our approval rate is 99.6%.
Sandbox testing: Train on planted bugs
You don’t train an agent on real client sites. You build a test environment where you KNOW the answers. We built two sandbox websites with SEO issues we planted on purpose:
A WordPress-style site with 27+ planted issues: missing canonicals, redirect chains, orphan pages, duplicate content, broken schema markup.
A Node.js site simulating React/Next.js/Angular patterns with ~90 planted issues: empty SPA shells, hash routing, stale cached pages, hydration mismatches, cloaking.
The training loop:
Run agent against sandbox.
Compare agent’s findings to known issues.
Agent missed something? Fix the instructions.
Agent reported a false positive? Add it to gotchas.md.
Re-run. Compare again.
Only when it passes the sandbox consistently does it touch real data.
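The comparison step in that loop is also a script rather than a prompt. A rough sketch, assuming the planted issues and the agent’s findings both live in JSON files; the file names and key format are illustrative.

```javascript
// Illustrative sandbox scoring: planted.json is the answer key, findings.json is the agent's output.
const fs = require("fs");

const planted = JSON.parse(fs.readFileSync("sandbox/planted.json", "utf8"));
const findings = JSON.parse(fs.readFileSync("output/findings.json", "utf8"));

const key = (issue) => `${issue.url}::${issue.type}`;
const plantedSet = new Set(planted.map(key));
const foundSet = new Set(findings.map(key));

const missed = planted.filter((i) => !foundSet.has(key(i)));            // fix the instructions
const falsePositives = findings.filter((i) => !plantedSet.has(key(i))); // add to gotchas.md

console.log(`Recall: ${planted.length - missed.length}/${planted.length}`);
console.log(`False positives: ${falsePositives.length}`);
```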
Think of it like a driving test course. Every accident on real roads becomes a new obstacle on the course. New drivers face every known challenge before they hit the highway.
The sandbox is a living test suite. Every verified issue from a real audit gets baked back in. It only gets harder. The agents only get better.
Consistency: The unsexy secret
Nobody writes about this because it’s boring. But consistency is what separates a demo from a product.
Three things that make output consistent:
Templates: Every agent has an output template in templates/output.md: Exact fields, structure, and severity scale. If the output looks different every run, you don’t need a better prompt. You need a template file.
Run logs: After every execution, the agent appends a summary to memory/runs.log. Timestamp, site, pages crawled, issues found, duration. The next run reads this log. It knows what happened last time. It can compare and provide outputs like, “Found 14 issues last run. Found 16 this run. 2 new issues identified.”
Schema enforcement: Field names are locked: “severity” not “priority,” “url” not “page_url,” “description” not “summary.” When you let field names drift, downstream tooling breaks. Templates solve this permanently.
If your agent output looks different every run, you need a template file, not a better prompt. I cannot stress this enough. The single fastest way to improve quality for any agent is a strict output template.
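One lightweight way to enforce that is a schema gate that rejects drifted field names before anything moves downstream. A sketch, using the locked field names mentioned above; the severity scale shown is just an example.

```javascript
// Illustrative schema gate: reject output whose field names have drifted.
const REQUIRED_FIELDS = ["url", "severity", "description"]; // locked names, not "page_url" or "priority"
const ALLOWED_SEVERITIES = ["low", "medium", "high", "critical"]; // example scale

function validateFinding(finding) {
  const errors = [];
  for (const field of REQUIRED_FIELDS) {
    if (!(field in finding)) errors.push(`missing field: ${field}`);
  }
  if (finding.severity && !ALLOWED_SEVERITIES.includes(finding.severity)) {
    errors.push(`unknown severity: ${finding.severity}`);
  }
  return errors;
}

// A drifted finding fails immediately instead of breaking downstream tooling later.
console.log(validateFinding({ page_url: "https://example.com", priority: "high" }));
```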
The stack that makes it work
A quick note on infrastructure, because the tools matter.
Our agents run on OpenClaw. It’s the runtime that handles wake-ups, sessions, memory, and tool routing. Think of it as the operating system the agents run on. When an agent finishes one task and needs to pick up the next, OpenClaw handles that transition. When an agent needs to remember what it did last session, OpenClaw provides that memory.
Paperclip is the company OS. Org charts, goals, issue tracking, task assignments. It’s where agents coordinate. When the crawler finishes mapping a site and needs to hand off to the specialist agents, Paperclip manages that handoff through its issue system. Agents create tasks for each other. Auto-wake on assignment.
Claude Code is the builder. Every script, every agent instruction file, every tool was built with Claude Code running Opus 4.6. I’m a vibe coder with 20 years of SEO expertise and zero traditional programming training. Claude Code turns domain knowledge into working software.
The combination: OpenClaw runs the agents. Paperclip coordinates them. Claude Code builds everything.
This process resulted in 14+ audits completed with 12 to 20 developer-ready tickets per audit, including exact URLs and fix instructions. All produced in hours, not weeks.
We have a 99.6% approval rate on internal linking recommendations on 270 links across two sites, verified by a dedicated review process.
We completed more than 80 SEO checks mapped across seven specialist agents. Each check has expected outcomes, evidence requirements, and false positive rules. Every finding is specific (e.g., “the main app JavaScript bundle is 78% unused. Here are the exact files to fix”).
That level of specificity comes from the skill architecture. The folder structure. The tools. The references. The templates. The review layer. Not the prompt.
If you want to build SEO agent skills that actually work, stop writing prompts and start building workspaces. Give your agents tools, not instructions. Test on sandboxes, not clients.
Build the reviewer first. Enforce templates. Log everything. The first version will fail. The fifth version will surprise you.
This is how you turn agent output into something repeatable. The same system produces the same quality — whether it’s the first audit or the 14th — because every step is structured, verified, and encoded.
Not because the AI is smarter. Because the architecture is.
Over the past few years, Performance Max has gone from an opaque experiment to a more capable — though still imperfect — campaign type for B2B marketers.
The fundamentals haven’t changed: skepticism still matters, first-party data is critical, experimentation is non-negotiable, and actionable reporting drives optimization. What has changed is how much better Google has gotten at operationalizing those inputs.
That means your Performance Max strategy needs to adapt. Here are five best practices for running more effective PMax campaigns for B2B today.
1. Guide AI with the right inputs
In 2022, given the automated nature of PMax campaigns and the aggressive way Google reps were pushing them, I predicted we’d see an accelerated move toward AI integration. That’s certainly played out, probably in part because of competitive pressures introduced by ChatGPT and the like.
AI Max for Search (launched in 2025) and PMax are both being prioritized by Google, and that’s not necessarily a bad thing, since Google hasn’t deprecated standard Search campaigns and has shipped a slew of helpful updates that make PMax more viable for B2B.
Three updates worth using include:
Search themes, which are useful for more precise targeting.
Brand exclusions, which help minimize CPC inflation and over-investment on less-incremental queries.
Account-level channel reporting, which gives you a single dashboard look at performance across campaigns. For this feature, segment by conversion metrics to drill down on ROI by channel. You’ll quickly see overperformers where you can increase investment and underperformers that cry out for further optimization or reduced budget.
B2B lead quality in search campaigns has always been a challenge, and PMax’s relative lack of advertiser control makes that challenge tougher. I’ve pushed offline conversion tracking (OCT) since we’ve had that capability, but it’s an absolute non-negotiable for B2B campaigns.
Citing the phase-out of third-party cookies that still hasn’t happened (!), Google officially sunsetted Similar Audiences in 2023, which was a big loss for advertisers.
To compensate, understand and adapt according to the nature of PMax targeting, which is based on audience signals. Feed the AI high-quality first-party data (CRM lists) and let the algorithm find “lookalikes” through its own internal signals.
CRM lists are obviously critical for B2B, which should give you even more incentive to clean up and segment your CRM data. Audience lists closest to the point of revenue (SQLs, for example, or revenue data if you don’t have enough closed-won deals to send strong signals) are especially valuable for finding high-value new users.
Creative is an important part of the puzzle for PMax. Good creative can prompt the right audience to engage, and great creative can deter the wrong audience from engaging.
Because YouTube is now a massive part of PMax campaigns, video — which has never been a B2B strength — should be prioritized more than ever for performance marketing.
Google has made this easier by adding the ability to build AI-generated assets right in the Google Ads interface. Just recently, they launched an important complementary feature in beta: PMax A/B creative testing to help advertisers understand which creatives are actually driving performance, and to use test-and-control structures to surface winning (and losing) elements.
A major source of frustration with PMax has been a lack of transparency into results. Over the last few years, Google has introduced reporting updates to address some of those concerns.
Search term insights and auction insights in the Insights tab provide more visibility into performance. Search term insights show how your ads perform for the queries users actually type, including how those ads are being matched and served. This added nuance makes optimization more precise.
Auction insights add competitive context, showing how your campaigns perform against others in the same auctions through metrics like impression share and outranking share.
Finally, asset-level reporting brings visibility to creative performance, with data on impressions, clicks, cost, and conversions for each asset.
Together, these updates give you a clearer view into what’s driving performance — and where to focus optimization efforts.
Taken together, recent updates make PMax more viable for B2B marketers than it used to be, especially for those with strong first-party data to train bidding algorithms and a need to find new customer pockets.
After more than 10 years in marketing, I still prefer having controllable levers — and I’m not willing to fully trust Google to act more in my (or my clients’) best interests than its own. Use everything at your disposal to make PMax campaigns work for you, and keep an eye out for new features Google releases that can give you more visibility and control over your account performance.
Programmatic SEO (pSEO) has been viewed with suspicion by the market. For many SEOs, the term is synonymous with low-quality pages, duplicate content, and the old tactic of “find and replace” city names in static templates.
Google’s spam policies on scaled content abuse are clear: generating vast amounts of unoriginal content primarily to manipulate search rankings is a violation.
Modern pSEO replaces mass page generation with an infrastructure that answers thousands of specific search intents with local nuance and semantic depth at a scale that isn’t possible manually.
This blueprint shows how to evolve from syntax-based pSEO (swapping keywords) to semantics-based pSEO (meaning and context), using a methodology we’ve applied to major players in Brazil.
The fallacy of the static template vs. semantic granularity
The most common mistake when starting a pSEO project is starting with the template, not the data. The old mindset said: “I have a template for ‘Best Hotel in [City].’ I’ll replicate this for 500 cities.”
The problem? The search intent for “Best Hotel in [Las Vegas]” (focused on nightlife, casinos, and luxury) can be radically different from the intent for “Best Hotel in [Orlando]” (focused on family suites, park shuttles, and pools). The user priorities, amenities sought, and decision-making criteria change completely.
The semantic approach requires us to use AI to granularize content. Instead of just swapping the {{City}} variable, we use LLMs to rewrite entire sections of the page based on the specific travel intent of that destination.
We don’t want to create 1,000 pages that say the same thing. We want 1,000 pages that answer 1,000 unique travel needs while maintaining a scalable technical structure.
Before writing a single line of content, you must answer a critical question: Where do I have permission to rank?
Many pSEO projects fail because they try to cover topics where the domain lacks historical authority. The solution we developed involves a deep analysis of topic clusters based on real Google Search Console (GSC) data, not just third-party search volume.
The authority map methodology works in three stages:
Cluster audit: Identify which topics the domain already dominates, which are opportunities, and where semantic gaps exist.
Priority definition: pSEO should be used surgically to fill these gaps and strengthen topical authority, not to shoot in all directions.
Connection with the calendar: The pSEO strategy must be born from this data. If GSC shows you have growing authority in a topic like “Mortgage Credit,” that is where scale should be applied first.
From there, AI suggests themes and direction, taking into account seasonality and brand guide specifications. This approach transforms pSEO from a “gamble” into a tactic of territorial defense and expansion based on proprietary data.
Solving ‘brand hallucination’: Context governance
The biggest barrier to AI adoption in enterprise companies is brand consistency. How do you ensure that 500 AI-generated articles don’t sound generic or, even worse, hallucinate information outside the company’s tone of voice?
The answer lies in context governance. Instead of relying on isolated prompts, the pSEO architecture must include a brand guidelines layer that acts as a guardian before text generation. This means systematically injecting:
Brand persona: (e.g., “We are technical, but accessible”).
Negative constraints: (e.g., “Never use the word ‘cheap,’ use ‘affordable’”).
Proprietary data: Institutional information that AI doesn’t have in its training data.
By centralizing these guidelines in a digital brand guide that feeds all AI agents, we ensure that multiple sites within the same corporate group (such as a retail conglomerate) maintain their distinct verbal identities, even when producing content on the same topic (like Black Friday) simultaneously.
The AI stops being a “junior copywriter” and starts acting as a specialist trained in the company’s culture.
The architecture: The semantic mesh (internal linking)
You’ve created 1,000 excellent pages. How do you ensure Google finds and values all of them? The answer isn’t using “related posts” plugins that only look for matching tags. You need to create a strategy based on real data.
The end of the ‘dead end’
You don’t want the user to land on a page and leave. You want to offer the next logical step. Cross-reference search intent with the destination:
The practical example: If a user lands on the site searching for “What is a CRM,” they are in the discovery phase. If that page doesn’t link semantically to “Advantages of [your company’s] CRM,” the user journey “dies” there. The semantic mesh connects the question to the solution.
Strategic reasoning in practice
Instead of randomness, our analysis works based on semantic meaning. The AI identifies:
“I noticed you are about to write about ‘customer retention.’ We have an older article about ‘churn rate’ that complements this topic perfectly. Insert a link to it.”
The tool suggests links between these pages because the context is relevant, strengthening the site’s Topical Mesh.
In programmatic SEO projects, where site depth can grow rapidly, this automation via vectors is the only way to ensure no good page gets forgotten at the bottom of the index.
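To make the vector idea concrete, here’s a minimal sketch of similarity-based link suggestions. It assumes each page already has an embedding from whatever model you use; only the similarity logic is shown, and the 0.8 threshold is purely illustrative.

```javascript
// Minimal sketch of vector-based internal link suggestions.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// pages: [{ url, embedding: number[] }]; threshold is an illustrative value.
function suggestInternalLinks(pages, threshold = 0.8) {
  const suggestions = [];
  for (let i = 0; i < pages.length; i++) {
    for (let j = i + 1; j < pages.length; j++) {
      const score = cosineSimilarity(pages[i].embedding, pages[j].embedding);
      if (score >= threshold) {
        suggestions.push({ from: pages[i].url, to: pages[j].url, score });
      }
    }
  }
  // Highest-similarity pairs first: these are the links least likely to be noise.
  return suggestions.sort((a, b) => b.score - a.score);
}
```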
This closes the loop of topical authority, ensuring no page generated at scale becomes an orphan page.
Case study: Regionalization and seasonality at scale
Theory is nice, but seeing it in practice is even better. Let’s analyze the case of Ânima Educação, one of the largest private education players in Brazil, with about 310,000 students and 18 higher education institutions.
The challenge
The National High School Exam (ENEM) is the “Black Friday” of Brazilian education. Search volume explodes in a short period, competition is brutal, and search intents shift rapidly (from “how to study” to “what is my score good for”). Furthermore, Brazil has continental dimensions; the questions of a student in the Northeast are different from those of a student in the extreme South.
The execution
Using the semantic pSEO methodology and the brand governance mentioned above, it was possible to structure complete coverage of the candidate journey — from exam preparation to the release of grades.
We ensured that all 18 brands were positioned to answer student questions at the exact moment of the search, respecting local nuances.
The results
Scale with precision: Over five months, hundreds of undergraduate course pages and articles were optimized or created with granular local relevance.
Business impact: Surpassed the organic revenue target by 110% during the critical ENEM season.
Omnichannel dominance: Visibility across Google Search, Google Discover, AI Overviews, and LLMs like Gemini and ChatGPT.
Strategic shift: The SEO team transitioned from repetitive manual tasks to high-level strategic oversight.
The technical guardian: Conversational monitoring
Scaling content without scaling technical monitoring is a recipe for disaster. Publishing 500 pages that result in 404 errors, redirect loops, or poor Core Web Vitals (CWV) can destroy the site’s crawl budget.
Modern pSEO requires a layer of real-time technical SEO. It isn’t enough to wait for the monthly report. You need to connect data to the workflow.
The trend now is the use of technical SEO agents — conversational interfaces that allow the professional to ask the data: “Of the 200 pages published today, which ones have indexing issues?” or “Which clusters are suffering from high LCP?”
This closes the cycle:
Planning (authority map).
Execution (pSEO with brand governance and semantic linking).
Monitoring (real-time conversational technical agents).
Programmatic SEO has ceased to be about volume to become about relevance. Success won’t come from publishing 10,000 pages tomorrow, but from building an infrastructure that delivers genuine value at scale.
You can use this semantic pSEO roadmap to start your transformation:
Start with data, not templates: Use your authority map (GSC) to identify where you already have permission to grow. Don’t waste resources attacking territories where your brand has no history.
Implement context governance: Before scaling, create the “rules of the game.” Inject your brand guidelines and proprietary data into prompts to avoid generic content and hallucinations. The AI should sound like your best expert.
Build bridges, not islands: Ensure every new page is integrated into a robust semantic mesh. Use internal linking to transfer authority and guide the user toward conversion, avoiding dead ends.
Monitor with AI: Abandon sporadic manual audits. Adopt technical agents that monitor your site’s health in real time as you scale.
The future of SEO isn’t about who creates the most content. It’s about who can unite the scale of the machine with the sensitivity of the human to deliver the best answer, at the right moment, for each individual user.
The trial is live, limited to the U.S. for now, and moving faster than you likely expected. ChatGPT ads launched Feb. 9 for logged-in users on Free and Go tiers, with 600+ advertisers already in.
With 800 million weekly active users, a global rollout of ChatGPT ads is a matter of when, not if.
OpenAI has confirmed the next expansion to Australia, New Zealand, and Canada. The latest update from Adthena trialists suggests the UK could see ads as early as mid-May.
We’ve tracked ChatGPT ad placements since rollout. With an index of 50,000+ daily placements across B2B software, ecommerce, fintech, and consumer verticals, we’ve had a front-row view of how this format is evolving. Here’s what we’ve found.
What ChatGPT ads actually look like
ChatGPT ads appear inline within conversation responses. When you ask something with commercial intent like “best weekend getaway” or “top running shoes under $100,” a sponsored result can appear alongside the AI’s answer, clearly labeled “Sponsored.”
This isn’t a search bar. It’s a conversation. Users arrive already engaged, already researching, often close to a decision.
The format is tighter than traditional search: no sitelinks or extensions — just a headline, short body copy, and a destination.
But here’s what we didn’t expect. Our data shows what we’re calling the Adthena “Double Parked” phenomenon: a single brand appearing twice in the same response.
We spotted New Balance with two separate sponsored placements in one ChatGPT answer. This raises a key question around visibility, frequency, and what it means to own a conversation on this platform.
10 things we’ve learned from 50,000+ daily placements
If you move fast, this is a rare moment: a new format, an uncontested landscape, and data most competitors don’t have yet. Here’s what it shows.
Headlines follow a “Brand: Benefit” formula. A name, a colon, a value claim. Think “Betterment: 5.25% APY Cash Account.” Dominant across top performers.
Almost every ad leads with the brand name. Awareness thinking for a format where users are already deep in a conversation, not just entering a search bar.
Headlines average just 30 characters, with a ceiling around 36. The constraint forces hyper-concise messaging and every word earns its place.
Body copy runs around 19 words, structured as two tight sentences. One lead proof point, one offer or nudge. One reason to click.
Context mirroring is a defining feature. The strongest ads echo the user’s query directly. A running shoe ad referencing “the transition from 5k to 21.1k” isn’t a coincidence.
The $ symbol drives conversion. Specific dollar figures, precise APY rates, credit amounts. Concrete claims consistently outperform vague promises in intent-heavy environments.
Numbers dominate body copy. Specs, trial lengths, rates. Hard numbers feel more native and trustworthy than soft superlatives in a research-led environment.
“Free” is the most common conversion lever. It removes friction for users already in research mode and close to a decision.
CTAs are action-specific, and generic “Learn More” is virtually absent. “Open Account,” “Shop Cell Phones,” “Claim Credits.” Every CTA names the brand, offer, or next step.
Tone is confident and measured. Exclamation marks are rare. The best ads mirror ChatGPT’s calm register—hype punctuation kills trust here.
What this means for your paid search strategy
Top-performing brands in ChatGPT don’t repurpose Google ad copy and hope for the best. They write for a conversational, intent-rich environment where users are already halfway through a decision before the ad appears.
Lead with your brand name. Anchor value in specifics. Make low-friction offers central to your creative. If you’re not thinking about context mirroring, you’re leaving performance on the table.
The bigger question is visibility. If your competitors show up in ChatGPT conversations and you don’t, you’re not just missing clicks — you’re missing the conversation.
See exactly what’s happening with Adthena’s ChatGPT Ads Intelligence
Knowing the trends is one thing. Knowing what your competitors are doing on your exact prompts is another. That’s the problem we set out to solve.
Right now, ChatGPT ads give you impressions and clicks — nothing more. No competitive context, no prompt-level visibility, no insight into who else appears in the same conversations or where you’re missing coverage. You’re optimizing blind.
Adthena’s ChatGPT Ads Intelligence changes that. Here’s what you get.
Your performance, in context
The Ads Performance tab gives you a live snapshot of your ChatGPT activity: ad presence rate, top-performing intent group, total impressions, average CTR, and unique competitors detected. The trend chart shows your presence over time so you can clearly see whether you’re gaining or losing momentum.
Know which topics you’re winning and where to close the gap
The Topics and Keywords Analysis view breaks down performance by intent group, showing your ad presence rate against the competitor average. Each group includes a built-in tactical recommendation, so you always know your next move.
See your own ads as users see them
The Ads Sampling tab shows all your ChatGPT creatives with their headline, description, image, and format. The insight panel highlights your top-performing creative and surfaces optimization opportunities, like pairing a price anchor with a time-limited offer.
Understand exactly what competitors are running
The Competitor Creative Analysis panel breaks down rival ads across your tracked prompts: the images they use, the dominant copy themes, and their format mix. No more guessing what your competition is doing.
Never miss a shift in the competitive landscape
The Ads Benchmarking tab shows who’s advertising on your prompts and how their presence changes week to week. The “What changed this week?” feed flags new entrants and share shifts in plain language before your next campaign review.
Find the gaps before your competitors do
The Competitor Gap Analysis table shows every prompt where competitors have presence and you don’t, flagged by intent group and competitor count. A clear, prioritized view of where to expand your ChatGPT coverage.
The first prompt is the new first click
We’re tracking early-stage data from a platform still in limited rollout. As OpenAI expands to new countries and the advertiser base grows, the competitive landscape will shift fast. Brands building their ChatGPT presence now — learning the format, testing creative, mapping competitive gaps — will have a meaningful head start over those who wait.
Don’t let competitors win the first prompt. Join the product waitlist to uncover your ChatGPT ads landscape.
In the meantime, get your ads ready with Adthena’s free ChatGPT AdBridge. Connect your Google Ads account and we’ll build your ChatGPT ads setup with AI-enriched campaigns and smarter negative keywords — delivered to your inbox, ready to import.
The Kentucky Derby has been kidnapped and dressed in a seersucker suit.
Not stolen in the dramatic sense. No masked men. No midnight escape. No, this was a rather polite abduction. Signed contracts. Corporate lanyards. A slow, suffocating takeover by people who think bourbon goes well with a quarterly earnings report.
Somewhere along the line, the Derby stopped being a Louisville event and started being a global product. A traveling circus of money men and brand strategists who fly in, drink just enough mint julep to say they did, and then vanish back into whatever glass tower they crawled out of. They leave behind nothing but higher prices.
Because that’s what it is now. Not a celebration. Not a civic ritual. A product. A gleaming, overpriced, overmanaged product sitting inside Churchill Downs like a prize hog at auction, fattened up for people who don’t know the difference between Central Avenue and a country club valet line.
I remember stories from my uncles about the 70s, and they don’t sound like this manicured hallucination we’ve got now. They made their way to the infield where rules were lax and nobody was asking for a credit card. That was the Derby.
Not this sanitized pageant of wealth and soft hands. Back then it was loud and ugly and alive. Central Avenue would explode into a block party that didn’t ask permission from anyone with a clipboard. Music pouring out of cars, grills smoking, strangers arguing and laughing and, occasionally, falling down.
It was local, and it was ours.
Now, it’s been polished until it squeaks.
The fun has been trimmed back like an overgrown hedge. The rough edges sanded down by people who fear anything that can’t be controlled or neatly packaged between commercial breaks. The infield is “managed.” Every inch of it is branded and quietly sold off to the highest bidder, little slices of a once living thing.
And the people of Louisville?
We stand around like spectators at our own funeral.
We complain. Oh, we complain beautifully. We talk about how the corporations have ruined everything, how the Derby doesn’t feel the same, how it’s all gotten too big, too expensive, too sterile. We say it with conviction, with a little bourbon in our system, like we’re delivering some grand indictment of the modern world.
And then we go home.
That’s the part that gnaws at me.
We’ve lost the nerve for it. Somewhere along the way, we traded participation for observation. We let the thing slip out of our hands and, now, we act surprised that it doesn’t recognize us anymore.
You don’t lose something like the Derby all at once. It erodes. Piece by piece. A corporate tent here. A price hike there. A new rule, a new barrier, a new reason why the people who built it should stand a little farther back.
Until one day, you look up and realize you’re just a guest in your own city.
Getting it back is not going to come from a press release. Not from a committee. Not from some carefully branded “return to roots” campaign sponsored by the same people who paved over those roots in the first place.
If Louisville wants its Derby back, it's going to have to return to what's real.
Bring back Central Avenue, not as a nostalgia act with security barricades and corporate-approved fun. Let it breathe. Let it get loud. Let people take up space without asking permission from someone in a polo shirt with a logo stitched over where a heart should be.
Because culture doesn’t live in VIP sections.
It lives in the cracks. In the noise.
And here’s the ugly truth nobody likes to admit.
Those corporations didn’t take the Derby from us. We handed it over, one polite concession at a time.
So, if you want it back, you're not going to get it back by reminiscing about good times fifty years ago. Change will require us to do more.
Otherwise, you can keep your hats, your cocktails, your tidy little version of tradition.
And the real Derby, that wild, grimy, beautiful beast, will stay exactly where it is now, locked behind velvet ropes, owned by people who never loved it in the first place.
PSG’s 5-4 win over Bayern Munich on Tuesday night was undoubtedly one of the games of the season in terms of sheer end-to-end entertainment.
But does it, as so much of social media would have you think, spell bad news for Arsenal in the Champions League? I’m not so convinced.
Don’t get me wrong, it’s an absolute joy watching elite attacking players expressing themselves like this – Khvicha Kvaratskhelia and Desire Doue at one end, and Michael Olise and Luis Diaz at the other is as good as anything in world football right now.
But those rushing to condemn the Premier League, and particularly Arsenal, for moving away from this kind of football to a more tactical and defensive-minded game that has dominated in England this season, would do well not to overlook some important factors at play here.
Why PSG and Bayern look so much better than Premier League sides
Free-flowing attacking football is a luxury Arsenal and other Premier League teams can’t afford. Even Manchester City haven’t really played like their old selves for much of this season, and they found Championship side Southampton a tougher opponent in the FA Cup semi-final than PSG or Bayern face in most of their domestic matches.
Bayern have already won the Bundesliga, as they do most seasons without any opposition, while PSG will probably come out on top again in Ligue 1 this term, as you’d expect for two clubs whose wage bills dwarf what their so-called competition in these leagues can spend.
In that context, it’s no surprise that Luis Enrique and Vincent Kompany can afford to focus on the kind of freedom and self-expression we’d all love to see from the best attacking players in the Premier League; there’s little need for control and discipline when your nearest challengers are probably about on a par with mid-table sides like Brentford.
Case in point: Arsenal beat Bayern 3-1 at the Emirates Stadium in the league phase of this competition earlier this season and Harry Kane didn’t have a single shot. Not so easy when you’re not playing against the likes of Hamburg and St Pauli, is it, Harry?
Where’s the defending?
“Five-four is a hockey score, not a football score,” Jose Mourinho said back in 2004 after a particularly crazy North London Derby. “In a three-against-three training match, if the score reaches 5-4 I send the players back to the dressing rooms as they are not defending properly. So to get a result like that in a game of 11 against 11 is disgraceful.”
So, let’s be real, if Bayern or PSG defend like this against Arsenal in the Champions League final, then the Gunners are in with a serious shout.
Don’t get angry at me yet – I’m not saying it’s a guaranteed win for Arsenal, and it would be unwise to even make them the automatic favourites against Atletico Madrid.
But don’t let a vastly entertaining game cloud your judgement – these are great players, yes, but they are still being allowed to look far better than they are because they’re not being tested week in, week out the way the best players in the Premier League are. And Arsenal have already shown they can shut them down if their defence is at its best.
Far from being a warning for Arsenal if they go through against Atletico Madrid, this game tonight just shows there’s a perfectly winnable final ahead for Mikel Arteta’s side. Arguably the tougher test might come against the more defensive-minded Diego Simeone in Madrid tomorrow night.
Hey Sammy Fans, I need to get this off my chest. I was a true Samsung Fold fanboy. The first time I unfolded my Galaxy Z Fold (It was Fold3), it felt like I was holding the future of smartphones. The reason was obvious – a big screen for videos, split screen, and cool folding tricks. I showed it off to everyone. “This is the best phone ever.”
After almost a year of daily use of my first foldable, the problems started showing up. The center crease got deeper and more annoying. Then one day, when I unfolded my phone, the inner screen was dead. No drops, no scratches, nothing crazy. I took it to the service center, hoping for warranty support. The service guy looked at it, called it “user damage,” and quoted me $600-700 for the inner screen replacement. That’s more than half the price of a normal flagship phone. I was shocked.
This isn’t just my bad luck; the same happened with my Fold4. I have seen many people in the Samsung community and on X with the same stories: hinges getting loose or making noise, screens failing after 8-12 months, and dust getting inside somehow.
Here’s another annoying issue. The protective film on the inner screen started peeling off by itself in less than 6 months. I didn’t misuse the phone, the film just lifted from the middle and started bubbling and peeling on its own. It looked ugly and felt terrible while using. I tried pushing it back, but it kept coming off worse. I ended up replacing the film once or twice on both my Fold5 and Fold6, but the same problem kept returning.
Samsung now promises around 200,000 folds, or even more, but in real life (my experience has been horrible) the phones don’t live up to that promise. If anything goes wrong, the repair cost is brutal. Often, instead of repairing your current Fold, it feels like the company is pushing you to just buy a new one.
There’s another downside – battery life. The foldables have average battery life, which isn’t great for such an expensive device. The cameras are fine, but nothing compared to regular Samsung S series phones. The phone feels thick even after Samsung made the Fold7 thinner. Multitasking is cool at first, but while watching videos you notice little bugs and that distracting crease.
Look, I still love the idea of foldables. With a foldable, you get a big screen when you need it and a small one when you don’t. But right now, Samsung’s foldables are not ready to be a primary phone.
If you really want a foldable, wait a few years for a more refined Fold. Or honestly? Just get a normal flagship like the Galaxy S26 Ultra. It’s cheaper, tougher, and you won’t stress every time you open your phone. Repairs cost less, too.
I stopped using my Samsung Fold (Fold3 after 11 months, Fold4 after 6 months, Fold5 in just 2 months, Fold6 in 1 month, and Fold7 after 10 days) and went back to a regular phone. No regrets. The excitement disappeared quickly once I started worrying about the next repair bill.
If you are thinking about buying a Samsung Fold, please read this first. I love the concept but hate the experience. It’s “never again” for me. What about you? Drop your experiences on X at @thesammyfans.
Nikhyl Singhal, founder of The Skip, explains how AI is shifting product management from a communication-heavy role into one focused on judgment and building.