Google is giving advertisers new visibility into whether its automated recommendations actually drive performance – a long-standing blind spot in the platform.
What's happening. A new "Results" tab within Recommendations shows the incremental impact of bidding and budget changes after they've been applied, allowing marketers to evaluate outcomes instead of relying on assumptions.
How it works. The feature attributes performance changes to specific recommendations, helping advertisers understand what effect adjustments like budget increases or bid strategy shifts had on results.
Why we care. Marketers can now validate whether recommendations improved performance, making it easier to decide which automated suggestions are worth adopting in the future.
Between the lines. Google has a vested interest in encouraging adoption of its recommendations, so providing performance data could build trust – but it also raises questions about how that impact is measured.
The catch. Advertisers may question whether the reported results are fully objective or skewed toward showing positive outcomes, given Google's incentives.
What to watch. How detailed and transparent the reporting becomes – and whether advertisers see mixed or negative results alongside wins.
Bottom line. Google is moving from "trust us" to "here's the proof," but advertisers will be watching closely to see how impartial that proof really is.
First seen. This update was first spotted by Arpan Banerjee, who shared the new tab on LinkedIn.
Google is giving advertisers more control over how AI generates ad copy, making it easier to scale campaigns without losing brand consistency.
What's happening. Google Ads is rolling out a beta feature that allows marketers to copy text guidelines from existing campaigns and apply them to new ones, eliminating the need to rewrite brand rules from scratch.
How it works. Advertisers can replicate approved tone, style and messaging rules across campaigns in one click, ensuring AI-generated ads stay aligned with brand standards while reducing setup time.
Why we care. The feature helps teams launch campaigns faster by reusing what already works, while maintaining consistency across large accounts where multiple campaigns run simultaneously.
Between the lines. This shift reflects a growing demand from marketers to "train" AI systems rather than rely on them blindly, effectively turning brand guidelines into reusable inputs for automation.
Bottom line. AI is speeding up ad creation, but control is becoming the real differentiator – and Google is starting to hand more of it back to advertisers.
First spotted. Paid media expert Arpan Banerjee flagged the update after sharing the alert on LinkedIn.
UK publisher Kwalee and independent studio Out of the Blue are pleased to announce that the Lovecraftian narrative puzzle adventure Call of the Elder Gods will launch on May 12, 2026. The game is coming to PC (via Steam), Nintendo Switch 2, PlayStation 5, and Xbox Series X|S, and is available day one with Xbox Game Pass. A sequel to 2020's award-winning Call of the Sea, Call of the Elder Gods is a single-player, first-person puzzle adventure with a strong narrative focus.
Players step into the roles of Professor Harry Everhart and newcomer Evangeline Drayton, solving intricate puzzles driven by logic, observation, and environmental interaction. Together, they journey from New England to the Australian desert, the frozen Arctic, and the ancient city of Pnakotus, searching for missing loved ones while confronting personal grief.
A little over a year after the release of Assassin's Creed Shadows, Ubisoft has shipped Title Update 1.1.10, which, aside from the usual bug fixes, adds PSSR 2 support for PS5 Pro players and a number of quality-of-life updates and gameplay changes across all of the game's platforms. The full changelog is available via an Ubisoft news post.
Ubisoft has not detailed the exact visual upgrades wrought by the addition of PSSR 2, but we can likely expect smoother, higher frame rates with sharper upscaling, as has been seen in other games like Resident Evil: Requiem and Cyberpunk 2077. As of the new update, all players can access the Bo staff, which was previously locked behind the Claws of Awaji expansion. The Switch 2 version of Assassin's Creed Shadows also now features mouse and keyboard support, and the laundry list of bug fixes includes UI fixes for damage indicators, a fix for an unintentional +100% stat cap in some cases, issues with fast travel points not being available, and progression getting stuck at 97.89% despite all content being completed.
According to longtime Intel watcher Jaykihn, Nova Lake's integrated graphics will be built around Xe3, the same generation used in Panther Lake integrated GPUs. Jaykihn had previously suggested that Nova Lake would include an Xe4 media component but has since walked that back, stating that "there is nothing Xe4 on...
Intel is working on its own neural texture compression with similar compression performance to Nvidia's counterpart. Best of all, Intel has a fallback version of its compression tech that will work with GPUs that don't come with Intel's XMX engine.
Anthropic's latest frontier AI model, Claude Mythos Preview, is so adept at finding software vulnerabilities that the lab is holding it back to allow companies and institutions to proactively patch their products against the 'thousands' of bugs it has already uncovered.
ZeroTwo lets you access the combined capabilities of Claude, Perplexity, ChatGPT, Manus, and Higgsfield. These top AI platforms each have unique features that give them special abilities beyond their models. Now you can use all of them without paying for several subscriptions. Perplexity's agentic search, Claude's agentic connector, ChatGPT's apps, and Higgsfield's AI tools for creatives are all available on one platform.
The platform also offers deep research, canvas mode, and shared access to threads and projects. Plans include unlimited messages, expanded memory, priority performance, and team features for businesses.
OrbitMeet is a browser-based AI meeting co-pilot that listens to your meetings in real time, checks in every 75 seconds to surface questions you might have missed, and builds your summary as you talk – no plugins or installation required.
It detects action items by speaker name, generates follow-up documents such as emails, memos, and action trackers in seconds, and works across Zoom, Teams, Google Meet, or in-person meetings. It's designed for consultants, founders, and distributed teams working in multiple languages. A free plan is available, with Pro at $20.50 CAD/month.
It's officially marathon season, and if you're looking for new gear, I've rounded up our top-rated running shoes and smartwatches that are currently on sale at Amazon.
Google says its AI-powered advertising tools are starting to deliver meaningful results, including major revenue gains for some retailers, as it experiments with how ads work in AI-driven search.
The big picture. Fears that AI chatbots like ChatGPT would disrupt Google's core search business haven't materialized, and instead the company's ads business continues to grow, suggesting AI may be expanding how people search rather than replacing it.
By the numbers:
Alphabet Inc. surpassed $400 billion in revenue in 2025.
Q4 ad revenue: $82.28 billion (+13.5% YoY).
YouTube ads: $11.38 billion (up ~9% YoY).
What's happening. Google is embedding ads into its AI-powered search experiences, including AI Mode powered by Gemini. It is also introducing new ad formats designed for conversational queries, tools that let brands shape how they appear in AI-generated answers, and a new "business agent" feature enabling companies like Poshmark and Reebok to control how their products are represented.
Driving the results. AI-driven campaigns like Performance Max and AI Max match ads to more detailed and conversational search intent. Google says queries in AI Mode are often two to three times longer than traditional searches, giving the system more context to connect users with relevant products. Aritzia, for example, reported an 80% increase in revenue after adopting AI Max.
How it works. The system scans a retailer's website and creative assets, interprets user intent from conversational queries, and dynamically matches products and messaging in real time. This is increasingly important given that 15% of daily searches are entirely new (according to Google) and cannot be predicted through traditional keyword targeting.
Why we care. Google is shifting from keyword-based ads to intent-driven, AI-matched advertising, meaning campaigns can reach consumers with far more precision at the moment they're ready to buy. As search becomes more conversational and unpredictable, advertisers who rely on traditional targeting risk falling behind those using AI-driven formats that automatically adapt to new user behavior.
Commerce push. Google is also advancing its commerce strategy through a Universal Commerce Protocol developed with Shopify, which allows purchases to happen directly within AI conversations.
What they're saying. Google positions itself as a "matchmaker" rather than a retailer, emphasizing that AI helps deliver more relevant and personalized ads while allowing brands to maintain control over their messaging and build user trust by showing the right product at the right moment.
What's next. Google says it has no current plans to introduce ads directly into Gemini but will continue testing and expanding advertising within AI Mode, including more personalized offers and AI-driven shopping experiences.
Bottom line. AI is not replacing search but reshaping it, and for Google that shift is making advertising more conversational, more targeted and, in some cases, significantly more profitable.
Google Search is evolving beyond links and answers into a system that completes tasks, potentially changing how users interact with the web at a fundamental level. That's according to Alphabet CEO Sundar Pichai, speaking on the Cheeky Pint podcast.
Why we care. Google is signaling a move from information retrieval to task execution.
Search becoming agentic. Traditional search behavior is already changing and will continue to, Pichai said.
"If I fast-forward, a lot of what are just information-seeking queries will be agentic in Search. You'll be completing tasks. You'll have many threads running."
Pichai also described a future where Google Search acts less like a list of results and more like a system that coordinates actions:
"Search would be an agent manager in which you're doing a lot of things. I think in some ways, I use Antigravity today, and you have a bunch of agents doing stuff. I can see search doing versions of those things, and you're getting a bunch of stuff done."
AI Mode is already changing queries. Users are already adapting their behavior in Google's AI-powered search experiences, Pichai said:
"But today in AI Mode in Search, people do deep research queries. That doesn't quite fit the definition of what you're saying. But people adapted to that. I think people will do long-running tasks."
Search vs. Gemini overlap. Despite the rise of Gemini, Pichai said Google isn't replacing Search with a chatbot. Instead, the two will coexist – and diverge (echoing what Liz Reid said last month):
"We are doing both Search and Gemini. They will overlap in certain ways. They will profoundly diverge in certain ways. I think it's good to have both and embrace it."
Google's AI Overviews answered a standard factual benchmark correctly 91% of the time in February, up from 85% in October, according to a New York Times analysis with AI startup Oumi.
Why we care. We've watched Google shift from linking to sources to summarizing them for more than two years. This report suggests AI Overviews are improving, but still mix correct answers, weak sourcing, and clear errors in ways that can mislead searchers and reshape which publishers get visibility and clicks.
The details. Oumi tested 4,326 Google searches using SimpleQA, a widely used benchmark for measuring factual accuracy in AI systems, the Times reported. It found AI Overviews were accurate 85% of the time with Gemini 2 and 91% after an upgrade to Gemini 3.
The bigger problem may be sourcing. Oumi found that more than half of the correct February responses were "ungrounded," meaning the linked sources didn't fully support the answer.
That makes verification harder. The answer may be right, but the cited pages may not clearly show why.
What changed. Accuracy improved between October and February, but grounding worsened. In October, 37% of correct answers were ungrounded; in February, that rose to 56%.
Examples. The Times highlighted several misses:
For a query about when Bob Marley's home became a museum, Google answered 1987; the correct year was 1986, according to the Times, and the cited sources either didn't support the claim or conflicted with it.
For a query about Yo-Yo Ma and the Classical Music Hall of Fame, Google linked to the organization's site but still said there was no record of his induction.
In another case, Google gave the correct age at Dick Drago's death but misstated his date of death.
Google's response. Google disputed the Times analysis, saying the study used a flawed benchmark and didn't reflect what people actually search. Google spokesperson Ned Adriance told the Times the study had "serious holes."
Google also said AI Overviews use search ranking and safety systems to reduce spam, and noted it has long warned that AI responses can contain mistakes.
Microsoft has announced April's wave of Xbox Game Pass additions, and it includes the Call of Duty: Modern Warfare reboot, Hades 2, and many other titles.
Cyberpunk 2077 has been out since 2020, but CD Projekt Red's dedication to the game has not waned, with a new April 8 update bringing enhancements to the game on the PS5 Pro. As detailed in a new PlayStation Blog post, the PS5 Pro update will bring a slew of visual changes. The biggest change is that the game will now use PSSR to upscale to 4K, and it will feature ray-traced lighting, shadows, and reflections. The update will be free for players on a PlayStation 5 Pro.
Cyberpunk 2077's PS5 Pro Enhanced version will feature three gameplay modes, giving gamers the choice to optimize their experience for visuals or performance. Ray Tracing Pro mode will enable all RT features, including RT reflections, ambient occlusion, skylight, shadows, and emissive lighting, with a frame rate target of 40 FPS on VRR displays or 30 FPS without VRR. Performance mode will feature the highest frame rate target, at 90 FPS with "high image fidelity," although it isn't specified which features are enabled in this mode. Meanwhile, Ray Tracing mode will target 60 FPS with "select ray tracing enhancements" enabled, although CDPR again doesn't specify resolution or RT enhancements for this mode.
PeaZip 11.0 refines one of the most capable free archivers with faster browsing, smoother drag-and-drop across tabs, and a cleaner, more responsive UI. The update also improves scaling, adds flexible icon rendering, and introduces batch archive testing, alongside the usual fixes and cleanup.
The newly announced Netflix Playground is an all-in-one app designed to give children a curated gaming experience built around familiar cartoon characters. The streaming giant describes it as an ever-growing library of instantly playable games for kids aged 8 and under.
Shadow OS is the first decision-making app built on 64 hexagrams, the same system Carl Jung studied for over two decades and called his most significant method for surfacing what the unconscious already knows. Other decision apps use random spinner wheels, AI chatbots validate whatever you say, and astrology apps offer forecasts open to interpretation. Shadow OS gives you one committed answer: move forward, hold, or pull back.
BeMusic AI is a free AI music generator that turns text prompts into fully produced, royalty-free songs in under 30 seconds. Choose from 50+ genres, adjust mood, tempo, and energy, and download high-quality WAV or MP3 for videos, games, podcasts, and ads. It also offers tools to write lyrics, create instrumentals, convert audio to MIDI, edit MIDI, make AI covers, remove vocals, extend tracks, and analyze songs. Use it to avoid copyright issues and keep full ownership of every track.
The Russia-linked threat actor known as APT28 (aka Forest Blizzard) has been linked to a new campaign that has compromised insecure MikroTik and TP-Link routers and modified their settings to turn them into malicious infrastructure under its control as part of a cyber espionage campaign since at least May 2025.
The large-scale exploitation campaign has been codenamed
The Nikon D5's still-unbeaten low-light performance and proven build quality made it NASA's choice for the Artemis II mission's most important photographs.
The news follows a report from Nikkei Asia on Tuesday that raised concerns the company's foldable iPhone could be delayed due to challenges during the phone's engineering test phase.
Google has begun placing sponsored ad units directly inside the Images tab of mobile search results – a new placement that eligible campaigns can access without any changes to existing keyword targeting.
What's happening. When a user navigates to the Images tab within Google Search on mobile, they may now see sponsored units appearing within the image grid. Each unit shows a full image creative as the primary visual alongside text, and is clearly labelled "Sponsored" – consistent with how Google labels ads elsewhere in search results.
How it works. Eligible campaigns can serve into the Images tab without any changes to keyword targeting or campaign structure. The placement draws from existing image assets, meaning advertisers running Search or Performance Max campaigns with strong visual creative are best positioned to benefit. No separate image-only campaign setup is required.
Why we care. This is a meaningful expansion of Google's paid search real estate. For product-led and catalog-heavy advertisers, the Images tab is where purchase-intent discovery often starts – and now ads can appear right in that moment. If your campaigns already use strong image assets, you may be picking up incremental impressions without lifting a finger.
The big picture. Early indications suggest this placement behaves more like a visual discovery surface than classic paid search. Expect high impression volume but lower click-through rates – more in line with display or Shopping than traditional text ads. That said, the assist value in multi-touch conversion paths could be significant, particularly for retail and direct-to-consumer brands. Treat it as upper-funnel reach, not a last-click channel.
What to watch. Google has not made a formal announcement, and there is no dedicated reporting breakdown for Images tab placements yet. Monitor your impression share and segment data closely to understand whether this placement is contributing – and whether it's eating into organic image visibility for competitors.
First seen. The placement was spotted by Google Ads expert Matteo Braghetta, who shared the sighting on LinkedIn. No official documentation has been published by Google at the time of writing.
Over 30% of outbound clicks go to just 10 domains, with Google alone taking more than 20%, according to a new Semrush study published today.
ChatGPT also relies less on the live web, triggering search on 34.5% of queries, down from 46% in late 2024.
The big picture. ChatGPT's growth has plateaued, and its role in how users navigate the web is evolving unevenly.
Referral traffic from ChatGPT grew 206% from January 2025 to January 2026.
The details. Most ChatGPT referral traffic still goes to a small set of sites, even as more sites receive some traffic.
Google accounts for 21.6% of all ChatGPT referral traffic.
The next nine domains bring the top 10 to just over 30% of referrals.
Most other sites get a long tail of minimal traffic.
The number of domains receiving referrals expanded, peaking at around 260,000 in 2025 before settling near 170,000.
Why we care. Visibility in ChatGPT doesn't translate evenly into traffic, and you'll likely see marginal referral impact. The decline in search-triggered queries also limits your chances to earn citations and traffic.
When ChatGPT searches. It defaults to pre-trained knowledge and uses web search in specific cases, including:
User requests for sources.
Questions about recent events.
Situations where the model lacks confidence.
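The three reported triggers can be sketched as a toy decision rule. Everything here is illustrative: the function name, keyword lists, and the 0.6 confidence threshold are invented for the example, not anything OpenAI has published.

```python
def should_search(prompt: str, model_confidence: float) -> bool:
    """Toy rule mirroring the three reported triggers for live web search."""
    text = prompt.lower()
    asks_for_sources = any(w in text for w in ("source", "cite", "link"))
    recent_event = any(w in text for w in ("today", "latest", "this week"))
    low_confidence = model_confidence < 0.6  # invented threshold
    return asks_for_sources or recent_event or low_confidence

print(should_search("What's the latest on the Semrush study?", 0.9))  # True
print(should_search("Explain recursion with an example", 0.9))        # False
```

A real system would score these signals with a model rather than keyword matching, but the shape is the same: default to parametric knowledge, and escalate to search only when a trigger fires.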
Behavior shift. Most ChatGPT prompts still don't resemble traditional search queries.
Between 65% and 85% of prompts don't match standard keywords, reflecting more complex, conversational inputs.
Meanwhile, engagement is deepening. Queries per session jumped 50% in late 2025.
About the data. Semrush analyzed more than 1 billion lines of U.S. clickstream data from October 2024 to February 2026 across a 200 million-user panel, tracking prompts, referral destinations, and search usage.
Malware ploys from bad actors are getting more elaborate: axios maintainer Jason Saayman explains how the registry hijacking was weeks in the making and involved a fake Teams update that delivered a trojan.
2026 has thus far been a busy year for gaming mice releases, with hits like the Razer Viper V4 Pro and VXE's upcoming Logitech G305 alternative already launched. Now, SteelSeries seems to be making something of a comeback in the gaming mouse game with the as-yet-unreleased Rival Pro and Rival Pro Mini, which have shown up on Reddit in what appears to be an accidental early leak. If the retail packaging shown off in the Reddit post is anything to go by, the mouse will have a couple of nifty features to set it apart from the rest of SteelSeries's lineup.
Despite the Rival moniker, the Rival Pro Mini looks a lot more like the SteelSeries Prime wireless mouse than the rest of the SteelSeries Rival gaming mice. The Rival Pro Mini will weigh in at 49 g and use the PixArt PAW 3950 sensor that has become ubiquitous in flagship gaming mice in recent years. The Rival Pro Mini's main clicks will be optical switches with a 100 million-click MTBF rating. One of the standout features is the "Infinite Power" swappable battery system, which is similar to those used by Angry Miao in the Infinity AM series and Glorious in the Model O3 Wireless. The Rival Pro and Pro Mini will also have 8 kHz wireless connectivity and 100% PTFE skates.
Late last week, we reported on a new series of rowhammer bit-flip attacks targeting GDDR6-based NVIDIA GPUs. Most of these attacks can be mitigated by enabling IOMMU through the BIOS, which restricts the memory regions the GPU can access on the host system, thereby closing the primary attack path. However, researchers from the University of Toronto have introduced "GPUBreach," which can bypass IOMMU and enable CPU-side privilege escalation, unlike the previous "GDDRHammer" and "GeForge" attacks. In most typical server, workstation, and even PC configurations, IOMMU restricts the GPU's access to the CPU's physical addresses, preventing direct memory access. These are the typical DMA-based attacks that the Input-Output Memory Management Unit protects users from. However, the new "GPUBreach" operates differently.
For example, "GPUBreach" exploits memory-safety bugs in the GPU driver itself. While the IOMMU confines the GPU's direct memory access to driver-assigned buffers, the exploit corrupts metadata within those permitted buffers. This causes the driver, which runs with kernel privileges on the host CPU, to perform out-of-bounds writes, effectively bypassing any protection the IOMMU can offer. Because the GPU driver is one of the most trusted components of the operating system and its logic runs inside the kernel by default, corrupting this metadata is enough to achieve an IOMMU bypass. Since "GPUBreach" grants an attacker full root privilege escalation, the attack differs significantly from previous rowhammer attacks.
Speaking to Windows users on X this week, Microsoft's Director of Design, March Rogers, said the company is working to address several UI issues across Windows 11. To that end, all settings options are being consolidated in a single location, ensuring users will no longer need to switch between the...
The 2TB SanDisk Extreme Pro UHS-II SD card costs $2,000, bringing its price per GB to nearly $1, making it more than four times more expensive than much faster microSD Express cards.
A compact office mini PC powered by Intel's 12th Gen Core i5-12600H processor with plenty of upgrade potential and power enough for a three-monitor setup.
Xbox has unveiled the batch of Xbox Game Pass titles for the month of April, and it includes some absolute bangers like The Elder Scrolls 4: Oblivion Remastered, Day Z, and more.
In his guidelines, Russia's Ministry of Digital Development highlighted some limitations in VPN detection, which could also help residents navigate the crackdown.
After a wave of screen-only cameras and even the removal of viewfinders in recent updates of various models, I asked you if you'd buy a camera without a viewfinder. Here's what you said.
Sony has revealed more about its 2026 True RGB TV tech, which replaces the traditional backlight with individually controlled red, green, and blue LEDs for greater brightness, color accuracy, and control.
What happens when a controversial lawyer rewrites rules of justice with dangerous brilliance? Here's how to watch Avvocato Ligas from anywhere in the world.
Amazon launched a big tech sale over the Easter bank holiday weekend, but there's still time to score several of the best deals before they're gone – I've picked out the 18 top offers.
Fancy Bear, also known as APT28, has taken over thousands of residential home routers to steal passwords and authentication tokens in a wide-ranging espionage operation.
Google is rolling out new Google Maps features that make it easier to contribute photos, reviews, and local insights, while adding Gemini-powered caption suggestions.
Local Guides redesign. Contributor profiles are getting more visibility. Total points now appear more prominently, Local Guide levels are easier to spot, and badge designs have been refreshed.
Top contributors will also stand out more in reviews with new gold profile indicators.
AI caption drafts. Google is also introducing AI-generated caption drafts. Gemini analyzes selected images and suggests text you can edit or discard.
Caption suggestions are available in English on iOS in the U.S., with Android and broader global expansion planned.
Media sharing. Google Maps now shows recent photos and videos directly in the Contribute tab, making uploads faster.
If you enable media access, Google Maps will suggest images from your camera roll that are ready to post with a tap.
This feature is now live globally on iOS and Android.
Why we care. Google is making it easier to create and scale fresh local content, which can directly affect rankings and visibility. At the same time, stronger contributor signals may influence which reviews users trust and which businesses win clicks.
Google once attributed two of Barry Schwartz's Search Engine Land articles to me – a misclassification at the annotation layer that briefly rewrote authorship in Google's systems.
For a few days, when you searched for certain Search Engine Land articles Schwartz had written, Google listed me as the author. The articles appeared in my entity's publication list and were connected to my Knowledge Panel.
What happened illustrates something the SEO industry has almost entirely overlooked: that annotation – not the content itself – is the key to what users see and thus your success.
How Google annotated the page and got the author wrong
Googlebot crawled those pages, found my name prominently displayed below the article (my author bio appeared as the first recognized entity name beneath the content), and the algorithm at the annotation gate added the "Post-It" that classified me as the author with high confidence.
This is the most important point to bear in mind: the bot can misclassify and annotate, and that defines everything the algorithms do downstream (in recruitment, grounding, display, and won). In this case, the issue was authorship, which isn't going to kill my business or Schwartz's.
But if that were a product, a price, an attribute, or anything else that matters to the intent of a user search query where your brand should be one of the obvious candidates, when any aspect of content is inaccurately annotated, you've lost the "ranking game" before you even started competing.
Annotation is the single most important gate in taking your brand from discover to won, whatever query, intent, or engine you're optimizing for.
Indexing (Gate 4) breaks your content into semantic chunks, converts it, and stores it in a proprietary format. Annotation (Gate 5) then labels those chunks with a confidence-driven "Post-It" classification system.
It's a pragmatic labeler that attaches classifications to each chunk, describing:
What that chunk contains factually.
In what circumstances it might be useful.
The trustworthiness of the information.
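As a mental model, the "Post-It" described above can be sketched as a small data structure plus a labeler that fills it in. This is purely illustrative: the class, field names, and scores below are my own stand-ins, not anything Google or Microsoft has documented.

```python
from dataclasses import dataclass

@dataclass
class PostIt:
    """Illustrative stand-in for a chunk-level annotation ('Post-It')."""
    facts: list[str]       # what the chunk contains factually
    useful_for: list[str]  # circumstances in which it might be useful
    trust_score: float     # trustworthiness of the information (0.0-1.0)
    confidence: float      # the labeler's confidence in its own classification

def annotate(chunk_text: str, page_topic: str) -> PostIt:
    """Toy labeler: neutral at crawl time, framed by the page-level topic."""
    facts = [chunk_text.strip()]  # stand-in for real fact extraction
    useful_for = [page_topic] if page_topic else []
    # A clear page wrapper lifts both trust and confidence; ambiguity lowers them.
    trust = 0.8 if page_topic else 0.4
    conf = 0.9 if page_topic else 0.5
    return PostIt(facts, useful_for, trust, conf)

note = annotate("Barry Schwartz wrote this article.", "search news")
print(note.confidence)  # 0.9
```

The point of the sketch is the shape, not the numbers: the label carries no query intent, and its confidence is inherited from how clearly the page frames the chunk.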
Importantly, it's mostly unopinionated when labeling facts, context, and trustworthiness. Microsoft's Fabrice Canel confirmed the principle that the bot tags without judging, and that filtering happens at query time.
What does that mean? The bot annotates neutrally at crawl time, classifying your content without knowing what query will eventually trigger retrieval.
Annotation carries no intent at all. It's the insight that has completely changed my approach to "crawl and index."
That clearly shows you that indexing isn't the ultimate goal. Getting your page indexed is table stakes. Full, correct, and confident annotation is where the action happens: an indexed page that is poorly annotated is invisible to each of the algorithmic trinity.
The annotation system analyzes each chunk using one or more language models, cross-referenced against the web index, the knowledge graph, and the models' own parametric knowledge. But it analyzes each chunk in the context of the page wrapper.
The page-level topic, entity associations, and intent provide the frame for classifying each chunk. If the page-level understanding is confused (unclear topic, ambiguous entity, mixed intent), every chunk annotation inherits that confusion. Even more importantly, the system assigns confidence to every piece of information it adds to the "Post-Its."
The choices happen downstream: each of the algorithmic trinity (LLMs, search engines, and knowledge graphs) uses the annotation to decide whether to absorb your content at recruitment (Gate 6). Each has different criteria, so you need to assess your own content for its "annotatability" in the context of all three.
And a small but telling detail: back in 2020, Martin Splitt suggested that Google compares your meta description to its own LLM-generated summary of the page. When they match, the system's confidence in its page-level understanding increases, and that confidence cascades into better annotation scores for every chunk – one of thousands of tiny signals that accumulate.
Annotation is the key midpoint of the 10-gate pipeline, where the scoreboard turns on. Everything before it is infrastructure: "Can the system access and store your content?" Everything after it is competition.
When you consider what happens at the annotation gate and its depth, links and keywords become the wrong lens entirely. They describe how you tried to influence a ranking system, whereas annotation is the mechanism behind how the algorithmic trinity chooses the content that builds its understanding of what you are.
The frame has to shift. Youβre educating algorithms. They behave like children, learning from what you consistently, clearly, and coherently put in front of them. With consistent, corroborated information, they build an accurate understanding.
Given inconsistent or ambiguous signals, they learn incorrectly and then confidently repeat those errors over time. Building confidence in the machineβs understanding of you is the most important variable in this work, whether you call it SEO or AAO.
βConfianceβ (confidence) is the signal that drives how systems understand content. Slide from my SEOCamp Lyon 2017 presentation.
In 2026, every AI assistive engine and agent is that same child, operating at a greater scale and with higher stakes than Google ever had. Educating the algorithms isnβt a metaphor. Itβs the operational model for everything that follows.
5 levels of annotation: 24+ dimensions classifying your content at Gate 5
When mapping the annotation dimensions, I identified 24, organized across five functional categories. When I presented this to Canel, his response was: "Oh, there is definitely more."
Of course there are. This taxonomy is built through observation first, then naming what consistently appears. The [know/guess] distinctions follow the same logic: test hypotheses, eliminate what doesn't hold up, and keep what remains.
The five functional categories form the foundation of the model. They are simple by design: once you understand the categories, the dimensions follow naturally. There are likely additional dimensions beyond those mapped here.
What follows is the taxonomy. The categories are directionally sound (as confirmed by Canel), while the specific dimension assignments reflect observed behavior and remain incomplete.
Level 1: Gatekeepers (eliminate)
Temporal scope, geographic scope, language, and entity resolution. Binary: pass or fail.
If your content fails a gatekeeper (wrong language, wrong geography, or ambiguous entity), it is eliminated from that query's candidate pool instantly. The other dimensions don't come into play.
Level 2: Core identity (define)
Entities, attributes, relationships, sentiment.
This is where the system decides what your content means:
Who is being discussed.
What facts are stated.
How entities relate.
What the tone is.
Without clear core identity annotations, a chunk carries no semantic weight in any downstream gate.
Level 3: Selection filters (route)
Intent category, expertise level, claim structure, and actionability.
These determine which competition pool your content enters.
Is this informational or transactional?
Beginner or expert?
Wrong pool placement means competing against content that is a better match for the query, and you've lost before recruitment or ranking begins.
Level 4: Confidence multipliers (rank)
Verifiability, provenance, corroboration count, specificity, evidence type, controversy level, and consensus alignment. These scale your ranking within the pool.
This is where validated, corroborated, and specific content outranks accurate but unvalidated content.
The multipliers explain why a well-sourced third-party article about you often outperforms your own claims: provenance and corroboration scores are higher.
Confidence has a multiplier effect on everything else and is the most powerful of all signals. Full stop.
Level 5: Extraction quality (deploy)
Sufficiency, dependency, standalone score, entity salience, and entity role. These determine how your content appears in the final output.
Is this chunk a complete answer, or does it need context? Is your entity the subject, the authority cited, or a passing mention?
Extraction quality determines whether AI quotes you, summarizes you, or ignores you.
Across all five levels, a confidence score is attached to every individual annotation. Not just what the system thinks your content means, but how certain it is.
Clarity drives confidence. Ambiguity kills it.
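To make the taxonomy concrete, here is a minimal sketch in code. The level names and the 24 dimensions come from the model above; the data structure, the default score, and the idea of reducing Levels 2-5 to a single product are my own illustration, not a documented algorithm:

```python
# Illustrative sketch of the five-level annotation taxonomy described above.
# Dimension names come from the article; the scoring mechanics are assumed.
ANNOTATION_LEVELS = {
    1: ("gatekeepers (eliminate)",
        ["temporal scope", "geographic scope", "language", "entity resolution"]),
    2: ("core identity (define)",
        ["entities", "attributes", "relationships", "sentiment"]),
    3: ("selection filters (route)",
        ["intent category", "expertise level", "claim structure", "actionability"]),
    4: ("confidence multipliers (rank)",
        ["verifiability", "provenance", "corroboration count", "specificity",
         "evidence type", "controversy level", "consensus alignment"]),
    5: ("extraction quality (deploy)",
        ["sufficiency", "dependency", "standalone score",
         "entity salience", "entity role"]),
}

def evaluate(chunk_scores: dict) -> float:
    """Level 1 is binary: any gatekeeper failure eliminates the chunk.
    Levels 2-5 combine multiplicatively into one survival score."""
    gatekeepers = ANNOTATION_LEVELS[1][1]
    if any(not chunk_scores.get(dim, False) for dim in gatekeepers):
        return 0.0  # eliminated from the candidate pool instantly
    score = 1.0
    for level in (2, 3, 4, 5):
        for dim in ANNOTATION_LEVELS[level][1]:
            score *= chunk_scores.get(dim, 0.5)  # unknown dims default to middling
    return score
```

The point the sketch captures: Level 1 short-circuits to zero, while every other dimension contributes multiplicatively, so no later brilliance can rescue a gatekeeper failure.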
Canel also confirmed additional dimensions I had not initially mapped: audience suitability, ingestion fidelity, and freshness delta. These sit across the existing categories rather than forming a sixth level.
In 2022, Splitt named three annotation behaviors in a Duda webinar that map directly onto the five-level model. The centerpiece annotation is Level 2 in direct operation:
"We have a thing called the centerpiece annotation," Splitt confirmed, a classification that identifies which content on the page is the primary subject and routes everything else (supplementary, peripheral, and boilerplate) relative to it.
"There's a few other annotations" of this type, he noted.
Annotation runs before recruitment, which means a chunk classified as non-centerpiece carries that verdict into every gate that follows. Boilerplate detection is Level 3: content that appears consistently across pages (headers, footers, navigation, and repeated blocks) enters a different competition pool based on its structural role alone.
"We figure out what looks like boilerplate and then that gets weighted differently," Splitt said.
Off-topic routing completes the picture. A page classified around a primary topic annotates every chunk relative to that centerpiece, and content peripheral to the primary topic starts its own competition pool at a disadvantage before recruitment begins.
Splitt's example: a page with 10,000 words on dog food and a thousand on bikes is "probably not good content for bikes." The system isn't ignoring the bike content. It's annotating it as peripheral, and that annotation is the routing decision.
The multiplicative destruction effect: When one near-zero kills everything
In Sydney in 2019, I was at a conference with Gary Illyes and Brent Payne. Illyes explained that Google's quality assessment across annotation dimensions was multiplicative, not additive.
Illyes asked us not to film, so I grabbed a beer mat and noted a simple calculation: if you score 0.9 across each of 10 dimensions, 0.9 to the power of 10 is roughly 0.35, so you survive at 35% of your original signal. If you score 0.8 across 10 dimensions, you survive at 11%. And if one dimension scores close to zero, the multiplication produces a result close to zero, regardless of how well you score on every other dimension.
Payne's phrasing of the practical implication was better than mine: "Better to be a straight C student than three As and an F."
The beer mat went into my bag. The principle became central to everything I've built since.
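The beer-mat arithmetic generalizes to a couple of lines. This is just the math from the anecdote, nothing more:

```python
import math

def multiplicative_survival(scores):
    """Quality combines multiplicatively: the product of all dimension scores."""
    return math.prod(scores)

print(round(multiplicative_survival([0.9] * 10), 2))           # 0.35
print(round(multiplicative_survival([0.8] * 10), 2))           # 0.11
print(round(multiplicative_survival([0.95] * 9 + [0.01]), 3))  # 0.006: one near-zero sinks nine As
```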
The multiplicative destruction effect has a direct consequence for annotation strategy: the C-student principle is your guide.
A brand with consistently adequate signals across all 24+ dimensions outperforms a brand with brilliant signals on most dimensions and a near-zero on one. The near-zero cascades.
A gatekeeper failure (Level 1) eliminates the content entirely.
A core identity failure (Level 2) misclassifies it so badly that high confidence multipliers at Level 4 are applied to the wrong entity.
An extraction quality failure (Level 5) produces a chunk that the system can retrieve but can't deploy usefully. The failure doesn't have to be dramatic to be fatal.
At the annotation stage, misclassification, low confidence, or a near-zero on one dimension will kill your content and take it out of the race.
Nathan Chalmers, who works on quality at Bing, told me something that puts this in a different light entirely. Bing's internal quality algorithm, the one making these multiplicative assessments across annotation dimensions, is literally called Darwin.
Natural selection is the explicit model: content with near-zero on any fitness dimension is selected against. The annotations are the fitness test. The multiplicative destruction effect is the selection mechanism.
How annotation routes content to specialist language models
The system doesn't use one giant language model to classify all content. It routes content to specialized small language models (SLMs): domain-specific models that are cheaper, faster, and paradoxically more accurate than general LLMs for niche content.
A medical SLM classifies medical content better than GPT-4 would, because it has been trained specifically on medical literature and knows the entities, the relationships, the standard claims, and the red flags in that domain.
What follows is my model of how the routing works, reconstructed from observable behavior and confirmed principles. The existence of specialist models is confirmed. The specific cascade mechanism is my reconstruction.
The routing follows what I call the annotation cascade. The choice of SLM cascades like this:
Site level (What kind of site is this?)
Refined by category level (What section?)
Refined by page level (What specific topic?)
Applied at chunk level (What does this paragraph claim?)
Each level narrows the SLM selection, and each level either confirms or overrides the routing from above. This maps directly to the wrapper hierarchy from the fourth piece: the site wrapper, category wrapper, and page wrapper each provide context that influences which specialist model the system selects.
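A sketch of how the cascade might select a model. Everything here is hypothetical: the domain labels, the 0.7 confidence threshold, and the override rule are my assumptions, used only to illustrate "each level confirms or overrides the routing from above":

```python
# Hypothetical sketch of the annotation cascade: each wrapper level either
# confirms the inherited routing or overrides it with its own classification.
def route_slm(levels):
    """levels: (level_name, domain, confidence) tuples, ordered
    site -> category -> page -> chunk."""
    current = "general"                  # start from the generalist default
    for _level_name, domain, confidence in levels:
        if confidence >= 0.7:            # a confident level overrides the routing
            current = domain             # e.g. page level refines site level
    return current

slm = route_slm([
    ("site", "health", 0.9),
    ("category", "health", 0.8),
    ("page", "nutrition", 0.85),   # page level narrows the routing further
    ("chunk", "cycling", 0.3),     # off-topic chunk: too weak to override
])
print(slm)  # nutrition
```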
The system deploys three types of SLM simultaneously for each topic. This is my model, derived from the behavior I have observed: annotation errors cluster into patterns that suggest three distinct classification axes.
The subject SLM classifies by subject matter (what is this about?), routing content into the right topical domain.
The entity SLM resolves entities and assesses centrality and authority: who are the key players, and is this entity the subject, an authority cited, or a passing mention?
The concept SLM maps claims to established concepts and evaluates novelty, checking whether what the content asserts aligns with consensus or contradicts it.
When all three return high confidence on the same entity for the same content, annotation cost is minimal and the confidence score is very high. When they disagree (say, the subject SLM says "marketing," but the entity SLM can't resolve the entity, and the concept SLM flags the claims as novel), confidence drops, and the system falls back to a more general, less accurate model.
The key insight? Falling back to generalist LLM annotation is the failure mode. The system wants to use a specialist. It defaults to a generalist only when it can't route to one, and generalist annotation produces lower confidence across all dimensions.
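The triad-plus-fallback behavior can be sketched as follows. The confidence numbers and the agreement rule are invented for illustration; only the shape (specialist on consensus, generalist fallback on disagreement) mirrors the model described above:

```python
# Illustrative sketch of the tri-SLM consensus model. The agreement rule
# and the confidence values are my assumptions, not documented figures.
def annotate(subject_call, entity_call, concept_call):
    calls = [subject_call, entity_call, concept_call]
    agreed = None not in calls and len(set(calls)) == 1
    if agreed:
        # All three specialists converge: cheap annotation, high confidence
        return {"model": "specialist", "confidence": 0.95}
    # Disagreement or an unresolved axis: fall back to a generalist model,
    # with confidence rising slightly for each axis that did resolve
    resolved = sum(1 for call in calls if call is not None)
    return {"model": "generalist", "confidence": 0.3 + 0.1 * resolved}

print(annotate("acme-corp", "acme-corp", "acme-corp"))
# {'model': 'specialist', 'confidence': 0.95}
print(annotate("marketing", None, "novel-claim"))
# {'model': 'generalist', 'confidence': 0.5}
```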
The practical implication
Content that's category-clear within its first 100 words, uses standard industry terminology, follows structural conventions for its content type, and references well-known entities in its domain triggers SLM routing.
Content that's topically ambiguous or terminologically creative gets the generalist, and the resulting lower confidence propagates through every downstream gate.
Now, this may not be exactly how the SLMs are applied as a triad (it might not even be a trio). However, two things strike me:
The observed outputs behave as if it works this way.
And if the system isn't built this way, it would make sense for it to be.
First-impression persistence: Why the initial annotation is the hardest to correct
Here is something I've observed over years of tracking annotation behavior. It aligns with a principle Canel confirmed explicitly for URL status changes (404s and 301 redirects): the system's initial classification tends to stick.
When the bot first crawls a page, it selects an SLM, runs the annotation, assigns confidence scores, and saves the classification. The next time it crawls the same page, it logically starts from the previously assigned model and annotations. I call this first-impression persistence.
The initial annotation is the baseline against which all subsequent signals are measured. The system doesn't re-evaluate from scratch. It checks whether the new crawl is consistent with the existing classification, and if it is, the classification is reinforced.
Canel confirmed a related mechanism: when a URL returns a 404 or is redirected with a 301, the system allows a grace period (very roughly a week for a page, and between one and three months for content, in my observation) during which it assumes the change might revert. After the grace period, the new state becomes persistent. I believe the same principle applies to content classification: a window of fluidity after first publication, then crystallization.
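A toy state machine makes the grace-period idea concrete. The 30-day window and the return values are illustrative assumptions, not confirmed figures:

```python
# Hypothetical model of first-impression persistence with a grace period,
# loosely mirroring the 404/301 behavior described above.
from datetime import datetime, timedelta

GRACE = timedelta(days=30)  # assumed fluidity window after first classification

class PageState:
    def __init__(self, classification, first_seen):
        self.classification = classification
        self.first_seen = first_seen

    def recrawl(self, new_classification, now):
        if new_classification == self.classification:
            return "reinforced"              # consistent crawl strengthens the baseline
        if now - self.first_seen <= GRACE:
            self.classification = new_classification
            return "updated"                 # still fluid: baseline replaced
        return "resisted"                    # crystallized: change needs sustained signals

state = PageState("marketing", datetime(2026, 1, 1))
print(state.recrawl("marketing", datetime(2026, 1, 10)))  # reinforced
print(state.recrawl("finance", datetime(2026, 1, 15)))    # updated (inside grace window)
print(state.recrawl("legal", datetime(2026, 6, 1)))       # resisted (crystallized)
```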
I have direct evidence for the correction side from the evolution of my own terminology. When I first described the algorithmic trinity, I used the phrase "knowledge graphs, large language models, and web index." Google, ChatGPT, and Perplexity all picked up the new term and defined it correctly.
A month later, I changed the last element to "search engine" because it occurred to me that the web index is what all three systems feed off, not just the search system itself. At the point of correction, I had published roughly 10 articles using the original terminology.
I went back and invested the time to change every single one, updating every reference and leaving zero traces. A month later, AI assistive engines were consistently using "search engine" in place of "web index."
The lesson is that change is possible, but you need to be thorough: any residual contradictory signal (one old article, one unchanged social post, or one cached version) maintains inertia proportionally. Thoroughness, rather than time, is the unlock.
A rebrand, career pivot, or repositioning is the practical example. You can change the AI models' understanding and representation of your corporate or personal brand, but it requires thoroughly and consistently pivoting your digital footprint to the new reality.
In my experience, the shift can turn "on a sixpence," happening within a week. I've done this with my podcast several times. Facebook achieved the ultimate rebrand from an algorithmic perspective when it changed its name to Meta.
The practical implication
Get your annotation right before you publish. The first crawl sets the baseline. A page published prematurely (with an unclear topic or ambiguous entity signals) crystallizes into a low-confidence annotation, and changing it later requires significantly more effort than getting it right the first time.
Annotation-time grounding: The bot cross-references three sources while classifying your content
The system doesn't annotate in a vacuum. When the bot classifies your content at Gate 5, it cross-references against at least three sources simultaneously. This is my model of the mechanism. The observable effect (annotation confidence correlates with entity presence across multiple systems) is confirmed by our tracking data.
The bot carries prioritized access to the web index during crawling, checking your content against what it already knows:
Who links to you.
What context those links provide.
How your claims relate to claims on other pages.
Against the knowledge graph, it checks annotated entities during classification. An entity already in the graph with high confidence means the annotation inherits that confidence, while absence starts from a much lower baseline.
The SLM's own parametric knowledge provides the third cross-reference: each SLM compares encountered claims against its training data, granting higher confidence to claims that align, flagging contradictions, and giving lower confidence to novel claims until corroboration accumulates.
This means annotation quality isn't just about how well your content is written. It's about how well your entity is already represented across all three of the algorithmic trinity. An entity with strong knowledge graph presence, authoritative web index links, and consistent SLM-domain representation gets higher annotation confidence on new content automatically.
The flywheel: better presence leads to better annotation, which leads to better recruitment, which strengthens presence, which improves future annotation.
Once again, better to have an average presence in all three than to have a dominant presence in two and no presence in one.
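One way to express that "average everywhere beats dominant-but-absent" behavior is a geometric mean across the three sources. This formula is purely my illustration; nothing suggests the real systems use it literally:

```python
# Purely illustrative: combine presence scores across the algorithmic trinity
# with a geometric mean, which rewards balance and punishes any near-zero.
def annotation_confidence(web_index, knowledge_graph, parametric):
    return (web_index * knowledge_graph * parametric) ** (1 / 3)

balanced = annotation_confidence(0.6, 0.6, 0.6)     # average presence everywhere
lopsided = annotation_confidence(0.95, 0.95, 0.05)  # dominant in two, absent in one
print(balanced > lopsided)  # True: the balanced profile wins
```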
And this is why knowledge graph optimization (something I've been advocating for over a decade) isn't separate from content optimization. They are the same pipeline. Your knowledge graph presence directly improves how accurately, verbosely, and confidently the system annotates every new piece of content you publish.
If you're thinking "Knowledge graph? That's just Google," think again.
In November 2025, Andrea Volpini intercepted ChatGPT's internal data streams and found an operational entity layer running beneath every conversation: structured entity resolution connected to what amounts to a product graph mirroring Google Shopping feeds.
OpenAI is building its own knowledge graph inside the LLM. My bet is that they will externalize it, for several reasons: a knowledge graph inside an LLM doesn't scale; an LLM will self-confirm, so the value is limited; a standalone knowledge graph can be updated easily in real time without retraining the model; and it's only useful at scale when it stays current.
The algorithmic trinity isn't a Google phenomenon. It's the architectural pattern every AI assistive engine and agent converges on, because you can't generate reliable recommendations without a concept graph, structured entity data, and up-to-date search results to ground them.
Why Google and Bing annotate differently from engines that rent their index
Google and Bing own their crawling infrastructure, indexes, and knowledge graphs. They can afford grace periods, schedule rechecks, and maintain temporal state for URLs and entities over months.
OpenAI, Perplexity, and every engine that rents index access from Google or Bing operate on a fundamentally different model. They have two speeds:
A slow Boolean gate (Does this content exist in the index I have access to?)
A fast display layer (What does the content say right now when I fetch it for grounding?)
The Boolean gate inherits Google's and Bing's annotations. Whether your content appears at all depends on whether it was recruited from the index those engines draw from, and that recruitment depends on annotation and selection decisions made by the algorithmic trinity. But what these engines show when they cite you is fetched in real time.
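The two-speed model reduces to a few lines. The function names and data are hypothetical; the structure (slow membership test, fast live fetch) is the point:

```python
# Sketch of the two-speed model for engines that rent an index.
def render_citation(url, rented_index, live_fetch):
    # Slow Boolean gate: presence is inherited from the rented index's
    # annotation and recruitment decisions, and changes slowly
    if url not in rented_index:
        return None                    # never appears in the answer at all
    # Fast display layer: the cited text is whatever the page says right now
    return live_fetch(url)

index_snapshot = {"https://example.com/guide"}   # hypothetical rented index
fetch = lambda url: "Current on-page copy at fetch time"
print(render_citation("https://example.com/guide", index_snapshot, fetch))
```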
The practical implication
For Google and Bing, you're optimizing for annotation quality with the benefit of grace periods and gradual reclassification. For engines that don't own their index, Boolean presence is inherited from the rented index and slow to change, but the surface-level display changes every time they re-fetch.
That means what you see in the results is not a direct measure of your annotation quality. It's a snapshot of your page at the moment of fetch, and those two things may have nothing to do with each other.
How to optimize for annotation quality: The six practical principles
The SEO industry has spent two decades optimizing for search and assistive results, which is what happens after the system has already decided what your content means. We should be optimizing for annotation.
If the annotation is wrong, everything downstream suffers. When the annotation is accurate, verbose, and confident, your content has a significant advantage in recruitment, grounding, display, and, ultimately, won.
1. Trigger SLM routing
Make your topic category obvious within the first 100 words. Use standard industry terminology. Follow structural conventions. Reference well-known entities. The goal: specialist model, not generalist.
2. Write for all three SLMs
Clear signals for subject (what is this about?), entity (who is the authority?), and concept (what established ideas does this connect to?). Ambiguity on any axis reduces confidence.
3. Get it right before publishing
First-impression persistence means the initial annotation is the hardest to change. Publish only when topic, entity signals, and claims are unambiguous.
4. Build the flywheel
Knowledge graph presence, web index centrality, LLM parameter strengthening, and correct SLM-domain representation all feed annotation confidence for new content. Invest in entity foundation, and every future piece benefits from inherited credibility.
5. Eliminate noise when correcting
Change every reference. Leave zero contradictory signals. Noise maintains inertia proportionally.
6. Audit for annotation, not just indexing
A page can be indexed and still misannotated. If the AI response is wrong about you, the problem is almost certainly at Gate 5, not Gate 8.
Annotation is the gate where most brands silently lose. The SEO industry doesn't yet have a vocabulary for it. That needs to change, because the gap between brands that get annotation right and brands that don't is the gap between consistent AI visibility and permanent algorithmic obscurity.
Why annotation matters so much and why it should be your main focus
You've done everything within your power to create the best possible content that maps to the intent of your ideal customer profile. You have methodically optimized your digital footprint. Your data feeds every entry mode simultaneously (pull, push discovery, push data, MCP, and ambient), so they are all drawing from the same clean, consistent source.
So, content about your brand has passed through the DSCRI infrastructure phase, survived the rendering and conversion fidelity boundaries, and arrived in the index (Gate 4) intact. Phew!
Now it gets classified. Annotation is the last moment in the pipeline where you have the field to yourself. Every decision in DSCRI was absolute: you vs. the machine, with no competitor in the frame.
Annotation is still absolute. The system classifies your content based on your signals alone, independently of what any competitor has done. Nobody else's data changes how your entity is annotated.
But this is the last time you aren't competing. From recruitment onward, everything is relative. The field opens, every brand that passed annotation enters the same competitive pool, and the advantage you carried through the absolute phase becomes your starting position in the competitive race you have to win.
That means:
Get annotation right, and you start ahead, with confidence that compounds through every downstream gate in RGDW.
Get it wrong, and the multiplicative destruction effect does its work: a near-zero on one annotation dimension cascades through recruitment, grounding, display, and won. No amount of excellent content, structural signals, or entry-mode advantage recovers it.
Warning: first-impression persistence (remember, the first time you are annotated sets the baseline) means you don't get a clean retry. Changing the baseline requires thoroughness, time, and more effort than getting it right on the first crawl.
Annotation isn't the gate that most brands focus on. It's the gate where most brands silently lose.
This is the eighth piece in my AI authority series.
Many of today's PPC tools were designed to be easily accessible to ecommerce. That doesn't mean lead gen can't take advantage of them, but it does mean more intentional application is required.
Lead gen with AI still requires a creative approach, and many conventional ecommerce tools still apply, though not always in the same way.
Here are the priorities that matter most for succeeding with lead gen using AI.
Disclosure: I'm a Microsoft employee. While this guidance is platform-agnostic, I'll reference examples that lean into Microsoft Advertising tooling. The principles apply broadly across platforms.
1. Fix your conversion data first
This is the single most important thing you can do as AI becomes more embedded in media buying.
Between evolving attribution models, privacy changes, different platform connections, and shifts in how consumers engage with brands, it's reasonable to ask whether your data is still telling an accurate story.
Start by auditing your CRM or lead management system. Make sure the data you pass back to advertising platforms is clean, consistent, and intentional.
In most cases, data issues stem from human choices rather than technical failures. Still, there are a few technical checks that matter:
Confirm conversions are firing consistently.
Regularly review conversion goal diagnostics.
Validate that lead status updates and downstream signals are actually flowing back.
If AI systems are learning from your data, you want to be confident that the feedback loop reflects reality.
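A simple script can catch the most common data-hygiene failures before they reach an ad platform. The column names here are hypothetical; substitute your own CRM export schema:

```python
# Minimal sketch of a CRM-export sanity check before passing leads back to
# ad platforms. Field names are illustrative, not a platform requirement.
import csv
import io

REQUIRED = {"lead_id", "conversion_action", "timestamp", "status"}

def audit_rows(csv_text):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    issues = []
    if rows and not REQUIRED <= set(rows[0]):
        issues.append(f"missing columns: {REQUIRED - set(rows[0])}")
    for i, row in enumerate(rows):
        if not row.get("status"):
            issues.append(f"row {i}: empty status, downstream signal won't flow back")
    return issues

sample = (
    "lead_id,conversion_action,timestamp,status\n"
    "42,form_fill,2025-01-05,qualified\n"
    "43,form_fill,2025-01-06,\n"
)
print(audit_rows(sample))  # flags the row with an empty lead status
```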
2. Make landing pages easy to ingest and easy to understand
Lead gen campaigns often have multiple conversion paths, which can be helpful for users. But from an AI perspective, ambiguity is a risk.
Your landing pages should make it clear:
What action you want the user to take.
What happens after action is taken.
Which conversions matter most.
Redundant or unclear conversion paths can confuse both users and systems. If AI crawlers detect that anticipated outcomes are inconsistent, they may begin to question the accuracy of what your site claims to do. That can limit eligibility for certain placements.
Language clarity matters just as much. Avoid jargon, eccentric terminology, or internally focused phrasing when describing your services. Clear, plain language makes it easier for AI systems to understand who you are, what you offer, and how to match creative to the right audience.
A practical test: Put your website content into a Performance Max campaign builder and review how the system attempts to position your business. If you agree with the messaging, imagery, and framing, your site is likely easy to understand. If not, that feedback is valuable.
You can also paste your site content into AI assistants and ask them to describe your business and services. If the response aligns with reality, you're in a good place. If it doesn't, that's a signal to refine your content.
Behavioral analytics tools, like Clarity, can help you understand exactly how humans are engaging with your site and how often AI tools are crawling your site.
3. Plan for long conversion cycles
Lead gen has always struggled with long conversion cycles. That challenge doesn't go away, and in some ways, it becomes more pronounced.
AI-driven systems increasingly weigh sentiment, visibility, and contextual signals, not just last-click performance. If all of your budget and reporting focuses on immediate traffic, you may miss meaningful impact higher in the funnel.
That means:
Budgeting intentionally across awareness, consideration, and conversion.
Applying the right metrics at each stage.
Looking beyond traffic as the primary success indicator.
In many lead gen models, citations, qualified leads, and eventual revenue tell a more accurate story than clicks alone.
4. Build a feed, even for lead gen
You may not think you have a "feed" in your lead gen setup, but that absence can put you at a disadvantage.
Feeds help AI systems understand your business structure, services, and site architecture. Even if you don't have hundreds of pages, a simple, well-maintained feed in an Excel document can provide valuable context when uploaded to ad platforms.
Example of a feed for lead gen
Feed hygiene matters. Use clear, specific columns. Follow platform standards for text, images, and categorization. Make sure all relevant categories are represented.
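A minimal sketch of such a feed, written as CSV from Python. The column names and rows are hypothetical examples; check your ad platform's own feed spec for required fields and naming.

```python
import csv
import io

# Hypothetical columns for a simple lead gen services feed.
FIELDS = ["service_id", "service_name", "category",
          "description", "landing_page_url", "city"]

ROWS = [
    {"service_id": "SVC-001", "service_name": "Small Business Bookkeeping",
     "category": "Accounting",
     "description": "Fixed-fee monthly bookkeeping for companies under 50 staff.",
     "landing_page_url": "https://example.com/bookkeeping", "city": "Austin"},
    {"service_id": "SVC-002", "service_name": "Annual Tax Filing",
     "category": "Accounting",
     "description": "Year-end tax preparation and filing for LLCs.",
     "landing_page_url": "https://example.com/tax-filing", "city": "Austin"},
]


def write_feed(rows, fields=FIELDS):
    """Render the feed as CSV, refusing rows with empty cells (feed hygiene)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    for row in rows:
        missing = [f for f in fields if not row.get(f)]
        if missing:
            raise ValueError(f"{row.get('service_id', '?')}: missing {missing}")
        writer.writerow(row)
    return buf.getvalue()


print(write_feed(ROWS))
```

Rejecting incomplete rows at write time is a cheap way to enforce the "every relevant category is represented" rule before the file ever reaches an ad platform.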
On the local side, claim and maintain all map profiles. Ensure information is accurate and consistent. If you use call tracking in map placements, review your labeling carefully. AI systems may pull data from map listings or your website, and mismatches can create attribution confusion, particularly for phone leads.
Account for potential AI-driven inflation in reporting, whether youβre looking at map pack data, direct reporting, or site-level performance. Any changes you make should also be reflected correctly in your conversion goals.
5. Pressure-test your creative for clarity
Creative assets may be mixed, matched, or shortened using AI. In some cases, you may only get one headline to explain who you are and why someone should contact you.
If your value proposition requires three headlines, or a headline plus a description, to make sense, that's a risk.
Review your existing creative and identify assets that stand on their own. You should have at least some options where a single headline clearly communicates:
What you do
Who you help
Why it matters
If that clarity isn't there, AI-driven placements can quickly become confusing.
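One way to pressure-test this at scale is a simple lint pass over your headline assets. The 30-character cap matches Google's responsive search ad headline limit; the vague-word list is a hypothetical starting point, not a standard.

```python
MAX_HEADLINE_CHARS = 30  # Google responsive search ad headline limit


def standalone_check(headline):
    """Flag headlines unlikely to work on their own. Heuristics only."""
    issues = []
    if len(headline) > MAX_HEADLINE_CHARS:
        issues.append(f"too long ({len(headline)} chars)")
    # Hypothetical list of filler words that say nothing standing alone.
    vague = {"solutions", "innovative", "quality", "excellence"}
    if any(word in headline.lower() for word in vague):
        issues.append("vague wording")
    if not any(ch.isalpha() for ch in headline):
        issues.append("no real words")
    return issues


for h in ["Fixed-Fee Bookkeeping for SMBs", "Innovative Quality Solutions"]:
    problems = standalone_check(h)
    print(h, "->", problems or "OK")
```

A headline that passes the lint and still names what you do, who it's for, and why it matters is one you can trust AI systems to serve alone.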
Most of the actions that matter today are things strong advertisers already do: clean data, clear messaging, intentional budgeting, and disciplined execution. What changes is how attribution may shift, and how much weight systems place on different signals.
The fundamentals still win. The difference is that AI makes weaknesses more visible and strengths more scalable.
If you focus on clarity, accuracy, and alignment across your funnel, you give both people and systems the best possible chance to understand your business, and that's where sustainable performance comes from.
We got our hands on the new ASUS Zenbook A16 for in-depth testing, and it's clear that the new Windows laptop is gunning for Apple's lightweight MacBook Air 15. Here's how the two devices compare in terms of design, features, displays, performance, efficiency, and pricing.
A combination of Qualcomm's phenomenal generational performance gains and refinements to ASUS' already stellar Zenbook design has crafted a practically perfect Windows laptop.
Major suppliers are continuing to phase out production of mature products below DDR4, according to TrendForce's latest research on the memory industry. As supply tightens structurally, DRAM prices have already posted significant cumulative increases in recent months.
TrendForce forecasts that consumer DRAM contract prices will continue to rise by 45-50% QoQ in 2Q26 after taking into account ongoing supply reductions, order transfers, and the slower pace of capacity expansion among Taiwanese suppliers.
StarTech.com, a global provider of performance connectivity solutions for IT professionals, announced the release of its next-generation Driverless Multi-monitor USB-C Docking Stations for Windows environments, which use multi-stream transport (MST) and come in HDMI- and DisplayPort-compatible models. Built for enterprise Windows deployments, the docks support mixed hardware platforms, including Intel, AMD, and Snapdragon-based systems, while enabling driverless deployment and up to 100 W of power delivery.
Key features include:
Dual 4K 60 Hz display support or Dual 4K 60 Hz + one 4K 30 Hz with the triple display dock.
Driverless deployment for faster rollout and less troubleshooting.
USB ports up to 10 Gbps.
Mountable design with integrated security lock slots.
Introducing GHS Eternal and GHS Eternal RGB, wired gaming headsets that prove great gear doesn't have to cost a fortune. GHS Eternal and GHS Eternal RGB are the first entry into the new Gaming Headset line, and round out the Glorious product portfolio, covering keyboards, mice, accessories, and now gaming audio.
GHS Eternal and GHS Eternal RGB join the Glorious Eternal product lineup alongside Model O Eternal, bringing high-quality, affordable gear to gamers around the world and extending the lineup's ethos into gaming audio.
ASUS today announced the U.S. availability of its latest Zenbook lineup, headlined by the all-new Zenbook A16. Setting a new standard for groundbreaking performance, the Zenbook A16 debuts as the fastest Snapdragon-powered laptop on the market, equipped with the top-of-the-line Snapdragon X2 Elite Extreme processor for unprecedented local AI capabilities. The new Zenbook series, which also features the Zenbook A14, Zenbook S16, and Zenbook S14, is unified by Ceraluminum, an ASUS-exclusive material that combines the refined touch of ceramic and the strength of aluminium to offer a unique tactile experience and lasting durability. As fully certified Copilot+ PCs, these devices are built to harness the full potential of local AI, furthering ASUS's commitment to delivering future-ready computing today and beyond.
ASUS Zenbook A16
ASUS Zenbook A16 (UX3607), featuring the latest Snapdragon X2 Elite Extreme processor, which combines 18 cores and up to 80 TOPS of NPU performance to unlock the next era of AI-enhanced computing, bridges the gap between ultra-portability and uncompromised performance. With a remarkable leap in CPU and GPU performance, while also optimized for superior battery efficiency, the Zenbook A16 delivers fluid, lag-free performance across every scenario, including media editing and rendering as well as productivity tasks. The laptop also features a vibrant 16-inch 3K 120 Hz ASUS Lumina OLED display, six super-linear speakers, and a comprehensive array of full-size I/O ports. Despite its expansive display, the laptop's sleek, all-Ceraluminum chassis weighs just 2.65 lbs.
Every year, the moto g stylus stands apart as the only smartphone in its price tier to offer a true stylus experience, giving users a precise, intuitive way to capture ideas the moment inspiration strikes. This year, Motorola builds on that foundation with the new moto g stylus - 2026, now featuring a built-in active stylus, and marks an important expansion of its portfolio with the moto pad - 2026. Together, these devices are designed to support creativity, productivity, and play across screens.
moto g stylus - 2026: Active pen within reach
From focused study sessions to well-earned downtime, today's devices need to move as fast as inspiration does. The integrated active stylus on the moto g stylus - 2026 delivers next-level precision for note-taking, gaming, and creative expression. The new active stylus responds to tilt and pressure in supported apps, enabling broader shading, finer lines, and more natural strokes, bringing a pen-on-paper feel to everyday tasks.
Corsair is proud to unveil the newest additions to the modular FRAME Series Case family; the FRAME 4000X RS and the FRAME 4000D WOOD RS. The FRAME 4000X features an all-new ventilated front panel with built-in RGB lighting and 64 RGB LEDs for a customizable light show, while the FRAME 4000D WOOD sports a front panel made with real wood for great airflow and a natural look. Both cases deliver new aesthetic options to the FRAME 4000 Series lineup while offering excellent cooling performance and easy upgradeability.
The all-new FRAME 4000X RS was created for DIY PC builders who want a great looking PC with RGB lighting and excellent airflow. It includes the new RGB Flow front panel that features 64 built-in RGB LEDs for a customizable light show with effective airflow. The RGB lights on the front panel can be connected to the motherboard's +5V RGB header for easy lighting management via motherboard software.
ZOWIE, a leading global esports brand and part of BenQ Corporation, has been named by Riot Games as the official monitor for the VALORANT Champions Tour Americas (VCTA). This collaboration is grounded in a shared commitment to competitive performance and player-first standards. Trusted by professional FPS players worldwide, ZOWIE monitors are engineered for precision, responsiveness, and consistency under tournament conditions, delivering a performance benchmark that meets the demands of top-tier competition.
ZOWIE's best in class XL2566X+ Gaming Monitor will be used on stage during VCT Americas competitions, giving pro players elite performance when it matters most. The XL2566X+ features a 400 Hz Fast TN Panel with native FHD and DyAc 2 technology to deliver industry-leading motion clarity, and clear, sharp visuals with enhanced color modes. Designed specifically for FPS games, ZOWIE monitors provide stable, predictable performance, empowering players to Strive for Perfection.
H.264 has so far carried a flat annual cap of $100,000 for large subscription platforms. That may sound like a lot, but for these companies, the numbers are so small that most of them probably forgot it even existed on their balance sheet. Well, that comfortable arrangement just got a...
The Asus Zenbook A16 is a lightweight housing for Qualcomm's Snapdragon X2 Elite Extreme, but comes with compromises in build quality and battery life.
A city councilor's home has been shot up for allegedly supporting a data center project in Indianapolis. Opposing neighborhood groups condemn the shooting, while the authorities have yet to determine who's behind the crime.
ByWordy is an AI writing workspace for creating contracts, articles, and other documents in your own voice. The platform offers jurisdiction-aware legal documents generated from templates, with e-sign capabilities. You can draft, rewrite, and refine with an AI editor. Legal templates are free to use, and credits are offered upon signing in.
CloverNut centralizes operations for music labels, publishers, workshops, and other creative businesses. Manage artists, products, and releases; build public homepages; and support eight languages with real-time API sync. Handle streaming links for Spotify and Apple Music, create press kits, and control team access with roles. Flexible plans scale from solo creators to enterprises.
Vala is an AI financial intelligence app that turns transactions into clear insights and practical actions. It connects bank accounts, categorizes expenses, tracks subscriptions, and helps manage shared spending for a simple, complete view of your finances.
Vala also offers goal tracking, budget savings tools, and real-time alerts for bills or unusual spending. With visual insight cards and guided suggestions, it helps individuals, couples, and families understand patterns and make better decisions without manual tracking or complex budgeting.
An active campaign has been observed targeting internet-exposed instances running ComfyUI, a popular Stable Diffusion platform, to enlist them into a cryptocurrency mining and proxy botnet.
"A purpose-built Python scanner continuously sweeps major cloud IP ranges for vulnerable targets, automatically installing malicious nodes via ComfyUI-Manager if no exploitable node is already
Polaroid's Hi-Print 3x3 portable printer-and-frame breathes analog life into your smartphone pics, producing square prints and giving them a home on display.
A fan has discovered what appears to be the early model for the main character for Rockstar Games' canceled project, Agent, within the leaked source code for Grand Theft Auto 5.
The budget proposal would force CISA to operate with a significantly lower budget than previous years, citing the government's claims that the election misinformation programs were used to "target the President."
On a recent episode of Equity, we talked to Arena Private Wealth to explore a growing trend: family offices bypassing VCs to gain direct exposure to AI startups, turning them from passive investors into active participants.
For most people, "Mad Men" means the TV show. But the phrase points to something more specific: Madison Avenue in the 1950s and '60s, when agencies grew brands through persuasion, positioning, and earned trust in a world of scarce media channels and powerful gatekeepers. If you wanted attention, you bought your way in, then made your product the obvious choice.
When the internet arrived and Google made the chaos navigable, an entire industry was built on getting brands found. Search and SEO became one of the most commercially valuable disciplines in marketing.
That model isn't disappearing. But something new is taking shape on top of it, and most of the industry is still using the wrong language to describe what's happening.
AI is exposing everything SEO has neglected. Brands that win recommendations from AI systems won't do so by publishing more content. They'll win through positioning, persuasion, and corroborated proof.
In other words, they'll win the way Madison Avenue always did.
SEO was never really about content
One of the strangest things about the current industry conversation is how many people talk as if the job of SEO is to create content. It isn't. Not for most businesses.
If you're a publisher, content is the product. Traffic is the commercial engine. But for most brands, content never did what people thought.
Early on, people wrote content for customers, and it worked. Then it changed. Content became a keyword vehicle. "Get people to our site" replaced good marketing comms.
Traffic became a proxy for exposure. It worked because search rewarded retrieval: type a query, get a page, get a click. All you needed to sell that model was the belief that any traffic was good traffic, and that the traffic somehow led to revenue your agency could keep delivering.
That model is now under serious pressure.
Google and ChatGPT are increasingly taking the click. Every serious large language model is trying to satisfy informational intent before the user reaches the source. They aren't trying to be better search engines. They're trying to make search engines unnecessary, and that's the entire point.
There's too much information on the web. People don't want to open 10 tabs and read five near-identical blog posts to find a basic answer. They want the answer. The AI systems exist precisely to give it to them.
So if informational retrieval gets absorbed into the interface, what remains? Marketing. That's the part many SEOs are still not fully grappling with.
The cleanest way to understand this shift is through the "4 Ps" of marketing: product, price, place, and promotion.
Traditional SEO has been, almost entirely, a place discipline. It's been about getting your products, services, or information onto the digital shelf when people go looking.
Keyword rankings are shelf position. Paid search is just a more expensive version of the same principle. In commercial search, you pay for premium placement in a digital aisle.
That still matters enormously.
Buyer-intent search remains valuable. Google hasn't solved its commercial transition to a fully AI-led interface, and won't overnight. Search is too important to Google's revenue to disappear fast. But another layer is emerging above it, and this is the layer that most agencies aren't yet equipped to compete on.
As AI systems become the first interaction point for more users, the game shifts from being present to being preferred.
Users don't just search. They ask. They describe a problem. They want the best CRM for a mid-market SaaS company, the best estate agent in their area, the best sandwich shop near the office. And the system responds with recommendations.
If classic SEO was about rankings, the next phase is about recommendations. If classic SEO was about digital placement, the next phase is about shaping preference. And recommendation, in practice, is advertising.
Not a display banner. Not a 30-second TV spot. But advertising in the oldest and most commercially powerful sense: influencing the choice someone makes before they've even consciously made it.
An AI-generated recommendation is an invisible ad unit. It doesn't bill by impression.
Why AI recommendations hit differently
When an LLM recommends a brand, it can't know with certainty what will work best. So it infers. It weighs signals: past success, prominence, reviews, case studies, corroborating sources, and repeated associations between a brand and a specific type of problem.
Humans do something almost identical.
Where performance is clearly bounded, we can identify a winner. We know who won the Oscar. We know which film topped the box office.
But when performance isn't obvious in advance, we rely on proxies. We ask friends, read reviews, and scan for authority. We use familiarity, logic, and social proof to estimate what is likely to be right.
That's exactly the territory AI recommendation is now entering: the consideration set problem. If I ask an LLM to find me a reliable accountant for a small business, I'm not asking it to retrieve a blog post. I'm asking it to build me a shortlist.
Unlike traditional search, the recommendation layer is invisible to brands unless they test for it actively. You don't see the prompt or the source chain. You don't even know why one brand made the cut and another didn't.
But the commercial effect is real, possibly stronger than anything traditional search produced. If you're in the recommendation set, you're in the running. If you're absent, you've lost the sale before the conversation started.
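Testing for this actively can be as simple as running a fixed battery of buyer-style prompts and logging which tracked brands each answer mentions. In this sketch, ask_assistant is a stand-in returning a canned answer; in practice you would wire it to whichever AI assistant or API you use. All prompt and brand names are hypothetical.

```python
def ask_assistant(prompt):
    """Placeholder for a real LLM call or a manual copy-paste workflow."""
    canned = {
        "best small business accountant in Austin":
            "Consider Acme Accounting, Ledger Co, or BrightBooks.",
    }
    return canned.get(prompt, "")


PROMPTS = ["best small business accountant in Austin"]
BRANDS = ["Acme Accounting", "BrightBooks", "YourBrand"]


def recommendation_coverage(prompts, brands, ask=ask_assistant):
    """For each prompt, record which tracked brands the answer mentions."""
    results = {}
    for prompt in prompts:
        answer = ask(prompt).lower()
        results[prompt] = [b for b in brands if b.lower() in answer]
    return results


print(recommendation_coverage(PROMPTS, BRANDS))
```

Run the same battery on a schedule and the gaps (prompts where your brand never appears) become a concrete work list rather than a vague worry.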
The first practical consequence: your website can no longer function like a polite digital brochure. Despite being optimized for search, many commercial web pages simply:
Introduce the company.
Gesture vaguely at services.
Bury differentiation under generic corporate language.
Treat the page as an endpoint for a ranking rather than a persuasive asset.
Still, they're weak where it matters most: actual selling.
In the Mad Men era of SEO, your landing pages and service pages need to function like sales pages, not in a cheesy direct-response way, but in the strategic sense that they must clearly answer four things:
Who is this for?
What problem does it solve?
Why is it different?
Why choose it over the alternatives?
This comes down to positioning, which is key to GEO. If seven brands do broadly the same thing, the model needs distinctions. It needs enough clarity to say: this brand is best for X kind of buyer with Y kind of problem because it does Z better than everyone else.
Your website copy must surface real performance attributes: the specific things you genuinely do better or more distinctively than competitors. Your pages must become machine-readable arguments for preference.
Copywriting is back
Actual commercial copywriting (not fluffy brand storytelling or word count for its own sake) identifies a target customer, sharpens the problem, articulates the value, and makes the offer easy to recommend.
Good copy isn't optional.
Take a local sandwich shop. The old SEO conversation runs to "best sandwich near me," local pack, and review acquisition. It's useful, but limited.
The GEO version starts with the shop's actual performance attributes.
Is it the speed?
The handmade bread?
The office catering?
The locally sourced produce?
Those claims must be clear on the website first. Then they need corroboration everywhere else:
Reviews that mention the sourdough specifically.
A local food blogger's write-up.
Inclusion in "best lunch spots" roundups.
They're specific, repeated, retrievable evidence of why this shop is the right recommendation for a particular type of customer.
Scale that logic to a B2B software company, and the principle holds: pages that clearly explain who the product is for, which problems it solves, and why it outperforms rivals. Then build mentions, gather customer reviews, and gain trade-press coverage, the body of evidence to support recommending you to buyers, and let the AI find it.
That's pretty much GEO in a nutshell.
Keywords don't disappear, but they lose their throne
Keywords are a human workaround. Approximations of intent, built for a retrieval system that needed exact string matching. LLMs process fuller context, layered needs, and comparative requirements. They move from keyword matching toward problem understanding.
Keyword research still matters for classic search, paid search, and buyer-intent pages. But the center of gravity shifts.
Instead of asking only "what terms should we rank for?", the better question is: what attributes make us the right recommendation for the buyer we actually want, and what evidence exists across the web to support that claim?
The future of SEO is starting to look like the old agency model, as the work is increasingly promotional. Once your website clearly expresses your positioning, the challenge becomes promoting that position across the wider web through credible, repeated, relevant signals.
Digital PR.
Traditional PR.
Expert commentary.
Case studies.
Reviews.
Listicles.
Awards.
Trade press.
Brand mentions.
Conference speaking.
Events.
Creator coverage.
Product comparisons.
Original data studies that other people actually cite.
These are the things you go after, create, and encourage. Sadly, many "AI visibility" conversations flatten this into nonsense.
The goal isn't merely to have content cited by AI. It's to gather enough market evidence that AI systems repeatedly encounter your brand in the right contexts, with the right associations.
The work stops being optimization and becomes maximization: building the largest possible volume of persuasive, corroborated, retrievable evidence that your brand is a sensible recommendation for a specific kind of buyer.
That's a fundamentally different model from anything the SEO industry has been selling. It's promotional and strategic brand marketing.
SEOs need to grow up. There's still significant value in buyer-intent search, technical site architecture, entity clarity, internal linking, and structured data. SEOs are well placed to monitor recommendation environments, test prompts, and identify where visibility is being won or lost.
But the identity crisis is real. Many agencies were built for a world of rankings, informational blogs, and monthly traffic graphs. They aren't equipped to lead a world defined by positioning, copy, PR, brand evidence, and recommendation science.
Tracking brand citations inside AI outputs isn't a complete strategy. It's a temporary metric.
Winning agencies look like hybrid commercial strategy firms: part SEO, part copywriting, part PR, part brand strategy, part technical infrastructure. They know how to protect buyer-intent search revenue today while building the fame, clarity, and corroborated authority that earns recommendation tomorrow.
This is the Mad Men model of SEO. Persuasion, positioning, and clear claims backed by public proof matter again. And the job is to become recommended by AI.
LG Electronics, the #1 OLED Gaming Monitor Brand in the USA, today announced pricing and availability for two new additions to its 2026 UltraGear Evo gaming monitor lineup: the LG UltraGear Evo GX9 (model: 39GX950B-B), the world's first 39-inch 5K2K curved OLED gaming monitor, and the LG UltraGear Evo GM9 (model: 27GM950B-B), a 27-inch 5K Hyper Mini LED gaming monitor. Both monitors bring next-generation display performance and AI-powered features to competitive and immersive gaming, giving players sharper visuals, faster response times, and smarter connectivity than previous generations. Both monitors are available for pre-order today at LG.com - The LG UltraGear GX9 at $1,799.99 and the LG UltraGear Evo GM9 for $1,199. Pre-orders placed through May 3 include the option to add LG Premium Care, extending the standard warranty by two years, for only $1.
LG UltraGear Evo GX9: World's First 39-Inch 5K2K Curved OLED
From the #1 OLED Gaming Monitor Brand in the USA, the LG UltraGear Evo GX9 brings impeccable OLED picture performance to a size and scale not previously available, with a near-instant 0.03 ms (GtG) response time and a 39-inch 5K2K canvas. As the world's first 39-inch 5K2K (5120×2160) curved OLED gaming monitor, it pairs a 21:9 ultrawide format with a 1500R curve and 143 PPI pixel density, offering a wider, more panoramic view and crisp text clarity that pulls players deeper into the action.
Zyxel Networks, a leader in delivering secure and AI-powered cloud networking solutions, today announced the launch of the WBE665S BE22000 12-stream Wi-Fi 7 Triple-Radio NebulaFlex Pro ruggedized access point. The new solution presents MSPs and installers with an opportunity to address the rising demand for fast, reliable wireless connectivity within industrialized and challenging environments. Combining a durable IP67-rated weatherproof design and AI-powered cloud management, the WBE665S is designed for professional installers deploying networks in demanding locations.
In warehousing and distribution, manufacturing, cold storage, large-scale retail and other sectors, Wi-Fi is now being extended into zones that were once considered too harsh for wireless connectivity. Forklift trucks run connected tablets, while IoT sensors track the movement of consignments and goods, and handheld barcode scanners are used to drive greater efficiency and accuracy. In these environments, hazards such as extreme temperatures, humidity and dust are common and dropped connections, downtime and hardware failures can disrupt operations.
Indianapolis City-County Council member Ron Gibson, a Democrat who has held his position since 2023, recently expressed support for rezoning related to a proposed $500 million data center project. Two large buildings, from Los Angeles-based startup Metrobloks, will be built on a 14-acre site located in the Martindale-Brightwood neighborhood of...
TSNC is being positioned as a practical path for developers who already ship BC-compressed assets and want to squeeze more data into the same storage, bandwidth, or VRAM budgets without rethinking their pipelines.
Save $200 on this awesome AMD build from iBuyPower, featuring an AMD Ryzen 7 7800X3D, RTX 5070, 32GB of DDR5 RAM, and a 2TB SSD, all for just $2,049 right now.
In the rapid evolution of the 2026 threat landscape, a frustrating paradox has emerged for CISOs and security leaders: Identity programs are maturing, yet the risk is actually increasing.
According to new research from the Ponemon Institute, hundreds of applications within the typical enterprise remain disconnected from centralized identity systems. These "dark
When talking about credential security, the focus usually lands on breach prevention. This makes sense when IBM's 2025 Cost of a Data Breach Report puts the average cost of a breach at $4.4 million. Avoiding even one major incident is enough to justify most security investments, but that headline figure obscures the more persistent problems caused by recurring credential
I'm getting a mid-career executive MBA. Last week, in class, we discussed the interaction between automation and advertising. The lecture covered why A/B testing in Meta is less valuable now, since Facebook can auto-optimize faster and better than marketers can on their own.
A classmate took the logical leap and asked the professor, "If digital channels have more data and more processing power, why don't advertisers just give them a URL and a credit card and let them go wild?"
The argument has real merit. Google, Meta, and LinkedIn have access to more data than any agency ever will. Their optimization engines are improving fast. Handing them a budget and a URL and walking away isn't entirely crazy.
But that means we'd need to have faith in the channels to optimize media in a business's best interests, and there's a long, proud history of that not being the case.
1. The opt-in that wasn't
About six years ago, we met with a Google rep who pitched a product that introduced broader, more aggressive targeting and bidding. We listened to the pitch and said no. We didn't want to try it. The reps turned it on anyway.
What happened next was what we predicted. The campaigns spent significantly more money and didn't generate any additional conversions.
We had to comp the client for the wasted spend, which was bad enough. But what made it worse was the principle of the thing: we hadn't agreed to this. Google made unauthorized changes to our account.
When I tried to get the money back, Google's position was that we'd set our campaign budgets at a certain level, and they were within their rights to spend up to that amount. That framing ignores that a budget cap is a ceiling, not an invitation.
Our agency methodology is to never hit a budget cap. We set those numbers based on the strategy we'd approved, not the one they decided to test. I hounded them for weeks, but never got any resolution. It still makes me angry.
The reps were clearly incentivized to get adoption of the new feature. When it didn't work, there was no accountability and no recourse. We were left covering the cost of a decision we explicitly declined.
What's being misrepresented
Budget caps were treated as implicit consent to spend. A product we declined was activated without authorization, and when it failed, the platform pointed to our own settings as justification.
The incentive structure rewarded the reps for turning it on. There was no corresponding mechanism to make the advertiser whole when it didn't work.
2. The gross margin whiteboard pitch
This happened years ago, on a successful retainer. A pair of senior Google reps sat across from us and asked what our client's gross margin was. Around 50%, we said. They went to the whiteboard and wrote out: if overall revenue / 2 - overall media cost >= 0, then we should keep spending money on ads.
On the surface, the math sounds right. In practice, it has two problems.
It assumes the reported conversions are incremental, meaning they wouldn't have happened without the paid ad. A substantial portion of any Google campaign's reported conversions, particularly in brand and retargeting, are users who were already going to convert.
The model assumes a flat cost curve, where the 500th conversion costs the same as the 50th. It does not. Marginal returns fall as you scale. The last dollars of spend are always the least efficient, but they're exactly what this pitch is designed to help Google access. (They should have said that marginal revenue / 2 - marginal cost = 0 is profit maximization.)
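The gap between the two rules can be made concrete with a toy diminishing-returns curve. The response function and every number below are illustrative assumptions, not benchmarks; the point is only that the average rule keeps spending long after the marginal rule says stop.

```python
def conversions(spend):
    """Toy response curve with diminishing returns (illustrative only)."""
    return 10 * spend ** 0.5


REVENUE_PER_CONV = 200
MARGIN = 0.5  # 50% gross margin, as in the whiteboard pitch


def profit(spend):
    """Margin-adjusted revenue minus media cost."""
    return conversions(spend) * REVENUE_PER_CONV * MARGIN - spend


def avg_rule_says_spend(spend):
    """The rep's rule: overall revenue / 2 - overall media cost >= 0."""
    return conversions(spend) * REVENUE_PER_CONV * MARGIN >= spend


levels = range(50_000, 1_050_000, 50_000)
best = max(levels, key=profit)                       # marginal thinking
last_ok = max(s for s in levels if avg_rule_says_spend(s))  # average thinking

print(f"profit-maximizing spend:        ${best:,}")
print(f"avg rule keeps spending through ${last_ok:,}")
```

Under these assumptions, profit peaks at $250,000 of spend, yet the whiteboard rule happily approves budgets up to $1,000,000, four times past the optimum, with every extra dollar destroying profit.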
What's being misrepresented
The model treats all reported conversions as incremental and assumes cost per conversion is constant across spend levels. Both assumptions are wrong, and together they can justify significant overspend.
3. The "higher CPCs buy better clicks" pitch
This one still happens all the time. The pitch is that if you raise your CPCs, you'll get access to higher-quality traffic. The implied logic is that conversion rate is influenced by CPC, and that if your investment isn't high enough, you're missing the best clicks.
There's a version of this that has some truth to it. Higher CPCs can mean higher ad positions, which can mean higher impression frequency against the same users. More frequency can drive higher aggregate conversion rates, because repeated exposure matters.
But the argument glosses over the other side of that equation.
Higher frequency has diminishing marginal returns.
The third impression is worth less than the first. The tenth is worth a lot less.
The cost curve isn't flat. You're paying more per click at every step.
In practice, raising CPCs to chase quality traffic is almost always correlated with substantially worse overall return on ad spend.
This is a variant of the marginal return problem seen across these cases. The pitch frames the upside without acknowledging the cost curve. More spend gets positioned as access to better outcomes, when it often delivers the same outcomes at a higher price.
What's being misrepresented
CPC and conversion rate are presented as if higher bids unlock better traffic. In most cases, the incremental cost outpaces the incremental return. The pitch frames diminishing returns as an opportunity, rather than a constraint.
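To make the trade-off concrete, here is a minimal sketch with assumed numbers: conversion rate rises a little as CPC rises, but cost per conversion and ROAS move the wrong way.

```python
AOV = 120.0  # assumed average order value

# (cpc, conversion_rate): hypothetical pairs where higher bids buy
# slightly better traffic, but nowhere near proportionally better.
scenarios = [
    (1.00, 0.030),
    (2.00, 0.036),
    (4.00, 0.040),
]

for cpc, cvr in scenarios:
    cost_per_conv = cpc / cvr
    roas = (cvr * AOV) / cpc  # revenue per click over cost per click
    print(f"CPC ${cpc:.2f}: cost/conv ${cost_per_conv:6.2f}, ROAS {roas:.2f}x")
```

With these assumed figures, quadrupling the bid does lift conversion rate, yet ROAS still falls by two-thirds, because the incremental cost outpaces the incremental return.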
4. The "it needs to learn" excuse
"If your Meta campaigns are underperforming, it's because the algorithm just needs more time to learn."
"Don't make changes, and don't reduce budget, just give the platform more data."
This is sometimes true. Machine learning systems need volume to optimize effectively, and premature intervention can reset progress.
But "it needs to learn" has become a catch-all explanation that's almost impossible to disprove in the short run. It explains away poor CPAs, delays accountability, and keeps spend flowing when a reasonable advertiser might otherwise pull back and reassess.
There's rarely a clear definition of when the learning phase ends, which makes it a moving target. The learning phase ends when performance improves. If performance doesn't improve, more learning is prescribed.
What's being misrepresented
A real technical concept is being used in ways that resist falsification. When there's no defined endpoint and no stated criteria for success, "it needs to learn" serves as a blank check for budgetary continuity.
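One way to keep "it needs to learn" falsifiable is to pre-commit to an endpoint and a success criterion before launch. A minimal sketch, with hypothetical thresholds:

```python
MAX_LEARNING_DAYS = 14  # agreed deadline, set before launch
TARGET_CPA = 50.0       # agreed success criterion, set before launch

def learning_phase_verdict(days_running, spend, conversions):
    """Return a decision once the pre-committed window has elapsed."""
    cpa = spend / conversions if conversions else float("inf")
    if days_running < MAX_LEARNING_DAYS:
        return "keep running"  # still inside the agreed window
    return "scale" if cpa <= TARGET_CPA else "pause and reassess"

print(learning_phase_verdict(7, 2_000, 30))    # inside the window
print(learning_phase_verdict(14, 5_000, 80))   # CPA $62.50 misses target
```

The specific numbers matter less than the fact that they are written down in advance, so "more learning" can no longer be prescribed indefinitely.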
5. The metric pivot: When conversions fail, sell sentiment
In many cases, YouTube or display campaigns aren't driving measurable conversions. The rep's suggestion: let's look at brand measurement. We can measure recall rates, positive sentiment, and intent to purchase. These are real signals of brand health, and they matter in the long run.
But the shift from conversion to sentiment metrics tends to occur when conversion metrics are poor, not as a principled measurement strategy. Brand lift surveys measure awareness under controlled conditions, but they rely on self-reported intent and don't connect to downstream revenue.
Recall is almost never translated into a cost per point of lift that can be compared across the media plan. You end up with a number that's positive and presented as evidence of success, with no agreed-upon framework for what sufficient lift would look like.
What's being misrepresented
A softer metric is substituted for a harder one after the harder one fails. Brand lift is a legitimate measurement tool when defined upfront as a success criterion. Introduced afterward, it functions as a consolation prize.
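When brand lift is agreed upfront, it can be normalized into a comparable number. A sketch of a cost-per-point-of-lift calculation, with assumed survey results:

```python
def cost_per_lift_point(spend, control_rate, exposed_rate):
    """Spend divided by percentage points of lift between groups."""
    lift_points = (exposed_rate - control_rate) * 100
    return spend / lift_points if lift_points > 0 else float("inf")

# Assumed example: $80k of video spend, recall of 22% in the control
# group vs 26% in the exposed group -> 4 points of lift.
cost = cost_per_lift_point(80_000, 0.22, 0.26)
print(f"${cost:,.0f} per point of recall lift")
```

A number like this can at least be ranked against other line items in the media plan, instead of standing alone as a feel-good metric.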
6. Blending upper-funnel and lower-funnel results into one average
Upper-funnel and lower-funnel campaigns serve different purposes and perform differently on a cost-per-acquisition basis. When a channel reports blended CPA across all campaign types, an average that looks acceptable can hide the fact that some portion of the media plan is wildly inefficient at the margin.
The argument for blending is that upper-funnel spend creates the conditions for lower-funnel performance. That is plausible, but plausibility isn't the same as demonstrated causality.
Often, it's assumed the upper funnel is directly contributing and that, in aggregate, the system is profitable and fully incremental. This is never the case.
What's being misrepresented
Aggregate CPA can look fine while specific segments of spend have no measurable return. Blending is a reporting choice, and it can obscure where money is and isnβt working.
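A quick sketch with invented numbers shows how a blended average hides the marginal problem:

```python
# Hypothetical campaign mix: the blended CPA looks acceptable even
# though one segment is wildly inefficient.
campaigns = {
    "branded search":   {"spend": 10_000, "conversions": 500},
    "non-brand search": {"spend": 30_000, "conversions": 400},
    "display (upper)":  {"spend": 20_000, "conversions": 20},
}

total_spend = sum(c["spend"] for c in campaigns.values())
total_conversions = sum(c["conversions"] for c in campaigns.values())
print(f"Blended CPA: ${total_spend / total_conversions:.2f}")

for name, c in campaigns.items():
    print(f"  {name:16s} CPA ${c['spend'] / c['conversions']:,.2f}")
```

In this assumed mix the blended CPA lands around $65, while a third of the budget is converting at a $1,000 CPA. Only the per-segment breakdown reveals it.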
7. View-through conversions: The numbers that shouldn't count
A view-through conversion is counted when a user sees an ad, doesn't click it, and then converts within some attribution window, often 24 hours or more. Platforms report these alongside click-through conversions by default.
For retargeting campaigns, which by definition serve ads to people who have already visited your site, view-through attribution is particularly problematic. These users were likely going to return and convert regardless. The ad may have had nothing to do with it.
The issue isn't that view-throughs aren't meaningful. For a cold audience, some brand-influenced conversions happen without clicks.
The issue is that those conversions are almost never broken out proactively (you have to ask). And when you remove view-throughs from retargeting campaigns, the ROAS numbers can change dramatically.
We've seen cases where removing VTAs cuts reported conversions by more than half. To be fair, Meta has become substantially more transparent by moving toward incremental measurement options.
What's being misrepresented
View-through conversions inflate reported performance, particularly in retargeting, where incrementality is already low. Default reporting includes them without flagging the methodological problem.
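The effect is easy to see by recomputing ROAS without view-throughs. The numbers below are assumed, but mirror the pattern described above, where VTAs account for more than half of reported conversions:

```python
spend = 5_000.0
aov = 90.0  # assumed average order value

click_through_conversions = 60
view_through_conversions = 140  # counted by default, rarely broken out

reported = click_through_conversions + view_through_conversions
roas_reported = reported * aov / spend
roas_clicks_only = click_through_conversions * aov / spend

print(f"Reported ROAS (clicks + views): {roas_reported:.2f}x")
print(f"Click-only ROAS:                {roas_clicks_only:.2f}x")
```

The same campaign goes from comfortably profitable to marginal the moment view-throughs are excluded, which is why asking for the breakout matters.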
8. The competitor benchmark pitch
This one is a pattern. A channel rep brings industry benchmark data to a meeting showing that your competitors are spending at a level above your current budget. The implication is clear: you're being outspent, and you should close the gap.
Industry benchmarks are among the most valuable inputs a channel can provide. Knowing where you sit relative to the market is useful context for planning. The problem is how they get deployed. More often than not, benchmark data shows up as a tool to expand media spend, not as a neutral input into strategy.
And it works. CEOs and CMOs are particularly susceptible to this framing. Nobody wants to hear that a competitor is outspending them.
The emotional pull of "they're investing more than you" is hard to counter with a measured conversation about marginal returns or strategic fit. The benchmark becomes the argument, and the argument is almost always "spend more."
What gets lost is any discussion of whether:
The competitor's spend is actually working for them.
Your business model and margins support the same level of investment.
The benchmark even reflects an apples-to-apples comparison.
Competitive spend data without context is just a number that makes your budget feel inadequate.
What's being misrepresented
Benchmark data is real, but it's selectively introduced to justify budget increases rather than treated as one input among many. The framing skips over whether the comparison is meaningful and relies on competitive anxiety to sell.
9. The default settings trap
This one is hard to frame as a single incident because it's everywhere. I've talked to so many people trying to break into the industry, or launch their first campaigns, and the story is almost always the same.
They follow the platform's setup guide, accept the default settings, and end up opted into programs that have close to zero chance of being successful.
This is true across pretty much every major channel.
LinkedIn defaults you into audience network inventory that runs outside the LinkedIn feed.
Google opts you into display inventory when you're trying to run search. Keywords default to broad match out of the box. Suggested CPCs are astronomical.
Google's geographic targeting defaults to "presence or interest" rather than actual location.
Each of these defaults, taken individually, could be defended as a reasonable starting point. Taken together, they create a setup that maximizes the platform's revenue from day one, before the advertiser knows what's happening.
A new advertiser following the guided setup is accepting a configuration that the platform designed, and the platform's incentives aren't aligned with efficient spend.
This one is genuinely difficult to solve. Platforms need to provide default settings, and they can't expect every new advertiser to understand every option.
But there's something predatory about the gap between what people think they're signing up for and what they're getting. The defaults are revenue-optimized for the channel, not performance-optimized for the advertiser.
What's being misrepresented
Setup guides and default settings are presented as best practices when they're actually configurations that favor the platform's revenue. New advertisers trust the guided experience, and have no reason to suspect the defaults are working against them.
10. The tracking gap
Privacy regulations and platform changes have created real limitations in conversion tracking. GDPR and Apple's App Tracking Transparency aren't invented problems.
We have less visibility than we used to, and the platforms have responded by layering probabilistic modeling and modeled conversions on top of deterministic tracking.
But the tracking gap has also become a convenient shelter for underperformance. The argument goes like this:
"The conversions are happening, we just can't see them all yet. There's latency in the data."
"There are limits to what can be tracked. We need a longer attribution window."
"We need more time for the modeled data to populate. And in the meantime, here are some proxy metrics that we think are directionally valid, so let's keep pushing."
Each of those can be true in isolation. Modeled conversions take time to appear. Attribution is harder than it was five years ago. Proxy metrics can be useful when direct measurement breaks down.
The problem is when all of these caveats get stacked together and used to justify sustained spend in the absence of any measurable result. At some point, "the data will come in" stops being a reasonable expectation and becomes an article of faith.
The tracking gap is real, but it cuts both ways. If you can't measure the result, you also can't prove the spend is working. The platform's default position is to assume it is, and keep going. The advertiser's job is to ask what happens if the modeled conversions never materialize, and what the fallback plan looks like if they don't.
What's being misrepresented
Legitimate tracking limitations are used to defer accountability indefinitely. When measurement is hard, the platform's recommendation is always to maintain or increase spend, never to reduce it. The uncertainty gets resolved in the channel's favor by default.
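A practical first question is how much of the reported number is modeled at all. A minimal sketch (the observed/modeled split is hypothetical; platforms don't always expose it, which is part of the problem):

```python
MODELED_SHARE_ALERT = 0.5  # assumed review threshold

def modeled_share(observed, modeled):
    """Fraction of reported conversions that exist only in the model."""
    total = observed + modeled
    return modeled / total if total else 0.0

share = modeled_share(observed=120, modeled=180)
print(f"Modeled share of reported conversions: {share:.0%}")
if share > MODELED_SHARE_ALERT:
    print("Review: most reported results rest on modeled data.")
```

Tracking the modeled share over time gives the advertiser a concrete trigger for the fallback-plan conversation, instead of waiting indefinitely for the data to come in.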
None of this is an argument that agencies are irreplaceable in their current form. We used to question tCPA, and now it's a preferred bidding strategy. Automation handles execution-level work that used to require skilled practitioners. In-house teams are viable for more companies than they used to be.
But the argument for fully autonomous, channel-run advertising assumes the channel will optimize for your outcomes rather than revenue. Even if we imagine new profit-sharing contracts, this assumption carries real risk.
And I'm not blaming reps or the channels. They believe in their products, but they're also measured on metrics that create a predictable drift in how they frame data. I should note that agencies struggle with misaligned incentives as well.
The advertiser's job, with or without an agency, is to keep asking the inconvenient questions.
What is the marginal return at this spend level?
What percentage of conversions are view-throughs?
What does performance look like if we exclude brand search?
Are we measuring incrementality, or are we measuring correlation, and calling it causation?
Maybe the answer to everything is eventually full automation. But the entity building the machine shouldn't be the one telling you when it's ready.
For years, Salesforce Marketing Cloud was the safe choice.
Powerful. Enterprise. Trusted.
But lately, weβre hearing something different:
"Our data is too tangled to activate."
"We're locked into contracts."
"We're stuck sending the same emails on repeat."
"Everything is Band-Aids and duct tape. I don't know how we can move without breaking everything."
"We feel stuck."
Sound familiar? If so, this fireside chat is for you.
We've helped dozens of brands migrate off Salesforce and into modern, composable engagement architectures built for real CRM performance. Not because it's trendy, but because marketers needed more speed, flexibility, and innovation.
In this April 14 session, we'll cover:
Why brands feel stuck (and why it's more common than you think).
What's happening inside the Salesforce ecosystem.
The biggest misconceptions about migrating.
Understanding the martech landscape.
What life actually looks like after moving to a modern platform like Braze.
How CMOs and martech leaders should think about platform decisions over the next 3 to 5 years.
How to get the rest of your org on board with making a move.
The steps to take now to set yourself up for migration success.
To be clear: this isn't a Salesforce-bashing session.
It's a candid conversation about innovation velocity, marketing ownership, and what the next era of marketing actually requires.
Disclaimer: To ensure a candid and open conversation, the live session is open only to brand-side marketing leaders. Registrants who are not verified brand-side marketing leaders will not be permitted to attend the live session. However, the recorded session will be made available to all registrants upon completion of the event.
Intel's first CPUs to integrate Nvidia graphics chiplets are reportedly called "Serpent Lake", and they could launch in late 2028. Last year, Intel struck a deal with Nvidia that would allow them to "build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets." According to the leaker Jaykihn, Intel's first […]
Turtle Beach's VelocityOne Race KD3 Racing Wheel and Pedals is currently one of the company's best accessories for emulating real-life racing on Xbox and PCs, and it's now on sale for 44% off.
Microsoft is changing how OneDrive handles deleted files. Soon, cloud deletions won't hit your local Recycle Bin, forcing you to use the web for recovery.
Copilot's terms of use indicate that it's for entertainment purposes only. However, Microsoft says the phrasing is legacy language from when Copilot originally launched as a search companion service in Bing.
Sonnet Technologies, a US-based long-time provider of connectivity solutions for Mac, Windows, and Linux systems, has released two new Thunderbolt 5 docking stations: the Echo 20 SecureDock and Echo 21 SuperDock. Both target professional users looking for high-speed I/O, with the key difference being built-in storage support on the higher-end model. The two docking stations are mostly identical in terms of connectivity. Both feature three Thunderbolt 5 ports, a host connection capable of up to 140 W of power, and two downstream ports for peripherals and daisy-chaining. There are also nine USB 3.2 Gen 2 ports split between Type-A and Type-C, a 10 Gigabit Ethernet port with backward compatibility, and dual display outputs via HDMI and DisplayPort. Depending on the host system, the docks can drive up to four displays, with support reaching 8K at 60 Hz on Windows systems.
The Echo 21 adds an internal M.2 NVMe slot taking drives up to 8 TB, with transfer speeds reaching 3,300 MB/s. This makes it suitable for local media storage or backup use cases without requiring external drives. Both models feature high-res audio I/O at up to 24-bit/192 kHz, plus full-size SD and microSD card readers. Compatibility covers Apple M-series Macs, older Intel Macs with Thunderbolt 3, and Windows or Chromebook machines with Thunderbolt 4, 5, or USB4. The Echo 20 SecureDock is available now at $449.99, with the Echo 21 SuperDock following in late May at $499.99.
Errol Segal has been a LA Dodgers season ticket holder for over 50 years, but that run has come to an end as the team transitions to all-digital for season tickets.
Robert Hallock, Intel's vice president and general manager of client segment technical marketing, confirmed in an interview with Club386 that the Raptor Lake lineup remains "a big part" of Intel's client segment strategy and will stay in production alongside newer chips.
New academic research has identified multiple RowHammer attacks against high-performance graphics processing units (GPUs) that could be exploited to escalate privileges and, in some cases, even take full control of a host.
The efforts have been codenamed GPUBreach, GDDRHammer, and GeForge.
GPUBreach goes a step further than GPUHammer, demonstrating for the first time that
STALKER 2 is getting some free content ahead of its Cost of Hope DLC this summer. GSC Game World has confirmed that STALKER 2 is getting a free content update this month. This update is "Sealed Truth", which will allow players inside the X-18 Lab. STALKER fans should already be aware of the Lab X-18, […]
Based on the satirical novel and film franchise, Starship Troopers: Ultimate Bug War! puts the player in the shoes of Major Dietz as they fight back an army of alien arachnids. Or try things out from the bugβs perspective and destroy all humans. Either way, youβre in for a good time.
IBASE Technology Inc., a leading manufacturer of embedded and edge computing solutions, launches the MBB1002, a powerful AI-ready eATX motherboard engineered to accelerate next-generation edge AI and data-intensive applications. Powered by AMD EPYC Embedded 8004 series processors, it delivers exceptional multi-core performance and outstanding power efficiency, enabling faster AI inference, real-time analytics, and high-throughput computing at the edge.
Built for scalability and performance, the MBB1002 supports up to 576 GB DDR5-4800 ECC memory for reliable, high-speed data processing. Five PCIe Gen 5 x16 slots unlock unmatched flexibility for integrating GPUs and AI accelerators, empowering system integrators to scale performance based on evolving workload demands. With dual 10GbE LAN and high-speed NVMe storage support, the platform ensures ultra-fast data transfer and seamless system responsiveness for mission-critical deployments.
ASUS today announced the ProArt Router PRT-BE5000 and ProArt Switch PQG-U1080, introducing networking solutions into the ProArt family of Creator-First devices designed for modern studios. Joining the existing ProArt lineup of laptops, displays, graphics cards, motherboards, and other creator-focused products, these new devices help build a more complete studio infrastructure for creators. Combining dual-band Wi-Fi 7 connectivity, intelligent traffic prioritization, and high-speed multi-gigabit wired expansion, the ProArt Router PRT-BE5000 and ProArt Switch PQG-U1080 enable quick file transfers, cloud collaboration, and stable connections across multiple creative devices.
The ProArt Router PRT-BE5000 delivers dual-band Wi-Fi 7 with throughput performance of up to 5000 Mbps plus Multi-Link Operation (MLO), and dual 2.5G WAN/LAN connectivity for flexible, high-speed wired connections. Creator-First adaptive QoE intelligently prioritizes creative traffic in real time, helping to ensure fast file transfers, smooth cloud collaboration, and streaming across connected devices, in harmony with other network activity. ASUS Smart Home Master software further simplifies network segmentation through dedicated SSIDs for IoT devices and VPN connections, enabling more intuitive management across studio and personal environments.
Motherboards, who needs them? Not Breadboarding Labs, which recently drafted plans for a retro Intel 80386 (i386) PC build using solderless breadboards.
Giraffe Gold lets you build ownership of a physical gold, silver, or platinum bar through small monthly contributions and automatic round-ups. Connect your bank and spending card, set a contribution starting at $50, and watch your bar balance grow in real time with market prices.
When you hit the bar price, Giraffe Gold purchases from certified refiners and ships your bar fully insured to your door. The platform uses Plaid for secure, read-only connectivity and partners with Upstate Coin & Gold and ShipSecure to ensure authenticity and safe delivery.
Stay the Week helps homeowners privately share lake houses, cabins, beach houses, ski condos, and second homes with friends and family. Invite guests to a private booking page to see availability, request dates, and receive automatic confirmations and reminders. Owners control availability, blackout dates, and access from a simple dashboard, with directions and property info attached to each booking, replacing messy text threads with a clean, controlled process.
The UK's cybersecurity workforce has nearly tripled, but headcount growth is masking a deeper crisis - privacy teams remain critically understaffed, underfunded, and underpowered just as threats intensify.
FORMLOVA is a chat-first form service powered by MCP. Create forms from ChatGPT, Claude, or Cursor in under a minute, then manage response routing, follow-up emails, reminders, analytics, and CRM handoffs from the same conversation. It integrates with 118 tools across 24 categories, focusing on the 95% of form work that happens after the form exists.
Most AI form tools stop at creation. FORMLOVA was built by a solo founder with years in digital marketing who knew the real burden was in post-publish operations. It's free to start with unlimited forms and responses.
If you're running a website on Cloudflare's free or pro plan and don't have time to babysit logs or tune WAF rules, Detect7 fits. It gives you automated, intelligent protection that works in the background without needing you to be a security expert. You set it up once, and it handles the rest: detecting threats, escalating from challenge to block, learning traffic patterns, and managing Cloudflare firewall rules and IP lists for you. It analyzes 100% of your origin logs in real time, learns your normal patterns, and auto-blocks threats with adaptive rules pushed to your Cloudflare integration.
A China-based threat actor known for deploying Medusa ransomware has been linked to the weaponization of a combination of zero-day and N-day vulnerabilities to orchestrate "high-velocity" attacks and break into susceptible internet-facing systems.
"The threat actor's high operational tempo and proficiency in identifying exposed perimeter assets have proven successful, with recent
Spawnbase turns recurring work into reliable AI-powered workflows. Teams describe goals in plain language, then build agentic flows with triggers, AI steps, and app actions across seven providers and 200+ models. Configure nodes, test each run, and deploy on schedules or events while monitoring performance. Connect Slack, GitHub, HubSpot, Notion, Jira, and more, and pay only for executions with credit-based pricing.
Threat actors are exploiting a maximum-severity security flaw in Flowise, an open-source artificial intelligence (AI) platform, according to new findings from VulnCheck.
The vulnerability in question is CVE-2025-59528 (CVSS score: 10.0), a code injection vulnerability that could result in remote code execution.
"The CustomMCP node allows users to input configuration settings for connecting
Axe:ploit signs up with its own email and phone, discovers APIs, and probes for over 7,500 issues including IDOR, auth bypass, and complex business logic flaws. Backed by a constantly updated CVE feed, large password and fuzzing datasets, and layout-aware crawling, it adapts as your UI and logic change.
Simmerce lets e-commerce teams run synthetic customer simulations to validate features before launch. It generates diverse shopper personas (budget shoppers, brand loyalists, impulse buyers, and researchers) and simulates how they search, browse, and convert to reveal friction and opportunities.
Upload product pages, search results, or recommendations and get insights in minutes, including which segments convert, where users get stuck, and what content is missing. Use pay-as-you-go credits to test ideas fast and ship with confidence.
The program invites creators to create original content with Picsart tools for a specific campaign, share it on their social channels, and earn revenue based on how their audience engages.
Intel Desk connects geopolitical events to market instruments every 30 seconds. It aggregates over 60 OSINT and wire sources, scores sentiment, and maps topics to instruments to surface directional trade signals with evidence trails for energy traders, macro researchers, and defense analysts. Users get live market data via Finnhub, portfolio impact alerts, and twice-daily briefings, with transparent source chains and low-latency delivery you can trace end to end.
TrackACert.io is a simple tool for IT managers who need to know what certifications their team holds, what's about to expire, and where the gaps are. We connect GIAC, CompTIA, ISC², AWS, Microsoft, and more in one place. Key features include IT certification tracking, certification expiry notices for your team, skill gap analysis to identify missing skills and certifications, and full reports to help justify financing for more certifications.
Following the Artemis II mission sparked a closer look at LEGOβs space-themed sets, including an interactive rocket build and a wider collection inspired by real space exploration.
Keyspace delivers AI-powered inventory management and warehouse automation for teams of any size. It connects to your existing tools, monitors stock in real time, and drives decisions on forecasting, replenishment, and operations.
Use collaborative workflows, advanced analytics, and a customizable interface to streamline processes, cut stockouts, and reduce carrying costs while scaling with confidence.
Despite its merger with xAI and the upcoming SpaceX initial public offering in June, running the app is costing Elon Musk more money than itβs bringing in.
Several delisted Xbox and Xbox 360 games have recently appeared on the Xbox Store, leading many to wonder if they'll be made backwards compatible soon.
As the previously leaked launch date of Sony's upcoming PlayStation 6 approaches, leaks and rumors abound, with many claiming that the upcoming gaming console will launch later than initially expected. Now, reputable leaker KeplerL2 has taken to the NeoGAF forums to dispel some of the doom and gloom surrounding the launch date and potential delays of the PS6. The leaker's reasoning stems not from insider information, but rather from a simple application of logic, asking a fellow commenter: "What copium? You think AMD is gonna waste resources doing validation on something they think will get delayed?"
The reasoning seems to be that, based on prior leaks, AMD has already been working on custom APUs for both the living room version and the handheld model of the PlayStation 6, and that AMD would not continue validation of those APUs if it thought there were supply constraints that would lead to a delay ahead of the console family's expected launch date. It's also possible that AMD and Sony signed the supply contracts for the Canis and Orion APUs before the current DRAM crisis was in full swing, effectively making a launch delay impossible or at least less likely. However, this would mean that the console makers would have a batch of hardware ready for launch followed by intermittent or delayed supply, at least as long as the DRAM shortage holds. There have also been rumors that Sony will be drastically increasing the price of the PS6 consoles compared to the PS5 generation, although this appears to be at least somewhat contingent on Microsoft's Xbox Helix pricing strategy.
Stormgate, a free-to-play, StarCraft-style RTS developed by Frost Giant Studios, relies on a third-party "game server orchestration partner" to run its online modes. Frost Giant told players on Discord that the provider had been acquired by an AI company, forcing a planned outage that will take Stormgate's multiplayer modes offline...
More than a dozen investors are pressuring Amazon, Microsoft, and Alphabet's Google to provide detailed data on water and energy consumption at their U.S. data centers.
Google research indicates quantum computers may break cryptocurrency encryption sooner, prompting urgency for post quantum cryptography adoption and coordinated disclosure strategies
Zero Shot, a new venture capital fund with deep ties to OpenAI, is aiming to raise $100 million for its first fund. It has already written some checks.
Betvisors connects bettors with verified sports betting advisors and lets you tail their picks with one tap. You only tip when a pick wins; if it loses, you pay nothing. Advisors prove their track records with betslip screenshots, and a public leaderboard shows win rate, profit, and streaks. Gambly integration places bets at your sportsbook instantly, while Stripe handles secure payments. Advisors can monetize winning picks and grow a following on the platform.
Geysera helps ecommerce brands recover revenue by identifying returning anonymous visitors and syncing them to your ESP. It builds and manages cart, checkout, browse, and winback email flows inside Klaviyo, Mailchimp, or SendGrid, then optimizes continuously within your discount guardrails. Always-on holdout groups and RCTs prove incremental lift, and pricing ties to verified revenue. Brands and agencies get white-glove setup, real-time dashboards, and compliance-aware copy.
According to Telegram CEO Pavel Durov, the Kremlin's increasing efforts to control and censor the global internet are causing widespread problems for Russian users. The Russian-born entrepreneur confirmed that Telegram is now banned in the country, yet more than 50 million Russians continue to use it daily via VPNs.
Bartlett Lake, Intel's P-core-only family of CPUs intended for edge and industrial use cases, has been modded to run on consumer Z790 motherboards. At the moment, the flagship Core 9 273QPE with 12 Raptor Cove P-cores is posting around 33,000 points in Cinebench R23, which is around the Core i7-14700 mark.
A bipartisan group of U.S. senators has proposed a blanket ban on exports of advanced DUV lithography and etching tools to Chinese companies known to have worked with China's military, such as CXMT, Huawei, SMIC, and YMTC.
Apple's MacBook Air M5 is the 'best mix of winning design, near-pro-level performance, and battery life,' and Amazon has an impressive $150 discount on the 15-inch model.
If you rank your own product #1 in "best of" listicles, it's not just a search-quality issue; it may violate FTC rules that took effect in October 2024.
Driving the news. As Lily Ray noted on LinkedIn, the FTC's Consumer Review Rule (16 CFR Part 465) prohibits several deceptive practices tied to reviews and testimonials, including:
Presenting company-controlled content as independent reviews.
Publishing reviews of products or services never actually used.
Attributing reviews to people who didn't write them.
Penalties can reach up to $53,088 per violation, and each page may count separately. Ray also shared a reference table she generated with the help of Claude:
Why now. "Best X" and "Top 10 Y" listicles have surged as a GEO tactic over the past couple of years. These pages often perform well in search and increasingly influence AI-generated answers.
The backstory. Before the rule was formalized, Ray said at least one company faced legal action for publishing hundreds of "best of" pages that:
Ranked its own services #1.
Included fabricated competitor reviews.
Used fake reviews on third-party platforms.
The Better Business Bureau later censured the company for unsubstantiated claims.
What's happening. Many modern listicles follow a similar pattern:
A brand publishes a "best tools" list.
Includes competitors it hasn't tested.
Uses subjective or invented scoring systems.
Ranks itself #1.
These listicles may imply independence or firsthand evaluation when neither exists.
The nuance. You can publish comparison content that includes your own product. However, based on FTC guidance, risk increases when:
You imply objectivity, but promote your own product.
You present reviews not based on real experience.
You fail to clearly disclose material relationships.
What Google is saying. Google is aware of the low-quality listicle trend. In a statement to The Verge, a Google spokesperson said the company applies protections against manipulation in Search and Gemini, and reiterated its guidance: create content for people and ensure it's understandable to search systems.
Why we care. What has worked as a visibility tactic may carry risk on two fronts: regulators and a potential Google Search algorithm change. That means this popular GEO tactic could decline quickly as its effectiveness drops.
Caveat. I'm not a lawyer. Consult your own legal counsel if you're concerned about using this tactic.
Intel preps a huge socket for future CPUs with HUGE graphics chips. Intel is reportedly working on a new CPU that aims to challenge AMD's "Strix Halo" and Apple Silicon. With its huge 4326 socket, Razor Lake AX aims to combine strong CPU and GPU hardware in a single-package computing solution for […]
Ahead of STALKER 2's upcoming Cost of Hope DLC, dev GSC Game World is releasing a free Sealed Truth content update for the open-world survival shooter.
Microsoft has been addressing the recent wave of "Microslop" criticism that has emerged online in response to the forced integration of AI into its products. Specifically, Microsoft has been promoting its Copilot applications, products, and even Copilot-branded hardware like Copilot+ AI PCs to consumers. However, this is just scratching the surface, as the actual number of Copilot variants is much higher than what the average PC enthusiast might expect. If you've ever wondered how many Copilot applications exist, the official count stands at 80 Copilot applications, products, services, and hardware offerings that the Redmond giant has developed. Across every Microsoft vertical, there is a Copilot icon in some form, even on Copilot+ PCs with their own dedicated Copilot key. This represents the biggest branding overhaul in Microsoft's history, as the company traditionally distinguished products with unique features and names.
However, the popularity of its ecosystem is at an all-time low, particularly within the PC community, which interacts most with the Windows 11 operating system and the Microsoft 365 suite of applications, formerly known as the Office package, including Excel, Word, PowerPoint, and others. Regular consumers are largely unaware of the extent of the Copilot branding, as Microsoft has extended its AI narrative to consumer and business chatbots, developer tools, desktop applications, Copilot applications within other applications, enterprise platforms, hardware, and business software serving the enterprise sector. At some point, the community narrative suggests that the branding is being pushed a bit too aggressively, as Windows 11 users, who interact daily with the world's most widely distributed operating system, have openly discussed the drawbacks of the forced Copilot integration.
An enthusiast has discovered that there are now a staggering 80 distinct Microsoft products carrying the name "Copilot" and perhaps even more. That figure comes from Tey Bannerman, an AI strategy and design consultant, who undertook the challenging – but rewarding – task of counting all the Copilot products now...
According to Microsoft's latest timeline, Windows 11 25H2 will soon be rolled out to all devices running the Home and Pro editions of Windows 11 24H2. The latter is set to reach the end of official support on October 13, 2026, and Redmond is clearly aiming to move as many...
An Iran-nexus threat actor is suspected to be behind a password-spraying campaign targeting Microsoft 365 environments in Israel and the U.A.E. amid ongoing conflict in the Middle East.
The activity, assessed to be ongoing, was carried out in three distinct attack waves that took place on March 3, March 13, and March 23, 2026, per Check Point.
"The campaign is primarily
The NHTSA closed its investigation into Tesla's "Actually Smart Summon" feature, saying that only a fraction of cases resulted in an incident and that no incidents resulted in injury. Tesla has also issued a number of software updates.
AI travel companion that generates day-by-day itineraries, then stays with you throughout the trip. Proactive morning briefings, smart packing lists, budget tracking, local insider tips, and a Spotify Wrapped-style trip recap. Free to try.
co-parenting.ai is the AI family app for separated parents – an assistant that communicates and coordinates the logistics, a place for family context, a village hub for grandparents, nannies, and schools, and a dedicated space for kids that shields them from adult conflict. It works whether your co-parent joins or not.
We built this for the kids. Every conflict de-escalated means two present parents instead of two stressed ones. Every caregiver and extended family member who shows up knowing the schedule is a child who feels held by a bigger family.
Human-written content dominates Google's top rankings, appearing in the No. 1 position 80% of the time versus just 9% for purely AI-generated pages, based on a Semrush analysis of 42,000 blog posts.
The details. Semrush analyzed 20,000 keywords and their top 10 results, classifying content with an AI detector.
Human-written pages outperformed AI and mixed content across all top 10 positions.
The gap was widest at Position 1, where human content was 8x more likely to rank.
AI content appeared more often lower on Page 1, nearly doubling from Positions 1 to 4.
Yes, but. AI detection tools are widely known to be inconsistent and can misclassify human and AI-written content, creating some possible "fuzziness" in these classifications.
Why we care. AI-generated content works, until it doesn't. Yes, AI can help you rank, but this data suggests human insight still drives the best performance. For competitive queries, originality, expertise, and editorial judgment remain your unfair advantages.
Perception vs. data. 72% of SEOs said AI content performs as well as or better than human content, yet ranking data showed a clear human advantage at the top.
How teams use AI. No surprise, AI is widely adopted and often used in a hybrid approach:
87% of teams keep humans heavily involved in content creation.
64% use a human-led, AI-assisted workflow.
AI is most common in research, drafting, and optimization.
Use drops sharply for multimedia, localization, and higher-judgment tasks.
What's driving adoption. AI accelerates output, but doesn't reliably improve it.
70% cite faster production as AI's top benefit.
Only 19% say it improves content quality.
About the data: The analysis examined 42,000 blog pages from 200,000 URLs tied to 20,000 keywords, using GPTZero to classify content. It also includes a survey of 224 SEO professionals working in content and search.
Microsoft MVP Lance McCarthy highlights how easy it actually is to add AI to an app using Microsoft's Windows AI APIs during development. I want to see more of this type of AI and less of the unnecessary bloat that's giving it a bad name.
Intel's upcoming "Nova Lake" CPU generation, part of the Core Ultra 400 series, will be a major refresh of the company's P and E-core hybrid design. While many specifications have been largely leaked, the exact integrated GPU configuration remained a mystery until now. One of the most reliable Intel leakers, Jaykihn, has revealed that Intel plans to use the Xe3 generation of graphics, which is found in the current "Panther Lake" Arc B300 series of integrated GPUs. The display and media engine will come from the Xe3P "Crescent Island." Previously, we reported the source's claim that the "Nova Lake" display and media engine would incorporate some IP elements from the Xe4 "Druid" generation of graphics. However, the actual underlying hardware is not related to Xe4 and instead borrows IP from Xe3P.
Intel's plans for "Nova Lake" are focused on late 2026, with the entire lineup expected to roll out in early 2027. The platform will support DDR5 memory at 8,000 MT/s out of the box, without any overclocking. This indicates an improved integrated memory controller on the Nova Lake platform, which seems ready to handle those speeds even before XMP or factory-overclocked modules are considered. It also suggests that Intel is pushing memory support further than its current controller, which reaches DDR5-7200 on the current "Arrow Lake Refresh," alongside the new core IP and updated configuration.
Threat actors likely associated with the Democratic People's Republic of Korea (DPRK) have been observed using GitHub as command-and-control (C2) infrastructure in multi-stage attacks targeting organizations in South Korea.
The attack chain, per Fortinet FortiGuard Labs, involves obfuscated Windows shortcut (LNK) files acting as the starting point to drop a decoy PDF
The company, which commands 37% of the global electric vehicle battery market and 22% of the energy storage segment, has already equipped roughly 900 ships with its batteries. Most of these are small, nearshore vessels that operate along China's coastlines, at ports, or on inland waterways.
The Artemis II mission is starting to capture never-before-seen (by human eyes) views of the moon, which should excite stargazers and photographers.
Apple plans to ask the Supreme Court to review its App Store fight with Epic Games, as it challenges a ruling limiting its ability to charge fees on external payments.
MetricSign monitors your Power BI datasets every five minutes and tells you when something breaks, such as a failed refresh, a missing column, or a changed schedule. Each alert includes the exact error, what caused it, and a direct link to fix it in Power BI. It works with ADF, Fabric Pipelines, and Databricks too, so you can see the full chain from source to dashboard. Setup takes two minutes: sign in with Microsoft, pick your workspaces, and you're done.
LLM Pulse helps companies understand how they appear in AI search and how to improve it. We track brand presence across leading LLMs like ChatGPT, Google AI Mode, Gemini, and Perplexity, analyzing prompts, responses, citations, and sentiment. Rather than relying on abstract scores, LLM Pulse shows the actual answers users see, making it easy to spot gaps, understand competitors, and take action through content and technical improvements. Designed for marketing, SEO, and growth teams, it turns AI visibility into something you can measure, understand, and act on.
AutoScaled generates personalized presentations directly from your CRM and spreadsheet data using a single prompt. Connect HubSpot, Salesforce, Attio, or your data sheet and specify which records to create tailored presentations for. Upload a template from Google Slides or PowerPoint to build sales decks in seconds.
The AI agent personalizes your content based on your data, maintains brand consistency, and saves you time. You can trigger presentations when CRM data changes, schedule recurring decks, and refresh existing ones with one click. Share content via branded pages, track engagement, and see who viewed what.
In this case study, we went deep instead of broad. We focused on one question: why wasn't a brand present in a single ChatGPT prompt across ~70 iterations?
We chose one prompt: "What are the best hotels in New York City?"
We analyzed mentions, citations, fanouts, and SERPs in Google and Bing. We also planned to analyze GPT memory, but it made no discernible difference to mentions, citations, or fanouts.
What we did and what we found
We chose NYC hotels because it's a crowded, mature market with juggernauts and up-and-comers. We also have no connection to the NYC luxury hotel space – we intentionally picked an area where we could stay objective and learn from scratch.
After running the prompt "what are the best hotels in New York City" 68 times, we identified which hotels appeared most consistently and which were nearly invisible.
We chose the Baccarat Hotel as our "client" because it appeared only once (1.5% of the time), despite strong reviews and clear alignment with the prompt's intent. We wanted to know why – and whether it could change that.
Key findings:
You can dominate query fanouts on Google SERPs and still underperform in ChatGPT brand mentions.
Bing matters most. Ranking in Bing articles for fanouts aligns more directly with ChatGPT mentions – not just citations.
In verticals dominated by third-party content, you face complex digital PR paths to increase visibility.
Note: A full methodology breakdown appears in the appendix.
Mentions of the Baccarat vs. the Fifth Avenue Hotel show just how wide the disparity in ChatGPT visibility can be
The Baccarat Hotel appeared once in 68 trials (1.5%).
Top performers were large luxury hotels like the Four Seasons Hotel New York Downtown.
ChatGPT also identified boutique hotels as a subcategory, generating a secondary list in its answers. Boutique hotels like the Baccarat are typically smaller and not part of large chains.
Within this boutique subcategory, the Baccarat still underperformed. The Fifth Avenue Hotel, the top-performing boutique property, appeared 13 times, cited 20% of the time, versus the Baccarat's 1.5%.
Reputation can't explain visibility disparities
We first checked whether anything in the hotel's history or reputation could explain the gap. As the chart below shows, nothing significant did:
| | The Baccarat | The Fifth Avenue |
| --- | --- | --- |
| Year founded | 2015 | 2023 |
| Current price | $930 | $563 |
| Number of Google reviews | 1.3k | 213 |
| Google reviews rating | 4.6 | 4.6 |
| Number of Expedia reviews | 531 | 201 |
| Expedia reviews rating | 9.4 | 9.6 |
Overall, the Baccarat has been around longer and has more reviews. On quality, the Fifth Avenue Hotel has no edge in Google reviews and only a slight edge in Expedia reviews. The only area where the Baccarat lags is price – but that's unlikely the issue when The Ritz-Carlton, a consistent non-boutique winner, is listed at $1,100.
Further reinforcing the Fifth Avenue's underdog status: one of its most prominent Google results (rank 2) was a Wikipedia page for a different Fifth Avenue Hotel that closed in 1908, creating potential entity confusion similar to the two Danny Goodwins.
If the Fifth Avenue Hotel had been the one missing, it would suggest a less established brand with entity confusion. But the opposite happened β it prevailed in ChatGPT.
So what was the problem for the Baccarat Hotel?
Winning Google SERPs for query fanouts doesn't help, but winning Bing SERPs does
When ChatGPT performs a web search, it sends a series of queries you can extract via Chrome DevTools. In this case study, examples included:
[Best hotels in new york city]
[Top rated luxury hotels in new york city recommendations]
[Best hotels in nyc top luxury and boutique hotels new york]
[Best luxury and boutique hotels in new york city recommendations reviews]
[Best hotels in new york city nyc top hotels]
[Top hotels in nyc luxury boutique best places to stay new york city]
In total, we extracted 25 unique query fanouts.
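These queries come straight out of a DevTools network capture, which can be saved as a HAR file and mined programmatically. A minimal Python sketch follows; note that ChatGPT's payload fields are undocumented and change, so the "search_query" key below is a hypothetical placeholder – inspect your own capture to find the real one.

```python
import json
import re


def extract_fanout_queries(har_path: str) -> list[str]:
    """Pull search-style query strings out of a DevTools HAR export.

    "search_query" is a placeholder key for illustration; ChatGPT's real
    payload field is undocumented, so adapt the regex to your capture.
    """
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)  # HAR 1.2 layout: {"log": {"entries": [...]}}

    queries = set()
    for entry in har.get("log", {}).get("entries", []):
        body = entry.get("response", {}).get("content", {}).get("text") or ""
        for match in re.findall(r'"search_query"\s*:\s*"([^"]+)"', body):
            queries.add(match.lower())
    return sorted(queries)
```

Deduplicating and lowercasing mirrors how the case study arrived at its 25 unique fanouts from many overlapping requests.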
What we saw in the Google SERPs
If we only looked at the articles dominating fanout SERPs in Google, we'd expect the Baccarat to narrowly outperform the Fifth Avenue in ChatGPT. That didn't happen.
In the table below, the Baccarat "wins" three of the top 10 most frequently appearing pages, while the Fifth Avenue Hotel "wins" two. The other five feature neither. A "win" means one of the following:
The Baccarat is listed as a "one key" hotel, placing it at the bottom of the list. The Fifth Avenue Hotel is listed as a "two key" hotel, placing it in the middle of the list.
Both mentioned, but the Fifth Avenue much more positively
What we saw in the Bing SERPs
By contrast, looking only at the articles dominating fanout SERPs in Bing, we'd expect the Fifth Avenue to outperform the Baccarat in ChatGPT – and it did.
In the table below, the Fifth Avenue "wins" five of the eight most frequently appearing URLs.
Note: The table includes two fewer URLs because Bing SERPs were slightly less diverse for these fanouts.
Both are listed, but the Fifth Avenue is listed under "Our Top Picks"
https://travel.usnews.com/hotels/new_york_ny/
The Baccarat
The Baccarat is #11 on the list, the Fifth Avenue is #16
The connection between Bing visibility and brand mentions
Bing rank strongly predicts ChatGPT citations – 87% align with Bing's top results, Seer Interactive found. Our case study supports this and extends it.
We examined the relationship between fanouts (Seer focused on prompts) and brand mentions.
Example mention: "For a luxury boutique feel: listings like The Fifth Avenue Hotel or Crosby Street Hotel consistently make 'top NYC' lists from travel editors."
Mentions are often more valuable than citations. Most people won't follow citations but will remember the top recommendation.
There's ongoing debate about whether fanouts shape ChatGPT's answers and mentions, or simply support answers generated from training data. For example, Leigh McKenzie argued on LinkedIn:
"The citations you see at the bottom? Those are surfaced after the answer is generated, not before. It's post-hoc rationalization. The model didn't choose your brand because it found your URL. It generated an answer based on what it already knows, then pointed to sources that support it."
By contrast, our data aligns with Beehiiv's research, which suggests citations do shape mentions.
Training data doesn't appear to be the issue for the Baccarat. Compared to the Fifth Avenue, it's older, has more reviews, and holds similarly high ratings across major platforms. What it lacks is strong presence in Bing results for fanouts and citations, which appears to lead to fewer mentions.
A simple flow might look like this:
Brand ranks in Bing → ChatGPT fanouts pull in Bing pages → ChatGPT synthesizes training and Bing data to generate mentions
Coda: A tale of two Forbes articles, or why the details matter
Our data shows that "targeting Forbes" isn't specific enough.
The top result surfaced in both Bing and ChatGPT was the same Forbes article. In Google, the most frequent fanout result was also a Forbes article – but a different one.
As we've seen, getting into Google's Forbes article likely wouldn't provide a meaningful boost. The Baccarat "won" in that piece.
Getting into Bing's Forbes article, where the Baccarat wasn't mentioned, could make all the difference. This requires a highly surgical approach grounded in Bing data.
Generalities won't work; detail reigns supreme.
Appendix: Methodology
Model: We prompted GPT-5.2 Instant in the ChatGPT interface and manually extracted results; we didn't use the API.
Number of iterations: We ran the same prompt 68 times.
Prompt: "What are the best hotels in New York City?"
Settings: We tested three memory states:
Saved memories off
Saved memories on, using unrelated real user memories
Saved memories on, with one memory about needing gluten-free travel accommodations
For all trials, we turned off "reference chat history" to avoid interference across iterations.
We expected differences based on memory settings but found none, so we treated all trials as a single dataset.
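The counting behind the mention rates is straightforward to reproduce. A minimal sketch, assuming each of the 68 answers is saved as a plain-text file – the directory layout and brand list are illustrative, not part of the study's tooling:

```python
from collections import Counter
from pathlib import Path


def mention_rates(responses_dir: str, brands: list[str]) -> dict[str, float]:
    """Share of saved answer files that mention each brand (case-insensitive)."""
    files = sorted(Path(responses_dir).glob("*.txt"))
    counts = Counter()
    for path in files:
        text = path.read_text(encoding="utf-8").lower()
        for brand in brands:
            if brand.lower() in text:
                counts[brand] += 1
    total = len(files)
    return {b: counts[b] / total for b in brands} if total else {}
```

Substring matching is crude (it would conflate the two Fifth Avenue Hotels, for instance), which is one reason manual extraction was used in the study itself.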
Citing unnamed industry sources, South Korean outlet ETNews reports that the Pro model would slot between the Plus and the Ultra in the upcoming S27 series, effectively becoming the second-most premium option. It is expected to inherit much of the Ultra's feature set, including Samsung's new Privacy Display technology, which...
Target is one of many retailers jumping on the AI bandwagon by introducing an assistant that can suggest products and complete purchases for customers. The pitch is convenience: less browsing, fewer clicks, and an easier way to fill a cart. The risk is that shoppers may end up handing over...
OpenAI proposes taxes on AI profits, public wealth funds, and expanded safety nets to address job loss and inequality, blending redistribution with capitalism as policymakers debate AI's economic impact.
Nominate your startup, or one you know that deserves the spotlight, and finish the process by applying. The 200 selected startups have a chance at VC access, TechCrunch coverage, and $100K for Startup Battlefield 200. Applications close on May 27.
The MSI MAG A1200PLS PCIE5 delivers strong cold efficiency numbers and excellent build presentation, but a thermal weakness at high load, a questionable 80Plus Platinum claim, and a steep asking price temper an otherwise capable unit.
Is it possible to get an accurate view of the current state of SEO?
There have been multiple attempts to reach consensus on what works, predict what might be coming, and identify the factors that may play a role in "good" (or "bad") SEO.
As useful and productive as some of this may be, none of it offers the same grounded data as the Web Almanac, a project I was honored to be a part of. With the publication of the 2025 SEO chapter, we can now review the data and spot the emerging trends from 2025 and what that could mean for SEO in 2026.
SEO standards on the rise
2025 has been another year of rising SEO standards – which can only be a good thing:
Near-universal adoption of HTTPS (now up to 91%+).
Increased use of title tags at nearly 99% adoption, and even viewport meta tags at over 93% adoption.
Canonical adoption rose from 65% in 2024 to 67%+ in 2025.
HTML validity is slowly improving. For example, invalid <head> elements dropped to 10.1% on desktop and 10.3% on mobile from 10.6% and 10.9%, respectively, in the previous year.
Robots.txt error rates fell: 404s declined to 13% from 14% the previous year, and 5xx responses fell to ~0.1%.
Meta robots usage has crept up to 46.2% in 2025 from 45.5% the prior year.
Not all of these statistics represent rapid change, but they do show steady and consistent change, at the very least. The 2025 Web Almanac data presents the web as a more secure and easier-to-crawl place, which is certainly a positive.
So, can SEOs take a victory lap right now? No, as there is more to do in 2026, even if the basics do feel like they're stable or steadily improving.
Content management systems (CMSs) and SEO plugins play a huge role in developing SEO best practices and cementing the "default" or de facto standards.
As the CMS chapter in the 2025 Web Almanac shows, more and more websites are now powered by a CMS:
Of these, the top five most popular systems over the last four years likely aren't surprising.
Frequently underpinning many SEO defaults are SEO tools typically utilized by WordPress sites:
That's not to say that using these platforms or tools ensures a perfect website setup. That said, key elements or functions of these tools can become industry standard due to their ubiquity:
Robots.txt.
Sitemap.xml.
Canonical tags.
Semantic HTML.
Structured data.
Not all of these are on by default. Sometimes they require inputting basic details or simple implementation. Regardless, their ease of access increases the likelihood that they will become an SEO best practice.
This is happening, and it's proving effective. What this means for 2026 and beyond is that:
Working with or lobbying major platform and tool makers is one of the key ways to shape SEO's future direction.
SEO tools and platforms will continue to enforce best practices on the front end, but they could also benefit from AI and assistive features behind the scenes. While it may be less visible in the data itself, these tools offer the opportunity to move quickly and gain deeper insight.
Structured data usage was previously driven by what Google rewarded in the search engine results pages (SERPs). SEOs and plugin developers alike could be inspired to move beyond what's beneficial for the SERPs and onto what contributes to a more predictable, structured, and retrievable data set.
Deprecated, but not forgotten
Defaults and best practices help, but they don't finish the job. While attention often shifts to new features, old or forgotten standards still see widespread use.
There have been many different cases where deprecated settings or standards have prominently appeared in the data.
For example, in meta robots bot declarations, "msnbot" is still in the top 5, even though it was replaced over 16 years ago.
AMP use has plummeted over the years, but it's still found on over 38,000 homepages. While technically not deprecated, amp.dev has seen no activity for nearly four years now.
The most common meta robots attributes are "index" and "follow," which are implicit and largely ignored.
Web changes – no matter how small – are often neither quick nor easy to get done, and we'll likely see traces of deprecated features and settings in the data for years to come.
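In markup terms, the redundancy around implicit meta robots values looks like this (illustrative snippet): the first tag restates the crawler defaults and can be dropped entirely, while only non-default directives carry meaning.

```html
<!-- Redundant: "index, follow" is already the default crawler behavior -->
<meta name="robots" content="index, follow">

<!-- Meaningful: overrides the defaults -->
<meta name="robots" content="noindex, nofollow">
```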
More work is needed
The improvement in SEO standards doesn't apply to all features and sites. There are some that aren't moving in the same direction:
The mobile performance gap stubbornly lingers – even as it continues to improve.
Duplicate content management is still lagging, with nearly 33% of pages missing canonical implementation.
Advanced configurations have barely moved from the previous year – nearly 67% of images and over 91% of iframes still lack loading attributes.
Many deprecated standards refuse to go away.
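The canonical and loading-attribute gaps above are each a one-line markup fix; the URLs here are placeholders.

```html
<!-- Canonical: point duplicate or parameterized URLs at the preferred version -->
<link rel="canonical" href="https://example.com/page/">

<!-- Native lazy loading for below-the-fold images and iframes -->
<img src="photo.jpg" alt="Lobby" width="800" height="600" loading="lazy">
<iframe src="https://example.com/embed" title="Map" loading="lazy"></iframe>
```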
While CMS default settings or configurations can take credit for some of the larger changes, they also bear some of the responsibility for the issues above. For example, median Lighthouse scores for some of the major CMS platforms are still lagging, especially on mobile (while seeing increases over last year).
The long tail of the web is still messy, and this will probably always be the case. The Web Almanac dataset doesn't exclude websites that are no longer relevant or abandoned.
Site metrics that meet the "top" standards from an SEO best practices point of view can likely be achieved with an out-of-the-box site built on any major CMS with a modern theme and 30 minutes of carefully considered configuration. This is one of the most significant opportunities in technical SEO.
In 2026, we'll likely:
Continue to see performance gaps converge between desktop and mobile experiences – but slowly.
Still be able to see echoes of past markup and decisions. Even if the collective focus is pulled to the "new world" of AI search, many SEOs won't abandon proven tactics and approaches from past years. This dataset develops slowly.
Observe something that's mostly "business as usual."
One of the more eagerly awaited elements of the Web Almanac data was whether we can chart the increasing presence and impact of AI search and crawlers in the decisions of SEOs and developers.
Within the data, we observed two major developments:
Robots.txt is increasingly used as a policy document rather than for crawler control.
Creation and adoption of llms.txt is one of the few signs of LLM-first decision-making.
Commenting on the state of SEO is challenging because the definition isn't fixed. What's good or bad practice is often hotly debated, and in the world of AI search, another (painful) metamorphosis is now taking place.
In the HTTP Archive data we can observe the influences working on SEO from a "nuts and bolts" point of view, report on what we see, and enable people to make up their own minds.
Specifically, one of the elements we added this year was the analysis of the llms.txt file.
This is a highly controversial text file, but our inclusion was not an endorsement. It's a recognition that changing trends may (or may not) shape the web. Whether it's effective or accepted, its adoption says something, and we felt it was important to review that.
Robots.txt as a bouncer
It's clear that robots.txt has a more important job now than ever. Until relatively recently, it was largely used for targeted control of crawlers, particularly Googlebot and Bingbot.
For most SEOs, however, robots.txt was mostly an exercise in both ensuring we weren't blocking anything by accident and resolving problem areas with Disallow rules. This has changed:
Gptbot: 4.5% on desktop and 4.2% on mobile in 2025 is up from 2.9% on desktop and 2.7% on mobile in 2024, representing a ~55% increase.
Ccbot: 3.5% on desktop and 3.2% on mobile in 2025 is up from 2.7% on desktop and 2.4% on mobile in 2024.
Petalbot: 4.0% on desktop and 4.4% on mobile in 2025 (not separately tracked in 2024).
Claudebot: 3.6% on desktop and 3.4% on mobile in 2025 is up from 1.9% on desktop and 1.6% on mobile in 2024, nearly doubling.
Robots.txt isn't the only way to manage bots – and arguably isn't the best – but it introduces a new decision that must be made: How should websites handle LLM crawlbots?
This will be one of the biggest areas of change on the technical side in 2026:
Businesses with existing bot strategies will need to evolve them.
Businesses that don't meaningfully manage crawlers will start feeling the pressure to do so.
Robots.txt will still be the clearest and easiest way to handle crawlers. We will almost certainly see more good and bad bots alike.
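In robots.txt terms, opting out the AI crawlers named above takes only a few records. The user-agent tokens here are the ones those operators document; adjust the Disallow paths to your own policy.

```txt
# Block common AI/LLM crawlers site-wide; search bots are unaffected.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PetalBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```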
In 2026, SEOs will be drawn into bot management conversations spanning marketing, technology, and security. "Which bots should we allow?" is a question with downstream effects on budgets, revenue, and users, and we'll need to closely monitor what develops.
LLMs.txt
LLMs.txt is an aspiring web standard that aims to guide LLM crawlbot behavior and make it easier for them to retrieve content before generating an answer. It's a highly controversial .txt file, and there's a vigorous debate on whether it actually benefits LLMs, will gain widespread use, and is a possible vector for manipulation.
The rationale or efficacy of this file isn't something we need to cover here. For this article, the true point of interest with llms.txt is the adoption of this file as a statement of intent.
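For context, the proposed format (per llmstxt.org) is a small markdown file served from the site root: an H1 name, a blockquote summary, and link lists pointing crawlers at key content. Roughly, with placeholder content:

```markdown
# Example Hotel

> Boutique hotel in Midtown Manhattan; the key pages for LLMs are listed below.

## Rooms

- [Suites overview](https://example.com/suites.md): room types and rates

## Optional

- [Press coverage](https://example.com/press.md)
```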
At the start of 2025, I crawled the Majestic Million, a regularly updated list of the top 1 million websites ranked by backlink authority, in search of llms.txt and found that adoption was extremely low (just 15 sites, or 0.0015%).
While searching one million sites versus 16 million presents some logistical differences, I was expecting a very low level of adoption based on prior experience. I was surprised at how wrong I was.
According to the 2025 data, just over 2% of sites had a valid llms.txt file, and:
39.6% of llms.txt files are related to All in One SEO (AIOSEO).
3.6% of llms.txt files are related to Yoast SEO.
This number is still relatively low, but it's much higher than I thought it would be and potentially represents a huge acceleration.
The primary reason fueling adoption of llms.txt is SEO plugins that make the file easier to enable.
We can see that llms.txt adoption has continued to rise ever since we started collecting data from across the web:
If, however, the implementation of this file is actually a default feature in some scenarios, it could be easy to overvalue its significance.
LLMs.txt will still be a barometer of AI search decision-making in 2026:
More tools and plugins will offer this functionality if they don't already.
Yoast and Rank Math (which don't default llms.txt to "on") represent more growth opportunities for this file. Many SEOs may decide to switch it on even if there isn't strong evidence of its efficacy.
The rate of adoption will continue to climb, but whether it'll reach a point where it becomes an accepted best practice is harder to forecast.
FAQ growth
Another interesting trend worth discussing is the increase in the use of the FAQPage schema.
While this isn't as explicit a trend as robots.txt or llms.txt usage, the increased adoption of this schema type is particularly interesting. With Google having restricted FAQ rich results in search since 2023, you might expect FAQPage usage to decline.
However, you can see from the last three publications of the Web Almanac that this isn't the case:
The use of FAQPage schema is now an emerging trend as AI search heavily cites FAQ content in its outputs.
This could be correlation rather than causation, but the steady increase in FAQPage schema is a strong sign of AI search strategies changing the shape of the web.
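For reference, a minimal FAQPage block uses schema.org's JSON-LD vocabulary; the question and answer text here are illustrative.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Does the hotel offer gluten-free breakfast?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Yes, gluten-free options are available daily on request."
    }
  }]
}
```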
To echo another conclusion from earlier, 2026 may well see continued growth of structured data types even if they donβt result in an obvious improvement. While the growth is unlikely to be explosive, making a case for their implementation is easier when we donβt just optimize for Google.
Will AI search reshape the web in 2026? Unlikely. Will we continue to see signs of its importance? Almost certainly, but letβs not get carried away.Β
SEO has a reputation for changing quickly. Sometimes that's true. More often, it's the conversation that moves quickly, while the web itself changes at a steadier pace.
The 2025 Web Almanac data clearly reflects that tension. Core SEO hygiene continues to improve year over year, but largely through defaults, tooling, and platform behavior rather than deliberate optimization.
At the same time, long-deprecated standards linger, advanced configurations remain uneven, and the long tail of the web remains untidy. Progress is real, but it's incremental — and sometimes accidental.
What has shifted meaningfully is intent.
Robots.txt is no longer just crawl housekeeping. It's becoming a policy surface.
LLMs.txt, regardless of whether it proves useful, represents a new class of decision-making entirely.
FAQ patterns are on the rise again — not because of SERP features, but because structured, extractable answers have immense value elsewhere.
2026 will not be remembered as the year SEO ended or was reborn. It may, however, be considered the year the AI search layer became more defined. A new patch applied — not a fundamental rewrite.
Most guidance on optimizing for AI still focuses on how content is written. But AI systems don't read content the way humans do. These systems extract information, break it into parts, and reuse it in new contexts. What matters is whether your content can be pulled into an AI-sourced answer cleanly.
Where traditional SEO has centered on ranking pages, AI systems prioritize retrievable units of meaning. That changes how content needs to be built:
The 5 core principles of AI-preferred content design
When content is retrieved in pieces, used in generated answers, and selectively attributed, structure becomes the lever. These principles show up consistently in content that gets surfaced by AI systems:
1. Modular by design
Content is more useful when it's built in discrete units. Each section should:
Address a specific question or subtopic.
Be understandable without relying on surrounding text.
Long sections that depend on earlier context are harder to reuse in isolation. Modular structure also makes content easier to update, test, and repurpose across surfaces — without rewriting the entire page.
2. Hierarchically structured
A clear hierarchy helps systems understand what each section contains and how it relates to the rest of the page. H2 → H3 → H4 structure should signal:
Topic: What the section is about.
Intent: What question it answers.
Scope: How narrow or specific it is.
Headings should make each sectionβs purpose immediately clear. When that signal is weak, it becomes harder to match the right section to the right query.
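For instance, a heading outline that carries all three signals might look like the sketch below (the subject and wording are invented for illustration):

```markdown
## Reducing page load time with CDN caching
### What is cache hit ratio?
#### Calculating hit ratio for image assets
### How long should cache TTLs be?
```

The H2 names the topic, each H3 is phrased as the question it answers (intent), and the H4 narrows to a specific sub-case (scope) — so a retrieval system can match sections to queries without reading the body text.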
3. Explicit over implied
AI systems rely on what's stated directly. Make relationships and conclusions clear by:
Defining terms when theyβre introduced.
Stating outcomes or takeaways directly.
Clarifying cause-and-effect or comparisons, rather than implying them.
If something is important, it should be written plainly. Copy that requires inference is harder to interpret and more likely to be skipped in favor of clearer alternatives.
4. Answer-first formatting
Place the direct answer to the section's core question at the top, then expand.
AI systems prioritize passages that resolve a query immediately. When the answer is delayed or embedded within a longer explanation, the relevance of that passage becomes less obvious.
The rest of the section can then add deeper nuance, examples, or other details that further understanding without changing the core response.
5. Designed for passage-level extraction
Passages compete for selection, both within the same article and across the web.
When multiple sections address the same question in similar ways, they dilute each other. Clear, specific, and well-scoped content "chunks" are more likely to be selected.
You can audit a passageβs usefulness by asking:
Is it understandable without additional context?
Does it fully answer a single question?
Can it be quoted as an answer without any editing?
If the passage needs context or cleanup, it's less competitive.
Common content patterns that improve AI retrieval and use
These patterns show how structured, answer-first content is applied in practice — making it easier for AI systems to match, extract, and use.
The "definition + expansion" block pattern
Start with a clear definition. Then add detail. This works best for:
Concepts.
Terminology.
Processes.
The definition should establish what something is in a way that can be quoted independently. The expansion then adds context, nuance, or examples.
This pattern helps position your content as a reference point for core concepts — especially when AI systems need a clean, authoritative definition.
The "question → direct answer → context" pattern
AI systems are designed to respond to queries. This pattern aligns your content to that structure.
Order your content as:
Question.
Immediate answer.
Supporting detail.
The answer should resolve the query in one to two sentences, using the same language or phrasing as the question where possible.
Remaining content can add depth through nuance and edge cases that extend beyond the core answer.
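Assembled, a section following this pattern might read like the sketch below (the topic is invented for illustration):

```markdown
### How long does DNS propagation take?

DNS propagation typically takes from a few minutes up to 48 hours,
depending on the record's TTL and resolver caching.

Lower TTLs shorten the window because resolvers refresh sooner, though
some resolvers ignore TTLs entirely, which explains occasional outliers.
```

The first paragraph can be quoted verbatim as an answer; everything after it adds nuance without being required for the answer to make sense.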
The "framed list" pattern
Lists work best when they're introduced by a clear framing sentence that tells the reader — and the retrieval system — what the items represent. Each item should:
Follow a consistent structure (e.g., all actions, all criteria, all features).
Stay at the same level of detail.
Clearly map back to the framing sentence.
This pattern works especially well for steps, criteria, features, and takeaways.
Well-structured lists are easier for systems to parse and reuse, especially when each item is clearly defined within the context of the list.
The "comparison" pattern
Structure content to make differences explicit. This works well for alternatives ("X vs. Y"), tradeoffs, and decision-making criteria. You can use:
Side-by-side comparisons.
Clear evaluation criteria (price, features, use case, limitations).
Direct statements of when to choose each option.
Content that clearly outlines differences is easier for AI systems to extract and reuse in answers that involve evaluation or recommendations.
Top content design mistakes that limit AI visibility
Most AI surfacing issues come back to content structure. When structure is weak, answers are harder to identify and extract. That tends to show up in the form of:
Overly narrative, under-structured content
Long paragraphs with key points buried inside make it harder to isolate a clear answer. Without strong subheadings to define what each section covers, systems have fewer signals to identify where that answer lives.
Ask:
Does this section answer a clear question, or just explore a topic?
Is the main point easy to identify in the first few lines?
Do the subheadings clearly signal what each section contains?
Vague or non-descriptive headers
Headers like "Overview," "Introduction," or "Key Takeaways" don't provide enough signal about what the section actually contains.
Headings help systems understand what a section covers and how it relates to a query. When they're vague, the relationship between section and query becomes less explicit.
Ask:
Would this header make sense out of context?
Does it clearly reflect the question or topic being answered?
Could multiple sections on the page use the same header?
Answers buried mid-paragraph
When the answer appears halfway through a paragraph, it's harder to isolate as a clean, reusable unit.
AI systems look for segments that clearly resolve a query. When the answer is embedded within surrounding context, it becomes less distinct and more likely to be overlooked.
Ask:
Is the answer clearly distinguishable from the neighboring text?
Does contextual copy clarify or dilute the answer's main point?
Redundant or repetitive sections
When sections overlap, they compete for the same query and weaken the overall signal. Instead of reinforcing the topic, similar sections can fragment it across multiple passages, making it less clear which one should be selected.
Ask:
Do multiple sections answer the same question in slightly different ways?
Is each section clearly scoped to a distinct angle or subtopic?
Clear separation improves both retrieval and selection.
How to evolve existing content for AI without starting over
Most teams don't need to rebuild content from scratch. Updating existing content for today's landscape just requires a few structural changes.
Break content into logical units
Identify where natural sections exist and what question each one answers.
Split broad or mixed sections so each one resolves a single idea or query.
If a section covers multiple points, separate them into distinct sections.
Rewrite for answer-first clarity
Move the clearest version of the answer to the top of each section.
Remove lead-in language, qualifiers, or examples that appear before the answer.
Ensure the opening lines can be understood without relying on the rest of the page.
Strengthen structural signals
Make headings specific enough to reflect both the topic and the question being answered.
Use formatting (lists, short paragraphs, summaries) to make key points easier to scan and isolate.
Check that each section's purpose is immediately clear from its heading and first sentence.
Introduce distinct framing
Turn generic sections into clearly defined units, each framed around a single question or concept.
Ensure each section covers a distinct angle and does not repeat or overlap with others. This helps consolidate signal and makes it easier for systems to select and attribute the right passage.
The future of content design in AI-mediated search
AI systems are already reshaping how content is surfaced, and that shift will continue as answers become more personalized and draw from multiple sources.
As a result, page-level ranking matters less on its own. Content value is shifting toward contribution — how clearly a piece of content can inform, support, or shape an answer.
The content that performs best will be:
Structurally clear, with sections that are easy to identify and extract.
Modular, so individual passages can be selected and reused independently.
Distinct, with clearly defined ideas that don't overlap or compete internally.
Designed to be selected and used, not just indexed or ranked.
Content that meets these criteria is more likely to be surfaced, reused, and attributed as AI-mediated search continues to evolve.
Age of Empires 2: Definitive Edition and Age of Empires 4 are both being played at the Red Bull Wololo: Londinium tournament, with the finale set for today at the Royal Albert Hall. It's been a crazy weekend of Age gameplay, and I nearly missed the announcement for a new Age 4 DLC.
While backups continue to be essential, they no longer determine preparedness when attackers steal sensitive data and use exposure as the primary pressure point.
Here's how to watch I'm A Celebrity 2026 online for free and from anywhere as Ant and Dec head to South Africa. Streaming guide and watch free with this info
Trump's first and second terms have aligned with the release of both The Handmaid's Tale and The Testaments β but for the creators, nothing has changed.
Cliptude helps creators produce documentary-style videos quickly. It researches topics, writes scripts, sources stock footage and relevant A-roll, and assembles a full edit with motion graphics, maps, and timelines. It delivers premium AI voiceovers with natural pacing and exports ready files for YouTube, TikTok, and Instagram. Start from a prompt or script, then download the final cut or separate stems while keeping full rights.
For a long time, links were the primary signal of authority in search. If you wanted visibility, you built backlinks. If you wanted credibility, you earned placements. That still matters — but it's no longer enough.
In AI-driven search, authority is shaped by how often your brand is mentioned, cited, and clearly associated with a topic. Visibility comes from being referenced in AI-generated answers.
With that shift in mind, the goal is to create content that earns consistent brand mentions and citations — the signals that now drive AEO visibility.
The philosophy driving content that fuels AEO growth
In 2026, authority in organic discovery incorporates entity recognition.
Across Google, LLMs like ChatGPT, and features like AI Overviews, authority is reinforced through:
High-quality backlinks.
Brand mentions (linked or unlinked).
Consistent citations across trusted publications.
Clear entity associations (who you are, what you're known for, and what topics you "own").
Since LLMs synthesize information instead of ranking pages, you need repeatable, credible mentions across the web to strengthen your brand's likelihood of being cited or referenced in AI answers. Importantly, you also need to use your owned media to define your brand entity very clearly.
That makes building authority even more critical. Your content will now be battling with even more competition in the form of AI results in the SERP and AI-produced content from other publishers.
The TL;DR is that you need to establish a clear brand and, underneath that brand, create content that's so valuable that other experts, journalists, creators, and AI systems repeatedly reference your brand when they're discussing a topic core to your business.
The principles and formatting of AEO-friendly content
You'll use many of the same SEO principles as a base for AEO-friendly content. Content aligned with Google's helpful content guidelines — focused on value and user experience — appeals to the people (and LLMs) discussing these concepts and sourcing experts to validate their positions.
That said, to produce truly AEO-friendly content, you need to incorporate formatting that supports LLM extraction.
Key formatting principles include:
Clear definitions: Have short, clean definitions high on the page:
"X is…"
"Y refers to…"
Structured formatting:
Use descriptive H2s and H3s.
Employ bullet points.
Keep paragraphs short.
Include direct answers under question-based headers.
Explicit context:
Avoid vague pronouns and implied references.
Remember that LLMs perform better when context is explicit and self-contained.
The specific objectives for your AEO content to address
If you're solely focused on AEO, I'd approach your content with these objectives in mind:
Be highly citable: Include original data or perspectives a journalist or influencer would use (in media like podcasts, expert roundups, contributor columns, or co-marketing content).
Be highly quotable: Provide at least one clean, quotable insight.
Be specific: Answer specific questions an AI system would try to answer. You should be able to clearly articulate a question your content answers — and answer it directly in a dedicated section or paragraph.
Be clear: Define a topic in an easily extracted manner.
To address these objectives, it can be helpful to think beyond blog posts and ideate "reference-grade" assets.
Practical steps to build AEO authority with content
Here's how to turn those principles into a repeatable process for building AEO authority:
Research keywords where bloggers and journalists search for references (these keywords often include "statistics" or "reports"). Use Reddit, Quora, X, Ahrefs (the Matching terms report), and Exploding Topics as research sources.
From those keywords, build a list of topics around which your team has the expertise to share valuable insights and perspectives.
Research a list of writers and journalists who cover those topics.
Find expert resources (either internal or closely connected) and interview them to build a cache of content.
Refine and develop that content into contemporary insights using Google Trends and social listening, applying timing and a list of audience modifiers to heighten relevance.
Example: Get a list of tips from an expert targeted to help hay fever sufferers (niche audience/modifier) get a better night's sleep (core topic/target) during a particularly bad high pollen count period (relevance).
Pitch a group of writers and journalists who cover your theme and/or sub-theme on why this matters right now, and how it's different from other content they might find to reference.
If (or even before) those writers and journalists link to your content, follow them on their social channels to deepen your connection for future opportunities.
Writing for AEO isn't at odds with writing for humans. Even from its early days, AEO shared many of the SEO fundamentals derived from appealing to actual users.
That said, there are enough differences in the way LLMs extract and digest content (and the way users ask LLMs for information) that you need to keep specific nuances in mind in your content approach.
With a clearly defined brand on your owned media, and an understanding of the tenets of AEO and how to address them, you should have a good idea of how to leverage your team's expertise for greater visibility in the AI search landscape.
Intel has quietly added the Core Ultra 7 251HX to its Arrow Lake HX lineup, skipping any formal announcement. The chip simply appeared on the Intel website a few weeks after it surfaced in Lenovo Legion 5i 2026 and MSI Raider 16 HX listings. The 251HX is an 18-core, 18-thread part with 6 Performance cores and 12 Efficient cores, slotting between the Core Ultra 5 245HX and the Core Ultra 7 255HX. Compared to the 255HX, it drops two P-cores and loses two threads, but keeps the same 12 E-cores and 30 MB of Smart Cache. TDP range stays identical at 55 W base and up to 160 W maximum turbo power. Max Turbo comes in at 5.1 GHz, 100 MHz below the 255HX, but the E-core base clock actually jumps 700 MHz to 2.5 GHz, and the P-core base is up 500 MHz as well at 2.9 GHz. Memory support goes up to DDR5-6400.
The integrated graphics drop to three Xe3 cores clocked up to 1.8 GHz, down from four on the 255HX, which also trims AI performance from 33 TOPS to 30. Not a dramatic difference, but worth noting if NPU performance matters for a specific workload. As an endnote, the Core Ultra 7 251HX sits between the Core Ultra 5 245HX with its 14 cores in a 6P+8E layout and 24 MB of cache, and the 20-core Core Ultra 7 255HX and 265HX sitting above.
Steam is reportedly in the process of adding a "Frame Estimator" tool that can estimate your PC's performance before you purchase a game. As you know, Valve's Steam platform is the largest gaming platform in the world, with access to millions of PCs. The Steam Client application offers an option to include your PC in Valve's telemetry system, which processes data such as your PC's specifications and game information, including your library. Using these data points, Steam will estimate how many frames per second your PC can generate in any game, depending on your configuration. For example, for a specific CPU, GPU, and available system memory, the Steam Client will indicate whether a game can reach 60 FPS at 1440p using high settings, or whatever your preference is. We can only speculate at this point about what the feature will look like, as Steam is still refining it before the public beta release.
Additionally, Valve has already started asking users for anonymous FPS data collection about a month ago whenever they run a game. With this data pool, likely involving millions of participants, Valve aims to build a system that estimates your FPS output based on your specific PC configuration, without needing to run a game first. Reportedly, this feature will appear in the Steam Client and show how much performance your PC can deliver before you even purchase a game. This is a classic recommendation system that will indicate what your configuration typically delivers at specific game settings and resolutions.
The confirmation comes from an End-of-Service notice on Samsung's US website, which says the app will be discontinued in July 2026. It advises users to move to Google Messages to maintain a consistent Android messaging experience.
Rapidus' first plant, IIM-1 in Chitose, Hokkaido, has shifted from a construction site to a pilot line. In mid-2025, the company had activated the cleanroom, installed extreme ultraviolet tools, and began running test wafers through a two-nanometer gate-all-around process developed with IBM.
Multical ends double bookings for portfolio careerists, fractionals, multi-hyphenates, and consultants. It syncs your Google, Outlook, and Apple iCloud calendars so every organization sees your real availability across all accounts. It blocks conflicts in real time, lets you set custom rules and filters, and never permanently stores event content.
Use Multical to manage unlimited calendars at one price, create scheduling links for each client or role, and view, create, and edit events in a unified, mobile-friendly calendar. Control what details others can see, revoke access anytime, and keep credentials encrypted.
Since 2021, I've worked on more than 350 published guest posts. In that time, I've refined a repeatable guest posting outreach process that consistently drives approvals without ever paying for a placement.
Although guest blogging is becoming more difficult, the basics of personalized guest posting outreach remain the same. If your mindset is to create mutual value, this process will work for you in 2026 and beyond.
Step 1: Build your outreach list
Your outreach list is a collection of the websites you'll email to offer guest-written content. You can build your list in several ways.
The easiest way to find potential websites is by googling your niche alongside "write for us."
Plenty of reputable websites openly accept guest posts and have an established approval process you can find online. That's the exact approach I used to publish an article on G2's Learning Hub.
Alternatively, search the name of a prominent person in your niche and add keywords such as "guest post," "guest author," or similar. Chances are that if a website has published guest posts from someone in your industry, they'll be receptive to accepting guest posts from you as well.
Browse your competitors' backlink profiles with an SEO tool. In Semrush, Backlinks is one of the SEO tools under Link Building.
Once you've gathered a list of sites that potentially accept guest posts, check them against your website quality criteria.
Consider the website niche, top pages, organic traffic over time, countries where the traffic is coming from, authority score, and outgoing backlinks. You can also automate this step with the API of your favorite SEO tool.
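Automating the quality check can be as simple as scoring each prospect against fixed thresholds. The sketch below assumes you've already exported each site's metrics into plain dicts; the field names and cutoff values are illustrative assumptions, not any specific SEO tool's schema:

```python
# Filter guest post prospects against simple quality thresholds.
# Field names and cutoffs are illustrative, not a real tool's API schema.
MIN_AUTHORITY = 30
MIN_MONTHLY_TRAFFIC = 5_000

def passes_quality_bar(site: dict) -> bool:
    """Return True if a prospect meets the minimum quality criteria."""
    return (
        site["authority_score"] >= MIN_AUTHORITY
        and site["organic_traffic"] >= MIN_MONTHLY_TRAFFIC
        and site["niche_match"]  # tagged manually during list building
    )

prospects = [
    {"domain": "example-blog.com", "authority_score": 45,
     "organic_traffic": 12_000, "niche_match": True},
    {"domain": "thin-site.net", "authority_score": 12,
     "organic_traffic": 800, "niche_match": True},
]

shortlist = [s["domain"] for s in prospects if passes_quality_bar(s)]
print(shortlist)  # only example-blog.com survives the filter
```

Swapping the hardcoded list for an API export turns this into a batch filter, so only sites worth pitching reach the manual review stage.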
Step 2: Find the right contact person
Even the best guest post outreach will fail if you're writing to the wrong person.
Most people ignore emails that aren't relevant to them, and they rarely forward them to the right colleague.
That's why you need to do your homework. There's likely a specific department or person you should be addressing.
Here's how to find the right person through LinkedIn:
Open the company LinkedIn profile and select the People tab.
Type relevant keywords into the search bar to filter profiles. You're looking for the person who decides what content goes on the blog.
To do this, you can type "content" and browse the results for a content manager, content editor, or similar.
In smaller companies, you can search for "marketing" or "growth" to find the one-person marketing team.
For micro companies, your best contact person might be the founder or co-founder.
Use Apollo or Hunter to find the work email of the best contact you find.
Sometimes, you'll come across companies that have no listed employees on LinkedIn, or their emails are not available. In this case, your only option might be a generic email such as contact@ or support@. For micro companies or in certain niches (typically B2C websites), these emails can still work.
Verify all email addresses. Many outreach tools have built-in email verification features.
This step helps you protect your sender reputation and ensures your emails end up in the inbox, not the spam folder.
Step 3: Choose your outreach approach
There are two distinct ways to approach guest posting outreach.
Send out a generic email template with basic personalization
Ask whether the website accepts guest-written content. This way, you don't invest a lot of time upfront into every pitch, and your only focus is on building an outreach list.
As the emails aren't highly personalized (they usually just include the names of the person and the company), they generate a moderate reply rate.
To drive results with this approach, you need a large outreach list so you'll still get enough opportunities to work with at a 3% to 5% reply rate.
Hyper-personalize your emails
The email you send to company A offers something completely different than the email you're sending to company B. It takes a lot of time to research and tailor your pitch, but it also enjoys a higher reply rate (around 19%, in my experience).
This approach works best when you have a small outreach list or when you're pitching to prominent websites.
Step 4: Research the right topics
No matter your outreach approach, you usually need to pitch guest post topics. With basic personalization, you suggest topics only to the websites that reply to you. But with the hyper-personalized email approach, you propose topics in the first email you send.
Top-tier websites typically only accept specific types of guest articles. Find the website's editorial guidelines by googling "[company name] + guest post" and see their requirements.
Let's look at HubSpot as an example. They're only publishing marketing experiments, original data analyses, or super detailed tactical guides.
Similarly, writing a guest article for Zapier's blog requires specific experience. Generic topics won't make the cut.
Buffer takes things a step further by opening rounds for guest posting under specific themes.
Following each website's requirements increases your chances of landing a successful pitch. But most websites are open to a broader range of suggestions.
Some editors have a list of keywords or topics they want to target. They may share it with you so you can choose a topic to write on based on your expertise.
Alternatively, you can bring your own guest post ideas. When that's the case, you can use a keyword gap analysis to uncover relevant topic ideas.
How to do a keyword gap analysis with Semrush
Let's say you want to pitch a guest article to monday.com. Here's how to go about it:
Go to Semrush's SEO tools and select Keyword Gap. Add the URL of monday.com's blog along with the blogs of leading competitor brands, then click Compare.
Next, filter out the keywords.
Look only at keywords where competitors are ranking in the top 100 results.
Limit the keyword search volume to 2,000. This filters out broad, highly competitive terms that typically require long-form, comprehensive guides to rank.
In the keywords report, choose Missing to see keywords that competitors are ranking for but monday.com isn't. This is their keyword gap.
Look deeper into individual keywords that seem interesting and match your expertise.
For example, "what is time boxing" has 49% keyword difficulty.
In the search bar, add the domain URL to get a personalized keyword difficulty calculation. The goal is to find keywords for which your article has real potential to rank.
After selecting "monday.com," you see the site has low topical authority for "what is time boxing," and ranking for it would be very hard.
Looking at "cost management in project management," the Personal Keyword Difficulty is 60%. While that's still high, there's more to consider.
Check how your target domain compares against other websites ranking for this keyword.
Monday.com's Authority Score (AS) is 67, while the average in the top 10 is AS 52. Despite this being a competitive keyword, with the right content, monday.com has real ranking potential.
Double-check that the website isn't targeting this keyword already. Sometimes, the website already has content on a similar topic — they're just targeting a variation of your keyword.
To do this, use the "site:" search operator and add your keyword into Google search.
In this case, "task priority" came up in the keyword gap analysis. While monday.com doesn't have an article with this keyword in the H1, it does have very similar content on how to create a priority list or prioritize tasks.
Select three to four keywords that would make sense for the website to target. This ensures that the website editors will have enough options to choose from. If you put all of your eggs into one topic idea, it might not land. But three or four ideas increase your chances of success.
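At its core, a "Missing" keywords report is a set difference: terms competitors rank for that the target site doesn't. The minimal sketch below uses invented keyword lists; in practice, the sets would come from your SEO tool's export:

```python
# Keywords each site ranks for (top-100 positions), e.g. from a tool export.
# These lists are invented for illustration.
target_keywords = {"project management software", "kanban board"}
competitor_keywords = {
    "project management software", "kanban board",
    "what is time boxing", "cost management in project management",
}

# "Missing" keywords: competitors rank for them, the target does not.
gap = sorted(competitor_keywords - target_keywords)
print(gap)
```

Each keyword in the resulting gap is a candidate topic, which you'd then vet manually for difficulty, topical authority, and existing coverage as described above.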
Step 5: Define your extra value
Adding extra value is about what else you can bring to the table besides guest content.
Are you an established author in the site's niche?
Do you have a social media following that would be interested in this piece?
Are you running a relevant newsletter?
Or do you participate in a private community that cares about this topic?
Your extra value proposition is unique to your profile, and different value props can appeal to different websites.
For example, I have 11,000 followers on LinkedIn. When reaching out to a project management tool's blog editor, I can mention that 54% of my followers are founders, executives, or senior-level professionals in small to mid-sized companies — the very people responsible for managing processes and tools within their organizations.
If I'm personalizing this pitch for a lead-generation blog, I can highlight that 35% of my audience works in the marketing or advertising industry.
Step 6: Prepare your emails
When it comes to your emails, you need to consider the subject line, the email body, and follow-ups.
Mention the website name (but not the person's name).
Use title case (vs. sentence case).
On to the email body: Keep your emails concise and skimmable. Editors rarely have time for long messages.
Finally: follow-ups. Statistically, the more you follow up, the higher your overall campaign reply rate. Some people reply after the first follow-up, others after the third.
My recommendation? Limit follow-ups to two. A third one feels too pushy.
Step 7: Send your outreach emails
You've done a lot of preparation work. It's finally time to send your emails. Here's what to consider:
Send days
An analysis of 85,000 personalized emails showed the best day to send a cold email is Monday, closely followed by Tuesday and Wednesday. These are the days with the highest email open and reply rates.
Send times
The same study suggests you should be sending your emails between 6 and 9 a.m. PT (9 a.m. to 12 p.m. ET). But since most editors are based in different countries, aim to send your email before noon in their local time.
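The "before noon in their local time" rule is easy to automate with Python's zoneinfo module. A minimal sketch (the zone names and the 9 a.m. target are illustrative, not prescribed by the study):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def send_time_utc(recipient_tz: str, on: datetime, local_hour: int = 9) -> datetime:
    """Return the UTC datetime matching `local_hour` on date `on`
    in the recipient's time zone."""
    local = on.replace(hour=local_hour, minute=0, second=0,
                       microsecond=0, tzinfo=ZoneInfo(recipient_tz))
    return local.astimezone(ZoneInfo("UTC"))

# A 9 a.m. Berlin send in January corresponds to 08:00 UTC.
berlin = send_time_utc("Europe/Berlin", datetime(2025, 1, 15))
```

Feeding each editor's time zone into your scheduler this way keeps every email inside the morning window regardless of where they are based.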
Unsubscribe option
Always give recipients a clear way to opt out of more emails. Without an unsubscribe option, recipients may mark your message as spam. This can damage your sender reputation and reduce future deliverability.
Step 8: Track and adjust
Most outreach tools allow you to track open, reply, and success rates. Let's break down what each metric tells you.
Open rate is the percentage of recipients who open your email. Your subject line, preview text, sender name, and domain reputation directly influence this number.
Reply rate is the percentage of recipients who respond to your email. Exclude automatic replies (like out-of-office messages) to avoid inflated performance numbers. Your email body, topic relevance, and positioning drive this metric.
Success rate is the percentage of sent emails that result in a published guest post. Your topic selection, communication with the editor, and adherence to editorial guidelines are some of the aspects that influence success rates.
Track these metrics to identify weak points in your outreach campaigns.
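As a sketch, all three rates reduce to simple ratios over emails sent. The helper below is hypothetical, and the sample numbers are invented:

```python
def outreach_metrics(sent: int, opened: int, replied: int,
                     auto_replies: int, published: int) -> dict:
    """Open, reply, and success rates as percentages of emails sent.
    Automatic replies (out-of-office) are excluded from the reply rate."""
    return {
        "open_rate": round(100 * opened / sent, 1),
        "reply_rate": round(100 * (replied - auto_replies) / sent, 1),
        "success_rate": round(100 * published / sent, 1),
    }

metrics = outreach_metrics(sent=300, opened=180, replied=66,
                           auto_replies=9, published=54)
# {'open_rate': 60.0, 'reply_rate': 19.0, 'success_rate': 18.0}
```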
After you establish a baseline, run controlled A/B tests. Send different versions of your campaign to similarly sized groups and compare performance. Change only one variable at a time so you can clearly measure its impact.
Test ideas such as:
Subject line with an emoji vs. without.
First email with an extra value proposition vs. without.
Three suggested topics vs. four.
One follow-up vs. two follow-ups.
Small improvements across different elements of your campaign can compound into measurable gains in success rate.
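One way to check that an A/B difference is more than noise (the article doesn't prescribe a method; this is a standard two-proportion z-test, sketched with only the Python standard library):

```python
from math import sqrt, erf

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test. A small value
    (e.g. < 0.05) suggests the variants really differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(p_a - p_b) / se
    # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
```

With typical outreach volumes (a few hundred emails per arm), only fairly large lifts reach significance, which is another reason to change one variable at a time.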
Step 9: Build relationships with editors
I mentioned I've worked on more than 350 guest articles. But that doesn't mean they were all published on different websites. When you provide quality, you're very likely to build lasting relationships that result in ongoing work.
That's one reason I use keyword gap analysis to choose topics. I target keywords that the website has real potential to rank for. When an article brings meaningful traffic, it becomes much easier to pitch the next one.
To establish lasting relationships with editors:
Provide exceptional content: Structure the article around search intent. Create original value with custom visuals, expert quotes, and practical examples. Support the publisherβs internal linking by adding multiple links to other resources on their website. Ensure perfect grammar and spelling.
Support the article after publication: Promote it through your social media, newsletter, or community. When appropriate, link to it from other relevant content you write.
Be reliable and easy to work with: Communicate clearly, respect editorial guidelines, and meet every deadline.
My guest posting template with 18% success rate
Below is the guest post outreach template that has delivered the strongest results in my campaigns.
Between 2023 and 2025, I sent more than 300 pitches using variations of this template, primarily to content managers at B2B SaaS companies in the marketing and HR niches. It generated a 19% reply rate, and 18% of sent emails resulted in a published guest post.
Subject: Fresh content ideas for [Company Name]
Hi [First Name],
My name is [Your Name], and I'm the [Your Job Title] at [Your Company], a [short company description].
I'm reaching out to see if [Company Name] is open to guest contributions. I have extensive experience in [your expertise area], having worked on projects for brands such as [Brand 1] and [Brand 2].
Here are a few topic ideas I'd love to propose:
keyword: [primary keyword 1], US search volume: [search volume]
[Proposed Article Title 1]
keyword: [primary keyword 2], US search volume: [search volume]
[Proposed Article Title 2]
keyword: [primary keyword 3], US search volume: [search volume]
[Proposed Article Title 3]
To learn more about my background, you can view my [LinkedIn profile link] or review articles I've written for [Publication 1], [Publication 2], and [Publication 3].
If the article is a fit and gets published, I'd be happy to promote it to my community of [audience description or size].
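To avoid sending a pitch with an unfilled placeholder, the bracketed fields can be merged programmatically. A hypothetical helper (the placeholder syntax mirrors the template above; names and values are invented):

```python
import re

def fill_template(template: str, fields: dict) -> str:
    """Replace every [Bracketed Placeholder] with its value.
    Raises KeyError if a placeholder is missing, so nothing goes
    out still reading '[Company Name]'."""
    return re.sub(r"\[([^\]]+)\]", lambda m: fields[m.group(1)], template)

pitch = fill_template(
    "Subject: Fresh content ideas for [Company Name]\nHi [First Name],",
    {"Company Name": "monday.com", "First Name": "Alex"},
)
```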
Your author profile directly influences your approval rate.
If you're just starting out and don't have a portfolio of published work, editors will hesitate to approve your topics. Start by reaching out to small or mid-sized industry blogs.
As you build your portfolio, pitching becomes easier. Publishing on recognized industry websites and creating content that drives measurable results strengthens your credibility and improves your success rate over time.
Bottom line: Invest in your author profile. Thatβs your biggest asset for successful guest blogging.
Threat actors associated with Qilin and Warlock ransomware operations have been observed using the bring your own vulnerable driver (BYOVD) technique to silence security tools running on compromised hosts, according to findings from Cisco Talos and Trend Micro.
Qilin attacks analyzed by Talos have been found to deploy a malicious DLL named "msimg32.dll".
The IRGC released a video vowing retaliatory measures should the US attack its power facilities. Spokesperson Brigadier General Ebrahim Zolfaghari said the actions would entail "complete and utter annihilation" of power plants, energy infrastructure, and IT and communications facilities belonging to Israel and to companies with American shareholders.
Four people familiar with the matter told The New York Times that Musk is requiring banks, law firms, auditors, and other advisers on the IPO to buy subscriptions to Grok. Some of the banks have agreed to spend tens of millions of dollars on the AI and have begun integrating...
WD's leadership, now fully split from Sandisk, talks about its expectations for the future with industry analysts. We cover the full transcript of the press-only session right here.
YouTube's AI moderator acted on an errant DMCA takedown, affecting nearly every video that contained clips of the DLSS 5 trailer, including Nvidia's own YouTube video.
LLM Consensus sends your prompt to GPT-5.2, Claude Opus 4.6, and Gemini 2.5 Pro simultaneously. The models critique each other's responses, then combine the best elements into a single answer with a quality score from 0 to 1. This results in less hallucination and better answers for important questions. There are three modes: fast (~10s), balanced (~25s), and deep (~60s). The standard REST API is OpenAI-compatible. You can pay per request with USDC via the x402 protocol, or use API keys with prepaid credit packs and a usage dashboard.
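Because the endpoint is described as OpenAI-compatible, a request could be shaped like the sketch below. The base URL, model name, and the `mode` field are assumptions for illustration, not documented values; consult the service's docs for the real ones.

```python
import json
from urllib import request

BASE_URL = "https://api.example.com/v1/chat/completions"  # placeholder URL

def build_payload(prompt: str, mode: str = "balanced") -> dict:
    """OpenAI-style chat payload; `mode` (fast/balanced/deep) is assumed
    to be passed as an extra top-level field."""
    return {
        "model": "consensus",  # assumed model identifier
        "messages": [{"role": "user", "content": prompt}],
        "mode": mode,
    }

def send(payload: dict, api_key: str) -> dict:
    """POST the payload with a bearer key (not executed here)."""
    req = request.Request(
        BASE_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("Summarize the tradeoffs of microservices", mode="deep")
```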
The Witcher Online's 2.0 update adds trading, multiplayer boat/horse riding, and more. The Witcher 3: The Wild Hunt's Online mod has received its 2.0 update, transforming the mod into a more MMO-like experience. Now, the mod supports item trading, customisable horses, new emotes, and other feature upgrades. With the addition of horse sync and boat […]
Intel will continue to ensure production of its 14th Gen Core "Raptor Lake Refresh" desktop processors, 700-series motherboard chipset, and ensure continued availability of the Socket LGA1700 platform. In an interview with Club386, Intel's VP and GM of client segment technical marketing, Robert Hallock, said that Raptor Lake remains a big part of Intel's client segment strategy, and that these processors will continue to be "abundantly available." Hallock also hinted that Intel could get motherboard vendors to innovate boards with both DDR4 and DDR5 memory slots, so consumers can choose between the two memory types, picking cheaper DDR4 memory, and upgrading to DDR5 down the line.
"Raptor Lake is a big part of our strategy - I want to be very clear about that," says Hallock. "It's still really, really good, even with multiple generations of hardware from other vendors coming after it, so it's not going anywhere. I want people to understand that Raptor Lake will continue to be abundantly available," Hallock said. "You've also seen some new motherboard announcements that support both DDR4 and 5 on Raptor Lake, as kind of like a bridge between worlds for people," he added. "That is reflective of our overall confidence and expectations." Companies like ASRock are already innovating such boards, and we could expect more such products in 2026.
Seasonic teased the Japan-exclusive FOCUS ATX 3.1 Sakura Limited Edition power supplies. These are design variants of the FOCUS ATX 3.1 line of PSUs that feature a white housing with a cherry blossom print, Sakura-pink lettering, and a matching white 135 mm cooling fan. The PSU includes white, individually sleeved modular cables. It offers 80 Plus Gold switching efficiency, meets both the ATX 3.1 and PCIe 5.1 CEM specifications, and features a native 12V-2x6 connector, Seasonic's innovative OptiSink cooling design, and a segment-leading 10-year product warranty. The company didn't reveal pricing or availability information; the PSU is likely to be Japan-exclusive.
Canadian online retailers have started putting up early listings of the AMD Ryzen 9 9950X3D2 Dual Edition desktop processor, which was launched earlier this month, but without a price announcement. The processor will start selling from April 22, 2026. Ahead of this date, Canadian retailers, ShopRBC and PC-Canada, listed the processor. ShopRBC listed it for CAD $1,375, while PC-Canada had it up for CAD $1,374. It so happens that these prices convert to approximately USD $990, confirming the popular theory that AMD could give the 9950X3D2 Dual Edition an MSRP of $999, making it the Ryzen-branded desktop processor with the highest launch price, not counting Threadripper HEDT SKUs.
The Ryzen 9 9950X3D2 is designed to be a flagship 16-core/32-thread Socket AM5 desktop part with 3D V-Cache memory on both its 8-core "Zen 5" chiplets, for a combined L3 cache of 192 MB, and total cache of 208 MB. The chip should, in theory, offer better gaming performance than the regular 9950X3D, since game workloads could be executed on either of the two CCDs. Multithreaded productivity workloads that are heavy on streaming data should benefit from the large caches, too. The chip comes with a feisty TDP of 200 W.
AvailSim helps travelers find the right eSIM plan worldwide by aggregating offers from trusted providers. It normalizes plan details and calculates true price per GB so you can compare coverage, data, and cost side by side, with independent rankings untouched by affiliate payouts. You also get destination guides, a data calculator, and a device compatibility checker, then purchase directly from the provider.
AI Sign Designer transforms any uploaded image into a production-ready custom sign design with instant pricing. Customers upload a logo, photo, or sketch, the AI converts it to clean vector files, and they get a quote in 2 minutes instead of the usual 48-hour wait. It generates layered SVGs compatible with Illustrator, CorelDRAW, and FlexiSign. It handles neon, channel letters, lightboxes, and more with real-time visual previews. Sign shops embed it on their website to automate quoting from design to production files.
Germany's Federal Criminal Police Office (aka BKA or the Bundeskriminalamt) has unmasked the real identities of two of the key figures associated with the now-defunct REvil (aka Sodinokibi) ransomware-as-a-service (RaaS) operation.
One of the threat actors, who went by the alias UNKN, functioned as a representative of the group, advertising the ransomware in June 2019 on the XSS cybercrime forum.
AdaptlyPost is a social media scheduling and management platform that lets you create once and publish to Instagram, TikTok, YouTube, X, and more from a single dashboard. Plan with a visual calendar, queue posts, and track what's going live.
Use AI Image Studio and an AI caption co-pilot to generate visuals and copy, then automate posting via API or OpenClaw agents. Collaborate across workspaces and teams, and scale with plans for creators, businesses, and agencies.
PokerBotAI makes desktop poker bots powered by neural networks. PokerX Bot reads the table, decides, and clicks, working in fully automatic or advisory mode. It handles Hold'em, PLO, Short Deck, MTT, and OFC across 20+ rooms. The engine combines 300M real hand histories with 7B simulated scenarios. It also offers club management tools, a managed bot farm with profit sharing, and custom development.
Malwarebytes has successfully passed its first independent no-logs audit, opening up its VPN infrastructure to X41 D-Sec. Here's why this "white-box" assessment is a massive win for user privacy.
I have no idea how Blizzard pulled this off, but during World of Warcraft's Mythic raiding "Race to World First" esports event, U.S. team Liquid discovered live to thousands that the final boss had an insane, secret final phase.
Intel has recently released a new video showcasing its latest Texture Set Neural Compression (TSNC) technology, which delivers textures up to 18 times smaller while maintaining visual quality with little to no noticeable difference compared to the industry-standard compression. Using AI-based neural networks, Intel's graphics team processes input data from industry-standard BCn textures. These textures are compressed through an AI model encoder, encoded into latent-space values, and then decoded by a network decoder to decompress the textures. The result is output data textures that are up to 18 times smaller, with some quality loss at maximum compression settings. As with any neural technology, TSNC is trained on millions of standardized textures to create an AI model that can replace traditionally compressed textures in the BCn format. This results in new, much smaller game textures that load faster, use less VRAM, and perform better thanks to modern GPU technology.
There are several ways to apply TSNC neural compression, depending on the desired outcome, whether it's saving game installation size, reducing VRAM usage, or improving performance. Variant A, as Intel calls it, can achieve up to 9 times compression of the standard texture set, with little to no difference in visual qualityβalmost an unnoticeable drop. However, when the goal is maximum efficiency and requires up to 18 times texture compression, Intel offers Variant B of the TSNC neural network. This variant provides a significant performance boost, with the trade-off being a modest visual change. Using NVIDIA's FLIP tool to measure quality drop in generated images, Intel notes that Variant A experiences a 5% visual quality drop, while Variant B sees up to a 7% quality drop, which is noticeably more.
You can judge for yourself by viewing the comparison images below.
Intel has unexpectedly discontinued the official XeSS plugin for the Unity game engine, leaving the Unity ecosystem without XeSS frame generation, temporal super sampling, and antialiasing technology. This decision comes just a month after Intel released its official XeSS 3.0 software development kit for game studios, which includes features like multi-frame generation and the ability for XeSS 3.0 to use external memory heaps for GPU memory allocated by the game engine. This allows XeSS and the engine to operate on the same VRAM blocks instead of each reserving separate ones. However, it is unclear if XeSS 3.0 works with the latest Unity 6 engine, as official support has been withdrawn and the repository now serves as a public archive on GitHub. Similarly, AMD abandoned the Unity platform years ago, leaving only FSR 2.0 support since the last update. The focus now seems to be on other game engines like Unreal Engine 5 and its future versions, which are receiving all the latest advancements from both Intel and AMD.
Intel on GitHub: "Intel will not provide or guarantee development of or support for this project, including but not limited to, maintenance, bug fixes, new releases or updates. Patches to this project are no longer accepted by Intel. If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the community, please create your own fork of the project."
Walvee is an AI-powered travel platform where you describe the experience you want and get a full itinerary including flights, hotels, places, and day-by-day planning organized and ready to use. Unlike basic generators, Walvee includes a Concierge that stays with you during the trip: you can ask for restaurants near your hotel, find hidden gems, or add new places to your day on the fly.
You can also explore and "steal" trips from other travelers, customize them, and share your own routes. Walvee is for people who want more than a spreadsheet: they want a companion that adapts before and during the journey.
stackcost is a community-driven database of what it actually costs to run a startup. Founders publicly share their full stacks, monthly bills, and ROI so you can compare tools, spot overkill, and budget with real-world data. Browse verified receipts, leaderboards, and detailed stack pages, then create your profile to contribute your own stack and get discovered.
IslaApp helps small businesses build and launch bilingual websites quickly. Use an AI-powered editor to rewrite text, swap images, and adjust layouts, then publish with one click. Choose from over 75 industry templates in English and Spanish, connect a custom domain, and enjoy SSL, hosting, and built-in analytics. Sell online with an integrated store and ATH Móvil card payments, manage forms and databases, and run AI email campaigns with responsive support.
Story Generator helps you create full stories from simple prompts. Choose a genre and get structured narratives with chapters, character arcs, dialogue, and endings in seconds. The tool also builds outlines, plots, prompts, and titles and returns multiple versions per run so you can pick your favorite. Use it to overcome writer's block, plan scenes, or produce ready-to-read drafts, and export your results. Start free without a credit card, with support for visual storytelling and children's stories.
OPT-IMG is an AI-powered image SEO platform that turns raw images into SEO-ready, faster-loading assets. It automatically generates SEO-friendly filenames and alt text and compresses images in a single workflow. Users can batch process images, create responsive outputs, and export optimized assets at scale. OPT-IMG helps improve image search visibility, page speed, and Core Web Vitals for eCommerce sites, blogs, and content teams.
Google, Meta, Microsoft and Snapchat said they would continue to take steps to protect their platforms following the expiration of the EU ePrivacy Directive.
Musk told banks, law firms and other advisors they need to buy subscriptions to AI chatbot Grok before the June initial public offering, per the New York Times.
Co.Actor helps you grow your personal brand on LinkedIn by learning your tone of voice from your posts and every edit. It surfaces daily, relevant post ideas from industry news and your network, then drafts content that sounds like you and lets you schedule directly to LinkedIn.
Use Co.Actor solo or with your team: each member keeps a unique voice, while shared dashboards, notifications, and analytics reveal what resonates and when to post. Track views, engagement, and follower growth, and get data-backed suggestions for what to publish next.
SurveyJS is an open-source JavaScript form builder that lets you create a custom form platform within any web application. Unlike SaaS tools like Typeform or SurveyMonkey, it is not a hosted service and has no usage limits. You can create unlimited forms using a drag-and-drop interface and collect unlimited responses while keeping full ownership of your data. SurveyJS integrates directly into your application, giving you complete control over the UI and branding. Both the form builder and the forms can be fully white-labelled with no external logos or references.
Salow.IO is an AI deal intelligence platform that analyzes B2B sales conversations and returns a deal health score across 9 dimensions, a confusion diagnostic isolating buyer noise, friction, and seller inconsistency, and three ready-to-send closing paths per deal. It detects signals like stakeholder silence or pricing objections and maps them against sales cycles and buyer profiles for 25+ verticals. The platform learns each rep's writing voice from their sent emails via a three-tier Voice DNA engine, so every response sounds like them, not a chatbot. Reps can upload sales playbooks as Strategic Doctrine, enforced across all outputs.
Esseeoh helps creators turn long videos into SEO-optimized YouTube Shorts and auto-posts them with AI-written titles, keyword-rich descriptions, and niche hashtags. It streamlines discovery so your Shorts get recommended to new viewers without manual edits.
If you havenβt uploaded in three days, it finds a top-performing video, generates a fresh Short, and publishes it to keep your channel active and consistent with minimal setup.
In a recent video, creator ETA Prime showcases Red Magic's phone running multiple Windows games directly on Android. The device is powered by a Snapdragon 8 Elite Gen 5 SoC, paired with 24GB of LPDDR5T memory and 1TB of UFS 4.1 Pro storage.
Drift has revealed that the April 1, 2026, attack that led to the theft of $285 million was the culmination of a months-long targeted and meticulously planned social engineering operation undertaken by the Democratic People's Republic of Korea (DPRK) that began in the fall of 2025.
The Solana-based decentralized exchange described it as "an attack six months in the
A Democratic congressman had harsh criticism for Polymarket for allowing users to bet on the date the U.S. would confirm the rescue of Air Force service members shot down over Iran.
AI skeptics aren't the only ones warning users not to unthinkingly trust models' outputs; that's what the AI companies say themselves in their terms of service.
VitalStep provides guided fitness programs tailored to age, goals, and health conditions. Choose from 7-day, 21-session templates that build weight control, fat loss, and cardiovascular endurance with low to moderate intensity exercises safe for osteoporosis, diabetes, gout, and hypertension. Follow clear, gentle routines that improve circulation, support blood sugar and metabolism, and promote relaxation, so you can train confidently without aggravating sensitive joints.
Consider Nvidia's work on Neural Texture Compression (NTC). In its "Tuscan Wheels" demo, the company showed VRAM usage dropping from roughly 6.5GB with traditional BCN-compressed textures to 970MB using NTC, while keeping image quality close to the original.
Listings for the 9950X3D2 have started to pop up at many different vendors, with some even listing preliminary prices. We found the CPU going for roughly $1,000 at two Canadian retailers, and just a smidge below that at a UK-based website. For context, the standard 9950X3D with 3D V-Cache on only one CCD launched at $699.
The UK Ministry of Defence has confirmed that the DragonFire high-energy laser weapon will be installed on Royal Navy Type 45 destroyers by 2027, five years ahead of the original schedule.
PPTXMailMerge lets you generate data-driven PowerPoint presentations by merging Excel, CSV, or JSON with a PPTX template. Upload a data file and a deck, add smart placeholders, and create personalized slides for each row in seconds. Replace text, images, QR codes, tables, and charts using Excel-like addressing and full JSON traversal, then export a single deck or one file per row. Start free for small jobs or choose short 3-day plans for larger batches with secure processing.
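The per-row merge described above is the classic mail-merge pattern. As a plain-text stand-in (the `{column}` syntax and sample CSV are invented; the real product emits PPTX slides, not strings):

```python
import csv
import io

def merge_rows(template: str, csv_text: str) -> list:
    """Expand the template once per CSV row, substituting {column}
    placeholders with that row's values."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [template.format(**row) for row in rows]

decks = merge_rows("Hello {name}, your score is {score}",
                   "name,score\nAda,97\nGrace,99\n")
# One rendered string per data row
```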
Developed by a consortium including Nextcloud, Ionos, and Proton, Euro-Office builds directly on the open-source OnlyOffice codebase. It offers a word processor, spreadsheet editor, presentation tool, and PDF editor, all supporting Microsoft formats (docx, pptx, xlsx) and open standards such as ODF. Its preview version is already available on GitHub,...
On PlayStation Studios' official site, Sony has updated the main banner to prominently feature Ghost of Yotei and Intergalactic: The Heretic Prophet, while Demon's Souls Remake no longer appears in the lineup.
The LG 27GS93QE-B is perfect for gaming, productivity, and media consumption, thanks to its versatile feature set. It's a 27-inch 1440p OLED monitor with an MLA+ panel, so it's really bright at 1,300 nits (peak), and it has a 240 Hz refresh rate with support for both G-Sync and FreeSync, so it's really smooth, too.
ClauseGuard analyzes contracts to reveal hidden risks, flag unfair clauses, and extract key terms in seconds. Upload a PDF, Word doc, or text file and get a report with a risk score, red flags, plain-English summaries, and ready-to-send counter-language.
It saves your analysis history for deal comparison while keeping files private by not storing uploads. Use it to review NDAs, freelance agreements, and service contracts before signing.
Pelaris is an AI coach that knows your goals, fatigue, RPE, and progress. It builds and continuously evolves science-based, periodized training programs around your life, adapting in real time based on how you train. Not a static plan, but a coaching system that gets smarter every session.
Pelaris supports strength training, running, swimming, cycling, triathlon, CrossFit, and general fitness, using multiple science-based methods per sport. The AI coach remembers your injuries, preferences, and history, adjusting volume, intensity, and exercises as you progress. Built on Flutter, Firebase, and Vertex AI. Privacy-first by design.
ReadThai.Fun is a free Thai script learning tool created by a 20-year Thailand expat. It uses spaced repetition and interleaving to teach all 44 consonants and 32 vowels through 19 tiers ordered by real-world frequency. Each tier requires a 100% gate test before advancing, ensuring genuine mastery. The app includes OCR camera scanning for Thai signs and menus, a 13-language translator, personal dictionary builder, 3-level writing practice, and a text decomposer that breaks Thai words into consonant and vowel components. It works on any device as a PWA and is completely free with full functionality.
Your AI PC's Neural Processing Unit (NPU) is a useful piece of hardware, but are you making the most out of it? I dug up 7 apps that make good use of the NPU; which ones are you using?
Crimson Desert's 4K output mode launched first on PlayStation 5, with Xbox Series X initially left out. Pearl Abyss declined to comment on the delay, though Xbox has now received a fixed 4K output option.
Japanese researchers trained cultured rat cortical neurons to autonomously generate complex temporal signals using a real-time machine learning framework.
Wildcat Lake is Intel's upcoming family of low-budget and low-power CPUs intended for OEMs. We've already seen many leaks surrounding this family, but now a new product from Advantech has listed three SKUs in a datasheet for its MIO-5356 SBC. This confirms the specs from prior leaks and signals that a launch is due soon.
Receivly is an invoicing platform for small businesses and freelancers. Create professional invoices in seconds with auto-numbering and due dates, then see receivables organized as Sent, Overdue, or Paid on a clean dashboard. Automate payment reminders at 7, 14, 21, and 30 days, and keep customer details and default terms in one place for reuse. Mark invoices as paid in one click without bank connections. Your data stays in a secure, isolated workspace so you remain in control.
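The reminder cadence is straightforward to model. A sketch (function and parameter names are invented, not Receivly's API):

```python
from datetime import date, timedelta

REMINDER_OFFSETS = (7, 14, 21, 30)  # days after the invoice is sent

def reminder_dates(sent_on: date, paid: bool = False) -> list:
    """Dates on which payment reminders should fire; none once paid."""
    if paid:
        return []
    return [sent_on + timedelta(days=d) for d in REMINDER_OFFSETS]

schedule = reminder_dates(date(2026, 1, 1))
# Four reminder dates spread across January
```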
The latest Steam client update included an FPS data-gathering component in Beta, allowing the platform to monitor your framerates and compare them with your hardware.
BenQ's DesignVue PD2770U is a flexible and capable professional monitor with a 27-inch IPS panel, 4K resolution, wide-gamut color, HDR10, a built-in calibrator, software control, and premium build quality.
Sapphire introduced two new China-specific graphics cards, the Radeon RX 9070 GRE Pulse Pro and the Radeon RX 9060 XT Pulse S. The two feature a price-performance ratio that's highly optimized for the Chinese market, banking on the success of China-specific products from previous generations. The RX 9070 GRE Pulse Pro features a board design that's similar to that of the RX 9070 series Pure brand from the company, but colored black overall. The card appears high-end when installed, with a meaty triple-slot cooling solution and a board length of 32 cm. It uses a pair of 8-pin PCIe power inputs. Sapphire has given this card a boost clock of 2920 MHz and a Game clock of 2340 MHz. Display outputs include two each of HDMI 2.1b and DisplayPort 2.1a.
Carved out from the 4 nm "Navi 48" silicon, the RX 9070 GRE has 48 RDNA 4 compute units, for 3,072 stream processors. It gets 12 GB of 20 Gbps GDDR6 memory across a 192-bit wide memory bus. The RX 9070 GRE is hence positioned between the global RX 9070 and RX 9060 XT 16 GB. Next up is the Sapphire Radeon RX 9060 XT 8 GB Pulse S, a more compact version of the global RX 9060 XT 8 GB Pulse. While the global card has a 24 cm board length with 12.4 cm height, the China-specific Pulse S card is just 20 cm in length, with 12.2 cm height. Both cards are 2 slots thick.
A 42-core SKU from the upcoming Nova Lake-S CPU family has reportedly been upgraded to 44 cores by swapping the 6P+12E tile with an 8P+12E tile, allowing the chip to achieve symmetry across its dual-tile config. Those leftover 6P+12E tiles could now become locked variants with 144 MB of bLLC as a new 22-core SKU (6P+12E+4LPE).
Developers behind the open-source PlayStation 3 emulator RPCS3 claim that they've achieved a breakthrough in emulating the PS3's Cell Broadband Engine processor.
The Golden Saga Edition of the Redmagic 11 Pro is equipped with 24 GB of RAM and an even more robust liquid cooling system that can pull upwards of 45W while emulating Red Dead 2, delivering 50+ FPS. The phone costs around $1,700, but for that money, you're getting GTA V running at up to 100 FPS on a device that just happens to make calls, too.
The company behind the tiny box AI accelerator says that its macOS driver for Nvidia eGPUs has just been signed by Apple, making it legitimate software for Macs that no longer needs workarounds to work with the device.
How Are You is a 24/7 safety app for families with aging parents. It runs on an Android phone, learns a person's routine in seven days, and detects anomalies like long stillness, missed wake-up times, or leaving safe zones. When something seems wrong, it emails family with context and GPS coordinates; no app required for them. Data stays on the device, with secure, anonymous summaries used for AI analysis. Setup takes minutes, costs $49 with a 14-day guarantee, then $5/year.
UK workers now rank reliable technology nearly equal to pay, as persistent meeting failures disrupt productivity, despite increased investment in AI tools.
Happy Easter to those of you who celebrate. For whatever reason, Wednesday this coming week is crammed full of new releases. We kick off the week with a Finnish hardcore post-apocalyptic survival game, which is unlikely to be everyone's cup of coffee. Monday brings what might end up becoming a lawsuit with Nintendo, followed by a dark fantasy dungeon crawler, and by Wednesday we take a hard left with an action-adventure game that also includes racing. Thursday we veer right with a tree city builder, and the week ends with trying to end humanity. We've got a few more games that didn't quite make the list, most of them early access.
Road to Vostok / This week's major release / Early access / Tuesday 7 April
Road to Vostok is a hardcore single-player survival game set in a post-apocalyptic border zone between Finland and Russia. Survive, loot, plan and prepare to cross the border into Vostok, a permadeath zone where one mistake can end it all. Steam link
iMideo is an all-in-one AI platform that generates and edits videos from text, images, or existing footage. You can switch among 8+ leading models to compare results and quickly produce cinematic outputs. It supports text-to-video, image-to-video, video-to-video, and reference-to-video, and lets you enhance videos with effects, upscaling, background removal, and sound. Create talking avatars, face swaps, subtitles, and lipsync, and extend or animate shots with cloud processing and credit-based pricing.
ClawSkills is an open registry for AI agent skills. Creators upload AgentSkills bundles, version them like npm, and publish searchable entries indexed with vectors. Browse curated highlights, explore the latest drops, and roll back to prior versions when needed. Install any skill in one shot with npx clawskills@latest install to keep your agent's capabilities organized and up to date.
Most domain name generators produce bad results. Plenty Domains uses various AI models and layered prompting to produce truly great names. Try it and see for yourself! It features multiple projects, shortlists, Google research, and 1-click registration.
VeriRFP is an automation platform for enterprise security questionnaires and RFPs. When a buyer sends a 300-question security assessment, VeriRFP uses AI models running locally via tools like Ollama to draft answers based on your approved SOC 2 reports and existing security policies. This ensures every answer is accurate and cited without sending sensitive company IP to shared cloud AI models.
VeriRFP also acts as a unified Trust Center. You can securely share compliance documents with buyers through an NDA-gated portal, route complex questions to your engineering team for review, and export the final audit-ready package with one click.
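The drafting flow described above (answering questionnaire items from approved policy text) can be sketched in outline. This is a hypothetical illustration, not VeriRFP's actual pipeline: retrieve the most relevant approved snippet for a question, then build a prompt that a local model would answer from, citing only that source.

```python
# Hypothetical retrieval step: score approved policy snippets against a
# questionnaire item by simple keyword overlap. The policies and question
# below are invented examples for illustration.
def best_snippet(question, snippets):
    q_words = set(question.lower().split())
    return max(snippets, key=lambda s: len(q_words & set(s.lower().split())))

policies = [
    "Backups are encrypted with AES-256 and tested quarterly per SOC 2 CC6.1.",
    "Access is reviewed every 90 days; MFA is required for all employees.",
    "Incidents are triaged within 1 hour and disclosed within 72 hours.",
]
question = "Do you encrypt backups at rest?"
context = best_snippet(question, policies)

# The prompt would then be sent to a locally hosted model (e.g. via Ollama),
# so the policy text never leaves the machine.
prompt = f"Answer citing only this policy:\n{context}\n\nQuestion: {question}"
print(context)
```

A production system would use vector embeddings rather than keyword overlap, but the privacy property is the same: retrieval and generation both run locally.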
All the ways to watch Tour of Flanders 2026 live streams online and from anywhere, including FREE options, as favorite Tadej Pogačar defends his crown.