
AMD Software 26.3.1 arrives with FSR 4.1 and new game support

AMD’s FSR 4.1 upscaler is now available with AMD Software 26.3.1. With this release, the company has officially launched its FSR 4.1 ML upscaler. The new FSR release uses the same neural network foundation as Sony’s new and improved PSSR upscaler, which recently became available to PlayStation 5 Pro owners. This new driver […]

The post AMD Software 26.3.1 arrives with FSR 4.1 and new game support appeared first on OC3D.

Arctic's $1,400 AMD Strix Point fanless mini-PC hides under your desk — Senza AI 370 features Ryzen AI 9 HX 370 CPU, 32GB RAM, and 1TB SSD

If you've been looking for a mini PC that goes beyond just decluttering your desk — one that basically doesn't even remind you of its existence — then Arctic has got just the thing for you. The new Senza AI 370 features a powerful AMD chip with a decent iGPU, 32 GB of fast RAM, a 1 TB SSD, and plenty of ports. It costs almost $1,400, but at least it's fanless.

River – Video call visitors instantly to qualify, sell, and book meetings


River puts an AI sales employee on your website who video calls visitors the moment they’re curious. It personalizes conversations by industry and role, answers product and pricing questions, handles objections, and speaks any language to convert interest into action.

River qualifies buyers, books meetings or closes deals on the spot, follows up with documents and next steps, logs every conversation, and only routes high-intent buyers to your team, helping you capture more pipeline without slow forms or follow-ups.

Speagle Malware Hijacks Cobra DocGuard to Steal Data via Compromised Servers

Cybersecurity researchers have flagged a new malware dubbed Speagle that hijacks the functionality and infrastructure of a legitimate program called Cobra DocGuard. "Speagle is designed to surreptitiously harvest sensitive information from infected computers and transmit it to a Cobra DocGuard server that has been compromised by the attackers, masking the data exfiltration process as legitimate."

Walmart: ChatGPT checkout converted 3x worse than website

Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.

Why we care. This suggests agentic commerce isn’t ready to replace traditional shopping. Sending users to owned environments still drives higher conversion rates.

The details. Starting in November, Walmart offered about 200,000 products through OpenAI’s Instant Checkout. Users could complete purchases inside ChatGPT without visiting Walmart’s site.

  • Daniel Danker, Walmart’s EVP of product and design, said those in-chat purchases converted at one-third the rate of click-out transactions.
  • He called the experience “unsatisfying” and confirmed Walmart is moving away from it.

Goodbye, Instant Checkout. Instant Checkout was designed to let users complete purchases directly inside ChatGPT without visiting a retailer’s website. However, earlier this month, OpenAI confirmed it was phasing out Instant Checkout in favor of app-based checkout handled by merchants.

What’s changing. Walmart will embed its own chatbot, Sparky, inside ChatGPT. Users will log into Walmart, sync carts across platforms, and complete purchases within Walmart’s system.

  • A similar integration is coming to Google Gemini next month.

The WIRED report. Why Walmart and OpenAI Are Shaking Up Their Agentic Shopping Deal (subscription required)

MetaProvide – Add decentralized Swarm backups to Nextcloud


HejBit is a backup solution for Nextcloud that stores files on decentralized Swarm storage instead of a single server. Instead of relying on traditional cloud providers, it distributes encrypted data across the network, giving users another way to protect their files while keeping control over where they are stored.

We're currently running an early adopter program and looking for Nextcloud users who want to test decentralized backups in real environments. The goal is to gather feedback, improve the product, and better understand how decentralized storage fits into everyday Nextcloud setups.

ClawStreet – AI agents analyze market data, trade, and manage portfolios autonomously


ClawStreet is a platform where autonomous AI agents reason, plan, and trade stocks with zero human intervention. Agents register themselves, analyze real market data with 15+ technical indicators (RSI, MACD, Bollinger Bands, etc.), and execute trades autonomously. It is built on the OpenClaw framework, or you can roll your own agentic workflow. Paper trading only, so there is no financial risk.

Compatible agents include OpenClaw, NemoClaw, NanoClaw, ZeroClaw, Nanobot, PicoClaw, Clearl, Cursor Automation, or you can build your own with any language or LLM.

54 EDR Killers Use BYOVD to Exploit 34 Signed Vulnerable Drivers and Disable Security

A new analysis of endpoint detection and response (EDR) killers has revealed that 54 of them leverage a technique known as bring your own vulnerable driver (BYOVD) by abusing a total of 34 vulnerable drivers. EDR killer programs have been a common presence in ransomware intrusions, as they offer a way for affiliates to neutralize security software before deploying file-encrypting malware.

Nvidia GeForce Now gains a 90 FPS VR mode and several new games

Nvidia has upgraded GeForce Now with a 90 FPS VR mode and has added support for several new games Nvidia has upgraded its GeForce Now service for Ultimate Members, adding a new 90 FPS gameplay mode for users of VR headsets. This includes Apple’s Vision Pro, Meta Quest devices, and Pico devices. Users can create […]

The post Nvidia GeForce Now gains a 90 FPS VR mode and several new games appeared first on OC3D.

Intel Reportedly Readies a 10% Price Hike for Consumer CPUs

Intel has reportedly informed its major PC clients of a planned 10% price increase on its consumer CPUs. According to industry sources cited by ET News, the hike will affect Intel's Core Ultra family of processors, which power hundreds of millions of PCs worldwide. To absorb the increased material costs while maintaining positive margins, PC OEMs will likely have to raise prices and promote their AI PCs and premium devices more aggressively.

PC gamers have faced challenges over the past year, with memory and storage prices climbing rapidly, reaching exorbitant levels even for simple RAM kits. High demand from data centers has depleted memory and storage inventories months in advance. GPUs have also been affected: gamers have struggled to purchase them at MSRP, instead facing inflated prices due to the shortage of GDDR memory (VRAM) used in these cards. Now CPUs are joining this trend, with Intel targeting its consumer CPU sector first. The price increase will impact everything from pre-built systems and DIY PCs to laptops and other consumer CPU variants, with the effect on overall PC pricing depending on the CPU's share of the bill of materials. We are waiting to see how these changes play out at popular retailers like Micro Center, Amazon, Newegg, and others before drawing further conclusions.

Perplexity’s Comet for iOS uses Google Search by default

Perplexity’s new Comet browser for iOS defaults to Google Search. That’s because mobile queries often focus on navigation, local results, and transactions, where “Google does a much better job … than anyone else … including Perplexity,” according to Perplexity CEO Aravind Srinivas.

Comet for iOS. It includes Perplexity’s AI assistant directly in the browser. Comet for iOS also blends AI answers with standard search results. For many queries, you’ll still see a traditional results page.

  • You can ask questions by voice while browsing.
  • The assistant can summarize pages, answer questions, and take actions like drafting emails.
  • Deep Research features generate cited summaries and prep materials.

What Comet does. According to Perplexity, the assistant can act on your behalf. Examples include:

  • Summarizing articles and sharing outputs.
  • Researching people or topics across tabs.
  • Assisting with bookings or form fills.

What Perplexity is saying.

  • “The search experience in Comet iOS provides traditional search results pages for fast, local, and high-intent queries that are more common on mobile. Meanwhile, the Comet Assistant easily allows for more advanced knowledge and intelligence powered by the Perplexity answer engine. The intention is for users to have the smoothest browsing experience possible for the real use cases of iOS.”

Why we care. The near future of search increasingly looks hybrid, which means you’ll need to optimize for traditional Google results and AI-driven answers. This also reinforces Google’s dominance in commercial and local search while shifting competition to the AI layer.

The announcement. Comet is Now available on iOS

Microsoft Advertising simplifies automated bidding setup

Microsoft is changing how advertisers configure automated bidding, aiming to reduce complexity while keeping performance outcomes the same.

What’s happening. The platform is streamlining its bidding options by folding familiar targets like Target CPA and Target ROAS into broader automated strategies rather than standalone campaign settings.

Going forward, advertisers will choose between two core approaches: Maximize Conversions or Maximize Conversion Value, with optional targets layered on top.

Credit – Hana Kobzova of PPC News Feed

How it works. For conversion-focused campaigns, advertisers select Maximize Conversions and can optionally set a target CPA. For value-focused campaigns, they select Maximize Conversion Value and can optionally set a target ROAS.

Microsoft says the underlying bidding behavior has not changed — only the way advertisers configure it has been simplified.

Why we care. This update makes automated bidding simpler and more standardized, which lowers the barrier to using Microsoft Advertising’s performance tools at scale. By consolidating Target CPA and Target ROAS into broader strategies, it reduces setup complexity while still keeping key performance controls available as optional targets.

In practice, this means faster campaign setup, more consistent optimization behavior across accounts, and fewer structural differences between how advertisers manage conversion and value-based bidding.

What’s staying the same. Existing campaigns using Target CPA or Target ROAS will continue to run normally without any required updates. Portfolio bid strategies also remain unchanged.

The bigger picture. The change is part of a broader push to make automated bidding more accessible, reducing setup decisions while maintaining control over performance goals.

Bottom line. Microsoft is consolidating bidding options into simpler frameworks, keeping familiar optimization controls available but moving them into a more streamlined setup experience.

Google expands its Universal Commerce Protocol to power AI-driven shopping

Google is doubling down on the infrastructure behind “agentic commerce,” introducing new capabilities to its Universal Commerce Protocol (UCP) while making it easier for retailers to plug in.

Google says UCP — its open standard for connecting retailers to AI-powered shopping experiences — is getting new features designed to make online buying feel more like a traditional storefront, even when handled by automated agents.

What’s new. The latest updates focus on making shopping via AI agents more functional and flexible.

  • A new cart capability allows agents to add or save multiple products from a single retailer in one go, mirroring how a typical shopper builds a basket.
  • There’s also a catalog feature, giving agents access to real-time product data such as pricing, inventory and variants when needed. The goal is to make interactions more accurate and responsive.
  • Another addition is identity linking. This lets shoppers carry over logged-in benefits — like member pricing or free shipping — when using platforms connected through UCP, rather than losing those perks outside a retailer’s own site.
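
None of UCP's actual message formats appear in this piece, so purely as an illustration, here is a hypothetical sketch (every field name below is invented for the example, not taken from the UCP spec) of what a batched multi-item cart request with identity linking might look like:

```python
import json

# Purely illustrative sketch of a multi-item cart payload an agent might
# send to a single retailer in one go. Field names are invented; UCP's
# real schema is not described in this article.
cart_request = {
    "retailer": "example-store.com",
    "items": [
        {"sku": "SKU-123", "quantity": 2},   # batched in one request,
        {"sku": "SKU-456", "quantity": 1},   # mirroring a shopper's basket
    ],
    "linked_identity": "user-loyalty-token", # carries member pricing/shipping perks
}

print(json.dumps(cart_request, indent=2))
```

The point of the sketch is the shape, not the names: one call carries the whole basket plus an identity handle, instead of one round-trip per product.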

Why we care. This update accelerates the shift toward AI-driven, agent-led shopping, where platforms like Search and the Google Gemini app may choose, compare and even purchase products on users’ behalf. That makes product data quality — pricing, inventory and feeds — very important for visibility, while simplified onboarding and support from platforms like Salesforce and Stripe suggest rapid adoption, giving early movers a competitive edge.

Zoom out. UCP is designed as a modular system. Retailers and platforms can choose which capabilities to adopt, rather than implementing everything at once.

That flexibility is key as the industry experiments with how much control to hand over to AI-driven shopping experiences.

What Google is doing. Google plans to bring these capabilities into its own ecosystem, including AI-powered experiences in Search and the Google Gemini app.

The company is also working to expand adoption by lowering the barrier to entry. A simplified onboarding process inside Merchant Center is expected to roll out over the coming months.

Bottom line. UCP is evolving from a concept into a broader ecosystem play. By adding more capabilities and simplifying onboarding, Google is pushing to make agent-driven commerce easier to adopt — and harder to ignore.

Crystal Dynamics Cuts 20 Workers in Yet Another Round of Layoffs

Around the same time as Ubisoft's latest round of layoffs sees over 100 employees lose their jobs, Crystal Dynamics, the game studio behind Square Enix's recent Tomb Raider reboot trilogy, has announced that it will be laying off 20 employees across its development and operations teams. This is the fourth round of layoffs at the studio in the last year, and in a recent LinkedIn post announcing the layoffs, the studio explains that "we continuously take a hard look at our team structures to ensure they align with our long term studio goals," and calls the layoffs "necessary."

Some of the comments in the LinkedIn post are from affected employees, including an environment artist and a senior animator with 15 years of experience. Crystal Dynamics confirmed in the post that its "current projects" are heading into the next phases of development, and it reiterated that "Crystal Dynamics remains fully committed to the future development of our already announced Tomb Raider titles," suggesting that there have been no game cancellations as yet. Tomb Raider: Catalyst is slated for a 2027 cross-platform launch, but even Crystal Dynamics admits that layoffs like these "can cause concern amongst our community."

(PR) Death Stranding 2: On the Beach Now Available on PC

KOJIMA PRODUCTIONS, in collaboration with Nixxes Software, is proud to announce that DEATH STRANDING 2: ON THE BEACH is now available on PC through the Steam and Epic Games Store. Watch the brand-new launch trailer, edited by Hideo Kojima, here.

In this standalone sequel, DEATH STRANDING 2: ON THE BEACH puts a new range of tools, weapons, and vehicles at Sam's disposal. With his companions from DRAWBRIDGE by his side, Sam takes on a new adventure to connect Australia to the Chiral network. Beset on all sides by enemies, Sam will have to shoot, sneak, and sprint his way out of trouble, as well as survive natural disasters such as earthquakes, sandstorms and forest fires, and brave the ruinous Timefall as he strives to save humanity from extinction once again. The Social Strand System returns, connecting players from around the globe and allowing you to shape someone else's world while their actions shape yours.

Apple Delivers More Timely Security Updates for iOS, iPadOS, and macOS

Apple has designed a new framework to keep its operating systems secure and up to date with modern protection technology without relying on major OS updates for essential security improvements. Called Background Security Improvements, this framework spans all of the company's operating systems that power iPhones, iPads, and Mac computers, used by hundreds of millions of users globally. For example, whenever a security issue arises in the Safari browser, the WebKit framework, or any other internet-facing software, security becomes the top priority. Instead of waiting to bundle these security updates with a new version of an operating system, Apple provides ongoing patches between major updates to address any security issues.

The company is gradually enhancing the quality of life on its platforms, and this is another significant step forward. Background Security Improvements begin with iOS 26.1, iPadOS 26.1, and macOS 26.1, and will continue with future versions of these operating systems across supported devices. Interestingly, Apple also publishes Background Security Improvements by date, component patched, and CVE number, so you can understand what the update addresses, why it was necessary, and have peace of mind that your OS remains safe from the growing number of online exploits targeting unpatched systems.

(PR) High Fantasy RPG; Higher Replay Value: Valor of Man Is Now Available on PC

Today, the tactical roguelite RPG Valor of Man, developed by Legacy Forge and published by Numskull Games, launches on PC via Steam. Valor of Man combines tactical turn-based CRPG combat with the popular roguelite format to deliver a unique replay experience filled with meaningful choices and tactical complexity. With this 1.0 launch, the game introduces two new systems: Chaos Mode and Masteries.

In Chaos Mode, players can select from unlockable modifiers to create their own difficulty, allowing for more challenging or accessible experiences. Masteries are a collector's dream; the menu shows players which items, abilities, conditions and artifacts they've successfully beaten a run with, adding a meaningful long-term measure of achievement for all skill levels.

Ubisoft's Cost Cutting Continues With 105 Jobs Cut as Red Storm Relegated to Support Studio

Red Storm Entertainment, the long-standing game studio that previously worked on titles in the Tom Clancy, Ghost Recon, and Rainbow Six franchises, has been officially transitioned to a support-only studio by Ubisoft. Red Storm had been working on The Division Heartland until that was cancelled in 2024, even after having been through a number of playtests. According to sources who spoke to GamesIndustry.biz, Ubisoft will lay off as many as 105 workers at Red Storm, while the remaining staff will be dedicated to supporting Ubisoft's other studios in IT, customer relations, and development work on the Snowdrop engine.

The layoffs come after Ubisoft announced its recent cost-cutting measures and studio restructure, which would see the gaming giant reorganized into five "creative houses," all individually responsible and accountable for the games and properties under their management. The ensuing changes and announced cost-cutting measures, which aimed to save €200 million over two years, have already resulted in a number of layoffs at other studios and a massive strike across Ubisoft's international locations.

Save $350 on the cheapest RTX 5070 Ti laptop with an OLED display β€” Acer's excellent Predator Helios Neo 16S AI with 32GB of RAM is just $1,549 right now

Powerful laptops with impressive specs are often overpriced and only make sense when on sale. Thankfully, Acer has put its Helios Neo 16S AI, with an RTX 5070 Ti, 32 GB of RAM, a 240 Hz OLED display, and a Core Ultra 9 275HX processor, on a steep discount. It's likely the best Windows laptop you can find at this price.

US trade deficit hits a record $1.2 trillion as AI hardware imports surge under the Trump administration β€” massive demand for chips from Asia outpaces domestic production, fueling a 60% increase in imports in 12 months

AI-related imports across the computing and electronics sector have led to a big increase in the U.S. trade deficit, hitting a record $1.2 trillion in 2025, despite efforts from the Trump administration to reduce the gap.

Demi – Superhuman for Slack


Demi turns Slack into a command center. It auto-drafts customer replies from your team’s Slack history, surfaces answers before you need them, and delivers morning briefings and channel digests so sales and support stay on top of what matters. Connect it to your Slack workspace to search past threads, docs, and decisions, then review and send customer-ready responses without pinging engineering. Demi helps your team cut through noise and focus on closing deals while protecting your data.

HeyDriver – Scan a QR sticker on a car or luggage to message its owner - anonymously.


HeyDriver is a privacy-first QR code communication tool. Generate a unique QR sticker for your car, luggage, keychain, or wallet. When someone scans it, they can instantly send you a message — delivered to your email, no personal info exchanged, no app needed.

Lost luggage at the airport? Blocked driveway? Found someone's keys? Just scan and type. Currently in beta β€” join the waitlist at heydriver.app and get a free Premium account.

CISA urges companies to secure Microsoft Intune systems after hackers mass-wipe Stryker devices

The U.S. cybersecurity agency urged companies to prevent access to systems used for remotely managing their fleets of employee devices after hackers broke into a major U.S. medical tech giant and remotely wiped thousands of phones and computers.

Intel launches Precompiled Shader Delivery with its ARC GPUs

Intel launches its Precompiled Shader Beta for ARC graphics cards. With its Intel Graphics Driver 32.0.101.8626 for ARC graphics cards, Intel has launched its Precompiled Shader Distribution Beta. With this beta, users of ARC B-series (Battlemage) GPUs and Intel Core Ultra 3-series and 2-series CPUs with built-in ARC GPUs can benefit from precompiled shaders […]

The post Intel launches Precompiled Shader Delivery with its ARC GPUs appeared first on OC3D.

Arctic Launches Senza AI 370 Fanless PC With Ryzen AI 9 HX 370 and 32 GB of RAM

Arctic has announced an updated version of its fanless Senza PC with new internals, connectivity, and a more flexible design. The Senza AI 370 uses a familiar fanless design, with the heatsink integrated into the case of what would otherwise effectively be a mini PC, but now features the AMD Ryzen AI 9 HX 370 APU and its corresponding AMD Radeon 890M iGPU. That somewhat powerful iGPU in a silent form factor is the biggest differentiator compared to other desktops, but the Senza AI 370 is also designed to fit under a desk, virtually eliminating the need for the PC to be small in the first place. According to the Arctic online store, the Senza AI 370 retails for €1,199.99, and there are no optional barebones kits available.

Arctic markets the Senza as a silent gaming and productivity machine and claims that it can operate at temperatures as low as 50° in gaming workloads — although there were no specifics on the ambient temperature or the exact games and settings tested, so take that with a pinch of salt. To its credit, the Senza AI 370 does have 32 GB of DDR5X-8000 memory and a 1 TB PCIe Gen 4 ×4 M.2 SSD. The I/O situation is also interesting, with Arctic opting for a break-away front I/O panel module that features a 3.5 mm audio combo jack, a USB 3.2 Gen 1 Type-A port, and a USB4 Type-C port — this module is connected to the PC via a cable, so that the ports can be mounted near the front of the desk. The actual PC itself also features the following ports: 2× USB 2.0, 2× USB 3.2 Gen 2 Type-A, 1× USB4 Type-C, 1× HDMI 2.1, 1× DisplayPort 2.1, 1× 2.5 GbE, 1× DC in, and separate 3.5 mm audio jacks for mic in and audio out. Because the PC is designed to be mounted under a desk, what would traditionally be the rear I/O is also front-facing, which should make it easier to reach.

NVIDIA DLSS 5 Gets 84% Dislikes on YouTube as Backlash Grows

NVIDIA's latest DLSS 5 technology has faced significant community backlash, with its approval rating dropping considerably. NVIDIA's official DLSS 5 announcement video on YouTube has received an overwhelming 83.7% dislikes and only 16.3% likes: 16,107 likes and 82,515 dislikes (and counting) on a video with 1,527,915 views at the time of writing. Other videos published by the NVIDIA GeForce YouTube channel have also recorded surprisingly low approval from the community. The Resident Evil Requiem video scored only a 14.9% positive rating, while Starfield had an 18.2% positive ratio of likes to dislikes. Other demos, such as Hogwarts Legacy and EA Sports FC, saw positive ratings of 18.7% and 14.5%, respectively. The best rating now belongs not to a real game but to a tech demo: the Zorah Unreal Tech Demo, at a 37% positive ratio.

Gamers' reactions to the technology have turned sharply negative, even as NVIDIA CEO Jensen Huang famously claimed that gamers are "completely wrong" because these games offer massive programmability and controllability in how DLSS 5 is applied, keeping the artistic intent intact. However, according to game developers from both Capcom and Ubisoft who spoke to Insider Gaming, while the individual studios may have been involved in marketing DLSS 5, the teams who worked on these games were just as surprised by the results as the rest of the gaming community. A Ubisoft developer is quoted as saying, "We found out at the same time as the public," while developers at Capcom expressed similar sentiments, noting that it was surprising to see Capcom, which has generally been protective of its IPs when it comes to AI involvement, take part in the marketing for DLSS 5. Furthermore, the Capcom developers expressed concern about how DLSS 5 might change Capcom's approach to generative AI and its role in game development.

What patents reveal about the foundations of AI search

Every time a new large language model (LLM) drops or Google tweaks an AI Overview, the SEO industry loses its mind. We develop this weird collective amnesia, scrambling to optimize for features that were actually mapped out in patent offices 10 years ago. We’re so obsessed with the now and the next that we’ve stopped looking at the blueprints.

If you want to survive 2026, stop trying to be a futurist. Instead, be an archaeologist.

To actually deliver for our clients, we need a research framework that isn’t just reactive. It has to be a balance: Look back at the foundational patents to understand the rules, and look ahead to see how AI is finally being given the muscle to enforce them.

The archaeology of SEO

There’s a massive misconception that to understand AI search, you need to be a prompt engineer or read every new research paper from OpenAI. You don’t. The logic governing today’s magic is often math that was written a decade ago.

We can’t talk about patent research without honoring the late, great Bill Slawski. For 20 years, he was the SEO industry’s archaeologist. While everyone else was arguing about keyword density, he was reading dry, technical filings to predict exactly where we’re standing right now.

History proves his method worked.

The algorithm isn’t magic. It’s math. When a new feature drops today, the engineering blueprints were likely filed between 2007 and 2016. If you want to win, go read the old stuff.

Dig deeper: The origins of SEO and what they mean for GEO and AIO

Strategy vs. mechanics: From ‘strings’ to ‘verified things’

Don’t get buried in buzzwords. Categorize your learning into two buckets: “strategy” or “mechanic.”

For years, the industry talked about moving from strings to things (entities). But in 2026, that’s just the baseline. We’ve moved from strings to verifiable things. An entity is worthless if the AI can’t prove it’s real.

Think of it like building a house:

  • Semantic SEO is the architecture: It’s the vision. It’s making sure the meaning of your site actually matches what the user is looking for.
  • Entity SEO is the bricklaying: It’s using distinct nouns to build that vision so a machine can parse it.
  • Verification is the mortgage: This is the part most people miss. It’s turning those entities into findable, provable facts connected to a verified human. If you aren’t connecting your content to a provable human expert, you’re just adding to the noise.

AEO vs. GEO: Let’s stop using these interchangeably

The industry often uses AEO and GEO synonymously, but they require different content structures and serve different objectives.

Answer engine optimization (AEO)

AEO is for the “direct answer.” Think Siri, Alexa, or that single snippet at the top of the page. It’s binary. It’s rooted in those 2006 fact repository patents.

You need “confidence anchors.” These are unnuanced, structured facts. The engine isn’t “thinking,” it’s fetching. If your fact isn’t provable and anchored to a verified source, the engine won’t risk a hallucination by citing you.

Generative engine optimization (GEO)

GEO is for the “synthesis.” This is Gemini or ChatGPT search explaining how something works. It was formally defined by researchers at Princeton and Georgia Tech in 2023.

You need information gain. These engines don’t just want a fact; they want to see how Concept A affects Concept B. They’re looking for relationships and unique perspectives.

In short, AEO is about being the fact. GEO is about being the authority that the AI trusts to explain those facts.

Dig deeper: SEO, GEO, or ASO? What to call the new era of brand visibility in AI [Research]

The trap of forward-projecting: Why the ‘basics’ are still the ‘floor’

There’s a danger in becoming an SEO time traveler. If you spend all your time in the patent archives or stress-testing GEO relationships, you might forget that the AI still has to reach your content.

You can have the most verified, E-E-A-T-heavy content in the world, but if your site’s technical health is a mess, the confidence anchors will never weigh in.

The persistence of technical debt

Basic SEO requirements haven’t changed. The tolerance for ignoring them has simply disappeared.

  • Crawl budget and efficiency: If your site is bloated with zombie pages or redirect loops, you’re wasting the crawler’s time. LLMs aren’t just looking for content. They’re looking for the cleanest path to a fact.
  • Core Web Vitals (CWV): More than a ranking factor, it’s a user-utility requirement. If your site doesn’t load instantly, the AI won’t recommend it as a source in a GEO overview.

The headless promise (and reality)

Many of the frustrating technical SEO issues we’ve fought for years — like bloated JavaScript and poor Largest Contentful Paint (LCP) — are finally being solved by headless/composable architectures. By decoupling the front end from the back end, we can deliver the raw, lightning-fast data that answer engines crave while maintaining a high-end experience for humans.

But headless isn’t a “get out of SEO jail free” card. It solves the speed problem, but it introduces new risks around dynamic rendering and metadata delivery.

Whether you’re on a 20-year-old CMS or a cutting-edge headless build, today’s requirements are non-negotiable:

  • Clean URL structures: If the AI can’t deduce the hierarchy from the URL, you’ve already lost the semantic battle.
  • Internal linking (the nervous system): This is how you prove relationships between entities. If your internal linking is broken, your synthesis logic doesn’t exist.
  • Indexability: If the bot is blocked by a poorly configured robots.txt or a noindex tag left over from staging, the most brilliant “verified human” insights in the world are invisible.

You don’t get to play in the frontier of AEO and GEO until you’ve mastered the floor of technical SEO. Don’t let the shiny new objects make you forget the shovel work.
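The indexability point is easy to sanity-check programmatically. Here is a minimal sketch using Python's standard-library robots.txt parser; the rule, paths, and user agent are illustrative, not taken from any real site:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt with a staging rule accidentally left in place.
rules = """\
User-agent: *
Disallow: /staging/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Any well-behaved crawler (search bot or LLM fetcher) checks this
# before fetching a URL:
print(parser.can_fetch("GPTBot", "https://example.com/staging/page"))
print(parser.can_fetch("GPTBot", "https://example.com/blog/post"))
```

The first check comes back blocked, the second allowed — one stray `Disallow` line is all it takes to hide a section of the site from every compliant crawler.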

Dig deeper: Thriving in AI search starts with SEO fundamentals

The SEO time traveler checklist

Phase 1: The archive

  • The Slawski deep dive: Stop reading the latest "AI is changing everything" blog posts for five minutes. Go back to the SEO by the Sea archives. Search for Slawski's analysis of the Knowledge Graph and user context. You'll see the 2026 roadmap hidden in plain sight.
  • The E-E-A-T math audit: Check your assets against Patent 2015/0331866. Are you actually providing the contribution metrics (such as verifiable reviews) that the patent specifically asks for?

Phase 2: The laboratory

  • The verification pivot: Audit your entities. Are they just names on a page? Link them to a verified LinkedIn profile or a Knowledge Panel. If it's not verified, it's not an entity; it's just a string of text.
  • Schema stress testing: Don't just use a plugin and walk away. Experiment with nesting. Try nesting a Person inside a Service as the provider. It works — I've seen it trigger rich results when nothing else did.
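As a sketch of that nesting experiment, here is what Person-inside-Service markup can look like as JSON-LD, built as a Python dict. The service name, person, and LinkedIn URL are invented placeholders:

```python
import json

# Hypothetical JSON-LD: a Service whose provider is a verified Person.
# All names and URLs below are illustrative placeholders.
service = {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Technical SEO Audit",
    "provider": {
        "@type": "Person",
        "name": "Jane Doe",
        # sameAs ties the entity to a profile the engine can verify.
        "sameAs": "https://www.linkedin.com/in/janedoe",
    },
}

# Emit the payload for a <script type="application/ld+json"> block.
print(json.dumps(service, indent=2))
```

The point of the nesting is that the Person is not a free-floating string: it is structurally attached to the Service as its provider, which is exactly the entity relationship the engine is asked to trust.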

Phase 3: The frontier

  • The confidence anchor audit: Look at your top pages. Does every topic have a clear definition? [Entity] is [attribute]. If you're being vague, you're invisible to AEO.
  • The synthesis test: This is a quick one. Paste your article into an LLM and ask it to explain the relationship between your two main topics using only your text. If it has to go to the web to find the answer, you haven't built the relationship well enough for GEO.

The synthesis: Becoming the architect

The SEO time traveler isn't looking back because they're nostalgic. They're looking back because they want the blueprint. When you realize AEO is just the modern enforcement of a 20-year-old patent and GEO is just the evolution of semantic relationships, the chaos of AI updates disappears.

Stop optimizing for strings. Start optimizing for verified facts. Give the engine a fact it can't doubt, connected to a person it trusts, and a relationship it can't ignore.

The future of search wasn't written this morning — it was written years ago. You just have to be the one to actually build it.

Dig deeper: The future of SEO: Why optimization still matters, whatever you call it

References and further reading

On the evolution of fact-based search (AEO foundations)

On generative engine optimization (GEO foundations)

  • The GEO framework: Aggarwal, V., et al. (2023). GEO: Generative Engine Optimization. Princeton University, Georgia Institute of Technology, and the Allen Institute for AI. The definitive study on how LLMs cite and prioritize authoritative sources.
  • The Slawski legacy: Slawski, B. (Various). SEO by the Sea Archives. For historical context on Agent Rank, phrase-based indexing, and entity metrics.

Multi-location SEO strategy: Stop competing with your own content

Multi-location brands are investing heavily in content. But more content doesn't automatically mean more growth.

I keep seeing the same issue. Each individual location has a blog, and they all cover the same topics. Same keywords. Same structure. Same search intent. The goal is local visibility, but the result is often internal competition and diluted authority.

Building an effective content strategy for multi-location brands requires clarity around roles. What should live at the corporate level to build authority, and what should stay local to drive relevance and conversions? Without that alignment, brands risk competing with themselves instead of winning in search.

Where the strategy breaks down

Most multi-location content issues aren't intentional. They're often the result of growth without a clear content framework, or simply too many cooks in the kitchen without overall governance.

Corporate teams are focused on building brand authority and scaling marketing efforts. At the same time, local teams or franchisees want content that answers their customers' questions and lives on their own site, rather than sending users elsewhere. The assumption is simple: more content equals more visibility.

However, without clear ownership or strategic keyword targeting, overlap becomes inevitable. Similar topics are published across multiple URLs, and over time, this creates internal competition rather than building authority for the entire site.

What type of content belongs at corporate

In general, corporate should own the content that applies to the brand as a whole and build authority at scale. This includes blog content that targets broader informational queries and answers user questions, no matter where users are located.

Educational resources, industry insights, and evergreen topics perform best when consolidated in one place rather than duplicated across multiple URLs.

[Image: Mathnasium sample webpage]

Core service, product, and line-of-business pages should also be centralized. These pages define what the brand offers and typically remain consistent across markets. While location pages can reference and support this foundational content, they often don't need to be recreated at the local level unless they differ between locations.

Brand-level content, such as company history, leadership, mission, and differentiators, should also sit at the corporate level. These elements reinforce credibility and should be standardized across the organization.

Dig deeper: Local content playbook: From service pages to jobs-to-be-done pages

What type of content belongs at the local level

When it comes to local content, focus on what's relevant to that specific market. This includes geo-specific content such as:

  • Location landing pages with unique, customized copy.
  • Localized metadata.
  • Location-specific FAQs, relevant structured data (e.g., reviews, LocalBusiness).
  • In some cases, region-specific service variations.
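For the structured-data bullet, here is a minimal LocalBusiness sketch as JSON-LD built from a Python dict. Every business detail below is invented for illustration:

```python
import json

# Hypothetical LocalBusiness markup for one location page, combining
# NAP data (name, address, phone) with location-specific review signals.
location = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Acme Plumbing - Austin",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Austin",
        "addressRegion": "TX",
        "postalCode": "78701",
    },
    "telephone": "+1-512-555-0100",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "132",
    },
}

print(json.dumps(location, indent=2))
```

Because the address, phone, and ratings are unique to each location, this is markup that genuinely belongs at the local level rather than on a corporate page.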
[Image: Tend location page]

On location pages specifically, there are additional opportunities to highlight uniqueness:

  • Location-specific testimonials and reviews.
  • Team bios.
  • Owner messages or stories.
  • Events or awards.
  • Community partnerships.
  • Descriptive content about the location or service area.
  • Location-specific imagery.

These elements can live on a single, well-built location page or expand into a microsite structure (pages living under a subfolder) when it makes sense for the business. Remember, the goal of these pages is to strengthen relevance, target geo-modified and local intent queries, and ultimately drive conversions.

One common concern with location pages is duplicate content. The question often becomes: How much duplicate content is acceptable? Instead of focusing on a percentage of unique versus shared content, teams should focus on what's most useful for the user.

Typically, content that doesn't need to be unique across every location includes:

  • Brand boilerplates.
  • Core service lists.
  • Service or product descriptions.
  • Standard calls to action.
  • Legal disclaimers.
  • Navigation.
  • Trust signals.
[Image: Neighborly Done Right Promise copy]

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

Common SEO risks of a faulty content strategy

When content production lacks clear governance, it can lead to a range of issues that affect organic visibility and crawl efficiency. Over time, this can cause inconsistent rankings, diluted authority, and missed opportunities to convert traffic into leads.

Keyword cannibalization

Keyword cannibalization occurs when multiple pages across a site target the same keywords and search intent. Instead of strengthening rankings, those pages end up competing against each other in search results, and, in some cases, may not get indexed at all.

For multi-location brands, this often happens when individual locations publish similar blog content. For example, a plumbing brand might have multiple location sites with blogs, each publishing a post titled "Tips to fix a leaky faucet," creating several URLs targeting the same informational query.

A more strategic approach is to consolidate that topic into a single, strong corporate-level post. This would allow the brand to serve as the authoritative source, build backlinks, answer users' questions effectively, and strengthen the site's overall credibility.
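A quick way to surface this kind of overlap is to group a keyword map by target query and flag any query with more than one URL. A sketch with invented data (the URLs and queries are illustrative):

```python
from collections import defaultdict

# Illustrative keyword map: (URL, target query) pairs, e.g. exported
# from a content calendar or rank tracker.
keyword_map = [
    ("/austin/blog/fix-leaky-faucet", "tips to fix a leaky faucet"),
    ("/dallas/blog/fix-leaky-faucet", "tips to fix a leaky faucet"),
    ("/blog/water-heater-maintenance", "water heater maintenance"),
]

urls_by_query = defaultdict(list)
for url, query in keyword_map:
    urls_by_query[query].append(url)

# Any query targeted by multiple URLs is a cannibalization candidate.
cannibalized = {q: urls for q, urls in urls_by_query.items() if len(urls) > 1}
for query, urls in cannibalized.items():
    print(f"{query!r} is targeted by {len(urls)} URLs: {urls}")
```

In this sample, the leaky-faucet query is flagged twice, while the water-heater post stands alone — exactly the pattern that suggests consolidating into one corporate-level page.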

Google choosing the 'wrong page'

When multiple pages on a website are targeting the same or overlapping keywords, search engines have to determine which one to rank, and sometimes it's not the page you intended.

On a multi-location site, that may mean a local blog ranks nationally for a topic that would be better suited to live on the corporate site and build broader brand authority. While the page may be relevant to the query, it may not guide users clearly to the next step, leading to customer confusion or bounces.

It may also cause users who aren't in-market to leave the site after absorbing the information because there's no clear next step for them, or because they only see information about services in Austin, Texas, while they're located in Cleveland, Ohio.

Instead, consolidating authority on a single, well-ranking page that clearly directs users to take action, whether that means finding their nearest location or submitting a form, would be more beneficial for the brand and users.

Crawl inefficiencies

Publishing multiple blog posts on the same topic, especially when the answer doesn't vary by location, can result in duplicate or low-value content. While these pages may be regularly crawled due to internal linking, they often never make it into the index.

At scale, this can become a bigger issue, especially for sites with many locations that publish similar informational topics. For a site with dozens or hundreds of locations, having similar blog posts across those locations can create crawl bloat, where search engines may spend time and resources crawling repetitive or low-impact URLs rather than more high-impact pages.

Diluted link equity

When similar content exists across multiple URLs, backlinks and internal links are split among pages instead of consolidating authority on a single strong page. Rather than building momentum around a single piece of content, link equity is distributed across competing versions.

For multi-location brands, this can weaken overall ranking potential. Consolidating authoritative content at the corporate level allows links, authority, and trust signals to compound, strengthening the entire domain and supporting location pages more effectively.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

Creating a plan: How corporate and local can work together

After defining roles, move to governance. Multi-location brands need a shared plan for ownership, keyword targeting, and team collaboration.

Before new content gets created, the right questions need to be asked, such as:

  • Is this topic location- or region-specific, or is it broader for any consumer?
  • Would publishing this for only one location add value to those specific customers?
  • Would publishing it across multiple locations make sense?
  • Who should own the keyword? The brand or a specific location?
  • Who does it make sense for the information to come from?

Clear keyword mapping and a centralized content calendar can prevent overlap before it starts. When teams understand their roles, content supports overall growth instead of competing internally.

Content collaboration also creates opportunities to strengthen E-E-A-T signals for the site as a whole. Corporate can cover broader educational topics while drawing on real expertise and experience from local teams.

For example, a roofing company might want to write a post about how often homeowners should replace their roofs. The topic is universal. However, the answer could vary by region due to factors such as the material used in that area or the weather.

The blog could include quotes from franchise owners or team members across different regions to provide insights into regional factors, such as heat and humidity in the South versus harsh winter weather in the North.

This would allow corporate to own the topic and give locations the opportunity to provide their unique expertise and experiences. Plus, linking to relevant location pages can reinforce context and create stronger internal linking throughout the site.

Another option would be to create a local hub within the blog.

Volume isn’t always the right strategy

Search may be changing, but many of the fundamentals remain the same. High-quality, well-structured content that genuinely helps users is what earns visibility.

With Google's AI Overviews and large language models pulling from authoritative sources, content that clearly answers questions and reflects real expertise is even more valuable. Pages created solely to scale across multiple locations — without adding unique value — are unlikely to perform consistently, and can even hurt a site in the long run.

Content shouldn't be treated as a volume game. More pages alone won't drive growth. What matters is planning, ownership, and alignment.

When corporate and local teams build a shared content strategy, it helps turn content into a growth driver rather than just more pages on a site.

Your SEO maturity score doesn't measure what you think it does

The Visibility Governance Maturity Model (VGMM) is about something most SEO programs lack: clear ownership, documented processes, and decision rights that keep your work from being undone by teams who don't understand it.

So how do you actually score that?

Each domain uses a bank of governance questions tailored to the business. They're not about how SEO is executed. They're not about tools. And they're not an audit.

What VGMM questions are designed to reveal

VGMM questions go to managers and the C-suite — the people who should know about governance but often don't. Meanwhile, you (the SEO practitioner) actually know whether standards are documented, whether QA is in place, and whether processes exist.

VGMM diagnoses organizations where SEO knowledge lives in practitioners' heads, rather than in documented, governed processes. If VGMM surveyed only practitioners, it would measure whether you know what to do (you do). But governance maturity measures whether the organization can sustain capability when you're on vacation, when you get promoted, or when you leave.

Questions go to managers because governance gaps show up as:

  • "I don't know the answer to that."
  • "I'd have to ask Sarah."
  • "We used to have a process, but it's not enforced anymore."
  • "Each team does it differently."
  • "That's documented somewhere, I think?"

When managers can't answer governance questions, that's the signal. It means processes aren't institutionalized.

Dig deeper: Why most SEO failures are organizational, not technical

The SPOF reality check

Single point of failure (SPOF) questions can cap your organization at Level 2 maturity until they're resolved.

Here are some examples of SPOF questions:

  • "If [key person] left tomorrow, could the organization maintain SEO standards without them?"
  • "Is SEO knowledge documented in a way that's transferable to new team members?"
  • "Are there at least two people who understand how [critical system] works?"

Right now, you're probably the SPOF. You're the person who knows where all the bodies are buried, how the redirects work, why that weird canonical setup exists, and what breaks if someone changes X. That feels like job security. It's actually a job prison.

When VGMM identifies you as an SPOF:

  • Leadership realizes your knowledge needs to be documented.
  • You get resources to create documentation.
  • You get approval to train other people.
  • You get your own tools, training, and conference budgets. (Yay!)
  • Your expertise becomes institutional, not personal.
  • You can take a vacation without disasters.

The organization can't move past Level 2 until SPOF conditions are cleared. This forces leadership to address hero-dependency.

How domain scores become VGMM score

Each domain model (SEOGMM, CGMM, WPMM, etc.) produces a maturity score based on its own question bank. Here's how they roll up:

Step 1: Domain assessment

Each domain asks 30-60 governance questions tailored to that area. Questions are behavior-based, not opinion-based:

  • "Do you think SEO standards are important?" (opinion)
  • "Are SEO standards documented and approved by [role]?" (behavior)

Step 2: Weighted scoring

Answers are weighted based on impact. Not all governance failures are equal:

  • Missing documentation = lower weight.
  • No ownership for critical decisions = higher weight.
  • SPOF identified = can cap maturity level regardless of other scores.

Step 3: SPOF constraint

If SPOF conditions exist, the domain score maxes out at Level 2 (emerging) even if other governance is strong. You can't be structured (Level 3) when capability depends on one person.

Step 4: Domain aggregation

Domain scores average into the overall VGMM score with adjusted weighting based on:

  • Your industry (ecommerce weights performance governance higher).
  • Your business model (SaaS weights content governance higher).
  • Your complexity (international weights workflow governance higher).

Step 5: Final maturity level

The overall VGMM score maps to maturity levels:

  • Level 1 (0-30%): Ad hoc/unmanaged
  • Level 2 (31-50%): Aware/emerging
  • Level 3 (51-70%): Structured/defined
  • Level 4 (71-90%): Integrated/coordinated
  • Level 5 (91-100%): Optimized/sustained
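The five steps above can be sketched in a few lines. This is a minimal illustration, not VGMM's actual implementation: the domain names, weights, and scores below are invented, and the real question banks and weightings are not published here.

```python
# Level bands from Step 5: (ceiling %, level).
LEVEL_BANDS = [(30, 1), (50, 2), (70, 3), (90, 4), (100, 5)]

def maturity_level(score_pct: float) -> int:
    """Map a 0-100% score onto the Level 1-5 bands."""
    for ceiling, level in LEVEL_BANDS:
        if score_pct <= ceiling:
            return level
    return 5

def capped_score(score_pct: float, has_spof: bool) -> float:
    # Step 3, the SPOF constraint: an unresolved single point of
    # failure caps the domain at Level 2 (top of the 31-50% band).
    return min(score_pct, 50.0) if has_spof else score_pct

def vgmm_score(domains, weights):
    # Step 4, aggregation: weighted average of capped domain scores
    # (weights are assumed to sum to 1.0).
    return sum(capped_score(score, spof) * weights[name]
               for name, (score, spof) in domains.items())

# Illustrative ecommerce-style weighting with one hero-dependent domain.
weights = {"SEOGMM": 0.4, "CGMM": 0.3, "WPMM": 0.3}
domains = {
    "SEOGMM": (72.0, True),   # strong, but one person holds it up
    "CGMM": (55.0, False),
    "WPMM": (48.0, False),
}

overall = vgmm_score(domains, weights)
print(f"{overall:.1f}% -> Level {maturity_level(overall)}")
```

With these sample numbers, the SPOF cap drags a 72% domain down to 50%, pulling the weighted total to roughly 50.9% — the cap, not the raw scores, decides the level.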

Why questions change between models

Domain questions adapt to the maturity model being used.

SEOGMM questions focus on:

  • Technical SEO governance (schema, redirects, crawl management).
  • Content optimization standards.
  • Performance monitoring and alerts.

LVMM questions focus on:

  • Location data governance across distributed sites.
  • Google Business Profile management and ownership.
  • Review response workflows and accountability.
  • NAP (Name, Address, Phone) consistency.

IVMM questions focus on:

  • Market-specific SEO governance across countries.
  • Translation workflow and quality controls.
  • Local compliance and regulatory requirements.
  • Cross-market coordination and escalation.

Same governance principles, different operational contexts. An ecommerce company doesn't need LVMM. A restaurant chain with 500 locations absolutely does.

Dig deeper: SEO's future isn't content. It's governance

Why you can't (and shouldn't) compare scores

VGMM scores are internal quality metrics, not competitive benchmarks. A 62% score doesn't mean you're ahead of another organization at 58%. Here's why.

Weighting varies by business model

  • Ecommerce company: Performance governance weighted 30%.
  • Information publisher: Content governance weighted 35%.
  • Service company: Workflow governance weighted 25%.

Domain combinations vary by organization

  • Organization A: SEOGMM + CGMM + WPMM + IVMM (international).
  • Organization B: SEOGMM + CGMM + WPMM + LVMM (multi-location).

Not comparing apples to apples.

Organizational context changes what scores mean

  • Startup at 45% with 10 people = impressive, mature for size.
  • Enterprise at 45% with 500 people = serious governance gaps.

Strategic priorities shape the score

  • Organization prioritizing organic visibility: SEOGMM weighted higher.
  • Organization focused on technical debt: WPMM weighted higher.

The only meaningful comparison is your organization against itself over time:

  • Q1 2025: 42% (Level 2)
  • Q3 2025: 58% (Level 3) ← Progress
  • Q1 2026: 61% (Level 3) ← Sustained improvement

Use VGMM to answer:

  • Are we improving quarter over quarter?
  • Which domains are holding us back?
  • Where should we invest in governance?
  • Are SPOF conditions getting resolved?

Don't use VGMM to answer:

  • Are we better than Competitor X?
  • What's the industry average score?
  • Should we publicize our score?

What VGMM scoring means for you

As an SEO practitioner, this scoring approach protects you.

You're not being blamed

When governance assessment reveals gaps, managers are answering questions about organizational capability. They're not evaluating your individual performance. The assessment asks, "Does the organization have documented standards?" not "Is the SEO person doing a good job?"

SPOF detection is your escape hatch

When SPOF questions flag that the organization depends entirely on you, leadership sees it as an organizational risk — not as proof you're valuable. They can't move to Level 3 until they fix it, which means resources for documentation, training, and knowledge transfer.

Weighted scoring highlights systemic issues

When content governance scores low, but SEO governance scores high, it shows other domains aren't holding up their end. This redirects leadership attention to where governance actually needs strengthening.

Progress tracking shows your impact

When your organization moves from Level 2 to Level 3 over two quarters, you have concrete evidence that governance investments are working. This isn't "traffic went up 15%"; it's "organizational capability improved measurably."

Dig deeper: SEO execution: Understanding goals, strategy, and planning

The difference between hero work and sustainable SEO

VGMM's scoring approach is designed to:

  • Diagnose organizational capability gaps without blaming individuals.
  • Make your implicit knowledge visible as institutional risk.
  • Force leadership to address hero-dependency.
  • Track progress in ways that make governance investments defensible to finance.

The assessment focuses on whether the organization can sustain your work without you. That's the difference between being an indispensable hero (exhausting) and being a strategic professional whose expertise is institutionalized (sustainable).

Researchers reach superconductivity at ambient pressure, record high temperature β€” milestone of -122Β°C reached by using pressure quenching, still 140 degrees off room temperature target

Researchers from the University of Houston managed to increase the critical temperature for superconductivity by about 18 K at ambient pressure, but at βˆ’122Β°C, they are still 140Β°C away from the target.

RendrKit – Design API that lets AI agents generate images instantly


RendrKit is a design API built for AI agents. Your agent sends a JSON request with text and brand colors and receives a professional PNG in under two seconds. There’s no need for DALL-E or prompt engineeringβ€”just 69 deterministic templates that render pixel-perfect images every time.

RendrKit works with LangChain, CrewAI, OpenAI GPT Actions, MCP (Claude/Cursor), and n8n. You can use it via Python SDK, Node.js SDK, or plain REST, and a free tier is included.

View startup

Tracium – Track AI agents and costs with a single line of code


Tracium is a developer-first observability layer for AI systems. With a single line of code, it monitors agents and models in real time, tracing every request end-to-end across tools and steps while tracking token spend, latency, and total cost. It captures and classifies errors, supports per-tenant analytics, and lets you compare prompts, models, and routing with live A/B versioning. Use drift detection to spot shifts in inputs and outputs before performance degrades, and manage everything across customers, workspaces, and environments.

View startup

(PR) The KiiBOOM Phantom98 Lite Blends Style and Function

More than just a way to type, the KiiBOOM Phantom98 Lite is an upgrade for the user's entire desktop experience. With its considered layout, keycap design, premium typing feel, and seamless multi-device flow, this keyboard transforms everyday input into a daily joy, becoming the natural centerpiece of the workspace.

Thoughtful Colorways and Details
The Phantom98 Lite continues KiiBOOM's commitment to aesthetics, debuting with three themed colorways: Green Rainy Frog, Foggy Translucent, and Pink. Each curated color combination is designed to evoke a distinct atmosphere. Beyond the colorway, the textural contrast between the UV-glazed case and the PBT dye-sublimated keycaps adds depth to both the look and feel. Practicality is equally prioritized, with the magnetic nameplate discreetly storing the 2.4G receiver, ensuring style and function live in perfect harmony.

AI Mode is Google’s next ads engine β€” and it already knows how to monetize it

As conversational search gains traction, the bigger question isn’t who has more users, but who can monetize them.

Google enters this phase with a massive advantage: mature ad systems, deep advertiser adoption, and decades of optimization. Early AI Mode signals point to a measured rollout.

The panic phase is over

After a period of panic within the company, Google’s built-in advantages, coupled with massive capital expenditures, have helped it regain ground on category leader ChatGPT in LLM search.

In December 2025, Google’s own code red became OpenAI’s code red.

The dust will continue to settle, and analysts have different takes. But one signal stands out: in a major validation, Apple has chosen Google to power its own AI.

It was perhaps premature to assume Google Search would simply lose to ChatGPT on product. That was the consensus at the start of 2025. Google shares fell about 30% from peak to trough before rallying 130%. Today, the company is valued at roughly $3.6 trillion, just behind Apple.

Why monetization will decide the winner

Why did Google’s recent progress in LLM conversational queries β€” in the form of AI Overviews and AI Mode β€” have such a large impact on the company’s valuation in such a short time?

Ultimately, it comes down to visibility of financial projections. In a company with so much to defend, Google’s CFO and leadership team needed to determine whether shifts in user behavior β€” in how search works and how it makes money β€” would weaken the business model or reinforce it.

Net-net: Google before the shift: huge. Google after the shift: ditto.

Google stock price. The market changed its mind.

Visibility β€” in the sense of financial planning, not in the SERP β€” means a great deal to Google’s advertisers, too.

A large proportion of your annual digital advertising budget is likely allocated to Google. You also still care about how you appear in organic results and, increasingly, how your company appears in AI Mode, ChatGPT, Claude, and similar environments.

β€œI’m fine with 30% less of my business coming in from Google, and figuring out lots of complicated ways to replace it,” … said no advertiser ever.

How monetization will play out in AI search

The competition between monetization models in LLM conversations β€” especially between the two leaders, ChatGPT and Google’s AI Mode β€” will play out differently from the broader race for overall user share. There are several moving parts to keep an eye on:

  • Overall assumptions about ad formats and β€œhow to monetize.”
  • Pace of rollout.
  • Whether users and public opinion recoil at ads.
  • Advertiser success rates based on performance measurement.
  • Advertiser adoption, including adoption by the agency ecosystem.
  • Platform targeting options.
  • Advantages of fuller-funnel ad journeys and data collection.
  • Privacy, safety, policies, and enforcement.
  • An all-encompassing consumer brand vs. a better mousetrap.
  • And a few other factors.

Right now, OpenAI is at a critical moment because it’s still so early in its monetization. It’s still testing an inefficient auction model confined to a small group of large advertisers. (Some ads from its pilot have already been spotted in the wild.) It may be some time before more mature tools and reporting emerge.

Most recently, OpenAI brought ad platform Criteo (often used for retargeting) on as a partner. The Trade Desk, the world’s largest non-Google DSP for programmatic, is also in the mix. Some observers have speculated about deeper partnerships or even an acquisition of The Trade Desk, though that seems unlikely.

In any case, outsourcing inventory to programmatic partners is a pragmatic step in OpenAI’s monetization strategy. It also underscores how early the company is in building a scalable ads business.

Despite a broad rollout with partners, OpenAI is stepping back from β€œcheckout in chat” integrations after limited adoption from both merchants and consumers. When your primary competitor has a 25-year head start, the learning curve is steep.

So does it make sense now for advertisers to lean into evolving Google user behavior and figure out how to ride the wave?

AI Mode considerations for Google advertisers

Expect the transition to more AI Mode sessions β€” and eventual monetization β€” to be smoother than initially anticipated. If you’re an advertiser, AI Mode need not equal panic mode.

How do these LLM sessions look to users? Obvious to you and me, but likely less so for many searchers.

Depending on how you search, AI Overviews may appear above other results on the SERP. That’s becoming a natural extension of Google Search sessions.

But that’s not the real conversational layer. The LLM workflow happens in AI Mode. How often users go there remains to be seen.

It’s improving quickly. Unlike ChatGPT, Google AI Mode downplays how it finds information, whether it is β€œreasoning,” and which model is being used. The experience feels relatively seamless.

It’s still early, but ads are already appearing in some cases. The key question is how this evolves, and what advertisers should be paying attention to.

The key areas to watch are:

  • Extent of monetization.
  • Different ways to monetize.
  • Advertiser control and campaign types.
  • Reporting.
  • Funnel stage.

1. Extent of monetization

AI Mode is in a popularity contest and a price war with ChatGPT. Google will likely try to grind down competitors in LLM conversations by monetizing lightly and gradually. Perplexity and Anthropic, for their part, are completely shunning ads.

An ad-free AI Mode results page. We’re going to see a lot of this.

The result will be less ad volume in this space than you might expect. It may also increase the commercial value of organic visibility in LLM-driven results, leading to renewed focus on content and reputation fundamentals.

Forget ad campaign FOMO, then. It will be interesting to place ads alongside AI-driven sessions, but don’t break the bank. Implement, watch, and learn at your own pace.

2. Different ways to monetize

Experienced advertisers know there are a few ad formats to consider in any situation like this. The main ones are text ads triggered by keywords or similar signals, rendered in a reasonably native format, and feed-based Shopping-type ads.

Another way to make money is to allow direct checkout β€” to take a cut of transactions. As noted above, OpenAI is backtracking on this approach, though not eliminating it entirely. How important it will be for Google merchants (and Google itself) remains to be seen.

Google’s experience likely allows it, again, to play the long game, study the data, and bring partners and advertisers along for the ride, on an impressive scale.

Recently, Loblaw inked an integration deal with OpenAI. A week later, it made a similar deal with Google.

3. Advertiser control and campaign types

In terms of execution, we’ll want to be on the lookout for which campaign types in Google Ads make your ads eligible to show in AI Mode.

You can learn everything you want about how ads will show in AI Overviews in Google’s help files. Unsurprisingly, text and shopping campaigns from Performance Max, standard shopping, and keyword campaigns make your ad eligible to show in AI Overviews.

Google says less about AI Mode in its documentation, for now.

Our agency recently received a Google deck outlining a β€œShopping Expansion” beta. There’s little mention of AI Mode, though one table, in a subtle way, refers to both AI Overviews and AI Mode.

My expectation is that Google will gradually ease users into AI Mode and test ads sparingly. Even if ads appear in a small share of sessions β€” say 0.5% β€” that will still generate significant data and feedback.

Advertiser control will likely be even more limited than it is today. In the world of feed-based ads, you have some levers, but the massive machine-learning systems that control matching are held by Google and shaped by the real-world behavioral ecosystem.

To a lesser extent, that’s also how keyword matching works. Micromanagers won’t be too comfortable, but the impact of the ads could still be powerful, especially with data-driven attribution.

Here’s hoping new signals, new reporting breakouts, and new levers become available to advertisers. Namely: audiences including cool personas; demographics; novel larger buckets around life stages; novel characteristics we haven’t even dreamt of yet, such as their language ability level or aspects of how they interact with the LLM.

4. Reporting

The real question is: will reporting be transparent and insightful? We need to at least be able to look at all available metrics for ads that showed in AI Mode specifically. Time will tell.

Microsoft seems to be the first out of the gate with AI-conversation-specific reporting breakouts. We expect no less from Google and are impatiently awaiting further guidance on this front β€” primarily on what kind of reporting will be directly available in the Google Ads interface.

It would be easy for the casual observer to blindly believe that somehow, you’ll never be eligible to show up in AI Mode or AI Overviews unless you adopt certain Google Ads campaign types. There’s a lot of rhetoric around AI Max.

I’d advise advertisers to do their own research and run their campaigns to suit themselves. Hint: AI Max isn’t the only magical gateway to AI-using users and might not even be a good or appropriate one for many advertisers.

Once reporting is beefed up, you’ll want to know how well the AI-specific inventory is doing, however your campaigns wind up serving there.

5. Funnel stage

But that leads us to a wrinkle. Although ads appearing astride AI Mode conversations could certainly be low-funnel (think Shopping ads in high-intent situations), much of the opportunity here is thematic. Your company may now enjoy new opportunities to associate itself with higher-order thinking, new audience definitions, and new intent characteristics.

This opportunity probably comes to your door dressed up as β€œlower ROAS.” It may be tempting, therefore, to shy away.

That’s a mistake.

Why?

As happened when everyone started using mobile phones, that’s where the consumer will be. Ugly early numbers shouldn’t blind us to the imperatives that come with scale.

When the funnel moves, everything moves

Midsized to larger advertisers should step back and reimagine how they approach growth and market impact. There are meaningful opportunities for companies to align more closely with their audiences.

This has little to do with AI Max, and everything to do with how LLM-driven research works. Compare how publishers have traditionally assembled consumer personas β€” often from fragmented behavioral signals β€” with the much richer context that can emerge from ongoing interactions with an LLM.

A net shift up-funnel could follow. Imagine a world where a significant share of Google search sessions takes place within conversational experiences. Your ads will need to show up there, where appropriate. If that happens, your funnel β€” and your competitors’ β€” will move with it.

Will you be ready?

Kioxia announces new Super High IOPS SSD that helps accelerate AI workloads on Nvidia GPUs β€” 25.6TB drive provides more GPU-accessible memory for faster data access

Kioxia has developed a new AI SSD designed to provide a secondary cache for Nvidia AI GPUs. The new drive uses the manufacturer's SLC-based XL-Flash and delivers over 10 million IOPS.

8BitDo launches $40 Nintendo 64-inspired wireless controller with 2.4 GHz connection β€” dedicated wireless receiver even works with the original N64

8BitDo has released the 2.4 GHz version of its popular 8BitDo 64 controller; it ships with a Retro Receiver inside the box that can even be connected to an actual N64 console. The receiver can also be bought separately to work over a Bluetooth Low Energy (BLE) connection with most 8BitDo first-party controllers, and other third-party ones, too.

SoloDeskPad – Prepare for MTD, chase late payments, and reduce admin


SoloDeskPad prepares UK sole traders for Making Tax Digital and cuts admin so you can focus on work. It checks your MTD readiness, schedules quarterly HMRC reminders, and keeps records tidy with a mileage and expense logger. It also helps you get paid with automated late-payment chasers and creates UK-law-aware contracts from a short form. Join the waitlist to be ready before the 6 April 2026 deadline.

View startup

How Ceros Gives Security Teams Visibility and Control in Claude Code

Security teams have spent years building identity and access controls for human users and service accounts. But a new category of actor has quietly entered most enterprise environments, and it operates entirely outside those controls. Claude Code, Anthropic's AI coding agent, is now running across engineering organizations at scale. It reads files, executes shell commands, calls external APIs,

'The fights are another level': I asked the cast of The Madison what to expect from the season 1 finale β€” and their answer is giving me Yellowstone flashbacks

Don't get it twisted: new Taylor Sheridan show The Madison has nothing to do with Yellowstone. But the 'fights' the cast promise in the final season 1 episodes have me thinking otherwise.

Optiscaler delivers improved FSR 4 version for AMD RDNA 2 GPU users

The Optiscaler community has done what AMD couldn’t: bring FSR 4 to RDNA 2. The community has come together to create an improved build of AMD’s leaked FSR 4 INT8 code. With this new release, FSR 4.0.2b, users of RDNA 2 graphics cards can now use AMD’s improved upscaler with significantly less ghosting. Furthermore, […]

The post Optiscaler delivers improved FSR 4 version for AMD RDNA 2 GPU users appeared first on OC3D.

(PR) Opera GX Gaming Browser Lands on Linux After Community Demand

Opera GX is now on Linux. The gaming browser from Norwegian company Opera now brings its signature performance controls, gaming integrations, and unparalleled options for customization to the platform. Demand for a Linux version of Opera GX has hit a breaking point across gaming subreddits, Discord and Linux forums, with gamers and developers consistently asking for the browser to support the platform in public communities and other forums. With this release, Opera GX delivers what many in the community have been waiting for: a gaming browser that aligns with Linux's privacy-first mindset while still meeting the high-performance expectations of modern gamers.

"PC gaming has long been associated with a single dominant platform, but that's changing. Bringing GX to Linux users - who are renowned for the control they like to exert over their tools - means gamers and developers can manage browser resources, customize their setup, and keep their system performing exactly the way they want," said Maciej Kocemba, Product Director, Opera GX.

(PR) Strong AI Momentum to Drive 24.8% Growth in Foundry Revenue in 2026

TrendForce's latest research on the foundry industry reveals that continued investment in the AI arms race by North American CSPs and AI startups will keep demand for AI processors and supporting ICs strong in 2026. Global foundry revenue is projected to grow 24.8% YoY to approximately US$218.8 billion, with TSMC expected to post the largest increase of around 32% YoY.

Demand for advanced nodes will continue to be driven by AI GPUs from companies such as NVIDIA and AMD. Meanwhile, North American CSPs, including Google, AWS, and Meta, and AI startups such as OpenAI and Groq, are accelerating the development of their own AI chips. Many of these designs are expected to enter volume production and begin shipping in 2026, becoming key drivers for 5/4 nm and more advanced process technologies.

Counter-Strike 2 Changes Decades-Old Magazine Reloading Rule

If you are a long-time Counter-Strike player, you know how magazine reloading works. Whenever you reloaded in a Counter-Strike game, any leftover ammunition from your magazine would be returned to your reserve supply. However, Valve is now changing this rule after decades of Counter-Strike gameplay. With the latest update, Counter-Strike 2 is introducing a change where reloading will discard your remaining magazine, essentially depleting your ammunition supply. This alters what used to be a standard procedure for gamers. Even after firing a single bullet, players would reload to keep their supply high in case of nearby combat. However, this has now changed in CS2 to make the experience feel more realistic and to introduce higher stakes for both sides. Careful planning, ammunition purchasing, and balancing will now better represent a real-world scenario.
Valve: When you reload in CS2, the leftover ammo in your magazine is dumped back into an essentially endless reserve supply. And so the decision to reload has never offered significant trade-offs: in a safe position with enough time, you might reload after firing a single bullet, or half a mag, or after firing down to empty, and the rest of the round would be unaffected. We think the decision to reload should have higher stakes, so in today's update reloading has been redesigned. Now, when you reload, you'll drop the used magazine and discard all of its remaining ammo. Instead of 'topping off' your weapon with a few bullets, a new full magazine will be taken from the reserves whenever you reload.
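The mechanical difference between the old and new reload rules can be sketched in a few lines. Magazine size and ammo counts here are illustrative, and this is a model of the rule as described, not Valve's code:

```python
# Sketch of the CS2 reload rule change described above. Magazine size and
# counts are illustrative; this models the rule, not Valve's implementation.

MAG_SIZE = 30

def reload_topoff(mag: int, reserve: int) -> tuple:
    """Legacy behavior: leftover rounds return to the reserve pool."""
    reserve += mag                 # partial magazine goes back to reserve
    take = min(MAG_SIZE, reserve)
    return take, reserve - take

def reload_discard(mag: int, reserve: int) -> tuple:
    """CS2's new behavior: the partial magazine is dropped, rounds and all."""
    take = min(MAG_SIZE, reserve)  # the `mag` leftover rounds are simply lost
    return take, reserve - take

# Reloading a half-full magazine (15/30) with 60 rounds in reserve:
print(reload_topoff(15, 60))   # (30, 45): nothing wasted under the old rule
print(reload_discard(15, 60))  # (30, 30): 15 rounds discarded under the new rule
```

The trade-off Valve describes falls out directly: under the old rule, reloading early costs nothing; under the new one, every early reload burns whatever was left in the magazine.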

Intel Supplies Core Ultra 200HX Plus in Limited Quantities to Laptop Makers

Laptops powered by Intel's latest Core Ultra 200HX Plus mobile processors won't arrive all at once but will instead be released in stages, according to PC World. Intel is supplying its OEM partners with these chips in waves. Some OEMs are receiving the chips first, while others will have to wait weeks before their shipments arrive. For example, Lenovo and Razer are the only OEMs that can immediately ship their new laptops based on the Core Ultra 200HX Plus chips. Customers of other brands, such as Dell, will have to wait until the end of March, while MSI laptops will ship in the second quarter of this year. Perhaps the most delayed is ASUS, whose laptops won't hit the market until late May, when ASUS fans and customers will finally get the new "Arrow Lake-HX Refresh" CPUs.

When considering the reasons for such limited availability at launch, we need to examine Intel's supply chain for the "Arrow Lake-HX." Since TSMC manufactures this generation, Intel's capacity allocation is limited to what the company set months ago. As a result, Intel is supplying these CPUs in waves as they come out of TSMC's fabs. If the process were internal to Intel, the company would likely manage manufacturing more easily with a predictable supply. If the launch involved a server-grade processor or something with a higher margin, Intel would likely allocate more TSMC capacity beforehand. However, since this generation serves the lower-margin mobile market, the current allocation is working adequately, just with a bit of a delay. Below is a complete list of design wins that Intel has secured from OEMs and its partners for the Core Ultra 200HX Plus processor family.

Intel's new feature can improve game loading times by up to 3x β€” Precompiled Shader Delivery comes to Arc Xe2 and Xe3 GPUs following DirectX SDK release

Following in Nvidia's footsteps, Intel has now officially adopted Microsoft's Advanced Shader Delivery to make shader compilation much faster in games. Intel is calling it Precompiled Shader Delivery, and it's available on a range of Arc GPUs right away, supported in 11 games at launch, with more likely to follow. AMD is now the only company that hasn't officially embraced this feature.

BabyMealBot – Send recipe links to WhatsApp to get baby-safe versions


BabyMealBot lets you send any recipe link or screenshot to WhatsApp and returns a baby-safe version tailored to your child's age and allergies. It extracts ingredients, steps, and tips from TikTok, Instagram, YouTube, and recipe websites, then saves meals to an organized, searchable cookbook. Create grocery lists with one tap and share access with family so everyone stays in sync. Start with three free recipes, then choose simple plans for personal or family useβ€”no app download required.

View startup

DarkSword iOS Exploit Kit Uses 6 Flaws, 3 Zero-Days for Full Device Takeover

A new exploit kit for Apple iOS devices designed to steal sensitive data has been wielded by multiple threat actors since at least November 2025, according to reports from Google Threat Intelligence Group (GTIG), iVerify, and Lookout. According to GTIG, multiple commercial surveillance vendors and suspected state-sponsored actors have utilized the full-chain exploit kit, codenamed DarkSword

ExactOnce – Single-use actions with atomic guarantees


ExactOnce is an API for creating and consuming single-use actions with atomic guarantees. It resolves concurrent requests deterministically, prevents duplicate side effects, and returns clear failure states like already_used or expired. You can time-bound actions, add optional PIN protection, and get an immutable audit trail. Batch-create actions via CSV for use in magic links, invitations, password resets, secure downloads, approvals, and one-time codes.
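The "atomic, consume exactly once" semantics described above can be sketched in-process. This is a minimal illustration with invented names, assuming a lock to serialize concurrent consumers; the real product enforces the same guarantees server-side:

```python
# Minimal in-process sketch of "consume exactly once" semantics of the kind
# ExactOnce describes (already_used / expired failure states). All names here
# are illustrative; a real service enforces these guarantees server-side.
import threading
import time

class SingleUseActions:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._actions = {}  # id -> {"used": bool, "expires": float}

    def create(self, action_id: str, ttl_seconds: float) -> None:
        """Register a time-bounded, single-use action."""
        with self._lock:
            self._actions[action_id] = {
                "used": False,
                "expires": time.monotonic() + ttl_seconds,
            }

    def consume(self, action_id: str) -> str:
        """Atomically resolve to 'ok', 'not_found', 'expired', or 'already_used'."""
        with self._lock:  # concurrent callers serialize here: exactly one sees 'ok'
            entry = self._actions.get(action_id)
            if entry is None:
                return "not_found"
            if time.monotonic() > entry["expires"]:
                return "expired"
            if entry["used"]:
                return "already_used"
            entry["used"] = True
            return "ok"

actions = SingleUseActions()
actions.create("invite-42", ttl_seconds=60)
print(actions.consume("invite-42"))  # ok
print(actions.consume("invite-42"))  # already_used
```

The key design point is that the check and the state flip happen under one lock, so two racing requests can never both see an unused action: one gets `ok`, the other deterministically gets `already_used`.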

View startup

Renamer.ai – Bulk rename files with AI that understands their content


Renamer.ai is an AI-powered file renaming tool that looks inside your files to understand what they are, renaming thousands of them in seconds. Drop a folder full of IMG_4382.jpg, Screenshot 2026-02-13 at 10.43.12 AM.png, and Untitled-3-final-v2.pdf files, and get back clean, descriptive filenames without manually touching a single one.

Most batch renamers require you to write rules and patterns. Renamer.ai skips all that; the AI reads the content of each fileβ€”images, documents, assetsβ€”and generates meaningful filenames on its own. It's available on Windows, Mac, and web, with no naming conventions to memorize, no regex to learn, and no manual sorting ever again.

View startup

Sony To Rebrand PlayStation Network in Late 2026

According to Insider Gaming, Sony will be retiring the PlayStation Network branding, including "PSN," by September 2026. This news comes by way of an email sent to developers in preparation for the branding shift, likely as a way to ensure that future game marketing messaging reflects the change. The email explains that the shift away from PSN and PlayStation Network is meant to "properly capture the breadth of our evolving digital service," but that the change is merely a branding exercise, so it seems unlikely that the actual functionality of the service will change. At the time of writing, it's unclear what will replace PSN and PlayStation Network, but it seems reasonable to assume some more generic branding will take its place. The full email follows.

Nametastic – Free AI name generator - 1,000+ available domains, ranked and explained


Nametastic exists because naming a startup shouldn't take longer than building one. Describe your business idea in a few sentences, and the AI generates over 1,000 brandable suggestions - each scored for memorability and pronounceability, each checked against live domain registries across 50+ extensions in real time. The best ideas surface first, and you can see what's available without leaving the page.

Most generators combine random word fragments and hope something sticks. Nametastic actually reads your description, understands the concept, and produces options that fit your industry and tone. Five minutes from idea to a shortlist with available domains. Free, no signup required.

View startup

CISA Warns of Zimbra, SharePoint Flaw Exploits; Cisco Zero-Day Hit in Ransomware Attacks

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has urged government agencies to apply patches for two security flaws impacting Synacor Zimbra Collaboration Suite (ZCS) and Microsoft Office SharePoint, stating they have been actively exploited in the wild. The vulnerabilities in question are as follows - CVE-2025-66376 (CVSS score: 7.2) - A stored cross-site scripting

NeuralOps – Keeps remote teams balanced and focusedβ€”privacy first, no micromanagement


NeuralOps is an AI-powered work rhythm intelligence for distributed teams that delivers visibility without micromanagement. It maps focus, collaboration, and recharge patterns, shows real-time activity dashboards and screenshots, and highlights focus time versus meetings and burnout risks. Teams stay in control with transparent tracking, clear indicators, and one-click pauses, with no keystroke logging. Managers coordinate better with team and individual views, while AI nudges help improve workflows across macOS and Windows. Save up to $12K annually compared to similar products.

View startup

founder/mode – Get LinkedIn posts written and scheduled in your voice each week


founder/mode helps founders publish high-performing LinkedIn content in less than 30 minutes a week. Share voice notes, transcripts, and links in a shared knowledge base, then its fine-tuned model drafts posts in your tone while human editors refine every line.

Approve with one click and schedule to post, keep your voice consistent, and turn ideas into conversations with customers, partners, and leads

View startup

S2Flow – Business OS where AI agents run sales, marketing, and e-commerce


S2Flow is a Business OS that uses AI agents to run and improve marketing, sales, and e-commerce operations. You set goals and KPIs, and it generates strategies, assigns agents across departments, and executes tasks like ads, content, SEO, email, and reporting while continuously optimizing to hit targets.

You control guardrails via a strategy queue that auto-applies high-confidence actions and flags others for review. S2Flow tracks metrics such as ROAS, pipeline velocity, and average order value, feeds results back into planning, and keeps your growth engine improving without manual orchestration.

View startup

Arch Tools – Call 61 AI tools with one key and pay per use in USDC


Arch Tools provides 61 production-ready AI tools via a single REST API and MCP protocol. These tools include code analysis, web scraping, image generation, NLP, sentiment analysis, crypto data, search, and more. You pay per API call with x402 USDC micropayments on Base, Polygon, Avalanche, and Solana, or you can use traditional API keys. AI agents can discover tools through MCP and pay autonomously without human approval. A free tier offers 100 credits per month, and TypeScript and Python SDKs are included.

View startup

GNOME 50 Ditches X11 but Launches With Improved Color Management, Native Fractional Scaling, and VRR Support

In the lead-up to the official launch, GNOME 50 was previewed in beta via distributions like the Fedora 44 beta, but the updated version of the desktop environment, dubbed Tokyo, has now officially launched in stable form, bringing with it a slew of long-expected changes, although some features did not make the cut for GNOME 50 and will be pushed back to GNOME 51. The biggest additions to GNOME 50 are the official launch of variable refresh rate (VRR) and fractional scaling, both of which are enabled as long as the hardware supports them. Beyond VRR and fractional scaling, GNOME 50 also features a low-latency cursor mode that allows the cursor to refresh independently of the window behind it when VRR is active. It also features workarounds for stuttering and frame-timing issues in the NVIDIA driver, which should result in "noticeably smoother window animations and general desktop fluidity for users with NVIDIA GPUs."

The desktop UI also now features a power profile indicator, a nice quality-of-life addition for laptop users; the settings app also gains a reduced-motion toggle for accessibility, as well as parental controls with daily screen time limits, app restrictions, and schedules for child accounts. With GNOME 50, the developers have fixed bugs in the color management and display calibration options, making it more viable for creatives doing color-sensitive work, and the sound settings screen also more clearly denotes input and output audio devices and volumes. The remote desktop functionality in GNOME now features GPU acceleration, which is nice for high-performance applications. Unfortunately, while a great many changes were made in GNOME 50, mostly enabling better performance, enhancing security, or polishing the UI, session restore functionality did not make the cut. GNOME 50 also removed the Mutter backend code for X11 support entirely, which has been a somewhat controversial choice, given that some workflows and features still cannot be replicated on Wayland.

Scientio – Organize ideas and projects with an AI-first knowledge platform


Scientio is a knowledge management platform that helps you capture ideas, plan projects, and publish knowledge bases. It uses an AI-first chat interface to create markdown pages and journals compatible with Obsidian and Logseq, while automatically indexing topics and cross-referencing notes. Share read-only online knowledge bases so others can browse your research or documentation without making changes.

View startup

Intel Silently Adds Core i7-13645HX to Its Raptor Lake Mobile CPU Lineup

Intel has quietly rolled out a new mobile CPU in the Raptor Lake lineup, the Core i7-13645HX. The chip is built on Intel's Intel 7 process (a 10 nm-class node) and splits its 14 cores across six Performance cores and eight Efficient cores, just like the Core i7-13650HX. That means both share the same 20-thread count, 24 MB of Smart Cache, 4.9 GHz max turbo clock, and power specs: 55 W base, 157 W peak. However, there are some differences. The i7-13645HX supports DDR5-5600, up from DDR5-4800 on its older sibling. With that comes an increase in peak memory bandwidth to 89.6 GB/s, up from 76.8 GB/s. It also sports a more capable Intel UHD Graphics P730 with 32 execution units, versus just 16 on the i7-13650HX. The older part keeps one slight edge: its E-cores can boost to 3.6 GHz, while the new chip's top out at 3.5 GHz. But for users who need faster memory and better integrated graphics, especially those running light gaming or video workloads, the new chip might be the better choice.
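The quoted bandwidth figures follow directly from the standard peak-bandwidth arithmetic: transfer rate times 8 bytes per 64-bit channel times the number of channels. A quick check, assuming a dual-channel configuration:

```python
# Sanity-checking the memory bandwidth figures above: peak DRAM bandwidth is
# transfer rate (MT/s) x 8 bytes per 64-bit channel x number of channels.
def peak_bandwidth_gbps(mt_per_s: int, channels: int = 2) -> float:
    """Peak theoretical bandwidth in GB/s for a 64-bit-per-channel setup."""
    # MT/s x 8 bytes/transfer gives MB/s per channel; divide by 1000 for GB/s
    return mt_per_s * 8 * channels / 1000

print(peak_bandwidth_gbps(5600))  # 89.6 GB/s for dual-channel DDR5-5600
print(peak_bandwidth_gbps(4800))  # 76.8 GB/s for dual-channel DDR5-4800
```

DDR5 technically splits each DIMM into two 32-bit subchannels, but the total width per DIMM is still 64 bits, so the arithmetic comes out the same.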

Compared to the i7-13700HX, which has eight P-cores, a higher boost clock (5.0 GHz), 30 MB of cache, and slightly more power headroom, this one feels like something in between: less raw performance than the top-tier HX model, but with better memory support. Intel rebranded Raptor Lake as the Core 200 series in late 2024, positioning it for higher-end laptops. The i7-13645HX appears to be part of that push: a refined option for systems targeting mainstream performance without stepping up to the full HX stack. There is no official pricing yet; the chip is likely to show up in select gaming and creator machines over the next few months. Remember that just this Tuesday, Intel rolled out its Core Ultra 200HX Plus mobile processors, including the Core Ultra 9 290HX Plus and Core Ultra 7 270HX Plus. These processors add new features and architectural refinements, including support for the new Intel Binary Optimization Tool that can improve native performance in select games.

The Legend of California Gets Alpha Playtest in Late March, Minimum Specs Call for RTX 2060 Super

Jeff Kaplan's new game studio, Kintsugiyama, recently announced The Legend of California as its first title: a Western-inspired open-world online FPS exploration game with shades of the survival-craft genre. Kaplan, who previously served as lead designer on Overwatch, said that The Legend of California was slated for a 2026 launch, and it now seems he was serious about that timeline, with the studio announcing a public alpha playtest in a new video on YouTube. The playtest will run from March 26 to March 30; participants can sign up on the Steam store page and will be selected randomly via Steam sometime around March 25. The playtest will be online-only, and players will be able to play solo or in a squad of up to four players.

In the playtest announcement video, Kaplan admits that it will be a limited playtest, so not everyone who registers will get a slot to play. Kaplan calls it the "next important moment for the development of this game," suggesting Kintsugiyama will use the data and feedback gathered from the playtest for further improvement and development of the game. An FAQ on the game's Steam page says that there will be another playtest "in summer," and that the game will launch in Early Access before getting a full release. The FAQ also addresses the game's minimum hardware requirements, which were seemingly added to the Steam page along with the playtest announcement. According to the FAQ, the minimum hardware specifications (an NVIDIA GeForce RTX 2060 Super 8 GB or AMD Radeon RX 6600 8 GB, 16 GB of RAM, and an Intel Core i7-10700K or AMD Ryzen 7 3700X) will target 30 FPS, while the recommended spec (an NVIDIA GeForce RTX 3080 or AMD Radeon RX 6800, 32 GB of RAM, and an Intel Core i7-12700K or AMD Ryzen 7 5700X) will target 60 FPS.

EthosOne – Streamline governance, compliance, and risk for schools


EthosOne is a governance platform built for independent schools. It centralizes board reporting, state-aligned compliance calendars, and ISO 31000 risk management so leaders can see obligations, owners, and evidence at a glance. Schools document controls, manage duty of care for camps and excursions, and keep every artifact accountable, turning oversight into an active, consistent process across Australia.

View startup

Game Developers "Found Out At the Same Time as the Public" About NVIDIA DLSS 5

Much has been said about NVIDIA's recent DLSS 5 announcement, with many criticisms of the tech stemming from how the neural rendering potentially alters the artistic vision and tone of the game. While NVIDIA insists that developers remain in full control, and that the game studios were involved in creating the promotional in-game imagery used to show off the neural rendering features, a new report from Insider Gaming suggests this might not have been the case.

According to game developers from both Capcom and Ubisoft who spoke to Insider Gaming, while the individual studios may have been involved in marketing DLSS 5, the teams who actually worked on the games were just as surprised by the results as the rest of the gaming community. A Ubisoft developer is quoted as saying "We found out at the same time as the public," while developers at Capcom expressed similar sentiments, noting that it was surprising to see Capcom, which has generally been protective of its IPs when it comes to AI involvement, participating in the marketing for DLSS 5. Further, the Capcom developers expressed concern at how DLSS 5 might change how Capcom approaches generative AI and its role in game development.

(PR) Nordcurrent Labs Unveils Defender of the Crown: The Legend Returns at Future Games Show

During today's Future Games Show Spring Showcase, developer and publisher Nordcurrent revealed Defender of the Crown: The Legend Returns. The beloved 1986 strategy experience is coming to PC (Steam, GOG), Nintendo Switch, PlayStation 5, and Xbox Series X|S, and will include three modes: Retro Mode, Classic Mode, and Kingdom Mode. An Amiga hallmark, the title has been rebuilt to bring the Saxon-Norman conflict to a new audience with modernized visuals, refined mechanics, and an ambitious expansion of its classic formula.

England is a land of chaos where the crown has been stolen, and the King is dead. Players must step into the boots of a Saxon lord to outmaneuver rival Normans, raise armies, and reclaim the throne. This remake preserves the design of the original, while updating the experience with cleaner systems and meaningful quality-of-life enhancements.

(PR) Micron Reports Results for the Second Quarter of Fiscal 2026

Micron Technology, Inc. (Nasdaq: MU) today announced results for its second quarter of fiscal 2026, which ended February 26, 2026.

Fiscal Q2 2026 highlights
  • Revenue of $23.86 billion versus $13.64 billion for the prior quarter and $8.05 billion for the same period last year
  • GAAP net income of $13.79 billion, or $12.07 per diluted share
  • Non-GAAP net income of $14.02 billion, or $12.20 per diluted share
  • Operating cash flow of $11.90 billion versus $8.41 billion for the prior quarter and $3.94 billion for the same period last year
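For context, a quick back-of-the-envelope check (using only the revenue figures quoted above) shows how steep the growth is, roughly 75% sequentially and nearly 200% year over year:

```python
# Revenue in $ billions, taken from the release above
rev_q2, rev_prior_q, rev_prior_year = 23.86, 13.64, 8.05

qoq_growth = (rev_q2 / rev_prior_q - 1) * 100      # quarter-over-quarter, %
yoy_growth = (rev_q2 / rev_prior_year - 1) * 100   # year-over-year, %

print(f"QoQ: {qoq_growth:.1f}%, YoY: {yoy_growth:.1f}%")  # QoQ: 74.9%, YoY: 196.4%
```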

SK Group Warns Memory Shortage May Continue into 2030

Earlier this week we reported that the memory shortage was going to persist until late 2028, but news out of Korea now suggests it might stick around all the way until 2030. SK Group Chairman Chey Tae-won is quoted by The Korea Times as saying that "The shortage stems from a lack of wafer capacity, and securing additional wafers takes at least four to five years," continuing "We expect the industry-wide supply shortfall to persist at over 20 percent through 2030."

The company is apparently looking at ways to stabilise pricing, but when that will happen is anyone's guess at this point. In this specific case, it's down to SK hynix CEO Kwak Noh-jung to try and resolve the immediate situation. SK Group doesn't seem interested in building fabs outside of South Korea either, as Chey also told The Korea Times that "Korea already has the infrastructure in place, allowing for a much faster response. That is why we are concentrating our efforts here." There is at least some good news here, as it doesn't seem like SK hynix will put its entire focus on HBM memory; the company is aware that doing so might lead to further shortages of DRAM, which would affect the broader tech industry.

Rytora BuildLabs – Turn plain language into production-ready full-stack web apps


BuildLabs turns plain language into production-ready full-stack applications. It generates a React + Vite frontend, a NestJS backend, and a Prisma + PostgreSQL database on Neon, all in exportable TypeScript you own. Use a live preview and chat to iterate features, refine UI, and fix details. Projects include JWT auth, protected routes, and real data. Export code or deploy with one click to Vercel and Railway. Scale from solo work to teams with workspaces, role-based access, and priority support on paid plans.

View startup

Google AI Overviews now appear on 14% of shopping queries: Report

Google's AI Overviews now appear on 14% of shopping queries, up 5.6x from 2.1% in November 2025, according to new Visibility Labs analysis.

  • Ecommerce brands have been mostly unaffected by AI-driven click loss in Search. That seems to be changing.

Why we care. As Google's AI Overviews expand across product searches, ecommerce brands face a growing risk of losing visibility and clicks before shoppers reach standard organic or Shopping listings.

The details. The analysis targeted product-intent keywords tied to results with a Shopping box, paid or organic: terms like "weighted blanket," "mushroom coffee," "protein powder," and "blue T-shirts."

  • That produced 20,900,323 shopping keywords.
  • Of those, 2,919,229 triggered an AI Overview, a 14.0% penetration rate.
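The 14.0% penetration figure is consistent with the raw keyword counts in the study:

```python
shopping_keywords = 20_900_323   # product-intent keywords analyzed
with_ai_overview = 2_919_229     # those that triggered an AI Overview

penetration = with_ai_overview / shopping_keywords * 100
print(f"{penetration:.1f}%")  # 14.0%
```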

What they're saying. Report author Jeff Oxford, founder and CEO of Visibility Labs, concluded:

  • "Focusing on AI SEO is no longer a luxury, it's becoming a necessity. Ecommerce sites need to think beyond traditional SEO and start incorporating AI SEO best practices into their search optimization strategy."

The report. AI Overviews Now Appear on 14% of Shopping Queries, Up 5.6x in 4 Months (Study of 20.9M SERPs)

Glimpse – Protect your attention and live mindfully through building true connections


Glimpse is on a mission to foster mindfulness by protecting your attention and building true connections. We empower you to control your technology and live connected, mindful lives with real people in the real world. The less you use it, the more valuable it becomes.

Glimpse starts by blocking distractions and forces that steal your attention. It then builds mindfulness through an ecosystem that helps you connect with the real world, yourself, and others, disconnecting you from what does not matter and connecting you with what does.

View startup

VisualGPT – Create, edit, and enhance images with AI in your browser


VisualGPT is an all-in-one AI platform to create, edit, and enhance images right in the browser. It combines hundreds of image tools, including image generation, background removal, upscaling, retouching, and quick design, so you can go from idea to polished visuals fast.

The platform integrates top models like Nano Banana, Flux, Ideogram, and Stable Diffusion to deliver sharp, ready-to-use results. Use purpose-built apps for photo editing, clothes and hairstyle changes, interior and room design, and infographic or flowchart creation with simple prompts or uploads and no learning curve.

View startup

Clico – AI tools for every text box in your browser


Clico brings AI to every text field in your browser. Use simple shortcuts to draft replies, continue writing in your voice, rewrite selections, summarize long pages, and search highlighted text without switching tabs. It reads on-screen context from emails, posts, and docs to produce accurate, in-place results.

Dictate by holding Command, then edit or insert at your cursor. Clico is free to use, needs no API key, and works across all Chromium browsers.

View startup

Small publisher search traffic fell 60% over two years: Data

Small publishers are seeing sharp traffic declines from AI search experiences, according to new data from thousands of global sites using Chartbeat analytics.

The details. Publishers with 1,000 to 10,000 daily pageviews lost 60% of search referral traffic over two years, Chartbeat found.

  • Mid-sized sites with 10,000 to 100,000 daily pageviews lost 47%.
  • Large publishers with more than 100,000 daily pageviews were down 22%.

Reality check. AI referrals aren't replacing lost search traffic.

  • Google Search pageviews fell 34% year over year.
  • Google Discover dropped 15%.
  • ChatGPT referrals rose 200% but still account for less than 1% of total traffic.

Yes, but. Traffic is shifting, not disappearing. Total weekly pageviews across publishers fell just 6% from 2024 to 2025, a typical swing tied partly to the news cycle. Search is shrinking as a share of traffic, while direct, internal, and messaging channels are growing.

Why we care. SEO has long been the growth engine for smaller sites. That's no longer true. If you don't have a strong brand, direct audience relationships, repeat visitors, or differentiated value, you face the biggest risk as search referrals decline.

The Axios report. Exclusive: Small publishers hit hardest by search traffic declines.

Google retires several legacy ad format policies

Google is cleaning up outdated requirements in Google Ads, reflecting how legacy ad formats have evolved into newer, more automated products.

What's happening. As of March 17th, Google discontinued multiple ad format policies, including those related to form ads, image quality, responsive ads, and text ads.

What changed. These requirements are being removed because the original formats have transitioned into newer campaign types and ad experiences, making the old policy frameworks no longer relevant.

Why we care. This update simplifies the policy landscape in Google Ads, reducing confusion around outdated requirements tied to legacy formats.

What advertisers should do. Advertisers are now expected to rely on current Google Ads policies and ad format requirements, which govern newer formats like automated and AI-driven campaigns.

The bottom line. By removing legacy requirements, Google is streamlining policies in Google Ads, signalling a continued move toward fewer, more unified standards for modern ad formats.

(PR) Imec Receives the World's Most Advanced High NA EUV System

Today, imec, a world-leading research and innovation hub in advanced semiconductor technologies, announces the arrival of the ASML EXE:5200 High NA EUV lithography system, the most advanced lithography tool available today. With this strategic milestone, imec reinforces its position as the industry's launchpad into the ångström era, giving its global partner ecosystem unparalleled early access to the next generation of chip-scaling technologies. Integrated directly with a comprehensive suite of patterning and metrology tools and materials, the High NA EUV system will empower imec and its ecosystem partners to unlock the performance needed to pioneer sub-2 nm logic and high-density memory technologies that will fuel the rapid growth of advanced AI and high-performance computing.

Luc Van den hove, CEO of imec: "The past two years have marked an important chapter for High NA (0.55NA) EUV lithography, with imec and ASML joining forces with the ecosystem in its joint High NA EUV Lithography Lab in Veldhoven (The Netherlands) to pioneer High NA EUV technology. With the installation of the EXE:5200 High NA EUV lithography system into our 300 mm cleanroom in Leuven (Belgium), we aim to bring these High NA EUV patterning technologies to an industry-relevant scale and to develop the next-generation High NA EUV patterning use cases. Its unmatched resolution, improved overlay performance, high throughput, and a new wafer stocker that improves process stability and throughput, will give our partners a decisive advantage in accelerating the development of sub-2 nm chip technologies. As the industry moves into the ångström era, High NA EUV will be a cornerstone capability, and imec is proud to lead the way by offering its partners the earliest and most comprehensive access to this technology."

AMD "Medusa Point" APU Early Benchmarks Match "Strix Point" at Half the Clock Speed

AMD is preparing to launch its "Medusa Point" APU in early 2027. However, more benchmarks are emerging to showcase what the actual SoC can do as AMD and its OEM partners test the chip. In the latest Geekbench v6 run, AMD's 10-core, 20-thread "Zen 6" chip appeared with the AMD Engineering Sample number 100-000001713-21_N, achieving a 2,300 single-core and 13,002 multicore score while officially running at only a 2.4 GHz base frequency. In real-world operation, it ran within the range of 2.0-2.1 GHz during the benchmark. The most surprising factor is that this "Medusa Point" test system can match a 10-core, 20-thread AMD Ryzen AI 9 365 "Strix Point" APU that operates at more than double the frequency. When comparing the two, "Medusa Point" scores slightly lower in single-core performance, while the multicore score is surprisingly higher.

This phenomenon could be attributed to the fact that the "Zen 6" cores in the "Medusa Point" APU are much better performing and more optimized for the workloads that Geekbench tests. The IPC improvement target from "Zen 5" in "Strix Point" to the newest "Zen 6" could be a high single-digit to low double-digit gain on average. It is likely that the combination of new instructions and IPC improvements is what is pushing "Medusa Point" so high. Since the new APU also appeared in firmware running AVX-VNNI in FP16 precision, we might be seeing these workloads getting accelerated thanks to the lower precision of the floating-point operations. For now, the situation remains a mystery, at least until more benchmarks are available in the coming months. AMD is expected to launch this new APU around CES 2027, so we still have a lot of time before official and third-party benchmarks are released.

Microsoft Won't Auto-Install Microsoft 365 Copilot App on Windows 11 Anymore After Backlash

Microsoft has reportedly decided not to automatically install its Microsoft 365 Copilot App on Windows 11, following significant user backlash against its AI integration into the operating system. What seemed to be a major initiative by the Redmond giant is now being reined in after pushback from power users and enthusiasts who have been resisting the "Copilot everywhere" strategy that Microsoft has recently promoted. According to a new update on the Admin 365 dashboard, Microsoft states, "Automatic installation of the Microsoft 365 Copilot app on Windows devices with Microsoft 365 desktop apps, planned for December 2025, is temporarily disabled. Existing installations remain unaffected. Admins can deploy the app via other methods and should await further updates."

For those who may not remember, the Microsoft 365 Copilot App is the rebranded version of what was originally called Microsoft 365 / Office Hub. This app version was introduced alongside the regular Copilot app on customers' Windows 11 systems. Back in September 2025, Microsoft planned to automatically install the Microsoft 365 Copilot App on Windows 11, along with the regular Copilot App, which meant users would receive two "AI-enhanced" applications automatically. This move sparked a significant backlash from the community, particularly from enthusiasts who saw little to no added value in Microsoft's AI integrations. Users made it clear that they wanted this enhancement to be optional, if not stopped altogether. Microsoft has recently commented that the company is focusing on what truly matters to consumers, such as fixing the bug-prone operating system and enhancing core features for a smoother user experience in Windows 11. They also mentioned stepping back from the "AI-everywhere" approach.

OFAC Sanctions DPRK IT Worker Network Funding WMD Programs Through Fake Remote Jobs

The U.S. Department of the Treasury's Office of Foreign Assets Control (OFAC) has sanctioned six individuals and two entities for their involvement in the Democratic People's Republic of Korea (DPRK) information technology (IT) worker scheme, which aims to defraud U.S. businesses and generate illicit revenue for the regime to fund its weapons of mass destruction (WMD) programs. "The North Korean

The leaderboard β€œyou can’t game,” funded by the companies it ranks

Artificial intelligence models are multiplying fast, and competition is stiff. With so many players crowding the space, which one will be the best, and who decides that? Arena, formerly LM Arena, has emerged as the de facto public leaderboard for frontier LLMs, influencing funding, launches, and PR cycles. In just seven months, the startup went from a UC Berkeley PhD research […]

SMX Now: Learn how brands must adapt for AI-driven search

AI Search Picks Winners: Here's the GEO Strategy Behind It

Visibility is no longer just about ranking. It depends on whether your content is discovered, evaluated, and selected in AI-driven search experiences.

We're kicking off our new monthly SMX Now webinar series on April 1 at 1 p.m. ET with iPullRank's Zach Chahalis, Patrick Schofield, and Garrett Sussman on how you must adapt.

The session introduces iPullRank's Relevance Engineering (r19g) framework for executing Generative Engine Optimization (GEO) through an omnichannel content strategy. You'll learn how AI search uses query fan-outs to discover and select sources, and how to structure content so it's retrieved, surfaced, and cited.

It also emphasizes that GEO success isn't universal. It requires testing, tailored strategies, and a three-tier measurement model spanning discovery, selection, and citation impact.

Save your spot

Search Engine Land is proud to be a media partner for iPullRank's upcoming SEO Week event.

Google brings vehicle feeds to Search campaigns

Google is expanding how inventory appears in Google Ads Search campaigns, giving automotive advertisers a more visual, product-rich format directly in text ads.

What's happening. Google Ads now supports vehicle feed integration on Search ads, allowing advertisers to pull inventory from Google Merchant Center and enhance existing text ads with details like make, model, price, and images.

How it works. Vehicle listings appear as clickable assets alongside standard Search ads, either below or beside the main text. Users can click through to a specific vehicle detail page or a broader landing page, depending on the interaction.

Why we care. This update lets automotive advertisers bring real inventory directly into Search ads, making them more engaging and useful for high-intent users. It also means richer visibility without extra campaign setup, while potentially driving more qualified leads by showing key details upfront within Google Search.

Why it’s notable. The update brings Shopping-style visual elements into Search campaigns, helping advertisers showcase real inventory without needing separate campaign types.

For advertisers. Key benefits include a more engaging ad experience, the potential for higher-intent leads, and the ability to use existing Merchant Center feeds without duplicating setup.

Measurement. Performance can be tracked using the "Click type" segment, allowing advertisers to understand how users interact with vehicle listings versus standard ad components.

Matching. Google's automation determines which vehicles appear based on user intent and query context, continuing the shift toward less manual control and more AI-driven ad assembly.

The bottom line. Vehicle feeds in Search campaigns give automotive advertisers a way to blend inventory with intent-driven queries, turning standard text ads into more dynamic, product-led experiences within Google Search.

The world's first 16TB M.2 SSD has appeared on Amazon, and its price is eye-watering

16TB M.2 SSDs are now available to purchase, and their pricing is NUTS. If you want to max out your M.2 slots and you have an unlimited budget, you can now buy a 16TB M.2 NVMe SSD for your PC. Fanless Tech has spotted a 16TB PE4 M.2 SSD from Exascend, a drive that costs […]

The post The world's first 16TB M.2 SSD has appeared on Amazon, and its price is eye-watering appeared first on OC3D.

(PR) Quantum Machines Launches Open Acceleration Stack Alongside NVIDIA and AMD

Today Quantum Machines launches The Open Acceleration Stack, a first-of-its-kind framework allowing users to integrate any classical processor (XPU) into their quantum control stack. This novel architecture allows quantum computers not only to be Quantum Error Correction (QEC)-ready and AI-ready, but also QEC- and AI-native.

The Open Acceleration Stack marks a significant expansion of Quantum Machines' Orchestration Platform, the industry's leading hardware and software framework for the control and operation of quantum processors. Using Quantum Machines' OPNIC (OPX Network Interface Card) and NVIDIA NVQLink, the framework enables an ultra-low, microsecond-level latency link between its proprietary Pulse Processing Unit (PPU) and high-performance accelerators, including GPUs, CPUs, FPGAs and ASICs.

(PR) Tuxedo Intros Gemini 17 Gen 4 17.3-Inch Notebook with AMD Ryzen 9 9955HX

Goodbye Desktop PC: The Tuxedo Gemini 17 - AMD represents the classic desktop replacement notebook. With a large 17.3-inch high-resolution screen, an absolute top-tier CPU, powerful graphics, and enhanced cooling for quieter operation, it delivers excellent stationary performance in a portable, subtly designed workstation - ideal for both work and play. While its Intel-based sibling with Core i9-14900HX and NVIDIA GeForce RTX 5070 Ti is aimed at users with very high graphics requirements, the Gemini 17 - AMD primarily shines with its exceptionally fast high-end processor.

Solid Desktop Replacement Chassis with Classic Workstation Flair
With an overall height of just under 2.9 cm and a weight of 2.8 kg, this Linux desktop replacement is not designed for constant mobility, but still allows comfortable transportation over short to medium distances.

8BitDo Officially Launches Retro Wireless Receiver for N64 Alongside Classic Grey 64 2.4 GHz Wireless Controller

We recently visited 8BitDo at CES 2026, where we saw the Retro Wireless Receivers that enable wireless controller compatibility with retro game consoles. Now, 8BitDo has officially released both the Retro Wireless Receiver for the N64 and a modern wireless controller designed to work with a 2.4 GHz version of the same receiver. The Retro Wireless Receiver for N64 is available on Amazon and the 8BitDo eShop for $24.99, operates via BLE, and is compatible with a whole host of 8BitDo and first-party console controllers: the Switch 64 Online, Switch Pro, and Wii U Pro controllers; seemingly any 8BitDo controller that supports BLE, like the Ultimate series and SN30 Pro series; Xbox One, Series, and Elite Series controllers; and PlayStation's DualShock 4, DualSense, and DualSense Edge controllers. The receiver also supports vibration and built-in memory, with customization available via 8BitDo's Ultimate Software. 8BitDo also sells a mod kit to make the original N64 controller wireless, which is likewise compatible with the Retro Wireless Receiver.

The 8BitDo 64 2.4 GHz controller, on the other hand, is mostly a rehash of the existing 8BitDo 64 Bluetooth controller, repackaged to work over 2.4 GHz and paired with a Retro Wireless Receiver for N64 compatibility. The 64 controller uses modern controller design and ergonomics with a button layout adapted for the N64: it features large A and B buttons where the right thumbstick would normally sit on a modern game controller, and a D-pad in place of the usual ABXY face buttons. These design changes make the controller compatible with the Nintendo N64 console, but the internals are quite modern, featuring a Hall-effect joystick and support for Windows (via a wired connection) and for the Analogue3D, the FPGA-based reimagining of the original N64, with vibration support on the latter. The 8BitDo 64 2.4 GHz controller is available on Amazon and 8BitDo's eShop for $39.99.

HireMeIQ – Track every application, interview, response, rejection, and ghosting


HireMeIQ helps job seekers organize every application, interview, response, rejection, and ghosting in one place. Import roles from links or emails, log each stage, and see at-a-glance insights into volume, response rates, and progress.

HireMeIQ focuses on clarity today and hiring transparency tomorrow, surfacing patterns and timelines across companies as the community grows. Your data stays private, you control what's tracked, and early users shape what comes next.

View startup

Interlock Ransomware Exploits Cisco FMC Zero-Day CVE-2026-20131 for Root Access

Amazon Threat Intelligence is warning of an active Interlock ransomware campaign that's exploiting a recently disclosed critical security flaw in Cisco Secure Firewall Management Center (FMC) Software. The vulnerability in question is CVE-2026-20131 (CVSS score: 10.0), a case of insecure deserialization of a user-supplied Java byte stream, which could allow an unauthenticated, remote attacker to

Where to focus technical SEO when you can’t do it all

When technical issues hold your SEO program back, progress stalls. Yet technical SEO remains a top priority for leading SEOs and Google, and a key factor correlated with rankings in Backlinko's 2026 Google ranking factors report.

One of the biggest hurdles for in-house SEO programs is the lack of resources to implement changes to the website.

  • Up to 67% of respondents cite non-SEO dev tasks as the biggest reason technical SEO changes can't be made, according to Aira's State of Technical SEO Report.
  • This is costing businesses an additional $35.9 million in potential revenue each year, seoClarity estimates.

When you can’t do everything, focus on the technical SEO tasks that drive the most impact. Here are the priorities to start with.

Where to focus first: Prioritization techniques

Most enterprise SEO teams want to fix issues that impact the most pages, revenue, and user journeys. Aira's report ranks in-house technical SEO changes in this order:

  • Quick wins (big impact, little effort).
  • Expected impact on KPIs.
  • Impact on users.
  • Best practices based on Google guidelines.
  • Industry changes and algorithm updates.

Still, with millions of pages, it's difficult to know where to focus. Here are some tips:

  • To limit what you work on, start with small groups of keywords or specific product areas.
  • Fix any barriers to ranking.
  • Ensure all major pages are indexed.
  • Consolidate, improve, or remove low-quality pages that don't need to be indexed.

Starting with a technical SEO audit lets you identify the exact technical issues you need to resolve, hopefully with a prioritized list of tasks.

SEO tools can help identify and prioritize technical fixes. You may also want to check out "SEO prioritization: How to focus on what moves the needle," which includes prioritization techniques like the Eisenhower Matrix.

Technical SEO - Eisenhower Matrix

If asked for the top foundational technical SEO fixes, I'd point to the following:

1. Site architecture

A well-organized site creates the foundation for your SEO program to run more smoothly. Site structure impacts key SEO outcomes, including crawling, indexing, and user experience, and getting this piece right really sets the stage for a site primed for search.

Fundamentally, site architecture (what I call "SEO siloing") helps you organize a site around how people search. The goal is to have your content and navigation hierarchy mirror the keyword themes/queries people use and to couple that with content that answers intent across the customer journey.

For example, this is how a "power tools" section of a large ecommerce site might be siloed/organized:

Ecommerce 'power tools'

The internal linking piece of siloing reinforces topical authority and funnels strength toward your primary landing pages. This alignment between search behavior, content themes, and site structure turns your site into a ranking asset.

In AI-powered search, you want your enterprise site to be well-organized, with a clear hierarchy and strong internal linking to send stronger relevance signals.

Here are common site architecture issues to look for:

  • Important pages that are buried deep in the site (four-plus clicks from the homepage).
  • Orphaned or weakly linked high-value pages.
  • Any content topics that lack a clear thematic hub or silo.
  • Multiple pages competing for the same core query.
  • Lack of internal linking to connect and reinforce key content sections/silos.
  • Thin or fragmented supporting pages.
  • Taxonomy structures (like tags, archives, categories) that are competing with core pages.

A full site architecture overhaul is difficult in enterprise environments, so focus on the tasks you can reasonably get done. Consider these three action items, which can make an impact with potentially the least resistance:

Strengthen internal linking to priority content

Internal linking can be deployed without changing the core site architecture/URL structure, so this is usually a faster win. Look to fix:

  • Revenue-driving pages that are not positioned as thematic hubs.
  • Topical pages that aren’t interlinked but support the customer journey.
  • Relevant blog content that doesn’t link back to specific topical hubs or service/product pages.
  • High-authority pages that are not linking to supporting pages.
  • Cross-linking between unrelated themes that may dilute topical focus.
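Several of these issues (orphaned pages, pages missing links back to hubs) can be surfaced from an internal link graph. A minimal sketch over a toy graph; in practice the graph would come from a crawler export, and the URLs below are illustrative:

```python
# Spot orphaned or weakly linked pages from an internal link graph
# (page -> pages it links to). Toy sample data; in practice this comes
# from a site crawler export.
from collections import Counter

link_graph = {
    "/": ["/power-tools/", "/blog/"],
    "/power-tools/": ["/power-tools/drills/", "/power-tools/saws/"],
    "/blog/": ["/blog/best-drills/"],
    "/blog/best-drills/": [],          # could link back to /power-tools/drills/
    "/power-tools/drills/": [],
    "/power-tools/saws/": [],
    "/old-landing-page/": [],          # no inlinks at all: orphaned
}

# Count how many internal links each page receives.
inlinks = Counter()
for page, targets in link_graph.items():
    for t in targets:
        inlinks[t] += 1

# Pages with zero inlinks (excluding the homepage) are orphan candidates.
orphans = sorted(p for p in link_graph if inlinks[p] == 0 and p != "/")
print(orphans)
```

The same inlink counts also identify weakly linked high-value pages: any priority URL with only one or two inlinks is a candidate for more contextual links.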

Consolidate topics before rebuilding the structure

Instead of reorganizing the entire taxonomy, look for things like multiple pages targeting the same primary keyword/queries, thin variations of the same topic across different URLs, and blog content that may be competing with key pages like products/services.

Here, you can merge overlapping content, choose and reposition one page as the thematic hub and redirect URLs as needed.

Elevate key pages closer to the top

When resources are tight or politics get in the way, you can reinforce the site architecture by ensuring that:

  • Priority pages are within two to three clicks of the homepage.
  • You add contextual links to reinforce thematic hubs/silos by implementing things like β€œrelated resources.”

2. Crawling and indexing

At the enterprise level, crawling and indexing issues are almost guaranteed. But which issues deserve immediate attention?

Fix indexing issues first

This step may feel obvious, but it’s often overlooked. When search engines aren’t indexing the pages that matter most, fixing that becomes the No. 1 priority on your list.

But with so many URLs on an enterprise site, it can be overwhelming to review the Google Search Console Page indexing report. Instead, start by filtering the Page indexing report by your XML sitemap and compare the URLs listed in the sitemap with what Google has indexed.

Any sitemap URLs that are not indexed should be investigated first. Determine why they’re excluded and fix those issues before expanding your analysis.
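Under the hood, this comparison is just a set difference. A minimal sketch, with small inline samples standing in for your sitemap and Search Console exports:

```python
# Set difference between sitemap URLs and the URLs Google reports as indexed.
# In practice you'd load these sets from exports (your XML sitemap and the
# Search Console Page indexing report); inline samples are used here.

sitemap_urls = {
    "https://example.com/power-tools/",
    "https://example.com/power-tools/drills/",
    "https://example.com/power-tools/saws/",
}
indexed_urls = {
    "https://example.com/power-tools/",
    "https://example.com/power-tools/drills/",
}

# Sitemap URLs missing from the index are the first thing to investigate.
not_indexed = sorted(sitemap_urls - indexed_urls)
print(not_indexed)
```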

During your page reviews, you can do a quick triage by checking:

  • Robots.txt rules that may be blocking critical sections.
  • Noindex tags that may have been accidentally deployed.
  • Canonical tags that might be pointing to the wrong versions.
  • Any rendering issues preventing search engines from seeing content.
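The noindex and canonical checks in that triage list can be scripted. A stdlib-only sketch that parses a page’s HTML for a meta robots noindex directive and the rel=canonical target (the sample markup is illustrative):

```python
# Quick triage of a page's index signals: meta robots "noindex" and the
# rel=canonical target. Parses raw HTML with the standard library, so it
# can run against a saved page source or a fetched response body.
from html.parser import HTMLParser

class IndexSignalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.noindex = "noindex" in (a.get("content") or "").lower()
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonical = a.get("href")

html = """<html><head>
<meta name="robots" content="noindex,follow">
<link rel="canonical" href="https://example.com/power-tools/">
</head><body></body></html>"""

p = IndexSignalParser()
p.feed(html)
print(p.noindex, p.canonical)
```

Run this across a sample of key templates and any page that reports noindex, or a canonical pointing somewhere unexpected, goes to the top of the fix list.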

Eliminate signal dilution

It’s not uncommon for pages across a large site to send mixed signals to search engines. In enterprise environments, this often happens at the template level where one structural issue can weaken countless URLs.

Look for these problems:

  • Multiple URL variations being indexed (HTTP/HTTPS, trailing slash inconsistencies, parameter variants).
  • Canonical tags that conflict with internal links or XML sitemaps.
  • Near-duplicate pages targeting the same primary query.
  • Redirect chains that are working inefficiently.
  • Important pages rendering with more than one URL.
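Many of these variations can be caught by normalizing URLs to a single canonical form and seeing which ones collapse together. A stdlib sketch; the tracking-parameter list is an assumption, so extend it for your stack:

```python
# Normalize common URL variants (scheme, host case, trailing slash,
# tracking params) so duplicate-signal candidates collapse to one form.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "gclid"}

def normalize(url):
    parts = urlsplit(url)
    # Drop tracking parameters, keep functional ones.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    # Force https, lowercase host, strip trailing slash (keep bare "/").
    path = parts.path.rstrip("/") or "/"
    return urlunsplit(("https", parts.netloc.lower(), path, urlencode(query), ""))

variants = [
    "http://Example.com/power-tools/",
    "https://example.com/power-tools",
    "https://example.com/power-tools/?utm_source=news",
]
print({normalize(u) for u in variants})  # all three collapse to one URL
```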

Reduce crawl waste

For an enterprise site, crawl budget is a strategic resource. You want to avoid having crawlers spend time on pages that don’t matter. To see if this is happening, check for some common culprits:

  • Excess crawl activity on faceted navigation and parameter URLs (filters, sorting, pagination variations).
  • Internal search results being indexed.
  • Thin or competing archive structures (tag, category, or date archives).
  • Out-of-stock or low-value product pages cluttering the index.
  • Thin, auto-generated, or outdated location pages.
  • Staging or test environments accidentally being indexed.
  • Legacy or irrelevant content that’s still crawlable.
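One way to quantify crawl waste is to count crawler hits on parameter URLs in your server logs. A rough sketch over a tiny inline sample in common-log style; point it at real access logs (and a proper log parser) in practice:

```python
# Rough crawl-waste check: count Googlebot requests to parameter URLs.
# The log lines are a tiny inline sample; real access logs would be read
# from disk and parsed more defensively.
from collections import Counter
from urllib.parse import urlsplit

log_lines = [
    '66.249.66.1 - - [01/Mar/2026] "GET /power-tools/?sort=price HTTP/1.1" 200 Googlebot',
    '66.249.66.1 - - [01/Mar/2026] "GET /power-tools/drills/ HTTP/1.1" 200 Googlebot',
    '66.249.66.1 - - [01/Mar/2026] "GET /search?q=drill HTTP/1.1" 200 Googlebot',
]

waste = Counter()
for line in log_lines:
    if "Googlebot" not in line:
        continue
    path = line.split('"')[1].split()[1]  # URL from the request line
    if urlsplit(path).query:              # parameter URLs = likely crawl waste
        waste[path] += 1

print(waste)
```

A high share of crawler hits landing on faceted or internal-search URLs is a strong signal that robots.txt rules or parameter handling need attention.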



3. Website performance

If your site is hard to use, it wastes the organic traffic that you’ve worked hard to get. Yelp and Pinterest are two examples of organizations that invested in site performance and experienced revenue and engagement lifts.

  • Yelp reported a 15% increase in conversion rate after improving page performance and reducing load times.
  • Pinterest reported that after launching its Progressive Web App, time spent increased 40%, user-generated ad revenue rose 44%, and core engagements grew 60%.

Which fixes should you prioritize?

Fix backend bottlenecks first

When the backend is performing poorly, it impacts everything from site speed and crawl efficiency to user experience metrics. Check for problems like:

  • High Time to First Byte (TTFB) on any key templates.
  • Sluggish performance on high-traffic pages.
  • Heavy CMS processing or middleware overhead that delays page generation.
  • Slow database queries that lengthen the server response time.

Some action items that can address these issues include:

  • Implementing full-page or edge caching for high-traffic templates.
  • Optimizing database queries and reducing CMS processing overhead on dynamic pages.
  • Upgrading hosting or moving to a scalable cloud infrastructure for traffic spikes.
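To decide which templates need backend work first, you can compare TTFB samples against a budget. A sketch with illustrative numbers; the 800 ms budget is an assumption, so set your own threshold:

```python
# Flag templates whose Time to First Byte exceeds a budget.
# TTFB samples per template are inlined here; in practice they'd come
# from RUM data or repeated synthetic checks.
from statistics import median

TTFB_BUDGET_MS = 800  # assumed threshold; tune for your targets

samples_ms = {
    "/product-template": [420, 510, 480],
    "/category-template": [950, 1020, 890],
    "/blog-template": [300, 280, 350],
}

# Use the median so one slow sample doesn't dominate.
slow = {t: median(v) for t, v in samples_ms.items() if median(v) > TTFB_BUDGET_MS}
print(slow)
```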

Reduce JavaScript and rendering bottlenecks

Enterprise sites face more navigation issues β€” especially with filters or JavaScript β€” and accumulate script bloat. Tag managers, personalization engines, testing platforms, and third-party widgets stack up over time.

Unfortunately, no one wants to remove them because they’re not sure if they’re still needed. When you reduce execution overhead, it can improve interactivity and stability without having to redesign the site.

Here are some problems to look for:

  • Large JavaScript bundles that are loading sitewide.
  • Third-party scripts that are blocking rendering.
  • Poor Interaction to Next Paint (INP) scores.
  • Core content that’s dependent on client-side rendering.

Some high-impact fixes to consider:

  • Audit and remove unused or redundant third-party scripts.
  • Defer or lazy-load any non-critical JavaScript.
  • Shift critical content to render before JavaScript execution by deploying server-side rendering or hybrid rendering where possible.
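Render-blocking scripts are straightforward to enumerate: any external script loaded without defer or async. A stdlib sketch over sample markup; run it against real page source in practice:

```python
# Find <script src=...> tags without defer/async, i.e. potentially
# render-blocking bundles or third-party scripts.
from html.parser import HTMLParser

class ScriptAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.blocking = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # boolean attrs like "async" appear as keys
        if tag == "script" and "src" in a and "defer" not in a and "async" not in a:
            self.blocking.append(a["src"])

html = """<head>
<script src="/js/app.bundle.js"></script>
<script src="https://cdn.example.com/widget.js" async></script>
<script src="/js/analytics.js" defer></script>
</head>"""

audit = ScriptAudit()
audit.feed(html)
print(audit.blocking)
```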

Improve what users see first

Site performance is also about perceived speed and the first meaningful interaction for users. This is another area where Google’s Core Web Vitals become useful as a diagnostic tool.

Common culprits that cause issues in the user experience category include:

  • Hero images that are loading late.
  • Any render-blocking CSS or JavaScript.
  • Layout shifts that are caused by ads or dynamic elements.
  • Above-the-fold content that’s being delayed by non-critical assets.

When considering what to fix, focus on structural optimizations that change how the browser prioritizes what matters most:

  • Preload and properly size all above-the-fold images.
  • Inline critical CSS and defer any non-essential styles/scripts.
  • Reserve static space in the layout for dynamic or third-party elements (ads, embeds) to prevent layout shifts.

Improve speed

Improving page speed also helps indexing. The slower and heavier your pages are, the fewer of them Google will crawl. That isn’t an issue if your site has 500 pages. It is an issue when you’re trying to get a million pages indexed.

The Google Search Console Crawl Stats report is an underutilized tool. The report shows how Googlebot is crawling your site, including the total number of crawl requests, total download size and average response time for fetched resources.

Bonus: Mobile user experience

About 63% of website traffic is mobile, according to Statista. But the majority of sites aren’t prioritizing their mobile experiences, according to a study by the Baymard Institute.

For example:

  • 95% of sites put ads in key areas of the homepage that cause interaction issues.
  • 61% don’t use the correct keyboard layouts, which causes accidental typos.
  • 66% place tappable elements too close together, and 32% of sites have tappable elements that are too small.

A responsive website is the baseline. But mobile experiences go beyond this foundation. The most successful enterprises are thinking about how to create sites that are dialed in for mobile users.

While most would agree that many UX functions fall outside the realm of technical SEO, the ability of your site to retain and convert mobile traffic is a shared goal for SEO and UX teams.

With that in mind, you can analyze your mobile experiences alongside your colleagues by thinking about the following questions:

  • Are your most important pages meeting Core Web Vitals thresholds?
  • Is your critical content fully visible on mobile, or is it hidden behind tabs, accordions, or scripts?
  • Are you optimizing for mobile-first indexing by ensuring that structured data, internal links, etc., match desktop versions?
  • Is your content formatted for mobile scanning with short paragraphs, clear visual hierarchy, and fast-loading media?
  • Are you accounting for emerging user behaviors in your content, like voice queries and AI-generated summaries?
  • Is your navigation mobile-friendly, as in simple, thumb-friendly menus, intuitive hierarchy, and easy access to key actions?
  • Have you evaluated any gesture-based interactions, simplified checkout flows or reduced any input friction for mobile users?
  • Are you measuring real-user mobile performance (not just lab scores) to identify any friction in the wild?

Build momentum with high-impact technical wins

Technical SEO can feel overwhelming, especially when you don’t control the entire process. Focusing on fundamentals like site structure, crawlability, and user experience sets the stage for everything else in your SEO program.

Prioritize the areas that deliver the biggest impact for the least resistance, and build momentum from there.

AMD releases official statement on the Chuwi Ryzen CPU mislabelling scandal

AMD responds to the Chuwi CPU scandal. Over recent weeks, NotebookCheck has uncovered a CPU scandal involving Chuwi, a Chinese manufacturer found selling systems with mislabeled CPUs. Chuwi claimed its notebooks use AMD’s Ryzen 5 7430U, but in reality they used the much older Ryzen 5 5500U. This […]

The post AMD releases official statement on the Chuwi Ryzen CPU mislabelling scandal appeared first on OC3D.

Intel Advanced Packaging Complex in Malaysia to Go Live Later This Year

Intel is accelerating its advanced packaging push with a major move in Malaysia. According to reports from The Edge Malaysia, the company's new complex there, part of Project Pelican, is now 99% complete and set for full operations later this year. Malaysian Prime Minister Datuk Seri Anwar Ibrahim confirmed he met with Intel CEO Lip-Bu Tan and executives to review the project's progress. The first phase will launch assembly and testing capabilities for advanced packaging, marking a key step in Intel's foundry expansion strategy. The facility is designed to handle die sort, prep, and full production flows across both EMIB (Embedded Multi-die Interconnect Bridge) and Foveros technologies, critical for supporting high-volume chiplet-based designs. As we reported in December last year, Intel values the project at approximately $7 billion and aims to transform Malaysia into a major regional hub for its advanced packaging operations. An additional $200 million investment has already been committed to finish the site.

Intel continues to advance its EMIB packaging approach. Unlike traditional silicon interposers used by NVIDIA in Blackwell, EMIB embeds conductive bridges directly into the substrate. This approach is cheaper, more efficient, and better suited for high-density chips. Intel is now aiming for 120 x 120 mm packages, up from the current 100 x 100 mm. These larger packages can support up to twelve HBM stacks, compared with eight in current designs. By 2028, Intel plans 120 x 180 mm packages capable of handling twenty-four HBM stacks. The company has also upgraded EMIB-T, its latest version incorporating through-silicon vias (TSV), to support next-gen HBM4 memory, now entering mass production. That is critical as AI chipmakers demand higher bandwidth and tighter integration. However, bigger isn't always easier. Scaling up package size increases warpage risk and yield challenges during manufacturing.

(PR) EK Releases New EK-Pro GPU Water Block for NVIDIA RTX PRO 6000 Blackwell Server Edition and MAX-Q

EK by LM TEK is proud to announce the EK-Pro GPU Water Block for NVIDIA RTX PRO 6000 Blackwell Server Edition & MAX-Q Workstation Edition GPUs, a high-performance single-slot solution engineered for high-density AI server deployments and professional workstation applications. Designed for NVIDIA RTX PRO 6000 GPUs, this full-cover EK-Pro block actively cools the GPU core, VRAM, and VRM, ensuring optimal performance, stability, and efficient heat dissipation.

The single-slot form factor maximizes GPU density, making it ideal for AI and server infrastructure. Integrated quick-disconnect fittings enable efficient maintenance without the need for disassembly or loop draining, reducing downtime and supporting scalable data center operations. The EK-Pro GPU Water Block for NVIDIA RTX PRO 6000 Blackwell Server Edition and MAX-Q Workstation Edition is available through the EK Shop via our enterprise team. Please note this block is not compatible with the reference RTX Pro 6000 Workstation Edition GPU.

JobScroller – Search fresh tech jobs and get instant AI resume fit scores


JobScroller aggregates tech jobs directly from company career pages and updates them daily, so you see fresh listings from 1,100+ employers without recruiters or stale posts. You can search roles across disciplines and apply via 100% direct links.

It also offers an AI resume checker that reads your resume and each job description to give a Gemini-powered match score, highlight missing hard skills, and suggest targeted edits. You can explore salary intelligence and track opportunities in one place.

View startup

Peak Pursuit – Climb a shared leaderboard by tracking fitness, health, and mind


Peak Pursuit is a competition-based health and fitness tracker that turns your workouts and habits into points on a shared leaderboard. Join groups with friends, family, or coworkers to log runs, rides, lifting, and daily habits, and see real-time insights with streaks across Fitness, Health, and Mind to stay accountable and motivated.

View startup

Local content playbook: From service pages to jobs-to-be-done pages


Local SEO has a visibility problem, but it’s not where most teams think. It’s not about rankings for “near me” or service keywords.

It’s everything that happens before that moment, when customers are trying to figure out what’s wrong, what it means, and whether they need help at all. That gap is why so much high-intent demand slips through the cracks.

Service-first site structures miss real search behavior

Most local service websites are built the same way: a homepage at the top, then service pages, and often location pages underneath. It’s a good, clean structure, and it makes sense because it mirrors how the business thinks.

You offer drain cleaning, furnace repair, and emergency roof replacement, and you want to show up for β€œdrain cleaning Brookline, MA,” or β€œfurnace repair near me.” That structure also aligns with how Google’s local algorithm has historically rewarded local businesses.

The issue is that customers don’t always start with the service name. A lot of the time, they start with the problem in front of them.Β 

β€œI need drain cleaning” isn’t always the first thing that pops into a homeowner’s mind. Instead, they might be thinking, β€œMy kitchen sink is backed up, it smells, and I don’t want to make this worse.” 

A property manager isn’t necessarily thinking of β€œHVAC maintenance.” They’re thinking, β€œThis unit is blowing cold air again, and tenants are already complaining.” 

Service-first vs problem-first

If your site is built only around service names, you can miss a big part of the search journey, where people are diagnosing, comparing options, and trying to decide if this is a DIY or a β€œcall someone now” situation.

That mismatch is why so many local sites underperform on some of the highest-value searches in their market. They may have strong service pages, but they don’t have pages designed for the way people actually search when the situation is unfolding. Jobs-to-be-done pages are a practical fix for that gap.

JTBD pages: The middle layer

What is a jobs-to-be-done page?

A jobs-to-be-done (JTBD) page is built around what the searcher is trying to accomplish in real life, not what the service is called. It’s a β€œhelp + hire” page that lets the reader understand what’s happening, what their options are, and what a smart next step looks like, while also making it easy to contact a professional when they’re ready.

At a glance, it can look like a blog post because it’s informational, but its intent is different. A blog post often exists to attract traffic or cover a topic broadly. A JTBD page exists to support a decision and convert the right visitors into calls and estimate requests.

You can usually feel the difference immediately. A JTBD page doesn’t open with a long introduction. It opens by confirming the situation in plain language and offering a quick path forward if the issue is urgent. The goal is to reduce uncertainty fast, because uncertainty is what keeps people bouncing between search results instead of picking up the phone.


Why service pages still matter but aren’t enough

Service pages are still quite important, and they’re still the best fit for searches where the customer already knows exactly what they want and is choosing between providers. These pages tend to win for hire-ready searches like:

  • β€œNear me” searches.
  • β€œBest” searches.
  • Service + town searches.

The gap is that a huge portion of local demand shows up earlier as problem-first searches. People search for symptoms. They search β€œwhy,” β€œhow,” β€œwhat does it cost,” and β€œis this dangerous.” 

If your site only offers service pages, you’re often invisible during the earlier stage where trust is formed. The business that helps someone understand the problem is often the one they call when they decide it’s time.

JTBD pages help you show up earlier without drifting into generic informational content that doesn’t lead anywhere.

Dig deeper: Local SEO sprints: A 90-day plan for service businesses in 2026

The JTBD structure that consistently converts

The JTBD pages that perform best tend to follow the same decision sequence customers follow in their heads. They start with symptoms, then move into likely causes, then options, then cost context, and then a clear line for when it’s time to call a pro.

JTBD decision flow

1. Start with symptoms, not marketing

Starting with symptoms helps the reader self-identify quickly. You’re not trying to impress them yet. You’re trying to confirm they landed on the right page. A short symptoms section mirrors their lived experience and makes the content feel immediately relevant.

Right after symptoms is usually the best place for a small conversion nudge that’s practical, not salesy. Something like: β€œIf you need this fixed today, call. If not, keep reading to understand what’s likely going on.”

2. Explain likely causes without pretending you can diagnose remotely

This is where a lot of local content goes wrong in either direction. Some sites oversimplify and turn every issue into a one-line answer. Others write a technical essay that overwhelms the reader.

A better approach is to list the most likely causes, ordered from common and simple to less common and more serious, and use conditional reasoning to show what would change the diagnosis. For example:

  • If it’s only one fixture, it’s often a localized issue.
  • If multiple fixtures are affected, it’s more likely downstream.

That kind of conditional guidance is useful, and it signals competence.

3. Give options: Safe checks, pro fixes, and what to avoid

After identifying the causes, people want to know what they can do right now. You don’t need a full DIY tutorial. The goal is triage.

Provide a few low-risk checks to help someone avoid an unnecessary call, along with clarity on when continuing to β€œtry things” becomes risky or wasteful.

A simple options section often includes:

  • A few safe checks that take 5–10 minutes and don’t require special tools.
  • What a professional typically does on a service call, described in outcomes.
  • What not to do, focusing on the common actions that create damage.

This is also where conversions happen without pressure. When someone can visualize what a pro will do, the process feels less intimidating.

A lot of local conversions are anxiety conversions. People aren’t just buying the fix, they’re buying relief and certainty.

Dig deeper: Scalable local SEO practices

4. Include cost context without boxing yourself in

Pricing content doesn’t need to promise exact numbers. People are going to look it up anyway. If your page helps them understand realistic ranges and what drives cost, you become the safer choice.

A strong cost section usually covers:

  • A realistic range for the common, simple scenario.
  • The main factors that push costs higher (i.e., access, severity, time sensitivity, parts availability, recurring issues).
  • A quick note on how to avoid surprises.

The tone matters. You’re not selling a coupon. You’re reducing uncertainty.

5. Draw a bright line for β€˜when to call a pro’

This is the conversion center of a JTBD page. Many pages just hint at it. The best ones state it clearly and make the triggers specific and unmissable.

Examples of β€œcall a pro” triggers include:

  • The issue keeps returning within a day or two.
  • Multiple fixtures or rooms are affected.
  • There’s evidence of leaks, water damage, or sewage odors.
  • There’s anything involving gas, electrical proximity, or structural risk.
  • Delaying is likely to make the repair more expensive.

The reader wants permission to stop guessing. When you give them that permission after guiding them through symptoms, causes, options, and cost context, your CTA feels like the logical next step, not a marketing maneuver.

Where these pages should live on a local website

If you want these pages to feel like service assets rather than β€œblog content,” placement matters. Don’t bury them in a dated blog feed. Put them in a dedicated section like:

  • Problems we fix.
  • Help.
  • Homeowner guides.
  • Service resources.

This signals permanence and usefulness and makes internal linking cleaner. A good rule is to include clear conversion moments throughout the page without overdoing it:

  • Near the top for urgency.
  • Near β€œwhen to call a pro” for decision.
  • At the end for readiness.

Example: β€˜Kitchen sink draining slow’ as a JTBD page

An effective version of this page opens with a plain-language title: β€œKitchen sink draining slow? Here’s what causes it and what to do next.” The intro stays brief and sets expectations: most slow drains are caused by grease, soap scum, or buildup in the trap or branch line, and this guide covers safe checks, realistic options, and clear signs it’s time to call.

Symptoms come first, helping the reader quickly confirm they’re in the right place: slow draining, gurgling, odor, or backup when the dishwasher runs. From there, the page moves into likely causes, using conditional guidance to help narrow things down.

Next comes options: a few low-risk checks, a short β€œwhat not to do,” and a plain explanation of what a plumber typically does on a service call. This leads naturally into pricing context, with realistic ranges and the factors that influence cost.

Finally, β€œwhen to call a pro” makes the decision easy. Recurring clogs, multiple drains, leakage, sewage odor, or shared-building situations where DIY mistakes affect others all signal it’s time to bring in help.

The page is informational, but it’s decisional. It helps the reader choose a next step. That’s why it converts.



How JTBD pages fit with service pages

JTBD pages serve to complement and support existing service pages. A simple model is to keep your main service pages as core conversion targets, then add a β€œProblems we fix” cluster around your highest-value services.

For internal linking, JTBD pages link to the relevant service page as the β€œsolve this quickly” path, and service pages link back to JTBD pages as the β€œnot sure what’s causing it” path.

This expands your footprint into problem-first searches and funnels visitors into your service pages with more trust and clarity than they would have had if they arrived cold.

Dig deeper: The local SEO gatekeeper: How Google defines your entity

Keyword research for β€˜Problems we fix’ pages

The easiest way to pick JTBD topics is to start with what customers say before they know the service name. Better starting points than a keyword tool include:

  • Transcripts.
  • Estimate requests.
  • Google reviews.
  • The questions your team answers every week.

Those phrases become your most natural page titles and headings because they’re already written in the customer’s language.

Once you have a starter list, use your favorite keyword tool to expand it and sanity-check demand. You’re looking for problem-first patterns like:

  • β€œWhy is this happening.” 
  • β€œWhat causes it.” 
  • β€œIs this dangerous.” 
  • β€œShould I shut it off.” 
  • β€œHow much does it cost.” 

These queries are usually informational in intent and often sit one step before a call, especially when the symptom is urgent or recurring.

A quick way to qualify topics is to ask whether the query has a clear β€œhire” outcome hiding underneath it. β€œFurnace blowing cold air” does. β€œToilet keeps running” does. β€œWhy does my house have hard water” might, depending on the business. If the query is purely academic or doesn’t naturally lead to a service call, it’s usually better as a blog post, not a JTBD page.

Finally, don’t build these pages randomly. Cluster them around your highest-value services first, and make sure each JTBD page has a straightforward internal link path to the related service page as the β€œsolve this quickly” option. That’s what turns a helpful page into booked work.

3 common mistakes that make these pages underperform

Even well-structured JTBD pages can fall short if they miss a few fundamentals.

Writing generic content

If the page could belong to any business in any city, it won’t earn trust or conversions. The fix is to include β€œwhat to expect” language and provide relevant local context without turning the page into geo-stuffing.

Over-teaching DIY

When a page becomes a full tutorial, it attracts the wrong audience and increases the chance of damage or liability. Keep DIY checks low-risk and focused on triage.

Avoiding the decision moment

If you don’t clearly state when to call a professional, you miss the main conversion opportunity on the page.

How JTBD pages support AI-driven search visibility

JTBD pages also tend to align with the queries that trigger AI answers in the first place. A lot of AI Overviews show up for problem-first searches, especially:

  • β€œWhy is this happening.” 
  • β€œWhat should I do next.” 
  • β€œIs this serious.” 

JTBD pages are designed to satisfy that moment, while a standard service page usually assumes the customer has already decided what they need.

The structure helps, too. When a page is organized into symptoms, likely causes, options, cost context, and clear β€œcall a pro” thresholds, it becomes easier for systems to summarize accurately and cite specific passages without guessing.

If you want one simple upgrade, add a short β€œQuick take” paragraph near the top that summarizes the likely causes and next step in three to four sentences. It helps rushed readers and creates a clean block of text that AI systems can lift without distorting your meaning.


Turning help into booked jobs

Local businesses don’t lose jobs because they lack service pages. They lose jobs because they’re invisible or unconvincing during the moment customers are trying to understand what’s happening.

Jobs-to-be-done pages are a practical way to meet customers earlier, answer the problem they’re actually searching for, and guide them toward a safe next step, including a clear path to book service.

When built with the right structure and intent, they become some of the most useful pages on a local website for both search performance and real-world leads.

30-day vs. 7-day attribution in Google Ads: What the shorter window revealed

Google Ads may be over-crediting your conversions: a 7-day test tells a different story

For many advertisers, a 30-day click attribution is the default conversion window setting in Google Ads. Once that’s set, it’s rarely revisited. But what if your customers convert within a week, or even two days?

One of my clients, a DTC retailer in an intensely competitive industry, has an average conversion window of 2.2 days. Yet we were optimizing campaigns using a 30-day click window, which meant conversions were credited weeks after the initial interaction. This muddied the waters when assessing the true incremental impact of different advertising efforts, especially when trying to capture that impulse-buying behavior.

With that in mind, we transitioned the account from a 30-day click window to a 7-day click window in January. Here’s what changed and what we learned.

Inside the 7-day attribution test

This client allocates the majority of its marketing budget to Meta Ads. So, when looking at platform reporting, Meta Ads (unsurprisingly) accounted for the majority of sales. Since Google Ads operated on a 30-day click window at the time, that platform also accounted for a large percentage of sales.

When your average conversion lag is about two days, allowing 30 days of click credit can inflate perceived contribution in-platform. Because of this, neither platform’s incremental impact was clear, making it difficult for our client to know where to invest the majority of their advertising dollars.

Before making any changes, we analyzed conversion path data to understand how long customers were actually taking to purchase. Over the last three months, users converted in an average of 2.2 days, with the majority of conversions happening in less than a day:

Purchase conversions by day
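This kind of lag analysis reduces to two numbers: the average lag and the share of conversions a 7-day click window would capture. A minimal sketch; the lag values below are illustrative, not the client’s actual data:

```python
# Given per-conversion lag (days from click to purchase), compute the
# average lag and the share a 7-day click window would capture.
# Illustrative sample data, not real account data.

lags_days = [0, 0, 0, 1, 1, 2, 3, 5, 6, 28]

avg_lag = sum(lags_days) / len(lags_days)
within_7 = sum(1 for d in lags_days if d <= 7) / len(lags_days)

print(round(avg_lag, 1), f"{within_7:.0%}")
```

Note how one long-tail conversion (28 days) pulls the average up while the 7-day window still captures nearly everything, which is why the capture share is the more useful number for this decision.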

We didn’t just flip the switch. We hypothesized that since the average conversion window was 2.2 days, we shouldn’t see too much volatility. To be safe, we first set up this new conversion action as a secondary conversion.

So it looked like this:

  • Step 1: Duplicate the primary purchase conversion with a 7-day click window and set it as a secondary conversion action.
  • Step 2: Monitor performance for two weeks.
  • Step 3: Transition it to primary optimization on January 12, 2026.
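
For teams managing conversion actions programmatically, the phased rollout above can be sketched as plain data. This is a hypothetical sketch, not real API client code; the field names mirror the Google Ads API's ConversionAction resource (`click_through_lookback_window_days`, `primary_for_goal`) but should be verified against the current API version before use:

```python
# Step 1: duplicate the purchase action with a 7-day click window,
# kept as a secondary conversion at first (reported, not optimized against).
new_action = {
    "name": "Purchase (7-day click)",
    "click_through_lookback_window_days": 7,
    "primary_for_goal": False,
}

# Step 3, after ~2 weeks of side-by-side monitoring: promote to primary
# optimization, at which point Smart Bidding starts optimizing against it.
def promote_to_primary(action: dict) -> dict:
    updated = dict(action)
    updated["primary_for_goal"] = True
    return updated

live_action = promote_to_primary(new_action)
print(live_action["primary_for_goal"])  # True
```

Keeping the duplicate secondary first is what allows the side-by-side reporting comparison before Smart Bidding recalibrates.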

When you change a primary conversion action, smart bidding recalibrates, and learning phases reset. This phased approach allowed us to compare reporting side by side and prepare for any volatility.

Dig deeper: How to tell if Google Ads automation helps or hurts your campaigns

What happened after the switch

We compared the 30 days after the conversion action change to the previous 30 days, which included the peak holiday shopping season.

Results (in-platform)

  • Cost: Down 6.3%
  • Conversions: Up 42.9%
  • Conversion value: Up 52.1%
  • ROAS: Up 62.3%
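
These deltas are internally consistent: since ROAS is conversion value divided by cost, a 52.1% lift in value on 6.3% less spend implies the reported ~62.3% ROAS gain. A quick check:

```python
# ROAS = conversion value / cost, so the relative ROAS change follows
# directly from the reported value and cost changes.
value_change = 0.521   # conversion value up 52.1%
cost_change = -0.063   # cost down 6.3%

roas_change = (1 + value_change) / (1 + cost_change) - 1
print(f"Implied ROAS change: {roas_change:.1%}")  # 62.3%, matching the report
```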

Initial results looked great, but we wanted to see if there was any measurable impact on the business.

Using Shopify sales data, we saw that total sales increased 20%, and net profit increased 30%.

More importantly, marketing mix modeling (MMM) data showed a shift in incremental contribution:

  • Google’s incremental ROAS increased 10% to 1.82.
  • Meta’s incremental ROAS dropped 25% to 0.59.

This was the strongest indication that shortening the attribution window helped clarify channel contribution.

Now, in full transparency, we were also restructuring campaigns, adjusting budgets, and refining bidding during this time. So, we can’t give all the credit to the shorter attribution window. But we can say performance wasn’t negatively affected, and the contribution percentage improved.

How a 7-day window improved signal quality

With overlapping attribution between Meta and Google, both channels looked over-credited in-platform. By shortening Google’s click window, we limited its ability to claim delayed conversions that were likely influenced by other touchpoints. Tightening this window reduced cross-platform duplication and gave us a clearer view of incremental impact.

Additionally, instead of waiting weeks to understand campaigns’ actual ROAS, we could evaluate performance within days and make adjustments more confidently.

By reducing to a 7-day click window, we:

  • Decreased delayed attribution.
  • Tightened optimization feedback loops.
  • Improved performance diagnostics.

This change also significantly affected Smart Bidding behavior. Automated bidding strategies, such as target return on ad spend, optimize based on conversion signals. With a 30-day window, those signals are delayed, meaning the algorithm reacts more slowly to changes such as bid adjustments, seasonality shifts, and budget reallocations.

Moving to a 7-day window continuously feeds fresher signals to Smart Bidding strategies. This created tighter alignment between spend and actual buying behavior. Combined with marketing mix modeling data, the picture became even clearer.

The cleaner attribution structure gave us stronger confidence in making account optimizations and, even better, helped our client make more informed business decisions about where to invest ad dollars.

In short, tightening the conversion window didn’t just change reporting. It improved the quality of the signal driving optimization decisions.

Dig deeper: In Google Ads automation, everything is a signal in 2026

The downside (and why this isn’t a universal fix)

Shortening an attribution window could work for you, but you should consider the trade-offs.

Reported conversion volume will likely drop, at least initially. Removing delayed conversion credit can make performance appear weaker overnight, even if actual sales haven’t changed. That can create internal concern if your client or other stakeholders aren’t prepared.

Smart Bidding will need to recalibrate. Changing a primary conversion action is a significant change to an account. This will trigger a learning phase and short-term volatility, especially in accounts using automated bid strategies such as target ROAS and Maximize Conversion Value.

Most importantly, this approach only works if it aligns with your sales cycle. For high-consideration or longer purchase journeys, a 7-day window may undercount legitimate conversions, suppress ROAS, and limit optimization data. A shorter attribution window is only better if it reflects how your customers are actually buying.

Adjusting attribution wasn’t the silver bullet here. In this case, other account improvements were happening simultaneously, and this was just one lever.

When attribution reflects reality

Ultimately, this change wasn’t about improving platform metrics. It was about improving business insights.

For this client, aligning the attribution window with a 2.2-day conversion cycle improved conversion signal quality, enhanced Smart Bidding, clarified cross-channel impact, and gave leadership stronger confidence in where to invest.

Whether a 7-day click model makes sense depends on how closely your attribution settings reflect your account’s buying cycle.

New AMD Ryzen CPU leak with boosted speeds

Two new AMD Ryzen CPUs leak with boosted TDPs and clock speeds A new leak from chi11eddog has unveiled two potential AMD CPUs that could launch this year to refresh AMD’s Ryzen 9000 range. These new CPUs feature higher base/boost clock speeds than their existing counterparts. That means they aim to deliver higher performance for […]

The post New AMD Ryzen CPU leak with boosted speeds appeared first on OC3D.

First 16 TB M.2 NVMe SSD Listed at Eye-Watering $16,000 Price Tag

If your workstation build has no budget limits, there's now a way to get 16 TB of NVMe PCIe 4.0 SSD storage on a single M.2 2280 SSD. However, this small storage drive will set you back nearly $16,000, with Amazon listing a brand new Exascend PE4 16 TB M.2 SSD at $15,935. What makes this drive so expensive is the fact that you are getting 16 TB of NVMe SSD storage in a super-dense configuration that fits into a single M.2 slot on a motherboard, requiring no additional drives to achieve this capacity. The Exascend PE4 drive uses a PCIe 4.0 interface and can reach sequential read speeds of up to 3,270 MB/s, while write speeds can reach up to 2,980 MB/s, just shy of 3 GB/s. The drive uses TLC 3D NAND Flash storage modules from an unknown manufacturer.

The drive is rated for 16,640 TB written (TBW) and about two million hours of MTBF, meaning its usable life should comfortably cover sustained 24/7 operation in systems where reliability is the number one factor. Interestingly, Exascend only offers a five-year warranty but claims its hardware will significantly outlast that. Rigorous testing and massive capacity justify the high price point, and some users will genuinely need such a drive. Additionally, idle power consumption remains under 1.3 W, while the SSD can reach up to 7.2 W of active power under full load, making efficiency rather good for a drive of this density.
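
Assuming the endurance rating is spread over the five-year warranty period (an assumption; Exascend doesn't frame it that way), the quoted figures work out to roughly 0.57 drive writes per day. A back-of-the-envelope sketch:

```python
# Endurance math from the figures quoted above.
capacity_tb = 16
tbw_tb = 16_640
warranty_days = 5 * 365

full_drive_writes = tbw_tb / capacity_tb   # complete fills of the drive
dwpd = full_drive_writes / warranty_days   # drive writes per day over warranty
print(f"{full_drive_writes:.0f} full drive writes, ~{dwpd:.2f} DWPD over the warranty")
```

That is comfortably above typical consumer TLC drives, which helps explain the enterprise positioning.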

AMD Readies Ryzen 7 9750X and Ryzen 5 9650X Desktop Processors with Increased TDP

AMD is preparing an update to its Ryzen 9000 series desktop processor lineup with the introduction of two new models: the Ryzen 7 9750X and the Ryzen 5 9650X. Both chips are non-X3D (they lack 3D V-Cache) and implement the regular "Zen 5" CCD with 32 MB of on-die L3 cache. The two are designed with increased clock speeds and TDPs, and their launch closely follows Intel's recent product stack refresh with the Core Ultra 7 270K Plus and Core Ultra 5 250K Plus.

The Ryzen 7 9750X is an 8-core/16-thread chip with a base frequency of 4.20 GHz and a maximum boost frequency of 5.60 GHz, a significant increase over the 3.80 GHz base and 5.50 GHz maximum boost of the Ryzen 7 9700X. The 9750X comes with a 120 W TDP out of the box. In comparison, the 9700X ships with a 65 W TDP, although AMD allowed motherboard vendors to provide a BIOS-based 105 W TDP mode that doesn't break warranty, designed to improve boost frequency residency. The 9750X therefore raises not only clocks but also the TDP, going to 120 W, beyond even that BIOS-based 105 W mode.

(PR) ASUS Announces ExpertCenter PN55 Mini PC with AMD Ryzen AI 400 Series

ASUS today announced the ExpertCenter PN55 Mini PC, a compact Copilot+ PC powered by the latest AMD Ryzen AI 400 Series processors with class-leading multithreaded performance and advanced XDNA 2 NPU delivering up to 55 AI TOPS. Integrated AMD Radeon 800M graphics provides prosumers and content creators with exceptional, incredibly detailed visuals. Despite its small footprint, ExpertCenter PN55 offers dual LAN and up to six USB ports, giving it the flexibility to take on a variety of tasks including AI-accelerated productivity, collaboration, and content creation. It also offers design features that enable tool-less upgrades.

Enhanced productivity with Copilot+
ASUS ExpertCenter PN55 Mini PC is powered by up to an AMD Ryzen AI 9 HX 470 processor featuring up to 55 TOPS of XDNA 2 NPU performance. This robust compute engine enables smooth generative AI tasks, allowing users to generate ideas, create content, and upscale images in seconds. Paired with up to 96 GB of DDR5 memory, ExpertCenter PN55 ensures rapid access to AI-accelerated workloads and supports seamless multitasking for everyday productivity tasks such as web browsing, presentations, and content creation. AI PCs accelerate everyday tasks by automating repetitive work, helping users save time and work more efficiently.

Pounce – Turn 15 minutes into real followers and replies on X and Reddit


Pounce streams the best conversations from X and Reddit straight to you. It delivers real-time posts into a focused inbox, seconds after they go live, so you can reply first and build momentum. Set your strategy once, then let AI filter for relevance and draft replies in your voice. Track daily goals and session stats to turn 15 minutes into consistent growth and real connections.

View startup

Meridian Realms AI – Build living worlds and characters with AI and craft epic adventures


Meridian Realms is an AI-powered platform for immersive storytelling and worldbuilding. Create rich universes across any genre, design characters with long-lasting memory and evolving relationships, and explore narratives through natural dialogue and meaningful choices. Generate artwork in multiple styles, collaborate in shared worlds, and run group adventures with your favorite characters. Choose from the public catalogue, start crafting your own worlds and characters, or use AI to flesh out backstories, settings, and scenes.

View startup

Claude Code Security and Magecart: Getting the Threat Model Right

When a Magecart payload hides inside the EXIF data of a dynamically loaded third-party favicon, no repository scanner will catch it – because the malicious code never actually touches your repo. As teams adopt Claude Code Security for static analysis, this is the exact technical boundary where AI code scanning stops and client-side runtime execution begins. A detailed analysis of where Claude

9 Critical IP KVM Flaws Enable Unauthenticated Root Access Across Four Vendors

Cybersecurity researchers have warned about the risks posed by low-cost IP KVM (Keyboard, Video, Mouse over Internet Protocol) devices, which can grant attackers extensive control over compromised hosts. The nine vulnerabilities, discovered by Eclypsium, span four products from four vendors: the GL-iNet Comet RM-1, Angeet/Yeeso ES3 KVM, Sipeed NanoKVM, and JetKVM. The most severe of them allow

Why customer personas help you win earlier in AI search

Buyers ask a question. You answer it clearly. That’s the premise behind the "They Ask, You Answer" (TAYA) framework, and it holds up in AI-driven discovery.

In theory, it’s simple. In practice, teams struggle to anchor their approach and get started. The result is predictable: generic questions that produce generic content.

That’s a problem, especially as AI shifts search behavior from short queries to more detailed, contextual questions. The difference comes down to the questions you choose to answer. And that’s where a simple concept makes a big difference: buyer personas.

The problem with generic questions

The generic question trap happens because when marketing teams brainstorm content ideas, they often start with topics like:

  • What is CRM software?
  • What is marketing automation?
  • What is warehouse management?

Odds are, you and many of your competitors have already answered these questions somewhere, or could easily. These are reasonable questions. But they’re also questions no real buyer actually asks.

Real buyers ask questions that reflect their situation and their problem. Something more like this:

  • "What CRM should a 10-person sales team use?"
  • "Why are leads slipping through the cracks in our marketing?"
  • "Why is our warehouse picking speed so slow?"

The difference is subtle but important. The second set of questions includes a person and a problem. That context completely changes the quality of the content.

Why this matters more in AI-driven discovery

Instead of typing short keywords, buyers ask detailed, contextual questions:

  • "I run a 15-person marketing team, and we’re struggling to track leads properly. What should we do?"

The AI explains the problem, outlines solutions, and suggests vendors. In other words, the buyer is having a consultation with an AI.

If your content explains why a specific persona experiences a specific problem, you have a much better chance of shaping how that problem is understood in the first place.

This puts you into the conversation and consideration set earlier, making it more likely you’ll stay in as the user refines their thinking.

Consider this scenario. I’ll use myself as an example.

  • Marcus.
  • 50 years old.
  • Meeting some old friends in Birmingham, UK.
  • Looking for ideas of things to do for the day.

I start by asking a somewhat broad opening question:

  • "I’m looking for some ideas of things to do with friends in Birmingham on the weekend. I’m 50, and I have several male friends coming down to get together for a day. There will be some beers, no doubt, but we need some activities as well."

Answers then include a bunch of top-level suggestions: bars, food, and activity-type bars. One of these suggestions is for an F1 gaming arcade. I like games, but not so much cars, so this leads my follow-up to dig in a bit more:

  • "Ah, we all like games. What about gaming arcades? What gaming arcades could you recommend?"

I get a bunch of recommendations, one of which is for a pinball arcade in Digbeth (a sub-area of Birmingham).

  • "Pinball Factory in Digbeth sounds fun. What else is there to do around there, food- and drinks-wise?"

I then get a set of responses that helps me narrow the list and formulate a perfect day and evening out for a group of old friends.

Being in the early part of the conversation lets you shape the dialogue and increases your chances of being part of the eventual solution.

Personas make TAYA far more precise

Personas are the tools that let you think like your customers and figure out the kinds of questions they ask long before they get to what you have to offer.

When you can identify a customer segment, you can dig into that persona, understand their problems and goals, and think like your target customer to generate content ideas that help them decide earlier.

Now, instead of writing content for a generic avatar, write for specific people. For example, instead of "Things to do in Birmingham?" you might write, "The best day out in Birmingham for a group of 50-year-old gamers."

You’re still addressing the same underlying topic. But now the content speaks directly to a real person experiencing a real problem.

That shift usually leads to much more useful content. This helps you work your way into those conversations, rather than relying on the brutal battleground of commercial queries.

A simple way to uncover better questions

You don’t need a complicated persona framework to make this work. In most cases, a simple three-question exercise will uncover the kinds of problems your buyers are actually trying to solve.

For each persona you serve, ask:

  • What are they responsible for? For example:
    • Hitting sales targets.
    • Generating marketing leads.
    • Running warehouse operations.
  • What problems make that responsibility difficult? Examples might include:
    • Missed sales targets.
    • Inefficient warehouse processes.
    • Poor lead tracking.
    • Slow picking speeds.
  • What would they ask Google or an AI assistant when that problem occurs?
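
The three-question exercise above maps naturally onto a small data structure. A minimal sketch, with personas and questions invented for illustration:

```python
# Hypothetical personas run through the exercise:
# responsibility -> problems -> the questions a buyer would actually ask.
personas = {
    "sales manager": {
        "responsible_for": "hitting sales targets",
        "problems": ["missed sales targets", "poor lead tracking"],
        "questions": ["What CRM should a small sales team use?"],
    },
    "warehouse ops lead": {
        "responsible_for": "running warehouse operations",
        "problems": ["slow picking speeds", "inefficient processes"],
        "questions": ["Why is our warehouse picking speed so slow?"],
    },
}

# Flatten into a content backlog: one persona-grounded question per row.
backlog = [(name, q) for name, p in personas.items() for q in p["questions"]]
for name, question in backlog:
    print(f"{name}: {question}")
```

Even a spreadsheet with these three columns per persona does the job; the point is that every content idea stays attached to a person and a problem.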

Now the questions start to look very different. Instead of broad category topics like: "What is CRM software?"

You start to see questions like:

  • "Why are leads slipping through the cracks in our CRM?"
  • "What CRM should a small sales team use?"
  • "Why is our warehouse picking speed so slow?"

Those questions reflect real situations experienced by real people: exactly where the best content opportunities exist.

'They Ask, You Answer' works better with personas

Now we revisit the big five topic areas from TAYA: cost, problems, comparisons, reviews, and best-of. These topics already give us a powerful structure for content.

But when they’re approached generically, they often lead to content that looks exactly like everyone else’s.

So you can go from the typical, generic kinds of questions:

  • "How much does CRM software cost?"
  • "What problems do warehouse systems have?"
  • "HubSpot vs. Salesforce"
  • "Best CRM systems"
  • "Salesforce review"

To questions that are more connected to the needs of our target audience:

  • "What does CRM cost for a 10-person sales team?"
  • "Why do my warehouse managers struggle with picking accuracy?"
  • "HubSpot vs. Salesforce for a small B2B marketing team"
  • "Best CRM for growing sales teams"
  • "Is Salesforce worth it for a mid-size sales organization?"

The topic hasn’t changed, but the question now reflects the buyer’s reality. This shift produces more useful content and aligns with how people interact with AI assistants.

Those questions include their role, company size, or situation:

  • "We’re a small marketing team struggling to track leads properly. What CRM should we use?"

If your content already answers these persona-driven questions, you increase the chances that your explanation becomes part of that conversation.

In other words, personas don’t replace They Ask, You Answer. They make it more precise, moving you from answering generic topics to answering the exact questions buyers ask when solving a real problem.

Persona-driven questions improve TAYA content for three simple reasons.

  • They mirror how buyers actually think: People rarely search for textbook definitions. They search for solutions to problems. Personas keep the content anchored in those problems.
  • They produce more useful content: When you know who the content is for, it naturally includes better examples, more practical advice, and clearer explanations. In other words, content that genuinely helps someone move forward.
  • They align with how AI explains problems: AI assistants increasingly start by explaining the problem before recommending a solution. Content that clearly describes why a specific persona experiences a specific challenge fits neatly into this pattern. This increases the chances that your explanation influences the AI’s response.

Start with the problem, not the product

One of the most common mistakes companies make with content marketing is starting with their product.

But buyers rarely start their journey there. They start with a problem.

Personas help keep your content anchored in the buyer’s world rather than your own product. Remember, it’s about the customer, not you.

And that simple shift often makes the difference between content that merely exists and content that actually influences decisions.

Where you enter the conversation matters

"They Ask, You Answer" remains one of the most powerful frameworks available to marketers. But the effectiveness of the framework depends entirely on the quality of the questions you answer.

Personas help you turn vague topics into real problems and ask better questions. When your content speaks directly to those problems, buyers and AI systems are far more likely to trust your answers.

The Xbox App now supports manually added 3rd party games

Microsoft is now allowing PC gamers to add 3rd-party games and apps to the Xbox App Microsoft has given PC gamers the ability to add any 3rd-party games (or any .exe file) to the Xbox PC App. This is a feature that Valve’s Steam platform has offered for years, allowing PC gamers to centralise […]

The post The Xbox App now supports manually added 3rd party games appeared first on OC3D.

AMD Releases Statement on Chuwi's Ryzen Processor Mislabeling Scandal

AMD today released a statement in China on the scandal involving Chinese notebook OEM Chuwi mislabeling Ryzen 5000 series mobile processor models as Ryzen 7000 series. You can read all about the scandal in our older article, but to summarize, the company was found selling notebooks with cheap Zen 2 Ryzen 5 5500U processors with the processor name strings modified in the BIOS to falsely show Ryzen 5 7430U, a Zen 3 chip released almost three years later. This attempt at deception also extended to product marketing and to Ryzen 7000 series case badges on the notebook chassis.

AMD, in its Chinese-language statement to the Chinese press, as reported by Hong Kong-based HKEPC, came down hard on the malpractice by Chuwi. The company said that this behavior by Chuwi was in no form authorized by AMD; that the company has strict and legally-binding agreements with its OEMs over the handling of the AMD brand, product labels, or product promotion; and condemned the behavior, saying that such acts damage consumer confidence in AMD as a brand. It ended the statement saying that the company reserves the right to pursue legal action against those involved.
The machine translated statement by AMD follows:

StoreAsk – Ask plain-English questions and get instant insights from your Shopify data


StoreAsk lets Shopify merchants ask questions in plain English and get clear, actionable answers in seconds. It analyzes orders, products, customers, inventory, traffic, and marketing spend to surface trends, explain changes, and recommend next steps.

Connect your Shopify store in one click with read-only access, then get daily briefings, follow-up questions, and exports without dashboards or spreadsheets. Data stays secure with AES-256 encryption, SOC 2 compliance, and 99.9% uptime.

View startup

VERDICT.COM – Search case law, check court records, and find lawyers with AI


VERDICT.COM is a free AI-powered legal research platform that helps you search court records, explore case law and precedents, understand your rights, draft legal documents, and find qualified lawyers near you. Describe your situation to get targeted case law, verify legal letters, and create forms and agreements with guided assistance. It provides legal information and research support, not legal advice.

View startup

Product Walkthrough: How Mesh CSMA Reveals and Breaks Attack Paths to Crown Jewels

Security teams today are not short on tools or data. They are overwhelmed by both. Yet within the terabytes of alerts, exposures, and misconfigurations, security teams still struggle to understand context: Which exposures, misconfigurations, and vulnerabilities chain together to create viable attack paths to crown jewels? Even the most mature security teams can’t answer that

Nvidia CEO claims gamers are "completely wrong" about DLSS 5

Jensen Huang responds to complaints about Nvidia’s DLSS 5 tech Nvidia unveiled DLSS 5 earlier this week, and to say the least, the tech is controversial. Memes have been flying across the internet, calling the new AI technology little more than an "AI filter". Gamers are complaining that the tech is changing the look of […]

The post Nvidia CEO claims gamers are β€œcompletely wrong” about DLSS 5 appeared first on OC3D.

(PR) Thermaltake TR200 Series Delivers Mid-Tower Performance in a Compact Design

Thermaltake, a leading PC DIY brand for premium hardware solutions, introduces the TR200 Series, a compact microATX lineup designed to deliver high-end hardware compatibility, advanced cooling support, and modern visual customization within a minimalist form factor. The TR200 Series includes the TR200 and TR200 WS Micro Chassis, both available in Black and Snow, offering identical internal specifications with distinct front panel aesthetics to match different setup styles.

The TR200 WS features vertical real wood accents on the front panel, introducing natural warmth and refined texture while maintaining the same airflow performance and hardware compatibility as the TR200, which features a clean mesh front design. This allows builders to choose between a minimalist, modern aesthetic or a warmer, more decorative style that complements contemporary desk setups and living spaces.

Death Stranding 2: On the Beach Gets New Trailer Ahead of PC Launch and Confirms Ray Tracing, Ambient Occlusion, and Upscaling Support

Ahead of the Death Stranding 2: On the Beach PC launch scheduled for March 19th, Sony has released a new PC version launch trailer called "No Rain, No Rainbow", as well as confirmed that the game will be getting support for NVIDIA DLSS, AMD FSR, and Intel XeSS upscaling and frame generation technologies. Death Stranding 2: On the Beach will also support ray-traced reflections and ambient occlusion, where ray tracing will be used for surfaces like water and other reflections, while ambient occlusion should bring much more realistic shadows due to ambient lighting effects. Unfortunately, Sony did not include ray tracing in the previously released PC system requirements, so it remains to be seen how the Decima engine handles these effects in terms of CPU and GPU requirements. According to Sony, "these additional PC options are aimed at players with powerful hardware that want to push visual fidelity beyond the "Very High" graphics settings the game already offers."

In addition, the PC version of Death Stranding 2: On the Beach will also get uncapped frame rates, more extensive graphics settings options, super-ultrawide (32:9) and ultrawide (21:9) monitor support, full mouse and keyboard support with key binding, spatial sound support, and more. The PC version of the game will also get a new challenge with the "to the wilder" game mode, as well as some other new game features and content additions. Sony also released a new launch trailer for the PC version of the game, which you can check out below.

Intel Enables Precompiled Shader Delivery for Up to 3x Faster Game Loading Times

With the latest Arc 101.8626 WHQL graphics driver, Intel has extended its precompiled shader delivery service to Intel Arc B-series GPUs and Intel Core Ultra Series 3 and Series 2 SoCs with built-in Intel Arc GPUs. This enhancement aims to significantly reduce game loading times. Intel's service gathers game shaders in the company's private cloud infrastructure, where they are processed and precompiled. When you install the Intel Graphics Software App, the service identifies the games you play and downloads the precompiled shaders for those games, using the Intel app as a distribution service and creating a folder with the precompiled shaders. This approach allows games to load much faster, reduces stuttering on the first launch, and automatically updates shaders whenever they are revised, with Intel's service pulling the new shaders into the shared folder on your PC. If this sounds familiar, Microsoft is working on a similar mechanism called "Advanced Shader Delivery," for release later this year. Intel's approach is separate and independent. To TechPowerUp, Intel confirmed the following:
Intel: "Intel Precompiled Shaders is custom built and run by Intel. We are also working with Microsoft on launching Advanced Shader Delivery later this year. Together, both services will provide users of supported Arc GPUs with more game and game store coverage of technologies that reduce waiting times and in-game stutters due to shader compilation."

(PR) Biostar Introduces the BITWL-IHT Mini Industrial Motherboard

BIOSTAR, a leading manufacturer of edge computing solutions, industrial motherboards, graphics cards, and storage devices, is proud to introduce the new BITWL-IHT, a Thin Mini-ITX industrial motherboard engineered to deliver stable and scalable performance for modern intelligent infrastructure. Built to smoothly run with the Intel Alder Lake-N, Amston Lake, and Twin Lake series processors, the BITWL-IHT is designed to address the growing demand for reliable edge platforms across AIoT, industrial automation, HMI, kiosk, digital signage, and distributed edge computing environments where long-term stability and flexible integration are essential.

At its core, the BITWL-IHT supports Intel Processor N150, Intel Atom x7213E, and Intel Core i3-N305, offering adaptable processing performance across a wide spectrum of embedded and edge workloads. Combined with support for DDR5-4800 SO-DIMM memory up to 16 GB, the platform ensures responsive multitasking, efficient data handling, and improved power efficiency for continuous industrial operation. Its compact 170 mm x 170 mm slim Mini-ITX form factor enables streamlined system integration in space-constrained environments, making it ideal for slim kiosks, panel PCs, and embedded control systems.

Ubuntu CVE-2026-3888 Bug Lets Attackers Gain Root via systemd Cleanup Timing Exploit

A high-severity security flaw affecting default installations of Ubuntu Desktop versions 24.04 and later could be exploited to escalate privileges to the root level. Tracked as CVE-2026-3888 (CVSS score: 7.8), the issue could allow an attacker to seize control of a susceptible system. "This flaw (CVE-2026-3888) allows an unprivileged local attacker to escalate privileges to full root access

Microsoft Reveals Next-Gen DirectX Ray Tracing: Clustered Geometry, Partitioned TLAS, and GPU-Driven Acceleration Ops

Microsoft has finally released a second DirectX Ray Tracing (DXR) functional specification file that outlines what its ray tracing pipeline is expected to look like, the goals the company is pursuing, and what the technology does behind the scenes. In the original file, Microsoft described the ray tracing pipeline from ray shader generation, scheduling, and acceleration structure, all the way to the shading of the actual game. This time, the company has shared insights into areas such as clustered geometry, partitioned top-level acceleration structures (TLAS), and indirect acceleration structure operations.

Firstly, Microsoft introduces the concept of clustered geometry. The core graphics elements are triangles, the building blocks of the 3D worlds we have today. DXR clustered geometry treats groups of nearby triangles as common building blocks, allowing the GPU to build, move, and instantiate geometry in bulk. Instead of handling triangles separately with multiple calls, the GPU's task is now much simpler. DXR even defines compact vertex encodings and predefined template formats to keep the GPU memory and bandwidth needed for bulk geometry building and moving in check. As a result, the GPU doesn't have to update or duplicate existing geometry, and DXR will help render foliage, crowds, and in-game props once, allowing them to be moved around easily. This reduces GPU load and improves ray tracing performance in games.

Intel Arc GPU Graphics Drivers 101.8626 WHQL Released

Intel has released its latest 101.8626 WHQL Arc GPU graphics drivers, adding day-one support for Death Stranding 2: On the Beach and Everwind, as well as introducing the new Intel Graphics Shader Distribution Service, which should improve first load times by up to 2x on Intel Arc B-series GPUs and on Intel Core Ultra Series 3 and Series 2 CPUs with Intel Arc GPUs. The Graphics Shader Distribution Service is currently limited to a dozen games, so hopefully Intel will extend the list with future driver releases. In addition, the new Intel Arc GPU 101.8626 WHQL graphics drivers also improve performance in Nioh 3 on Intel Arc B-series GPUs by up to 9 percent at 1080p resolution with Ultra settings.
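The up-to-2x first-load improvement comes from skipping on-device shader compilation whenever a precompiled binary is already available. As a rough, hypothetical illustration (not Intel's implementation), such a cache is keyed on the shader source together with the driver version and GPU identity, since precompiled binaries are only valid for that exact combination:

```python
import hashlib

def shader_cache_key(source: str, driver_version: str, gpu_id: str) -> str:
    # Hash all three inputs together: a binary compiled for one
    # driver/GPU combination must never be served for another.
    blob = "\x00".join([source, driver_version, gpu_id]).encode()
    return hashlib.sha256(blob).hexdigest()

class ShaderCache:
    def __init__(self):
        self._store = {}   # key -> precompiled shader binary
        self.compiles = 0  # how many expensive compilations we performed

    def get_or_compile(self, source, driver_version, gpu_id, compile_fn):
        key = shader_cache_key(source, driver_version, gpu_id)
        if key not in self._store:            # cache miss: compile once
            self._store[key] = compile_fn(source)
            self.compiles += 1
        return self._store[key]               # cache hit: skip compilation
```

A distribution service would simply pre-populate `_store` over the network before the game's first launch, so even the very first load is a cache hit.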

The new Intel Arc GPU driver release also fixes several issues seen with previous driver releases, including an application crash with ray tracing enabled in Naraka: Bladepoint, cinematic corruption in Hogwarts Legacy, and visual corruption in the viewport when resizing the window with HDR enabled in DaVinci Resolve Studio. These fixes apply to Intel Arc B-series GPUs and Core Ultra Series 3 CPUs with Intel Arc GPUs. As this is a major WHQL release, Intel also lists all known issues that remain to be fixed in future driver releases.

DOWNLOAD: Intel Arc Graphics Driver 101.8626 WHQL

(PR) Samsung and AMD Expand Strategic Collaboration on Next-Generation AI Memory Solutions

Samsung Electronics Co., Ltd. today announced it has signed a Memorandum of Understanding (MOU) with AMD to expand their strategic collaboration on next-generation AI memory and computing technologies. The signing ceremony was held at Samsung's most advanced chip manufacturing complex in Pyeongtaek, Korea, attended by Dr. Lisa Su, Chair and CEO of AMD, and Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics.

"Samsung and AMD share a commitment to advancing AI computing, and this agreement reflects the growing scope of our collaboration," said Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics. "From industry-leading HBM4 and next-generation memory architectures to cutting-edge foundry and advanced packaging, Samsung is uniquely positioned to deliver unrivaled turnkey capabilities that support AMD's evolving AI roadmap."

(PR) MSI Accelerates Enterprise AI with NVIDIA MGX Servers and DGX Workstations at GTC 2026

MSI, a global leader in high-performance server solutions, today unveils its latest AI infrastructure portfolio built on NVIDIA's modular architectures, including the NVIDIA MGX platform and NVIDIA DGX Station technology. Designed to accelerate AI training, large-scale inference, HPC, edge, and next-generation data center workloads, MSI's expanded lineup delivers exceptional scalability, performance density, and deployment flexibility.

Scalable AI Infrastructure Built on NVIDIA MGX Architecture
Leveraging the modular design of NVIDIA MGX architecture, MSI has developed a comprehensive portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. The NVIDIA MGX architecture enables flexible CPU selection, high-capacity memory configurations, and seamless integration of high-speed networking - empowering enterprises to deploy infrastructure tailored to diverse workload requirements, from data center deployments to edge applications.

CarChrono – Search and decode VINs to get transparent multi-source vehicle reports


CarChrono delivers multi-source vehicle intelligence for car buyers. Search millions of listings, decode any VIN or Japanese chassis number, and get transparent reports with specs, title and accident history, market value, recalls, and ownership timelines. It cross-references over nine data sources, flags discrepancies, and helps detect fraud such as odometer rollbacks and title washing. Use it across the US, Canada, Japan, the UK, Germany, and more with real-time coverage.
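At the lowest level, decoding a VIN includes validating its check digit. The sketch below implements the standard North American mod-11 check-digit scheme as a general illustration of what a VIN decoder does first; it is not CarChrono's API.

```python
# ISO 3779 / North American VIN check-digit validation.
# Letters transliterate to digits (I, O, Q are never used in VINs):
TRANSLIT = dict(zip("ABCDEFGH", range(1, 9)))
TRANSLIT.update(zip("JKLMN", range(1, 6)))
TRANSLIT["P"] = 7
TRANSLIT.update(zip("RSTUVWXYZ", [9, 2, 3, 4, 5, 6, 7, 8, 9]))
TRANSLIT.update({str(d): d for d in range(10)})

# Per-position weights; position 9 (the check digit itself) weighs 0.
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit(vin: str) -> str:
    """Weighted sum mod 11; a remainder of 10 is written as 'X'."""
    total = sum(TRANSLIT[ch] * w for ch, w in zip(vin.upper(), WEIGHTS))
    rem = total % 11
    return "X" if rem == 10 else str(rem)

def vin_is_valid(vin: str) -> bool:
    return len(vin) == 17 and vin_check_digit(vin) == vin[8].upper()
```

Only after this structural check does a service cross-reference the VIN's world-manufacturer and descriptor fields against external data sources.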

View startup

Apple Fixes WebKit Vulnerability Enabling Same-Origin Policy Bypass on iOS and macOS

Apple on Tuesday released its first round of Background Security Improvements to address a security flaw in WebKit that affects iOS, iPadOS, and macOS. The vulnerability, tracked as CVE-2026-20643 (CVSS score: N/A), has been described as a cross-origin issue in WebKit's Navigation API that could be exploited to bypass the same-origin policy when processing maliciously crafted web content. The

Take-Two CEO Calls the Idea That AI Could Create a Game From Scratch "Laughable"

Strauss Zelnick, the CEO of Take-Two Interactive, has a somewhat complicated relationship with artificial intelligence, having previously expressed an interest in AI NPCs for more natural conversations while also confirming that GTA VI will feature no generative AI. Now, in a recent interview with The Game Business, Zelnick has once again commented on the capabilities and applications of generative AI. Addressing a question about Google's recent Project Genie showcases, Zelnick dismissed the idea that generative AI could serve as a one-stop shop for game development, saying that "the gaming industry has always used technology to create great entertainment," and adding that "an advance in technology that allows us to do our work better and quicker is great for us."

Zelnick dismisses the idea that AI projects like Genie are a threat to the gaming industry and to game developers, commenting that "it's quite obvious that creation tools are a benefit to our industry," and that the notion that "AI tools can somehow create big hits kind of doesn't stand to reason." He reasons that generative AI may help developers create game assets, but that creating a hit game requires human engagement and creativity. Zelnick rounds out the AI discussion by emphasizing that Take-Two's goal is to create engaging, entertaining games, and that this requires creativity, adding that "technology can assist with that mission, but technology on its own will not replace the fulfillment of that mission." The executive goes on to explain that "the notion that somehow new tools would allow an individual to push a button and generate a hit and bring it to many millions of consumers around the world, it's a laughable notion." It's worth noting, however, that recent layoffs, such as those at EA, have been attributed to or followed by increased AI adoption; even if AI cannot replace artists and developers from a technical standpoint, that does not mean it poses no threat to them.

Critical Unpatched Telnetd Flaw (CVE-2026-32746) Enables Unauthenticated Root RCE

Cybersecurity researchers have disclosed a critical security flaw impacting the GNU InetUtils telnet daemon (telnetd) that could be exploited by an unauthenticated remote attacker to execute arbitrary code with elevated privileges. The vulnerability, tracked as CVE-2026-32746, carries a CVSS score of 9.8 out of 10.0. It has been described as a case of out-of-bounds write in the LINEMODE Set

Spydomo – Track competitors and get curated AI briefs automatically


Spydomo monitors competitors across reviews, social media, websites, and news, then delivers concise AI-generated briefs highlighting launches, customer pains, and market trends. It's designed for founders, product teams, agencies, and investors.

It automatically finds sources like G2, Reddit, LinkedIn, and blogs, turning scattered signals into structured insights you can act on. Receive updates daily, weekly, or instantly via email, Slack, or Teams. Pricing starts at $10 per tracked company per month, with a 14-day free trial.

View startup

Friendware – Tab-to-complete everywhere on macOS


Friendware brings AI autocomplete to macOS so you can write and act faster across every app. It observes your style and drafts instant, context-aware replies for email, Slack, LinkedIn, iMessage, and X. It polishes text and generates prompts as you type; just press Tab.

Use one-click actions to handle multi-step tasks like checking Stripe billing, sending follow-ups, or creating calendar invites. Built with native Mac code, it runs fast, stays lightweight, respects local context, and supports 100+ languages.

View startup

Echo – An anonymous voice map for what people can't say out loud


Echo is an anonymous 3D voice space where people leave short voice or text messages in a virtual environment. There are no accounts, profiles, or comments, so people can speak more honestly without the pressure of social media.

It offers a quieter way to express feelings, release emotions, and hear real voices from others. Instead of performance and attention, Echo is built for honesty, privacy, and emotional connection.

View startup

RouteStack.ai – Plug live travel inventory and deep link checkout into your AI agent


RouteStack gives AI agents access to live travel data including hotels, flights, cars, rentals, and activities in one place. Pricing and availability are pulled in real time from global distribution systems, and every booking link is cryptographically signed for secure checkout.

Developers can connect to RouteStack using Python or Node SDKs, a ready-to-run server, or Docker. It's built to be fast, reliable, and easy to integrate into any AI agent or framework.
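One common way to make a booking deep link tamper-evident is to append an HMAC over its query parameters, which the checkout endpoint recomputes and compares before honoring the link. The sketch below is a generic illustration under assumed names (the `SECRET` key and `sig` parameter are hypothetical), not RouteStack's actual signing format:

```python
import hashlib
import hmac
from urllib.parse import urlencode

# Assumed shared secret between the agent platform and the checkout server.
SECRET = b"shared-secret-between-agent-and-checkout"

def sign_link(base: str, params: dict) -> str:
    """Build a deep link whose query string carries its own HMAC."""
    query = urlencode(sorted(params.items()))  # canonical order before signing
    sig = hmac.new(SECRET, query.encode(), hashlib.sha256).hexdigest()
    return f"{base}?{query}&sig={sig}"

def verify_link(params: dict, sig: str) -> bool:
    """Recompute the HMAC server-side and compare in constant time."""
    query = urlencode(sorted(params.items()))
    expected = hmac.new(SECRET, query.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers price and availability details, a link whose parameters were altered after the agent produced it fails verification at checkout.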

View startup

RPCS3 Adds Support for Steam Library Integration in Latest Update

RPCS3, the multi-platform, open-source PlayStation 3 emulator that recently announced support for 70% of the PlayStation 3 game library, has just added a workflow that automatically adds emulated games to your Steam library. This works the same way as adding a third-party game from within Steam, but it eliminates the extra step of opening Steam and manually adding the game launcher yourself. Games added this way also automatically use the PS3 artwork included with the game files in the emulator.

It's a small UI change, in the grand scheme of things, but it should help simplify game emulation and make it easier for gamers to play their emulated games via RPCS3. Being able to add third-party games to a game library is a nigh-essential feature that even Microsoft recently added to the Xbox App for Windows. RPCS3 adopting support for seamless Steam library integration could effectively let Linux and SteamOS players go from cold boot to playing a game all from a controller using Steam Big Picture mode, with no keyboard or mouse necessary.