
A NAS is the best way to avoid a data catastrophe – Our Editor's Choice just hit the lowest price of the year, and hard drives to fill it with are also on sale

World Backup Day is just around the corner, making it a great time to invest in a NAS for extra data protection. This one's perfect for beginners, and it's down to the lowest price of the year at Amazon's Big Spring Sale.

Nintendo First-Party Digital Editions Will Be Cheaper than Physical from May 2026

Just days after it was revealed that Nintendo would be cutting Switch 2 production by as much as a third, Nintendo USA has confirmed a change to its pricing structure for Switch 2 exclusives, specifically for digital editions. As of May 2026, digital versions of Switch 2 exclusive games will cost less than their physical counterparts. The first game to receive the new pricing structure is Yoshi and the Mysterious Book, which retails at $59.99 for the digital version and $69.99 for the physical version.

As one might expect, Nintendo attributes the higher cost of physical games to distribution: physical media requires logistics channels, packaging, and storage media, all of which add cost. Nintendo's wording, which refers specifically to "new Nintendo published digital titles," suggests that this pricing change will not apply retroactively, at least not initially, meaning games like Pokémon Pokopia will still cost $69.99 whether you buy the physical or digital version.

PixelPanda – Generate studio-quality product photos and UGC videos in seconds


PixelPanda is an AI photoshoot platform for product photography, UGC marketing videos, and background removal. Upload a product, choose a style or model, and generate listing-ready photos and talking-head videos in seconds. The platform includes AI model generation, background removal, image upscaling, and fashion try-on, plus ad-ready layouts for Amazon, Shopify, and social channels. Over 10,000 e-commerce brands use PixelPanda to create studio-quality assets fast without costly shoots.


SalesExchange – Match with commission-based sales reps to scale revenue


SalesExchange connects companies with motivated commission-based sales reps through real, posted opportunities. Companies post opportunities, set terms, control visibility, and chat with interested reps to expand coverage without adding full-time headcount. Sales reps browse and request opportunities, message companies in-app, track status, and earn referral fees or commissions.


OpenAI closes Sora AI Video Generator and cancels $1bn Disney partnership

OpenAI’s AI video slop generator is dead – Will OpenAI soon follow? OpenAI has confirmed that it has discontinued its Sora AI video generation tool and has pivoted away from video generation tools entirely. Now, OpenAI appears to be focusing on other forms of AI. Presumably, this will be forms of AI that weren’t burning […]

The post OpenAI closes Sora AI Video Generator and cancels $1bn Disney partnership appeared first on OC3D.

(PR) Maxsun Launches 32 GB Arc Pro B70 Series Graphics Cards

MAXSUN today announced the official release of its Intel Arc Pro B70 Series graphics cards, delivering a decisive leap forward in AI computing and professional visualization. Built on a close collaboration with Intel, the new lineup follows the widely acclaimed B60 series and raises the bar with more powerful hardware specifications and a deeply optimized software ecosystem. The result? A platform engineered for AI developers, multi-GPU deployments, and high-intensity professional workloads – without breaking a sweat.

Massive Memory, Effortless Performance
Equipped with 32 GB of VRAM and 32 Xe cores, the MAXSUN Intel Arc Pro B70 series eliminates memory bottlenecks once and for all. Whether handling large-scale AI models or complex visual workloads, the cards ensure seamless data throughput and consistently stable performance. In other words, "out of memory" just became someone else's problem.

(PR) Edifier Releases M90 Compact 100 W Speakers with HDMI eARC

Edifier, a global leader in premium audio solutions, today announced the availability of the Edifier M90. Unveiled at CES 2026, the Edifier M90 represents a defining milestone signalling the beginning of the brand's next chapter in home-entertainment audio.

Designed to elevate everyday listening, the M90 features HDMI eARC for seamless, low-latency connectivity with desktop audio, TVs, and streaming platforms, delivering a richer, more immersive experience for movies, games and streaming. With 100 W of bi-amplified power, larger 4-inch aluminium mid-low drivers for deeper bass and silk-dome tweeters for smooth, detailed highs, the M90 delivers powerful, high-fidelity sound. Enhanced connectivity and a flexible design allow it to transition easily from desktop to living-room setups, making the M90 a compact, all-purpose speaker solution.

Jury finds Meta and Google negligent in landmark social media addiction trial

Evidence presented at the trial swayed the jury to the plaintiff's side, demonstrating that Meta understood how addictive its platforms could be, particularly for teens, and that it actively researched the issue and used its findings to increase engagement among young users.

(PR) Sparkle Announces Intel Arc Pro B70 and Intel Arc Pro B65 Series Graphics Cards

SPARKLE, a leading manufacturer of professional graphics solutions and AI computing platforms, today announced the launch of the SPARKLE Intel Arc Pro B70 and SPARKLE Intel Arc Pro B65 series graphics cards. Designed to deliver powerful AI acceleration and workstation-class graphics performance, these new GPUs expand the Arc Pro B-Series lineup for modern professional computing environments.

The SPARKLE Intel Arc Pro B70 delivers up to 367 TOPS (INT8 Dense) of AI performance, enabling accelerated AI inference, engineering simulation, rendering, and advanced visualization workloads. With powerful compute capability and large memory capacity, the GPU allows professionals to handle complex datasets and AI-driven applications with greater efficiency.

(PR) CRKD Introduces the Next Evolution of Pocket-Sized ATOM+ Keychain Controller

CRKD, the premium collectible gaming brand known for its highly popular Nitro Deck and line of premium rhythm gaming products, today announced ATOM+, the next evolution of its collectible keychain controller, designed to deliver powerful gameplay in an ultra-compact form. Expected to begin shipping in June 2026, the ATOM+ is compatible with a wide variety of gaming platforms, including Nintendo Switch 2 | 1, PC, mobile devices, tablets, and Smart TVs, bringing big features to a small package and delivering full-fat gaming on the go.

Small enough to attach to a keychain, backpack, or just about anywhere you can imagine, the ATOM+ defies convention, expanding upon the original ATOM with a host of sophisticated features. These include thumbsticks for maximum gaming compatibility, built on TMR (Tunnelling Magnetoresistance) technology, a next-generation magnetic sensor designed to eliminate stick drift for good and provide superior precision compared to traditional mechanical or Hall Effect thumbsticks.

NVIDIA Releases GeForce 596.02 Hotfix Beta Driver to Address Game Stuttering

NVIDIA has officially released its second hotfix driver this month, GeForce 596.02 Hotfix, addressing the stuttering issue found in the previous 595.97 WHQL, which was released just yesterday. According to the official driver changelogs, the only game experiencing stuttering was Arknights: Endfield, which was apparently severe enough for NVIDIA to issue a complete hotfix driver. No other changes have been included in this hotfix release. NVIDIA notes that the quality of hotfix drivers is usually unknown, as they go through a much shorter quality testing pipeline than WHQL drivers. These hotfix drivers are designed to provide quick iterations and changes so that gamers who installed the new game-ready or regular WHQL drivers aren't left dealing with game stutters while waiting for the next driver release. Finally, NVIDIA advises that if you aren't experiencing any issues with your current 595.97 WHQL driver, it's best to wait for the next WHQL release to ensure your system remains as stable as possible. The next WHQL release will integrate the changes from the 596.02 Hotfix driver, so you won't have to worry.

DOWNLOAD: NVIDIA GeForce 596.02 Hotfix Beta.

Forza Horizon 6 Gets Lax PC System Requirements – 1080p, 60 FPS on GTX 1650 and Day-1 Steam Deck Support

With the May 19 launch of Forza Horizon 6 just around the corner, Microsoft and Playground Games have officially revealed the minimum hardware requirements for the new racing game, and, at least on the low end, Forza Horizon 6 will not ask players for much. The minimum spec is an Intel Core i5-8400 or AMD Ryzen 5 1600 paired with 16 GB of RAM, an SSD, and an NVIDIA GeForce GTX 1650, Radeon RX 6500 XT, or Intel Arc A380 GPU, which should make the game playable at 1080p, 60 FPS at the low settings preset. For 1440p at 60 FPS and high settings, the recommended spec, Horizon 6 will demand an Intel Core i5-12400F or AMD Ryzen 5 5600X paired with an NVIDIA GeForce RTX 3060 Ti, Radeon RX 6700 XT, or Intel Arc A580 GPU and 16 GB of memory.

The "Extreme" requirements, which target 4K native at 60 FPS, call for an Intel Core i7-12700K or AMD Ryzen 7 7700X with an NVIDIA GeForce RTX 4070 Ti or AMD Radeon RX 7900 XT with 24 GB of RAM, while the "Extreme RT" option, which targets 4K upscaled with Ray Tracing enabled, bumps up the GPU requirement to an RTX 5070 Ti or an AMD Radeon RX 9070 XT as well as requiring 32 GB of RAM and an NVMe SSD. The game also supports DLSS 4, AMD FSR 3 and 4, and Intel XeSS 2.1, as well as Ray Traced Reflections and Global Illumination, and the engine aims for "high, uncapped framerates," and will support both the Valve Steam Deck and ASUS ROG Ally handhelds, although there is no word on whether there will be a graphics preset for the handhelds.

The legendary 3dfx Voodoo is back in FPGA form


3dfx Voodoo graphics accelerators are likely to remain a key part of retro modding projects and gaming ventures for years to come. The Voodoo chip is now almost perfectly emulated in several DOS-based emulators, such as DOSBox-X, and PC emulators like PCem and 86Box, while hardware modders continue developing their...


Geekbench 6 warns about inconsistent benchmarking performance from new Core Ultra 200S Plus chips – says Intel's IPC-boosting Binary Optimization Tool modifies scores in 'unclear' fashion

The team behind Geekbench 6 has warned users about benchmarking inconsistency with Intel's latest iBOT tool found in the new Core Ultra 7 270K Plus and Ultra 5 250K Plus. Geekbench 6 can't identify when iBOT is enabled or disabled during benchmark runs.

Subspace – A nutrition label for job postings


Subspace checks what you can’t see behind job postings so you avoid dead ends. Paste a direct link from a company careers page and it scores the listing for signals like whether it’s still active, salary disclosure, hiring manager, role substance, and employer quality, then returns a clear Job Health Score. Start free with limited checks, or go Pro for unlimited checks, a full seven-category breakdown, and a job board sorted by listings worth applying to.


LeakBase Admin Arrested in Russia Over Massive Stolen Credential Marketplace

The alleged administrator of the LeakBase cybercrime forum has been arrested by Russian law enforcement authorities, state media reported Thursday. According to TASS and MVD Media, a news website linked to the Russian Interior Ministry, the suspect is a resident of the city of Taganrog. The suspect is said to have been detained for creating and managing a criminal site that allowed stolen

Is your PC ready for Forza Horizon 6? PC Requirements Released

Forza Horizon 6’s system requirements are a breath of fresh air for PC gamers Playground Games has officially released its PC system requirements for Forza Horizon 6, and it’s great news for PC gamers. On PC, the newest Forza game will be highly scalable, supporting platforms as low-end as Valve’s Steam Deck and ASUS’ entry-level […]


Call of Duty: Black Ops 7 campaign's Endgame mode is going free to play for a limited time with the April launch of Season 3 β€” plus new content and changes to Combat Rating are on the way

Treyarch aims to release Black Ops 7 campaign's Endgame mode for free as part of Call of Duty: Warzone with the Season 3 update on April 2. Free-to-play access to Endgame is only for a limited time.

(PR) NVIDIA DLSS 4 Comes to Forza Horizon 6, Screamer & Subliminal

Each week, new games and apps integrating NVIDIA DLSS, NVIDIA Reflex, and advanced ray-traced effects are released or announced, delivering the definitive PC experience for GeForce RTX players. This week, Screamer launches with DLSS 4 Multi Frame Generation, followed by Subliminal on the 31st. And Playground Games has announced that Forza Horizon 6 is launching May 19th with DLSS 4 and ray tracing.

Additionally, we've unveiled NVIDIA DLSS 5, which delivers an AI-powered breakthrough in visual fidelity by infusing pixels with photorealistic lighting and materials, bridging the gap between rendering and reality.

Iqunix Plans Shine-Through Keycap Update for MQ80 and Magi-Series Low-Profile Mechanical Keyboards

Shine-through keycaps are one of the points of contention where the enthusiast keyboard market and fans of gaming and budget keyboards clash. With the rise of enthusiast-grade pre-built mechanical keyboards, though, things have become complicated: buyers of mid-range and more affordable enthusiast-tier keyboards often long for shine-through keycaps. This was a common gripe about Iqunix's Magi and MQ series low-profile mechanical keyboards at launch, despite the overall positive community sentiment surrounding them. According to a new post by the official Iqunix team on r/Iqunix on Reddit, the brand has heard the community's feedback and is planning to release shine-through keycaps for its MQ80 and Magi-series low-profile keyboards.

Iqunix says it has not yet confirmed production with the OEM, but has reached out to the community with an interest check: this will be a small-batch run, and the brand needs 1,000 units spoken for before entering production. Because Iqunix is prototyping and selling the keycaps itself, it can color-match them to the aluminium keyboard cases and ensure compatibility with the Kailh Choc V2 low-profile switches and the north-facing lighting on the MQ and Magi-series keyboards. It has not yet released images of the keycaps on the Magi-series keyboards, but the white versions of the MQ80 and the Magi are two very different colors, with the Magi taking on a creamy off-white, suggesting there will be three colorways for the keycaps. The keycaps will be PBT and follow the same profile as the original Magi and MQ80 keyboards: uniform height with a spherical top.

(PR) CRKD Announces ULT PRO Wireless Gaming Controller

CRKD, the premium collectible gaming brand known for its highly popular Nitro Deck and line of premium rhythm gaming products, today announced the ULT PRO, a next-generation professional wireless gaming controller featuring multi-platform connectivity and an array of state-of-the-art technology.

Ideal for professional or competitive gamers who demand precision, customization, and multi-platform compatibility, the ULT PRO sets a new standard in performance, yet is priced competitively, allowing any gamer to take advantage of its premium feature set. Compatible with Nintendo Switch 2 | 1, PC, mobile devices, tablets and Smart TVs, the ULT PRO offers exceptional versatility, seamlessly adapting to a wide variety of gaming platforms.

(PR) Razer Introduces the 2026 Blade 16 Gaming Laptop Series

Razer, the leading global lifestyle brand for gamers, today announced the 2026 evolution of the Razer Blade 16. While maintaining its iconic, industry-leading chassis as the thinnest gaming laptop in Razer's history, the new 2026 model shifts into a new gear of performance for gamers, content creators, and professionals who demand power on the go.

By prioritizing the latest silicon breakthroughs from Intel and faster-than-ever memory clock speeds, the new Blade 16 is engineered for those who demand the latest-generation processing in a form factor that remains impossibly slim.

Getly – Buy and sell digital products with instant downloads and 80% payouts


Getly is an independent marketplace for buying and selling digital products like templates, design assets, music, video, courses, and AI prompts. Creators keep 80% per sale, accept Stripe or stablecoins, and deliver instant downloads to buyers. The platform offers creator stores, analytics, marketing automation, bundles, and a Pro subscription with unlimited downloads from a curated catalog. Shoppers can browse thousands of items, filter by price and rating, and check out securely worldwide.


Feevio – Speak a quick note and turn it into a ready-to-send invoice


Feevio turns your voice notes into polished invoices and quotes so you can bill clients before details fade. Speak what you did, who it was for, and time or rates, and it drafts clear line items, handles totals and tax, and applies your branding. Use it on phone or desktop to capture jobs, tidy the draft, and email a professional PDF in minutes. Track revenue and outstanding invoices, keep client records together, and bulk-download PDFs when it’s time to share paperwork.


newnity – USDC crowdfunding with on-chain escrow


newnity is a crowdfunding platform built on Base (Coinbase's L2) where creators launch campaigns and backers fund them in USDC. Every campaign uses on-chain escrow with all-or-nothing settlement: if the goal is met, the creator receives the funds. If not, every backer is automatically refunded. No middleman holds your money.

Supporters earn XP for every campaign they back, building reputation across the platform. Creators can run campaigns for games, music, digital art, and more. Transaction fees on Base are fractions of a cent. Currently live on Base Sepolia testnet, with mainnet launching in Summer 2026.


How soap opera-TikTok hybrids became a billion-dollar business

Over the past few years, a new category of mobile apps has quietly exploded into a multi-billion dollar business. They’re called “micro dramas” – short-form, mobile-first scripted shows designed to be watched vertically on your phone. Think soap opera meets TikTok, complete with secret billionaire romances, disapproving werewolf mothers-in-law, and cliffhangers engineered to keep users tapping. The leading […]

How to write for AI search: A playbook for machine-readable content


Once upon a time, in the delightfully chaotic 1990s, web copywriting was all about exact-match keywords and relentless meta tag stuffing. As algorithms matured, so did SEO copywriting.

Now, with proposition-based retrieval systems, writing like you’re in the business of tricking a crawler into seeing relevance through keyword repetition is no longer a viable strategy.

Below is a playbook for generative AI-friendly copywriting, broken down into self-contained, high-density concepts.

The ‘grounding budget’: Quality over quantity

Large language models (LLMs) don’t seek less information. They seek higher information density. Google’s Gemini operates on a limited budget of retrieved information, according to research by DEJAN AI, which analyzed over 7,000 queries.

The grounding budget is roughly 1,900 words per query, split across multiple sources. For an individual webpage, your typical allocation is around 380 words. You’re competing for a tiny slice of a fixed pie, so being precise helps the AI’s matching process.

  • Weak retrieval: β€œCoffee maker” (Generic)
  • Strong retrieval: β€œSemi-automatic espresso machine” (High density)

Moving structure inside the language

If Schema.org is the external scaffolding of a building, structured language is the load-bearing internal frame. Language itself is the structure we provide to machines, for example via “semantic triplets” (subject → predicate → object). When a copywriter moves structure inside the language, the sentences become inherently machine-readable.

Google’s passage ranking, AI Overviews, and third-party LLMs like ChatGPT all evaluate content at the passage level using similar retrieval infrastructure. A sentence that works for one works for all of them.

A properly structured sentence fulfills four strict data criteria:

  • Names the entities: Explicitly identifies subjects and objects (e.g., “Notion Team Plan”).
  • States the relationships: Defines how entities interact using clear verbs (e.g., “costs”).
  • Preserves the conditions: Includes context that makes the statement true (e.g., “$10 per user per month”).
  • Includes specifics: Provides verifiable details rather than marketing fluff (e.g., “includes 30-day version history”).
The difference in practice:

  • Marketing fluff – Example: “Our revolutionary platform makes managing your team easier than ever. It is affordable and comes with great support.” Machine utility: low (vague, hard to extract).
  • Structured language (GEO-friendly) – Example: “The Asana Enterprise Plan [Entity] streamlines [Relationship] cross-functional project tracking [Specifics] for teams over 100 people [Condition], starting at $24.99 per user [Data].” Machine utility: high (decomposable into atomic claims).
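A sentence that meets all four criteria decomposes into a machine-usable record. Here is a minimal Python sketch of that idea, reusing the pricing example above; the class name and fields are illustrative framing, not an established API:

```python
from dataclasses import dataclass

@dataclass
class StructuredClaim:
    """One machine-extractable claim: the four criteria as fields."""
    entity: str        # names the entities
    relationship: str  # states the relationship (clear verb)
    condition: str     # preserves the conditions that make it true
    specifics: str     # verifiable detail, not fluff

    def to_sentence(self) -> str:
        # Reassemble the fields into prose a human can read.
        return f"{self.entity} {self.relationship} {self.specifics} {self.condition}."

claim = StructuredClaim(
    entity="The Notion Team Plan",
    relationship="costs",
    condition="per user per month",
    specifics="$10",
)
# claim.to_sentence() -> "The Notion Team Plan costs $10 per user per month."
```

The point of the sketch: if your copy can be mapped losslessly into a record like this, it is decomposable into atomic claims; marketing fluff cannot be.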

Best practices for AI-friendly copywriting

Traditional copywriting flows like a row of dominoes. When an AI β€œchunks” your page, it snaps those dominoes apart. If your sentences aren’t load-bearing on their own, the logic collapses.

Rule 1: Every sentence must survive in isolation

Ensure every single sentence explicitly names its subject. Vague pronouns like “this,” “it,” or “the above” become dead bits when extracted.

  • Broken: “It also includes unlimited cloud storage.”
  • Anchorable: “The Dropbox Business Standard Plan includes 5TB of encrypted cloud storage.”

Rule 2: State relationships, don’t just list entities

Keyword stuffing introduces inference errors. Effective structured language explicitly states the relationship between nodes.

  • The keyword dump: “We offer SEO, PPC, and content marketing services.”
  • The structured relationship: “Our agency integrates PPC data into SEO strategies to lower the cost per acquisition (CPA) by an average of 15% within the first 90 days.”

Rule 3: Build ‘anchorable statements’

Provide anchorable statements instead of fluff: dense passages equipped with clear claims and specific evidence.

The gold standard example:

  • “Ramon Eijkemans is a freelance SEO specialist at Eikhart.com, specializing in enterprise SEO for platforms with 100,000 or more pages. He developed the LLM Utility Analysis framework, a five-lens content scoring system that measures the likelihood of content being selected and cited by AI systems, covering structural fitness, selection criteria, extractability, entity and propositional completeness, and natural language quality, based on research into passage retrieval architectures, Google patent evidence, and proposition-based extraction systems. The framework is the subject of this Search Engine Land article.”

The AI inverted pyramid: Engineering ‘citation bait’

Research shows LLMs reliably extract claims near the beginning or end of a text. Adding more content often dilutes your coverage.

  • “Pages under 5,000 characters get about 66% of their content used. Pages over 20,000 characters? 12%. Adding more content dilutes your coverage.”

Here’s the four-step formula for citation bait.

  • The direct answer: Open with a dense, 40-60 word declarative statement answering the “who, what, why, or how.”
  • Context and detail: Follow up with nuance, maintaining high semantic density.
  • Structured evidence: Use bulleted lists, tables, or numbered steps (extractable data).
  • Follow-up alignment: Anticipate the next logical prompt in clearly labeled H2 or H3 subheadings.

Clear headings above a paragraph can improve its mathematical relevance (cosine similarity) to AI systems by up to 17.54%.


The 5 lenses of LLM utility

Developed by Ramon Eijkemans, this scoring system measures the likelihood of content being cited:

  • Structural fitness: Does the prose build hierarchy and relationships?
  • Selection criteria: Is the information dense enough to win the grounding budget?
  • Extractability: Are there broken references or vague pronouns?
  • Entity completeness: Are subjects and relationships explicitly named?
  • Natural language quality: Is the structure rich without being “robotic”?

Here are the most common pitfalls when it comes to extractability:

  • Unresolved pronoun – Example: “It features a 120Hz display.” Problem: what device?
  • Vague demonstrative – Example: “This gives it an advantage.” Problem: what gives what an advantage?
  • Context-dependent – Example: “The above specs outperform the competition.” Problem: which specs? Which competition?
  • Stripped conditions – Example: “The price has dropped significantly.” Problem: from what? To what? When?
  • Assumed knowledge – Example: “The popular supplement helps with recovery.” Problem: which supplement? Recovery from what?
  • Relative claim – Example: “Our fastest-selling product.” Problem: how fast? Compared to what? Over what period?

Source: From structured data to structured language

Practical content testing tips

To ensure your high-value pages are programmatically extractable, run these four stress tests on your mid-page copy.

The isolation test

The action: Select a single sentence completely at random from the middle of a webpage and read it in total isolation.

The goal: If the sentence relies on preceding paragraphs to make sense or uses vague pronouns (e.g., “This allows for…”), the page has a utility gap. Every sentence should be self-contained.
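A rough version of the isolation test can be automated. A sketch, assuming naive regex sentence splitting and a hand-picked list of vague openers; a production checker would use a proper NLP pipeline:

```python
import re

# Openers that usually mean a sentence cannot survive extraction in isolation.
VAGUE_OPENERS = {"this", "it", "that", "these", "those", "the above"}

def failing_sentences(text: str) -> list[str]:
    """Return sentences that open with a vague pronoun and so fail the isolation test."""
    # Naive split: break on whitespace that follows ., !, or ?.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    flagged = []
    for s in sentences:
        words = s.lower().split()
        # Check the first word and the first two words ("the above").
        if words and (words[0] in VAGUE_OPENERS or " ".join(words[:2]) in VAGUE_OPENERS):
            flagged.append(s)
    return flagged

page = (
    "The Dropbox Business Standard Plan includes 5TB of encrypted cloud storage. "
    "It also includes unlimited version history. "
    "The above features ship with every seat."
)
```

Here `failing_sentences(page)` flags the second and third sentences; the first explicitly names its subject and passes.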

The context test (‘Scroll twice and read’)

The action: Scroll down twice on a homepage so the hero banner and primary H1 disappear, then start reading from wherever your eyes land.

The goal: If a reader (or a machine “chunking” that section) can’t immediately identify the product or service without the top visual layout, the mid-page text fails the context test.

The disambiguation test

The action: Read a mid-page sentence out loud and ask: Could this apply to the deforestation of the Amazon or a steamy romance novel?

The goal: If a sentence is wildly generic (e.g., “We empower our clients to achieve more”), an LLM will struggle to map it to your specific entity. Specifics prevent misinterpretation.

The URL accessibility test

The action: Run the live URL through an LLM agent or NotebookLM.

The goal: If convoluted JavaScript, heavy code bloat, or aggressive bot protection prevents an agent from “seeing” the raw text, generative search engines may skip the content entirely.

AI search content optimization FAQs

Here are answers to common questions about optimizing content for AI search.

Is generative engine optimization (GEO) a legitimate discipline?

Yes. Formalized by researchers at the University of Washington and Columbia, it focuses on optimizing for “citation frequency” through dense, condition-preserving sentences.

Traditional SEO bolts machine-readable code onto human narratives after the fact. AI search optimization requires embedding explicit entity relationships and structure directly inside your copy.

What is the ideal section length for chunking?

Open with a dense 40-60-word declarative statement. Information buried deep in long paragraphs is rarely retrieved.

Does copywriting for AI search help traditional SEO?

Yes. Because Google uses vector embeddings to evaluate content at the passage level, structuring language for an LLM improves traditional visibility.

Is longer content better?

No. Density beats length. Pages under 5,000 characters see a 66% extraction rate, while pages over 20,000 characters plummet to 12%.

What is the inverted pyramid for AI copywriting?

The AI inverted pyramid means abandoning the slow, conversational introduction and placing your core entities, exact claims, and specific conditions in the very first sentence to guarantee flawless machine extraction.

Write for humans, structure for machines

The content creator is now a machine-readability engineer. Our job is to build narratives that are persuasive to humans while being programmatically extractable for neural networks.

If your content lacks explicit entity relationships, perfectly self-contained sentences, and highly “anchorable” citable claims, the machines will simply look right through you.

Google March 2026 spam update done rolling out

Google released the March 2026 spam update less than 24 hours ago and it is already done rolling out. The update finished today at 10:40 a.m. ET.

  • This update was released yesterday (March 24) at 3:20 p.m. It took 19 hours and 20 minutes to fully roll out, which is super fast.

Why we care. This is the second Google algorithm update announced in 2026. It’s unclear what spam it targeted, but if you see ranking or traffic changes in the next few days, the Google March 2026 spam update could be the cause.

More on spam update. Google’s documentation says:

“While Google’s automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.

For example, SpamBrain is our AI-based spam-prevention system. From time to time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.

Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.

In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained.”

Impact. This update should only impact sites spamming Google Search, so hopefully you didn’t see any major negative impact.

Intel Arc Pro B70 Shows Up on Newegg With April Release Date and $949.99 Price

Intel just announced the Arc Pro B70 and B65 GPUs with the Arc Xe2 Battlemage architecture, featuring 32 and 24 Xe2 cores, respectively – see TechPowerUp's launch coverage for more details – but it did not include information on pricing or availability. Fortunately, for those interested in the workstation graphics cards, Newegg has listed the Arc Pro B70 online, spilling the beans on price and a prospective launch date.

According to the pre-order page, the Arc Pro B70 will cost $949.99 and launch on April 24, 2026. Unfortunately, there is no listing for the Arc Pro B65 just yet, but that should launch around the same time as the B70 and will likely be priced somewhere between the B60's current retail price of $659.99 and the Arc Pro B70's $949.99 price, thanks to the increased VRAM on the B65.

Prototype mod brings native 1080P output to the Super Nintendo


Developed by hardware engineer Stanislav Parhomovich – the same developer behind the MegaSwitch HD for the Sega Genesis – the new modification, called the Super Switch HD, introduces full digital video output to Nintendo's 16-bit classic. The mod enables the console to output a high-resolution digital signal over HDMI on...


GlassWorm Malware Uses Solana Dead Drops to Deliver RAT and Steal Browser, Crypto Data

Cybersecurity researchers have flagged a new evolution of the GlassWorm campaign that delivers a multi-stage framework capable of comprehensive data theft and installing a remote access trojan (RAT), which deploys an information-stealing Google Chrome extension masquerading as an offline version of Google Docs. "It logs keystrokes, dumps cookies and session tokens, captures screenshots, and

How to optimize influencer content for search everywhere

Influencer content isn't just a brand awareness play. It's showing up in Google SERPs, Google AI Overviews, and AI answers, making keyword strategy an essential part of every influencer brief.

When we brief an influencer, we assign them a keyword. Not as a nice-to-have, but as a required part of the strategy, usually woven into the script, the caption, the on-screen text, and the hashtags.

That might sound like an SEO team overreaching into an influencer team's lane. But in 2026, the lane lines don't exist.

Social content is search inventory. If your influencer marketing program isn't built around that reality, you're leaving a significant and measurable share of voice on the table.

Search journeys now span platforms, formats, and sources

For most of search's history, optimization meant ranking on Google. That's still important, but it's no longer the full story.

TikTok Creative Center Keyword Insights

Today, nearly half of U.S. consumers (49%) use TikTok as a search engine. Gen Z may lead that adoption, but it cuts across generations.

Over a third of consumers now prefer to start their search journey with AI tools like ChatGPT over Google. Platforms like YouTube, Instagram, and Pinterest have also become primary discovery engines for product research, how-to queries, and purchase decisions.

This is what search everywhere may look like in practice:

  • A user searches "best lightweight running shoes" on TikTok and watches three creator videos.
  • Then they ask ChatGPT for a comparison.
  • Next, they Google for brand reviews to look at Reddit commentary and What People Are Saying content.
  • Then they navigate to a brand's site.

Each of these touchpoints is a search moment, and there's a strong chance they involve influencer content. The brands showing up at every step are the ones treating influencer marketing content as search content from the beginning.

Ross Simmonds, CEO of Foundation Marketing, shared with me:

  • "Influencers exist on practically every platform, whether we're talking about LinkedIn, Reddit, Instagram, or TikTok. They're creating content every day. When people search, whether through Google or directly on these platforms through things like Ask Reddit or TikTok search, they're coming across content that influencers have created."
  • "If those influencers understand best practices around search and discoverability, they're more likely to create content that ranks not only on native platforms, but also directly in the SERP. That's a marketer's dream."

Dig deeper: Why creator-led content marketing is the new standard in search

Why your influencer's video is now a SERP result

This is where things get concrete.

What people are saying SERP feature for "best skin care for moms"

Google's What people are saying SERP feature is a carousel that appears directly in search results and surfaces user-generated and creator content from platforms like YouTube, TikTok, LinkedIn, Instagram, and Reddit for relevant queries.

It's now a default feature in U.S. search results and consistently shows up for mid- to bottom-of-funnel keywords, exactly where purchase decisions are made. A brand can appear in this SERP feature (either directly or indirectly via an influencer) without ranking in the traditional Top 10 results.

"Short videos" SERP feature for "skin routine for moms"

Additionally, the Short videos SERP feature is another prime spot for your influencer content to take up shelf space on Google. This means an influencer video optimized with the right SEO keyword can surface in multiple spots on Google for a commercial query your brand's own site might never rank for.

It's not theoretical. It's happening now.

Google AI Mode referencing TikTok and Instagram content for a hair curling prompt

Meanwhile, AI answers are pulling from social content at scale. An analysis of 40 million AI search results found Reddit to be the single most-cited domain across ChatGPT, Copilot, and Perplexity. Ahrefs research confirms that YouTube mentions and branded web mentions are among the top factors correlating with AI brand visibility in ChatGPT, AI Mode, and AI Overviews.

Samanyou Garg, CEO of Writesonic, shared with me:

  • "YouTube is the No. 1 cited domain for Gemini. And 35% of the channels getting cited have under 10K subscribers. We checked the correlation between views and citations. It's basically zero.
  • "What actually correlates? How well the creator describes the topic in their video description. So if an influencer makes a video about your product and writes a lazy two-line description, you're leaving AI visibility on the table."

The more creators talk about your product with consistent language, the more confident AI becomes in recommending you. So if your influencer content doesn't contain the SEO keywords your audience is actually searching for, it won't be surfaced in all the places that matter.

Dig deeper: Short-form, big impact: What creators can teach performance marketers



The keyword isn't optional

Sample influencer brief with keyword included as a standard

Keyword research should be a standard step in every influencer campaign. Start by identifying your target keyword from data across three sources:

  • Existing keyword targets shared by the organic strategists.
  • In-platform searches for what's trending and/or suggested autocompletes.
  • AnswerThePublic searches for both brand and non-brand terms related to the campaign theme.

Once the keyword is identified, embed it into every element of the creator's content:

  • Script: Spoken naturally, ideally in the first half of the video, where TikTok's algorithm is most attentive to audio signals.
  • Caption: Written to open with or include the keyword, supporting both platform and Google indexing.
  • On-screen text: Reinforcing the keyword visually for accessibility and algorithm legibility.
  • Hashtags: Used to connect the content to the broader topic the keyword lives in.
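The embed-everywhere checklist above lends itself to an automated review step. The following is a minimal sketch, assuming a simple dict-based draft; the function and field names are hypothetical, not part of any real tool:

```python
# Hypothetical sketch: check that a creator's draft embeds the assigned
# keyword in every required element (script, caption, on-screen text,
# hashtags). Structure and names are illustrative assumptions.

def keyword_compliance(draft: dict, keyword: str) -> dict:
    """Return a per-element pass/fail map for the assigned keyword."""
    kw = keyword.lower()
    # Hashtag comparison ignores spaces: "running shoes" -> "#runningshoes"
    hashtag_form = "#" + kw.replace(" ", "")
    return {
        "script": kw in draft.get("script", "").lower(),
        "caption": kw in draft.get("caption", "").lower(),
        "on_screen_text": kw in draft.get("on_screen_text", "").lower(),
        "hashtags": hashtag_form in [h.lower() for h in draft.get("hashtags", [])],
    }

draft = {
    "script": "If you're searching for the best running shoes right now...",
    "caption": "My honest take on the best running shoes of the season",
    "on_screen_text": "BEST RUNNING SHOES",
    "hashtags": ["#bestrunningshoes", "#runningtok"],
}
report = keyword_compliance(draft, "best running shoes")
# Every element passes here, so the post is ready for review.
```

A draft that misses the keyword in any element would fail that element's check, flagging it for revision before the content goes live.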

Don't confuse this with keyword stuffing. It's modern content architecture.

There's a big difference between a creator naturally saying, "If you're searching for the best running shoes right now..." versus a brand clunkily forcing a phrase into otherwise natural content. The influencer brief sets the requirement, yes, but the creator's job is to incorporate their unique voice.

Ashley Liddell, co-founder and Search Everywhere director at Deviation, shared:

  • "We assign keywords to influencers based on real search behaviour across platforms, not just brand messaging, and map demand from TikTok, YouTube, Reddit, and Google, then align specific queries to creators whose content style and audience best fit that intent.
  • "Each brief gives a clear search-led direction, including topic, angles, and format, while leaving room for the creator's own creativity. The goal is to make influencer content discoverable in-platform search while ensuring it remains engaging in-feed."

Once the content is live, track whether the creator's post is surfacing for the target keyword across:

  • The native platforms (e.g., TikTok, Instagram, etc.)
  • Google SERP features
    • Videos and Short videos carousel
    • What people are saying
  • Standard organic results

Screenshot and log positions immediately (because rankings can quickly shift). This data tells a story clients aren't used to seeing from an influencer program.
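One way to log those position snapshots is a small timestamped record per observation, so shifting rankings can be compared over time. This is a hedged sketch; the surface labels and field names are assumptions, not a standard schema:

```python
# Minimal sketch for logging where a creator post surfaces for its target
# keyword. Surface names and fields are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class RankObservation:
    keyword: str
    surface: str       # e.g. "tiktok_search", "google_short_videos"
    position: int      # 1-based rank at the time of the screenshot
    url: str
    observed_at: str   # UTC timestamp, since rankings shift quickly

def log_observation(keyword: str, surface: str, position: int, url: str) -> dict:
    obs = RankObservation(
        keyword=keyword,
        surface=surface,
        position=position,
        url=url,
        observed_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(obs)  # a flat dict, ready to append to a CSV or report sheet

row = log_observation("best running shoes", "google_short_videos", 2,
                      "https://example.com/creator-post")
```

Appending one such row per check builds the ranking history that campaign reports can draw on.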

Influencers extend your search everywhere footprint

Our search everywhere optimization framework

There's a reason this matters beyond any individual campaign. Google organic CTRs have declined dramatically, by as much as 61% on queries where AI Overviews appear.

With Google SERP features increasingly highlighting video and social content, traditional web content is losing surface area on the SERPs. Social content, conversely, is gaining traction, and we cannot ignore this.

For brands, influencer content has taken on a much stronger value: scalable, authentic, human-first search inventory distributed across platforms where their audiences spend time. It doesn't replace a traditional SEO program, but it extends reach into channels where creator voices tend to outperform brand-owned content.

Younger audiences search socially first. In some categories, a meaningful share of consideration-stage audiences see creator content before they ever search for your brand. If your influencers don't use the language your audience searches, you're invisible in the moments that matter most.

Search everywhere optimization comes down to one thing: showing up where your audience actually searches with content worth stopping for.

Dig deeper: Why social search visibility is the next evolution of discoverability

The operational reality: Putting things into practice

The biggest barrier to building keyword optimization into influencer programs is structural. SEO and influencer teams often sit in different parts of an organization, with different owners, different KPIs, and little reason to collaborate.

Even when those teams are close, a common hesitation remains: adding a keyword requirement to a creator brief may make the content feel scripted or inauthentic. That concern is valid, but somewhat misplaced. A keyword isn't a constraint on creativity; it's a topic signal.

Creators integrate talking points, product messaging, and brand language into their content all the time. A search term is no different, as long as the brief gives them room to use it in their own voice.

Closing that gap requires a few concrete changes.

  • SEO and influencer strategy should share a brief template. The target keyword, along with guidance on how to integrate it naturally, should be a standard field, not an afterthought. If the influencer lead and the SEO lead aren't in the same briefing conversation, that's the first thing to fix.
  • Keyword selection should be platform-specific. What users search on TikTok differs from what they search on Google. TikTok search is more conversational and trend-based. Pull keywords from TikTok's own autocomplete, not just a traditional keyword tool, then validate on AnswerThePublic, and cross-reference with existing organic targets to find terms that work across surfaces.
  • Approval workflows should include keyword checks. When reviewing a script, a caption, or a live post, include a keyword compliance check. If the keyword is missing, ask the influencer for a revision before the content goes live. This sounds small, but it's the difference between content that ranks and content that doesn't.
  • Reporting should include search metrics. Did the post surface on TikTok for the target keyword? Did it appear in one of Google's video sections or "What People Are Saying"? These are trackable, reportable metrics, and they belong in campaign reports alongside reach, engagement, and conversions.

Influencer content has always shaped brand perception. Today, it also shapes search visibility across social platforms, Google’s evolving SERP features, and AI-generated answers.

Brands that recognize this apply a search strategy to a channel that, until recently, operated without it. You treat every influencer video as search content: briefing keywords and reporting on search performance as you would for other organic channels.

Influencer content is search inventory. The only question is whether you're optimizing it.

How schema markup fits into AI search — without the hype

Does schema markup really benefit AI search optimization? Some suggest it can 3x your citations or dramatically boost AI visibility. But when you dig into the evidence, the picture is far more nuanced.

Let's separate what's known from what's assumed, and look at how schema actually fits into an AI search strategy.

How schema fits into AI search now

Search is shifting from surfacing a SERP with blue links to AI Overviews, generative answers, and chat-style summaries that collate content in addition to links.

To get your content to appear in this model, your site has to be understood as entities (singular, unique things or concepts, such as a person, place, or event) and the relationships between them, not just strings of text.

Schema markup is one of the few tools SEOs have to make those entities and relationships explicit and understandable for an AI: this is a person, they work for this organization, this product is offered at this price, this article is authored by that person, etc.

For AI, three elements matter the most:

  • Entity definition: Which brands, authors, services, or SKUs exist on the page.
  • Attribute clarity: Which properties belong to which entity (e.g., prices, availability, ratings, job titles).
  • Entity relationships: How entities connect (e.g., via the offeredBy, worksFor, author, and sameAs properties).

When schema is implemented with stable identifiers (@id) and a connected structure (@graph), it starts to behave like a small internal knowledge graph.

AI systems won't have to guess who you are and how your content fits together, and will be able to follow explicit connections between your brand, your authors, and your topics.

Dig deeper: Why entity authority is the foundation of AI search visibility

How AI search platforms use schema

Two major platforms, Microsoft Bing and Google (for AI Overviews), have confirmed that schema markup helps their AI systems understand content. For these platforms, it is confirmed infrastructure, not speculation.

What about ChatGPT, Perplexity, and other AI search platforms?

We don't know how these platforms use schema yet. They haven't publicly confirmed whether they preserve schema during web crawling or use it for extraction. The technical capability exists for LLMs to process structured data, but that doesn't mean their search systems do.

Dig deeper: When and how to use knowledge graphs and entities for SEO

Research on schema and AI

Here are a few studies that show how schema can benefit AI search.

Citation rates

A December 2024 study from Search/Atlas found no correlation between schema markup coverage and citation rates. Sites with comprehensive schema didn't consistently outperform sites with minimal or no schema markup.

This doesn't mean schema is useless; it means schema alone doesn't drive citations. LLM systems appear to prioritize relevance, topical authority, and semantic clarity over whether content has structured markup.

Extraction accuracy

A February 2024 Nature Communications study found that LLMs extract information more accurately when given structured prompts with defined fields versus unstructured "extract what matters" instructions.

Put differently, LLMs perform best when you give them a structured form to fill out, not a blank canvas. When models are asked to extract into predefined fields, they make fewer errors than when told to simply "pull out what matters."

Schema markup on a page is the web equivalent of that form: a set of explicit entity, brand, product, price, author, and topic fields that a system can map to, rather than inferring everything from unstructured prose.
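To make the "structured form" idea concrete, here is a hedged sketch of building an extraction prompt around predefined fields. The field names and prompt wording are invented for illustration and do not reflect any platform's actual format:

```python
# Illustrative sketch: give a model explicit fields to populate rather
# than an open-ended "extract what matters" instruction. All names here
# are assumptions for demonstration.
import json

EXTRACTION_FIELDS = {
    "brand": "the organization that owns the page",
    "author": "the person credited with the content",
    "product": "the product or service discussed",
    "price": "the listed price, if any",
    "topic": "the main subject of the page",
}

def structured_prompt(page_text: str) -> str:
    """Build a prompt that asks the model to fill a predefined JSON form."""
    template = json.dumps({k: None for k in EXTRACTION_FIELDS}, indent=2)
    guidance = "\n".join(f"- {k}: {v}" for k, v in EXTRACTION_FIELDS.items())
    return (
        "Extract the following fields from the page text. "
        "Return JSON matching this template:\n"
        f"{template}\n\nField definitions:\n{guidance}\n\nPage text:\n{page_text}"
    )

prompt = structured_prompt("Example Digital reviews lightweight running shoes...")
```

Schema markup plays the same role as `EXTRACTION_FIELDS` does here: it pre-declares the entity fields, so a system that uses it has a form to map to instead of a blank canvas.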

What the research tells us

This tells us that LLMs have the technical capability to process structured data more accurately than unstructured text.

However, this doesn't tell us whether AI search systems preserve schema markup during web crawling, whether they use it to guide extraction from web pages, or whether this results in better visibility.

The leap from "LLMs can process structured data" to "web schema markup improves AI search visibility" requires assumptions we can't verify for most platforms.

For Microsoft Bing and Google AI Overviews, schema likely improves extraction accuracy, since they've confirmed they use it. For other platforms, we don't have confirmation of actual implementation.

Dig deeper: Entity-first SEO: How to align content with Google's Knowledge Graph



What we don’t know about schema and AI search

AI search is so new β€” for example, ChatGPT search only launched in October 2024 β€” that companies haven’t disclosed their indexing methods. Measurement is difficult with non-deterministic AI responses. There are significant gaps in what we can verify.

To date, there are no peer-reviewed studies on schema’s impact on AI search visibility, or controlled experiments on LLM citation behavior and schema markup.

OpenAI, Anthropic, Perplexity, and other platforms besides Microsoft or Google haven’t published their indexing methods.


How schema builds an entity graph

In traditional SEO, many implementations stop at adding Article or Organization markup in isolation. For AI search, the more useful pattern is to connect nodes into a coherent graph using @id. For example:

  • An Organization node with a stable @id that represents your brand.
  • A Person node for the author, linked to your organization via worksFor.
  • An Article node whose author is that person and whose publisher is that organization, with about properties that declare the main topics.
{  
  "@context": "https://schema.org",  
  "@graph": [  
    {  
      "@id": "https://example.com/#organization",  
      "@type": "Organization",  
      "name": "Example Digital"  
    },  
    {  
      "@id": "https://example.com/#person-jane-doe",  
      "@type": "Person",  
      "name": "Jane Doe",  
      "worksFor": { "@id": "https://example.com/#organization" }  
    },  
    {  
      "@type": "Article",  
      "@id": "https://example.com/blog/schema-markup-ai-search",  
      "headline": "Schema Markup for AI Search",  
      "author": { "@id": "https://example.com/#person-jane-doe" },  
      "publisher": { "@id": "https://example.com/#organization" }  
    }  
  ]  
} 

That connected pattern turns your schema from a set of disconnected hints into a reusable entity graph. For any AI system that preserves the JSON-LD, it becomes much clearer which brand owns the content, which human is responsible for it, and what high-level topics it is about, regardless of how the page layout or copy changes over time.
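Because the pattern depends on every @id reference resolving to a node defined in the same @graph, a small sanity check can catch dangling references before publication. This helper is an illustrative sketch under that assumption, not part of any standard schema tooling:

```python
# Hypothetical sketch: find @id references in a JSON-LD @graph that point
# at entities never defined in the same graph.
def dangling_ids(graph: list[dict]) -> set[str]:
    defined = {node["@id"] for node in graph if "@id" in node}
    referenced = set()
    for node in graph:
        for value in node.values():
            # A bare reference looks like {"@id": "..."} with no other keys
            if isinstance(value, dict) and set(value) == {"@id"}:
                referenced.add(value["@id"])
    return referenced - defined

graph = [
    {"@id": "https://example.com/#organization", "@type": "Organization",
     "name": "Example Digital"},
    {"@id": "https://example.com/#person-jane-doe", "@type": "Person",
     "name": "Jane Doe",
     "worksFor": {"@id": "https://example.com/#organization"}},
    {"@type": "Article",
     "@id": "https://example.com/blog/schema-markup-ai-search",
     "author": {"@id": "https://example.com/#person-jane-doe"},
     "publisher": {"@id": "https://example.com/#organization"}},
]
# An empty result means every reference resolves within the graph.
```

Running this as part of a deploy check keeps site-wide @id consistency from silently drifting as pages change.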

How traditional SEO schema compares with entity graph schema:

  • Structure: a single @type object per page vs. a @graph array of interconnected nodes.
  • Entity ID: none (anonymous) vs. stable @id URLs reused across the site.
  • Relationships: nested and one-way (author: "name") vs. bidirectional via @id references (worksFor, author).
  • Primary benefit: rich snippets and SERP CTR vs. entity disambiguation and extraction accuracy for AI.
  • AI impact: minimal (tokenization often strips it) vs. making the site a unified knowledge graph source, if preserved.
  • Implementation: easy, page by page vs. requiring site-wide @id consistency.

Dig deeper: How structured data supports local visibility across Google and AI

Recommendations for implementing schema for AI search

For AI search, the best way to position schema right now is to:

  • Make entities and relationships machine-readable for platforms that preserve and use structured data (confirmed for Bing Copilot and Google AI Overviews).
  • Reduce ambiguity around brand, author, and product identity so that extraction, when it happens, is cleaner and more consistent.
  • Complement topical depth, authority, and clear brand signals, not replace them.

Use schema markup for:

  • Improving visibility in Bing Copilot.
  • Supporting inclusion in Google AI Overviews.
  • Enhancing traditional SEO.
  • Making content easier to parse (good practice regardless of AI).
  • Maintaining a low-cost implementation with potential upside as platforms evolve.

However, don't expect:

  • Guaranteed citations in ChatGPT or Perplexity.
  • A dramatic visibility lift from schema alone.
  • Schema to compensate for weak content or low authority.

Priority schema types (based on platform guidance) include:

  • Organization (brand entity identity).
  • Article or BlogPosting (content attribution and authorship).
  • Person (author authority and entity connections).
  • Product or Service (commercial entity clarity).
  • FAQPage (Q&A content formats).

Dig deeper: The entity home: The page that shapes how search, AI, and users see your brand

Implement schema for AI search today

Schema markup is infrastructure, not a magic bullet. It won't necessarily get you cited more, but it's one of the few things you can control that platforms such as Bing and Google AI Overviews explicitly use.

The real opportunity isn't schema in isolation. It's the combination of structured data with proper entity relationships, high-quality, topically authoritative content, clear entity identity and brand signals, and the strategic use of @graph and @id to build entity connections.

Intel officially launch their ARC PRO B70 and B65 GPUs

Big Battlemage arrives with Intel's ARC Pro B70 and B65 graphics cards. Intel has officially launched its first "Big Battlemage" graphics cards: new, higher-end Xe2 discrete GPUs that stand above Intel's prior products. The Intel ARC Pro B70 will become available today, March 25th, with pricing starting at $949, while the ARC Pro B65 will […]

The post Intel officially launch their ARC PRO B70 and B65 GPUs appeared first on OC3D.

Windows 11 to become "calmer and more chill" OS with "fewer upsells"

Expect fewer upsells in the future from Windows 11. Big changes are coming to Windows 11, as Microsoft appears to be finally taking feedback seriously. Microsoft has confirmed that it plans to make Windows 11 more performant and reliable. Additionally, Microsoft appears to be looking into removing mandatory logins from the OS, freeing PC users […]

The post Windows 11 to become "calmer and more chill" OS with "fewer upsells" appeared first on OC3D.

(PR) Dell Announces New Dell Pro Notebooks, Workstations, Monitors and Peripherals

Dell Technologies (NYSE: DELL) today introduces a transformed commercial portfolio spanning Dell Pro notebooks, Dell Pro Precision workstations, desktops, monitors and client peripherals. Thinner, lighter and more powerful, the new lineup brings a bold, refined design language to commercial devices, prioritizing sleeker silhouettes, premium materials and modern details that elevate everyday productivity. With advances in cooling, power efficiency and support for on-device AI, the portfolio delivers improved performance and long battery life in more portable form factors, enabling a consistent, elevated experience for everyone from frontline workers to senior executives.

Why it matters
Users want sleek, powerful devices. IT needs security, manageability and budget discipline. Dell's reimagined commercial portfolio delivers both. Advanced engineering (modular architecture, improved thermals, AI-ready silicon) allows thinner, lighter designs that maintain enterprise-grade performance and control. Organizations can now deploy modern hardware that professionals prefer without compromising on the standards IT demands.

Intel Announces Xeon 600 Workstation Processors Implementing "Redwood Cove" P-cores

Intel launches the new Xeon 600 series "Granite Rapids-WS" processors targeting workstations and HEDTs (high-end desktops), which had been announced in February. These processors mainly target AI development, taking advantage of Intel AMX (FP16) accelerators and a fully fledged AVX-512 instruction set, along with large PCIe I/O. The key talking point of this processor is its Compute complex. The processor comes with up to 86 "Redwood Cove" P-cores. These are the same P-cores powering Core Ultra "Meteor Lake" processors, and they feature Hyper-Threading as well as a full-fat AVX-512 pipeline. The top SKU hence comes in an 86-core/172-thread configuration, with a maximum Turbo Boost frequency of 4.80 GHz. Intel claims that this core configuration offers a 61% multithreaded performance gain over the previous generation.

The Intel Xeon 600 series is built in the LGA4710 package and supports the Intel W890 chipset. It is configured with an 8-channel DDR5 memory interface (16 sub-channels), supporting up to 4 TB of ECC DDR5 memory at speeds of up to DDR5-8000. The processor's PCIe root complex puts out 128 PCI-Express Gen 5 lanes. Intel is also offering CPU overclocking features with these processors. There are as many as 11 processor models, with core counts ranging from the top 86-core/172-thread down to 12-core/24-thread; all models come with 8-channel ECC DDR5 memory and PCI-Express 5.0 x128, and all support the Intel vPro remote manageability feature-set.

Intel Intros Core Ultra Series 3 vPro "Panther Lake" Processors for Commercial Notebooks

Intel today announced its ambitious Core Ultra Series 3 vPro "Panther Lake" mobile processors for commercial notebooks. The processors build on the Core Ultra Series 3 processors launched in the consumer segment earlier this year, bolstering them with the entire Intel vPro platform for remote manageability and enhanced enterprise security. The new "Panther Lake" microarchitecture debuts the Intel 18A foundry node, a must-win node for Intel that offers superior transistor density, energy efficiency, and clock speeds compared to the TSMC N3 foundry node on which Intel built its previous Core Ultra 200V "Lunar Lake" and Core Ultra Series 2 "Arrow Lake" mobile processors.

The new "Panther Lake" architecture introduces the new "Cougar Cove" P-core that's optimized for Intel 18A node with enhancements to its branch predictor, memory disambiguation, and TLB improvements to provide a minor IPC increase over the previous "Lion Cove" P-core. The new "Darkmont" E-core further pushes up IPC over the previous "Skymont." Intel Thread Director sees further improvements for more accurate scheduling. The processors launching today come with the entire constellation of Intel vPro manageability features, including Device Discovery, Innovation Platform Framework, Unique Platform ID, total memory encryption with multi-key, Stable IT Platform, Intel AMT, platform service record, remote erase, one-click recovery, and CET. Security features include Intel TXT, platform trust technology, VT with redirect protection, Boot Guard, BIOS Guard, partner security engine, linear address-space separation, and Intel Threat Detection technology.

Intel Announces Arc Pro B70 and Arc Pro B65 GPUs, Maxes Out Xe2 "Battlemage" Architecture

Intel today announced the Arc Pro B70 and Arc Pro B65 graphics cards for advanced AI compute workloads on workstations, and professional visualization. The two primarily target local inferencing, software development, and deployments in multi-GPU configurations for rack-scale AI GPU compute acceleration. The Arc Pro B70 GPU in particular stands out because it is the most powerful discrete GPU based on the Intel Xe2 "Battlemage" graphics architecture, with 32 Xe cores and a 256-bit wide GDDR6 interface. If you recall, the Arc B580 gaming GPU only comes with 24 Xe cores and a 192-bit interface, and we long wondered if Intel would ever max out the silicon for a more powerful SKU. The Arc Pro B70 is that SKU.

The Intel Arc Pro B70 is configured with 32 Xe cores (Xe2-HPG), 256 XMX engines, and 32 Ray Tracing Units. It comes with 32 GB of GDDR6 memory across a 256-bit wide memory interface, with 608 GB/s of bandwidth on tap. The card comes with a PCI-Express 5.0 x16 host interface. The Arc Pro B70 offers a peak throughput of 367 TOPS (INT8). On the graphics side of things, it supports DirectX 12 Ultimate, OpenGL 4.6, and Vulkan 1.3. Compute APIs include Intel's own oneAPI, OpenCL 3.0, and OpenVINO. Its media engine supports hardware-accelerated encode and decode of AV1, HEVC (H.265), VP9, and H.264. Display outputs include four DisplayPort 2.1 ports. Power draw ranges from 160 W to 290 W depending on partner implementation (230 W for the Intel reference card). Intel will provide certified drivers for Windows 11, Windows 10, and Linux.

Citrix VDI Gets Intel Low-power Island E-core Awareness and HDX Super Resolution

Citrix announced that the Citrix VDI (virtual desktop) platform has been updated with greater awareness of Intel's hybrid core architecture, particularly for processors with low-power island E-cores (LPE cores). The company also announced that Citrix will leverage the Intel Video Processing Library to implement a display-stream super resolution feature that should improve image quality in virtual desktop sessions. Since Citrix is essentially "GeForce NOW for work," the client side of the application has a tiny compute footprint: decoding the display stream from the server and conveying user inputs to it. The new LPE-core awareness lets Citrix Desktop confine its workload to the low-power island, allowing the processor to clock-gate, or even power-gate, the main CPU complex, significantly improving battery life on commercial notebooks handed out by businesses to their employees.

First-party testing by Citrix shows that LPE-core awareness reduces power consumption by up to 25% on commercial notebooks. For administrators, the update requires no IT policy changes: simply deploy the latest version of Citrix Desktop across clients, and the application will automatically detect and work with processors featuring low-power island E-cores. LPE cores were introduced with Core Ultra "Meteor Lake" mobile processors. They gained major significance with Core Ultra 200V "Lunar Lake," where all of the chip's "Skymont" E-cores are located in the low-power island of the SoC tile, separate from the Compute complex. The latest Core Ultra 300 "Panther Lake" builds on the LPE core concept with up to four "Darkmont" LPE cores.

AMD Joins Intel in Raising PC CPU Prices by Up to 15%

Last week, we reported that Intel is preparing to increase CPU prices by 10% across its client PC sector. However, a new report from Nikkei Asia suggests that AMD is also joining this trend, with plans for a PC CPU price hike as well. Reportedly, AMD Ryzen CPUs could see prices rise by 15% compared to the same time last year, when prices were typically around the MSRP at retailers. Now, as CPU demand has depleted inventories and the focus remains on server and data center CPU production, capacity for the client segment has been significantly reduced, leading to little to no inventory for PC enthusiasts. Large OEMs like HP and Dell are among the first to feel the pressure from the dwindling supply chain and have reported a significant gap between demand and supply for their PC systems.

Nikkei also notes that both AMD and Intel have informed their clients that CPU price increases will take effect by the end of March and into April, so the change should start to manifest very soon. What used to be a one- or two-week wait from order to CPU shipment has now stretched to eight to twelve weeks, meaning a CPU batch ordered in April might not arrive until June. The second quarter of this year is expected to be the worst period for supply. Even OEMs willing to pay extra to secure CPUs are finding availability lacking. The CPU shortage is worsening daily, coinciding with the ongoing memory and storage shortage we are already experiencing.

Kentucky farm family rejects $26 million offer for 600 acres of land from unnamed AI data center suitor: declines 7x offer, wants to ‘Stay and hold and feed a nation’

A family in Northern Kentucky received a $26 million offer for half their land, a price worth more than 7 times the going rate for the area. But despite the massive price, they still refused, saying that they "fed a nation off of it."

TradeMatrix – Score any stock with 25 indicators across 3 time horizons


TradeMatrix scores every stock from 0 to 100 using 25 indicators organized into five factors: Technicals, Sentiment, Momentum, Macro, and Quality. Each stock gets three separate scores: short-term, mid-term, and long-term, as different factors matter at different horizons.

Short-term scores weight technicals at 40%, while long-term scores weight business quality at 60%. The same stock can be a Buy for a swing trader and a Hold for a long-term investor, and you can see exactly why. We cover the S&P 500 and NIFTY 500 (Indian market) with full factor breakdowns showing exactly which indicators drive each score. Currently in beta and seeking feedback from active investors.
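The horizon-weighted scoring described above can be sketched in a few lines. The five factor names and the two stated weights (technicals at 40% short-term, quality at 60% long-term) come from the description; every other weight and score below is an invented placeholder, not TradeMatrix's actual model.

```python
# Hypothetical sketch of horizon-weighted stock scoring. Factor names and the
# two anchor weights (technicals 40% short-term, quality 60% long-term) follow
# the description above; the remaining weights and scores are illustrative.

FACTORS = ["technicals", "sentiment", "momentum", "macro", "quality"]

# Per-horizon factor weights; each row sums to 1.0.
WEIGHTS = {
    "short": {"technicals": 0.40, "sentiment": 0.20, "momentum": 0.20,
              "macro": 0.10, "quality": 0.10},
    "long":  {"technicals": 0.05, "sentiment": 0.05, "momentum": 0.10,
              "macro": 0.20, "quality": 0.60},
}

def horizon_score(factor_scores: dict[str, float], horizon: str) -> float:
    """Blend 0-100 factor scores into one 0-100 score for the given horizon."""
    weights = WEIGHTS[horizon]
    return round(sum(factor_scores[f] * weights[f] for f in FACTORS), 1)

# The same stock can rank differently per horizon, as the article notes.
stock = {"technicals": 80, "sentiment": 60, "momentum": 70,
         "macro": 50, "quality": 90}
print(horizon_score(stock, "short"))  # 72.0 (technicals dominate)
print(horizon_score(stock, "long"))   # 78.0 (quality dominates)
```

With strong quality but middling macro, the hypothetical stock scores higher long-term than short-term, which is exactly the swing-trader-Buy versus long-term-Hold divergence the product describes.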

View startup

Anchored Vines – Learn wine with interactive guides, expert reviews, and consulting


Anchored Vines offers wine education, reviews, and consulting for curious drinkers and wineries. Explore interactive resources like the Periodic Table of Wine, regions map, food pairing guides, aroma wheel, and grape encyclopedia, plus blogs and travel itineraries. You can book personalized consulting to build tasting confidence or get winery support, and use the companion iOS app to learn on the go.

View startup

The Kill Chain Is Obsolete When Your AI Agent Is the Threat

In September 2025, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to execute an autonomous cyber espionage campaign against 30 global targets. The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed. This incident is worrying, but there's a scenario that should

Russian Hacker Sentenced to 2 Years for TA551 Botnet-Driven Ransomware Attacks

The U.S. Department of Justice (DoJ) said a Russian national has been sentenced to two years in prison for managing a botnet that was used to launch ransomware attacks against U.S. companies. Ilya Angelov, 40, of Tolyatti, Russia, was also fined $100,000. Angelov, who went by the online aliases "milan" and "okart," is said to have co-managed a Russia-based cybercriminal group known as TA551 (aka

Device Code Phishing Hits 340+ Microsoft 365 Orgs Across Five Countries via OAuth Abuse

Cybersecurity researchers are calling attention to an active device code phishing campaign that's targeting Microsoft 365 identities across more than 340 organizations in the U.S., Canada, Australia, New Zealand, and Germany. The activity, per Huntress, was first spotted on February 19, 2026, with subsequent cases appearing at an accelerated pace since then. Notably, the campaign leverages

TikTok ad creative has a shorter shelf life. Here’s how to keep up

How to build a creative supply chain for TikTok ads that hold up over time

You know the feeling.

You launch a new TikTok ad. Early metrics look great: low CPCs, high engagement, and a ROAS that makes you look like a pro. Then, a few days later, performance slips.

Ad frequency creeps up, the hook rate drops, and you’re suddenly back at the drawing board.

Some call it creative fatigue. On TikTok, it’s closer to creative exhaustion.

A TikTok ad’s “half-life” is shorter than on any other platform. If you’re still treating it like a Meta ad campaign, you’ll lose.

To win, treat creative like a supply chain, not a campaign asset.

Why TikTok creative decays so quickly

On intent-based platforms like Google, Amazon, or Pinterest, people search for things. On social platforms, people look for family, friends, and other people. On TikTok, above all, people go for entertainment (though they still discover things and people).

TikTok’s algorithm favors variety, and you consume content at lightning speed. The moment something feels repetitive or stale, you swipe.

Your creative decays faster because the platform runs on high-velocity novelty. You’re competing with thousands of creators and brands.

If your process relies on long feedback loops, from storyboarding to shooting to editing, you’ll fall behind. By the time your ad goes live, the trend has shifted, the audio is dated, the hooks are stale, and your audience has moved on.

Creative as a supply chain

To keep up, treat your creative like a fast supply chain:

  • Raw materials: Your footage, including b-roll, unboxings, and natural, unpolished reactions.
  • Processing: Rapid assembly with trending hooks, visuals, audio, and varied CTAs.
  • Distribution: High-volume testing to see what the algorithm picks up.

Use ongoing content capture to avoid bottlenecks and keep up with TikTok’s shrinking content half-life.

  • Modular creative: Record five hooks, three body segments, and four CTAs. Get 60 ad permutations from one hour of filming. Block time on your calendar to shoot.
  • Creator-in-residence: Don’t rely on one-off shoots. Hire creators in-house or on retainer to capture footage and document the brand daily. Make content creation more efficient and effective.
  • The 80/20 fidelity rule: Keep 80% of your content lo-fi and native, as if it were shot on a phone. Use the other 20% for higher-production, polished hero assets. Blend into the feed, maximize performance, and elevate your brand where it matters.
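The modular-creative math above is simple combinatorics: every hook, body, and CTA combination is a distinct ad. A quick sketch (the segment names are placeholders):

```python
# Illustrative sketch of the modular-creative arithmetic: 5 hooks x 3 bodies
# x 4 CTAs = 60 distinct ad permutations from one filming session.
from itertools import product

hooks  = [f"hook_{i}" for i in range(1, 6)]   # 5 hook variations
bodies = [f"body_{i}" for i in range(1, 4)]   # 3 body segments
ctas   = [f"cta_{i}"  for i in range(1, 5)]   # 4 calls to action

# Every (hook, body, CTA) triple is one assembled ad variant.
permutations = [" + ".join(parts) for parts in product(hooks, bodies, ctas)]

print(len(permutations))   # 60
print(permutations[0])     # hook_1 + body_1 + cta_1
```

The point of the exercise: filming a handful of interchangeable segments, rather than whole ads, is what makes 60 testable variants possible from an hour of shooting.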

Dig deeper: Cross-platform, not copy-paste: Smarter Meta, TikTok, and Pinterest ad creative

The anatomy of a modular TikTok ad

Every high-performing TikTok ad can be broken down into three distinct modules.

The hook (0:00-0:03)

The most volatile part. It stops the scroll and fatigues fastest.

Film 5–7 variations for each concept. Use pattern interrupts: start mid-action, zoom in, throw a box. Try a negative constraint: “Stop doing [common mistake] if you want [result].”

Use green screen reactions with trending news or customer reviews as the backdrop, with your commentary over it. Strong statements and questions keep it open-ended.

The body (0:04-0:15)

This is where you retain attention, deliver value, and show the “why” or “how.” It’s more educational or narrative and lasts longer than the hook.

Test “us vs. them” in a split-screen showing your product solving a common problem.

Test first-person use in real settings: at home, in the kitchen, outside, at the gym, or at work.

The CTA (last 3-5 seconds)

This is where you close. Test psychological triggers to see what moves the needle:

  • Use scarcity: “Our last drop sold out in 48 hours. Don’t miss this one.”
  • Test low-friction angles: “Take the 2-minute quiz to find your best fit.”
  • Offer incentives beyond “Shop Now” or “Link in bio”: “Use code (X) for (% off) your first order.”

When a winning ad fatigues, don’t kill it. Keep the body and CTA, and swap in a new hook. TikTok weights the first seconds for audience matching, so use that to reset fatigue and extend performance.

When to pause or reallocate

A common mistake is cutting an ad too soon and missing its potential, or letting it run too long and wasting budget.

Your intuition matters, but TikTok’s algorithm sees more. An ad may fatigue with one audience and find a second life with another, so don’t give up too quickly. Here’s when to pause and when to move it elsewhere:

  • Kill signal: If your thumb-stop rate (3-second views/impressions) drops below your benchmark for three straight days, your hook isn’t working, so pause it. If your hook is very fast, use 2-second views/impressions.
  • Iterate signal: If engagement is high but conversions are low, your creative may work, but your offer, CTA, or landing page is adding friction.
  • Algorithm reallocation: Before you delete any asset, test broad targeting, especially with Smart+ campaigns. Let the algorithm find a new audience that hasn’t seen your ad and compare performance to manual targeting.
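The kill-signal rule above is mechanical enough to automate. A minimal sketch, where the benchmark and daily numbers are invented for illustration:

```python
# Minimal sketch of the "kill signal" rule: pause a creative when its
# thumb-stop rate (3-second views / impressions) stays below your benchmark
# for three consecutive days. Benchmark and sample data are illustrative.

def thumb_stop_rate(views_3s: int, impressions: int) -> float:
    """Share of impressions that watched at least 3 seconds."""
    return views_3s / impressions if impressions else 0.0

def should_pause(daily_rates: list[float], benchmark: float,
                 streak: int = 3) -> bool:
    """True when the last `streak` days are all below the benchmark."""
    recent = daily_rates[-streak:]
    return len(recent) == streak and all(r < benchmark for r in recent)

# One ad's daily thumb-stop rates; benchmark of 0.18 is a placeholder.
rates = [0.24, 0.21, 0.17, 0.15, 0.14]
print(should_pause(rates, benchmark=0.18))  # True: three straight days below
```

A single bad day does not trigger a pause, which matches the article's warning against cutting an ad too soon; only a sustained three-day slide does.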
A visual of a woman doing her face makeup with word bubbles and graphs floating around her.
Source: TikTok

With fast iteration cycles, your TikTok budget can’t be static. Dedicate 20% to 30% of your monthly budget to testing new creative concepts. This budget isn’t for hitting your target ROAS; it’s for buying data and insight.

Once you find a winner, move it into scaling campaigns. This prevents performance from dropping when a single creative hits its half-life.

Dig deeper: How to use TikTok Creator Search Insights to find content opportunities

Keep on capturing

Brands winning on TikTok aren’t the ones with the biggest budgets or name recognition. They create and test the most.

Capture everything, from packaging, shipping, and unboxings to product use and customer testimonials, as raw material in your creative supply chain. Shorten the distance between a brand event and launch.

The shrinking ad half-life won’t slow you down. It will become your advantage.

G.Skill showcases DDR5-10000 speeds with Intel Core Ultra 270K PLUS CPU

G.Skill pushes its memory speeds past 10K with Intel’s latest CPUs

G.Skill has confirmed that its DDR5 memory kits are ready for Intel’s new Core Ultra 200S PLUS CPUs (see our review here). This includes support for G.Skill’s standard DDR5 (DIMM) modules and its CU-DIMM XMP 3 memory kits. G.Skill has noted that many of […]

The post G.Skill showcases DDR5-10000 speeds with Intel Core Ultra 270K PLUS CPU appeared first on OC3D.

The Super Micro AI accelerator smuggling scandal proves how cut-throat the global AI race has become β€” as global trade evolves, so does export control evasion

One of the co-founders of American server company Super Micro has been arrested and charged with smuggling AI chips to China in deals worth several billions of dollars. Several managers and contractors are also implicated, and one remains a fugitive at the time of writing.

CANIQO – Analyze photos of your dog to track health scores and catch changes early


CANIQO is an AI-powered dog health monitoring web app that analyzes photos of your dog to detect visible health signals, such as coat condition, skin appearance, and body posture, and turns them into an objective health score. Dog owners use PetSignal to track their dog's health over time, spot changes early, and get clear guidance on when a vet visit makes sense. It takes less than two minutes, works from any phone, and builds a health timeline that makes every vet appointment more informed.

View startup

With enticing visual enhancements, creative new bosses, and multiplayer mayhem, is Super Mario Bros. Wonder on Nintendo Switch 2 worth the upgrade? Here’s what I think after collecting every Wonder Seed

I spent 25 hours stomping on Koopas and Goombas in Super Mario Bros. Wonder for Switch 2, and although its multiplayer minigames left me conflicted, I still had a wonderful time.

The first-party data illusion by AtData

For the past several years, marketing strategy has reorganized itself around a simple premise. Third-party data is fading. Privacy expectations are rising. The solution, we are told, is first-party data.

Collect more of it. Centralize it. Build the customer view around it.

In many ways, the shift was necessary. Direct relationships with customers are more durable than rented audiences. Consent and transparency matter. Organizations that invested early in their own data ecosystems are better positioned today than those that relied entirely on external signals.

But the industry’s confidence in first-party data has grown so strong that it now obscures a more complicated reality.

Owning customer data does not automatically translate into understanding customers.

Most marketing leaders have sensed this tension already. Despite increasingly sophisticated technology stacks, many organizations still struggle with familiar questions. Which records represent active individuals? Which identities are stale or misattributed? How much of the customer view reflects current behavior versus historical assumptions?

These are not philosophical concerns. They surface in everyday operational decisions. Campaigns that reach fewer real customers than expected. Personalization efforts that plateau. Measurement models that appear precise but produce inconsistent outcomes.

The problem is not the absence of data. If anything, the opposite is true.

The problem is the assumption that the data sitting inside our systems still reflects reality.

When first-party data becomes historical data

One of the quiet characteristics of customer data is how quickly it shifts from present tense to past tense.

Most organizations gather identity information at moments of interaction. Account creation, purchases, subscriptions, service requests. These events create durable records that enter CRM systems, marketing platforms and data warehouses.

From that point forward, the records largely persist as they were captured.

What changes is the world around them.

Consumers rotate devices. Email addresses evolve from primary to secondary. People move, change jobs, create new accounts, abandon others. Behavioral patterns shift with new platforms, new habits, and new privacy controls.

The record still exists, but the certainty surrounding the identity begins to loosen.

Marketing teams encounter this reality in subtle ways. Lists that appear healthy but deliver diminishing engagement. Customer profiles that fragment across systems. Identity graphs that require constant reconciliation as signals drift out of alignment.

None of this means first-party data is wrong. It simply means it ages.

The moment of collection is precise. The months and years that follow are less so.

The distance between records and reality

The idea of a unified customer profile has become foundational to modern marketing infrastructure. Customer data platforms, identity graphs and advanced analytics environments all attempt to bring scattered signals together into a coherent picture.

When the signals align, the results can be powerful.

But the effectiveness of these systems depends heavily on the integrity of the identifiers entering them. Email addresses, login credentials, device associations and other identity anchors serve as the connective tissue between records.

When those anchors drift or degrade, the unified profile begins to lose clarity.

This is not a failure of the technology itself. Most identity platforms perform exactly as designed. They connect the signals available to them.

The challenge is that many of those signals were captured months or years earlier, during moments when the system had limited visibility into the broader identity context surrounding the individual.

As the digital environment evolves, the original record becomes one reference point among many.

Marketing leaders recognize this gap when their systems produce technically accurate profiles that still fail to explain current customer behavior. The database reflects what was known. The customer reflects what is happening now.

Closing that gap requires something more dynamic than stored attributes alone.

The value of activity signals

In recent years, some organizations have begun looking beyond the traditional boundaries of customer records and focusing more closely on signals that indicate whether an identity is still active within the broader digital ecosystem.

Activity signals provide a different kind of intelligence.

Instead of asking what information was collected about a customer in the past, they ask whether the identity attached to that information continues to exhibit real-world behavior today.

  • Is the email address still being used?
  • Does the identity appear in recent digital interactions?
  • Are the signals surrounding it consistent with genuine consumer activity?

These questions are becoming increasingly important for teams responsible for both growth and risk management.

For marketing, activity signals help clarify which audiences remain reachable and which identities have quietly gone dormant. For fraud teams, they help differentiate legitimate consumers from synthetic identities that appear valid on the surface but lack authentic behavioral patterns.

Both disciplines are ultimately trying to answer the same question.

Does this identity correspond to a real person who is active in the digital world right now?

Stored data alone rarely answers that question with confidence.

A more durable identity anchor

Among the many identifiers circulating through the digital ecosystem, one has proven particularly resilient over time.

Email.

For decades it served as both a communication channel and a persistent identity anchor. It appears in authentication systems, commerce transactions, subscriptions, customer service interactions and countless other digital touchpoints.

That ubiquity produces a secondary effect. Email addresses generate a continuous stream of activity signals that reflect how identities move through the online world.

When those signals are analyzed across large networks, they reveal patterns that extend far beyond a single company’s customer database.

They can indicate whether an identity is actively engaged in digital life or has fallen silent. They can highlight inconsistencies that suggest risk. They can surface connections that help reconcile fragmented customer views.

In other words, they transform a simple identifier into a dynamic indicator of identity health.

Organizations that understand this dynamic tend to treat email differently. It becomes less of a campaign endpoint and more of a reference point for understanding identity across channels.

Rethinking what it means to know the customer

Over the past decade, marketing technology has made extraordinary progress in storing and organizing customer data. Few organizations today lack the infrastructure to capture and analyze enormous volumes of information.

The next frontier is not accumulation. It is validation.

Knowing a customer increasingly depends on the ability to verify that the identities inside a database still correspond to real individuals with ongoing digital activity.

This shift changes how teams think about data quality.

Instead of focusing solely on completeness, forward-looking organizations pay closer attention to vitality. Which identities remain active. Which have quietly faded. Which exhibit patterns that suggest fraud or synthetic creation.

These distinctions influence everything from campaign reach to attribution accuracy to risk exposure.

When identity signals are strong, the rest of the marketing ecosystem performs more reliably. Personalization becomes more relevant. Measurement reflects real outcomes. Customer experiences align more closely with actual behavior.

When identity signals weaken, even the most advanced tools begin operating on uncertain ground.

Moving beyond the illusion

The industry’s embrace of first-party data was an important correction after years of dependence on opaque third-party sources.

But ownership alone does not guarantee clarity.

Customer records capture moments in time. The people behind them continue to evolve.

For organizations that want to truly understand their customers, the challenge is no longer simply collecting data. It is maintaining an accurate connection between stored identities and real-world activity.

That requires looking beyond the database itself and paying closer attention to the signals that reveal whether an identity remains alive in the digital ecosystem.

Companies that make that shift discover something important.

The most valuable customer data is not the information they collect once.

It is the intelligence that helps them keep that data connected to real people over time.

(PR) Gamdias Launches Atlas M5 Series Case with Curved Tempered Glass Variant

GAMDIAS, a global leader in high-performance gaming hardware, announces the ATLAS M5 Series, a new mid-tower case lineup designed for immersive performance and standout aesthetics. Featuring a panoramic showcase and three built-in NOTUS M1 ARGB PWM fans, the case delivers both visual impact and efficient cooling. Aligned with GAMDIAS' 2026 vision, "AUGMENTED IMMERSION," ATLAS M5 Series showcases the brand's dedication to immersive design and high-grade engineering. "ATLAS M5 Series reflects our commitment to gamers who expect both performance and aesthetics, our goal is to deliver a truly immersive gaming experience at an accessible price" said Stimson Wang, CEO of GAMDIAS. The lineup features the ATLAS M5 CG, distinguished by its seamless one-piece curved tempered glass panel. An embodiment of "Immersion" and a clear expression of GAMDIAS' vision for the year.

The ATLAS M5 Series is designed to present your entire build through a refined panoramic showcase. The ATLAS M5 CG elevates the presentation with a striking L-shaped curved tempered glass panel, while the ATLAS M5 features seamless tempered glass panels on the front and side. This immersive design removes visual corner obstructions, delivering an uninterrupted view of the internal components.

(PR) MSI Announces Safeguard+ for its MPG Ai TS Series PSUs

High-end GPUs are hungrier than ever, and connector safety is a top priority. MSI's new MPG Ai TS Series PSU introduces GPU Safeguard+, a proactive protection for 12V-2x6 interfaces that stops hardware damage before it even starts.

Proactive Detection, Not Reactive Shutdown
Most PSUs only react after a crash. GPU Safeguard+ is differentβ€”it spots hidden issues in real-time to stop damage or melting before it happens. Say goodbye to sudden system crashes. With our "Save Buffer," you'll get instant alerts from a hardware buzzer and a software pop-up screen. This gives you plenty of time to save your work or exit your game safely. Keep your hardware safe and your progress secure.

Intel's Binary Optimization Tool Results Marked as Potentially Invalid by Geekbench

Alongside "Arrow Lake Refresh," Intel released a new Binary Optimization Tool, available for the Core Ultra 270K Plus and 250K Plus SKUs. However, since the tool essentially changes the way .exe applications run, scores on popular benchmarking applications may be deemed invalid. According to Primate Labs, the developer behind Geekbench, use of Intel's Binary Optimization Tool will result in a special flag on Geekbench scores. Because the tool modifies the instruction sequences the CPU executes, benchmark scores deviate from the standard range. In a standardized benchmarking environment this is a serious problem, as the inner workings of the Binary Optimization Tool are unknown, leaving benchmark makers like Primate Labs to deal with a black box. To flag the use of this tool, Geekbench will now display the warning "This benchmark result may be invalid due to binary modification tools that can run on this system."

In TechPowerUp's own testing, we found that Intel's Binary Optimization Tool boosts Geekbench v6 single-core performance by 8.2% and multi-core performance by 7.8%, for an average uplift of about 8%. Games like Cyberpunk 2077 see smaller gains, while select titles like Shadow of the Tomb Raider can see up to a 22% increase. For now, Intel supports only 12 games, a list that will expand as the company's labs release more optimizations. For gamers this is fantastic news, as the performance uplift is essentially free with no drawbacks; for benchmarks it is a potential issue, since no one outside Intel knows exactly how the tool runs or how the code paths get modified.

Geekbench declares all Intel Core Ultra PLUS CPU benchmarks potentially “invalid”

Primate Labs call Geekbench results with Intel’s IBOT tool “invalid”

Primate Labs, the company behind Geekbench, the popular cross-platform benchmarking tool, has responded to the release of Intel’s Core Ultra 200S PLUS series CPUs (see our review here). The company has stated that all Geekbench 6 results using Intel’s new CPUs “may be invalid” due […]

The post Geekbench declares all Intel Core Ultra PLUS CPU benchmarks potentially “invalid” appeared first on OC3D.

(PR) GAMEMAX Introduces MAX PB Series ATX 3.1 80 Plus Bronze PSUs

GAMEMAX, a rising innovator in PC gaming hardware, today announced the MAX PB-Series 80 Plus Bronze Power Supply, a new lineup designed to deliver stable, efficient, and reliable power for modern gaming systems.

Engineered to meet the demands of today's hardware, the GAMEMAX MAX PB-Series is fully compliant with the latest Intel ATX 3.1 specification, supporting high transient power loads from next-generation GPUs while maintaining consistent performance. Built with premium internal components and a robust power design, the MAX PB-Series is positioned as a dependable and cost-effective solution for entry-level and mainstream PC builders.

(PR) COLORFUL Presents iGame Z890 ULTRA-S W and iGame Z890M ULTRA Z Motherboards

Colorful Technology Company Limited, a leading brand in gaming PC components, gaming laptops, and Hi-Fi audio products, proudly introduces the iGame Z890 ULTRA-S W and iGame Z890M ULTRA Z motherboards, designed to support the latest Intel Core Ultra 200S Plus Series processors. Designed to meet the needs of modern gamers, creators, and PC enthusiasts, both new iGame ULTRA Series motherboards feature a white color scheme that complements the company's popular iGame ULTRA W Series graphics cards.

With high-speed memory support, robust power delivery, and user-centric features such as simplified building mechanisms and cleaner cable management approaches, the new iGame Z890 series aims to deliver a balance of performance, usability, and distinctive visual identity for today's high-performance PC builds.

Prowl – Catch every competitor move before it costs you customers


Prowl automates competitor tracking for pricing, website changes, hiring, news, and social channels. It delivers clear weekly reports explaining what changed, why it matters, and how to respond, plus real-time email or Slack alerts for critical updates. Use dashboards for trend analysis, side-by-side comparisons, and sales battlecards. Get started free for two competitors with no setup required.

View startup

FCC Bans New Foreign-Made Routers Over Supply Chain and Cyber Risk Concerns

The U.S. Federal Communications Commission (FCC) said on Monday that it was banning the import of new, foreign-made consumer routers, citing "unacceptable" risks to cyber and national security. The action was designed to safeguard Americans and the underlying communications networks the country relies on, FCC Chairman Brendan Carr said in a post on X. The development means that new models of

Lofree Hyzen Keyboard Features All-New Hybrid Mechanical-Magnetic Switches in World First

Lofree is no stranger to peculiar peripheral designs, with the brand having previously released devices like the Hypace wireless gaming mouse and the Lofree Flow 2, with all its ergonomic quirks. It seems as though Lofree's next release, the Hyzen keyboard, may be even more unique than those examples. The Hyzen is a futuristic but otherwise unassuming 65% keyboard made out of CNC-cut aluminium, but its design conceals a few nifty features and an entirely new switch design that supposedly combines the best of both mechanical and magnetic tech. The Hyzen will be a wireless keyboard with a gasket mount design, and Lofree claims that its 10,000 mAh battery can last up to 80 hours. It will launch on Kickstarter on April 23, but early bird reservations are available on the Lofree site ahead of the official launch. Kickstarter pricing includes a tri-mode wireless version priced at $189 (with a claimed $299 MSRP further down the line) and a $169 wired version that will later retail for $279.

The Nexus switch is a hybrid mechanical switch, designed by Lofree in collaboration with Kailh, that features both a magnet in the stem for TMR functionality and metal contacts and pins for traditional mechanical operation. Lofree says that the new switch design, which it claims is the first of its kind, combines the tactile feedback of mechanical switches with the low latency, customizable actuation distance and analog features of TMR switches. Aside from the new switches, the keyboard also has a physical toggle that switches the number row to a function row, with the function indicated by a row of LEDs above the num row when the function row is active. The Hyzen also has a knob on the back edge for volume control and a switch to select between 2.4 GHz, Bluetooth, and wired operation. It will also be powered by a Nordic nRF54L series MCU and be capable of 8 kHz polling.

QR Dex – Create dynamic QR codes with your logo and track scans in real time


QR Dex lets you create, brand, and manage dynamic QR codes while tracking every scan with real-time analytics. You can customize codes with your logo and colors, choose from URL, Email, Phone, SMS, WhatsApp, and Wi-Fi types, and update destinations anytime without reprinting.

Collaborate with your team using folders and roles, view campaign performance across locations, and export reports. The platform secures data in transit and offers SSO for teams that need centralized control.

View startup

VaultIt – Save your child's art and quotes in a private, searchable time capsule


VaultIt helps parents preserve their children's artwork, photos, and quotes in a secure, organized space. Capture memories quickly, tag by child, date, or theme, and find milestones fast without paper clutter. Choose who sees what, keep everything private, and upgrade for unlimited memories, advanced tags, custom timelines, and HD media. Build a digital time capsule today and later turn it into beautiful printed albums.

View startup

NutritionGuide – Reduce fitness burnout with a taste-first approach to meal tracking


Most nutrition apps start with a calorie target and work backward. NutritionGuide starts with the food you love β€” your cuisine preferences, health condition, and lifestyle β€” and builds a 7-day guide from there. There's no calorie counting or macro tracking. Balance is shown as food groups, not numbers. Every meal is swappable, and your guide regenerates every week.

View startup

OtterQuant – Track stocks with AI reports, Congress trades, and Reddit sentiment


OtterQuant delivers live market intelligence with AI-powered analysis and interactive data. You can track custom portfolios, generate instant financial reports with OtterBot, and chat to screen stocks using natural language. Explore a congressional trade tracker, daily Reddit sentiment, and full earnings call transcripts. View fast intraday charts, analyst targets, calendars, and news for thousands of US tickers. Use free core tools or upgrade for faster updates and higher AI limits.

View startup

Upgraded PSSR Uses INT8 FSR 4 Implementation That AMD Denied Older RDNA 3 GPUs

Shortly after AMD released FSR 4, claiming that the tech was exclusive to the latest RDNA 4 GPUs, the company seemingly accidentally published the libraries that make up the backbone of the tech, revealing that there may have been a version of FSR 4 planned for RDNA 3 and RDNA 2 GPUs. While this open-sourced oops was later used by modders to bring support to the aforementioned Radeon RX 7000 and 6000 GPUs, a recent Digital Foundry interview with Sony's Mark Cerny suggests that the INT8 version of FSR 4 may have been a compatibility version of the upscaling tech that would later make an appearance as Upgraded PSSR (or PSSR 2.0) on the PlayStation 5 Pro and its RDNA 2 GPU.

According to Cerny, "FSR Redstone and the new PSSR have somewhat different implementations due to the underlying hardware, e.g. FSR Upscaling uses 8-bit floating point and PSSR uses 8-bit integer." He adds that "in practice, the same model is used, but it's trained on different data, e.g. if targeting a 2:1 fixed upscale then the training data used is just for that upscaling ratio - and that different training results in different parameters...not seeing too much difference in results, the various flavors in the updated FSR Upscaling really are rather close to the new PSSR." He also mentions that, on PC, because players are generally so much closer to their monitors than living room gamers are to their TVs, the goals of FSR and PSSR differ slightly.
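For readers unfamiliar with the distinction Cerny draws, the standard way to store network weights as 8-bit integers is symmetric quantization: each float maps to an int8 value plus a shared scale factor. A minimal, purely illustrative sketch of the generic technique (not AMD's or Sony's implementation):

```python
# Sketch of symmetric INT8 quantization -- the generic technique behind
# "8-bit integer" network weights (illustrative only).

def quantize_int8(weights):
    """Map floats to int8 values plus one per-tensor scale factor."""
    scale = max(abs(w) for w in weights) / 127.0  # assumes a nonzero tensor
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]
q, s = quantize_int8(w)
approx = dequantize(q, s)
# Each value now fits in 8 bits; the precision loss depends on the scale,
# which is one reason the same model gets retrained or recalibrated per
# numeric format, as Cerny describes.
```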

New Xbox CEO Plans Game Pass Changes With Cheaper Tier to Draw Subscribers

Asha Sharma has faced her fair share of criticism since stepping into the role of CEO of Microsoft Gaming earlier in 2026, but she has repeatedly stated that she would try to guide the company in what she calls a "return to Xbox," with a focus on in-house hardware. According to a new report by The Information, one of Sharma's first moves as CEO of Microsoft Gaming will be to introduce a new lower-priced tier of Game Pass in order to attract new gamers to the service and make it enticing to a broader range of consumers.

This comes shortly after Microsoft renamed the Windows Full Screen Experience to Xbox Mode, perhaps a rebranding in the name of unifying Microsoft's gaming offerings, a mission that is clearly part of Microsoft's plan, given that its next-gen console is confirmed to be a PC-console hybrid. Previously, there were rumors of an ad-supported tier for Xbox Cloud Gaming, but Microsoft also increased the price of Game Pass across the board as recently as October 2025. A Microsoft executive previously confirmed that Game Pass was profitable as a service, but that Xbox had to emphasize flexibility in its strategy, and introducing a cheaper Game Pass tier may be a viable way to draw in new Game Pass gamers, some of whom will presumably hop over to the more premium tiers, depending on Microsoft's implementation of the new entry-level tier.

OpenAI Shuts Down Sora API and App, Disney Withdraws $1 Bn Investment

There has been much said about OpenAI's Sora video app, both positive (that the tech would make video creators and social media influencers obsolete) and negative (that it would kill creativity and fill the internet with soulless slop), but it seems like neither side of that argument will ever get to find out if they were correct. In a recent post on X, OpenAI announced that it would be shutting down both the Sora app and API, suggesting a pivot away from video generation at OpenAI. A final date for the app and API closure has not yet been announced, but OpenAI says it will reveal more information soon.

Following the announcement, The Hollywood Reporter reported that a Disney spokesperson had confirmed that the media giant would be withdrawing a previously announced $1 billion investment into OpenAI's video generation model and product. Disney's statement reads: "As the nascent AI field advances rapidly, we respect OpenAI's decision to exit the video generation business and to shift its priorities elsewhere. We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators." The closure of Sora comes after backlash and legal action against OpenAI from a number of publishers, including Square Enix, Bandai Namco, and Nintendo over AI-generated videos created with some of their characters and other IPs.

ManyLens – Compare philosophy, psychology, and faith on your real dilemmas


ManyLens lets you type a real-life dilemma and view structured perspectives side by side from philosophy, psychology, religion, and other traditions. It keeps each lens distinct, highlights common ground, and helps you reflect by saving insights over time. Use it to compare reasoning, spot convergences, and make decisions with context rather than one blended answer.

View startup

Taskadactyl – Gamified task app for ADHD brains with real rewards and quests


Reward your brain, feed your Dactyl, get stuff done! Taskadactyl is a gamified task app built for ADHD brains bored by other productivity tools. Your tasks don't get to win anymore. Your Dactyl eats first. Tasks become quests, completions trigger real rewards, with over 50 badges and game themes. Something unlocks at 3 referrals, with clues in the app.

Built by an ADHD founder who got tired of being eaten alive and decided to build the predator instead.

View startup

TinyCashflow – A cashflow tracker with an infinite future timeline


TinyCashFlow is a manual cashflow tracker with an infinite timeline. Instead of just showing your past spending, it projects forward: scroll to any future date and see your exact balance, accounting for all your recurring transactions. Built around a spreadsheet-style interface, everything is on one screen. Edit inline, filter on the fly, and quick-sum any selection. It supports multiple currencies and crypto, and shows a running net worth column across all your accounts. No bank connections or sign-up are required. The free tier is genuinely useful, while premium adds cloud sync, mobile, and multi-sheet support. It works on Mac, Windows, iOS, and Android, and is fully offline-first.
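The forward-projection idea, a balance at any future date derived from recurring transactions, can be sketched in a few lines. All names and figures below are hypothetical, not TinyCashFlow's code:

```python
# Sketch of the "infinite timeline" idea: project a balance to any future
# date by rolling forward every recurring transaction.
# (Hypothetical illustration -- not TinyCashFlow's implementation.)
from datetime import date, timedelta

def projected_balance(balance, recurring, start, target):
    """recurring: list of (amount, interval_days, next_due) tuples."""
    for amount, interval, due in recurring:
        while due <= target:
            if due >= start:
                balance += amount
            due += timedelta(days=interval)
    return balance

recurring = [
    (3000.0, 30, date(2026, 4, 1)),   # salary, roughly monthly
    (-1200.0, 30, date(2026, 4, 5)),  # rent
    (-15.0, 7, date(2026, 3, 30)),    # weekly subscription
]
today = date(2026, 3, 26)
future = projected_balance(1000.0, recurring, today, date(2026, 6, 30))
```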

View startup

1047 Games Announces New Movement Shooter With Titanfall and CoD Origins Alongside Splitgate 2 Season 2

One of the reasons players loved the original Splitgate was its fast-paced, highly kinetic gameplay, which relied as much on positioning as it did on gunplay and tactics. Despite its somewhat rough start to life, Splitgate: Arena Reloaded featured many of those same mechanics. In a recent video announcing Splitgate: Arena Reloaded Season 2, Ian Proulx, CEO of 1047 Games, announced that "a small section of the team" has started working on a new movement shooter that takes inspiration from Titanfall and Call of Duty: Black Ops 3. While he didn't share much more beyond that, Proulx did ask the community for feedback on what players would like to see from a game like that and has published a form for playtesters to apply to test the new game. The fact that the studio is already recruiting playtesters suggests that the game has been in development for a good while. The studio was also sure to follow up the announcement with a post on X clarifying that 1047 Games is not abandoning Splitgate: Arena Reloaded now that there's another game in the works.

Proulx says that part of the reason for pursuing a new game is that 1047 Games has always dreamed of being a multi-game studio, though the timing of the announcement suggests that a second project may have been partly motivated by the recent backlash against 1047 Games following the middling launch of Splitgate: Arena Reloaded. For its part, Splitgate: Arena Reloaded Season 2 will launch before the end of March and introduce a number of new features, many of them based on community feedback, including a more cohesive theme, a new battle pass and accompanying skins, three new maps, a new biome that's a twist on a biome from Splitgate 1, and classic Splitgate 1 TDM gameplay, which has been heavily requested by the community. Season 2 will also reintroduce time trial races from Splitgate 1, replete with user-created maps with their own leaderboards. Players will also be able to buy the entire Splitgate: Arena Reloaded Season 2 battle pass with Splitcoin, the in-game premium currency, rather than spending real money beyond any Splitcoin they may already have.

Steam Breaks Concurrent Player Count Record Yet Again

Not four months ago, we reported on Steam breaking 42 million concurrent players, but much like Crimson Desert's recent success, Valve's gaming platform doesn't seem to be slowing down, even during an unlikely time of year. According to SteamDB, Steam has once again broken its own record for concurrent players, reaching 42,318,602 players on Sunday, March 22, 2026, at 14:20 UTC. Steam's own statistics report a peak concurrent player count of 42,282,922 players, which is a little less than SteamDB reports, but still a record high.

Compared to February 2026, which itself was a slower month for the gaming giant, Steam saw an increase of over a million players at its peak. This doesn't necessarily mean all of those players were in-game; in fact, SteamDB reports that there were only 13,731,783 players in-game at the time of the new record. The new player count record was set on a Sunday, which tends to be Steam's busiest day of the week both for players on the platform and in-game. This new player-count record comes in spite of a recent spate of layoffs and studio closures, which have historically signalled a downturn in the gaming industry.
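Taking SteamDB's two figures at face value, the in-game share at the moment of the record works out to roughly a third of all connected users:

```python
# Quick arithmetic on the figures above: what fraction of connected Steam
# users were actually in a game at the moment of the record (per SteamDB).
peak_online = 42_318_602
in_game = 13_731_783
share = in_game / peak_online
print(f"{share:.1%}")  # roughly a third of connected users were in-game
```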

(PR) Denon Announces the New Denon Home 200, 400 & 600 Speakers

When you walk into someone's home, you can usually tell what matters to them. Some rooms show it in the colors on the wall. Others in the art they choose or the books left open on a side table. Increasingly, there is another kind of expression. It is not seen as much as it is felt. It is the presence of technology that does not draw attention to itself but transforms the room when you need it to.

This idea guided the creation of the newest generation of Denon Home wireless speakers. The Denon Home 200, 400 and 600 were designed not only to deliver exceptional sound, but to sit comfortably within the fabric of everyday life. Denon's engineering and design teams began with a simple question: What if a speaker could truly feel like part of the home?

Epic Games Lays Off 1,000 Workers Amid Fortnite Downturn

Epic Games has just announced a round of layoffs in an effort to save costs following a downturn in engagement on Fortnite. A recent post to the Epic Games news page explains that "The downturn in Fortnite engagement that started in 2025 means we're spending significantly more than we're making, and we have to make major cuts to keep the company funded," and Epic says that the layoffs are part of company-wide cost-cutting measures that aim to reduce expenses by over $500 million in order to stabilize the company's finances. These other cost-savings measures will affect contracting, marketing, and hiring at the company, and any staff affected by the layoffs will receive severance packages of at least four months of base pay, with that amount increasing depending on tenure at the company.

The Epic layoffs will affect 1,000 workers, and Epic seems to blame both industry-wide challenges and its reliance on Fortnite as its major cash cow for its financial troubles, with Epic CEO Tim Sweeney emphasizing that "the layoffs aren't related to AI" and explaining that Epic wants "as many awesome developers developing great content and tech as we can." He also says that Epic has struggled to deliver "Fortnite magic with every season," and notes that the game is in the early stages of returning to mobile platforms, but that the goal going forward will be to focus on "fresh seasonal content, gameplay, story, and live events," along with Fortnite's user-generated content and the upgrade from Unreal Engine 5 to Unreal Engine 6. The full statement regarding Epic's layoffs follows.

Microsoft and Nvidia launch AI partnership to speed up nuclear power plant permitting and construction β€” simulation tools and generative models could hasten historically lengthy processes

Microsoft and Nvidia are joining forces to accelerate the construction of nuclear power plants for power-hungry AI data centers. The partnership combines generative AI, digital twin simulation, and Nvidia's Omniverse platform to streamline the nuclear lifecycle from permitting through operations.

Anonymize360 – Protect sensitive data in AI chats with on-device anonymization


Anonymize360 protects sensitive data in AI chats by rewriting it on your device before it leaves and restoring it on return. It detects PII like names, addresses, SSNs, and medical or financial details, replaces them with tokens, and encrypts the originals locally with AES-256. The system runs on-device with a zero-knowledge design and works seamlessly with AI models. Enterprises gain privacy-by-default workflows and compliance support, while individuals can download and start with a free trial.
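The detect-tokenize-restore flow described above can be sketched as follows. This is a toy illustration with a single regex and an in-memory map; Anonymize360's actual detection and AES-256 encrypted storage are far more involved:

```python
# Toy sketch of the tokenize-before-send / restore-on-return flow
# (illustrative only -- not Anonymize360's implementation).
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(text, vault):
    def repl(match):
        token = f"<PII_{len(vault)}>"
        vault[token] = match.group(0)  # a real product would encrypt this locally
        return token
    return SSN.sub(repl, text)

def restore(text, vault):
    for token, original in vault.items():
        text = text.replace(token, original)
    return text

vault = {}
outbound = anonymize("My SSN is 123-45-6789, please check.", vault)
# outbound now contains "<PII_0>" instead of the SSN; the model never sees it.
inbound = restore("Noted, <PII_0> is on file.", vault)
```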

View startup

CrewBase – Find maritime jobs fast with AI matching, auto-apply, and alerts


CrewBase connects seafarers and offshore professionals with verified maritime jobs using AI-powered matching, smart filters, and real-time alerts. It lets you search instantly, set auto-apply rules, and generate a polished CV, with seamless access on iOS, Android, and web. Employers post vacancies in minutes, search a growing verified talent pool, and manage applications with secure proxy email and desktop-optimized workflows, enabling fast, targeted maritime recruiting at scale.

View startup

Google releases March 2026 spam update

Google released its March 2026 spam update today at 3:20 p.m. It's the second announced Google algorithm update of 2026, following the February 2026 Discover core update.

  • This is the first spam update of 2026.
  • Google's most recent spam update was in August 2025.

Timing. This update may only "take a few days to complete," Google said. On LinkedIn, Google added:

  • "This is a normal spam update, and it will roll out for all languages and locations. The rollout may take a few days to complete."

Why we care. This is the second announced Google algorithm update of 2026. It's unclear what spam this update targets, but if you see ranking or traffic changes in the next few days, it could be due to it.

More on spam update. Google's documentation says:

"While Google's automated systems to detect search spam are constantly operating, we occasionally make notable improvements to how they work. When we do, we refer to this as a spam update and share when they happen on our list of Google Search ranking updates.

For example, SpamBrain is our AI-based spam-prevention system. From time to time, we improve that system to make it better at spotting spam and to help ensure it catches new types of spam.

Sites that see a change after a spam update should review our spam policies to ensure they are complying with those. Sites that violate our policies may rank lower in results or not appear in results at all. Making changes may help a site improve if our automated systems learn over a period of months that the site complies with our spam policies.

In the case of a link spam update (an update that specifically deals with link spam), making changes might not generate an improvement. This is because when our systems remove the effects spammy links may have, any ranking benefit the links may have previously generated for your site is lost. Any potential ranking benefits generated by those links cannot be regained."

Update, March 25. The update completed in less than 24 hours. See: Google March 2026 spam update done rolling out

ASUS Plans 30% PC Price Increase in Taiwan, Other OEMs to Follow

PC pricing in Taiwan may see a significant increase next quarter, as ASUS Joint Technology Systems Division General Manager Yi-Hsiang Liao has announced a planned 30% price hike across the company's entire product line. He explains that the extremely high costs of DRAM and SSD storage, combined with a shortage of CPUs, are driving this increase. ASUS also claims that this issue is not limited to their company, as every Taiwanese PC maker will face similar challenges. Reportedly, ASUS did not comment on whether the price increase will affect overseas markets or remain exclusive to Taiwan. However, it seems likely that rising component costs in Taiwan will also impact Western markets, which have already experienced significant price increases in recent months.

UDN, machine translated: "Yesterday, ASUS, in partnership with Qualcomm, held a press conference for its new Zenbook A16 laptop. During an interview, Liao Yi-hsiang, General Manager of ASUS United Technology Systems Business, revealed that ASUS has confirmed that PC prices in Taiwan will increase by 25% to 30% or more in the second quarter, with varying increases across different models."

AMD "Medusa Point" APU Gets "GFX1171" and "GFX1172" RDNA 4m GPU Targets

AMD's RDNA 4m graphics might be the company's most mysterious GPU IP, as the company is reportedly rebranding some of its RDNA 3.5 IP to support INT8 data types and FSR 4 technology. We previously reported that AMD designated the GFX1170 target for RDNA 4m. However, in the latest merge request for the LLVM compiler, AMD added two new software IDs: GFX1171 and GFX1172. These targets are not true RDNA 4 GPUs, which belong to a GFX12 branch, but rather extensions of RDNA 3. What was thought to be RDNA 3.5 has now evolved into RDNA 4m, which will power AMD's Ryzen 500 "Medusa Point" series of APUs. With RDNA 3.5 / RDNA 4m expected to be used by AMD until 2029, it makes sense for AMD to adapt RDNA 3.5 into RDNA 4m with support for FSR 4 upscaling technology.

In contrast, "Medusa Halo" will utilize AMD's next-generation RDNA 5 / UDNA GPU microarchitecture, while "Medusa Point" will instead introduce the new RDNA 4m variant. Although direct comparisons between the two are not yet available, instruction set extensions such as WMMA and SWMMAC indicate support in the new "GFX1170" target, which belongs to the GFX11 generation, also known as RDNA 3. Currently, this is believed to be an upgraded RDNA 3 with many RDNA 4 modules, enabling FSR 4 support even on the less powerful "Medusa Point" APU.

This startup will pay you $800 to yell at AI all day


As Boston Dynamics demonstrated years ago, "bullying" technology designed to mimic intelligent behaviors is nothing new. Memvid is now offering $800 to someone interested in putting modern AI models to the test: a "professional" yeller tasked with spending an entire day stressing popular chatbots.

Read Entire Article

English Grammar – Practice English grammar with instant feedback and clear explanations


English Grammar guides you to master tenses, conditionals, modal verbs, and more through interactive exercises with instant feedback. Choose multiple choice or fill-in-the-blank, see clear visual cues, and read detailed explanations for every answer. It covers A1 to C1 levels across 20 grammar categories, with hundreds of exercises and more in development. Practice anytime on any device to build confident, accurate English.

View startup

TeamPCP Backdoors LiteLLM Versions 1.82.7–1.82.8 via Trivy CI/CD Compromise

TeamPCP, the threat actor behind the recent compromises of Trivy and KICS, has now compromised a popular Python package named litellm, pushing two malicious versions containing a credential harvester, a Kubernetes lateral movement toolkit, and a persistent backdoor. Multiple security vendors, including Endor Labs and JFrog, revealed that litellm versions 1.82.7 and 1.82.8 were published on March

Tax Search Ads Deliver ScreenConnect Malware Using Huawei Driver to Disable EDR

A large-scale malvertising campaign active since January 2026 has been observed targeting U.S.-based individuals searching for tax-related documents to serve rogue installers for ConnectWise ScreenConnect that drop a tool named HwAudKiller to blind security programs using the bring your own vulnerable driver (BYOVD) technique. "The campaign abuses Google Ads to serve rogue ScreenConnect (

Reddit introduces collection ads, deal overlays, Shopify integration


Reddit is rolling out new Dynamic Product Ad features, including a shoppable Collection Ads format and Shopify integration, the company announced today.

What's new.

  • Collection Ads: A new Dynamic Product Ad format that pairs a lifestyle hero image with shoppable product tiles in one carousel, bridging discovery and purchase. Early adopters following best practices are seeing an 8% ROAS lift.
  • Community and Deal overlays: Reddit-native labels like "Redditors' Top Pick" and automatic discount callouts surface social proof and pricing signals without extra work from you.
  • Shopify integration: Now in alpha, this simplifies catalog and pixel setup for new DPA advertisers, automatically matching products to the right users and context.

The numbers. Reddit DPA delivered an average 91% higher ROAS year over year in Q4 2025. Liquid I.V. reports DPA already accounts for 33% of its total platform revenue and outperforms its other conversion campaigns by 40%.

Why now. Reddit has seen a 40% year-over-year increase in shopping conversations. Also, 84% of shoppers say they feel more confident in purchases after researching products on Reddit.

Why we care. The new tools, especially the Shopify integration, lower the barrier to getting started with Dynamic Product Ads. Reddit might still be viewed by some as an undervalued paid media channel, but there's an opportunity to get in before competition and costs rise.

Bottom line. Reddit is increasingly a serious performance channel for ecommerce, and these tools make it easier to get started. If you're not yet running DPA on Reddit, the combination of undervalued inventory and improving ad formats makes this a good time to test.

Reddit's announcement. Introducing More Ways to Tap into Shopping on Reddit

FCC Bans New Foreign-Made Wi-Fi Routers From the U.S. Market

The United States Federal Communications Commission (FCC) has announced a ban on all new Wi-Fi routers made outside the "land of the free." With a clear goal of eliminating external supply chain vulnerabilities, the FCC has been collaborating with the White House to prohibit the sale of foreign-made Wi-Fi routers in the domestic market. Reportedly, the FCC claims that having foreign Wi-Fi routers in American homes has been a significant loophole for other countries to exploit, as Wi-Fi routers are one of the most critical points of security in any home network. These devices have long been targets of foreign attacks, which have disrupted security and introduced supply chain risks by creating points of failure that the U.S. government cannot control or prevent. These routers were sold openly to telecommunication companies or purchased by consumers independently. Additionally, foreign routers have been implicated in cyberattacks targeting vital U.S. infrastructure, such as Volt, Flax, and Salt Typhoon.

Interestingly, this decision does not affect any routers that consumers have already purchased. "Consumers can continue to use any router they have already lawfully purchased or acquired," notes the FCC. Furthermore, the FCC will allow any previously approved router models to continue being sold and imported. Only new router models will need to undergo FCC approval, a separate procedure coordinated with the Department of War (DoW) or the Department of Homeland Security (DHS). The FCC honors Conditional Approvals granted by the DoW or DHS, ensuring that devices receiving this approval can continue to be sold without posing a national threat.

Arm Enters Silicon Business for the First Time with Data Center AGI CPU

Arm has made a significant move stepping into production silicon for the first time in its history with the launch of the Arm AGI CPU, a data center processor designed specifically for agentic AI workloads. For over three decades, Arm has been purely an IP licensing business. With the launch of its AGI CPU series, Arm made a fundamental shift in how the company positions itself in the market. The AGI CPU is built on a 3 nm process, packs up to 136 Neoverse V3 cores, runs at a 300 W TDP, and delivers 6 GB/s of memory bandwidth per core at under 100 ns latency. It supports up to 6 TB per chip and DDR5-8800 speeds. On the I/O side, the chip features 96 PCIe Gen 6 lanes along with CXL 3.0 and AMBA CHI (Coherent Hub Interface) links. Each core handles a dedicated program thread, which Arm says eliminates throttling and idle threads under sustained load.

Density figures, especially important in the data center business, are notable. Air-cooled systems can fit up to 8,160 cores per rack, while liquid-cooled systems push that above 45,000. Data centers are expected to need more than four times the current CPU capacity per gigawatt to keep up, and Arm argues that the x86 architecture carries too much overhead and complexity for this new class of workload. Arm claims more than 2x performance per rack versus x86, adding that this can translate to potential savings of up to $10 billion per gigawatt of AI data center capacity.
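Taking the stated figures at face value (136 cores per chip), the rack densities imply roughly the following socket counts:

```python
# Back-of-envelope check on the density figures above (136 cores per chip).
cores_per_chip = 136
air_cooled_cores = 8_160
liquid_cooled_cores = 45_000

air_sockets = air_cooled_cores / cores_per_chip       # exactly 60 chips per rack
liquid_sockets = liquid_cooled_cores / cores_per_chip  # just over 330 chips per rack
```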

Firefox 149.0 Stable Launches With Split View, Free VPN, Improved PDF Performance

Mozilla has officially launched Firefox 149.0, which is now available for download in the release channel. Firefox 149 introduces a number of bug fixes, many of which were announced in previous beta versions, but there are also a handful of new features that may significantly impact the user experience. Mozilla has taken a page out of Zen Browser's book with a new split view for loading two webpages side by side, introduced a free VPN, and added a new "share" button to the toolbar. Firefox will also now automatically block website notifications and revoke permissions for malicious websites, as flagged by Safe Browsing, by default.

Split view can be triggered by right-clicking a link and selecting the split tabs option, or by selecting two tabs and clicking the split tabs button in the address bar. Firefox's free built-in VPN is aimed at security-conscious users, allowing them to mask their location, hide their IP, and protect their data at no cost. The VPN requires a Mozilla account, carries a data cap of 50 GB per month, and is rolling out first in France, Germany, the UK, and the US as of version 149's release. Firefox Labs also now features a tab notes feature, for which the developers are seeking feedback during this release. Firefox has also officially implemented hardware acceleration for PDFs, which means documents should load significantly faster. As usual, there is also a stack of under-the-hood changes in Firefox 149, like modern API implementations and a new TrustPanel, and you can check those out in the official update notes.

The US moves to block most new routers made overseas


The order effectively halts the entry of nearly all future Wi-Fi and wired routers, as the vast majority are produced abroad. Products that have already received FCC authorization can continue to be sold and imported, and existing consumer equipment remains unaffected. However, for router makers planning to release new products...

Read Entire Article

AYANEO's upcoming Next 2 handheld gaming console shelved due to rising component prices β€” company stops preorders for the $1,999 Strix Halo device

AYANEO has suspended sales of its next-gen premium gaming handheld as procurement costs for storage have slipped out of control. When the device was first announced, prices were already high, but following the Chinese New Year break, vendor quotes shot up several-fold, making it infeasible to build the device. It's a temporary suspension, however, and existing sales will be honored.

Linkeezy – Organize your LinkedIn inbox, saved posts, and feeds in one place


Linkeezy is a compliant workflow tool that brings your LinkedIn inbox, saved posts, and feeds into one organized workspace. Instead of jumping between tabs and losing track of conversations or content, you can manage messages in a clean, Gmail-style view, organize saved posts into a searchable library, and follow focused feeds built around the people and topics that matter most.

Linkeezy runs through a web app and Chrome extension that retrieves your messages and content without storing them. It is designed to align with LinkedIn's terms of service, with no profile scraping, automation, or AI-generated interactions, so you stay in control while keeping your workflow efficient and focused.

View startup

Hackers Use Fake Resumes to Steal Enterprise Credentials and Deploy Crypto Miner

An ongoing phishing campaign is targeting French-speaking corporate environments with fake resumes that lead to the deployment of cryptocurrency miners and information stealers. "The campaign uses highly obfuscated VBScript files disguised as resume/CV documents, delivered through phishing emails," Securonix researchers Shikha Sangwan, Akshay Gaikwad, and Aaron Beardslee said in a report shared

AI citations favor listicles, articles, product pages: Study


AI search citations favor a small set of formats. Listicles, articles, and product pages drive over half of all mentions across major LLMs, according to new Wix Studio AI Search Lab research analyzing 75,000 AI answers and more than 1 million citations across ChatGPT, Google AI Mode, and Perplexity.

The findings. Listicles led at 21.9% of citations, followed by articles (16.7%) and product pages (13.7%). Together, these three formats made up 52% of all AI citations.

  • Articles dominated informational queries, cited 2.7x more than other formats.
  • Listicles captured 40% of commercial-intent citations, nearly double any other type.
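A quick check that the three headline shares add up to the roughly 52% combined figure quoted above:

```python
# The three leading formats from the study, and their combined share of
# AI citations (figures as reported).
shares = {"listicles": 21.9, "articles": 16.7, "product pages": 13.7}
combined = sum(shares.values())  # ~52%, matching the study's headline figure
```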

Why intent wins. Query intent, not industry or model, most strongly predicts which content gets cited. This pattern held across industries, from SaaS to health.

  • Informational queries skewed heavily toward articles (45.5%) and listicles (21.7%).
  • Commercial queries were led by listicles (40.9%).
  • Transactional and navigational queries favored product and category pages (around 40% combined).

Why we care. This research indicates that you should map content types to user goals rather than just create more content. Articles educate, listicles drive comparison, and product pages convert. Aligning content format with user intent could help you capture more AI citations and increase visibility.

Not all listicles perform equally. Third-party listicles accounted for 80.9% of citations in professional services, compared to 19.1% for self-promotional lists. That seems to indicate LLMs prefer neutral, editorial comparisons over brand-led rankings.

Model differences. All models favored listicles, but diverged after that.

  • ChatGPT leaned heavily into articles and informational content.
  • Google AI Mode showed the most balanced distribution.
  • Perplexity stood out, with 17% of citations coming from discussions like Reddit and forums.

Industry patterns. Content preferences shifted slightly by vertical:

  • SaaS and professional services over-indexed on listicles.
  • Health favored authoritative articles.
  • Ecommerce spread citations across listicles, articles, and category pages.
  • Home repair showed the most even distribution across formats.

The research. The content types most cited by LLMs

Google is tightening political content rules for Shopping ads starting April 16


A quiet but important policy update is coming to Google Shopping ads next month, requiring some merchants to verify their accounts before running ads featuring political content.

What's changing. From April 16, merchants running Shopping ads with certain political content in nine countries will need to verify their Google Ads account as an election advertiser. Google will also outright prohibit some political Shopping ads in India.

The countries affected. Argentina, Australia, Chile, Israel, Mexico, New Zealand, South Africa, the United Kingdom, and the United States.

Why we care. Shopping ads aren't typically associated with political advertising — this update signals that Google is broadening its election integrity efforts beyond search and display into commerce formats. Merchants selling politically themed merchandise, campaign materials, or other related products in the affected countries need to act before the April 16 deadline.

What to do now.

  • Review the updated policy language to determine if your Shopping ads feature content that falls under the new restrictions
  • If affected, apply for election advertiser verification through Google Ads before April 16 to avoid disruption to your campaigns

The bottom line. This affects a narrow but specific set of merchants — but the consequences of missing the deadline could mean ads being disapproved or accounts being flagged. If you sell anything with a political angle in the listed countries, check your eligibility now.

Nintendo Switch 2 Production Cut by 33% Due To Softening US Sales

It was recently revealed that Nintendo is planning an updated Nintendo Switch 2 with a user-replaceable battery that is meant to comply with new EU rules on repairability. That device is reportedly not destined for the US, but Nintendo has also made a more recent change to the Switch 2 for the US market. Specifically, according to Bloomberg, Nintendo plans to cut production of the Switch 2 handheld by as much as 33% due to unexpectedly low US sales during the recent holiday period.

The new production guidelines will reportedly see Nintendo produce an estimated 4 million Switch 2 consoles during Q2, down from its initial 6 million-unit plan. Bloomberg's sources claim that Nintendo declined to increase production targets despite the successful launch of PokΓ©mon Pokopia, instead opting to wait and see if recent releases for the platform will continue to perform well before producing more hardware. Part of the change could also be logistical in nature. Nintendo has previously acknowledged the increased memory and storage prices that have plagued the consumer electronics market of late, and the production cut is likely a move to reduce risk during a time when hardware is more expensive to produce.

(PR) Razer Unveils the Viper V4 Pro and Gigantus V2 Pro

Razer, the world's leading lifestyle brand for gamers, proudly announces the launch of the Razer Viper V4 Pro and Razer Gigantus V2 Pro. The latest evolution of esports gear by the #1 gaming mouse brand amongst pro gamers, this new duo is engineered to give players an edge in today's most demanding competitive titles.

The Viper V4 Pro takes one of the world's most trusted esports mice and makes it lighter, faster, and more precise, while the Gigantus V2 Pro reimagines Razer's flagship soft gaming mouse mat with five distinct speed ratings so players can choose the surface that locks in the way they play.

(PR) JEDEC Releases Updated LPDDR5/5X SPD Standard with Enhanced Mode‑Switching Support

JEDEC Solid State Technology Association, the global leader in standards development for the microelectronics industry, today announced the publication of JESD406-5D: LPDDR5/5X Serial Presence Detect (SPD) Contents standard, an update of the Revision C standard that adds support for calculating recovery time when switching operating modes. JESD406-5D is available for free download from the JEDEC website.

LPDDR5/5X memory devices are capable of supporting two sets of timing parameters: a full speed mode and a reduced speed mode that consumes less power. This feature allows for longer battery life for mobile devices, which are a common application for LPDDR5/5X chips and modules, and may also be leveraged by data centers as the use of LPDDR5/5X devices grows. The updated JESD406-5 standard documents key parameters for calculating the switching time between fast and low-power modes, making the feature more efficient and allowing higher system performance.

MyDreamGirlfriend – Chat, connect, and build a private relationship with an AI girlfriend


MyDreamGirlfriend is an AI-powered dating platform where users create customized AI companions with interactive conversations, voice messaging, and roleplaying features. Optimized for both mobile and desktop, it offers a freemium subscription model. Users can exchange voice notes and photos, unlocking content and deeper interactions with gems. Start free and upgrade for unlimited messages, multiple companions, and extras. All conversations are end-to-end encrypted for complete privacy.


LYNARA – Map and explore complex software architectures in 3D in your browser


LYNARA is a browser-based platform for precise multi-layer system design. It visualizes complex software landscapes in 3D and lets you structure user interface, services, and data layers for clarity. Use fast keyboard shortcuts to select, copy, paste, and navigate across layers, all without installation or a credit card.


ChatGPT citations favor a small group of domains: Study


AI citations in ChatGPT are far more concentrated than citation distributions in traditional search. Roughly 30 domains capture 67% of citations within a topic.

  • That's according to Kevin Indig's latest study, which also found that broad topical coverage, long-form pages, and cluster-based models outperform the old "one keyword, one page" approach.

The details. Citation visibility wasn't evenly distributed. In product comparison topics, the top 10 domains accounted for 46% of citations; the top 30, 67%.

  • AI visibility was slightly less concentrated than classic organic search, but still highly centralized.
  • Indig's conclusion: you're effectively shut out unless you build enough authority to win one of a limited number of citation "seats."

What changed. Ranking No. 1 in Google still matters, but it's not enough. Of pages ranking No. 1, 43.2% were cited by ChatGPT — 3.5x more often than pages beyond the top 20.

  • ChatGPT retrieved far more pages than it cited. AirOps found that it retrieved ~6x as many pages as it cited, and 85% of the retrieved pages were never cited.
  • A third of the cited pages came from fan-out queries, and 95% of those had zero search volume.

Why we care. Publishing the "best answer" for one keyword isn't enough. ChatGPT rewards domains that cover a topic from multiple angles, not pages optimized for isolated terms. And discovery often happens outside the keyword universe you track.

The patterns. Longer pages generally earned more citations, with variation by vertical. The biggest lift appeared between 5,000 and 10,000 characters. Pages above 20,000 characters averaged 10.18 citations vs. 2.39 for pages under 500.

  • This pattern broke in Finance, where shorter, denser pages often outperformed long guides. In Education, Crypto, and Product Analytics, longer pages continued to gain citation value with little drop-off.
  • 58% of cited URLs were cited only once. Pages that recurred across prompts were usually category roundups, comparison pages, or broad guides answering multiple related questions.
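The length analysis boils down to bucketing pages by character count and averaging citations per bucket. A minimal sketch of that computation, with invented records rather than the study's dataset:

```python
from collections import defaultdict

# Hypothetical records: (page_character_count, citation_count).
pages = [(400, 2), (6500, 7), (12000, 9), (22000, 11), (450, 3)]

def avg_citations_by_length(pages, buckets=(500, 5000, 10000, 20000)):
    """Group pages into character-count buckets and average their citations."""
    sums = defaultdict(lambda: [0, 0])  # bucket label -> [total_citations, page_count]
    for chars, cites in pages:
        # Label is the first bucket boundary the page falls under, else the top band.
        label = next((f"<{b}" for b in buckets if chars < b), f">={buckets[-1]}")
        sums[label][0] += cites
        sums[label][1] += 1
    return {label: total / count for label, (total, count) in sums.items()}

print(avg_citations_by_length(pages))
```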

On-page behavior. ChatGPT cited heavily from the upper part of a page. The 10% to 20% section performed best across all industries.

  • The bottom 10% earned just 2.4% to 4.4% of citations. Conclusions were largely ignored.
  • Finance had the steepest ramp, with 43.7% of citations in the first 30%.
  • Healthcare and HR Tech were flatter.
  • Education peaked later, around 30% to 40%.

About the data. Indig analyzed ~98,000 citation rows from ~1.2 million ChatGPT responses (Gauge), isolating seven verticals. The study used structural page parsing, positional mapping, and entity and sentiment analysis to identify which pages earned citations and where they came from.

The study. The science of how AI picks its sources

Google is testing AI-generated animated video clips inside PMax


A new creative feature has been spotted inside Google Ads Performance Max campaigns — and it could change how advertisers without video budgets approach animated display advertising.

What was found. Nikki Kuhlman, Vice President of Search at JumpFly, Inc., spotted an option to generate animated video clips directly within PMax asset groups, using AI to enhance and animate a single source image.

How it works.

  • Upload a source image — a logo, a product shot, a property photo
  • AI generates several "enhanced" versions of that image
  • Each enhanced image produces two animated clips
  • Select up to five animated clips per asset group
  • Note: faces cannot be used in source images, though AI may generate people in enhanced versions

Early results from testing. A logo generated a spinning animation of the image element. A house with a sold sign produced a slow cinematic pan. Simple inputs, but the output quality appears usable for display advertising without any video production required.

Where the ads appear. Google hasn't provided in-product documentation on placement, but early testing shows animated clips surfacing in Display ad previews when added to an asset group.

Why we care. Video assets continue to be a strong creative option in paid media — but producing video has always required time, budget, and resources many advertisers don't have. This feature effectively removes that barrier — turning a single product photo or logo into animated display creative in seconds, at no additional production cost.

For advertisers who've been running PMax on static images alone, this could be a meaningful and easy win.

The bottom line. This feature is still unconfirmed by Google, but advertisers running PMax should check their asset groups now. If it's available in your account, it's worth testing — especially for campaigns that have been running on static images alone.

First seen. Kuhlman shared spotting this new feature on LinkedIn.

SEO's biggest threat in 2026? Your own organization


AI tools and visibility have dominated the SEO conversation in the past two years. But while discussions focus on these new technologies, most of the biggest SEO risks in 2026 will come from somewhere else: within your own organization.

Fragmented data, unclear ownership, outdated KPIs, and weak collaboration can quietly destroy even the best strategies. As SEO expands beyond the website and into AI-driven discovery, the role of the SEO team is becoming broader, more influential, and, paradoxically, harder to define.

Here are some of the risks your team should start thinking about now.

Relying too much on AI for everything

Many SEO teams now rely on AI for everything, from generating briefs to analyzing data. That's often necessary. You can't spend hours creating a brief when AI can produce something usable in minutes. But that's also where the risk starts.

AI can generate content quickly, but "acceptable" won't differentiate you. You still need a clear point of view — what story you're telling and what unique angle you bring. Without that, your content becomes generic, predictable, and indistinguishable from competitors using the same tools.

The issue is simple: if you ask similar tools similar questions, you'll get similar answers. And your competitors have access to the same tools.

Some companies try to stand out by training models on proprietary data. In reality, few teams do this at scale. Most prioritize speed over quality.

There's also risk in using AI for analysis without understanding the data behind it. AI is fast, but it can misinterpret or hallucinate results.

I've seen this firsthand. An AI tool hallucinated part of a calculation during an urgent analysis, making every insight that followed incorrect. It only acknowledged the mistake after it was explicitly pointed out.

More broadly, AI excels at identifying patterns. But in SEO, competitive advantage rarely comes from following patterns. The most effective strategies don't just mirror what everyone else is doing. Sometimes the best opportunity isn't the obvious one.

AI is reshaping how SEO work gets done, how impact is measured, and whether it can be measured at all.

Dig deeper: Why most SEO failures are organizational, not technical


Fragmented data and limited visibility

For years, SEO professionals have worked with incomplete datasets. We've never had a full view of the user journey. That's one reason organic impact has often been underestimated. In the past, though, we could still piece together a reasonably clear picture — from ranking to click to conversion.

Today, that picture is far more fragmented. AI tools have changed how people research and discover products. Users now start in AI assistants – asking questions, comparing options, and building shortlists before ever visiting a website. By the time they land on your page, part of the decision-making process is already done.

The problem is we have zero visibility into that journey. If a user discovers your brand through an AI-generated answer, adds you to a shortlist, then later searches for you directly, the signals that influenced that decision are invisible. We only see the final step.

Microsoft Bing has introduced basic reporting for AI searches, but it's limited. We still can't see the prompts behind specific page visibility.

At the same time, SEO teams are still expected to prove impact. Some companies are adding questions to lead forms to understand how users discovered them. In theory, this adds signal. In practice, it depends on accurate self-reporting. I know how I fill out forms, so I question how reliable that data really is. Still, it's a start.

Setting the wrong KPIs

Fragmented data creates another risk: focusing on the wrong KPIs. Stakeholders still ask about traffic. No matter how often SEO teams explain that SEO's role has changed, traffic remains a default measure of success. For years, organic growth meant more sessions, users, and visits. That mindset hasn't fully shifted.

At the same time, stakeholders are drawn to newer metrics — AI visibility, citations, and mentions. These aren't inherently wrong, but they need to be used carefully.

Most tools measure AI visibility using a predefined set of queries. That's where risk creeps in. Teams can become too focused on improving visibility scores, even if it means optimizing for prompts that look good in reports rather than those that matter to the business.

For example, appearing for "What is XYZ software?" isn't the same as showing up for "Which XYZ software is best?" The first may drive visibility, but the second is much closer to a purchase decision.

To avoid this, visibility metrics need to be tied to business outcomes — a real challenge given the fragmented data problem.

Tracking AI visibility also opens another rabbit hole: debates over which prompts to track, how many to include, and why. This can quickly overcomplicate measurement, especially if teams lose sight of the goal. The objective isn't to track every phrasing, but to understand the intent behind it. Trying to capture every variation is impossible.
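One way to keep measurement anchored to intent rather than phrasing is to bucket tracked prompts before scoring visibility. A rough sketch; the prompts and keyword rules below are invented for illustration:

```python
# Rough sketch: collapse prompt variations into intent buckets so visibility
# is scored per intent, not per phrasing. Keywords and prompts are invented.
INTENT_RULES = {
    "purchase": ("best", "vs", "top", "cheapest"),
    "definition": ("what is", "meaning", "explain"),
}

def intent_of(prompt: str) -> str:
    """Assign a prompt to the first intent whose keywords it contains."""
    text = prompt.lower()
    for intent, keywords in INTENT_RULES.items():
        if any(k in text for k in keywords):
            return intent
    return "other"

prompts = [
    "What is XYZ software?",
    "Which XYZ software is best?",
    "XYZ vs ABC for small teams",
]
by_intent = {}
for p in prompts:
    by_intent.setdefault(intent_of(p), []).append(p)

print(by_intent)
```

Scoring visibility over `by_intent` buckets rather than individual prompts keeps the report focused on what the query means, not how it was worded.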

Dig deeper: Why governance maturity is a competitive advantage for SEO

Owning more than you can actually own

SEO teams are expected to own AI visibility strategy much like they owned SEO strategy. But strategy is often treated as execution.

Even in the past, SEO was never fully independent. It relied on other teams — engineering to implement changes and content to create pages. The difference is that most of this work used to happen on the company's own website.

That's no longer true. Visibility in AI answers requires presence beyond your domain — Reddit threads, YouTube videos, and media mentions all play a role.

This significantly expands the scope of work. At the same time, many of these surfaces don't have clear owners inside organizations. Even when they do, there's a tendency to assume that if SEO owns the strategy, it should also own execution or at least be accountable for outcomes.

The opposite happens, too. If other teams own execution, they may take ownership of the entire strategy. In reality, neither model works well.

SEO teams can't manage every platform that influences AI visibility. They don't have the expertise to produce YouTube content or run PR campaigns. Their strength is knowing what works and helping optimize it — for example, advising on how a video should be structured to perform on YouTube.

Owning strategy also doesn't mean deciding who owns execution. That's a leadership responsibility. It requires visibility across teams and the authority to assign ownership. Otherwise, one team is left deciding how its peers should operate.


Lack of cross-team collaboration

Even when companies recognize the importance of AI visibility, cross-team collaboration remains a challenge.

Roles and processes are often unclear. SEO teams may expect others to execute, while those teams assume it's SEO's responsibility. In other cases, teams don't prioritize AI visibility because their KPIs focus elsewhere.

This is where leadership alignment becomes critical. If AI visibility is truly a strategic priority, it needs to be reflected in goals and KPIs across all relevant teams. When AI-related KPIs sit only with SEO, it creates an imbalance: one team is accountable for outcomes, while execution depends on many others.

Many teams are also unsure how to work with SEO. Some don't involve SEO early enough. Others choose not to follow recommendations because they don't agree with them.

SEO teams share responsibility here, too. They need to actively onboard other teams and clearly connect SEO efforts to broader business goals. It's our job to show that lack of visibility means lost revenue.

I've seen cases where teams critical to AI visibility hadn't even read the strategy document. In these situations, the issue isn't one-sided. Teams need to understand what's expected of them, and SEO needs to push for alignment and involve stakeholders early. Simply moving forward without that alignment doesn't work.

SEO teams also don't always explain the "why." AI visibility can end up treated as a standalone SEO metric rather than a business driver. Even when there's agreement on its importance, a lack of clear processes, shared goals, and training keeps collaboration inconsistent.

Dig deeper: Why 2026 is the year the SEO silo breaks and cross-channel execution starts

Too much strategy, not enough doing

With rapid changes in search, SEO teams often spend more time on theory — reading, analyzing, building frameworks, and refining strategies — instead of making changes to the website.

That doesn't mean teams should stop learning. Quite the opposite. But strategy without execution quickly loses value. In many organizations, SEO teams are expected to produce in-depth strategy documents meant to align teams and define priorities. In reality, many go unread outside the SEO team. They require significant effort but deliver little impact.

Part of the problem is that strategies are often too theoretical. They explain the why but miss the what. The value of a strategy isn't the document, but the actions that follow. Other teams need to understand what to do and how to contribute.

AI is also accelerating how quickly search evolves. Waiting months to test ideas no longer works. A more practical approach is to understand the direction, implement changes, observe results, and iterate. Smaller experiments often lead to faster learning.

When SEO succeeds, SEO disappears

SEO has always been a consulting function. Success depends on collaboration with teams like engineering, content, and product. Today, that dynamic is more visible than ever. In many cases, SEO teams don't execute directly. Their role is to enable others.

In mature organizations, this works well. Collaboration is strong, and credit is shared. SEO's consulting role is recognized without forcing the team to own areas outside its expertise. In less mature environments, it can lead to SEO being undervalued or seen as unnecessary.

AI adds another layer. It can generate keyword ideas, outlines, and optimization suggestions, making SEO look deceptively simple, much like writing content. AI lowers the barrier to entry, but it doesn't replace expertise. Without that expertise, teams produce work that's technically correct but average.

It's a familiar pattern: copy-pasting a Screaming Frog SEO Spider error list into a task doesn't demonstrate real understanding. This creates a paradox. The more SEO becomes a company-wide capability, the more the SEO team risks becoming invisible.

Dig deeper: SEO execution: Understanding goals, strategy, and planning


SEO is evolving, but are companies ready?

SEO teams won't fail in 2026 because of a lack of knowledge. They'll fail if they can't turn that knowledge into action, influence, and business impact.

The challenge is no longer just optimizing pages. It's building processes, partnerships, and measurement models that reflect how visibility works today.

Success also depends on leadership support. Many of the biggest risks are structural — fragmented data, unclear ownership, weak collaboration, outdated KPIs, and the gap between strategy and execution.

AI visibility expands beyond the website and into the broader organization. That doesn't make SEO less important, but it does make it harder to define, measure, and defend.

The companies that succeed will stop treating SEO as a traffic function and start treating it as a business capability that drives visibility, discovery, and growth.

Apple is bringing ads to Apple Maps this summer


Apple is preparing to introduce sponsored listings in Apple Maps, marking a significant expansion of its advertising business beyond the App Store.

How it will work. According to Bloomberg's Mark Gurman, the system will function similarly to Google Maps — allowing retailers and brands to bid for ad slots against search queries. Sponsored businesses will appear in Maps search results, much like sponsored apps already appear in App Store searches.

The timeline. An announcement could come as early as this month, with ads beginning to appear inside Maps as early as this summer across iPhone, other Apple devices, and the web version.

Why Apple is doing this. Advertising is a growing and high-margin revenue stream for Apple's services business. Maps — with its massive built-in user base across Apple devices — is a natural next step, particularly as location-based advertising continues to grow.

Why we care. Apple Maps has a massive built-in user base across iPhone and Apple devices, and users searching within Maps are expressing clear, high-intent signals — they're actively looking for somewhere to go or something to buy. This opens up a brand new location-based advertising channel that previously didn't exist on Apple's platform, giving local businesses and retailers a way to reach those users at exactly the right moment.

Advertisers already running Google Maps or local search campaigns should pay close attention, as this could quickly become a significant complementary channel.

The privacy angle. True to Apple's form, a user's location and the ads they see and interact with in Maps are not associated with their Apple Account. Personal data stays on the user's device, is not collected or stored by Apple, and is not shared with third parties.

How to access it. Businesses will be able to access a fully automated experience for creating ads through Apple Business in a few simple steps. Current Apple Ads advertisers and agencies will also have the option to book ads through their existing Apple Ads experience, which will offer additional customization options.

What you need to do now. When Apple Business becomes available in April, businesses will need to first claim their location on Apple Maps before ads become available this summer — so the time to get set up is now, not when the auction opens.

The bottom line. Apple Maps ads should open up a high-intent, location-based channel that hasn't existed before on Apple's platform. Advertisers running local or retail campaigns should claim their Maps listing now and start planning budgets for a summer launch. Early entrants in a new ad auction typically benefit from lower competition before the market matures.

Update 10:45 ET: Apple has officially confirmed that ads are coming to Apple Maps this summer, as part of a broader new platform called Apple Business launching April 14.

Bing Webmaster Tools now links AI queries to cited pages


Microsoft added query-to-page mapping to its AI Performance report in Bing Webmaster Tools, letting you connect AI grounding queries directly to cited URLs.

Why we care. The original dashboard showed queries and pages separately, limiting optimization. Now you can tie specific AI-triggering queries to the exact cited pages, so you can prioritize updates based on real AI-driven demand — not guesses.

The details. The new Grounding Query–Page Mapping feature links two existing views in the AI Performance dashboard:

  • Click a grounding query to see which pages are cited
  • Click a page to see which grounding queries drive its citations
  • Mapping is many-to-many: one query can map to multiple pages, and vice versa
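The many-to-many relationship can be pictured as a pair of forward and reverse indexes. A small sketch with invented data; Bing exposes this only through its dashboard, not the API shown here:

```python
from collections import defaultdict

# Invented grounding-query / cited-URL pairs illustrating the many-to-many shape.
citations = [
    ("best crm for startups", "/blog/crm-comparison"),
    ("best crm for startups", "/pricing"),
    ("crm pricing explained", "/pricing"),
]

query_to_pages = defaultdict(set)
page_to_queries = defaultdict(set)
for query, page in citations:
    query_to_pages[query].add(page)   # click a query -> see its cited pages
    page_to_queries[page].add(query)  # click a page -> see its grounding queries

print(sorted(query_to_pages["best crm for startups"]))
print(sorted(page_to_queries["/pricing"]))
```

One query cites two pages and one page is cited by two queries — exactly the shape the dashboard now lets you navigate in both directions.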

Catch up quick. Microsoft launched the AI Performance report in Bing Webmaster Tools in February as its first GEO-focused dashboard. It:

  • Tracks where and how often your content is cited in AI answers across Bing, Copilot, and partners.
  • Shows grounding queries, cited URLs, and visibility trends over time.
  • Focuses on citation visibility — not clicks, rankings, or traffic.

What they're saying. Microsoft said the update responds to "strong positive customer feedback and numerous requests."

The announcement. The addition of query-to-page mapping to Bing Webmaster Tools appeared in a Microsoft Advertising blog post: The AI Performance dashboard: Your view into where your brand appears across the AI web

The entity home: The page that shapes how search, AI, and users see your brand


The entity home is the single page that anchors how algorithms, bots, and people understand your brand. It's usually your About page, and it does far more than most teams realize.

It's where algorithms resolve your identity, where bots map your footprint, and where humans verify trust before they convert. In one test, improving that page alone lifted conversions by 6% for visitors who reached it. The reason is simple: the human and the algorithm are doing the same job — checking claims, validating evidence, and deciding whether to trust you.

For years, this was overlooked. Most SEOs focused on rankings and traffic while underinvesting in the page that defines what their brand actually is. That's no longer sustainable. The entity home is the foundation of how your brand is interpreted across search, AI, and what comes next.

What the entity home isn't

Before going further, here are four misreadings worth pre-empting.

Not a ranking trick

Getting the entity home right doesn't produce a traffic spike next Tuesday. It builds the confidence prior that compounds through every gate of the pipeline over time.

Not just schema

Schema markup helps the algorithm read what is already there. It isn't a substitute for the claims, the evidence links, and the consistent positioning that schema describes. Schema without substance is a well-formatted, empty declaration.
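As a concrete illustration of markup that describes what the page already states, an entity home might carry schema.org Organization data like the following, built here in Python. The schema.org vocabulary is real; every value is a placeholder:

```python
import json

# Hypothetical Organization JSON-LD for an entity home / About page.
# schema.org types and properties are real; all values are placeholders.
entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",
    "url": "https://example.com/about",
    "description": "What the entity is and does, stated in its own words.",
    "sameAs": [  # corroborating profiles the brand controls
        "https://www.linkedin.com/company/example-co",
        "https://www.youtube.com/@exampleco",
    ],
}

print(json.dumps(entity_home, indent=2))
```

The `sameAs` links only help if the profiles they point to say the same thing the page does — markup restates the claims; it doesn't substantiate them.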

Not always the About page

For most companies, it is, and for most individuals, it is a page on someone else's website. The right URL to use carries the clearest identity statement, the strongest internal link prominence from the rest of the site, and the most stable long-term address (something people often don't think about).

Not enough without corroboration

The entity home is where you declare your claims. Independent third-party sources confirm and corroborate your claims. The algorithm will only cross the confidence threshold when what you say matches what the weight of evidence supports.

Three audiences, one anchor — and most brands are ignoring two of them

The entity home serves three audiences simultaneously, through three completely different mechanisms. Most brands haven't yet given them enough thought.

The three audiences your entity home serves
  • Bots use the entity home when mapping the digital footprint. They use it to establish what entity they are dealing with and how to interpret every corroborative source they find.
  • Algorithms anchor their identity resolution against it, checking confidence at every relevant gate against whatever baseline this page set.
  • Humans reach for it when they want to see a resource that feels authoritative precisely because it is structured to inform rather than to sell.

So, the entity home webpage is vital to all three audiences — bots, algorithms, and humans: it sets the tone for the bot in DSCRI, the algorithms in ARGDW, and for the person who converts.

The entity home is just one page, and that isn't enough

The entity home anchors everything: the canonical URL where the algorithm initializes its model of the brand, where bots orient themselves, and where humans arrive to verify their instinct. One page, doing one critical job. But one page declares. It doesn't educate.

The entity home website educates, with every facet of the brand structured across pages that give the algorithm a complete picture of:

  • Who this entity is.
  • What it does.
  • Who it works alongside.
  • What it has produced.
  • Where independent sources confirm what the brand claims about itself.

The difference between the two is the difference between introducing yourself and making your case.

Search built the web around a single assumption — the human acts. The engine organized, the website presented, and the human chose. That model shaped 30 years of architecture decisions because the website's job was to win the human's attention and trust once the engine had delivered them to you.

But assistive engines broke that assumption. They took on the evaluation work the human used to do: reading, comparing, synthesizing, and recommending. The human still makes the final call, but the website needs to have made its case to the algorithm before the human ever arrives.

The audience that matters first has shifted, and a website that speaks only to humans is already losing the conversation that determines whether those humans show up at all.

Agents go one step further. The agent researches, decides, and acts. The human receives the outcome. The website that wins in an agentic environment isn't the one with the most compelling hero section — it's the one the agent can read, trust, and act on without inferring anything.

All three modes co-exist, and all three always will.

  • Search serves the window shopper.
  • Assistive engines serve the human who wants a recommendation without doing the research.
  • Agents serve the task that can be delegated entirely.

What shifts over the next three years isn’t which mode exists; it’s which mode does the most work, and what your website needs to do to win each one.

This is where I’ll plant a flag, and you can disagree. All three jobs need attention right now; the percentages below describe where the main focus of your effort sits, not permission to ignore the others.

The work on assistive and agential is already overdue. The speed of change will probably make these figures look dated in a few months.

Focus weighting by year: search, assistive, agential
  • 2026: Search 60%, Assistive 35%, Agential 5%
    • Search still drives most conversions. But the 35% on assistive isn’t optional; it’s late. The brands that started two years ago are already compounding.
  • 2027: Search 35%, Assistive 50%, Agential 15%
    • Assistive engines will be handling enough upstream evaluation that discovery and correct interpretation become the primary battle. Search remains significant. Agential execution is arriving.
  • 2028: Search 20%, Assistive 45%, Agential 35%
    • Agents execute. The algorithm’s confidence in your brand determines whether you’re in the consideration set before any human is involved. Search and assistive don’t disappear; they become the infrastructure the agential layer runs on.

The entity home website anchors all three eras. What changes is who it speaks to first, and what that conversation needs to contain.

Entity home (one page) vs entity home website (full education hub)

Each cluster in that diagram declares something: these satellite pages, grouped this way, belong to this entity and describe one specific dimension of what it is.

  • /social names the platforms the brand controls.
  • /peers places the entity in its professional network.
  • /companies closes the relationship loop between person and organization.

The grouping carries meaning: an algorithm that reads the structure learns something the individual pages couldn’t tell it separately.

The entity home website has three jobs

Search, assistive, and agential engines co-exist, which means the entity home website runs three distinct jobs simultaneously.

  • The search job is the one 30 years of practice has refined, and it doesn’t change: get the bots through the DSCRI infrastructure gates cleanly, so the ranking engine delivers the right humans to you, and your content draws them through the funnel with clarity, credibility, and a path to conversion.
  • The assistive job is the one most brands are ignoring, and where the competitive gap is opening fastest: educate the algorithms. Your entity home website structures your brand’s story so algorithms understand it without guessing, and your content wins the competitive phase (ARGDW) with the highest possible confidence intact. Every explicit link from your entity home website to a satellite property declares a graph edge, carrying higher confidence through the pipeline than any connection the algorithm has to infer for itself.
  • Hardest to prepare for, and already arriving: brief the agents. Agentic engines don’t read your website the way a human reads a marketing page; they read it the way an instructed system reads a briefing document, scanning for structured, unambiguous, machine-interpretable facts. Don’t make the machine use imagination it doesn’t have.



Entity pillar pages solve the identity problem keyword cornerstones were never built for

SEO has always known what to do with a topic: build an authoritative page around it, link it well, and earn rankings. That architecture works because the ranking engine evaluates content.

What it can’t do is tell the algorithm who the entity behind that content is, what relationships it has built, what it has demonstrated over time, or why it should be trusted to recommend rather than merely rank.

An entity has facets, and facets aren’t the same thing as topics. A person isn’t "SEO consultant" plus "technical SEO" plus "keynote speaker": those are keyword clusters, useful for ranking, useless for identity.

What the algorithm actually resolves identity against is the network of dimensions that define what this entity is: the companies it belongs to, the peers it works alongside, the publications it has appeared in, the expertise it has demonstrated over years, the events it speaks at, and the work it has produced.

An entity pillar page is the authoritative page on your own property for one of those dimensions.

  • The /expertise page establishes demonstrated knowledge in a specific domain, not as a content topic, but as an identity declaration.
  • The /peers page places the entity in a professional network the algorithm already trusts.
  • The /companies page closes the loop between person and organization.
  • The /press page links to independent coverage that corroborates the entity’s claims, giving the algorithm something to cross-reference rather than take on faith.

These pages aren’t traffic pages in the traditional sense, and that framing matters: SEOs who measure them against keyword rankings will consistently underinvest in them because the return doesn’t show up in rank tracking. The return shows up in what AI assistive engines say about your brand when your prospects ask.

Keyword cornerstones vs entity pillar pages

Keyword cornerstone pages and entity pillar pages serve different audiences, and your website needs both

The keyword cornerstone page and the entity pillar page aren’t competing strategies: they’re parallel architectures serving different audiences, which means your website needs both, and the question is how to build them so they compound each other’s value rather than compete for the same resource.

The overlap between them is real and worth engineering deliberately. The expertise page that ranks for "technical SEO audit" can also function as the entity pillar page that declares this entity’s demonstrated knowledge in that domain, if it’s built with that second function in mind:

  • Explicit entity statements.
  • Schema that names the relationships rather than just the topic.
  • Links to corroborating third-party sources stable enough to persist across years.
  • A URL structure that commits to the identity dimension rather than the keyword cluster.

When the search function and the identity function align, one page does both jobs, which is a good thing.

When they diverge, and the page that captures search traffic can’t carry the identity declaration without sacrificing one function for the other, you face an architectural choice. Making that choice consciously, rather than defaulting to the keyword model, is the skill the transition requires.

The percentages already told you the weighting: Both layers are required starting today

Earlier in this article, the 2026/2027/2028 split put search at 60%, then 35%, then 20% of focus. What those numbers don’t say, but what the logic demands, is that the remaining share, assistive and agential, needs your website to feed it right now. Don’t wait until the balance shifts.

Keyword cornerstone pages feed the search share. Entity pillar pages feed the assistive and agential share.

If you build the entity pillar pages in 2027, when assistive engines truly dominate, you’ll be building into a window that has already closed for the brands that started in 2025, because the algorithm’s model of your entity solidifies around whatever you gave it during the period it was actively learning.

The percentages describe where the demonstrable value sits at each stage. Your investment needs to precede the moment your boss sees the results, not follow it.

Both architectures are required today; the balance shifts, but the requirement for both never goes away.

Building for machines and humans simultaneously is cheaper than building for each separately

The risk brands hear when they encounter the machine-optimization argument is a false trade-off: build for machines at the expense of humans, strip the warmth from the copy, replace narrative with structured data fields, and turn the About page into a schema exercise. In practice, that trade-off rarely materializes, because the best practices for each audience are largely complementary.

Clear entity statements that help the algorithm resolve your identity also help the human visitor understand immediately who they’re dealing with. Explicit links to corroborating third-party sources that build algorithmic confidence also give the human prospect the independent validation they’re quietly looking for. Schema markup that declares relationships for machine consumption gives structured clarity that human scanners doing final due diligence actually appreciate.

For me, this is the reframe that makes the whole project manageable: the entity home website is your current marketing, restructured to serve three audiences simultaneously, not a technical infrastructure project running alongside it. One investment with three returns, and, when done right, the requirements pull in the same direction more often than they pull apart.

The funnel is moving inside the assistant.

When an assistive engine names your brand, summarizes it, and links to it in response to a user query, a conversion event has happened that you don’t see in your Analytics dashboard, and the human who arrives at your website has already been half-sold by the algorithm before they clicked. Traffic will decline as more of that evaluation work moves upstream, and the brands that measure only what arrives at the site will systematically underestimate both the value they’re generating and the gaps in their strategy.

Start measuring where your brand appears in assistive engine responses, how consistently it appears, and what the algorithm says about you when it does.

Getting the entity home right requires definition, proof, and a sustained corroboration campaign

Start with the entity home page itself: choose the single URL that functions as the canonical anchor for your brand’s identity and commit to it. Don’t discover it by asking an AI engine what it thinks your entity home is, because the engine will tell you what it has already learned, and that might be your website homepage, Wikipedia, a press profile, or a LinkedIn page you half-filled in five years ago. You choose it, then you verify the algorithm has learned the lesson you are giving it. You are the adult in the room.

Five criteria determine that choice, in order of weight:

  • The most explicit identity statement on the property.
  • The strongest internal link prominence from the rest of the site.
  • The best-structured schema markup with a stable @id.
  • The clearest outbound links to corroborating third-party sources.
  • The most stable long-term URL.

If your About page doesn’t hit all five, it isn’t doing the job the algorithm requires.

Invest in your About page. Strengthen it with a clear entity statement, schema with a proper @id, verified links to Wikipedia and Wikidata where they exist, every accurate sameAs declaration you can support, and the claims that define your brand’s positioning.
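
As a minimal sketch of the schema this paragraph describes, the markup an About page might carry can be built and emitted like this. Every name, URL, and sameAs target below is a placeholder assumption, not a real identifier or a recommendation of specific values.

```python
import json

# Hypothetical entity home JSON-LD for an About page.
# All names, URLs, and sameAs targets are placeholders.
entity_home = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "@id": "https://www.example.com/about#org",  # stable @id the rest of the site can reference
    "name": "Example Brand",
    "url": "https://www.example.com/",
    "description": "Example Brand is a consultancy specializing in technical SEO.",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder Wikidata item
        "https://www.linkedin.com/company/example-brand",
        "https://x.com/examplebrand",
    ],
}

# Wrap the declaration in the script tag a page template would embed.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(entity_home, indent=2)
    + "\n</script>"
)
print(script_tag)
```

The stable `@id` is the load-bearing part: it gives every other page on the site (and every satellite property) one canonical node to point at.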

Declaration vs corroboration: claim vs evidence

That single page is the anchor.

The entity home website is the education hub built around it: every entity pillar page you build (/expertise, /peers, /companies, /press) extends the identity declaration outward, giving the algorithm more dimensions to resolve against and more facets to cross-reference with independent sources. Each of those pages does for one identity dimension what the About page does for the whole: declares something specific, verifiable, and machine-readable about who this entity is.

The practical work on the entity home website side is the same audit applied at scale: for each entity pillar page, ask whether it declares a clear facet, links to corroborating evidence, and carries schema that names the relationship rather than just the topic. The pages that answer yes to all three are doing both jobs simultaneously: identity infrastructure and keyword architecture. The ones that don’t answer yes need a decision: extend them, or give the pillar function its own dedicated page.
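
Schema that names the relationship rather than just the topic could look like the following Person markup for a hypothetical pillar page. The properties used (worksFor, knowsAbout, colleague) are standard Schema.org terms, but all names and URLs here are invented placeholders.

```python
import json

# Hypothetical pillar-page JSON-LD: the markup declares relationships
# (worksFor, colleague) and demonstrated expertise (knowsAbout),
# not just content topics. All identifiers are placeholders.
pillar = {
    "@context": "https://schema.org",
    "@type": "Person",
    "@id": "https://www.example.com/about#me",
    "name": "Jane Example",
    "worksFor": {
        "@type": "Organization",
        "@id": "https://www.example.com/companies#acme",  # closes the person-organization loop
    },
    "knowsAbout": ["Technical SEO", "Entity SEO"],
    "colleague": [
        {"@type": "Person", "@id": "https://peer.example.org/#person"}  # places the entity in its network
    ],
}
print(json.dumps(pillar, indent=2))
```

Each `@id` reference is an explicit graph edge the algorithm can follow instead of infer.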

If you’re unsure how much influence you actually have over what AI communicates about you, the answer is more than most people assume, and the channels that give you the most leverage are exactly the ones entity pillar pages are built to activate.

Then force the corroboration loop across the whole footprint: drive independent third-party sources to reference, link to, and echo the claims the entity home makes and the facets the pillar pages declare across enough independent contexts that the algorithm’s confidence crosses from hedged claim to corroborated fact.

That crossing doesn’t happen on a deadline and can’t be engineered in a sprint. The corroboration loop is the curriculum, slow by design, compounding with every cycle, never truly finished. It is the work, and it rewards the brands that start it today over the ones that plan to start it when the percentages shift.

This is the sixth piece in my AI authority series.

Why better signals drive paid search performance


In an increasingly automated environment, paid search performance is constrained by a simple reality: Algorithms can only optimize toward the signals they’re given. Improving those signals remains the most reliable way to improve results.

That sounds straightforward, but in practice, many people are still optimizing around signals that don’t reflect real business outcomes.

Let’s dive into how algorithms function, how you can influence them, and where some people fail.

How bidding algorithms actually work

Modern bidding systems are often described as "black boxes," suggesting they operate mysteriously. But that description isn’t helpful.

At a high level, bidding algorithms are large-scale pattern recognition systems.

Early automated bidding used simple statistical methods, including rules-based logic and regression models. Over time, these evolved into more advanced machine learning approaches using decision trees and ensemble models.

Eventually, these became large-scale learning systems capable of processing thousands of contextual and historical inputs. The technology has developed significantly, but the goal has stayed remarkably consistent.

Today’s systems evaluate signals such as query intent, device, location, time, historical performance, and user behavior, updating predictions continuously and adjusting bids in near-real time.

Despite this complexity, the underlying mechanisms haven’t changed:

Bidding algorithms identify patterns tied to a desired outcome, estimate that outcome’s probability and expected value for each auction, and adjust bids accordingly. They don’t understand business context or strategy; they infer success from feedback. This distinction matters.
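
The loop just described (estimate probability, compute expected value, set the bid) can be sketched as a toy model. The formula, the target-ROAS framing, and all numbers are illustrative assumptions, not any platform’s actual logic.

```python
# Toy expected-value bidding: estimate the conversion probability for
# an auction, compute expected value, and bid a fraction of it so a
# target return on ad spend would hold if the estimate is right.
def bid_for_auction(p_conversion: float, conversion_value: float,
                    target_roas: float = 4.0) -> float:
    expected_value = p_conversion * conversion_value
    return expected_value / target_roas  # value / cost == target_roas at this bid

auctions = [
    {"p_conversion": 0.05, "conversion_value": 120.0},  # strong intent signal
    {"p_conversion": 0.01, "conversion_value": 120.0},  # weak intent signal
]
bids = [bid_for_auction(a["p_conversion"], a["conversion_value"]) for a in auctions]
print(bids)  # higher estimated probability -> higher bid
```

The point of the sketch: if `p_conversion` is trained on a noisy or misaligned conversion signal, the bid math still runs perfectly; it just optimizes toward the wrong outcome.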

When the feedback loop is weak, noisy, or misaligned with real business value, even advanced algorithms will efficiently optimize toward the wrong objective. Better technology doesn’t compensate for poor inputs.

Dig deeper: Bidding and bid adjustments in paid search campaigns

The signals advertisers can influence

Paid search algorithms observe a vast range of signals, many of which are inferred by the platform and not directly controllable by you. These include user intent signals, behavioral patterns, and competitive dynamics.

While many signals sit outside of our control, there’s still a meaningful set of levers you control that shape how algorithms learn, such as account and campaign structure, budgets, targeting, creative, and conversion settings.

These inputs shape how the algorithm explores and learns. They help define the environment in which optimization occurs. But they don’t, by themselves, define what success looks like. That role is played by conversion data.

Dig deeper: Conversion rate: how to calculate, optimize, and avoid common mistakes

Conversion data: The most important signal

When performance plateaus, the first instinct is to blame structure, budgets, or creative. In reality, the biggest lever you have available usually sits elsewhere: conversion data.

In most accounts, conversion data is the most influential signal you control. It defines the outcome the algorithm is trained to pursue and directly informs prediction models, bid calculations, and learning feedback loops.

When conversion setups are misaligned, overly broad, duplicated, or noisy, platforms still optimize efficiently, just not toward outcomes the business actually values. This is why, at times, you can show improving platform metrics while your commercial performance stagnates or deteriorates.

A common mistake is focusing on increasing conversion volume rather than improving conversion quality. Volume accelerates learning, but if the signal is weak, faster learning just means faster optimization toward a suboptimal goal.

In practice, refining what counts as a conversion often delivers greater performance gains than structural or tactical changes elsewhere in the account.

Dig deeper: Why a lower CTR can be better for your PPC campaigns

Aligning conversion signals with real business KPIs

Before any optimization begins, define what success genuinely means for your business. Paid search platforms don’t have intrinsic knowledge of your revenue quality, profitability, or downstream value. They only see what is explicitly passed back to them.

Misalignment typically appears in predictable forms:

  • Revenue is used as the primary signal when margins vary significantly.
  • Lead submissions are optimized without regard to lead quality or sales outcomes.
  • Short-term efficiency metrics are prioritized over long-term value.

In each case, the algorithm is doing exactly what it has been instructed to do. The issue isn’t optimization accuracy, but goal definition. If an increase in a given conversion wouldn’t be seen as a win by the business, it shouldn’t be the primary signal used for optimization.

Dig deeper: 3 PPC KPIs to track and measure success



Strengthening conversion signals with richer, more resilient data

Conversion quality is determined by how confidently the platform can identify and interpret a tracked event.

Browser-based tracking alone is increasingly incomplete due to privacy controls, attribution gaps, and fragmented user journeys. As a result, ad platforms rely on a combination of browser-side and server-side data to improve matching and attribution. This isn’t just a measurement problem: it directly affects how confidently platforms can learn from conversions.

Stronger conversion signals are typically characterized by multiple reinforcing parameters, including:

  • First-party identifiers, such as hashed personal data passed via enhanced conversion frameworks.
  • Click identifiers that connect conversions back to ad interactions.
  • Transaction or event IDs that prevent duplication.
  • Accurate conversion values.
  • Session- and network-level attributes that improve attribution confidence.

When a conversion can be recognized through multiple mechanisms, platforms can match it more reliably and use it in learning models with greater confidence. This improves reporting accuracy and bidding performance by reducing feedback loop uncertainty.
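
The reinforcing parameters listed above can be sketched as a single conversion payload. The field names are illustrative, and the SHA-256 normalization (trimmed, lowercased email) mirrors common enhanced-conversion practice rather than any one platform’s exact spec.

```python
import hashlib

def hash_identifier(email: str) -> str:
    """Normalize, then hash a first-party identifier (common enhanced-conversion practice)."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Illustrative payload combining several reinforcing parameters.
conversion = {
    "hashed_email": hash_identifier(" Jane.Doe@Example.com "),
    "click_id": "placeholder-click-id",  # connects the conversion to the ad interaction
    "transaction_id": "order-1001",      # lets the platform deduplicate repeat uploads
    "value": 89.99,
    "currency": "EUR",
}

seen: set[str] = set()

def accept(conv: dict) -> bool:
    """Reject duplicate transaction IDs before they pollute the feedback loop."""
    if conv["transaction_id"] in seen:
        return False
    seen.add(conv["transaction_id"])
    return True

print(accept(conversion), accept(conversion))  # second upload is rejected
```

Normalizing before hashing matters: without it, " Jane.Doe@Example.com " and "jane.doe@example.com" would hash to different values and never match.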

Dig deeper: How to track and measure PPC campaigns

Choosing conversion goals

Selecting the right conversion goal isn’t a binary decision. It involves balancing several competing factors:

  • Volume: Higher volumes support faster learning.
  • Value accuracy: Closer alignment with business outcomes improves decision quality.
  • Stability: Highly variable values can introduce noise.
  • Latency: Delayed feedback slows learning and increases uncertainty.

Higher-volume, faster conversions often sit further away from true commercial outcomes, while lower-volume, high-quality conversions may better reflect business value but risk data sparsity. The most effective setups acknowledge these trade-offs rather than attempting to eliminate them entirely.

In many cases, the optimal solution involves using proxy or layered conversion goals that strike a balance between learning speed and value accuracy.

Dig deeper: How to use proxy metrics to speed up optimization in complex B2B journeys

Practical examples of selecting and strengthening conversion goals

Ecommerce optimization based on gross margin, not revenue

For ecommerce, optimizing toward order value assumes all revenue is equal. In reality, product margins often vary widely. When revenue alone is used as the optimization signal, algorithms may prioritize high-value but low-margin products.

A more effective approach is to optimize for gross margin by passing margin-adjusted conversion values via server-side tracking or offline conversion imports. This allows bidding systems to prioritize your business’s profitability rather than top-line revenue, without exposing sensitive cost data client-side.
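
Converting revenue into a margin-adjusted conversion value, as described above, can be sketched like this. The SKUs, margins, and order shapes are invented for illustration; in practice the adjusted value would be computed server-side and uploaded in place of raw revenue.

```python
# Illustrative: turn order revenue into a margin-adjusted conversion
# value before passing it to the ad platform, so bidding optimizes
# toward gross margin rather than top-line revenue.
MARGINS = {"sku-premium": 0.15, "sku-accessory": 0.60}  # assumed gross margins

def margin_adjusted_value(order: list) -> float:
    """order is a list of (sku, revenue) pairs; returns the margin-weighted value."""
    return round(sum(revenue * MARGINS[sku] for sku, revenue in order), 2)

high_revenue_order = [("sku-premium", 500.0)]    # big order, thin margin
margin_rich_order = [("sku-accessory", 150.0)]   # smaller order, fat margin

print(margin_adjusted_value(high_revenue_order))  # 75.0
print(margin_adjusted_value(margin_rich_order))   # 90.0
```

Note how the ordering flips: the smaller order is worth more to the business, and the adjusted values tell the bidding system exactly that, without exposing cost data client-side.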

Lead generation with long conversion latency

In lead gen models where final outcomes occur weeks or months after the initial click, form submissions alone are a weak signal. They are fast and high-volume, but poorly correlated with revenue.

Introducing lead scoring improves signal quality. Leads can be assigned proxy values based on known attributes and early indicators of quality, such as company size, role seniority, or engagement depth. These values can then be passed back to the platform via CRM integrations or server-side tracking, enabling value-based optimization even when final outcomes are delayed.
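
A lead-scoring pass like the one described might assign proxy values from early attributes. The attributes, weights, and value tiers below are invented assumptions, not a validated model; a real setup would calibrate them against closed-deal data.

```python
# Illustrative lead scoring: assign a proxy conversion value from early
# quality indicators, to pass back via CRM or server-side upload long
# before the real outcome is known.
def proxy_value(lead: dict) -> float:
    score = 0
    if lead.get("company_size", 0) >= 200:
        score += 2  # assumed: larger accounts tend toward bigger deals
    if lead.get("seniority") in {"director", "vp", "c-level"}:
        score += 2  # assumed: senior contacts close more often
    if lead.get("pages_viewed", 0) >= 5:
        score += 1  # engagement depth as an early behavioral proxy
    # Map the score to an assumed dollar value tier.
    return {0: 5.0, 1: 15.0, 2: 40.0, 3: 80.0, 4: 120.0, 5: 200.0}[score]

lead = {"company_size": 500, "seniority": "vp", "pages_viewed": 7}
print(proxy_value(lead))  # 200.0
```

The value tiers are what make value-based bidding possible here: the platform gets a timely number that correlates with eventual revenue instead of a flat count of form fills.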

Optimizing toward predicted lifetime value

If you’re focused on lifetime value (LTV), there are two viable approaches:

  • Where LTV can be reliably predicted within a short window after conversion, predicted values can be imported and used directly for optimization.
  • If early prediction isn’t feasible for you, lead scoring or early behavioral proxies can be used instead.

In both cases, your objective is the same: provide the algorithm with timely, value-weighted signals that correlate strongly with long-term revenue, rather than waiting for delayed outcomes that are too sparse to support learning.

Key takeaways for performance marketers

Modern bidding systems are powerful pattern recognition engines, but their effectiveness is constrained by the signals they receive.

The biggest performance gains rarely come from constant restructuring or tactical tests. They come from improving the clarity, quality, and commercial relevance of your conversion data.

Conversion signals are the most influential inputs you control, and misaligned or low-quality setups will limit performance regardless of how advanced the algorithm becomes.

Regularly audit your conversion definitions and ask a simple question: "Would you genuinely celebrate an increase in this outcome?" If the answer isn’t clear, the signal likely needs refinement.

Improving conversion goals, strengthening signal quality, and balancing volume, accuracy, and latency aren’t optional. They’re among the highest-impact ways to improve paid search performance.

Microsoft could drop mandatory sign-ins for Windows 11

Windows 11 could soon be freed of mandatory Microsoft accounts. Last week, Microsoft made it clear that it plans to significantly improve Windows 11 in 2026. While Microsoft’s list of planned improvements was impressive, it was missing one thing that would immediately be loved by Windows 11 users. That’s the removal of Microsoft accounts from […]

The post Microsoft could drop mandatory sign-ins for Windows 11 appeared first on OC3D.

Ayaneo Suspends Next 2 Handheld Sales as Component Pricing Becomes "Unsustainable"

Ayaneo, a well-known handheld gaming PC maker, has suspended its preorders for the Next 2 gaming handheld PC due to what the company describes as unsustainable pricing of its materials, making manufacturing and sales impossible. In an update on its Indiegogo page for the Next 2 device, Ayaneo states that the cost of storage has been higher than the company anticipated, surpassing an already inflated price quote. However, after the Chinese New Year, suppliers began providing quotes that exceeded every reasonable level, making Ayaneo's original lowest price of $1,999 for a device with an AMD Ryzen AI Max 385 processor, 32 GB of LPDDR5X memory, and 1 TB of SSD storage unfeasible. While the company offered more expensive variants with up to a Ryzen AI Max+ 395 SoC, 128 GB of RAM, and 2 TB of storage, even a price tag of $4,299 was insufficient to offset the losses from the costly components. The expensive RAM, very expensive NAND Flash, and AMD SoCs meant Ayaneo would be selling these handheld PCs at a massive loss.

This is where the advantage of larger OEMs comes into play, as they can secure slightly better contracts with memory and storage manufacturers, allowing them to get memory pricing into a somewhat decent range and even absorb losses until they can no longer sell their new PCs. Smaller makers like Ayaneo are not in such a position, and the company was forced to stop offering the Next 2 handheld for sale.

Chuwi Responds to CoreBook Series CPU Debacle with Refunds for Affected Buyers

Chuwi recently made news for all the wrong reasons, with Notebookcheck finding that the CoreBook Plus and CoreBook X laptops shipped with older, significantly worse CPUs than the spec sheet claimed. Previously, AMD issued a statement on the matter, washing its hands of the incident and placing the blame squarely (and seemingly fairly) on Chuwi's head. Now, Chuwi itself has stepped up to the plate and issued a response regarding the false CPU advertising.

In a recent blog post, Chuwi blames a production error for the CoreBook X and CoreBook Plus CPU mix-up, stating that a "limited number" of the laptops were assembled with the incorrect CPUs. It goes on to say that affected users should contact Chuwi to request a refund by sending an e-mail to service@chuwi.com or by going through the original purchase channel. Chuwi notes that any returned and refunded devices will need to be in the "original condition with all accessories included," which may be impossible in some cases. The brand has also set a deadline of May 31, 2026, for returns, which is a little over two months from the date of the announcement.

NVIDIA Releases GeForce 595.97 WHQL Game Ready Drivers

NVIDIA has released GeForce 595.97 WHQL, a minor bug-fix update with no new Game Ready titles this time. The driver is part of the R595 branch, which has had a fair number of issues. The 595.59 release was pulled entirely back in February after users reported fan detection failures and clock instability, and 595.79 followed shortly after to address a separate batch of issues. This latest release addresses three remaining issues: texture corruption in Halo Infinite on R595 drivers, stability issues in HITMAN World of Assassination when NVIDIA Smooth Motion is enabled, and crashes triggered when enabling DLSS Frame Generation while Instant Replay is active. Known issues still present include missing terrain in some areas of Enshrouded and occasional stutter in Arknights: Endfield. No general bugs are listed as fixed this time.

DOWNLOAD: NVIDIA GeForce 595.97 WHQL

(PR) Phison Expands Pascari Ecosystem in the EU to Accelerate AI Infrastructure

Phison Electronics, a global leader in NAND flash controllers and storage solutions, today announced its expansion of Pascari portfolio offerings across the European Union (EU), to be showcased at CloudFest 2026 from March 24-26, 2026 in booth G25. The company's reinforced commitment to the region comes with expanded partner engagement, distribution alignment and regional investment.

Global memory supply shortages are increasingly impacting cloud service providers (CSPs), hyperscalers and OEM server manufacturers across the EU as organizations prepare for new AI-driven revenue streams. In response, data center architects and IT leads are solving system challenges from the supply strain by addressing the infrastructure stack level, with particular focus on high-performance storage.

(PR) HP HyperX OMEN MAX PCs Get Intel "Arrow Lake Refresh" Upgrade

Today, at HP Imagine 2026, HP Inc. (NYSE: HPQ) announced its next evolution of high-performance gaming with the new HyperX OMEN MAX 45L, HyperX OMEN 35L, expanded OMEN AI game support, and new OMEN Gaming Hub capabilities. Together, these innovations combine advanced hardware, intelligent optimization, and creative tools to elevate how gamers play, compete, and create. The lineup brings HyperX's vision for the future of play to life across hardware and software, enhancing both player and machine capability to unlock greater potential. From elite desktop power and modular expandability to AI-driven optimization, HyperX continues to deliver high-end gaming experiences that are more adaptable, more personal, and more capable.

"At HP, we put gamers first in everything we build," said Josephine Tan, Senior Vice President and Division President of Personal Systems Gaming Solutions at HP Inc. "With the new HyperX OMEN desktops, OMEN AI, and OMEN Gaming Hub experiences, we're bringing together powerful hardware and intelligent software to deliver more performance, simpler optimization, greater control, and more fun."

(PR) HP Refreshes Z Workstations With Up to Four NVIDIA RTX PRO 6000 Blackwell GPUs

Today, at HP Imagine 2026, HP Inc. announced a new generation of HP Z Workstations and AI solutions to give people world-class high-performance compute, while also helping IT organizations modernize their infrastructure for a hybrid future. Across industries, power users such as engineers, architects, product designers, AI developers, and professional creators are facing unprecedented pressure as workflows grow more complex and performance demands accelerate. At the same time, IT leaders must balance these needs against cost, security, and manageability. HP's latest Z portfolio addresses both challenges and delivers uncompromising performance, future-proof design, and solutions that help customers deploy the right compute in the right place at the right time.

"Technology and high-performance workflows are evolving faster than ever," said Jim Nottingham, SVP and Division President, Advanced Compute Solutions. "HP Z workstations are built to equip the best and brightest professionals with the tools they need for specialized workflows and AI at the edge, while giving IT decision makers the ability to scale performance responsibly."

Xbox Full Screen Experience Gets Renamed Into "Xbox mode"

Microsoft is renaming its Xbox Full Screen Experience (FSE) to the simpler "Xbox mode" to make the feature more appealing to a broader audience. At this March's GDC 2026 conference, Microsoft confirmed that Xbox FSE is gradually being rebranded as Xbox mode, with a lower-case "m". This mode has been shown to enhance performance and reduce RAM usage, making it attractive to handheld console gamers and enthusiasts who plan to use it on their PCs to boost performance by minimizing background task load. In CPU-intensive games, this mode can provide a few extra frames per second and overall lower RAM usage, resulting in a smoother experience. According to the ASUS ROG Ally Life account on X, this feature has started rolling out to the Xbox Insider program with application version 2604.1000.30.0.

We have already observed that Xbox FSE, now called Xbox mode, enhances performance by using less system RAM than the standard Windows 11 desktop mode, as confirmed by MSI. Recently, Microsoft introduced this feature to regular Windows PCs, allowing them to play games using Xbox mode. MSI has confirmed that its handhelds running Windows 11 can significantly benefit from this feature. In standard Windows 11 desktop mode, the system uses 8.6 GB of RAM. However, with Xbox mode enabled, RAM usage drops to 7.8 GB, a 9.3% reduction. MSI describes this as a 5% difference based on the total system capacity of 16 GB, rather than the difference between the two usage figures. This reduction in background usage is what Microsoft is aiming for on a larger scale with the regular Windows 11 operating system, promising a smoother user experience and lower RAM usage. Xbox mode demonstrates that this is possible.
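
The two percentages MSI quotes come from the same 0.8 GB saving measured against different bases, which a quick check confirms:

```python
# MSI's reported RAM usage figures: desktop mode vs Xbox mode on a 16 GB system.
desktop_gb, xbox_gb, total_gb = 8.6, 7.8, 16.0
saving = desktop_gb - xbox_gb                # ~0.8 GB freed

relative_cut = saving / desktop_gb * 100     # saving relative to previous usage
share_of_total = saving / total_gb * 100     # saving relative to total capacity

print(round(relative_cut, 1), round(share_of_total, 1))  # ~9.3 vs ~5.0
```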

Huawei unveils new Atlas 350 AI accelerator with 1.56 PFLOPS of FP4 compute and up to 112GB of HBM — claims 2.8x more performance than Nvidia's H20

The Atlas 350 uses the Ascend 950PR chip, but it appears to be a cut-down version with less compute. Translation overhead aside, previous reports and announcements have already revealed the full specs of this silicon, including 128 GB of HBM and up to 2 PFLOPS of FP4 compute, while the Atlas 350 is being reported just a smidge below those numbers.
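Taking the reported figures at face value, a quick back-of-the-envelope comparison shows how far the Atlas 350 sits below the full Ascend 950PR spec. The input numbers are from the article and earlier reports; the derived percentages are our own arithmetic, not Huawei's:

```python
# Comparing the Atlas 350's reported specs against the previously
# announced full Ascend 950PR figures.
atlas_350 = {"fp4_pflops": 1.56, "hbm_gb": 112}
ascend_950pr_full = {"fp4_pflops": 2.0, "hbm_gb": 128}

ratios = {}
for key in atlas_350:
    ratios[key] = 100 * atlas_350[key] / ascend_950pr_full[key]
    print(f"{key}: {ratios[key]:.1f}% of the full 950PR spec")
# fp4_pflops: 78.0%, hbm_gb: 87.5%
```

At roughly 78% of the compute and 87.5% of the memory, the Atlas 350 looks consistent with a modestly binned-down variant of the same silicon rather than a different chip.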

LazyScreenshots – Capture and auto-paste Mac screenshots into AI chats with one keystroke

LazyScreenshots is a Mac screenshot tool for builders that captures a region and auto-pastes it into your AI assistant with a single keystroke. Features such as quick overlays, burst mode, and pixel measurements keep you focused while sending screenshots back and forth with your AI agent or any other app.


Buzbee AI – Create videos with a real-time voice collaborator that drives your workflow

Collaboration is critical to creators' success, but most AI creative tools are poor at collaboration. Buzbee AI pairs creators with a personalized Scout bee, a real-time voice-powered companion who helps ideate, script, and produce videos from the first spark to the final polish. Scout learns from your channel data and video content, applying proprietary storytelling intelligence to make better videos faster and scale your business.

No more prompt engineering one output at a time. You can create with Scout coordinating all your creative tasks across a swarm of worker bees to help you make better videos in minutes instead of days.


(PR) Thermal Grizzly Introduces New Mycro Pro RGB CPU Water Blocks, DeltaMate MPII CPU Coolers Coming in Summer

Starting today, two new CPU water blocks from Thermal Grizzly are available: the AM5 Mycro Pro RGB and the Intel 1851 Mycro Pro RGB. The coolers consist of a nickel-plated copper base plate, a cooler plate made of tempered acrylic glass, an acrylic jet plate for flow optimization, and a cover made of anodized aluminium. The inlet and outlet ports with G1/4-inch threads are each located in the acrylic glass. A filter in the form of a fine-mesh screen is located at the inlet port. The design of the cooler is identical to that of the Mycro Direct-Die coolers. Key features at a glance:
  • Fine-mesh filter at the inlet port
  • Nickel-plated copper cold plate with ultra-high-performance microfins (0.2 mm)
  • Metal housing with RGB-illuminated viewing window and internal layout
  • Inlet and outlet ports with standard G1/4" threads
  • Compatibility lists available online

(PR) BenQ Launches AI-ready Display with On-device Processing and Android 15

BenQ, a global leader in interactive display and collaboration technology, today introduced the BenQ Board RP05, its most advanced and powerful AI-ready interactive display to date. Building on the success of the award-winning RP04, the RP05 features a next-generation AI-enabled architecture, faster processing performance, and an expanded software ecosystem, delivering smart, simple, and secure collaboration designed for the future of learning and modern work environments.

Powered by Android 15 and an industry-leading AI architecture featuring a 10 TOPS NPU (Neural Processing Unit), the RP05 is built to be future-ready, delivering lower latency and improved processing speed. Schools and organizations can confidently upgrade to new AI capabilities as they are adopted by their teams. The board integrates practical AI tools directly into everyday workflows, streamlining instruction, ideation, and documentation in real time.

US senators want to suspend Nvidia AI chip export licenses to China and its intermediaries — bipartisan letter to Commerce Dept says that Huang's claims of no chip diversion 'were contradicted by reporting available'

U.S. senators Elizabeth Warren (D-Mass.) and Jim Banks (R-Ind.) told Commerce Secretary Howard Lutnick that he should suspend all active export licenses to China for Nvidia AI chips, saying that Nvidia's most advanced AI GPUs are being diverted into the country despite Jensen Huang's assurances.

Get an entire RTX 5090 Alienware gaming PC for just 17% more than the GPU's standalone cost β€” 9800X3D beast with 32GB DDR5 and 1TB SSD drops below $4,450 at Dell, saving you a massive $1,200

This Alienware Area-51 gaming PC is one of the most powerful pre-built rigs you can buy. It's on sale with a $1,200 discount, fitted with an RTX 5090, AMD Ryzen 7 9800X3D, 32GB of RAM, and a 1TB SSD, all for only $4,449.99, with an RTX 5080 variant even cheaper.

VentureLens – Analyze pitch decks like a VC, privately in 60 seconds

VentureLens is an AI-powered pitch deck analysis tool that helps founders and investors evaluate startup decks in seconds. Simply upload a pitch deck and receive a structured, investor-style report highlighting strengths, weaknesses, risks, and opportunities, just like a VC would. Designed for speed and clarity, VentureLens turns hours of manual review into a 60-second workflow.

Built with privacy in mind, VentureLens ensures your data stays secure while delivering actionable insights you can actually use. Whether you're a founder refining your pitch or an investor screening opportunities, VentureLens helps you make smarter, faster decisions with confidence.


Ghost Campaign Uses 7 npm Packages to Steal Crypto Wallets and Credentials

Cybersecurity researchers have uncovered a new set of malicious npm packages that are designed to steal cryptocurrency wallets and sensitive data. The activity is being tracked by ReversingLabs as the Ghost campaign. The identified packages, all published by a user named mikilanjillo, are:
  • react-performance-suite
  • react-state-optimizer-core
  • react-fast-utilsa
  • ai-fast-auto-trader
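As a precaution, dependency manifests can be checked for the names above. Here is a minimal sketch that scans a package.json for the packages named in this report; the `find_ghost_packages` helper is our own illustration, and a real audit should rely on a maintained advisory feed (for example via `npm audit`) rather than a hard-coded list:

```python
import json

# Packages named in the ReversingLabs "Ghost" report (from the article)
GHOST_PACKAGES = {
    "react-performance-suite",
    "react-state-optimizer-core",
    "react-fast-utilsa",
    "ai-fast-auto-trader",
}

def find_ghost_packages(package_json_text: str) -> list[str]:
    """Return any Ghost-campaign package names declared in a package.json."""
    manifest = json.loads(package_json_text)
    declared = set()
    # Check every dependency section that can pull code at install time
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        declared.update(manifest.get(section, {}))
    return sorted(declared & GHOST_PACKAGES)

# Example: a manifest that pulls in one of the malicious packages
sample = '{"dependencies": {"react": "^18.0.0", "react-fast-utilsa": "1.0.2"}}'
print(find_ghost_packages(sample))  # ['react-fast-utilsa']
```

Note that a name check only catches direct dependencies; transitive installs would require walking the lockfile as well.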

5 Learnings from the First-Ever Gartner Market Guide for Guardian Agents

On February 25, 2026, Gartner published its inaugural Market Guide for Guardian Agents, marking an important milestone for this emerging category. For those unfamiliar with the various Gartner report types, "a Market Guide defines a market and explains what clients can expect it to do in the short term. With the focus on early, more chaotic markets, a Market Guide does not rate or position
