Microsoft has finally released a second DirectX Ray Tracing (DXR) functional specification file that outlines what its ray tracing pipeline is expected to look like, the goals the company is pursuing, and what the technology does behind the scenes. In the original file, Microsoft described the ray tracing pipeline from ray shader generation, scheduling, and acceleration structure, all the way to the shading of the actual game. This time, the company has shared insights into areas such as clustered geometry, partitioned top-level acceleration structures (TLAS), and indirect acceleration structure operations.
Firstly, Microsoft introduces the concept of clustered geometry. Readers need to understand that the core graphics elements are triangles, which are the building blocks of the 3D worlds we have today. However, DXR clustered geometry treats groups of nearby triangles as common building blocks or multiple building blocks, allowing the GPU to build, move, and instantiate geometry in bulk. Instead of handling this separately with multiple calls for triangles, the GPU's task is now much more simplified. DXR even defines compact vertex encodings and predefined template formats to ensure that the GPU memory and bandwidth required to execute bulk geometry building and moving are sufficient. As a result, the GPU doesn't have to update or duplicate existing geometry, and DXR will help render foliage, crowds, and in-game props once, allowing them to be moved around easily. This reduces the GPU load and improves the performance of ray tracing in games.
Intel has released its latest 101.8626 WHQL Arch GPU graphics drivers, adding day-one support for Death Stranding 2: On the Beach and Everwind games, as well as introducing the new Intel Graphics Shader Distribution Service, which should improve first load times by up to 2x on Intel Arc B-series and Intel Core Ultra Series 3 and Series 2 CPUs with Intel Arc GPUs. The Graphics Shader Distribution Service is currently limited to a dozen games, so hopefully Intel will extend the list further with the future driver release. In addition, the new Intel Arc GPU 101.8626 WHQL graphics drivers also improve game performance in the Nioh 3 game on Intel Arc B-series GPUs by up to 9 percent at 1080p resolution with Ultra settings.
The new Intel Arc GPU driver release also fixes a couple of issues seen with previous driver releases, including an application crash with ray tracing enabled in the Naraka Bladepoint game, cinematics corruption in Hogwarts Legacy, and visual corruption in the viewport while resizing the window with HDR enabled in Davinci Resolve Studio. These issues are fixed on Intel Arc B-series GPUs and Ultra Series 3 CPUs with Intel Arc GPUs. Since it is a major WHQL release, Intel is listing all new known issues that are left to be fixed with future driver releases.
Samsung Electronics Co., Ltd. today announced it has signed a Memorandum of Understanding (MOU) with AMD to expand their strategic collaboration on next-generation AI memory and computing technologies. The signing ceremony was held at Samsung's most advanced chip manufacturing complex in Pyeongtaek, Korea, attended by Dr. Lisa Su, Chair and CEO of AMD, and Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics.
"Samsung and AMD share a commitment to advancing AI computing, and this agreement reflects the growing scope of our collaboration," said Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics. "From industry-leading HBM4 and next-generation memory architectures to cutting-edge foundry and advanced packaging, Samsung is uniquely positioned to deliver unrivaled turnkey capabilities that support AMD's evolving AI roadmap."
MSI, a global leader in high-performance server solutions, today unveils its latest AI infrastructure portfolio built on NVIDIA's modular architectures, including the NVIDIA MGX platform and NVIDIA DGX Station technology. Designed to accelerate AI training, large-scale inference, HPC, edge, and next-generation data center workloads, MSI's expanded lineup delivers exceptional scalability, performance density, and deployment flexibility.
Scalable AI Infrastructure Built on NVIDIA MGX Architecture
Leveraging the modular design of NVIDIA MGX architecture, MSI has developed a comprehensive portfolio of 4U and 6U liquid-cooled servers supporting NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA RTX PRO 4500 Blackwell Server Edition GPUs. The NVIDIA MGX architecture enables flexible CPU selection, high-capacity memory configurations, and seamless integration of high-speed networking - empowering enterprises to deploy infrastructure tailored to diverse workload requirements, from data center deployments to edge applications.
CarChrono delivers multi-source vehicle intelligence for car buyers. Search millions of listings, decode any VIN or Japanese chassis number, and get transparent reports with specs, title and accident history, market value, recalls, and ownership timelines. It cross-references over nine data sources, flags discrepancies, and helps detect fraud such as odometer rollbacks and title washing. Use it across the US, Canada, Japan, the UK, Germany, and more with real-time coverage.
Apple on Tuesday released its first round of Background Security Improvements to address a security flaw in WebKit that affects iOS, iPadOS, and macOS.
The vulnerability, tracked as CVE-2026-20643 (CVSS score: N/A), has been described as a cross-origin issue in WebKit's Navigation API that could be exploited to bypass the same-origin policy when processing maliciously crafted web content.
The
Strauss Zelnick, the CEO of Take-Two Interactive, has a somewhat complicated relationship with artificial intelligence, having previously expressed an interest in AI NPCs for more natural conversations while also having confirmed that GTA VI will feature no generative AI. Now, in a recent interview with The Game Business, Zelnick has once again commented on the capabilities and applications of generative AI. Addressing a question about Google's recent Project Genie showcases, Zelnick has dismissed the idea that generative AI could be used as a one-stop-shop for game development, saying that the gaming industry has always used technology to create great entertainment," adding that "an advance in technology that allows us to do our work better and quicker is great for us."
Zelnick dismisses the idea that AI projects like Genie are a threat to the gaming industry and to game developers, commenting that "it's quite obvious that creation tools are a benefit to our industry," and that the notion that "AI tools can somehow create big hits kind of doesn't stand to reason." He reasons that generative AI may help developers create game assets, but that creating a hit game requires human engagement and creativity. Zelnick rounds out the AI discussion by emphasizing that Take-Two's goal is to create engaging, entertaining games, and that this requires creativity, adding that "technology can assist with that mission, but technology on its own will not replace the fulfillment of that mission." The executive goes on to explain that "the notion that somehow new tools would allow an individual to push a button and generate a hit and bring it to many millions of consumers around the world, it's a laughable notion." It's worth noting that there have been recent layoffs, like those at EA, that have been attributed to or followed by an increase in AI adoption, indicating that, while AI may not be capable of replacing artists and developers from a technical standpoint, it does not necessarily mean there is no threat.
SteadyFlow tells freelancers exactly how much is safe to spend each week by accounting for tax reserves, unpaid invoices, and irregular income. It was built by a solo founder who got hit with a surprise tax bill.
Cybersecurity researchers have disclosed a critical security flaw impacting the GNU InetUtils telnet daemon (telnetd) that could be exploited by an unauthenticated remote attacker to execute arbitrary code with elevated privileges.
The vulnerability, tracked as CVE-2026-32746, carries a CVSS score of 9.8 out of 10.0. It has been described as a case of out-of-bounds write in the LINEMODE Set
Spydomo monitors competitors across reviews, social media, websites, and news, then delivers concise AI-generated briefs highlighting launches, customer pains, and market trends. It's designed for founders, product teams, agencies, and investors.
It automatically finds sources like G2, Reddit, LinkedIn, and blogs, turning scattered signals into structured insights you can act on. Receive updates daily, weekly, or instantly via email, Slack, or Teams. Pricing starts at $10 per tracked company per month, with a 14-day free trial.
Friendware brings AI autocomplete to macOS so you can write and act faster across every app. It observes your style and drafts instant, context-aware replies for email, Slack, LinkedIn, iMessage, and X. It polishes text and generates prompts as you type; just press Tab.
Use one-click actions to handle multi-step tasks like checking Stripe billing, sending follow-ups, or creating calendar invites. Built with native Mac code, it runs fast, stays lightweight, respects local context, and supports 100+ languages.
The Shokz OpenRun Pro 2 combines bone conduction and open-ear headphone tech to provide better audio quality for exercising, and it’s 22% off in Amazon’s Big Smile Sale.
Echo is an anonymous 3D voice space where people leave short voice or text messages in a virtual environment. There are no accounts, profiles, or comments, so people can speak more honestly without the pressure of social media.
It offers a quieter way to express feelings, release emotions, and hear real voices from others. Instead of performance and attention, Echo is built for honesty, privacy, and emotional connection.
RouteStack gives AI agents access to live travel data including hotels, flights, cars, rentals, and activities in one place. Pricing and availability are pulled in real time from global distribution systems, and every booking link is cryptographically signed for secure checkout.
Developers can connect to RouteStack using Python or Node SDKs, a ready-to-run server, or Docker. It's built to be fast, reliable, and easy to integrate into any AI agent or framework.
RPCS3, the multi-platform, open-source PlayStation 3 emulator that recently announced support for 70% of the PlayStation 3 game library, has just added a UX workflow to automatically add emulated games to your Steam library. This effectively works the same way as adding a third-party game to your Steam library from Steam, but it eliminates the extra step of opening Steam and manually adding the game launcher to your Steam library. Games added this way will automatically add the game's PS3 art included with the game files in the emulator, to boot.
It's a small UI change, in the grand scheme of things, but it should help simplify game emulation and make it easier for gamers to play their emulated games via RPCS3. Being able to add third-party games to a game library is a nigh-essential feature that even Microsoft recently added to the Xbox App for Windows. RPCS3 adopting support for seamless Steam library integration could effectively let Linux and SteamOS players go from cold boot to playing a game all from a controller using Steam Big Picture mode—no keyboard or mouse necessary.
Chartbeat data shows search referral traffic fell 60% for small publishers over two years, compared with 22% for large publishers, per an Axios report.
SISTRIX analyzed over 100 million German keywords and found AI Overviews reduce the position one click rate from 27% to 11%. Impact varies by industry.
Crimson Desert has had no shortage of hype leading up to its March 19 launch date, but research firm, Alinea Insights, has put a number on that hype. Based on Steam data and sales approximations, Alinea estimates that the new open-world action-adventure game has already surpassed $20 million in Steam revenue alone. This is based on an estimated 400,000 pre-orders ahead of launch. Currently, Crimson Desert sits at the top of Steam's Top Sellers charts, which ranks games by revenue earned, having beaten out long-standing chart-topper, Counter-Strike—at least in the US—and the more recent indie hit, Slay the Spire 2, which recently dropped from first to fourth place.
Likewise, according to SteamDB, at the time of writing, Crimson Desert is the fourth most-wishlisted game on Steam, boasting over 170,000 followers on Steam. The excitement leading up to the launch of Crimson Desert has been building for a good long while now, but the game, and its development team, earned a significant boost in its reputation online after a very forgiving set of minimum hardware requirements was announced for Crimson Desert in a time that games seem to be getting more demanding with every passing launch. Alinea attributes much of Crimson Desert's success to the studio's community building efforts and authentic, transparent approach to community engagement, much like the recent gameplay demo showed off on the PlayStation Japan YouTube channel.
At GTC 2026, NVIDIA announced its next-generation DLSS installment, version 5. However, after the community expressed significant backlash over the goals of DLSS 5, NVIDIA CEO Jensen Huang addressed the criticism surrounding the upcoming technology. In a Q&A session, NVIDIA provided Tom's Hardware with a response from a gamer's perspective about the objectives being pursued, as much of the negative feedback stemmed from the perceived visual downgrade in Resident Evil Requiem with the enhancements of DLSS 5. "Well, first of all, they're completely wrong. The reason for that is, as I have explained very carefully, DLSS 5 fuses the controllability of geometry, textures, and every aspect of the game with generative AI." What the CEO of the world's largest company is trying to convey is the controllability offered by DLSS 5 and how it provides developers with an option to enhance visuals.
For example, much of the criticism focuses on how DLSS 5 alters the visual definition of various in-game elements beyond the original developer's intention. Jensen Huang explains that DLSS 5 is not merely a post-processing tool for games but rather a system that gives developers generative control at the geometry level, not just a filter applied on top of the graphics processing pipeline. Huang noted that developers have complete control over what the technology does from the very beginning. DLSS 5 uses the game's color and motion vectors for each frame, and the model aligns with what the game developer originally intended for the image. Even the extent to which DLSS 5 enhances visual fidelity can be controlled, and it is entirely up to the developer to decide. "It's not post-processing, it's not post-processing at the frame level, it's generative control at the geometry level," iterated Huang.
From the outset, Samsung positioned the TriFold as an experimental, tightly controlled product rather than a mass-market flagship. Early batches in Korea were limited to around 3,000 units per release, each selling out within minutes on Samsung's online store.
QuantDock is an AI-powered trading platform that turns plain-English trading ideas into fully automated trading strategies. Users describe a trading idea—such as “buy AAPL when it dips 5% from its recent highs and sell at a 5% profit”—and QuantDock converts it into a structured strategy, runs backtests, and enables automated trade execution.
By combining natural language with quantitative trading tools, QuantDock lowers the barrier to algorithmic trading. Traders can quickly try ideas, evaluate performance with backtesting, and deploy AI-driven trading bots without programming expertise.
Scratch Frameworks was built on one uncomfortable truth: a customer's decision to leave is made 30 to 90 days in advance, yet it often comes without warning. Most customer success tools only document what happened. They don't tell you why or what to do next. We're the first platform to apply behavioral science to churn prediction, uncovering the real reasons behind disengagement before they become irreversible. Upload your customers, get instant health scores, and when a friction point is detected, you get a step-by-step intervention plan so your team knows exactly what to do right now. Stay ahead every time.
Velocity Learning is a K–8 math fluency game that trains automatic recall through short, timed sessions on iOS and Android. Students target weak facts in Study mode, then push speed in Cram mode while tracking progress on a clear mastery grid. Parents and teachers can quickly see mastery, streaks, and daily improvement, helping students gain confidence and stronger foundations in just 10–15 minutes a day.
AI agents which perform normal office tasks can also autonomously exploit systems, bypass protections, and exfiltrate sensitive data inside simulated networks.
Sony makes its PlayStation Portal streaming device even better with its new “High Quality Mode” On March 18th, Sony will be giving its PlayStation Portal a major upgrade. Soon, owners of Sony’s game streaming device will gain access to “1080p High Quality Mode” for both Remote Play and Cloud Streaming, boosting the device’s image quality. […]
The Lenovo Legion 7i 16" (Gen 10) is a near-perfect laptop for mid-range gaming, casual use, and even productivity, and it's now on sale for a 24% discount at Best Buy.
There has been a lot of discussion about Marathon's gameplay, pacing, and potential shortcomings in the weeks since the game launched. One feature that has seemingly been requested all over since the game launched is a duos queue for squads of two players, and it looks as though Bungie is indulging in the fans' requests with an imminent experimental duos queue. According to an X post by Marathon's game director, Bungie will start testing for a duos queue on Wednesday, 18 March, at 10 AM PT. Given that Bungie is even testing a duos queue suggests that there is enough player feedback for the game studio to consider adding the feature to the game at large. There are still questions about how two-player teams will fare in a game that has notoriously high time-to-kill in non-PvP encounters, but that's perhaps one of the gameplay aspects that will need tuning and experimentation to nail down.
This is an experimental test that will be limited to the Perimeter zone, and there are a few minor details to note if you're planning to hop into the new game mode. For starters, and likely the most notable issue players might run into, the duos queue will not have any matchmaking, meaning duos will only be available to players already in a squad with two players. The director is also clear that "some things will be jank," suggesting this will be far from a polished experience for players, but he hopes that any data gathered during the testing can be used to expand the duos queue in the future. Players interested in joining the duos queue can select "Perimeter - Duos" from the zone selection screen, but this will not be the final UX flow, leaving Bungie with a lot of headroom to grow the Marathon duos gameplay experience going forward.
MSI plans to increase the price of its PC products by 15 - 30%, company general manager Huang Jinqing recently said. Speaking with investors, Jinqing confirmed that the entire hardware industry is facing unprecedented market conditions. Memory manufacturers have almost entirely shifted their priorities, allocating the majority of their production...
Unlike Nvidia's earlier Grace processors, which were primarily sold as companions to GPUs, Vera is positioned as a general-purpose data center CPU with a strong focus on AI-centric workloads such as agentic frameworks, scripting-heavy pipelines, analytics, and code compilation. The chip is built on 88 Nvidia-designed Arm v9.2-A "Olympus" cores,...
Mistral Forge lets enterprises train custom AI models from scratch on their own data, challenging rivals that rely on fine-tuning and retrieval-based approaches.
Thousands of people are trying Garry Tan's Claude Code setup, which was shared on GitHub. And everyone has an opinion: even Claude, ChatGPT, and Gemini.
Kagi's "Small Web" offers a handpicked collection of more than 30,000 non-commercial, human-authored websites, including personal blogs, webcomics, and independent videos.
The PS5 release of Starfield has long been rumored for 2026, but it was unclear when exactly the game would launch for Sony's gaming platform, but Bethesda has now confirmed that Starfield will launch on the PS5 in April 2026. Specifically, the launch is slated for an April 7 release. Starfield's PlayStation 5 release will coincide with the launch of a free game content update and a new story DLC, all of which was detailed in a recent developer deep-dive on the Bethesda Softworks YouTube channel.
Bethesda says that the Free Lanes update was heavily guided by player feedback, and that it will be the biggest update since the game's launch. Free Lanes will add a new vehicle and spacesuit, as well as new weapons to find and POIs for players to explore and interact with. It also adds a cruise mode to speed up interplanetary travel and allow players to interact with their crew and ship on longer trips. There are also a new material to enhance weapons and ships, and an expansion to the progression system to give players more to do in space and on the planets they encounter. Enemies have also received an update to make enemy encounters more challenging and varied.
Following the recent early performance benchmarks of the Intel Core Ultra 5 250K Plus, Intel's Arrow Lake-S Core Ultra 3 205T has broken ground on PassMark, and it's giving the Core Ultra 5 255 and Core Ultra 5 255T a run for their money, at least in single-core benchmarks. Multithreaded benchmarks paint something of a grimmer picture for the Core Ultra 3 205T, though—likely due to the reduced core and thread count of the 205T, which features just eight cores and eight threads, likely four P-cores and four E-cores.
The Core Ultra 3 205T scored a respectable 4,561 points in the single-core benchmark, beating out the Core Ultra 5 255 and 255T by 3.2% and 6.6%, respectively. When it comes to multithreaded tasks, however, the Core Ultra 3 205T falls behind. In PassMark's benchmarks, the 205T was nearly 15% behind the Core Ultra 5 225. Still, the Intel Core Ultra 3 205T comes in ahead of the AMD Ryzen AI 5 435 by a significant margin in both single- and multi-core tests, and the Core Ultra 3 205T seems to have been tested at a peak TDP of 35 W, all of which may make it worth considering for lower-end office builds.
Video StoreAge is a new company focused on creating physical releases of indie films. The startup aims to take a more authorial approach to distribution, using a patented encrypted USB drive to share its curated titles. Its ultimate goal is to disrupt algorithm-driven distribution in favor of communities and grassroots...
Nvidia CEO Jensen Huang responded to backlash against DLSS 5, saying artistic control remained with developers and that the AI works with existing geometry.
echo99 records, transcribes, and summarizes your calls across Zoom, Google Meet, MS Teams, and Webex. It delivers accurate, speaker-labeled transcripts, AI summaries with action items and decisions, and a searchable archive for every conversation.
Send the meeting bot to attend for you, then review talk time, sentiment, and engagement, and run post-call analysis to extract quotes and trends. Flexible pay-as-you-go pricing and team options make it easy to adopt at any scale.
Ray Tracing is coming to Death Stranding 2 on PC PC players will have access to optional ray tracing upgrades in Death Stranding 2 Death Stranding 2: On the Beach is coming to PC on March 19th, with new content arriving on PlayStation 5 on the same day. New content includes a new difficulty mode […]
A new datamine suggests Wuthering Waves could be coming to Xbox, with platform references discovered in game files, though no official announcement has been made.
A $599 Mac laptop and $899 Surface laptop just don't compare, and it's not because the Surface is so much better that it warrants its higher price tag.
The Xbox App has been undergoing an overhaul for some time, and the latest update now allows users to add apps, games, and virtually any third-party software to its library. Windows Central has tested this feature, providing a preview of the process. Although these third-party games and software are not linked to any Microsoft service, the Xbox App offers a centralized location for launching them as shortcuts. For example, Steam has offered a similar feature for years, allowing gamers to add games installed from third-party stores to the Steam Client, but only as shortcuts for launching, not as official sources. This means that any updates to third-party applications will still be managed by their respective apps or clients, with the Xbox App serving merely as a unified shortcut hub.
The process is quite simple. After opening the Xbox App on your PC, go to the "My Library" section, find the small "+" icon in the top right corner, and click it to see suggested additions. If your application isn't listed among the suggested files, the Xbox App enables you to use File Explorer to manually browse for files and locate what you want to display. Nearly any .exe file can be added, including games, productivity apps, or almost anything you can imagine. The only limits are your imagination or something that hasn't been tested yet. For users who want to use the Xbox App as a single app launcher, this feature allows you to embed any game or app within the same user interface, which is a nice option for those who enjoy the Xbox App's user experience.
The two new SKUs are the Core Ultra 9 290HX Plus and the Core Ultra 7 270HX Plus. The former serves as the new flagship, offering faster performance than the Core Ultra 9 285HX in both games and professional applications. Intel claims it delivers up to 7% higher single-threaded performance in Cinebench 2026.
Nvidia officially launches the DGX Station featuring a GB300 Grace Blackwell Ultra Desktop Superchip, 784GB of LPPDDR5X and HBM3e memory, and a 1,600-watt power rating.
GameStop has declared that the Sony PlayStation 3, Xbox 360, and Nintendo Wii U are now officially retro consoles, with the change now allowing trade-in of any console that still powers on, even if they are faulty or "aesthetically unfortunate."
FraudSentry is a personal fraud detective that analyzes suspicious texts, emails, links, screenshots, and documents in seconds. It uses AI with a curated database of 100,000+ threat patterns to trace links, surface red flags, and reveal how schemes operate. You receive a clear, actionable report with the evidence, recommended next steps, and easy sharing to protect family and friends. Coming soon to TestFlight for iOS, with Android and web later this year.
Cybersecurity researchers have disclosed details of a new method for exfiltrating sensitive data from artificial intelligence (AI) code execution environments using domain name system (DNS) queries.
In a report published Monday, BeyondTrust revealed that Amazon Bedrock AgentCore Code Interpreter's sandbox mode permits outbound DNS queries that an attacker can exploit to enable interactive shells
Starfield is getting two new expansions and a whole lot of improvements next month as part of a huge content update, but from Bethesda's point of view, it's not a "2.0" patch.
All the ways to watch Man City vs Real Madrid live streams online – including for FREE – in the decisive 2025/26 Champions League 2nd leg at the Etihad.
Brazil’s new Digital ECA law requires online providers to implement strict age verification. A massive surge in VPN usage suggests citizens are turning to encryption to protect their privacy.
Google is expanding Personal Intelligence to free U.S. users in AI Mode, connecting Gmail and Photos to search. Gemini app and Chrome rollout starting.
YouTube is experimenting with a format that keeps ads visible even after users skip — potentially reshaping how advertisers think about skippable inventory.
What’s happening. YouTube is testing a sticky banner overlay that appears once a user skips an ad. Instead of the ad disappearing entirely, a branded card remains on-screen until the viewer actively dismisses it.
How it works. After hitting “skip,” users return to their video as normal, but a persistent banner tied to the original ad stays visible within the player, extending the advertiser’s presence beyond the initial skip.
Why we care. This test from YouTube creates a way to maintain visibility even when users skip ads, potentially increasing brand recall without requiring full ad views.
It also changes how skippable performance may be evaluated, as impressions and engagement could extend beyond the initial ad, giving brands more value from the same inventory within Google’s ecosystem.
Why it’s notable. Skippable ads have traditionally meant lost visibility once skipped. This format changes that dynamic by offering a second chance for exposure, even when users opt out of the full ad experience.
Impact for advertisers. The update creates an opportunity for extended brand visibility and recall, but could also influence engagement metrics and how users perceive ad interruptions.
The bottom line. If rolled out widely, the sticky banner test could redefine what a “skipped” ad means — turning it into continued, lower-friction exposure rather than a full exit for advertisers on YouTube.
First seen. This update was first spotted by Founder & CEO of Adsquire Anthony Higman who shared spotting it on LinkedIn.
Google is incrementally improving metric visibility in Performance Max, giving advertisers more insight into how creative choices — particularly video — impact performance.
What’s happening. Google Ads has introduced a new “Ads using video” segment within Performance Max channel performance reporting, allowing advertisers to break down results based on whether video assets were included.
Why we care. Marketers can now compare performance across placements that used video versus those that didn’t, offering a clearer view into the role video plays across Google’s automated inventory.
It helps answer a key question in an automated environment: whether investing in video assets is driving better results, allowing you to make more informed creative and budget decisions inside Google Ads.
Between the lines. As video becomes more central across surfaces like YouTube and beyond, this update gives advertisers a way to validate the impact of investing in video assets within automated campaigns.
The bottom line. The new segment adds a layer of clarity to Performance Max, helping advertisers better evaluate video’s contribution without changing how campaigns are run inside Google Ads.
First spotted. This update was first spotted by PPC News Feed founder Hana Kobzova.
Google is expanding Personal Intelligence across AI Mode, Gemini, and Chrome in the U.S., moving it beyond beta into broader consumer use.
Why we care. Personal Intelligence pushes Google further into fully personalized search, using first-party data like Gmail and Photos. That makes results harder to replicate, rank against, or track — especially in AI Mode, where outputs may vary based on user history, purchases, and behavior.
The details. Personal Intelligence now works across:
AI Mode in Google Search (available now in the U.S.)
Gemini app (rolling out to free users)
Gemini in Chrome (rolling out)
How it works. Users can connect apps like Gmail and Google Photos so Google can tailor responses using personal context. Examples Google shared include:
Shopping recommendations based on past purchases and brand preferences.
Tech troubleshooting using receipt data to identify exact devices.
Travel suggestions using flight details, timing, and past trips.
Personalized itineraries and local recommendations.
Hobby suggestions inferred from user interests.
Availability. These features are available only for personal accounts, not Workspace users, Google said.
Although Google continues to test ads in AI Mode, users who connect apps to enable Personal Intelligence won’t see ads — and that isn’t changing right now, a Google spokesperson confirmed.
Early results: users find these business connections “helpful,” per Google.
But there’s a clear carveout: no ads for users who opt into app-connected, highly personalized experiences.
The details.Google today expanded Personal Intelligence in AI Mode as a beta to anyone in the U.S., allowing Gemini to generate more tailored responses by connecting data across its ecosystem, including Google Search, Gmail, Google Photos, and YouTube.
Opting into Personal Intelligence creates an ad-free experience inside AI Mode.
Why we care. Ads are coming to AI Mode, but Google is moving cautiously where personal data is deepest. Personal Intelligence experiences stay ad-free for now while Google works out the right balance.
What Google is saying. A Google spokesperson told Search Engine Land:
“There are currently no ads for people who choose to connect their apps with AI Mode. That isn’t changing right now.
“Over the past few months, we’ve been testing ads in AI Mode in the US. Our tests have shown that people find these connections to businesses helpful and open up new opportunities to discover products and services.
“In the future, we anticipate that ads will operate similarly for people who choose to connect their apps with AI Mode. Ads will continue to be relevant to things like your query, the context of the response and your interests.”
Bottom line. Personal Intelligence positions Google’s Gemini app as a more personalized assistant, setting the stage for future ad experiences built on richer, cross-platform user context.
The Hormuz crisis is threatening TSMC and the global semiconductor supply chain We have now entered the third week of the Iran conflict, with Iran effectively closing the globally vital Strait of Hormuz in response to attacks from the US and Israel. Typically, the Strait would see 20% of the world’s natural gas and 25% […]
Satya Nadella unifies Copilot leadership under a former Snap VP, signaling a shift toward in-house "Superintelligence" and independence from OpenAI's models.
GitHub is a vast labyrinth of amazing open-source software projects, and it can be hard to see some of the awesomeness within. This app changes all that.
Among Microsoft's announcements for Xbox Game Pass additions for March 2026, one game caught everyone by surprise. Absolum, a popular beat em' up with Roguelite elements released in 2025 for PlayStation, Nintendo Switch, and PC, is finally coming to Xbox and Xbox Game Pass later this March.
NVIDIA has added another graphics card to its server lineup, this time in the form of a passively cooled, single-slot RTX PRO 4500 Blackwell Server Edition GPU. The company positions this release as a highly efficient GPU for compute-dense environments. It comes with 10,496 CUDA cores, 82 Ray Tracing cores, and 32 GB of GDDR7 memory running on a 256-bit bus, providing 800 GB/s of memory bandwidth, all within a total graphics power of 165 W. This specification is similar to the current RTX PRO 4500 Blackwell with an active dual-slot cooler but saves a few watts of TGP, as the actively-cooled edition has a 200 W TGP. The difference in TGP is attributed to higher-binned "Blackwell" GB203 dies with better frequency tuning, resulting in a similar performance target for this GPU. This server edition SKU also reduces memory bandwidth, running the 32 GB GDDR7 modules at 25 Gbps effective, while the regular blower-style RTX PRO 4500 Blackwell uses full 28 Gbps modules.
This server edition SKU is designed for server configurations that require hyper-dense setups, where a single-slot solution will be cooled by high-RPM server fans. For example, server farms could install a dozen of these GPUs in parallel within a single system, stacking them as long as there are available PCIe slots. High airflow chassis would push air through the passively cooled GPU shroud, cooling the 165 W TGP. Interestingly, this is not even the most efficient GB203 bin with 10,496 CUDA cores, as NVIDIA offers a GeForce RTX 5090 Mobile GPU SKU with only a 95 W TDP. However, that comes at the cost of some clock speeds, which are still unknown for the newest RTX PRO 4500 Blackwell Server Edition GPU.
System76 has refreshed its Thelio Mira desktop series with a focus on better thermals, easier serviceability, and a cleaner overall design, while it's still all built in-house at the company's facility in Denver. The new chassis mixes aluminium, steel, and a tempered glass front panel, with a vertical control bar consolidating the power button and front I/O into one clean strip. System76 uses steel fasteners throughout, and the panels are designed for quick access when you need to get inside for upgrades or maintenance. On the cooling side, System76 made improvements, the company claims up to 19% higher sustained CPU clock speeds and temperature drops of up to 13.5°C thanks to liquid cooling and revised airflow.
System76 also made changes to the specs as it is built around an ASRock X870 Pro RS WiFi motherboard. Processor options top out at the Ryzen 9 9950X or 9950X3D, memory goes up to 192 GB of DDR5, and storage can reach 28 TB spread across NVMe and SATA drives. As for the GPU, you can choose a single graphics card up to an RTX 5090 or Radeon RX 9070 XT over PCIe 5.0. Connectivity includes USB4, 2.5 GbE, and Wi-Fi 7, with a mix of front and rear USB ports. PSU requirements start at 750 W and go up to 1000 W for the beefier GPU configs. As with other System76 products, the Thelio Mira carries open-source firmware elements and is built with long-term usability in mind.
Crimson Desert's official March 19 launch is just around the corner, and in the lead-up to the launch, PlayStation Japan and Pearl Abyss have shown off the upcoming action-adventure game running on base PS5 hardware with respectable image quality and performance. The presenters of the Play! Play! Play! broadcast play through the first few minutes of the game's prologue and show off a few tutorial scenes, and, although it's difficult to gauge image quality directly, there is some information to be gleaned from the broadcast. Overall, performance and image quality seem to align with the promises made by the minimum hardware requirements published earlier this month—even if there are some upscaling artifacts visible in finer textures, like character hair.
The most notable aspect of the gameplay footage is that there are no obvious framerate issues, texture pop-ins, or stuttering visible. Even at longer draw distances, image quality seems to be well controlled, and motion remains smooth, even during high-action scenes where the load would generally increase. The demonstration has been heartening for PS5 players, since there were suspicions that the PS5 gameplay footage was being kept under wraps ahead of the game's launch due to lackluster performance. The broadcast also serves to give players a taste of the challenging combat, puzzle-platformer mechanics, exploration, and atmospheric world they can expect from the game's launch.
Echo Foundry Interactive, an independent game studio focused on building the next generation of music and rhythm experiences and founded by industry veterans behind the Guitar Hero, Rock Band, and DJ Hero franchises, today announced that Sound System, the highly anticipated next-generation rhythm game, will launch on Steam on October 16, 2026 for $24.99.
Sound System revives the rhythm game genre with intense gameplay, built-in creator tools, and a community platform for artists and players to share music-driven creativity. Shred guitar, play bass, or sing vocals using keyboard, controller, compatible guitar controllers, or microphone. Enjoy local split-screen or online modes, including co-op band performances with shared multipliers and effects, and head-to-head competitive modes for stage control battles.
Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, today unveiled one of the industry's first context memory (CMX) storage server as part of NVIDIA STX reference architecture announced NVIDIA GTC 2026. STX is a new modular reference architecture from NVIDIA which is designed to accelerate the full lifecycle of AI.
"Supermicro continues to be first to market with new rack scale architectures designed to exceed the needs of a rapidly evolving AI Factory customer base," said Charles Liang, president and CEO of Supermicro. "Building upon last year's introduction of the Petascale JBOF (Just a Bunch of Flash), where we proved the feasibility of a JBOF powered by NVIDIA BlueField-3 DPUs, we have developed the CMX storage server. Our prototype of the latest storage architecture demonstrates the level of our collaboration with NVIDIA, and our commitment to be first-to-market with game changing technologies."
Nvidia announced seven chips in full production at GTC 2026 on Monday, composing the Vera Rubin platform that the company intends to ship in the second half of this year.
Nvidia's DLSS 5 AI model uses a deep awareness of environmental lighting and how that light interacts with various materials in a scene to dramatically upgrade the appearance of games, and the results can be both impressive and uncanny.
The technology, called MOSAIC, replaces lasers with inexpensive MicroLEDs and uses a fundamentally different architecture to transmit data inside datacenters.
PDF Template API lets you design dynamic PDF templates and generate business documents via REST API, Zapier, Make, Airtable, and other no-code platforms. Build real-world documents with reusable headers and footers, data binding, auto-growing tables, and on-the-fly QR codes and barcodes. Use expressions, system variables, and 100+ functions to format content, calculate totals, and control layouts, then deliver polished invoices, packing slips, certificates, and more.
The ransomware operation known as LeakNet has adopted the ClickFix social engineering tactic delivered through compromised websites as an initial access method.
The use of ClickFix, where users are tricked into manually running malicious commands to address non-existent errors, is a departure from relying on traditional methods for obtaining initial access, such as through stolen credentials
Certain units of the Nikon Z5 II, Z6 III and ZR have been built using faulty parts, and Nikon says continued use could lead to them becoming completely "inoperable".
Sony has announced a new system software update for the PlayStation Portal that will add a new graphics mode and additional quality-of-life improvements.
New Taylor Sheridan series The Madison isn't everyone's cup of tea — and this 80s classic movie on Prime Video might be a better alternative for Kurt Russell fans.
Micro-drama apps make billions in consumer spending, so VURT launched its streaming app to allow indie filmmakers to capitalize on the vertical video trend.
Personal Intelligence allows Google's AI assistant to tap into your Google ecosystem, such as Gmail and Google Photos, to provide more tailored responses.
The hack, which brought ongoing widespread disruption to the company's operations, is thought to be the first major cyberattack in the United States in response to the Trump administration's war in Iran.
OpenAI has reportedly signed a partnership with AWS to sell its AI systems to the U.S. government for classified and unclassified work, marking an expansion beyond its Pentagon deal last month.
Finnish smart ring maker Oura is finally launching in India, taking on local rivals such as Ultrahuman in a relatively young smart ring market that is becoming price-sensitive thanks to an influx of low-cost options.
As AI agents take the reins for online shoppers, Sam Altman's unconventional startup is looking to expand its verification offerings to support agentic commerce.
At Google’s annual health event, The Check Up, we shared how our products, research and partnerships are making the most of AI to help everyone live healthier lives.
Learn how Fitbit is helping people take control of their health by linking medical records, improving sleep tracking accuracy and advancing metabolic research.
Yahoo CEO Jim Lanzone said AI-powered search — especially Google’s AI Mode — is putting the open web’s core traffic model at risk and argues AI search engines must send users back to publishers.
“I think that the LLMs are one big reason that they’re under threat, with AI Mode in Google being the biggest challenge.”
“Those publishers deserve [traffic], and we’re not going to have the content to consume to give great answers if publishers aren’t healthy.”
Why we care. Many websites are seeing less traffic from answer engines like Google and OpenAI — and I think it’ll only get worse. So it’s encouraging to see Yahoo trying to preserve the “search sends traffic” model. As he said: “We have very purposefully highlighted and linked very explicitly and bent over backwards to try to send more traffic downstream to the people who created the content.”
Yahoo’s AI stance. Yahoo is taking a different approach from chatbot-style interfaces, Lanzone said on the Decoder podcast. He added that Yahoo isn’t trying to compete as a full AI assistant:
“Ours looks a lot more like traditional search and it is more paragraph-driven. It’s not a chatbot that’s trying to act like it’s a person and be your friend.”
“We’re not a large language model. We’re not going to be the place you come to code. We’ve really launched Scout as an answer engine.”
What’s next: Personalization + agentic actions. Yahoo plans to expand Scout beyond basic answers and is embedding AI across its ecosystem:
“You are very shortly going to see us get into very personalized results. You’re going to see us get into very agentic actions that you can take.”
“There’s a button in Yahoo Finance that does analysis of a given stock on the fly… It is in Yahoo Mail to help summarize and process emails.”
Yahoo vs. Google isn’t a thing. Yahoo isn’t trying to win by converting Google users directly. Instead, Yahoo is prioritizing its existing audience and increasing usage frequency over immediate market share gains:
“Nobody chooses, you will not be surprised, Yahoo over Google or somewhere else to search. The way that we get our search volume is because we have 250 million US users and 700 million global users in the Yahoo network at any given time. There’s a search box there. And infrequently, they use it.”
A warning. Companies — including publishers — should be cautious about relying too heavily on AI platforms as intermediaries. Lanzone compared today’s AI partnerships to Yahoo’s past reliance on Google:
“You are tempting fate by opening up a way for consumers to access your product within a large language model.”
“The big bad wolf will come to your door and say everything’s cool.”
For a long time, a nonprofit’s digital presence hasn’t been a “nice-to-have.” It’s the central hub for mission delivery, donor engagement, and advocacy.
Many organizations struggle with the technical and strategic foundations needed to turn a website and a few social accounts into a high-performing digital ecosystem.
The goal isn’t simply to “be online.” It’s to build reliable infrastructure, so your organization owns its narrative, protects its assets, and measures the impact of “free” digital efforts.
Here’s a practical look at the critical elements of managing a nonprofit’s digital presence — and the common pitfalls to avoid — based on my experience helping several organizations throughout my career.
If you help an organization with digital marketing and they aren’t following these practices, your first step should be getting their digital house in order.
1. Own your foundations: Domains and account control
In my experience, the most overlooked risk in nonprofit digital management is the lack of direct ownership of technical assets.
A well-meaning volunteer or third-party agency often registers a domain or creates a social account using personal credentials. If that individual leaves the organization, you risk losing access to your primary digital channel — the domain you should own and control.
I’ve worked with several organizations that had to start over completely because they lacked control.
Domain ownership: Ensure the domain is registered in the organization’s name using a generic “admin@” or “info@” email address that multiple stakeholders can access. Set the domain to auto-renew and use a registrar that offers robust security features.
Website hosting and management: The organization also needs to control its website hosting and administration. Use a similar approach to the one recommended for domain ownership.
Social media governance: Again, use a similar process to the one described above to establish ownership of key social media channels. Grant volunteers access via delegation on individual channels rather than sharing passwords. This allows you to revoke access immediately if a staff member or volunteer moves on, protecting your brand’s voice and security.
2. Move beyond ‘winging it’: The editorial calendar
A common mistake for nonprofits is posting only when there’s an immediate need, which is often only when making a fundraising appeal. This “broadcast-only” approach often leads to donor fatigue and low engagement.
To build a community, you need a content plan that balances stories of impact with actionable requests.
The 70/20/10 rule: Aim for 70% value-based content (success stories, educational info), 20% shared content from partners or community members, and only 10% direct “asks.”
The editorial calendar: Use a simple tool, even a shared spreadsheet, to map out your themes and individual pieces of content for the month. This ensures you aren’t scrambling for a post on Giving Tuesday, that everyone knows what’s expected of them, and that your messaging and pace of content creation remain consistent across email, social, and your blog.
3. Tracking what matters (and ignoring what doesn’t)
Data is only useful if it informs future decisions. Many organizations get bogged down in “vanity metrics” like total likes or page views without understanding whether those numbers lead to real-world outcomes.
Set up conversion tracking: It isn’t enough to know that 1,000 people visited your site. You need to know how many of them clicked the “Donate” button or signed up for your newsletter.
Behavioral analytics: Use cost-free tools like Google Analytics 4 and Microsoft Clarity to see where people are dropping off in your donation funnel. If 50% of people leave the site on your “Ways to Help” page, you may have a UX issue or a confusing call to action.
4. Optimize for the ‘mobile-first’ donor
Most global web traffic is now mobile, and for nonprofits, this is critical. Donors often engage with your content on social media on their phones and expect a seamless transition to your donation page.
Speed and simplicity: Fancy header videos, sliders, and bloated images slow down your site, like the nonprofit example in this article about bad website design. Less is more when speed is of the essence. Reduce friction to make your website more usable. For example, if your donation page takes more than three seconds to load or requires more form fields than necessary, you’re leaving donations on the table.
Payment flexibility: Incorporate digital wallets like Apple Pay, Google Pay, or PayPal. Reducing friction at the point of donation is one of the most effective ways to increase your conversion rate. Many nonprofits use third-party tools to manage donations, so keep payment flexibility in mind when choosing a payment partner.
Even well-intentioned nonprofits can undermine their digital presence with a few common mistakes.
Targeting ‘everyone’
One of the biggest mistakes is trying to reach everyone. A digital presence that tries to appeal to every demographic usually ends up appealing to no one. Define your “ideal supporter,” and tailor your language, imagery, and platform choice to them.
Neglecting accessibility
Accessibility is about inclusion. Ensure your images have alt text, your videos have captions, and your website colors have enough contrast for users with visual impairments. If a portion of your audience can’t interact with your site, you aren’t fulfilling your mission.
The ‘set it and forget it’ mentality
I often tell businesses to treat websites like any other business asset, and the same applies to nonprofits. Digital ecosystems require maintenance.
Links break, plugins need updates, and search algorithms change. A quarterly “digital audit” to check your site speed, broken elements, and SEO health is essential for long-term visibility.
Turning your digital ecosystem into a mission multiplier
A successful digital presence is built on the same principles as a successful mission: consistency, transparency, and clear communication. By owning your assets, planning your content, and grounding your decisions in data, you ensure that your digital ecosystem serves as a force multiplier for the people you’re trying to help.
Xbox Game Pass rounds out March 2026 with major additions including Resident Evil 7, Disco Elysium, and Like a Dragon: Infinite Wealth, alongside several smaller and returning titles across multiple tiers.
Microsoft is stripping Copilot Chat from Office sidebars for non-licensed users on April 15, 2026, rebranding the included version as "Copilot Chat (Basic)."
Dell has given its Alienware gaming laptop lineup a boost with Intel's newest Core Ultra 200HX Plus series processors, which are part of the Arrow Lake-HX Refresh. This update comes after Intel rolled out its new mobile chips, including the Core Ultra 9 290HX Plus and Core Ultra 7 270HX Plus. These chips are designed to handle demanding gaming and workstation tasks, and they come with extra features like the Intel Binary Optimization Tool. The updated Dell lineup covers 18-inch and 16-inch laptops from the Alienware Area-51 series and Alienware 16X Aurora. The 18-inch model continues to target maximum performance, now configurable with up to the Core Ultra 9 290HX Plus, a 24-core chip with boost clocks reaching 5.5 GHz, while the Core Ultra 7 270HX Plus offers a 20-core option with boost up to 5.3 GHz.
The 16-inch models bring more notable changes. As previously announced in January at CES 2026, both the Alienware 16 Area-51 and 16X Aurora now feature anti-glare OLED panels, keeping the 2560 × 1600 resolution and 240 Hz refresh rate, but improving response time to 0.2 ms and increasing peak brightness to 620 nits, up from 500 nits on previous LCD configurations. The Alienware 16X Aurora also gets a GPU upgrade, now configurable up to an NVIDIA GeForce RTX 5070 Ti, replacing the previous RTX 5070. Memory support stays the same, with the 16X Aurora maxing out at DDR5-5600, while storage choices go from 1 TB to 4 TB, with PCIe 4.0 support on some setups.
"The inference inflection has arrived," Huang said during the keynote, framing the next stage of the AI boom around systems designed not just to train models but to run them at massive scale.
According to a detailed account posted by Reddit user Southern_Chest_9084, the incident occurred on September 24, 2025, while he was working on his laptop. The user reported feeling "an extreme surge of heat" under the device, which left a blistered, watch-shaped burn. The injury, described as painful and slow to...
Nvidia announced BlueField-4 STX at GTC 2026 on March 16, a modular reference architecture for accelerated storage designed to address the data access bottleneck limiting agentic AI inference.
Best Buy has launched a huge Tech Fest Sale, and as TechRadar's deals editor, I've spent hours shopping the sale to pick out the best offers worth buying.
The VPN giant is expanding its efforts to provide journalists, human rights defenders, and NGOs with vital digital security tools to bypass increased censorship, digital surveillance, and cyberattacks.
Earbuds, wireless over-ears, retro on-ears, boombox-style speakers, a turntable, and a ‘micro hi-fi system’ — is there any audio product Philips isn’t releasing in 2026?
We’ve signed an agreement with AMP to enable 200,000 tons of CO2 removal by 2030 and explore how their approach mitigates methane, a superpollutant that is 80 times more…
If you’re a content strategist, you might feel this isn’t your territory. Keep reading, because it is. Everything you build feeds these five gates, and the decisions the algorithms make here determine whether the system recruits your content, trusts it enough to display it, and recommends it to the person who just asked for exactly what you sell.
The DSCRI infrastructure phase covers the first five gates: discovery through indexing. DSCRI is a sequence of absolute tests where the system either has your content or it doesn’t, and every failure degrades the content the competitive phase inherits.
The competitive phase, ARGDW (annotation through won), is a sequence of relative tests. Your content doesn’t just need to pass. It needs to beat the alternatives. A page that is perfectly indexed but poorly annotated can lose to a competitor whose content the system understands more confidently.
A brand that is annotated but never recruited into the system’s knowledge structures can lose to one that appears in all three graphs. The infrastructure phase is absolute: pass, stall, or degrade. The competitive phase is Darwinian “survival of the fittest.”
The DSCRI infrastructure phase determines whether your content even gets this far. The ARGDW competitive phase determines whether assistive engines use it.
Up until today, the industry has generally compressed these five distinct processes into two words: “rank and display.” That compression muddied visibility into several separate competitive mechanisms. Understanding and optimizing for all five will make all the difference in the world.
The competitive turn: Where absolute tests become relative ones
The transition from DSCRI to ARGDW is the most significant moment in the pipeline. I call it the competitive turn.
In the infrastructure phase, every gate is zero-sum: does the system have this content or not? Your competitors face the same test, and you both pass or fail. But the quality of what survives rendering and conversion fidelity creates differences that carry forward.
The differentiation through the DSCRI infrastructure gates is raw material quality, pure and simple, and you have an advantage in the ARGDW phase when better raw material enters that competition.
At the competitive turn, the questions change. The system stops asking “Do I have this?” and starts asking “Is this better than the alternatives?”
Every gate from annotation forward is a comparison. Your confidence score matters only relative to the confidence scores of every other piece of content the system has collected on the same topic, for the same query, serving the same intent.
You’ve done everything within your power to get your content fully intact. From here, the engine puts you toe to toe with your competitors.
Multi-graph presence as structural advantage in ARGD(W)
The algorithmic trinity — search engines, knowledge graphs, and LLMs — operates across four of the five competitive gates: annotation, recruitment, grounding, and display. Won is the outcome produced by those four gates. Presence in all three graphs creates a compounding advantage across ARGD, and that vastly increases your chances of being the brand that wins.
The systems cross-reference across graphs constantly. An entity that exists in the entity graph with confirmed attributes, has supporting content in the document graph, and appears in the concept graph’s association patterns receives higher confidence at every downstream gate than an entity present in only one.
This is competitive math. If your competitor has document graph presence (they rank in search), but no entity graph presence (no knowledge panel, no structured entity data), and you have both, the system treats your content with higher confidence at grounding because it can verify your claims against structured facts. The competitor’s content can only be verified against other documents, which is a higher-fuzz verification path — more interpretation, more ambiguity, lower confidence.
For me, this is where the three-dimensional approach comes into its own, and single-graph thinking becomes a structural liability. “SEO” optimizes for the document graph. Entity optimization (structured data, knowledge panel, and entity home) optimizes for the entity graph.
Consistent, well-structured copywriting across authoritative platforms optimizes for concept graph. Most brands invest heavily in one (perhaps two) and ignore the others. The brands that win at the competitive gates are stronger than their competitors in all three at every gate in ARGD(W).
Annotation: The gate that decides what your content means across 24+ dimensions
Annotation is something I haven’t heard anyone else (other than Microsoft’s Fabrice Canel) talking about. And yet it’s very clearly the hinge of the entire pipeline. It sits at the boundary between the two phases: the last gate that applies absolute classification, and the first gate that feeds competitive selection. Everything upstream (in DSCRI) prepared the raw material. Everything downstream in ARGDW depends on how accurately the system can classify it.
At the indexing gate, the system stores your content in its proprietary format. Annotation is where the system reads what it stored and decides what it means. The classification operates across at least five categories comprising at least 24 dimensions.
Canel confirmed the principle and confirmed there are (a lot) more dimensions than the ones I’ve mapped. What follows is my reconstruction of the categories I can identify from observed behavior and educated guesses.
“We understand the internet, we provide the richness on top of HTML to a lot, lot, lot of features that are extracted, and we provide annotation in order that other teams are able to retrieve and display and make use of this data.”
“My job stops at writing to this database: writing useful, richly annotated information, and handing it off for the ranking team to do their job.”
So we know that annotation is a “thing,” and that all the other algorithms retrieve the chunks using those annotations.
Annotation classification runs across five types of specialist models operating simultaneously per niche:
One for entity and identity resolution (core identity).
One for relationship extraction and intent routing (selection filters).
One for claim verification (confidence multipliers).
One for structural and dependency scoring (extraction quality).
One for temporal, geographic, and language filtering (gatekeepers).
This five-model architecture is my reconstruction based on observed annotation patterns and confirmed principles. The annotation system is a panel of specialists, and the combined output becomes the scorecard every downstream gate uses to compare your content against your competitors.
Gatekeepers
They determine whether the content enters specific competitive pools at all:
Temporal scope (is this current?).
Geographic scope (where does this apply?).
Language.
Entity resolution (which entity does this content belong to?).
Fail a gatekeeper, and the content is excluded from entire query classes regardless of quality.
Core identity
This classifies the content’s substance: entities present, attributes, relationships between entities, and sentiment.
For example, a page about “Jason Barnard” that the system classifies as being about a different Jason Barnard has perfect infrastructure and broken annotation. The content was there, and the system read it, but filed it in the wrong drawer.
Selection filters
They add query routing: intent category, expertise level, claim structure, and actionability.
For example, content classified as informational never surfaces for transactional queries, regardless of how well it performs on every other dimension.
Extraction quality
Think:
Sufficiency (does this chunk contain enough to be useful?)
Dependency (does it rely on other chunks to make sense?)
Standalone score (can it be extracted and still work?)
Entity salience (how central is the focus entity?)
Entity role (is the entity the subject, the object, or a peripheral mention?)
Weak chunks get discarded before competition begins.
Confidence multipliers
These determine how much the system trusts its own classification: verifiability, provenance, corroboration count, specificity, evidence type, controversy level, consensus alignment, and more.
Two pieces of content can be classified identically on every other dimension and still receive wildly different confidence scores based on how verifiable and corroborated their claims are.
An important aside on confidence
Confidence is a multiplier that determines whether systems have the “courage” to use a piece of content for anything.
Once upon a time, content was king. Then, a few years ago, context took over in many people’s minds.
Confidence is the single most important factor in SEO and AAO, and always has been — we just didn’t see it.
To retain their users, search and assistive engines must provide the most helpful results possible. Give them a piece of content that, from a content and context perspective, appears to be super relevant and helpful, but they have absolutely no confidence in it for one reason or another, and they likely will not use it for fear of providing a terrible user experience.
What happens when annotation fails you (silently)
Annotation failures are the most dangerous failures in the pipeline because they are invisible. The content is indexed. But if the system misclassifies it, every competitive decision downstream inherits that misclassification.
I’ve watched this pattern repeatedly in our database: a page is indexed, it appears in search results, and yet the entity still gets misrepresented in AI responses.
Imagine this: A passage/chunk from your website is in the index, but confidence has degraded through the DSCRI part of the pipeline, and the annotation stage has received a degraded version.
The structural issues at the rendering and indexing gates didn’t prevent indexing, but they were degraded versions of the original content. That degradation makes the annotation less accurate, less complete, and less confident. That annotative weakness will propagate through every competitive gate that follows in ARGDW.
When your content is included in grounding or display, and it’s suboptimally annotated, your content is underperforming. You can always improve annotation.
Measuring annotation quality in ARGDW
Annotation quality is the most important gate in the AI engine pipeline, but unfortunately, you can’t measure annotation quality directly. Every metric available to you is an indirect downstream effect.
The KPIs I suggest below are signals that clearly show where your content cleared indexing and failed annotation: the engine found the page, rendered it, indexed it, and then drew the wrong conclusions from it.
That distinction matters: beware of “we need more content” when the real problem is “the engine misread the content we have.”
Your brand SERP tells you exactly what the algorithm understood
These signals reveal how accurately the AI has understood who you are, what you do, and who you serve. The brand SERP (and AI résumé) is a readout of the algorithm’s model of your brand and, because it is updated continuously, makes it a great KPI.
AI describes your brand using a competitor’s framing or category language.
Entity type is misclassified (person treated as organization, product treated as service).
AI can’t answer basic factual questions about your brand and offers without hedging.
If the algorithm can’t place you in a competitive set, it won’t recommend you
These signals reveal which entities the system considers comparable — a direct readout of how annotation classified them. Annotation places entities into competitive pools, and if your brand doesn’t appear in comparison sets where it belongs, the engine classified it outside that pool. Better content won’t fix that. Improving the algorithm’s ability to accurately, verbosely, and confidently annotate your content will.
Absent from “best [product] for [use case]” results where you qualify.
Absent from “alternatives to [competitor]” results.
Absent from “[brand A] vs. [brand B]” comparisons for your category.
Named in comparisons but with incorrect differentiators or misattributed features.
Consistently ranked below competitors with weaker real-world authority signals.
For me, that last one is the most telling. Weaker brand, higher placement.
Once again, what you’re saying isn’t the problem, how you’re saying it and how you “package” it for the bots and algorithms is the problem.
If the algorithm can’t surface you unprompted, you’re invisible at the moment of intent
These signals reveal whether the AI can place your brand at the point of discovery, before the user knows you exist. Clearing indexing means the engine has the content. Failing here means annotation didn’t connect that content to the broad topic signals that drive assistive recommendations.
The difference between a brand that appears in “how do I solve [problem]” answers and one that doesn’t is whether annotation connected the content to the intent.
Absent from “how do I solve [problem your product solves]” answers, even as a passing mention.
Not surfaced when the AI explains a concept you coined or own.
Absent from AI-generated roundups, guides, and “where to start” responses for your core topic.
Named as a generic example rather than a recommended solution.
The AI discusses your subject area at length and doesn’t name you as a practitioner or source.
Entity present in the knowledge graph but invisible in discovery queries on AI platforms.
The three taxes you’re paying with sub-optimal annotation
Three revenue consequences follow from annotation failure, one at each layer of the funnel.
The doubt tax is what you pay at BoFu when a buyer reaches your brand in the engine and the AI presents a confused, incomplete, or misframed version of what you offer.
The ghost tax is what you pay at MoFu when you belong in the consideration set and the algorithm doesn’t prominently include you.
The invisibility tax is what you pay at ToFu when the audience doesn’t know to look for you and the algorithm doesn’t introduce you.
Each tax is a direct read of how well annotation worked — or didn’t.
For you as an SEO/AAO expert, you can diagnose your approach to reduce these three taxes for your client or company as:
BoFu failures point to entity-level misunderstanding.
MoFu failures point to competitive cohort misclassification.
ToFu failures point to topic-authority disconnection.
Annotation should be your focus. My bet is that for the vast majority of brands, the gate in the pipeline with the biggest payback will be annotation. 99% of the time, my advice to you is going to be “get started on fixing that before you touch anything else.”
For the full classification model in academic depth, see:
Recruitment: The universal checkpoint where competition becomes explicit
Recruitment is where the system uses your content for the first time. Every piece of content the system has annotated now competes for inclusion in the system’s active knowledge structures, and this is where head-to-head competition begins.
Every entry mode in the pipeline — whether content arrived by crawl, by push, by structured feed, by MCP, or by ambient accumulation — must pass through recruitment. No content reaches a person without being recruited first. We could call recruitment “the universal checkpoint.”
The critical structural fact: it recruits into three distinct graphs, each with different selection criteria, different confidence thresholds, and different refresh cycles. The three-graph model is my reconstruction.
The underlying principle (multiple knowledge structures with different characteristics) is confirmed by observing behavior across the algorithmic trinity through the data we collect (25 billion datapoints covering Google’s Knowledge Graph, brand search results, and LLM outputs).
The entity graph stores structured facts with low fuzz — who is this entity, what are its attributes, how does it relate to other entities, binary edges — and knowledge graph presence is entity graph recruitment, with entity salience, structural clarity, source authority, and factual consistency as the selection criteria.
The document graph handles content with medium fuzz — passages and pages and chunks the system has annotated and assessed as worth retaining — where search engine ranking is the visible output, and relevance to anticipated queries, content quality signals, freshness, and diversity requirements drive selection.
The concept graph operates at a different level entirely, storing inferred relationships with high fuzz — topical associations, expertise patterns, semantic connections that emerge from cross-referencing multiple sources — with LLM training data selection as the mechanism and corroboration patterns as the primary selection criterion.
The same content may be recruited by one, two, or all three graphs. Each graph has its own speed of ingestion and its own speed of output. I call these the three speeds, a pattern I formulated explicitly this year but have been observing empirically across 10 years of brand SERP experiments:
Search results are daily to weekly.
Knowledge graph updates are monthly.
LLM updates are currently several months (when they choose to manually refresh the training data).
Grounding: Where the system checks its own work in real time
Recruitment stored your content in the system’s three knowledge structures. Grounding is where the system checks whether it should trust your content, right now, for this specific query.
Search engines retrieve from their own index. Knowledge graphs serve stored structured facts. Neither needs grounding. Only LLMs have the (huge) gap between stale training data and fresh reality that makes grounding necessary.
The need for grounding will gradually disappear as the three technologies of the algorithmic trinity converge and work together natively in real time.
In an assistive Engine, the LLM is the lead actor. When the user asks a question or seeks a solution to a problem, the LLM assesses its confidence in its own answer.
If confidence is sufficient, it responds from embedded knowledge. If confidence is low, it sends cascading queries to the search index, retrieves results, dispatches bots to scrape selected pages, and synthesizes an answer from the fresh evidence (Perplexity is the easiest example to see this in action — an LLM that summarizes search results).
But that’s too simplistic. The three grounding sources model that follows is my reconstruction of how this lifecycle operates across the algorithmic trinity.
The search engine grounding the industry currently focuses on is this: the LLM queries the web index, retrieves documents, and extracts the answer. That’s high fuzz.
Now add this: Knowledge graph allows a simple, quick, and cheap lookup: low fuzz, binary edges, no interpretation required, and our data shows that Google does this already for entity-level queries.
My bet is that specialist SLM grounding is emerging as a third source. We know that once enough consistent data about a topic crosses a cost threshold, the system builds a small language model specialized for that niche, and that model becomes a domain-expert verifier. It would be foolish not to use that as a third grounding base.
The competitive implication is huge. A brand with entity graph presence gives the system a low-fuzz grounding path. A brand without it forces the system onto the high-fuzz path (document retrieval), which means more interpretation, more ambiguity, and lower confidence in the result. The competitor with structured entity data gets verified faster and more accurately.
In short, focus on entity optimization because knowledge graphs are the cheapest, fastest, and most reliable grounding for all the engines.
Display: Where machine confidence meets the person
Your content has been annotated, recruited into its knowledge structures, and verified through grounding. Display is where the AI assistive engine decides what to show the person (and, looking to the future that is already happening, where the AI assistive Agent decides what to act upon).
Display is three simultaneous decisions: format (how to present), placement (where in the response), and prominence (how much emphasis). A brand can be annotated, recruited, and grounded with high confidence and still lose at display because the system chose a different format, placed the competitor more prominently, or decided the query deserved a different type of answer entirely.
This is essentially the same thing as Bing’s Whole Page Algorithm. Gary Illyes jokingly called Google’s whole page algorithm “the magic mixer.” Nathan Chalmers, PM for the whole page algorithm at Bing, explained how that works on my podcast in 2020. Don’t make the mistake of thinking this is out of date — it isn’t. The principles are even more relevant than ever.
UCD activates at display
You may have heard or read me talking obsessively about understandability, credibility, and deliverability. UCD is absolutely fundamental because it is the internal structure of display: the vertical dimension that makes this gate three-dimensional.
The same content, grounded with the same confidence, presents differently depending on who is asking and why.
A person arriving with high trust — they searched your brand name, they already know you — experiences display at the understandability layer, where the engine acts as a trusted partner confirming what they already believe, which is BOFU.
A person evaluating options — they asked “best [category] for [use case]” — experiences display at the credibility layer, where the engine presents evidence for and against as a recommender, which is MOFU.
A person encountering your brand for the first time — a broad topical question in which your name appears — experiences it at the deliverability layer, where the system introduces you, which is TOFU.
The user interaction reveals the funnel position. The funnel position determines which UCD layer fires.
This is why optimizing only for “ranking” misses reality: Display is a context-sensitive presentation, not a list, and the same piece of content can introduce, validate, or confirm depending on who asked.
The framing gap at display
The system presents what it understood, verified, and deemed relevant. The gap between that and your intended positioning is the framing gap, and it operates differently at each funnel stage.
At TOFU, the gap is cognitive: the system may know you exist, but doesn’t associate you with the right topics.
At MOFU, the gap is imaginative: the system needs a frame to differentiate your proof from the competitor’s, and most brands supply claims without frames.
At BOFU, the gap is about relevance: the system cross-references your claims against structured evidence, and either confirms or hedges.
After annotation, framing is the single most important part of the SEO/AAO puzzle, so I’ll talk a lot about both in the coming articles.
Won: The zero-sum moment where one brand wins and every competitor loses
Everything I’ve explained so far in this series collapses into a zero-sum point at the “won” gate. Here, the outcome is binary. The person (or agent) acts, or they don’t. One brand converts, and every competitor loses.
The system may have mentioned others at display, but at the moment of commitment, there can only be one winner for the transaction.
Three won resolutions in the competitive context
Won always resolves through three distinct mechanisms, each with different competitive dynamics.
Resolution 1: Imperfect click
The AI influences the person’s thinking at grounding and display, but the person decides independently: they choose one of several options offered by the engine, they walk into the store, or they book by phone.
This is what Google called the “zero moment of truth,” where the competitive battle happens at display, where the engine has influenced the human, but the active choice the person makes is still very much “them.”
Resolution 2: Perfect click
The AI recommends one brand and the person takes it. This is the natural next step, what I call the zero-sum moment.
This fires inside the AI interface, where the engine filtered for intent, context, and readiness, presented one answer, and the person converted.
Resolution 3: Agential click
The AI agent acts autonomously on the person’s behalf. No person at the decision point, an API settlement between the buyer’s agent, and the brand’s action endpoint.
The competitive battle happened entirely within the engine: whichever brand had the highest accumulated confidence, the strongest grounding evidence, and a functional transaction endpoint is the winner. The person doesn’t choose. The system chooses for them.
The trajectory runs from oldest to newest: Resolution 1 was dominant up to late 2025, Resolution 2 is taking over, and Resolution 3 gained a lot of traction early 2026. Stripe and Cloudflare are laying the transaction and identity rails. Visa and Mastercard are building the financial authorization infrastructure.
Anthropic’s MCP is providing the coordination layer. Google’s UCP and A2A are defining how agents communicate across the full consumer commerce journey. Apple has the closed-loop infrastructure to make it seamless on a billion devices the moment they choose to.
Microsoft is locking in the enterprise and government layer through Copilot in a way that will be extremely difficult to displace. No single company turns Resolution 3 on — but all of them together make it inevitable.
Competitive escalation across the five ARGDW gates
The competitive intensity increases at every gate — a progressive narrowing, a Darwinian funnel where the field shrinks at each stage. The narrowing pattern is my model based on observed outcomes across our database. The underlying principle (competitive selection intensifies downstream) is structural to any sequential gating system.
The field is large at annotation, where the algorithms create scorecards and your classification versus competitors’ determines downstream positioning.
Recruitment sets the qualifying round: multiple brands enter the system’s knowledge structures, but not all, and the selection criteria already favor multi-graph presence.
Grounding narrows the shortlist as confidence requirements tighten — the system verifies the candidates worth checking, not everyone.
Display reduces to finalists, often one primary recommendation with supporting alternatives.
Won is the binary outcome. The zero-sum moment you’re either welcoming with open arms or fearful of.
ARGDW: Relative tests. The scoreboard is on.
Five gates. Five relative tests. Competitive failures in ARGDW are significantly harder to diagnose than infrastructure failures in DSCRI because the fix is competitive positioning rather than technical.
Annotation failures mean the system misclassified what your content is or who it belongs to — write for entity clarity, structure claims with explicit evidence, and use schema markup to declare rather than expect the system to guess.
Recruitment failures increasingly mean you’re present in one graph while competitors have two or three — build entity graph presence (structured data, knowledge panel, entity home), document graph presence (content quality, topical coverage), and concept graph presence (consistent publishing across authoritative platforms) as a coordinated program.
Grounding failures mean the system is verifying you on the high-fuzz path — provide structured entity data for low-fuzz verification, and MCP endpoints if you need real-time grounding without the search step.
Display failures mean the framing gap is costing you at the three layers of the visible gate — assuming you fixed all the upstream issues, then closing that framing gap at every UCD layer is your pathway to gain visibility in AI engines.
Won failures mean the resolution mechanism doesn’t exist — Resolution 1 requires that you rank (good enough up to 2024), Resolution 2 requires that you dominate your market (good enough in 2026), and Resolution 3 requires a mandate framework and action endpoint (needed for 2027 onward).
After establishing the 10-gate AI engine pipeline, what’s next?
The aim of this series of articles is to give you the playbook for the DSCRI infrastructure phase and the strategy for the ARGDW competitive phase. This 10-gate AI engine pipeline breaks optimizing for assistive engines and agents into manageable chunks.
Each gate is manageable on its own. And the relative importance of each gate is now clear for you (I hope). In the remainder of this series of articles, I’ll provide solutions to the major issues at each gate that will help you manage each individually (and as part of the collective whole).
Aside: The feedback I have had from Microsoft on this series so far (thank you, Navah Hopkins) reminded me of something Chalmers said to me about Darwinism in Search back in 2020.
My explanations are often more absolute and mechanical than the reality. That’s a very fair point. But then reality is unmanageably nuanced, and nuance leads to a lack of clarity and often paralyzes people to the extent that they struggle to identify actionable next steps. I want to be useful.
I suggest we take this evolution from SEO to AAO step by step. Over the last 10+ years, I’ve always done my very best to avoid saying “it depends.”
People often say it takes 10,000 hours to become an expert. The framework presented here comes from tens of thousands of hours analyzing data, experimenting, working with the engineers who build these systems, and developing algorithms, infrastructure, and KPIs.
The aim is simple: reduce the number of frustrating “it depends” answers and provide a clear outline for identifying actionable next steps.
This is the fifth piece in my AI authority series.
Search strategy once meant ranking on Google. We optimized websites and invested heavily in organic visibility. Entire marketing strategies were built around capturing demand from Google search results.
But search behavior doesn’t live on a single platform. Today, people search on TikTok for recommendations, YouTube for tutorials, Reddit for honest opinions, and Amazon for product validation.
Search behavior now spans a much wider set of platforms, creating one of the most overlooked opportunities in digital marketing.
The findings reinforce something many marketers are beginning to notice. Search is no longer confined to traditional search engines.
While Google still dominates search activity, a growing share of discovery now happens across a wider collection of platforms — a search universe, if you will.
The research suggests search activity is roughly distributed as follows:
Traditional search engines: ~80% of searches, with Google alone at ~73.7%
Commerce platforms (Amazon, Walmart, eBay): ~10%
Social networks: ~5.5%
AI tools (ChatGPT, Claude, etc.): ~3.2%
Consumers search directly on platforms where they expect to find the most useful answers, in the formats they prefer, rather than relying on Google to send them elsewhere.
The industry is focused on AI and missing the bigger mainstream shift
Much of the search industry conversation today is focused on AI. Questions like:
How do I rank in ChatGPT?
How do I optimize for AI search?
Will AI replace Google?
They’re constantly being posed, debated, and answered by SEO professionals on platforms like Search Engine Land.
I want to be clear, these are important questions. But the data within this study tells a more grounded story, especially when thinking about strategy over the next 12 months.
AI search tools currently account for roughly 3.2% of search activity, per SparkToro research. That’s meaningful. It will almost certainly reshape how people search and discover information in the future.
But today, AI search is still smaller than many established discovery platforms with far broader adoption. For example:
Amazon receives more searches than ChatGPT.
YouTube receives more searches than ChatGPT.
Even Bing receives more search activity.
Yet many brands are pouring disproportionate attention into AI visibility while overlooking platforms where millions of searches are already happening every day.
Social platforms are now search engines
For many users, social platforms are now core search destinations. People look to:
TikTok for recommendations, restaurants, travel ideas, and products.
YouTube for tutorials, reviews, and problem-solving.
Reddit for honest discussions and community opinions.
Pinterest for inspiration and visual discovery.
Each platform plays a different role in the discovery journey.
Platform
What people search for
TikTok/Instagram
Discovery and recommendations
YouTube
Learning, tutorials, and reviews
Reddit
Real opinions and community discussions
Pinterest
Inspiration and planning
These platforms are more than entertainment destinations. Users head to them with real intent to find a solution to a problem, need, or desire.
Social content is now appearing directly in Google results
As users adopt social platforms for search, Google has begun aggregating and organizing information right within its SERPs. So yes, social and creator content appears directly inside Google search results.
Over the past year, Google has significantly expanded how it surfaces social content within SERPs. Search results now frequently include TikTok videos, YouTube Shorts, Reddit threads, Instagram posts, and forum discussions.
Google even partnered with platforms like Reddit to ensure that community discussions appear more prominently in search results. This means social content can now influence discovery in multiple ways:
Social platforms are also important sources for AI-generated answers. AI systems rely on content that reflects real-world experiences, discussions, and opinions.
That’s why platforms such as Reddit, YouTube, Quora, forums, and creator-led content (i.e., Instagram, TikTok, and YouTube Shorts) are frequently cited in AI-generated responses.
Google’s AI Overviews often reference Reddit threads and YouTube videos.
Other AI tools regularly draw insights from community discussions, reviews, and creator content when generating answers.
This means content created for social discovery can influence visibility across multiple layers of search, including social platforms, Google search results, and AI-generated responses.
A single piece of content can now travel much further across the universe, consistently providing signals to audiences, developing a preference toward one brand over another.
The compounding discoverability effect
When brands invest in social search visibility, they unlock a powerful compounding effect. For example, a useful YouTube tutorial could:
Rank in YouTube search.
Appear in Google search results.
Be referenced in AI-generated answers.
Be shared across social platforms.
Spread through private messaging and dark social channels.
Unlike traditional website content, social content can move across platforms, dramatically expanding its reach. This creates an entirely new layer of discoverability.
And at a time when marketing budgets are under increasing scrutiny, the ability for content to generate visibility across multiple platforms makes the ROI of content strategies far more compelling.
Despite these shifts, most search strategies still revolve around Google SEO, paid search, website content, and AI/LLM interfaces.
Few brands have structured strategies for TikTok search optimization, YouTube search visibility, Reddit community engagement, and creator-led discovery strategies.
While Google SEO is incredibly competitive, social search remains relatively under-optimized. Brands that move early can capture visibility (presence) in spaces where demand already exists, thereby developing preference for their brand.
When brands invest in social search visibility, they aren’t just unlocking the 5.5% of searches happening directly on social platforms. They’re also influencing traditional search results, AI-generated answers, and wider discovery across the web.
Search everywhere: A new model for discoverability
Search is more than a channel. It’s a behavior that happens across a developing and evolving search universe.
Your audience searches wherever they believe they’ll find the best answer in the most useful format — whether that’s Google, TikTok, YouTube, Reddit, Amazon, Pinterest, or increasingly, AI interfaces.
Winning search today means being discoverable wherever those searches happen. The brands that win won’t be the ones that rank in just one place, even as traditional SEO remains an important part of the mix. They’ll be the ones that are discoverable wherever their audience searches.
That is the future of search. That is “search everywhere.”
Nintendo has transformed Switch 2 handheld gaming with “handheld mode boost” With its newest firmware update for the Switch 2 console, Nintendo has added a new “Handheld Mode Boost” function to its system. When using it, Switch 1 software can be operated in “TV mode” on the Nintendo Switch 2 console in handheld mode. This […]
Microsoft finally released a firmware fix for the Surface Pro 11 for Business, resolving "hover-inking" and touch accuracy issues that plagued the Intel model for a year.
Corsair, a leading maker of performance gaming peripherals, announced the release of the 99% form factor, low-profile VANGUARD AIR 99 WIRELESS Optical-Mechanical Gaming Keyboard. Equipped with low-profile Corsair OPX optical switches, 8,000 Hz hyper-polling, FlashTap SOCD handling, versatile tri-mode connectivity, and an aluminium frame, it's built on a rock-solid foundation of gaming performance.
Premium gasket mounting, five layers of sound dampening, and a brilliant, integrated LCD screen solidify the VANGUARD AIR 99 WIRELESS as a formidable piece of competitive gaming gear that excels in all aspects of daily life. One of our thinnest keyboards ever made, it measures just a scant 18 mm, perfectly designed for modern aesthetic sensibilities. Equipped with Elgato Stream Deck integration and programmable SD-keys, it streamlines daily workflows into single button presses. VANGUARD AIR 99 WIRELESS is an elegant solution that effortlessly delivers high-performance gaming needs and optimizes productivity tasks.
The Austrian fan manufacturer shared a photo of what appears to be the exterior of a PC chassis, showing the Noctua logo next to several I/O ports. The company also shared a few details about its upcoming product in its replies to commentators.
The dispute itself was unremarkable. The Insolvency and Companies Court was hearing a claim brought by Lithuanian firm UAB Business Enterprise and Laimonas Jakštys over who owned and controlled Oneta Limited. Jakštys was seeking a declaration that he and UAB Business Enterprise owned Oneta, rectification of the company's register, and...
Another fantastic Newegg combo deal has combined the eight-core AMD Ryzen 7 9850X3D with 32GB of Corsair Vengeance DDR5-6000 RAM and an Asus ROG Strix X870E-E motherboard for just $1,019.99, making the RAM effectively just $111 in this build.
ValidDraft verifies human authorship by capturing your real drafting behavior and turning it into auditable, tamper-proof certificates. It analyzes revision patterns, timing, cursor movements, and optional video presence to produce a clear humanity score and verification status.
Use it to protect bylines, uphold academic integrity, and meet compliance needs. Detect pasted blobs and impossible patterns, keep your process private, and share verifiable proof of authorship with newsrooms, universities, and publishers.
Pixelle is an AI-powered visual toolkit for indie developers and creators. It generates consistent app icons, marketing graphics, and App Store screens, guided by project-wide brand colors and design rules for a cohesive look. Export assets in one click for iOS, Android, web favicons, macOS .icns, and Windows .ico, with localization to 20+ languages. Start with 5 free generations, then pay $0.09 per image—no subscriptions.
A majority of security leaders are struggling to defend AI systems with tools and skills that are not fit for the challenge, according to the AI and Adversarial Testing Benchmark Report 2026 from Pentera.
The report, based on a survey of 300 US CISOs and senior security leaders, examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and
CD Projekt Red has announced that Cyberpunk 2077 will soon receive Sony's updated PSSR (PlayStation Spectral Super Resolution) upscaling technology on PS5 Pro.
Sony has released a new PlayStation 5 software patch that introduces support for its upgraded version of PSSR (PlayStation Spectral Super Resolution) upscaling technology across a wide range of games.
Our 10 comics might not be able to even giggle, but we'll be laughing our heads off at home. But what time is Last One Laughing UK season 2 episodes 1-3 on Prime Video?
The company's new product, called Gamma Imagine, will let users employ text prompts to create brand-specific assets like interactive charts and visualizations, marketing collateral, social graphics, and infographics.
The e-commerce giant is making more than 90,000 items available via this new delivery system. If an item can be delivered to a user within one or three hours, they'll see a label saying so next to that item on the Amazon app.
YouTube is a Preferred Platform for the FIFA World Cup 2026. People can watch match highlights, historical games and behind-the-scenes content from creators and official…
Google is expanding capabilities in Google Ads Editor to give advertisers more creative flexibility, automation control, and budget precision — especially as AI-driven campaign types continue to evolve.
What’s new. The 2.12 release introduces a wide set of updates across Performance Max, Demand Gen, and video campaigns, with a clear focus on scaling creative assets and improving workflow efficiency.
Creative expansion. Performance Max campaigns now support up to 15 videos per asset group, allowing advertisers to feed more variations into Google’s AI for testing. The addition of 9:16 vertical images also reflects growing demand for mobile-first formats, particularly across surfaces like short-form video.
Campaign upgrades. Demand Gen campaigns get several enhancements, including new customer acquisition goals, brand guideline controls, and hotel feed integrations. A new minimum daily budget and a streamlined campaign build flow aim to improve stability and setup.
Video & AI control. Updates to non-skippable video formats and real-time bid guidance give advertisers more control over performance, while new text and brand guidelines help ensure AI-generated assets stay on-brand and compliant.
Budgeting shift. A new total campaign budget feature allows advertisers to set a fixed spend across a defined period — ideal for promotions or seasonal bursts — with Google automatically pacing delivery.
Workflow improvements. Account-level tracking templates, better visibility into Final URL expansion performance, clearer campaign status filters, and bulk link replacement tools are designed to reduce manual work and improve account management at scale.
Why we care. This update to Google Ads Editor gives them more creative flexibility and control over AI-driven campaigns, especially in Performance Max and Demand Gen. Features like increased video limits, vertical assets, and total campaign budgets help you test more, scale faster, and manage spend more efficiently.
It also improves workflows and brand safeguards, making it easier to guide automation while maintaining consistency and performance across Google Ads.
Between the lines. The update continues a broader trend: as automation increases, Google is giving advertisers more ways to guide AI rather than manually control every input.
The bottom line. Google Ads Editor 2.12 is less about one standout feature and more about incremental gains across creative, automation, and control — helping advertisers better manage increasingly AI-driven campaigns within Google Ads.
As Google rolls out AI Overviews, AI Mode in Search, and the Gemini ecosystem, we face a growing challenge: what happens when users get answers — and soon complete purchases — without leaving Google’s interfaces?
UCP is designed to help brands to sell to consumers without leaving the Gemini or LLM experience. Consumers can check out within the LLM, add rewards points, and fully execute the transaction. Here’s an example flow:
How Google’s Universal Commerce Protocol works
At its core, UCP standardizes how consumer AI interfaces communicate with merchant checkout systems. When a user tells Gemini, “Find me a highly rated, waterproof hiking boot in size 10 under $200 and buy it,” UCP is the invisible bridge that allows the AI to securely fetch inventory, process the payment, and confirm the order.
While Google’s developer documentation leans into technical jargon like “Model Context Protocol (MCP)” and “Agent2Agent (A2A) interoperability,” the implications are remarkably straightforward:
It uses your existing feeds: UCP plugs directly into your existing Google Merchant Center (GMC) shopping feeds. The inventory data you’re already managing for your campaigns is the same data that will power these AI transactions.
You keep the data: Unlike selling on some third-party marketplaces, where you lose the customer relationship, UCP ensures you remain the merchant of record. You process the transaction, you own the first-party customer data, and you control the post-purchase experience.
Frictionless checkout: By enabling checkouts directly within Google’s AI ecosystem, UCP can reduce cart abandonment and increase conversion rates among high-intent shoppers.
Like many LLM optimization recommendations, these steps come down to the fundamentals of managing your shopping feed and Merchant Center account.
Google outlined a few best practices. If you follow these four steps, you’ll be well-positioned for success.
1. Master your feed data hygiene
In an agentic commerce environment, your product feed is your primary sales tool. To ensure the AI accurately matches your products to highly specific user queries, you need to enrich your feed with granular details.
Write product titles that are 30 or more characters long.
Expand product descriptions to 500 or more characters.
Include Global Trade Item Numbers (GTINs), where relevant, to ensure accurate product matching.
Include three or more additional images alongside your primary product photo to engage visual shoppers.
Use lifestyle images, not just standard product shots on white backgrounds.
Ensure your image quality meets the standard of 1,500×1,500 pixels.
Categorize your inventory by product type and share key product highlights.
Prepare specific feed attributes required for UCP, such as returns, support information, and policy information.
Support Google’s Native Checkout when possible (checkout logic integrated directly into the AI interface). Google also offers another option called Embedded Checkout (an iframe-based solution for highly bespoke branding). This will work, but is suboptimal at this time.
To set your brand apart when AI is helping consumers make immediate, confident purchasing decisions, you must pass trust and convenience signals directly through your feed. The data shows that these elements directly impact the bottom line:
Indicate clearly if your brand offers free shipping.
Share your shipping speed (next day, two-day, etc.).
Display your return policy.
Submit sale prices when available. Regardless, ensure the feed represents the most accurate pricing details.
The shift to UCP requires foundational updates to how your backend systems interact with Google. You must work hand in hand with their development and SEO teams to prepare for these AI search experiences.
Migrate from the Content API to the Merchant API to enable real-time inventory updates and programmatic access to data and insights.
Upgrade your tag in Data Manager and implement Conversion with Cart Data to effectively use first-party data in your campaigns.
Prioritize content-rich pages for indexing and crawling, and ensure structured data is always supported by visible content.
Create your Business Profile and claim your Brand Profile to highlight your business information and brand voice on Google platforms.
Have your development team explore and prototype with UCP open source on GitHub to map APIs for checkout, session creation, and order management.
4. Additional features and tools beyond UCP to consider
Google is actively rolling out pilot programs designed specifically for the agentic era. Be proactive in adopting these new solutions rather than waiting for wide release:
Prepare for the “Business Agent,” a virtual sales associate that acts like a brand representative to answer product questions right on Google.
Consider the “Direct Offers Pilot,” a new way for advertisers to present exclusive discounts directly in AI Mode.
Inquire about the “Conversational Attributes Pilot,” which introduces dozens of new Merchant Center attributes designed to enhance discovery in the conversational commerce era.
The launch of Google’s Universal Commerce Protocol signals a significant shift. The SERP is becoming a transactional engine that increasingly operates within large language models.
UCP presents a meaningful opportunity. By removing friction between discovery and purchase, conversion rates could increase.
However, taking advantage of this requires stepping outside the Google Ads interface and working directly in your feed data and technical integrations, much like with Google Shopping. While this isn’t new, it’s becoming more important.
Ultimately, this comes down to the quality of your product data.
Noctua’s first “Noctua Edition” PC case has landed It’s official, Noctua has released its first-ever Noctua Edition PC case, shipping with six of Noctua’s NF G2 series fans, a Noctua fan hub, and a custom Noctua colour scheme. After launching a Noctua Edition PSU and several graphics cards, a case was the logical next step. […]
The Turtle Beach Stealth Pro wireless headset for Xbox and PC has received one of the biggest discounts it's had in months, courtesy of Best Buy's Tech Fest event.
Teledyne e2v is pleased to announce the start of full production of its 16 GB DDR4-X1 Flight Model (FM), expanding its portfolio of high-density, radiation-tolerant memory solutions for space applications. The new device is designed to support the growing processing and data storage requirements of AI-enabled satellites, large constellations, broadband Internet-from-Space, Direct-to-Device services, and optical inter-satellite communications. By combining high memory density, radiation resilience, and a compact footprint, the component enables spacecraft to handle increasingly demanding onboard computing workloads.
Initial samples of the 16 GB DDR4-X1 Flight Models were delivered to customers in October 2025, allowing early system integration and evaluation. The device supports data rates up to 2400 MT/s, provides single-event latch-up immunity above 43 MeV·cm²/mg, and offers radiation tolerance up to 35 krad TID, enabling reliable operation in mission-critical space environments.
Industry-leading embedded solutions provider AAEON (Stock Code: 6579) has announced the release of the ATX-Q870A, an ATX industrial motherboard supporting Intel Core Ultra Series 2 Processors (formerly Arrow Lake-S) and up to 256 GB of dual-channel DDR5-6400 system memory. Designed to accommodate up to 125 W CPUs from the new Intel CPU platform, the ATX-Q870A can leverage up to 24 cores and 24 threads of computing power alongside up to 36 TOPS of AI performance through the series' integrated CPU, GPU, and NPU. As such, AAEON has positioned the product as a foundation upon which compute-intensive applications such as industrial automation systems and high-performance workstations can be built.
Despite its impressive processing capabilities, it is the expandability offered by the ATX-Q870A that is most likely to catch attention. Of particular note are the board's two PCIe Gen 5 and five PCIe Gen 4 slots, which allow it to support high-spec GPUs and AI accelerators, NVMe storage, and task-specific modules like serial cards, sensor interfaces, and low-speed NICs simultaneously.
Taiwanese PC case specialist Antec and Austrian quiet cooling expert Noctua have teamed up to create a custom, further refined Noctua Edition of Antec's award-winning Flux Pro chassis. Upgrading the Flux Pro with Noctua's latest NF-A14x25 G2 and NF-A12x25 G2 flagship fans for even better low-noise cooling performance, the new Noctua Edition forms an ideal basis for ultra-quiet, high-end builds.
"Antec has been at the forefront of PC case design for more than two decades, and we're excited to collaborate with such a renowned, iconic manufacturer to introduce the very first Noctua Edition chassis," says Roland Mossig (Noctua CEO). "The Flux Pro has been rightfully praised for its exceptional quiet cooling potential, so it was an obvious candidate for the project. Once we integrated our latest flagship fans and saw how much further we could reduce noise levels while maintaining similar component temperatures, it quickly became clear that this is a worthy Noctua Edition."
ASUS Republic of Gamers (ROG) is proud to launch the 2026 ROG Strix G16 and Strix G18, two incredible gaming laptops featuring the latest hardware and technology to deliver incredible performance to gamers everywhere. Boasting the all-new Intel Core Ultra 9 290HX Plus processor and up to an NVIDIA GeForce RTX 5080 Laptop GPU, these machines are built from the ground up for enthusiast gamers. Both the ROG Strix G16 and G18 feature the latest ROG displays, cooling, and tool-less chassis designs that allow gamers to seamlessly upgrade critical components. Later this year, the flagship ROG Strix SCAR 18 will also be unveiled, delivering flagship performance in an incredibly sleek ROG chassis.
Elite performance for every arena
The 2026 Strix G16 and G18 gaming laptops are engineered for players and creators who demand uncompromising performance across competitive esports titles, visually rich AAA games, and intensive content‑creation workflows. Both models feature the latest Intel Core Ultra 9 290HX Plus processor, delivering exceptional multi‑threaded power and next‑generation AI capabilities.
Intel today announced the launch of its new Intel Core Ultra 200HX Plus series mobile processors, giving gamers and professionals new high-performance options in the Core Ultra 200 series family. Optimized for advanced gaming, streaming, content creation, and workstation use, the Intel Core Ultra 200HX Plus series introduces two new processors - Intel Core Ultra 9 290HX Plus and Intel Core Ultra 7 270HX Plus. These processors add new features and architectural refinements, including support for the new Intel Binary Optimization Tool, a first-of-its-kind binary translation layer optimization capability that can improve native performance in select games.
"With the introduction of the Intel Core Ultra 200HX Plus series, we're pushing mobile computing performance even further for the gamers, creators, and professionals who demand the best. With higher die-to-die frequencies and our new Intel Binary Optimization Tool, the new Intel Core Ultra 9 290HX Plus and Ultra 7 270HX Plus deliver meaningful, real‑world performance gains so users can experience smoother gameplay, faster creation workflows, and more responsive workstation performance", Josh Newman, General Manager and Vice President of Product Marketing, Client Computing Group.
In the past year, interest in four-legged robots capable of autonomous patrols has surged, according to executives at Boston Dynamics and Ghost Robotics, two of the leading developers in the field. Data center operators, facing growing pressure to maintain 24-hour uptime and security across facilities spanning dozens of acres, see...
Have you ever wanted to use a fan that's more than three times as loud as the other option while providing the same performance? If you answered in resounding joy, then this project is exactly what you've been looking for. A YouTuber 3D-printed a fan that's actually made up of 15 tiny fans, fit inside the frame of a regular 120mm fan modelled after the Noctua NF-A12x25.
Taiwan imports almost all of its energy and requires large amounts of LNG to sustain its electrical grid. That grid is then used by local chipmakers — like TSMC who is responsible for making most of the world's high-end chips. Fabrication for these chips requires helium, which Taiwan also imports and right now, the Iran-U.S. conflict has made it difficult to acquire both.
Nvidia shows off its next-generation Kyber rack-scale solution to be powered by Rubin Ultra GPUs with four compute chiplets and 1 TB of HBM4E memory per package.
Maker Will Whang designed the MK4001MTD USB Bridge to facilitate the use of the world’s smallest (0.85-inch) mechanical hard drives, originally released by Toshiba in 2004.
The Seagate FireCuda X1070 is a budget-minded SSD done mostly right. It performs well enough and has excellent support, but initial pricing is high. A Gen 4 SSD that you can just throw into any system and get the job done.
TestSprint 360 delivers AI-driven continuous testing for web, mobile, and APIs so teams ship faster with fewer defects. Its TS360 OmniTest platform streamlines setup, authoring, and execution with natural language test creation, a smart visual flow builder, and secure cloud or local runs across browsers and devices. Integrate with CI/CD pipelines like Jenkins, customize features and localization, and scale regression and in-sprint testing with reliable coverage.
Text Affirmations sends randomly timed text messages to help you build habits, practice gratitude, and stay focused. It starts with a 2-minute quiz, then writes messages based on scientifically vetted frameworks like positive psychology and CBT. You can talk to it to refine the tone and timing, and let the system learn your needs. Or write your own messages to yourself. There’s no app to download, just supportive coaching that meets you where you are.
North Korean threat actors have been observed sending phishing to compromise targets and obtain access to a victim's KakaoTalk desktop application to distribute malicious payloads to certain contacts.
The activity has been attributed by South Korean threat intelligence firm Genians to a hacking group referred to as Konni.
"Initial access was achieved through a spear-phishing email disguised as a
Google DeepMind, Google.org, and Google Skills unite to empower the next generation of AI researchers and educators with a free, high-impact curriculum.
Silicon Motion Technology Corporation, a global leader in designing and marketing NAND flash controllers for solid-state storage devices, today announced that it will showcase a rich portfolio of differentiated enterprise SSD controllers and PCIe NVMe BGA boot SSD at NVIDIA GTC 2026, Booth #3015, purpose-built to meet the evolving requirements of the NVIDIA AI ecosystem. As AI models scale, inference architectures are extending beyond HBM and System DRAM into high-performance NAND storage tiers, as reflected in NVIDIA's ICMS initiative. In this new architecture, NAND-based Storage becomes a performance-critical layer that requires deterministic latency and quality-of-service differentiation.
Silicon Motion delivers a vertically integrated storage solution encompassing advanced SSD controller design, full firmware development, and compact Reference Design Kits (RDKs) aligned with leading enterprise form factors such as E1.S, E1.L, E3.S, E3.L, and U.2 for AI server deployments. The company also provides enterprise PCIe NVMe BGA boot SSD with strong endurance and long-term operational stability, deployed in AI systems.
Logitech G today announced the long-anticipated RS H-Shifter, the latest addition to the renowned Racing Series Ecosystem. Designed to deliver unmatched realism, tactile control, and game-changing durability, this advanced 7+R manual shifter is tailor-made for anyone passionate about authentic racing experiences. For years, racers have yearned for a product that combines modern engineering with the timeless precision of manual control, and Logitech G has delivered. From gripping rally runs in Assetto Corsa Rally to flawless drifting in Forza Horizon, the RS H-Shifter gives racers the tactile realism and precise control needed to dominate the virtual track.
"There's a strong demand from car enthusiasts worldwide for the connection and control that a manual shifter offers," said Richard Neville, Head of Product, Simulation at Logitech G. "The RS H-Shifter's engaging, racing gearbox feel, is engineered to reliably deliver the elevated experience that is expected from Racing Series and PRO products."
Yesterday, NVIDIA unveiled its latest DLSS 5 technology, offering gamers the first real-time neural rendering. However, even after the announcement, many questions arose about what DLSS 5 is capable of and how it will work with games. NVIDIA released a FAQ to address some common inquiries. The primary goal of DLSS 5 is to enhance visual fidelity through various techniques that create scenes with photorealistic lighting and materials. Perhaps the most interesting aspect is that DLSS 5 will honor the original artistic intent by using the game's color and motion vectors for each frame, anchoring the DLSS 5 model to the specific setting. This keeps the output in line with what the game developers originally envisioned for each frame. DLSS 5 will add visual enhancements that help each frame undergo an overhaul.
This overhaul is completed in several steps. The first is cinematic lighting, achieved through complex effect reconstruction for realistic skin glow, shadows, and more. Next is material depth—DLSS 5 applies micro-realism to the surface of any object, such as a rock or a wall, delivering a realistic texture that enhances the game. NVIDIA highlights that its latest DLSS installment offers temporal consistency, meaning the image quality is fine-tuned frame by frame to closely follow the game content, ensuring visual enhancements remain consistent. Interestingly, this technology will work alongside existing techniques like path tracing, where path tracing provides lighting accuracy, and DLSS enhances lighting photorealism. This means path tracing improves overall shadow behavior and reflections, while DLSS 5 makes them as realistic as possible.
Acer today announced refreshed Predator Helios Neo series gaming laptops equipped with the latest Intel Core Ultra 200HX Plus series processors and up to an NVIDIA GeForce RTX 5080 Laptop GPU, bringing the latest performance capabilities to gaming enthusiasts across a range of form factors and display options. The new Intel Core Ultra 200HX Plus series processors power a new class of gaming laptops with significant performance gains over the previous generation. These devices are built for split-second responsiveness, rock-solid FPS, smart tuning, and battery life that keeps players locked in across an extensive selection of games and apps.
Powered by NVIDIA Blackwell, NVIDIA GeForce RTX 50 Series Laptop GPUs bring game-changing capabilities to gamers and creators. Equipped with a massive level of AI horsepower, the RTX 50 Series enables new experiences and next-level graphics fidelity. Multiply performance with NVIDIA DLSS 4.5 and generate images at unprecedented speed.
Giga Computing, a subsidiary of GIGABYTE and a leader in accelerated computing and infrastructure solutions, today announced new enterprise AI solutions that support NVIDIA Vera CPU and NVIDIA Vera Rubin platform, well as a new AI factory in Taiwan. The GIGABYTE booth at NVIDIA GTC shows scalable AI solutions that not only focus on performance and efficiency but also incorporate the software and infrastructure needed to build AI factories and other large GPU clusters. The GIGABYTE booth staff is ready to introduce the latest hardware and software for success in deploying accelerated computing.
Personal AI Supercomputers
With products spanning all segments of the supercomputer space, Giga Computing showcases professional desktop and deskside solutions that are ideal for AI development and accelerating AI training and inference workloads. These solutions are being used by data scientists and researchers in research institutions, government agencies, and enterprises.
As the era of pervasive AI reshapes industries worldwide, ASRock Industrial today announced the AI BOX-A395, a compact yet powerful AI box that brings the performance of an ultimate AI workstation into a single system. Powered by AMD Ryzen AI Max+ 395 processors, delivering up to 50 TOPS of AI acceleration while integrating CPU, GPU, and NPU within a compact system. With support for up to 128 GB LPDDR5x-8000 unified memory, it enables large AI models and data-intensive workloads to run directly on-device, delivering responsive AI processing while reducing reliance on cloud infrastructure.
Designed for enterprises, developers, and system integrators, the AI BOX-A395 supports the AI everywhere ecosystem by translating large-scale AI capabilities into practical local deployment. By combining high compute density, integrated AI acceleration, and rich I/O connectivity, the system provides a scalable foundation for applications ranging from AI model and application development, engineering and 3D design, and high-resolution content creation and media production.
ASUS is proud to announce the launch of the 2026 TUF Gaming A16, F16, and A18 gaming laptops. The TUF Gaming A16 and F16 boast two impressive display options, either a gorgeous 2.5K 120 Hz OLED panel or an ultra-fast 2.5K 300 Hz IPS display. Both models feature advanced anti-reflection coatings on the panel for increased immersion, while the TUF Gaming F16 also comes equipped with the all-new Intel Core Ultra 9 processor 290HX Plus. The larger TUF Gaming A18 sports up to an NVIDIA GeForce RTX 5070 Ti Laptop GPU and a lightning-fast 2.5K 300 Hz IPS display for a truly impressive and immersive gaming experience.
16-inch upgrades
The 16‑inch TUF Gaming A16 and F16 receive major upgrades this generation, headlined by two premium new display configurations designed to elevate both immersion and competitive play. Users can choose between a stunning 2.5K 120 Hz OLED panel featuring a Corning DXC advanced anti‑reflection coating, or an ultrafast 2.5K 300 Hz IPS panel enhanced with ACR anti‑reflection technology for clearer visibility in bright environments. These advanced coatings significantly reduce glare while preserving color accuracy and contrast even when viewed at off angles, ensuring gameplay remains sharp and distraction‑free and increasing immersion across your entire gaming library.
Partners.ai is an AI-powered platform that helps local service businesses, like financial advisors, real estate agents, and med spas, find and connect with complementary, non-competing businesses to build referral partnerships. It uses AI to discover ideal partner matches nearby, automates personalized email outreach through the user's own Gmail, and manages the ongoing health of those partnerships. The goal is to generate warm leads that close at higher rates than cold advertising, at a fraction of the cost.
Jeff Kaplan, formerly a Blizzard executive, recently announced both a new studio and a new game, The Legend of California, which is slated to launch in 2026. Since the game's launch, the reception has been somewhat mixed, but that seemingly hasn't dampened the spirits of the studio or its executive. In a recent 10-hour livestream of The Legend of California, Kaplan addressed some of the negative comments he'd seen online, mainly taking aim at players who were criticizing the game, but who he seems to think weren't the target market in the first place.
The specific demographic that spurred Kaplan's comments were Overwatch players who have had what he describes as a "nerd rage-out," and expressing their frustration at Kaplan's chosen genre, visuals, or game design for The Legend of California. In response to these players, Kaplan said "if a game comes out, and you don't want to play it, and you've never played it, shit the f**k up---no one cares. We don't need to hear that you aren't into it." He goes on to question the reasoning behind players voicing their opinions on a video game they're never going to play, asking "Who cares about my opinion if I'm not going to play it, and if I've never played it? Why does my opinion matter on that?"
ArchieNote is an AI-powered note-taking app that turns your notes into quizzes automatically, lets you chat with an AI trained on your own content, and supports PDF uploads for instant analysis. Unlike other AI tools, ArchieNote uses pay-as-you-go credits instead of a monthly subscription—you only spend when you generate a quiz, ask a question, or upload a PDF. Light month? Your balance barely moves. Exam week? Go all in. No subscriptions, no surprises. Beta users start with 1,000 free credits with no card needed.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added a medium-severity security flaw impacting Wing FTP to its Known Exploited Vulnerabilities (KEV) catalog, citing evidence of active exploitation.
The vulnerability, CVE-2025-47813 (CVSS score: 4.3), is an information disclosure vulnerability that leaks the installation path of the application under certain conditions
Askiva automates the entire user research process. You set a topic, choose a language, and upload your participants. The platform handles sending invites, booking meetings across timezones, and conducting interviews on Zoom using an AI researcher that follows your custom script.
After the conversation, Askiva delivers accurate transcripts, grouped themes, key quotes, and sentiment analysis. It helps product teams and universities skip manual work and move from interviews to clear decisions in hours instead of weeks.
ADHD Academic Agent is an executive function automation system that pulls assignments from your student's Learning Management System (LMS), organizes everything, and pairs it with a personalized AI tutor built on their cognitive profile. ADHD students often struggle with steps before learning like checking the LMS, downloading files, organizing folders, and setting reminders. Parents manage this manually or pay coaches $200/hour. ADHD Academic Agent automates the entire process so the student can focus on learning.
There are several reasons why startups in Southeast Asia go bust, but we lack data about it. This piece helps you track the region's macro-level trends.
The iPhone 17e may be the exciting new sub-AU$1,000 iPhone, but snagging the iPhone 17 for even just AU$50 is still the better buy for a fair few reasons, and I've broken them down for you.
Rewarded Interest is your automatic consent agent. It passes your consent settings to sites as you browse, eliminating cookie popups. It protects your privacy by blocking unwanted cookies or trackers. Once enough people join, you can share an anonymous ID so advertisers can target you, earning 5% of what brands spend to reach you. Rewarded Interest doesn’t sell your data or show extra ads; it charges advertisers when they target your ID. Available free for Chrome, Brave, and Arc.
StayScore analyzes your Airbnb listing with AI and assigns a score out of 100. It then gives specific recommendations on photos, title, description, amenities, and pricing to help you get more bookings. It evaluates photo quality, staging, and copy from a guest's perspective and highlights what's missing.
Paste your listing URL to get a photo-by-photo breakdown, prioritized fixes, and a downloadable report in about two minutes. A single analysis costs $9.99, and you can re-run it after changes to track improvements.
The AirPods Pro 3, unfortunately, launched in Australia at a higher price than their predecessor, but this Amazon Big Smile Sale discount makes them a much more attractive proposition.
As if tensions weren't high enough, ICE agents are about to bring a new patient to the door. But when is The Pitt season 2 episode 11 airing on HBO Max?
You don't need to spend a lot of money to kit out your kitchen, and there are plenty of deals on Amazon's Big Smile Sale to help you upgrade old gadgets, including air fryers, steamers, egg cookers and ice-cream makers, all under AU$100.
Crimson Desert could be one of the grandest fantasy adventure games ever made, and now it's just a few days away from its release date and launch times.
MSI today announced the launch of XpertStation WS300 on NVIDIA DGX Station Architecture, a next-generation deskside AI supercomputer built to support the accelerating demands of large language models (LLMs), generative AI, and advanced data science workflows. Powered by NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, supporting up to 748 GB of large coherent memory and dual 400GbE networking, the platform extends advanced AI infrastructure capabilities into a compact deskside deployment model and is available for order starting today.
"MSI has a strategic vision to advance AI-first computing," said Danny Hsu, General Manager of MSI's Enterprise Platform Solutions. "With NVIDIA, we are defining the next era of AI infrastructure, bridging centralized performance and distributed innovation, and enabling organizations to move from experimentation to production with greater speed, scale, and confidence."
Simes presides over Computer Pals, a volunteer-run group devoted to helping older adults develop digital literacy. Under his guidance, the club's lessons range from navigating Windows 11 to distinguishing between legitimate and malicious links online. His authority doesn't stem from age or nostalgia but from curiosity – an instinct that led...
Instead of upscaling games from lower resolutions or interpolating frames using AI, DLSS 5 applies machine learning to a game's lighting model. Nvidia calls it the next stage of rendering after upscaling and ray tracing. Digital Foundry got an early hands-on look at the technology (video below), which sparked controversy...
Fatal Frame 2: Crimson Butterfly Remake is a strong survival horror game, but some ill-considered changes mean that it’s not as compelling as its legendary predecessor.
The internet is laying into NVIDIA's DLSS 5 tech for making NPCs look like AI slop, but Bethesda's Todd Howard thinks it's "amazing" in Xbox's Starfield.
HPE (NYSE: HPE) today announced significant innovations to the NVIDIA AI Computing by HPE portfolio focused on large-scale AI factories and supercomputers that enable customers to scale, deploy efficiently, and gain faster time-to-insight. The full-stack AI solutions with NVIDIA include tightly integrated compute, GPUs, networking, liquid cooling, software, and services designed for at-scale and sovereign environments. AI-forward organizations and leading research institutions, including Argonne National Laboratory, HLRS, Hudson River Trading (HRT), and the Korea Institute of Science and Technology Information (KISTI), have chosen HPE AI infrastructure and AI factories with NVIDIA to advance innovation.
HPE brings NVIDIA AI solutions to its industry-leading supercomputing platform
Research laboratories, sovereign entities and large enterprises are rapidly adopting AI to enhance traditional high performance computing (HPC) workloads. For organizations seeking to significantly expedite scientific discovery, HPE is making the following NVIDIA products available on its second-generation exascale-class supercomputing platform designed to unify AI and HPC - the HPE Cray Supercomputing GX5000.
ASUS today unveiled its fully liquid-cooled AI infrastructure at NVIDIA GTC 2026 (Booth# 421), delivering a comprehensive, end-to-end solution powered by the NVIDIA Vera Rubin platform. Under the theme Trusted AI, Total Flexibility, this customizable framework—from rack-scale AI Factories, desktop AI supercomputing, Edge AI to Enterprise AI solutions—enables enterprises and cloud providers to build high-performance, energy-efficient large-scale AI clusters with unmatched efficiency and dramatically reduced PUE and TCO.
As a provider of NVIDIA GB300 NVL72 and NVIDIA HGX B300 systems, the flagship ASUS offering is the ASUS AI POD built on the NVIDIA Vera Rubin platform—a liquid-cooled, rack-scale powerhouse designed for massive AI workloads. Through strategic partnerships with leading cooling and component providers, ASUS offers diverse cooling modalities, tailored thermal solutions, and redundancy to meet any enterprise requirement. Proven by global client successes, ASUS provides expert consultation, a broad portfolio of AI and storage solutions, seamless infrastructure deployment, application integration, and ongoing services—combining scalability, and sustainability to drive business value and intelligence.
At NVIDIA GTC 2026, Micron announced it is in high-volume production of three products at once, all timed around and designed for the Vera Rubin platform. The headline is HBM4 with the 36 GB 12-high stack started shipping in volume in Q1 2026, built for NVIDIA Vera Rubin. It hits over 11 Gb/s pin speeds for more than 2.8 TB/s of bandwidth, 2.3 times what HBM3E offered, while also improving power efficiency by over 20%. Micron has also shipped early samples of a 16-high 48 GB variant to customers, a 33% capacity bump per HBM placement over the 12H stack. Micron also announced that its 192 GB SOCAMM2 memory modules are now in high-volume production. Designed for Vera Rubin NVL72 systems and standalone Vera CPU platforms, it enables up to 2 TB of memory and 1.2 TB/s of bandwidth per CPU, with the broader SOCAMM2 portfolio spanning from 48 GB to 256 GB.
On the storage side, Micron 9650, the industry first PCIe Gen 6 data center SSD designed specifically for NVIDIA BlueField-4 STX architecture, is in mass production as we already reported here. It boasts sequential read speeds of up to 28 GB/s and can handle 5.5 million random read IOPS, essentially doubling the read performance of its Gen 5 predecessor. Furthermore, it offers a performance-per-watt ratio that is twice as efficient.
SK hynix Inc. announced today that it is participating in GTC 2026, held from March 16 to 19 in San Jose, California. NVIDIA GTC is the global AI conference where business leaders and developers gather to share the latest breakthroughs and future trends in AI and accelerated computing. SK hynix memory solutions are designed to minimize data bottlenecks and maximize performance for both AI training and inference in NVIDIA AI infrastructure. Through its participation in GTC 2026, the company plans to demonstrate its competitive edge in memory technology—the core infrastructure of the AI era.
Under the theme "Spotlight on AI Memory," SK hynix will feature an exhibition space dedicated to the AI memory technologies and solutions. The booth will consist of three main areas: the NVIDIA Collaboration Zone, the Product Portfolio Zone, and the Event Zone. The exhibition is designed around interactive content to provide visitors with an intuitive understanding of AI memory technology.
Phison Electronics (8299TT), a global leader in NAND flash controllers and storage solutions, today announced its GTC showcase at booth 119, demonstrating how multi-tier memory architecture supports larger models and long-context inference on NVIDIA-powered local AI platforms. The industry is facing a growing memory constraint while demand for AI-ready platforms continues to surge. Fine-tuning and inference on proprietary data require massive compute and memory resources, creating investment challenges for organizations. These rising solution costs and workflow bottlenecks are slowing time-to-market for revenue-generating innovation. To address this challenge, Phison introduced aiDAPTIV technology for local and edge AI use cases. By leveraging Pascari SSDs as a new AI memory tier, aiDAPTIV technology intelligently extends and manages AI working memory across GPU memory, system RAM and flash.
Today's announcement showcases how aiDAPTIV applies these multi-tier memory architecture principles to local AI systems as NVIDIA AI infrastructure advances GPU memory capabilities to support inference workloads in data center environments. Built on high-endurance flash optimized for sustained paging and context retention, aiDAPTIV supports memory-intensive inference and fine-tuning workloads under fixed hardware configurations. The aiDAPTIV flash-based memory tier enables organizations to support these evolving workloads on local systems while maintaining data privacy and improving long-term infrastructure efficiency.
Lenovo introduces the next-generation of AI workstations optimized for on-device AI development, inference and creation, including the ThinkPad P14s i Gen 7, ThinkPad P14s Gen 7 AMD, ThinkPad P16s i Gen 5, ThinkPad P16s Gen 5 AMD, ThinkPad P1 Gen 9 and the powerhouse desktop ThinkStation P5 Gen 2.
Designed for students, engineers, data scientists and everyone in between, the new workstations pack unprecedented performance capable of tackling even the most demanding workflows, including CAD, BIM, data science, AI development and more. Part of Lenovo's new Hybrid AI Advantage solutions with NVIDIA, the systems accelerate AI adoption, boost business productivity, and bring faster ROI from AI deployments, reflecting Lenovo's commitment to deliver smarter, more adaptive solutions to meet modern challenges.
Samsung Electronics, a global leader in advanced semiconductor technology, today announced the comprehensive AI computing technologies it will showcase at NVIDIA GTC 2026 in San Jose, California, scheduled for March 16-19. As the industry's only semiconductor company offering a total AI solution spanning memory, logic, foundry and advanced packaging, Samsung will exhibit its full suite of products and solutions that enable customers to design and build groundbreaking AI systems. To learn more about Samsung's AI solutions, please visit the company's GTC 2026 booth (#1207).
The centerpiece of Samsung's showcase at NVIDIA GTC 2026 will be the new sixth-generation HBM4, which is now in mass production and is designed for the NVIDIA Vera Rubin platform. Samsung's HBM4 is expected to help accelerate the development of future AI applications, delivering consistent processing speeds of 11.7 gigabits-per-second (Gbps), which exceeds the industry standard of 8 Gbps, and can be enhanced to 13 Gbps.
Penguin Solutions, Inc. (Nasdaq: PENG), the AI factory platform company, today announced the industry's first production-ready KV cache server that utilizes CXL memory technology to address the critical "memory wall" challenge in AI inferencing—Penguin Solutions MemoryAI KV cache server. This innovative solution delivers up to 11 TB of CXL-based memory engineered to optimize performance of enterprise scale inference, including agentic AI. The result is lower latency, higher throughput, increased efficiency of GPU clusters, consistent achievement of stringent service-level agreements (SLAs), and faster time-to-first-token (TTFT).
While model training and tuning is primarily compute-bound and occurs episodically, the continuous memory-bound and latency-sensitive inference workloads required for inference and agentic AI are complex and fundamentally different. Inference demands are typically 30% compute driven (GPU) and 70% memory driven (RAM), elevating the need for greater memory capacity and causing performance bottlenecks and GPU idle time. Accelerating memory-dependent AI processes, Penguin's MemoryAI KV cache server increases memory capacity by integrating 3 TB of DDR5 main memory and up to eight 1 TB CXL Add-in Cards (AICs).
Dell Technologies (NYSE: DELL) today announces support for NVIDIA NemoClaw and NVIDIA OpenShell, expanding its collaboration with NVIDIA to advance secure, autonomous AI agents. For developers working locally, Dell and NVIDIA are teaming up on NVIDIA NemoClaw—an open source stack that simplifies running OpenClaw always-on assistants, more safely, with a single command. As part of the NVIDIA Agent Toolkit, it installs the NVIDIA OpenShell, an open source runtime providing a secure environment for running autonomous agents, and open source models like NVIDIA Nemotron.
Dell Pro Max with GB10 and GB300 provide purpose-built desktop platforms that allow enterprises to build and run autonomous, self-evolving agents locally with frontier-level intelligence. As the first OEM to ship a desktop with NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip, Dell brings 20 petaFLOPS of performance and 748 GB of memory directly to developers' desks.
HYTE, a leading manufacturer of cutting-edge PC components, is happy to reveal that its Firefly-themed product lineup inspired by the iconic Stellaron Hunter from HoYoverse's Honkai: Star Rail is now available for purchase at HYTE.com.
At the heart of the product lineup is the Official HYTE Y70 Firefly Case, which features a titanium silver colorway, 360-degree character artwork, and thematic elements and iconography inspired by Firefly and her S.A.M. armor throughout the chassis. In addition to the standard case, there are still limited-edition JP versions of the case that feature Kanji / Katakana branding for the Honkai: Star Rail logo. There is also an exclusive keychain based off Firefly's in-game phone case that can only be obtained by ordering either the US or JP PC case.
Affinity's latest update to introduces Light UI for a brighter and cleaner workplace, Convert to Curves to eliminate manual tracing by transforming objects into a fully editable vector curves, and Live Tone Blend Groups which blends layers dynamically and non-destructively.
Drop Beacon tracks product releases across top everyday carry (EDC) brands so you never miss a drop. Set your interests, follow brands, and get timely notifications when knives, pens, flashlights, fidgets, and more are released. Browse current and upcoming drops, filter by materials and mechanisms, and jump straight to the seller.
Drop Beacon also lets you create Pocket Dump photos to share with the EDC community and view in the Pocket Dump gallery. It is the perfect one-stop shop for all your EDC needs. Stop chasing, start carrying.
BrightSite is a website platform for small businesses and agencies tired of patching WordPress plugins. Analytics, forms, SEO, SSL, CDN, and staging are built in with no plugins or extra bills. Pages load quickly over WebSocket navigation. Manage content using AI tools via MCP, a ChatGPT app, publish llms.txt for AI search visibility, and use Lumi, an AI chat assistant. Plans start at $39/month.
Sen. Elizabeth Warren noted that Grok, xAI's controversial chatbot, has created harmful outputs for users and poses a potential national security risk.
Sony confirms that AMD has a new version of FSR in the works When announcing the rollout of its “Improved PSSR” AI upscaler, Sony confirmed that AMD is working on a new version of its FSR upscaler. AMD and Sony PlayStation are collaborating on AI algorithms and neural networks as part of their “Project Amethyst” […]
A new update to the Xbox PC app introduces manual library additions, allowing users to launch apps and games directly from the interface without relying solely on supported storefronts.
Amazon has a special 34% discount on the AMD Ryzen 5 9600X processor, a CPU with fantastic performance rates when paired with RTX 4080 graphics cards, while being affordable for budget-conscious PC builders.
Kioxia America, Inc. today announced the development of its Super High IOPS SSD, a new type of SSD enabling the GPU to directly access high-speed flash memory as an expansion to High Bandwidth Memory (HBM) in AI systems. The new Super High IOPS SSD, the KIOXIA GP Series, is purpose-built to meet the growing performance demands of AI and high-performance computing, providing larger GPU-accessible memory capacity for faster data access to AI workloads. Evaluation samples of KIOXIA GP Series will be available to select customers by the end of 2026.
The NVIDIA Storage-Next initiative addresses the anticipated shift from compute-intensive to data-intensive workloads and the expanded need for GPU-accessible memory space, currently limited by HBM size. Expanding the GPU's usable memory space allows access to larger data sets and improves GPU utilization by moving more data closer to compute resources.
Super Micro Computer, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for Cloud Computing, AI/ML, Storage, and 5G/Edge, today unveiled its upcoming system portfolio powered by the NVIDIA Vera Rubin platform. As data centers transform into AI factories, producing intelligence at massive scale, agentic reasoning, long-context AI, and Mixture-of-Experts (MoE) workloads are driving demand for an entirely new class of compute and storage infrastructure. Supermicro's NVIDIA Vera Rubin NVL72, NVIDIA HGX Rubin NVL8, NVIDIA Vera CPU systems are being designed and built with Supermicro's Data Center Building Block Solutions (DCBBS) advanced liquid-cooling technology stack to accelerate time-to-market for customers.
"We are entering a new era where every organization requires an AI factory to win in the marketplace, as the demand for inference workloads is reshaping what data center infrastructure must deliver," said Charles Liang, president and CEO of Supermicro. "Supermicro's DCBBS technology stack is being engineered to empower upcoming NVIDIA Vera Rubin NVL72, HGX Rubin NVL8, and Vera CPU systems to give our customers a fast, clear path to deploying next-generation AI factories, at scale. We are excited to provide an early look at these solutions as a testament to being first to market with the infrastructure that will power the next frontier of AI."
AMD, a leader in high-performance and AI computing, and Celestica Inc., a global leader in data center infrastructure and advanced technology solutions, today announced a strategic collaboration to bring the new "Helios" rack-scale AI platform to market. The collaboration pairs AMD computing leadership with Celestica's expertise in delivering leading-edge networking switch technologies. At launch, Celestica will undertake the R&D, design and manufacturing of scale-up networking switches in the AMD "Helios" rack-scale AI architecture, based on the Open Compute Project (OCP), Open-Rack-Wide (ORW) form-factor.
The scale-up switches will utilize advanced networking silicon to enable the high-speed interconnect of the next-generation AMD Instinct MI450 Series GPUs, enabling leading-edge computing, optimized for large-scale AI clusters. Consistent with the open standards-based design of the "Helios" platform, the networking switches will utilize the Ultra Accelerator Link over Ethernet (UALoE) architecture for scale-up connectivity. AMD "Helios" will be available to customers in late 2026.
Today at NVIDIA GTC 2026, Intel announced that Intel Xeon 6 is being used as the processor for NVIDIA DGX Rubin NVL8 systems. This highlights Xeon's role in providing architectural continuity and scalability for GPU-accelerated AI systems as workloads shift toward massive, real-time inference.
"AI is shifting from large-scale training to real‑time, everywhere inference-driven by agentic AI and reasoning systems," said Jeff McVeigh, corporate vice president and general manager, Data Center Strategic Programs at Intel. "In this new era, the host CPU is mission‑critical. It governs orchestration, memory access, model security, and throughput across GPU‑accelerated systems. Intel Xeon 6 delivers leadership performance, efficiency, and compatibility with the extensive x86 software ecosystem that customers rely on to scale inference workloads."
NVIDIA today announced NVIDIA BlueField-4 STX, a modular reference architecture that enables enterprises, cloud and AI providers to easily deploy accelerated storage infrastructure capable of the long-context reasoning required for agentic AI. Traditional data centers provide high-capacity, general-purpose storage but lack the responsiveness required for seamless interaction with AI agents that work across many steps, tools and sessions. Agentic AI demands real-time access to data and contextual working memory to keep conversations and tasks fast and coherent. As context grows, traditional storage and data paths can slow AI inference and reduce GPU utilization.
NVIDIA STX allows storage providers to build infrastructure that keeps data close and accessible at scale, so agentic AI factories can deliver higher throughput and responsiveness across inference, training and analytics. The first rack-scale implementation includes the new NVIDIA CMX context memory storage platform, which expands GPU memory with a high-performance context layer for scalable inference and agentic systems - providing up to 5x tokens per second compared with traditional storage.
NVIDIA today launched the NVIDIA Vera CPU, the world's first processor purpose-built for the age of agentic AI and reinforcement learning—delivering results with twice the efficiency and 50% faster than traditional rack-scale CPUs. As reasoning and agentic AI advances, scale, performance and cost are increasingly driven by the infrastructure supporting the models that plan tasks, run tools, interact with data, run code and validate results.
The NVIDIA Vera CPU builds on the success of the NVIDIA Grace CPU, enabling organizations of all sizes and across industries to build AI factories that unlock agentic AI at scale. With the highest single-thread performance and bandwidth per core, Vera is a new class of CPU that delivers higher AI throughput, responsiveness and efficiency for large-scale AI services such as coding assistants, as well as consumer and enterprise agents.
While the traditional discrete sound card has largely become a niche product for enthusiasts and hardware obsessives, Creative is attempting to attract new customers with a fresh model. The newly launched Sound Blaster Audigy Fx Pro can significantly upgrade the audio experience, the company says, and includes an additional layer...
The sale marks a turning point for GFiber, which began in 2012 as Google Fiber – a bold experiment aimed at challenging the sluggish US internet market. Its gigabit speeds, demonstrated first in Kansas City, were ahead of their time but proved difficult to expand profitably.
At GTC 2026, Nvidia revealed the Groq 3 accelerator and Groq LPX rack as part of the Vera Rubin platform. These SRAM-packed, inference-focused chips deliver large amounts of memory bandwidth to help Rubin deliver low-latency interactions with AI models spanning trillions of parameters and million-token contexts.
Nvidia announced more details about its new 88-core Vera data center CPUs, claiming impressive 50% performance gains over standard CPUs, fueled by a 1.5X increase in IPC from its Olympus cores. The firm also unveiled its new Vera CPU Rack architecture, which brings 256 liquid-cooled CPUs into one rack for CPU-centric workloads.
PilotFI is a privacy-first financial independence planning toolkit for European investors. It models state and private pensions, supports EU tax profiles, and runs 1,000-run Monte Carlo simulations and 97-year backtests, visualizing your path to financial independence with clear projections.
Track all assets, income, expenses, loans, and dollar-cost averaging across currencies. Plan as a household and export to PDF, analyze with AI, and simulate projection scenarios. Data is preloaded based on user location and stays EU-hosted with no bank connections. Pro users can enable Local Mode to keep all financial data on-device.
SportBot AI turns hours of pre-match research into just 60 seconds. It aggregates injuries, form, head-to-head history, and real-time odds from over 50 bookmakers to calculate win probabilities, detect edges, and flag potential risks across soccer, NBA, NFL, and NHL. You can see model versus market lines, predicted scores, risk levels, and best prices, then make your own decisions. Start free, or upgrade for unlimited analyses, AI chat, and edge alerts, with a performance dashboard tracking every prediction.
The GlassWorm malware campaign is being used to fuel an ongoing attack that leverages the stolen GitHub tokens to inject malware into hundreds of Python repositories.
"The attack targets Python projects — including Django apps, ML research code, Streamlit dashboards, and PyPI packages — by appending obfuscated code to files like setup.py, main.py, and app.py," StepSecurity said. "Anyone who runs
TechRadar spoke with Kyle Laughlin, SVP of R&D, Technology and Engineering at Walt Disney Imagineering, about how Disney built a walking Olaf robot in just four months — and why it could lead to parks filled with roaming characters.
GridBeyond's hardware and software coordinates several gigawatts of supply and demand to help balance the flow of electricity on the grid. The idea has attracted investors like Samsung Ventures.
Nvidia promises photorealistic graphics with DLSS 5 At GTC 2026, Nvidia has shocked the gaming world with DLSS 5, a new AI model that promises to deliver photorealistic visuals with today’s gaming hardware. DLSS 5 is due to launch this fall, and Nvidia calls it their “most significant breakthrough in computer graphics since the debut […]
Stop struggling with your TV's laggy interface. The Amazon Fire TV Stick 4K Plus is down to $25, offering snappy 4K streaming and Xbox cloud gaming for half off. (158 chars)
Xbox says it is removing friction for developers, but readers debate whether install base, Game Pass, and platform priorities still matter more than tooling.
Eight months after it began, the legal battle between Subnautica 2's creators and its publisher Krafton has ended in victory for Unknown Worlds' leadership.
NVIDIA today unveiled NVIDIA DLSS 5, the company's most significant breakthrough in computer graphics since the debut of real-time ray tracing in 2018. DLSS 5 introduces a real-time neural rendering model that infuses pixels with photoreal lighting and materials. Bridging the divide between rendering and reality, DLSS 5 empowers game developers to deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.
"Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again," said Jensen Huang, founder and CEO of NVIDIA. "DLSS 5 is the GPT moment for graphics—blending hand-crafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression."
A fresh leak from Geekbench has surfaced listing an unannounced AMD processor with the OPN (Ordering Part Number) code 100-000001713-31. As for the platform, that is listed as Plum-MDS1, a somewhat direct hint to Medusa Point, AMD's next-gen mobile APU family based on Zen 6. The chip is a 10-core, 20-thread with a 2.4 GHz base clock, though actual test results show it running closer to 1.3-2 GHz, unsurprising for an engineering sample this early in development. Each core gets 1 MB of L2 cache, and the L3 comes in at 32 MB, up from 24 MB on Strix Point and 16 MB on Hawk Point. This is 50% more cache than the current 10-core parts like the Ryzen AI 9 365. The test system had 32 GB of memory installed, type unspecified. Benchmark numbers aren't worth reading into at this stage, as the chip spent most of the test hovering around 1.39 GHz and barely peeked above 2 GHz, well below what a final retail part on a 3 nm process would run at. Too early to draw any conclusions from the scores.
Shipping manifests from Planet3DNow linked to the "Medusa" codename suggest a 4C+4D layout, standard and density-optimized cores, though that doesn't exactly explain the 10-core count seen in Geekbench. One theory is that two additional low-power cores sit in the IO die, bringing the total to ten. The part is expected to be a 28 W TDP mobile chip for the FP10 socket. Medusa Point is shaping up to combine Zen 6 CPU cores with a mix of RDNA 5 and RDNA 3.5 graphics, plus an updated NPU. A launch around CES 2027 fits the AMD usual cadence, meaning there is likely a long wait ahead; however, this Geekbench entry confirms that testing is already happening, whether at AMD itself or at one of its early hardware partners.
As PC gamers and enthusiasts, we all know by now that there is a supply shortage of various silicon components in the personal computing market, with the blame largely falling on AI demand and NVIDIA's and AMD's pivots to AI data center supply. In a recent report by Money UDN MSI's General Manager, Huang Jinqing, predicts a 15-30% price increase for its gaming products across the board as a result of the current market conditions. Jinqing goes on to say that there is an NVIDIA GPU supply shortage on the order of 20%.
As a result of the current supply shortages, Jinqing suggests that the overall PC market will shrink by as much as 10-20%. Recent retailer reports paint a slightly grimmer picture, though, with Mindfactory's sales figures pointing to a more significant decline in GPU sales volume. In light of the recent shortages, especially of DDR5 memory, hardware makers have turned to older platforms, with ASUS announcing increased production for DDR4 motherboards. Now, it seems as though MSI will follow ASUS down that path, with Jinqing confirming that it has signed long-term contracts for memory and shifted production to increase DDR4 motherboard output.
Microsoft previewed Copilot Suggestions in notifications as far back as 2024, but the feature was never rolled out, even in preview builds for Windows Insiders. According to Windows Central, the company decided to scrap the AI-powered feature following severe backlash over the Windows Recall tool, which many cybersecurity experts viewed...
A "cracker" known as Voices38 recently shared their latest release: a Denuvo-free version of Doom: The Dark Ages. The newest entry in id Software's FPS franchise has now entered the wild world of PC piracy, though it remains an 81 GB download with significant hardware requirements, including a powerful gaming GPU.
As Meta introduces its lineup of new AI chips, the company joins other tech giants in diversifying the AI accelerators used for specific workloads, and says that mainstream GPUs built for large-scale pre-training are less cost-effective for inference workloads.
A bug with a Samsung app on Windows 11 has caused some users to lose access to their C: drive, following the installation of the KB5077181 security update, according to a notice posted by Microsoft online.
Acer’s Swift Go pairs an Intel Core Ultra 7 255H processor with 16GB RAM and a 1TB SSD for demanding work, fast multitasking, and AI-powered applications.
Nvidia’s new DLSS 5 uses generative AI and structured graphics data to make video games more realistic. CEO Jensen Huang says the approach could eventually spread to other industries.
Sony’s upgraded PSSR tech is now available in several PlayStation 5 Pro games Sony has officially started rolling out its improved PSSR technology across a range of games. Several partners have already implemented Sony’s improved upscaler into their PlayStation 5 Pro games, and more game upgrades are on the way. PSSR upgrades are now available […]
Resident Evil Requiem only just launched to massive fanfare, having surpassed 5 million units in a matter of days, however, just over a week after passing the 5-million sales mark, Capcom has announced that Requiem has surpassed 6 million units sold in the 16 days since the game launched. According to the gaming giant's announcement, this officially makes it the fastest-selling Resident Evil game in the franchise's history.
In the post celebrating the achievement, Capcom also announced a celebration event for the franchise's March 22 30th anniversary, and revealed that it is developing additional content for Resident Evil Requiem, although it declined to elaborate what the additional content would be, beyond simply stating that "Going forward, Capcom plans to implement several measures, such as ongoing support and additional game content, so players can continue to enjoy the title longer." It seems likely that this "additional game content" will go beyond the already announced story expansion that Capcom recently revealed in a post on X.
It's undeniable that Palworld has had a successful Early Access period, even spawning a spinoff late last year in Palfarm and collaborating with some of the biggest games in the indie industry. Now, as the developer, Pocketpair is lining up to launch sometime in 2026, John Buckley, head of publishing and communications lead at Pocketpair, has made a few comments hyping up the launch during a GDC interview with GamesRadar+. Buckley says that, while survival crafting is a niche genre with a large player base on Steam, Pocketpair hopes that Palworld will be a game with something for everyone to enjoy. "There's so many incredible, incredible survival crafting games, but I think every survival crafting gamer has their like ideal version of what survival crafting should be, and we hope Palworld 1.0 will be that kind of something for everyone, lose yourself in this world, survival crafting game."
He goes on to suggest that the 1.0 launch will add polish to all of the game's mechanics and flesh out the mechanics that are still incomplete in addition to expanding the base content of the game for more advanced players "Now, a huge chunk of quote unquote end game content will be added. So if you really want to continue from where you left off, sure you're not missing out on the full experience, but you are missing out on some things. We've tried our best to expand everything. Not just the end game, but also improve the early game, add more to the early game, flesh out the middle game, kind of something for everyone, really."
NACON, a leader in premium gaming gear and parent of the RIG audio brand, today unveiled their latest release in their R-SERIES of headsets - the RIG R5 SPEAR MAX HD. Purpose-built for PC gaming, the R5 MAX HD features innovative GrapheneQ drivers from Ora, setting a new standard for studio-grade game audio.
"We're very excited to partner with Ora to bring their groundbreaking audio technology to our R-SERIES headsets," said Head of Audio Product at RIG, Michael Jessup. "The R5 MAX HD forms the next step in our mission to develop the ultimate range of headsets for competitive gamers."
AMD's hardware has generally enjoyed better support on Linux than its Intel and NVIDIA competition, although adoption and feature-parity to Windows can sometimes be a little slow. This has been the case with the AMD's APUs, which only just received power and usage monitoring via a pull request for Linux 7.1. The new AMDXDNA driver will expose power monitoring metrics for AMD Ryzen AI NPUs via DRM_IOCTL_AMDXDNA_GET_INFO, alongside new metrics to expose real-time NPU busy metrics to applications.
Both of these new metrics will presumably be used by those running and developing local LLMs and can be used to gauge hardware utilization and improve scheduling for AI tasks. These changes are expected to land in Linux 7.1, slated to release after 7.0, which is currently in development and is expected to launch sometime between April and May. Linux 7.0 itself is expected to introduce some significant performance improvements when it comes to cache and memory handling.
Pre-orders for the AirPods Max 2 open March 25 in more than 30 countries, with general availability beginning in early April. Apple is sticking with the familiar industrial design, which means the sequel looks a lot more like a spec and feature refresh than a dramatic reimagining.
Popular Photoshop alternative GIMP has been updated to feature non-destructive Link and Vector Layers, an upgraded MyPaint Brush tool, and expanded file format support including SVG export. The update also brings UI and stability improvements.
ConvertlyAI.online is a text-based SaaS designed for speed and precision. Use our curated prompt library to turn simple ideas into structured digital assets, professional copy, and organized content in seconds. Built for creators and developers who need high-quality output without the friction of complex AI interfaces. Key features include 10+ asset conversion types, a pro-grade prompt library, global-ready text generation, and a minimalist, high-speed UI.
Town is an AI work assistant that connects to your email, calendar, docs, and chat to triage inboxes, draft in your voice, manage scheduling, and run multi-step workflows with your oversight. It learns your preferences and maintains a memory profile so briefs, drafts, and actions match how you work. Use it across web, email, Slack, iOS, WhatsApp, and desktop. Choose read-only, approval-required, or autonomous modes, set per-tool boundaries, and review a clear action log while Town executes tasks across Gmail, Google Calendar, Drive, Slack, Notion, and more.
A Russian teach takes on Vladimir Putin in this Oscar-winning doc — here’s how to watch Mr Nobody Against Putin online and for free from any country as the film wows movie and politics fans.
Prime Video’s funniest reality show, Last One Laughing season 2, drops its first 3 episodes this week — and I know exactly who will be eliminated first.
Sony’s PS-LX310BT has been a roaring success — but time waits for no Bluetooth turntable, so more than six years since the 310 launched it’s finally being replaced. Which means this PS-LX5BT has some big and successful shoes to fill…
Here's a quick guide on how you can stream March Madness 2026 for free using trial offers from YouTube TV, Hulu+Live TV, Fubo TV, DirecTV Stream and more.
OpenAI is beginning to build the infrastructure for a formal advertising business around ChatGPT — but early performance signals suggest the company still has work to do to match established search platforms.
What’s happening. OpenAI started testing an Ads Manager dashboard with a small group of partners, according to confirmation shared with ADWEEK. The tool allows marketers to launch, monitor, and optimise campaigns in real time, similar to the campaign management platforms used across digital advertising.
Why we care. OpenAI is beginning to build a self-serve ads ecosystem around ChatGPT with a dedicated Ads Manager, as they prepare for AI assistants becoming a scalable channel. As conversational search grows, paid media marketers may need to think about visibility inside AI responses, not just traditional platforms like Google Search.
Early testing also means advertisers who participate now could gain first-mover insights into performance, formats, and optimisation strategies in a new advertising environment.
How it works today. Early testers currently receive weekly CSV performance reports that include metrics such as impressions and clicks. The reporting indicates the ads product is still evolving, with more advanced analytics and tooling likely to follow as the program develops.
The challenge: Early tests suggest click-through rates on ChatGPT ads trail those seen on Google Search, highlighting a key hurdle for OpenAI as it tries to prove the value of advertising inside conversational AI.
The cost of entry. Some early advertisers have reportedly been asked to commit at least $200,000 in spend, raising the stakes for OpenAI to demonstrate measurable performance and ROI.
Between the lines. Building an ad ecosystem requires more than ad inventory. Marketers expect robust reporting, optimisation tools, and predictable performance — areas where mature platforms like Google have years of advantage.
Google appears to be testing a new “Sponsored Shops” format in Google Shopping results that highlights entire stores instead of individual products — a potential shift in how brands compete in Shopping ads.
What’s happening. Instead of displaying only single product listings, the new block groups multiple products from the same retailer into one sponsored unit. The format features the store name, several products from that shop, and signals such as ratings and brand presence, effectively creating a mini storefront directly inside the Shopping results.
Why we care. The new “Sponsored Shops” format in Google Shopping could shift competition from individual products to entire stores. Instead of winning visibility with a single SKU, brands may need stronger product feeds, better ratings, and broader assortments to appear in these store-level placements.
It also introduces multiple click paths within one ad unit, which could change how traffic flows between product pages and store pages. If the format scales, it may reshape how advertisers optimise campaigns across Google Shopping — prioritising brand presence and feed quality, not just product-level bids.
The big picture. The test suggests a move slightly up the funnel for Shopping ads. Rather than focusing solely on a single SKU, brands can showcase a broader product assortment and reinforce their store identity within one placement.
Why it’s notable. Store-level visibility means advertisers can highlight multiple products at once, increasing exposure per impression. It also strengthens brand presence by combining store name, ratings, and product range in one block.
For users, it makes discovery easier by allowing them to browse several items from the same retailer without navigating away from results.
Between the lines. If the format rolls out widely, it could reward brands with strong product feeds, high seller ratings, and clear brand trust signals. Merchants with well-structured feeds and competitive assortments may gain more visibility compared with those relying on a few individual product listings.
What to watch. One open question is how users will interact with the different clickable elements inside the ad unit. Marketing Operating Lead, Stephanie Pratt commented on this and what measurement split we may expect:
“It’ll be interesting to see the split of clicks on each part of the ad unit, and how much is on the brand name vs product and if that will confuse some consumers
The bottom line. If “Sponsored Shops” expands beyond testing, it could push Google Shopping toward more store-level competition — shifting strategy from purely product-level optimisation to building stronger brand presence within the Shopping ecosystem.
Fist seen. This update was spotted by PPC Specialist Arpan Banerjee who shared a screenshot of the update on LinkedIn.
RPCS3 team adds “Create Steam Shortcut” option to its PlayStation 3 emulator RPCS3 is the world’s top PlayStation 3 emulator, and a new update for the tool has dropped that allows PC gamers to add their PlayStation 3 games to their Steam Library. Using the emulator’s new “Create Steam Shortcut” tool, gamers can add their […]
Microsoft Edge 146 begins the retirement of Collections and custom primary passwords, but it also makes a change that stops passwords from being deleted when clearing history.
NACON and developer Eko Software are pleased to announce that Dragonkin: The Banished, the new Hack'N'Slash from the French studio, is now available on PC (Steam), as well as for all owners of the Digital Deluxe Edition on consoles in its final version. It will be available for all other players on March 19.
Developed by the Parisian studio Eko Software, known for its work on titles like Warhammer: Chaosbane and the How to Survive series, Dragonkin: The Banished benefited from a one-year early access period, launched on March 6, 2025, which allowed for the integration of community feedback. With 85% positive reviews on Steam since the game's last update, players can now access the ravaged world of Dragonkin: The Banished in its definitive version.
Cyber Acoustics, a trusted provider of technology solutions for education and business, today announced the WC-1000 webcam, engineered for high-quality video while eliminating features that create security risks and IT support challenges for Business Process Outsourcing (BPO) companies, enterprises, and remote teams.
The WC-1000 is a purpose-built, video-only camera with no built-in microphone. By eliminating the mic entirely, Cyber Acoustics reduces common challenges for distributed teams and working professionals that already rely on dedicated headsets or speakerphones for audio.
Kingston Digital, Inc., the Flash memory affiliate of Kingston Technology Company, Inc., a world leader in memory products and technology solutions, today announced the launch of the next-generation IronKey Locker+ 50 G2 (LP50 G2) hardware-encrypted USB flash drive. The drive provides enterprise-grade security with FIPS 197 and AES 256-bit hardware encryption in XTS mode. It also safeguards against BadUSB with digitally signed firmware and against Brute Force password attacks.
LP50G2 features a premium space grey metal casing and supports both Admin and User passwords with options for Complex or Passphrase modes. Complex mode allows 6-16 character passwords using at least three of four character sets. Passphrase mode supports PINs, sentences, word lists, or other memorable phrases from 10-64 characters. Admin can enable or reset User passwords as needed. To aid in password entry, the "eye" symbol can be enabled to reveal the typed-in password, reducing typos leading to failed login attempts. Brute Force password attacks protection locks the User password after 10 failed password attempts in a row and crypto-erases the drive if the Admin password is entered incorrectly 10 times in a row. Additional safeguards include virtual keyboard to protect against keyloggers and screenloggers and anti-fingerprint coating on the casing which helps with resisting scratches.
SilverStone Technology today announced the RM100, a 1U rackmount server chassis designed for space-constrained server environments. Despite its compact 1U form factor, the RM100 supports up to ATX motherboards, providing system integrators and enterprise users with a flexible and efficient platform for building rackmount systems.
The RM100 is engineered with system configuration flexibility in mind. Its reversible chassis design allows users to choose whether the I/O ports face the front or the rear of the rack, making it easier to adapt to different rack layouts and installation environments. The power supply position can also be adjusted to suit various deployment requirements. In addition, components such as the rail kit, handles, drive bays, and power module are designed to be reversible, offering greater convenience and adaptability during system integration and installation.
German retailer Mindfactory indicates that AMD's Radeon RX 9070 XT is its top-selling GPU for weeks 9-11 of 2026 (meaning March 1-15), although it also indicated that there is a dramatic decrease in GPU sales during the same time period. According to the retailer data (shared by TechEpiphanyYT on X), AMD made up 55.6% of the outlet's sales for that time period, with the AMD Radeon RX 9070 XT in the lead as top seller with 25.6% market share and the RX 9060 XT following that up with 20.3%. The next five spots, though, are all NVIDIA GPUs—namely the RTX 5080 at 11.8%, the RTX 5070 Ti at 9%, the RTX 5060 and 5070 at 7% each, and the RTX 5090 at 5.3%.
Apple today announced AirPods Max 2, bringing even better Active Noise Cancellation (ANC), elevated sound quality, and intelligent features to the iconic over-ear design. Powered by H2, features like Adaptive Audio, Conversation Awareness, Voice Isolation, and Live Translation come to AirPods Max for the first time. The new AirPods Max also unlock creative possibilities for podcasters, musicians, and content creators, with useful features like studio-quality audio recording and camera remote. AirPods Max 2 will be available to order starting March 25 in midnight, starlight, orange, purple, and blue, with availability beginning early next month.
"With the incredible performance of H2, AirPods Max are upgraded with up to 1.5x more effective ANC for the ultimate all-day listening experience," said Eric Treski, Apple's director of Audio Product Marketing. "The sound quality is remarkably clean, rich, and acoustically detailed—and when combined with capabilities like Personalized Spatial Audio, AirPods Max 2 deliver a profoundly immersive experience."
Some weeks in security feel normal. Then you read a few tabs and get that immediate “ah, great, we’re doing this now” feeling.
This week has that energy. Fresh messes, old problems getting sharper, and research that stops feeling theoretical real fast. A few bits hit a little too close to real life, too. There’s a good mix here: weird abuse of trusted stuff, quiet infrastructure ugliness,
Unlike WhatsApp and Facebook Messenger, where E2E encryption is either the default or automatically applied to certain message types, Instagram's deployment had always been partial and manual. The feature was available only to a subset of users and had to be explicitly enabled.
The new Digg experiment is ending in a resounding fiasco. The beta version of the rebuilt social sharing portal has already been shut down – a "difficult" decision that forced the company to significantly downsize its development team. Building new internet projects in 2026 is a completely different experience, Digg...
Because the failures appeared around March Patch Tuesday and followed recent security updates, plenty of people blamed Microsoft's latest patches. Now, the company says the real culprit was not Windows itself, but Samsung software.
Intel's CPU roadmap is unlike any the company has published in recent years, because its manufacturing ambitions and its product launches have to succeed simultaneously.