
Today – 17 March 2026

Xbox App Can Now Add Any Third-Party Game to Its Library

17 March 2026 at 21:18
The Xbox App has been undergoing an overhaul for some time, and the latest update allows users to add apps, games, and virtually any third-party software to its library. Windows Central has tested the feature, providing a preview of the process. Although these third-party titles are not linked to any Microsoft service, the Xbox App offers a centralized location for launching them as shortcuts. Steam has offered a similar capability for years, letting gamers add games installed from third-party stores to the Steam client, but only as launch shortcuts rather than officially supported sources. Any updates to third-party applications will still be managed by their respective apps or clients, with the Xbox App serving merely as a unified shortcut hub.

The process is quite simple. After opening the Xbox App on your PC, go to the "My Library" section, find the small "+" icon in the top-right corner, and click it to see suggested additions. If your application isn't listed among the suggestions, the Xbox App lets you browse manually via File Explorer and locate what you want to add. Nearly any .exe file works, including games, productivity apps, and almost anything else you can imagine. For users who want a single launcher, this feature embeds any game or app within the same user interface, a nice option for those who enjoy the Xbox App's user experience.
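For readers who want to round up candidates before adding them one by one in the UI, a short script can list every .exe under a games folder. This is a minimal sketch of our own (the function name and approach are illustrative, not part of the Xbox App):

```python
import os

def find_exe_candidates(root):
    """Walk a directory tree and collect .exe files that could be
    added to the Xbox App library as launch shortcuts."""
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".exe"):
                candidates.append(os.path.join(dirpath, name))
    return sorted(candidates)
```

Point it at something like `C:\Games` and feed the results into the app's manual-browse dialog.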

NVIDIA Launches RTX PRO 4500 Blackwell Server Edition GPU

17 March 2026 at 20:54
NVIDIA has added another graphics card to its server lineup, this time in the form of a passively cooled, single-slot RTX PRO 4500 Blackwell Server Edition GPU. The company positions this release as a highly efficient GPU for compute-dense environments. It comes with 10,496 CUDA cores, 82 Ray Tracing cores, and 32 GB of GDDR7 memory running on a 256-bit bus, providing 800 GB/s of memory bandwidth, all within a total graphics power of 165 W. The specification is similar to the current RTX PRO 4500 Blackwell with an active dual-slot cooler, but shaves 35 W off the TGP, as the actively cooled edition is rated at 200 W. The difference is attributed to higher-binned "Blackwell" GB203 dies with better frequency tuning, so the performance target remains similar. The server edition also trims memory bandwidth, running the 32 GB of GDDR7 at 25 Gbps effective, while the regular blower-style RTX PRO 4500 Blackwell uses full 28 Gbps modules.
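The quoted 800 GB/s figure follows directly from the bus width and per-pin data rate; a quick sanity check:

```python
def memory_bandwidth_gbps(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: (bus width / 8) bytes moved
    per cycle, times the effective per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Server Edition: 256-bit GDDR7 at 25 Gbps effective
assert memory_bandwidth_gbps(256, 25) == 800.0
# Regular blower-style card: full 28 Gbps modules
assert memory_bandwidth_gbps(256, 28) == 896.0
```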

This server edition SKU is designed for server configurations that require hyper-dense setups, where a single-slot solution is cooled by high-RPM server fans. For example, server farms could install a dozen of these GPUs in parallel within a single system, stacking them across available PCIe slots. A high-airflow chassis pushes air through the passively cooled shroud to dissipate the 165 W TGP. Interestingly, this is not even the most efficient GB203 bin with 10,496 CUDA cores: NVIDIA offers a GeForce RTX 5090 Mobile SKU with only a 95 W TDP. That efficiency comes at the cost of clock speeds, which remain unknown for the new RTX PRO 4500 Blackwell Server Edition GPU.

NVIDIA's DLSS 5 Keeps Image Quality Consistent with Artistic Intent

17 March 2026 at 12:43
Yesterday, NVIDIA unveiled its latest DLSS 5 technology, offering gamers the first real-time neural rendering. Even after the announcement, however, many questions remained about what DLSS 5 is capable of and how it will work with games, so NVIDIA released an FAQ to address common inquiries. The primary goal of DLSS 5 is to enhance visual fidelity through various techniques that create scenes with photorealistic lighting and materials. Perhaps the most interesting aspect is that DLSS 5 will honor the original artistic intent by using the game's color and motion vectors for each frame, anchoring the DLSS 5 model to the specific scene. This keeps the output in line with what the game developers originally envisioned for each frame, with DLSS 5 layering its visual enhancements on top.

This overhaul is completed in several steps. The first is cinematic lighting, achieved through complex effect reconstruction for realistic skin glow, shadows, and more. Next is material depth: DLSS 5 applies micro-realism to the surface of any object, such as a rock or a wall, delivering a realistic texture that enhances the game. NVIDIA highlights that its latest DLSS installment offers temporal consistency, meaning the image quality is fine-tuned frame by frame to closely follow the game content, ensuring visual enhancements remain consistent. Interestingly, this technology will work alongside existing techniques like path tracing, where path tracing provides lighting accuracy, and DLSS enhances lighting photorealism. This means path tracing improves overall shadow behavior and reflections, while DLSS 5 makes them as realistic as possible.
Yesterday – 16 March 2026

Gigabyte BRIX Adds Intel "Panther Lake" Core Ultra 9 and Up to 128 GB DDR5 Memory

16 March 2026 at 12:21
GIGABYTE is gearing up to launch a new version of its small-form-factor mini PCs called BRIX, now featuring Intel's "Panther Lake" and up to 128 GB of CSO-DIMM DDR5-6400 memory. The new BRIX GB-BRU9-386H is powered by the Intel Core Ultra 9 386H SoC with a 28 W TDP, offering a total of 16 cores distributed across four P, eight E, and four LPE core clusters (4P+8E+4LPE). It includes only four Xe3 GPU cores running at 2.5 GHz, delivering 40 TOPS. This is one of the less powerful iGPU configurations in the "Panther Lake" lineup, as the flagship Core Ultra X9 388H features 12 Xe3 cores, likely requiring better cooling for the additional GPU power. The strength of GIGABYTE's BRIX mini PC lies in its compactness, making it suitable for any work environment where GPU power isn't the main focus. With dimensions of just 112.6 x 34.4 x 119.4 mm, the volume of this tiny PC is only 0.47 liters.

Despite its small size, it is quite feature-rich, including one M.2 2280 slot for a PCIe Gen 5 SSD and another M.2 2280 slot for a PCIe Gen 4 SSD. As these BRIX PCs are barebones models, users need to add their own memory and storage. For memory, the two slots accept either SO-DIMM DDR5 modules at speeds of up to 5,600 MT/s with a maximum capacity of 96 GB, or CSO-DIMM DDR5 modules running at up to 6,400 MT/s with a maximum capacity of 128 GB. The front panel includes a headset jack, two USB 3.2 Gen 2 Type-A ports, and a USB4 port. The back panel is equipped with two HDMI 2.1 ports, a USB4 port, a USB 3.2 Gen 2 Type-C port with power delivery input, a USB 3.2 Gen 2 Type-A port, a USB 2.0 Type-A port, and an RJ45 Ethernet connector. Pricing and availability are not yet listed, but GIGABYTE is already showcasing the model on its website, which suggests launch is imminent.

Memory Makers Expect Shortages to End in Late 2028, Could Pause Expansion Plans

16 March 2026 at 11:47
Memory manufacturers are forecasting that memory shortages will persist until the end of 2028, when the supply chain balance is expected to be restored and memory will once again reach its commodity status. However, according to the South Korean Chosun Biz newspaper, memory manufacturers are reconsidering their expansion plans to ensure they don't overinvest in building new capacity after demand cools down. Reportedly, Samsung's internal projections indicate that demand for DRAM in forms like HBM and regular DDR will peak in the next few quarters, up until the end of 2028. This is the point where Samsung expects industry-wide demand to cool down and balance to be restored.

When the AI expansion began, memory manufacturers received massive orders from hyperscalers around the world, which booked DRAM supply months in advance. With no end in sight, SK Hynix, Micron, and Samsung started ordering more lithography tools from ASML to expand their wafer manufacturing capacity and meet customer demand for HBM, DDR, and GDDR memory. ASML estimates that, based on current order information and demand, it will deliver 56 Low-NA EUV scanners in 2027, including seven units for Samsung and as many as 20 for SK Hynix, specifically for memory and storage. According to South Korean media, SK Hynix plans to install those 20 Low-NA EUV units over the next two years, all earmarked for HBM memory and advanced storage solutions. However, because demand is expected to cool once the current expansion plans are fulfilled, memory makers are holding off on pushing fab capacity beyond what was initially planned.

Windows 11 March Update Blocks C:\ Drive Access for Some Users

16 March 2026 at 11:05
Microsoft has experienced a few months of relatively smooth Windows 11 updates, but the March update has reportedly caused inaccessible C: drives, widespread system BSODs, and freezing. According to multiple reports from Reddit and other online communities, including Microsoft's Q&A forum, the Windows 11 March 2026 KB5079473 update is causing headaches for users. They are encountering issues such as being unable to access their C: drives, system freezing, random reboots, and an overall laggy user experience. While the official March update page shows no known widespread issues, users report that the update is indeed causing trouble in everyday use and increasing the risk of instability. Some Samsung Galaxy Book owners have reported that this update completely blocked access to the C: drive on their devices.

Microsoft has acknowledged the "C:\ is not accessible - Access denied" issue and is working on a solution. The company reportedly attributes the inaccessible C: drive to the Samsung Galaxy Connect application, saying the problem didn't stem solely from the Windows 11 March update but coincided with other issues that had accumulated earlier. The Windows maker says the problem affects Samsung Galaxy Book 4 and Samsung Desktop models running Windows 11 versions 24H2 and 25H2, including models with codenames NP750XGJ, NP750XGL, NP754XGJ, NP754XFG, NP754XGK, DM500SGA, DM500TDA, DM500TGA, and DM501SGA. For now, it is unclear whether all affected users with inaccessible C: drives are Samsung Galaxy Book owners; some may be running laptops or desktop PCs from other OEMs, so we are still waiting for more information. Microsoft maintains that only Samsung users are affected, and both companies are working on a fix.

AMD "RDNA 5" to Heavily Boost Shader Performance in Games with New Dual-Issue Pipeline

16 March 2026 at 11:00
AMD is refining its RDNA 5 architecture, which is likely in the final stages of design. Thanks to a submission to the LLVM compiler, we are learning that AMD's upcoming RDNA 5/UDNA architecture features changes that will enhance compute utilization, resulting in significantly higher game shader performance. The codename for RDNA 5 is GFX1310, and it implements a full dual-issue VALU pipeline for Wave32. This allows vector operations (VOPs) to be issued simultaneously to the GPU's X and Y arithmetic logic unit (ALU) lanes. The new design expands the range of fused multiply-add (FMA) and other VOP instructions eligible for dual-issue and relaxes some register constraints, enabling compilers and shader code to perform more intensive floating-point work in Wave32 mode. As a result, FP32 compute utilization on RDNA 5 can be much higher than before, greatly benefiting FP32-heavy applications, which could translate into a significant performance uplift for gamers.

AMD originally implemented dual-issue VALU in the RDNA 3 architecture with the Radeon RX 7000 series graphics cards. However, the pipeline was not fully functional: the implementation supported only a limited subset of VOP instructions, excluded several important FMA variants, required Wave32 mode and strict register-bank separation, and was often bypassed by compilers and not fully exposed to drivers. As a result, many shaders could not exploit X/Y pairing, and measured FP32 throughput often fell well below the hardware's theoretical peak. This hurt performance in FP32-heavy applications, particularly modern games, which process vertex and pixel shaders primarily in FP32. With the new RDNA 5 design, gaming performance should align more closely with theoretical compute performance.
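To see why a fully working dual-issue pipeline matters, consider how theoretical FP32 throughput is computed: each shader retires one FMA (two FLOPs) per cycle per issue slot, so pairing VOPs across the X and Y lanes doubles the peak. A sketch with hypothetical shader counts and clocks (not actual RDNA 5 figures):

```python
def peak_fp32_tflops(shaders, boost_ghz, fma_flops=2, issue_width=1):
    """Theoretical FP32 throughput in TFLOPS. Each shader retires one
    FMA (2 FLOPs) per cycle per issue slot; a working dual-issue
    pipeline (issue_width=2) sends two VALU ops to the X and Y lanes
    at once."""
    return shaders * fma_flops * issue_width * boost_ghz / 1000

# Hypothetical 4,096-shader GPU at 2.5 GHz
single = peak_fp32_tflops(4096, 2.5, issue_width=1)  # 20.48 TFLOPS
dual = peak_fp32_tflops(4096, 2.5, issue_width=2)    # 40.96 TFLOPS
assert dual == 2 * single
```

The catch the article describes is utilization: RDNA 3 advertised the doubled peak but rarely sustained it, while RDNA 5's broader instruction eligibility should bring real shader throughput closer to the `issue_width=2` number.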
Before yesterday

Windows 11 Insiders Get Support for >1,000 Hz Monitor Refresh Rate

13 March 2026 at 22:19
Microsoft has released its latest Windows 11 builds, 26100.8106 and 26200.8106 (KB5079387), to Insiders in the Release Preview Channel. This update notably introduces support for monitor refresh rates exceeding 1,000 Hz, marking the first time Windows 11 officially supports four-digit refresh rates. Over the years, gaming monitors have rapidly progressed from double-digit to triple-digit refresh rates. However, reaching four-digit levels so quickly was unexpected, as the general gaming audience has been limited by current technology's ability to output such high frame rates. With modern GPUs capable of running hundreds of frames per second in FPS titles, it's logical for Windows to prepare for future advancements.

For instance, Philips and AOC have introduced the first gaming monitors capable of supporting 500 Hz at 1440p and 1,000 Hz at 720p. The Philips Evnia 27M2N5500XD and AOC AGON Pro AGP277QK monitors were showcased in China on December 6 and appear to be built around the same 27-inch panel. Both models are QHD 500 Hz displays that can switch to a lower-resolution 720p mode, doubling the refresh rate to 1,000 Hz. Generally, this resolution is required to achieve frame rates above 500 FPS, but modern GPUs like the NVIDIA GeForce RTX 5090 can easily reach these numbers in games like Counter-Strike 2. At 1080p, we observed about 726 FPS, indicating that 720p gaming can fully utilize the 1,000 Hz display.
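The frame-time arithmetic shows how close modern GPUs already are to feeding such panels; a 1,000 Hz display refreshes every millisecond:

```python
def frame_time_ms(fps):
    """Frame time in milliseconds for a given frame rate."""
    return 1000 / fps

assert frame_time_ms(1000) == 1.0            # 1,000 Hz panel: 1 ms per refresh
assert frame_time_ms(500) == 2.0             # 500 Hz QHD mode: 2 ms
assert round(frame_time_ms(726), 2) == 1.38  # ~726 FPS observed at 1080p
```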

CPU-Z v2.19 Update Brings Preliminary Intel "Wildcat Lake" Support

13 March 2026 at 21:44
The popular CPU-Z utility for hardware monitoring and diagnostics has received an update in the latest version 2.19 release, bringing preliminary support for Intel's upcoming "Wildcat Lake" Core 300 series processors. This indicates that Intel's Core 300 series is on the horizon, and Intel might bring the same technology powering the Core Ultra 300 series "Panther Lake" processors to the embedded/edge sector. This lineup serves 12-25 W TDP applications with six processor cores, consisting of two "Cougar Cove" P-cores and four low-power efficiency cores, with no "Darkmont" E-cores (2P + 0E + 4LPE). This CPU configuration is paired with two Xe3 GPU cores, which clearly shows that the product is aimed at lower-tier configurations. Additionally, the tool now includes support for AMD's Ryzen AI 7/PRO 450G/E, AI 5/PRO 440G/E & 435G/E, and AI 9 HX 470 processors. Finally, the tool can now read quad-rank CUDIMM (CQDIMM) memory.

Chinese Laptop Maker Chuwi Advertised AMD Ryzen 5 7430U SoC, but Shipped the Older Ryzen 5 5500U

13 March 2026 at 01:27
Imagine buying a laptop, thinking you're getting a model with your desired CPU specifications, only to find a completely different chip inside, cleverly concealed so you wouldn't notice without further investigation. According to an investigation by Notebookcheck, the Chinese electronics maker Chuwi is engaging in specification fraud: users have discovered a different CPU SKU than what was advertised. In a review of the Chuwi CoreBook X and CoreBook Plus, Notebookcheck found that Chuwi had listed these laptops with an AMD Ryzen 5 7430U processor, but they actually ship with an AMD Ryzen 5 5500U. Chuwi is actively advertising these models on its website, on the laptop box, with laptop stickers, and even with BIOS modifications to make it seem as if they feature the newer Ryzen 5 7430U SoC with the "Zen 3" CPU microarchitecture and Radeon "Vega 6" graphics. In reality, the company is shipping a processor that is a few generations old, with "Zen 2" CPU cores and Radeon Graphics 448SP.

Notebookcheck discovered this during a review of the Chuwi CoreBook X. The unit's performance was rather lackluster, prompting further investigation. Initially, the reviewers thought single-channel RAM was causing the subpar performance. To determine the true cause, they opened the unit and found the older Ryzen 5 5500U SoC with its corresponding part number, 100-000000375. Other differences were also noted in the SoC specifications, such as the L3 cache. Chuwi even went a step further by using a modified BIOS to report the unit as an AMD Ryzen 5 7430U, leading software diagnostic tools to display the advertised specification while the lower-tier SoC was actually inside. This didn't occur just once with the Chuwi CoreBook X: a separate CoreBook Plus was also confirmed by Notebookcheck to carry the older SoC. Instead of the newer AMD Ryzen 5 7430U, both laptop models actually come with the AMD Ryzen 5 5500U, as verified through performance testing and the OPN, which corresponds to a specific AMD chip.
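Because the modified BIOS fools any tool that trusts firmware-reported strings, a check is only as good as the source of the brand string. As a toy illustration of the comparison itself (the helper is ours; a real verification should rely on a hardware-level reading such as the OPN on the package):

```python
def matches_advertised(advertised_model, reported_brand_string):
    """Case-insensitive check that a reported CPU brand string
    actually contains the advertised model number."""
    return advertised_model.lower() in reported_brand_string.lower()

# Advertised model vs. what was actually found inside
assert not matches_advertised("Ryzen 5 7430U",
                              "AMD Ryzen 5 5500U with Radeon Graphics")
assert matches_advertised("Ryzen 5 5500U",
                          "AMD Ryzen 5 5500U with Radeon Graphics")
```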

(PR) Optical Scale-up Consortium Established to Create an Open Specification for AI Infrastructure

12 March 2026 at 23:40
The Optical Compute Interconnect (OCI) Multi-Source Agreement (MSA) group today announced its formation, led by founding members AMD, Broadcom, Meta, Microsoft, NVIDIA and OpenAI. This industry consortium marks a pivotal shift toward a hyperscaler-driven open ecosystem to enable the development of a multi-vendor supply chain for optical scale-up interconnects. By aligning on an open specification, the OCI MSA members are promoting a robust optical ecosystem which will ensure that the future of AI interconnects is built with a flexible, multi-vendor foundation to meet the optical interconnect needs of modern AI infrastructure.

The Physics and Power Mandate
As large language models (LLMs) advance toward superintelligence, traditional copper-based connectivity is reaching its physical-reach limits, which constrain AI cluster scale-up domain architectures. OCI will enable migration from copper-based to optical-based scale-up architectures, alleviating copper interconnect bottlenecks.

Chinese Lisuan LX 7G106 GPU Arrives June 18 with Support for Major AAA Games

12 March 2026 at 23:24
Last year, Lisuan Technology introduced its Lisuan LX 7G106 graphics card, one of the most promising GPU technologies for gamers emerging from China. Today, during the AWE 2026 stream on the Chinese BiliBili platform, the company announced that its GPUs will ship on June 18, with pre-orders starting on March 17. The 7G106 GPU is powered by a monolithic die manufactured at TSMC's facilities on the older 6 nm DUV node, with Lisuan approved to tap TSMC's mature N6 capacity. Designed for gaming, the GPU accelerates games and 3D applications with broad API support, including DirectX 12, Vulkan 1.3, and OpenGL 4.6. While it supports DirectX 12, it does not include ray tracing, meaning there is no DirectX 12 Ultimate support. It will still run DirectX 12 games, however, and Lisuan notes that these include popular Steam titles such as Cyberpunk 2077, Black Myth: Wukong, and Resident Evil 4 Remake.

Under the hood, the 7G106 features a SIMD engine capable of running calculations in FP32 and the newly added INT8 data type. The GPU has a maximum throughput of 24 TeraFLOPS in FP32, a respectable compute figure. The primary compute language is OpenCL 3.0. Internally, the SIMD engine is supported by a large raster graphics pipeline, with up to 192 TMUs and 96 ROPs on the silicon. In terms of memory, it offers 12 GB of GDDR6 across a 192-bit wide memory bus, although the company has not yet finalized the exact memory frequencies. The 7G106 is equipped with a modern video acceleration engine capable of hardware-accelerated AV1 and HEVC decoding at resolutions up to 8K at 60 FPS. It also supports hardware-accelerated AV1 encoding at 4K at 30 FPS and HEVC encoding at 8K at 30 FPS. For monitor connectivity, it includes four DisplayPort 1.4 outputs with support for DSC 1.2b. The GPU does not feature HDMI outputs, likely due to the licensing costs the HDMI licensing body charges for each installed HDMI port.

Unity Officially Gets Steam, SteamOS, and Linux Support

12 March 2026 at 20:55
The Unity game engine is finally getting native integration and support across more gaming platforms, according to James Stone. What we are getting now is the first true native port, rather than the translation layers we've relied on until now. Game developers using the Unity engine have long shipped Unity games on Steam, but Steam was never an official Unity platform, and developers used Steamworks to make it happen. That's now a thing of the past, as Unity officially supports one of the biggest gaming platforms in existence. Additionally, we are seeing ports to the Steam Deck and Steam Machine, which run the SteamOS operating system; these previously relied on the Wine and Proton translation layers to translate Unity's API calls and make Unity games work.

Now Unity is enhancing its Linux integration further to create native runtimes and reduce reliance on the translation layers that have been doing the heavy lifting. This is a positive sign for the growing recognition of the Linux gaming world, which has been steadily rising as gamers encounter Windows-related issues. Adding native integration with the Valve ecosystem is helping Unity extend its influence across the gaming community, alongside Valve hardware such as the Steam Deck, Steam Machine, and the Steam Controller. For more details and updates from Unity at GDC 2026, check out the video below.

Meta Unveils Four MTIA Chips Focused on High-Performance Inference

12 March 2026 at 20:01
Meta has laid out an aggressive, inference-first roadmap for its in-house accelerators, announcing four Meta Training and Inference Accelerator (MTIA) generations developed with Broadcom and due to be integrated into its data centers over the next two years. The family spans MTIA 300, 400, 450, and 500, with early units already running ranking and recommendation workloads and later designs optimized for real-time model serving. Since Meta runs some of the largest social platforms on the web, a fast inference accelerator is essential to make social media browsing and recommendation algorithms feel instant. Rather than pursuing raw peak arithmetic alone, Meta emphasizes memory throughput and inference efficiency. According to the specification table, HBM bandwidth and capacity rise substantially across the series while compute grows more linearly. Meta's bet is that increasing on-package bandwidth and capacity can cut latency and power costs for production inference.

The MTIA chips also include hardware support for attention primitives and mixture-of-experts layers, along with low-precision formats tailored to inference to reduce conversion overhead. Software compatibility was a stated priority: Meta says the stack runs natively on common frameworks, so existing production models can be deployed on both GPUs and MTIA without major rewrites, which should ease adoption. Multiple MTIA generations are built to share the same chassis, rack, and networking, allowing upgrades by swapping modules rather than refitting data center infrastructure. That modularity helps explain Meta's fast release cadence compared with the industry norm, considering that Meta's data centers span millions of chips. Already running at kilowatt power budgets and petaFLOPS of compute, MTIA accelerators compete with industry-leading solutions from NVIDIA, AMD, and other hyperscalers.

Microsoft DirectStorage 1.4 Brings Quicker Load Times and Smoother Asset Streaming

12 March 2026 at 18:39
Microsoft has released its latest DirectStorage 1.4 update, focusing on the technicalities behind game asset streaming. Today, the company introduced new compression and decompression technology called Zstandard (Zstd), which should improve game loading times and deliver much faster asset streaming than what was previously used. Microsoft originally developed DirectStorage in DirectX 12 to take advantage of fast NVMe SSDs. Powerful consumer GPUs need game assets to load incredibly fast, and DirectStorage development has cut out the middleman, the CPU, in the process of streaming these assets from storage to the GPU. Traditionally, this work went through the CPU, causing delays and latency across the stack.

To push Zstd even further, Microsoft has developed the Game Asset Conditioning Library (GACL), a companion tool that developers run on their assets before a game ships. The idea is that instead of simply compressing textures, GACL first conditions them to be more compressible, allowing Zstd to squeeze files down by up to 50% more than it otherwise could. It does this through a few different techniques. Shuffling rearranges data inside texture files so repeating patterns cluster together, giving Zstd more to latch onto. Block-Level Entropy Reduction (BLER) and Component-Level Entropy Reduction (CLER) then reduce texture complexity at the block and color-channel level, using perceptual quality as a guide so any changes remain invisible to the player. CLER takes this a step further by incorporating machine learning to identify exactly where that reduction can be applied without anyone noticing.
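The shuffling idea is easy to demonstrate: rearranging interleaved texture bytes into per-channel planes clusters similar bytes together, which a general-purpose compressor exploits. The sketch below uses Python's stdlib zlib as a stand-in for Zstd (which isn't in the stdlib) and toy RGBA data of our own; it illustrates the principle, not GACL's actual transforms:

```python
import random
import zlib

def planar_shuffle(rgba, n_pixels):
    """Rearrange interleaved RGBA bytes (RGBARGBA...) into planar
    form (RR..GG..BB..AA..) so similar bytes cluster together,
    giving the compressor longer matches and tighter symbol stats."""
    return bytes(rgba[p * 4 + c] for c in range(4) for p in range(n_pixels))

random.seed(0)
n = 4096
# Toy texture: near-constant red, noisy green, empty blue, opaque alpha
pixels = bytearray()
for _ in range(n):
    pixels += bytes([random.randint(100, 102),  # R: low entropy
                     random.randint(0, 255),    # G: noise
                     0,                         # B: constant
                     255])                      # A: constant

raw = zlib.compress(bytes(pixels), 9)
shuffled = zlib.compress(planar_shuffle(pixels, n), 9)
assert len(shuffled) < len(raw)  # clustered bytes compress better
```

How much shuffling helps depends entirely on the data; GACL's entropy-reduction passes (BLER and CLER) go further by perceptually simplifying the texture content itself before compression.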

AMD Prepares "FSR Diamond" Update for Xbox Project Helix

12 March 2026 at 00:50
AMD is reportedly developing a next-generation FSR update, codenamed "FSR Diamond," for the upcoming Xbox project "Helix" gaming console. With the Project Helix console expected to launch in 2027, the details of "FSR Diamond" remain a mystery. It's unclear what AMD aims to achieve with this AI-powered rendering technology, but it likely builds on previous advancements like Radiance Caching and Ray Regeneration, adding a new dimension to the company's graphics capabilities. We might see an AMD equivalent of the multi-frame generation technology found in NVIDIA's GeForce RTX 50-series GPUs and Intel's XeSS 3.0. Since Project Helix is anticipated to feature RDNA 5 / UDNA graphics IP, this feature could be exclusive to that generation, as AMD tends to tie its latest technologies to the current RDNA IP.

AMD already differentiates its latest FSR "Redstone" suite of technologies, with features like Ray Regeneration and Radiance Caching exclusive to RDNA 4 hardware in the Radeon RX 9000 series of GPUs. Other basic technologies, such as upscaling and frame generation, are supported on older RDNA 3/2/1 generations but use an FSR 3.1 fallback, with no FSR 4 support currently available. However, since INT8-based FSR 4 exists, it may only be a matter of time before the company extends this capability to older GPUs, though the expected performance might not be optimal. For multi-frame generation and potentially dynamic multi-frame generation, "FSR Diamond" would need specialized hardware. Even NVIDIA, with its MFG 6x mode and Dynamic MFG, keeps those features exclusive to the GeForce RTX 50-Series "Blackwell," which uses hardware flip-metering available only on the newest GPU generation. Similarly, RDNA 5 / UDNA could incorporate these hardware components as well.
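The tiering described above amounts to a capability-based fallback. A hypothetical selection function (the mapping is distilled from the article; this is not an AMD API):

```python
def select_fsr_path(rdna_generation):
    """Pick the best FSR feature set a given RDNA generation supports,
    per the tiering described above (illustrative only)."""
    if rdna_generation >= 4:
        # Redstone features require RDNA 4 (Radeon RX 9000 series)
        return "FSR 4 + Redstone (Ray Regeneration, Radiance Caching)"
    if rdna_generation >= 1:
        # Older generations get the FSR 3.1 fallback
        return "FSR 3.1 fallback (upscaling, frame generation)"
    return "unsupported"

assert select_fsr_path(4).startswith("FSR 4")
assert select_fsr_path(2).startswith("FSR 3.1")
```

If AMD does ship the INT8-based FSR 4 path on older GPUs, the middle branch would change, which is exactly the kind of runtime capability check game engines already perform.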

NVIDIA Confirms: No Missing ROPs on RTX PRO 5000 "Blackwell" GPU

11 March 2026 at 22:35
NVIDIA has officially confirmed the ROP count for its Pro-Viz RTX PRO 5000 "Blackwell" graphics card, listing it at 160 ROPs. Reddit user "xmikjee" posted on the NVIDIA subreddit that his recently purchased RTX PRO 5000 "Blackwell" card with 48 GB has 160 ROPs instead of the 176 that our database and several online sources initially suggested. NVIDIA has confirmed to TechPowerUp that the 176 figure is an error and that the card officially comes with 160 ROPs, as detected by our GPU-Z software. GPU-Z reads the ROP count as soon as the driver is installed, meaning the "live data" shown is read from NVIDIA's drivers, which GPU-Z probes to report the ROP count. Coincidentally, another user in the thread mentioned that his card also runs with 160 ROPs as detected by GPU-Z.

To understand why the ROP count on the RTX PRO 5000 "Blackwell" matters, it's helpful to know how NVIDIA structures its GPUs. The chip is built in layers, starting with Graphics Processing Clusters (GPCs) at the top, breaking down into Texture Processing Clusters (TPCs), and then into Streaming Multiprocessors (SMs), which are the cores doing the actual work. ROPs follow this same hierarchy, with each GPC contributing 16 ROPs. On a fully enabled "Blackwell" GB202, that totals 192 ROPs across 12 GPCs. The RTX PRO 5000 "Blackwell" takes an interesting path: with 14,080 CUDA cores, just under seven GPCs' worth of compute, you might expect a leaner configuration. However, NVIDIA kept 10 GPCs active on the card, disabling some SMs within them instead. This results in a ROP count of 160, which is notably strong for a professional card at this tier. It suggests that NVIDIA was quite generous with the silicon it left enabled and shows the segmentation the company is doing within the GB202 SKU. For a professional card that NVIDIA lists at $5,099 on its Amazon store, this is a reasonable compromise.
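The GPC arithmetic is easy to verify: with 16 ROPs per GPC and 2,048 CUDA cores per fully enabled GB202 GPC, the numbers line up as follows:

```python
ROPS_PER_GPC = 16
CUDA_PER_FULL_GPC = 2048  # GB202: 12 GPCs x 2,048 = 24,576 CUDA cores

def rop_count(active_gpcs):
    """Each active GPC contributes 16 ROPs, regardless of how many
    SMs inside it are disabled."""
    return active_gpcs * ROPS_PER_GPC

assert rop_count(12) == 192  # fully enabled GB202
assert rop_count(10) == 160  # RTX PRO 5000 Blackwell: 10 active GPCs
# 14,080 CUDA cores is just under 7 GPCs' worth of fully enabled compute
assert 14080 / CUDA_PER_FULL_GPC == 6.875
```

This is why the ROP count depends on how many GPCs stay powered, not on the CUDA core total: disabling SMs inside an active GPC leaves its 16 ROPs intact.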

Intel "Nova Lake-S" Appears with B960 Chipset and Support for DDR5-8000

10 March 2026 at 21:51
Intel's upcoming "Nova Lake-S" has been spotted in the wild for the first time. During the Embedded World 2026 event, German outlet ComputerBase spotted an Intel Core Ultra 400 series "Nova Lake-S" mini-PC featuring Intel's upcoming B960 chipset and support for DDR5 memory running at 8,000 MT/s. This suggests that Intel is upgrading the integrated memory controller on the "Nova Lake" platform to support these DDR5 speeds before any XMP or factory-overclocked profiles come into play. The support for higher-speed memory indicates that Intel's current memory controller, which reportedly achieves DDR5-7200 in the upcoming "Arrow Lake Refresh," will receive an upgrade alongside the new core IP and configuration.

Speaking of configurations, ComputerBase confirmed that the 52-core top SKU of "Nova Lake" will have a TDP of 175 W, while other configurations with a 65 W TDP will also be available. This is a significant boost to the base TDP rating, as the current flagship "Arrow Lake" Core Ultra 9 285K carries a base TDP of 125 W, a whole 50 W less. As the top configuration is a 52-core model, boost frequencies will push power usage much higher under heavy load. Graphics output is powered by the Xe3P GPU IP, as previously rumored, confirming that Intel's next-generation graphics is on board. For AI capability, "Nova Lake" will deliver more than 100 TOPS using the 8-bit INT8 data type, thanks to the onboard NPU and the powerful Xe3P GPU.

SK hynix Unveils 1c LPDDR6 Memory With 16 Gb Capacity

10 March 2026 at 15:01
SK hynix has successfully developed new LPDDR6 memory with a 16 Gb capacity on the sixth-generation 10 nm node, known as 1c. The South Korean giant has confirmed that mass production of this memory is scheduled for the first half of the year, with the product reaching customers in the second half. Additionally, SK hynix claims that the speed of these LPDDR6 modules exceeds 10.7 Gbps, suggesting that the company is preparing overclocked versions that surpass JEDEC's initial speed specification for this LPDDR6 generation. If the International Solid-State Circuits Conference (ISSCC) 2026 show in San Francisco was any indication, SK hynix is preparing modules that will run at speeds of up to 14.4 Gbps, delivering a significant throughput boost over previous-generation LPDDR5X memory. The company claims a roughly 33% improvement over LPDDR5X, which topped out at 10.7 Gbps, in line with the 14.4 Gbps figure for LPDDR6.
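The claimed uplift checks out against the headline speeds:

```python
lpddr5x_gbps = 10.7  # previous generation's top per-pin speed
lpddr6_gbps = 14.4   # speed SK hynix previewed at ISSCC 2026

uplift = lpddr6_gbps / lpddr5x_gbps - 1
# ~34.6%, consistent with the roughly one-third improvement claimed
assert abs(uplift - 0.346) < 0.005
```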

SK hynix also expects significant power-efficiency gains exceeding 20%, thanks to new technologies introduced with LPDDR6. This generation of low-power DDR memory uses a sub-channel structure that lets memory channels operate selectively, engaging only the data paths that are actually needed. Additionally, LPDDR6 incorporates Dynamic Voltage and Frequency Scaling (DVFS), which optimizes power consumption and performance by dynamically adjusting the voltage/frequency curve to the workload. SK hynix notes that in demanding applications like gaming, DVFS will scale the frequency up to achieve maximum bandwidth, while lighter applications will run at lower frequencies to save power.

NVIDIA GeForce RTX 5050 9 GB Variant Comes with 130 W TDP

10 March 2026 at 13:20
NVIDIA is reportedly preparing to update its GeForce RTX 5050 GPU with a new variant featuring 9 GB of GDDR7 memory. Well-known leaker @kopite7kimi on X claims that the upcoming card will maintain the same 130 W TDP and thermal envelope as the current GeForce RTX 5050, which carries 8 GB of GDDR6 memory. The current RTX 5050 uses 8 GB of 20 Gbps GDDR6 on a 128-bit bus, providing 320 GB/s of bandwidth. The new model is expected to use three 3 GB GDDR7 modules, for a total of 9 GB across a 96-bit memory bus. While the narrower bus reduces the interface width, the switch to 28 Gbps GDDR7 would raise total memory bandwidth to 336 GB/s, a roughly 5% improvement, alongside a 12.5% boost in VRAM capacity. The leak also claims that NVIDIA is basing the card on the GB206 die, whereas the older RTX 5050 8 GB with GDDR6 used the GB207 die. However, the core count remains at 2,560 CUDA cores, suggesting that lower-binned GB206 dies from the RTX 5060 and other mid-range SKUs are being repurposed for the new RTX 5050.
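The bandwidth arithmetic above can be reproduced with a short sketch: bandwidth in GB/s is simply the per-pin rate in Gbps multiplied by the bus width in bits, divided by 8.

```python
# Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8 -> GB/s.
def vram_bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

gddr6 = vram_bandwidth_gbs(20, 128)  # current RTX 5050: 320.0 GB/s
gddr7 = vram_bandwidth_gbs(28, 96)   # rumored 9 GB variant: 336.0 GB/s
print(gddr6, gddr7, f"{gddr7 / gddr6 - 1:.0%}")  # 320.0 336.0 5%
```

The faster per-pin rate more than compensates for the 128-bit to 96-bit bus reduction.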

NVIDIA's switch to GDDR7 likely helps the company manage supply-chain procurement, as memory modules are in short supply. Instead of four 2 GB GDDR6 modules, NVIDIA would use three 3 GB GDDR7 modules, reducing the number of chips needed per GPU. Memory makers like Samsung, Micron, and SK hynix are focusing production on the newer 3 GB GDDR7 modules, leaving GDDR6 in much shorter supply and forcing GPU makers to adapt. As a result, procuring three 3 GB GDDR7 modules is now reportedly easier than securing a sufficient supply of GDDR6 for each RTX 5050 GPU. The new card is expected to arrive around Computex 2026.

Intel Releases Official XeSS 3.0 Software Development Kit

9 March 2026 at 21:44
Intel has launched its official XeSS 3.0 software development kit (SDK), which allows game developers to incorporate the latest binaries into their games and integrate XeSS 3.0 into game engines. Interestingly, Intel has released this version as pre-compiled binaries rather than the open-source release the company promised long ago. That promise has remained unfulfilled for four years, with each XeSS release being closed-source, available on GitHub only under the Intel Simplified Software License (October 2022 revision). The binaries are provided as DLL files for Windows, meaning Linux users cannot run this SDK without a translation layer. Users wanting to upgrade games from older XeSS 2.x versions simply need to replace the libxess.dll, libxell.dll, and libxess_fg.dll files with those from the newest XeSS 3.0 ZIP archive.
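For illustration only, the drop-in DLL swap described above could be scripted as follows; the folder layout is an assumption, and the sketch deliberately replaces only DLLs the game already ships:

```python
import shutil
from pathlib import Path

# XeSS runtime DLLs named in the article.
XESS_DLLS = ("libxess.dll", "libxell.dll", "libxess_fg.dll")

def update_xess_dlls(sdk_dir: Path, game_dir: Path) -> list[str]:
    """Copy XeSS 3.0 DLLs from the extracted SDK over a game's 2.x copies.

    Only DLLs present in both locations are replaced, so games that do not
    bundle the XeSS runtime are left untouched. Returns the updated names.
    """
    updated = []
    for name in XESS_DLLS:
        src, dst = sdk_dir / name, game_dir / name
        if src.is_file() and dst.is_file():
            shutil.copy2(src, dst)  # copy2 preserves file timestamps
            updated.append(name)
    return updated
```

Pointing `sdk_dir` at the extracted XeSS 3.0 ZIP and `game_dir` at a game's install folder would then upgrade only titles that already bundle the 2.x runtime.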

Intel promotes XeSS 3.0 with multi-frame generation (MFG) as its headline feature. This version inserts up to three generated frames between two rendered frames, yielding up to a fourfold frame-rate increase, similar to NVIDIA's DLSS MFG technology; Intel is thus joining the AI-generated frame insertion trend, which continues to expand. Interestingly, Intel also added support for external memory heaps, meaning the XeSS SDK can now utilize GPU memory allocated by the game engine itself, allowing XeSS and the engine to operate on the same VRAM blocks instead of each reserving separate ones. This helps developers avoid duplicate buffers and fragmentation, gives them direct control over allocation and residency, and makes integrating XeSS into an existing render pipeline cleaner and more efficient.

(PR) Samsung Showcases Glasses-Free 3D and HDR10+ Gaming at GDC 2026

9 March 2026 at 19:36
Samsung Electronics America today shared its plan to expand support for glasses-free 3D gameplay on the Samsung Odyssey 3D gaming monitor at GDC Festival of Gaming 2026 in San Francisco. Samsung will spotlight Hell is Us and Cronos: The New Dawn as part of its expanding 3D gaming ecosystem, demonstrating how leading titles are embracing immersive displays without the need for special glasses.

"The Odyssey 3D is designed for gamers who want to experience their hobby in a way that feels like they're completely embedded in the action," said Kevin Lee, Executive Vice President of the Visual Display (VD) Business at Samsung Electronics. "Through partnerships with leading gaming studios, we are committed to creating an ecosystem of top-tier titles, making great games extraordinary."

NVIDIA Prepares GeForce ON Community Update for GDC 2026

9 March 2026 at 19:20
At this year's Game Developers Conference (GDC) in San Francisco, NVIDIA has prepared a special GeForce ON community update scheduled for tomorrow, where the company will address its gaming audience. While there are no official details on what the update will cover, NVIDIA might preview new technologies in development for gamers or new implementations in games and game engines. Usually, this centers on technologies like Deep Learning Super Sampling, ray tracing, path tracing, GeForce NOW expansion, gaming monitors such as the Big Format Gaming Displays, or something entirely different. Interestingly, NVIDIA might also tease product launches for GTC 2026, which starts just a week from now, running March 16 through March 19. We are eagerly anticipating any formal announcement; the video below premieres tomorrow.

MaxSun Unveils Single-Slot Liquid-Cooled Arc Pro B60 Dual GPU and Fanless Model

9 March 2026 at 13:53
MaxSun has introduced two additional GPU variants to its Arc Pro B60 Dual GPU lineup. Each card features two of Intel's Arc Pro B60 GPUs, offering a total of 40 Xe2 cores and 48 GB of GDDR6 memory. The first variant is a passively cooled model designed for dual-slot configurations, ideal for server setups with high-airflow chassis that direct air across the card. This design allows any server case with high-RPM fans to accommodate multiple GPUs in parallel for AI inference and local development. However, the standout model is the liquid-cooled edition, measuring 300 x 110 x 20 mm and occupying a single slot. This compact profile is a distinct advantage of the water-cooled design, allowing several cards to be installed close together in dense enclosures without the need for strong airflow.

With the liquid-cooled edition, the card's TDP remains at 400 W, matching the passive and fan-cooled versions. However, overall performance is expected to be slightly higher thanks to the thermal headroom and stability that liquid cooling provides. The compact liquid-cooled unit was developed in collaboration with abee, and the company reports peak GPU temperatures of around 61°C when integrated into a cooling loop, although the loop's characteristics are not disclosed; presumably standard industry loop conditions were used. Two hose barbs and a 12V-2x6 power connector sit on the rear edge, along with a fan header for auxiliary cooling control. A single-slot bracket offers one DisplayPort 1.2 and one HDMI 2.1a connector per GPU, for four display outputs in total. In a workstation, a few of these cards can quickly multiply compute and memory capacity, creating a capable system even without a single high-end GPU.

Fujitsu Showcases "MONAKA" CPU Sample with 3.5D XDSiP Packaging

9 March 2026 at 12:07
During MWC, Fujitsu partnered with networking equipment maker 1FINITY to unveil the first silicon wafers and an engineering sample of its "MONAKA" CPU. Scheduled for release in 2027, the initial Fujitsu MONAKA CPU utilizes the Armv9-A architecture and a 3D chiplet layout that combines a core die with separate SRAM and I/O dies. A single chip features 144 cores, and two-socket configurations can scale up to 288 cores per node. The platform supports 12-channel DDR5, PCIe 6.0 with CXL 3.0, and Arm SVE2 for AI and HPC workloads. Fujitsu has chosen TSMC to manufacture this chip using the 2 nm node, paired with Broadcom's 3.5D eXtreme Dimension System-in-Package (XDSiP) packaging architecture. This packaging allows MONAKA to become a 144-core design featuring four 36-core chiplets. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer.

In the pictures below, we can see the silicon complex in its early sample packaging, which shows a large central I/O die, HBM memory surrounding the CPU, and the new packaging technology. Reportedly, this CPU has already reached a working state, with Broadcom shipping it to Fujitsu in late February this year. After initial testing and early performance validation, Fujitsu plans to ship processors to customers around summer, with mass shipments commencing in 2027. The company envisions this SoC as a powerhouse for AI inference, simulation, and large-scale data processing, and it will sell these systems to external customers, who showed great interest in Fujitsu's A64FX when the Fugaku supercomputer came online. Fugaku was the world's most powerful supercomputer back in 2020, achieving 415.53 PetaFLOPS of FP64 compute and an impressive HPL-AI score of 1.421 ExaFLOPS at lower FP16 precision. We therefore expect the new MONAKA CPU to deliver much greater speeds along with efficiency improvements.

Apple Prepares New MacBook Ultra with OLED Touchscreen and Dynamic Island

9 March 2026 at 10:50
Apple could name its upcoming laptop MacBook "Ultra" as the ultimate portable Mac from the Cupertino-based giant. According to Mark Gurman in the latest PowerOn newsletter, Apple is giving the MacBook a long-rumored "Ultra" overhaul, this time as an addition to the existing MacBook lineup, not as a product replacement. This model is expected to be Apple's first MacBook with an OLED touchscreen and a dynamic island instead of the traditional notch found on today's MacBook displays. It will sit above the new M5 Pro and M5 Max-powered MacBook Pro 14 and MacBook Pro 16, making Apple's new Mac lineup one of the most diverse in the company's history, especially with the recent launch of the MacBook Neo.

While March was reserved for the regular MacBook Pro devices, Apple is scheduling the MacBook Ultra overhaul for the end of this year, when we are also likely to see new chips powering the design. Pricing is expected to increase as well; historically, Apple has introduced a price premium whenever a new OLED panel arrived on a device, as when the iPad received its OLED upgrade. The MacBook Ultra devices are codenamed K114 and K116 and break with Apple's long-standing skepticism of touchscreen laptops. Apple's legendary co-founder Steve Jobs once called the touchscreen laptop experience "ergonomically terrible," but the competitive landscape has changed significantly over the past few years, and Apple is adapting slowly but surely. Interestingly, Gurman is not certain Apple will call it the MacBook Ultra; it could retain Pro naming with clear differentiators that place this model at the top of the MacBook line.

YMTC Launches PC550, Its First PCIe 5.0 M.2 NVMe Client SSD

9 March 2026 at 10:45
Chinese NAND flash maker Yangtze Memory Technologies Corp (YMTC) has introduced its first client M.2 NVMe PCIe 5.0 SSD, the PC550. With many PC OEMs struggling to source storage at reasonable prices, YMTC aims to help with its inaugural PCIe 5.0 NVMe model. The M.2 2242 and 2280 modules combine a PCIe Gen 5 x4 link with the NVMe 2.0 protocol and YMTC's X4-9070 3D NAND, built on Xtacking 4.0. YMTC designed the PC550 with a four-channel architecture, which the company claims reduces power consumption and thermal output compared to the more common eight-channel designs. The lineup includes 512 GB, 1 TB, and 2 TB capacities, with no pricing listed on the official website; consumers can contact YMTC or wait for distribution channels to begin offering these SSDs.

YMTC rates the largest variant at up to 10,500 MB/s sequential reads and up to 10,000 MB/s sequential writes. These speeds surpass most Gen 4 drives but fall short of top Gen 5 offerings that reach nearly 15,000 MB/s, suggesting a more modest SSD controller. Random performance scales with capacity: the 512 GB model is rated at up to 880,000 random read IOPS and 1,100,000 random write IOPS, with an endurance rating of 300 TBW, while the 1 TB and 2 TB models achieve approximately 1,300,000 random IOPS for both reads and writes, with endurance ratings of 600 TBW and 1,200 TBW, respectively. Idle power consumption is quoted at under 3 milliwatts and active consumption at under 6 watts, figures well suited to notebook use.
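To put those TBW numbers in context, a small sketch converts them into drive writes per day (DWPD); the 5-year window is an assumption for illustration, as YMTC's warranty terms are not stated here:

```python
# DWPD implied by a TBW rating over an assumed warranty window.
# The 5-year term is an assumption, not a published YMTC figure.
def dwpd(tbw: float, capacity_tb: float, years: float = 5) -> float:
    """Full drive writes per day: total writes / capacity / days in window."""
    return tbw / capacity_tb / (years * 365)

print(f"{dwpd(300, 0.512):.2f}")  # 512 GB at 300 TBW -> 0.32
print(f"{dwpd(1200, 2.0):.2f}")   # 2 TB at 1200 TBW  -> 0.33
```

Under that assumption, every capacity lands near one-third of a full drive write per day, a typical client-SSD endurance class.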

Samsung to Resurrect NVIDIA's GeForce RTX 3060 Using 8 nm Node

9 March 2026 at 10:36
NVIDIA's GeForce RTX 3060 is making a comeback, expected around mid-March, and for this NVIDIA will once again use Samsung's 8 nm DUV node. Korean media outlet Hankyung reports that Samsung is restarting 8 nm production to meet NVIDIA's needs. Samsung originally manufactured these GPUs back in 2021, when the entire NVIDIA "Ampere" architecture lineup was produced on the 8 nm DUV node, and we didn't anticipate its return after several years. The move is intriguing given that NVIDIA has since transitioned to TSMC, becoming its largest customer, for the 5 nm-class "Ada Lovelace" and latest "Blackwell" GPUs.

We still lack concrete information about which version of the RTX 3060 will be reintroduced: the original 12 GB model with a 192-bit memory bus or the newer 8 GB variant with a 128-bit bus. The decision to revive a two-generation-old architecture in 2026 is also puzzling, as it is unclear why NVIDIA chose the RTX 3060 over a newer model like the RTX 4060. Speculatively, it could be because the RTX 4060 is built on the same TSMC 4N (5 nm-class) node as the current RTX 5060, while the RTX 3060, along with the rest of the GeForce "Ampere" generation, uses Samsung's 8N (8 nm DUV) node, leaving the 5 nm capacity for "Blackwell" and its enterprise variants. Finally, it is worth noting that a finished GPU design is usually hard-linked to the node it was prepared for, so sticking with Samsung spares NVIDIA the upfront cost of porting the design to a different node.