(PR) Optical Scale-up Consortium Established to Create an Open Specification for AI Infrastructure

The Optical Compute Interconnect (OCI) Multi-Source Agreement (MSA) group today announced its formation, led by founding members AMD, Broadcom, Meta, Microsoft, NVIDIA and OpenAI. This industry consortium marks a pivotal shift toward a hyperscaler-driven open ecosystem, enabling the development of a multi-vendor supply chain for optical scale-up interconnects. By aligning on an open specification, the OCI MSA members are promoting a robust optical ecosystem that ensures the future of AI interconnects is built on a flexible, multi-vendor foundation able to meet the needs of modern AI infrastructure.

The Physics and Power Mandate
As large language models (LLMs) advance toward superintelligence, traditional copper-based connectivity is reaching its physical-reach limits, constraining AI cluster scale-up domain architectures. OCI will enable the migration from copper-based to optical-based scale-up architectures, alleviating copper interconnect bottlenecks.

Chinese Lisuan LX 7G106 GPU Arrives June 18 with Support for Major AAA Games

Last year, Lisuan Technology introduced its Lisuan LX 7G106 graphics card, one of the most promising GPU technologies for gamers to emerge from China. Today, during the AWE 2026 stream on the Chinese BiliBili platform, the company announced that its GPUs will ship on June 18, with pre-orders starting on March 17. The 7G106 GPU is powered by a monolithic die manufactured at TSMC's facilities using the older 6 nm DUV node; Lisuan has been approved to tap TSMC's mature N6 node capacity. Designed for gaming, this GPU accelerates games and 3D applications with broad API support, including DirectX 12, Vulkan 1.3, and OpenGL 4.6. While it supports DirectX 12, it lacks hardware ray tracing, so there is no DirectX 12 Ultimate support. However, it will run DirectX 12 games, and Lisuan notes that these include popular Steam titles such as Cyberpunk 2077, Black Myth: Wukong, and Resident Evil 4 Remake.

Underneath, the 7G106 features a SIMD engine capable of running calculations in FP32 alongside INT8 data types. The GPU has a maximum throughput of 24 TeraFLOPS in FP32, a respectable compute figure for a first-generation design. The primary compute language is OpenCL 3.0. Internally, the SIMD engine is backed by a large raster graphics pipeline, with up to 192 TMUs and 96 ROPs on the silicon. In terms of memory, it offers 12 GB of GDDR6 across a 192-bit wide memory bus, although the company has not yet finalized the exact memory frequencies. The 7G106 is equipped with a modern video acceleration engine, capable of hardware-accelerated AV1 and HEVC decoding at resolutions up to 8K at 60 FPS. It also supports hardware-accelerated AV1 encoding at 4K at 30 FPS and HEVC encoding at 8K at 30 FPS. For monitor connectivity, it includes four DisplayPort 1.4 outputs with support for DSC 1.2b. The GPU does not feature HDMI outputs, likely due to the licensing fees charged per installed HDMI port.
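As a rough sanity check on that headline figure, peak FP32 throughput follows the conventional formula of ALU count x 2 FLOPs per clock (one fused multiply-add) x clock speed. The shader count and clock in this sketch are hypothetical, since Lisuan has not published either; they merely show one combination that lands near 24 TFLOPS:

```python
# Generic peak-throughput formula; counts one FMA as 2 FLOPs per ALU per clock.
def peak_fp32_tflops(num_alus: int, clock_ghz: float) -> float:
    return num_alus * 2 * clock_ghz / 1000.0

# Hypothetical configuration: 6,144 FP32 ALUs at ~1.95 GHz -> ~24 TFLOPS.
print(f"{peak_fp32_tflops(6144, 1.95):.1f} TFLOPS")  # 24.0
```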

Unity Officially Gets Steam, SteamOS, and Linux Support

The Unity game engine is finally getting native integration and support across more gaming platforms, according to James Stone. What we are getting now is the first true native port, rather than the translation-layer workarounds we've been relying on until now. Game developers using the Unity engine have long shipped Unity games on Steam. However, Steam was never an official Unity platform, and developers had to use Steamworks to make it happen. That's now a thing of the past, as Unity officially supports one of the biggest gaming platforms in existence. Additionally, we are seeing ports to the Steam Deck and Steam Machine, which run the SteamOS operating system; these previously relied on the Wine and Proton translation layers to translate Unity's API calls and make Unity games work.

Now Unity is enhancing its Linux integration further to create native runtimes and reduce reliance on the translation layers that have been doing the heavy lifting. This is a positive sign for the growing recognition of the Linux gaming world, which has been steadily rising as gamers encounter Windows-related issues. Adding native integration with the Valve ecosystem is helping Unity extend its influence across the gaming community, alongside Valve hardware such as the Steam Deck, Steam Machine, and the Steam Controller. For more details and updates from Unity at GDC 2026, check out the video below.

Meta Unveils Four MTIA Chips Focused on High-Performance Inference

Meta has laid out an aggressive, inference-first roadmap for its in-house accelerators, announcing four Meta Training and Inference Accelerator (MTIA) generations developed with Broadcom and due to be integrated into its data centers over the next two years. The family spans MTIA 300, 400, 450, and 500, with early units already running ranking and recommendation workloads and later designs optimized for real-time model serving. Since Meta runs some of the largest social platforms on the web, a fast inference accelerator is essential to keep social media browsing and recommendation algorithms feeling instant. Rather than pursuing raw peak arithmetic alone, Meta emphasizes memory throughput and inference efficiency. According to the specification table, HBM bandwidth and capacity rise substantially across the series while compute grows more linearly. In other words, Meta is betting that increasing on-package bandwidth and capacity can cut latency and power costs for production inference.

The MTIA chips also include hardware support for attention primitives and mixture-of-experts layers, along with low-precision formats tailored to inference to reduce conversion overhead. Software compatibility was a stated priority. Meta says the stack runs natively on common frameworks, so existing production models can be deployed on both GPUs and MTIA without major rewrites, which should ease adoption. Multiple MTIA generations are built to share the same chassis, rack, and networking, allowing upgrades by swapping modules rather than refitting data center infrastructure. That modularity helps explain Meta's fast release cadence compared with the industry norm, considering that Meta's data centers span millions of chips. MTIA chips are already running at kilowatt power budgets and PetaFLOPS of compute, so MTIA accelerators are also competing with industry-leading solutions from NVIDIA, AMD, and other hyperscalers.

Microsoft DirectStorage 1.4 Brings Quicker Load Times and Smoother Asset Streaming

Microsoft has released its latest DirectStorage 1.4 update, focusing on the technicalities behind game asset streaming. Today, the company added support for the Zstandard (Zstd) compression format, which should improve game loading times and bring much faster game asset streaming than the formats previously used. Microsoft originally developed DirectStorage in DirectX 12 to take advantage of fast NVMe SSDs. Powerful consumer GPUs need game assets to load incredibly fast, and DirectStorage development has cut out the middleman, the CPU, in the process of streaming these assets from storage to the GPU. Traditionally, assets were routed through the CPU, introducing delays and latency across the stack.

To push Zstd even further, Microsoft has developed the Game Asset Conditioning Library (GACL), a companion tool that developers run on their assets before a game ships. The idea is that instead of simply compressing textures, GACL first conditions them to be more compressible, allowing Zstd to squeeze files down by up to 50% more than it otherwise could. It does this through a few different techniques. Shuffling rearranges data inside texture files so repeating patterns cluster together, giving Zstd more to latch onto. Block-Level Entropy Reduction (BLER) and Component-Level Entropy Reduction (CLER) then reduce texture complexity at the block and color-channel level, using perceptual quality as a guide so any changes remain invisible to the player. CLER takes this a step further by incorporating machine learning to identify exactly where that reduction can be applied without anyone noticing.
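GACL itself is a closed tool, but the shuffling step it describes is a well-known compression-preconditioning trick: grouping the corresponding bytes of fixed-size texels together so repeating patterns line up for the compressor. Here is a minimal sketch of the idea using the zstandard Python package and synthetic gradient data; the texel layout and the resulting gains are illustrative assumptions, not Microsoft's implementation:

```python
# Byte-plane shuffling before Zstd compression, GACL-style (illustrative only).
import numpy as np
import zstandard as zstd

# Synthetic "texture": a smooth 16-bit-per-channel gradient stored as
# little-endian uint16 RGBA texels (8 bytes per texel).
h, w = 256, 256
ramp = np.linspace(0, 65535, h * w, dtype=np.uint16)
texels = np.stack([ramp] * 4, axis=1)           # shape (h*w, 4), RGBA
raw = texels.astype("<u2").tobytes()

# Shuffle: view the data as rows of 8 bytes per texel, then transpose so
# byte 0 of every texel comes first, then byte 1, and so on.
as_bytes = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 8)
shuffled = as_bytes.T.tobytes()

cctx = zstd.ZstdCompressor(level=19)
print("plain   :", len(cctx.compress(raw)))
print("shuffled:", len(cctx.compress(shuffled)))  # often considerably smaller
```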

AMD Prepares "FSR Diamond" Update for Xbox Project Helix

AMD is reportedly developing a next-generation FSR update, codenamed "FSR Diamond," for the upcoming Xbox project "Helix" gaming console. With the Project Helix console expected to launch in 2027, the details of "FSR Diamond" remain a mystery. It's unclear what AMD aims to achieve with this AI-powered frame-generation technology, but it likely builds on previous advancements like Radiance Caching and Ray Regeneration, adding a new dimension to the company's graphics capabilities. We might see an AMD equivalent of the multi-frame generation technology found in NVIDIA's GeForce RTX 50-series GPUs and Intel's XeSS 3.0. Since Project Helix is anticipated to feature RDNA 5 / UDNA graphics IP, this feature could be exclusive to that generation, as AMD tends to tie its latest technologies to the current RDNA IP.

AMD already differentiates its latest FSR "Redstone" suite of technologies, with features like Ray Regeneration and Radiance Caching exclusive to RDNA 4 hardware in the Radeon RX 9000 series of GPUs. Other basic technologies, such as upscaling and frame generation, are supported on older RDNA 3/2/1 generations but use an FSR 3.1 fallback, with no FSR 4 support currently available. However, since INT8-based FSR 4 exists, it may only be a matter of time before the company extends this capability to older GPUs, though the expected performance might not be optimal. For multi-frame generation and potentially dynamic multi-frame generation, "FSR Diamond" would need specialized hardware. Even NVIDIA, with its MFG 6x mode and Dynamic MFG, keeps those features exclusive to the GeForce RTX 50-Series "Blackwell," which uses hardware flip-metering available only on the newest GPU generation. Similarly, RDNA 5 / UDNA could incorporate these hardware components as well.

NVIDIA Confirms: No Missing ROPs on RTX PRO 5000 "Blackwell" GPU

NVIDIA has officially confirmed the ROP count for its Pro-Viz RTX PRO 5000 "Blackwell" graphics card, listing it at 160 ROPs. Reddit user "xmikjee" posted on the NVIDIA subreddit that his recently purchased 48 GB RTX PRO 5000 "Blackwell" graphics card has 160 ROPs instead of the 176 that our database and several online sources initially suggested. NVIDIA has confirmed to TechPowerUp that the 176 figure was an error and that the card officially comes with 160 ROPs, as detected by our GPU-Z software. GPU-Z reads the ROP count from the GPU as soon as the driver is installed, meaning it reports "live data" probed from the NVIDIA driver. Incidentally, another user in the thread mentioned that his card also runs with 160 ROPs as detected by GPU-Z.

To understand why the ROP count on the RTX PRO 5000 "Blackwell" matters, it helps to know how NVIDIA structures its GPUs. The chip is built in layers, starting with Graphics Processing Clusters (GPCs) at the top, breaking down into Texture Processing Clusters (TPCs), and then into Streaming Multiprocessors (SMs), the cores doing the actual work. ROPs follow this same hierarchy, with each GPC contributing 16 ROPs. On a fully enabled "Blackwell" GB202, that totals 192 ROPs across 12 GPCs. The RTX PRO 5000 "Blackwell" takes an interesting path. With 14,080 CUDA cores, pointing to just under seven GPCs' worth of compute, you might expect a leaner configuration. However, NVIDIA kept 10 GPCs active on the card while disabling some SMs within them. This results in a ROP count of 160, which is notably strong for a professional card at this tier. It suggests that NVIDIA was quite generous with the silicon it left enabled and illustrates the segmentation the company applies within the GB202 SKU stack. Considering that NVIDIA lists the RTX PRO 5000 "Blackwell" professional graphics card at $5,099 on its Amazon store, this is a reasonable compromise.
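For reference, the arithmetic behind those figures can be reproduced in a few lines (the per-GPC CUDA core count follows GB202's known full configuration of 12 GPCs, each with 8 TPCs of 2 SMs and 128 cores per SM):

```python
ROPS_PER_GPC = 16
CORES_PER_GPC = 8 * 2 * 128    # 8 TPCs x 2 SMs x 128 CUDA cores = 2,048

print(12 * ROPS_PER_GPC)       # 192 ROPs on a fully enabled GB202
print(10 * ROPS_PER_GPC)       # 160 ROPs with 10 GPCs active, as on this card
print(14080 / CORES_PER_GPC)   # 6.875 -> just under 7 GPCs' worth of SMs
```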

Intel "Nova Lake-S" Appears with B960 Chipset and Support for DDR5-8000

Intel's upcoming "Nova Lake-S" has been spotted in the wild for the first time. During the Embedded World 2026 event, German media outlet ComputerBase spotted an Intel Core Ultra 400 series "Nova Lake-S" mini-PC featuring Intel's upcoming B960 chipset and support for DDR5 memory running at 8,000 MT/s. This suggests that Intel is upgrading the integrated memory controller on the "Nova Lake" platform to support these DDR5 speeds natively, before any XMP or factory-overclocked memory comes into play. The support for higher-speed memory indicates that Intel's memory controller, which reportedly reaches DDR5-7200 in the upcoming "Arrow Lake Refresh," will receive an upgrade alongside the new core IP and configuration.

Speaking of configurations, ComputerBase confirmed that the 52-core top SKU of "Nova Lake" will have a TDP of 175 W, while other configurations with a TDP of 65 W will also be available. This is a significant boost to the base TDP rating, as the current flagship "Arrow Lake" Core Ultra 9 285K carries a base TDP of 125 W, a whole 50 W less. With 52 cores in its top configuration, boost frequencies will push power consumption considerably higher under heavy load. Graphics output is powered by the Xe3P GPU IP, as previously rumored, confirming the presence of Intel's next-generation graphics. For AI capability, "Nova Lake" will deliver more than 100 TOPS using the 8-bit INT8 data type, thanks to the onboard NPU and the powerful Xe3P GPU.

SK hynix Unveils 1c LPDDR6 Memory With 16 Gb Capacity

SK hynix has successfully developed new LPDDR6 memory with a 16 Gb die capacity on the sixth-generation 10 nm-class node, known as 1c. The South Korean giant has confirmed that mass production of this memory is scheduled for the first half of the year, with the product reaching customers in the second half. Additionally, SK hynix claims that the speed of these LPDDR6 modules exceeds 10.7 Gbps, suggesting that the company is preparing faster-than-spec versions that surpass the initial speed grades JEDEC defined for this LPDDR6 generation. If the company's showing at the International Solid-State Circuits Conference (ISSCC) 2026 in San Francisco was any indication, SK hynix is preparing modules that will run at speeds of up to 14.4 Gbps, delivering a significant throughput boost over previous-generation LPDDR5X memory. The company claims a 33% improvement over LPDDR5X, which topped out at 10.7 Gbps, roughly in line with the 14.4 Gbps figure for LPDDR6.

SK hynix also expects significant power efficiency gains exceeding 20%, thanks to the new technologies underpinning LPDDR6. This generation of low-power DDR memory uses a sub-channel structure that allows memory channels to operate selectively and process only the necessary data paths, so channels that are not needed can stay idle. Additionally, LPDDR6 incorporates Dynamic Voltage and Frequency Scaling (DVFS), which balances power consumption and performance by dynamically adjusting the voltage/frequency curve depending on the scenario. SK hynix notes that in applications like gaming, DVFS will scale the frequency up to achieve maximum bandwidth, while lighter workloads will see lower frequencies to save power.
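To put those pin speeds in throughput terms, per-channel bandwidth is pin speed multiplied by channel width, divided by 8. The 24-bit channel width below follows JEDEC's published LPDDR6 channel definition (two 12-bit sub-channels) and is an assumption as far as SK hynix's specific module configurations go:

```python
CHANNEL_BITS = 24  # JEDEC LPDDR6 channel: two 12-bit sub-channels (assumed here)

def channel_bandwidth_gbs(pin_gbps: float) -> float:
    return pin_gbps * CHANNEL_BITS / 8  # GB/s per channel

for pin_speed in (10.7, 14.4):
    print(f"{pin_speed} Gbps/pin -> {channel_bandwidth_gbs(pin_speed):.1f} GB/s per channel")

print(f"uplift: {14.4 / 10.7 - 1:.0%}")  # ~35%, in the ballpark of the quoted 33%
```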

NVIDIA GeForce RTX 5050 9 GB Variant Comes with 130 W TDP

NVIDIA is reportedly preparing to update its GeForce RTX 5050 GPU with a new variant featuring 9 GB of GDDR7 memory. A well-known leaker on X, @kopite7kimi, claims that the upcoming card will maintain the same 130 W TDP and thermal envelope as the current GeForce RTX 5050, which has 8 GB of GDDR6 memory. The current RTX 5050 uses 8 GB of 20 Gbps GDDR6 memory on a 128-bit bus, providing 320 GB/s of bandwidth. The new model is expected to use three GDDR7 modules of 3 GB each, for a total of 9 GB across a 96-bit memory bus. While the bus is narrower, the switch to 28 Gbps GDDR7 would increase total memory bandwidth to 336 GB/s, a roughly 5% improvement, along with a 12.5% boost in VRAM capacity. The leak also claims that NVIDIA is using GB206 as the GPU base, whereas the older RTX 5050 8 GB with GDDR6 used the GB207 die. The core count remains at 2,560 CUDA cores, however, suggesting that lower-binned GB206 dies from the RTX 5060 and other mid-range SKUs are being repurposed for the new RTX 5050.
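The bandwidth delta is easy to verify from the quoted pin speeds and bus widths:

```python
def bandwidth_gbs(pin_gbps: float, bus_bits: int) -> float:
    return pin_gbps * bus_bits / 8  # GB/s

current = bandwidth_gbs(20, 128)  # RTX 5050 8 GB GDDR6 -> 320.0 GB/s
rumored = bandwidth_gbs(28, 96)   # 9 GB GDDR7 variant  -> 336.0 GB/s
print(current, rumored, f"{rumored / current - 1:.1%}")  # 320.0 336.0 5.0%
```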

NVIDIA's switch to GDDR7 memory likely helps the company manage supply chain procurement, as memory modules are in short supply. Instead of four 2 GB GDDR6 modules, NVIDIA would use three 3 GB GDDR7 modules, reducing the number of memory packages needed per GPU. With memory makers like Samsung, Micron, and SK hynix focusing production on the newer technology, 3 GB GDDR7 modules are reportedly now easier to procure than GDDR6, which is in much shorter supply, leaving GPU makers to adapt. As a result, securing three 3 GB GDDR7 modules per card is now easier than finding a sufficient supply of GDDR6 for each RTX 5050 GPU. The new variant is expected to arrive around Computex 2026.

Intel Releases Official XeSS 3.0 Software Development Kit

Intel has launched its official XeSS 3.0 software development kit (SDK), which allows game developers to incorporate the latest binaries into their games and integrate XeSS 3.0 into game engines. Interestingly, Intel has released this version as a pre-compiled binary rather than the open-source XeSS the company promised long ago. That promise has remained unfulfilled for four years, with each XeSS release being closed-source, available on GitHub only under the Intel Simplified Software License as of the October 2022 revision. The binary is provided as a DLL for Windows, meaning Linux users cannot run this SDK on their systems without a translation layer. Users wanting to update games from older XeSS 2.x versions need only replace the libxess.dll, libxell.dll, and libxess_fg.dll files with those from the newest XeSS 3.0 ZIP archive.
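For those scripting that swap across several titles, a minimal sketch follows. Both paths are placeholders (the SDK's internal folder layout isn't restated here), and backing up the originals first is strongly advised:

```python
import shutil
from pathlib import Path

SDK_DIR = Path(r"C:\Downloads\xess-3.0-sdk")  # extracted ZIP location (placeholder)
GAME_DIR = Path(r"C:\Games\SomeGame")         # game install folder (placeholder)

for name in ("libxess.dll", "libxell.dll", "libxess_fg.dll"):
    target = GAME_DIR / name
    if target.exists():  # only swap DLLs the game actually ships
        shutil.copy2(target, target.with_name(name + ".bak"))  # keep a backup
        shutil.copy2(SDK_DIR / name, target)
        print(f"updated {name}")
```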

Intel promotes XeSS 3.0 with its main feature being multi-frame generation (MFG). This version integrates up to three generated frames between two rendered frames, resulting in up to a fourfold frame increase using MFG, similar to NVIDIA's DLSS MFG technology. Intel is joining the AI-generated frame insertion trend, which seems to be gradually expanding. Interestingly, Intel also added a feature that allows XeSS 3.0 to use external memory heaps. This means the Intel XeSS SDK can now utilize GPU memory allocated by the game engine itself, allowing XeSS and the engine to operate on the same VRAM blocks instead of each reserving separate ones. This helps developers avoid duplicate buffers and fragmentation, gives them direct control over allocation and residency, and makes integrating XeSS into an existing render pipeline cleaner and more efficient.

(PR) Samsung Showcases Glasses-Free 3D and HDR10+ Gaming at GDC 2026

Samsung Electronics America today shared its plan to expand support for glasses-free 3D gameplay on the Samsung Odyssey 3D gaming monitor at GDC Festival of Gaming 2026 in San Francisco. Samsung will spotlight Hell is Us and Cronos: The New Dawn as part of its expanding 3D gaming ecosystem, demonstrating how leading titles are embracing immersive displays without the need for special glasses.

"The Odyssey 3D is designed for gamers who want to experience their hobby in a way that feels like they're completely embedded in the action," said Kevin Lee, Executive Vice President of the Visual Display (VD) Business at Samsung Electronics. "Through partnerships with leading gaming studios, we are committed to creating an ecosystem of top-tier titles, making great games extraordinary."

NVIDIA Prepares GeForce ON Community Update for GDC 2026

At this year's Game Developers Conference (GDC) in San Francisco, NVIDIA has prepared a special GeForce ON community update scheduled for tomorrow, where the company will address its gaming audience. While we have no official word on what this GeForce ON update will cover, NVIDIA might preview either new technologies it is developing for gamers or new implementations in games and game engines. Usually, this centers on technologies like Deep Learning Super Sampling, ray tracing, path tracing, GeForce NOW expansion, or gaming monitors like the Big Format Gaming Displays, though it could be something entirely different. Interestingly, NVIDIA might also preview potential product launches for GTC 2026, which starts just a week from now on March 16 and runs through March 19. We are eagerly anticipating any formal announcement, and you can check out the video below, which premieres tomorrow.

MaxSun Unveils Single-Slot Liquid-Cooled Arc Pro B60 Dual GPU and Fanless Model

MaxSun has introduced two additional GPU variants to its Arc Pro B60 Dual GPU lineup. Each card features two of Intel's Arc Pro B60 GPUs, offering a total of 40 Xe2 cores and 48 GB of GDDR6 memory. The first variant is a passively cooled model designed for dual-slot configurations, ideal for server setups with high-airflow chassis that direct air across the card. This design allows any server case with high-RPM fans to accommodate multiple GPUs in parallel for AI inference and local development. However, the standout model is the liquid-cooled edition, measuring 300 x 110 x 20 mm and occupying a single slot. This compact profile is a distinct advantage of the water-cooled design, allowing several cards to be installed close together in dense enclosures without the need for strong airflow.

With the liquid-cooled edition, the card's TDP remains at 400 W, matching the passive and fan-equipped versions. However, overall performance is expected to be slightly higher due to the added thermal stability that liquid cooling provides. The compact liquid-cooled unit was developed in collaboration with abee, and the company reports peak GPU temperatures around 61°C when integrated into a cooling loop, although other loop characteristics are unknown; presumably standard industry test conditions were used. Two hose barbs and a 12V-2x6 power connector are located on the rear edge, along with a fan header for auxiliary cooling control. A single-slot bracket offers one DisplayPort 1.2 and one HDMI 2.1a connector per GPU, resulting in four display outputs in total. In a workstation, you could fit a few of these cards and quickly multiply compute and memory capacity, creating a capable system even without a high-end GPU.

Fujitsu Showcases "MONAKA" CPU Sample with 3.5D XDSiP Packaging

During MWC, Fujitsu partnered with networking equipment maker 1FINITY to unveil the first silicon wafers and an engineering sample of its "MONAKA" CPU. Scheduled for release in 2027, the initial Fujitsu MONAKA CPU utilizes the Armv9-A architecture and a 3D chiplet layout that combines a core die with separate SRAM and I/O dies. A single chip features 144 cores, and two-socket configurations can scale up to 288 cores per node. The platform supports 12-channel DDR5, PCIe 6.0 with CXL 3.0, and Arm SVE2 for AI and HPC workloads. Fujitsu has chosen TSMC to manufacture this chip using the 2 nm node, paired with Broadcom's 3.5D eXtreme Dimension System-in-Package (XDSiP) packaging architecture. This packaging allows MONAKA to become a 144-core design featuring four 36-core chiplets. These chiplets are stacked face-to-face with SRAM tiles through hybrid copper bonding, utilizing TSMC's N5 process for the cache layer.

In the pictures below, we can see the silicon complex in its early sample packaging, showing a large central I/O die, HBM memory surrounding the CPU, and the new packaging technology. Reportedly, this CPU has already reached working silicon, with Broadcom shipping parts to Fujitsu in late February this year. After initial testing and early performance validation, Fujitsu plans to ship these processors to customers around summer, with mass shipments commencing in 2027. The company envisions this SoC as a powerhouse for AI inference, simulation, and large-scale data processing. It will sell these systems to external customers, who showed great interest in Fujitsu's A64FX when the Fugaku supercomputer came online. Fugaku was the most powerful supercomputer back in 2020, achieving 415.53 PetaFLOPS of FP64 and an impressive HPL-AI score of 1.421 ExaFLOPS at lower FP16 precision. Hence, we expect the new MONAKA CPU to deliver much greater speeds alongside efficiency improvements.

Apple Prepares New MacBook Ultra with OLED Touchscreen and Dynamic Island

Apple could name its upcoming laptop the MacBook "Ultra," positioning it as the ultimate portable Mac from the Cupertino-based giant. According to Mark Gurman in the latest Power On newsletter, Apple is giving the MacBook its long-rumored "Ultra" overhaul, this time as an addition to the existing MacBook lineup rather than a product replacement. This model is expected to be Apple's first MacBook with an OLED touchscreen and a Dynamic Island instead of the traditional notch found on today's MacBook displays. It will sit above the new M5 Pro and M5 Max-powered MacBook Pro 14 and MacBook Pro 16, making Apple's new Mac lineup one of the most diverse in the company's history, especially with the recent launch of the MacBook Neo.

While March was reserved for the regular MacBook Pro devices, Apple is scheduling its MacBook Ultra for the end of this year, when we are also likely to see new chips powering the flagship design. Pricing is expected to increase as well; Apple has historically introduced a price premium whenever a new OLED panel arrived on a device, as it did when the iPad received its OLED upgrade. These MacBook Ultra devices are codenamed K114 and K116 and break with Apple's design philosophy, which has been critical of touchscreen laptops for years. Apple's legendary co-founder Steve Jobs once called the touchscreen laptop experience "ergonomically terrible," but the competitive landscape has changed significantly over the past few years. To stay competitive, Apple is adapting to these industry changes slowly but surely. Interestingly, Gurman is not certain that Apple will call it the MacBook Ultra; it could retain some Pro-model naming, with clear differentiators to position this model at the top of the MacBook line.

YMTC Launches PC550, Its First PCIe 5.0 M.2 NVMe Client SSD

Chinese NAND Flash maker Yangtze Memory Technologies Corp (YMTC) has introduced its first client M.2 NVMe PCIe 5.0 SSD, named the PC550. As many PC OEMs face challenges in acquiring storage solutions at reasonable prices, YMTC aims to assist with its inaugural PCIe 5.0 NVMe model. The M.2 2242 and 2280 modules feature a PCIe Gen 5 x4 link combined with the NVMe 2.0 protocol and YMTC's X4-9070 3D NAND, built on Xtacking 4.0. YMTC designed the PC550 with a four-channel architecture, which the company claims reduces power consumption and thermal output compared to the more common eight-channel designs. The lineup includes capacities of 512 GB, 1 TB, and 2 TB, with no pricing on the official website. Consumers can contact YMTC, or wait for distribution channels to start offering these SSDs.

YMTC rates the largest variant at up to 10,500 MB/s for sequential reads and up to 10,000 MB/s for sequential writes. These speeds surpass most Gen 4 drives but fall short of some Gen 5 offerings that reach nearly 15,000 MB/s, suggesting a less powerful SSD controller. Random performance scales with capacity. The 512 GB model is listed at up to 880,000 random read IOPS and 1,100,000 random write IOPS, with an endurance rating of 300 TBW. The 1 TB and 2 TB models achieve approximately 1,300,000 random IOPS for both read and write, with endurance ratings of 600 TBW and 1,200 TBW, respectively. Idle power consumption is quoted at under 3 milliwatts, and active consumption is under 6 watts, making these figures suitable for notebook use.
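Translated into the more familiar drive-writes-per-day metric, and assuming a typical five-year warranty window (YMTC has not stated one), the endurance ratings work out as follows:

```python
WARRANTY_YEARS = 5  # assumption; YMTC has not published a warranty term

def dwpd(tbw: int, capacity_gb: int, years: int = WARRANTY_YEARS) -> float:
    """Drive writes per day implied by a TBW rating over the warranty period."""
    return tbw * 1000 / (capacity_gb * 365 * years)

for capacity_gb, tbw in ((512, 300), (1000, 600), (2000, 1200)):
    print(f"{capacity_gb} GB: {dwpd(tbw, capacity_gb):.2f} DWPD")  # ~0.32-0.33
```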

Samsung to Resurrect NVIDIA's GeForce RTX 3060 Using 8 nm Node

NVIDIA's GeForce RTX 3060 is making a comeback, expected around mid-March. For this, NVIDIA will once again use Samsung's 8 nm DUV node, as it has in the past. This has been confirmed by the Korean media outlet Hankyung, which reports that Samsung is restarting its 8 nm node production to meet NVIDIA's needs. Samsung originally manufactured these GPUs back in 2021, when the entire NVIDIA "Ampere" lineup was produced on the 8 nm DUV node, and we didn't anticipate the node's return after several years. The move is all the more intriguing given that NVIDIA has since transitioned to TSMC for its "Ada Lovelace" and latest "Blackwell" GPUs, becoming one of TSMC's largest customers on the 5 nm node.

We still lack concrete information about which version of the RTX 3060 will be reintroduced, whether the original 12 GB model with a 192-bit wide memory bus or the newer 8 GB variant with a 128-bit bus. The decision to use a two-generation-old GPU architecture in 2026 is also puzzling, as it remains unclear why NVIDIA has chosen the RTX 3060 instead of a newer model like the RTX 4060. Speculatively, it could be because the RTX 4060 is based on the same NVIDIA 4N (5 nm-class) node at TSMC as the current RTX 5060, while the RTX 3060, along with the rest of the GeForce "Ampere" generation, is built on the Samsung 8N (8 nm DUV) foundry node, which leaves TSMC's 5 nm capacity free for "Blackwell" and its enterprise variants. Finally, it is worth pointing out that once a GPU's physical design is complete, it is usually hard-linked to the node it was designed for, so NVIDIA is sticking with Samsung to avoid the upfront cost of porting the design to a different node.

Intel "Arrow Lake Refresh" Core Ultra 7 270K Plus Appears in HP Desktop PC

Intel's upcoming "Arrow Lake Refresh" Core Ultra 7 270K Plus processor has made its debut on HP's website, as discovered by @momomo_us. HP has installed this unreleased processor in its HyperX OMEN desktop gaming PC. The Core Ultra 7 270K Plus represents the pinnacle of the "Arrow Lake Refresh" generation, featuring 8 performance cores and 16 efficiency cores, along with 36 MB of shared L3 cache. However, it operates at slightly lower clock speeds than the flagship Core Ultra 9 285K. Recently, a new rumor suggested that Intel might unveil the full specifications of these chips on March 11, with reviews following on March 23. Retail availability of the new generation is expected shortly after.

This is the top SKU that will appear with the "Arrow Lake Refresh," as Intel has reportedly decided not to launch the rumored "Core Ultra 9 290K Plus," which would have been a more powerful version of the 285K with even higher clock speeds. The main reason for canceling this SKU is product overlap. The flagship Core Ultra 9 290K Plus would have had the same core configuration as the Core Ultra 7 270K Plus, just with slightly higher clock speeds. Additionally, Intel already offers a Core Ultra 9 285K SKU from the regular "Arrow Lake" family, which means the company would have three similar SKUs at the top of the stack. By maintaining only two products, Intel can simplify manufacturing and supply chain logistics, allowing more focus on preparing for the next-generation "Nova Lake" launch later this year.

NVIDIA Grabs 94% AIB GPU Market Share, AMD Falls to 5%

According to a new report from Jon Peddie Research, NVIDIA has once again increased its market share in the AIB GPU sector, reaching 94% in Q4 2025. This marks a 1.6-percentage-point rise from the previous quarter and sets a new all-time high in recent reports. Meanwhile, AMD's market share has declined by the same 1.6 points, a loss that appears to have benefited NVIDIA's partners and their shipments. Overall, JPR records indicate that the AIB GPU market sold 11.5 million units in the final quarter of 2025, a reduction of half a million units from Q3, but a significant 36% increase over the 8.45 million units sold in the final quarter of 2024. JPR attributes the quarter-on-quarter decline in AIB shipments to rising memory prices and tariffs affecting the global supply chain, which have driven up the costs of discrete GPUs that use expensive GDDR7 and GDDR6 memory.

Intel's share of AIB GPU shipments remains steady at around 1%, consistent with Q3 2025, when the company first registered a measurable single-digit share after launching its Arc "Alchemist" GPUs for gamers. This indicates that Intel's position is stable, with gamers continuing to choose Intel GPUs at the same rate as before. For Intel to grow beyond this bracket, the company likely needs more GPU designs, such as the anticipated "Battlemage" B770 graphics card.

Apple macOS Tahoe 26.3.1 "Updates" M5 SoC With New "Super Cores"

We previously reported that Apple's M5 Pro/Max series of SoCs incorporates an additional core tier alongside the usual configuration seen in the company's processors for years. The performance core has been renamed the "Super Core," a new middle-tier design (effectively an "M-Core") now carries the Performance Core name, and the Efficiency Core remains the same. With the big P-Core renamed to Super Core, Apple is updating its nomenclature even for the regular M5 SoC via the macOS Tahoe 26.3.1 update. After the update, the M5's larger performance cores are reported as Super Cores, meaning the M5 SoC now shows four super cores and six efficiency cores, where it was previously described as a four performance-core and six efficiency-core design.

The regular M5 SoC gains no new "M-Core" tier between the super cores and efficiency cores, while the M5 Pro and M5 Max have six Super Cores and 12 M-Cores. The M-Core is a 7-wide out-of-order execution core that delivers roughly 70% of the P-core's performance at slightly lower power. Interestingly, the efficiency core is completely absent from the new M5 Pro/Max SoCs, leaving a combination of performance and middle-tier cores, so only the regular M5 retains efficiency cores in its CPU package. This macOS update is meant for the M5-powered MacBook Pro, which has been shipping with macOS versions older than Tahoe 26.3.1. The latest MacBook Air and MacBook Pro machines equipped with M5, M5 Pro, and M5 Max SoCs will likely show the new naming out of the box, as Apple presumably applied all OS updates before shipping. Below are screenshots courtesy of Andrew Cunningham of Ars Technica, showing the new nomenclature on the left and the old on the right.

NVIDIA Stops China-Focused H200 "Hopper" GPU Production

NVIDIA has reportedly halted production of its China-focused H200 "Hopper" GPU at TSMC's facilities, according to multiple reports. The company has built up an inventory of 250,000 H200 GPUs, which will be available in the Chinese market for select applications that do not compromise United States national security. After NVIDIA was granted export rights to China for its H200 accelerators, the company began stockpiling these GPUs to supply AI labs across China. However, China has also restricted what its domestic companies and AI labs can import, meaning that the import of H200 GPUs is still prohibited unless a company receives a letter of exemption from Beijing. This has resulted in NVIDIA using its TSMC N5 5 nm node capacity to create about 250,000 units, which are now stored in a warehouse awaiting export approval from the U.S. administration and import approval from Chinese customs for AI labs.

Interestingly, the Financial Times and Reuters note that NVIDIA will now "reallocate" capacity from H200 production to the new "Rubin." However, these two GPU generations do not use the same manufacturing node or packaging technology. For "Hopper," NVIDIA uses TSMC's 5 nm node with CoWoS-S packaging, while "Rubin" uses a 3 nm node with CoWoS-L packaging. These reports likely refer to some conversion of manufacturing capacity involving either the node or packaging capacity that NVIDIA has secured. It is unlikely that the 5 nm semiconductor node can be converted into a 3 nm node without significant line remodeling and changes to manufacturing equipment. However, packaging can be adjusted more easily, which is likely what these reports are indicating.

CXMT LPCAMM2 Memory Appears in Lenovo ThinkBook Laptop

In response to memory shortages, PC OEMs are exploring alternative manufacturers and suppliers for this increasingly scarce and valuable resource. Customers now need to make purchasing decisions within an hour or even prepay suppliers to secure DRAM. Recently, Lenovo has started using the Chinese memory supplier CXMT in some of its laptop models. Lenovo is officially rolling out its LPCAMM2 memory to mainstream laptops after introducing it with the ThinkPad P1 Gen 7 back in 2024. LPCAMM2 is a new memory standard that combines the performance of LPDDR5X with the upgradeability of a regular SODIMM. The ThinkBook 16+ is likely Lenovo's first consumer device to feature LPCAMM2, offering up to 32 GB of LPDDR5X-8533 memory, supported by an Intel Core Ultra X7 385H and its Arc B390 iGPU, using the first CXMT memory modules.

Late last year, CXMT introduced its DDR5-8000 and LPDDR5X-10667 memory modules at the 2025 China International Semiconductor Expo. This development has likely encouraged many OEMs to seek alternatives to traditional suppliers like SK Hynix, Samsung, and Micron, whose supply has been very limited outside AI accelerator workloads. Even Apple is reportedly considering partnerships with Chinese semiconductor manufacturers CXMT and YMTC for its upcoming iPhone 18 series and possibly other products like MacBooks and Mac computers. With suppliers such as Kioxia, Samsung, and SK hynix raising prices due to a significant industry shortage, Apple is experiencing pressure on its profit margins while maintaining the same MSRP for its products. To diversify its supply chain, Apple is reportedly looking into sourcing DRAM from CXMT and NAND Flash from YMTC to reduce its reliance on South Korean and Japanese suppliers.

Intel Begins Open-Source Xe3P GPU Driver Enablement

Intel has quietly initiated open-source efforts to lay the groundwork for its next-generation Xe3P graphics within the Mesa OpenGL "Iris" and Vulkan "Anvil" drivers. According to a report from Phoronix, these efforts are not immediately focused on making the driver functional but rather on establishing code paths that can be developed for this graphics IP in the future. This means preliminary support is still a few weeks away, as additional work is needed behind the scenes. By the time Xe3P GPUs are released, open-source driver support should be ready.

We expect to see the first versions of Xe3P GPUs this year, as this IP will take on various forms. Some will be featured in the upcoming "Nova Lake" desktop processors for the consumer market, anticipated later this year. Early open-source enablement suggests "Nova Lake-P" processors will include Xe3P-LPG for integrated graphics. Additionally, "Nova Lake-P" processors will incorporate multiple new IPs like Xe3P-LPM for media processing, which handles decoding and encoding, and Xe3P-LPD for display output processing. Finally, the Xe3P IP will also be part of Intel's AI-focused "Crescent Island" inference GPU, which will feature 160 GB of onboard LPDDR5X. We are still awaiting performance claims for this Xe3P GPU, so we need to be patient a little longer.

Apple MacBook Neo Capped at 8 GB RAM by A18 Pro InFO-PoP Packaging

Yesterday, Apple announced its newest low-cost MacBook Neo, starting at $599 in the United States, or about $499 for education customers and students. Some online criticism emerged over Apple's decision to offer a laptop with only 8 GB of RAM in 2026, with no options for higher capacity. However, the 8 GB ceiling stems from a packaging choice Apple made for the A18 Pro chip at TSMC's facilities. Inside the MacBook Neo, Apple reuses the iPhone 16 Pro's chip, which comes from TSMC with 8 GB of LPDDR5X memory. This memory is attached directly above the A18 Pro SoC using Integrated Fan-Out Package on Package (InFO-PoP), creating a 3D wafer-level fan-out package. This package holds the memory directly above the SoC die, resulting in a smaller PCB design without an LPDDR5X module taking up over 100 mm² of board area.

Therefore, Apple's MacBook Neo configurations are limited to what the A18 Pro SoC is originally packaged with: 8 GB LPDDR5X modules shipped directly to TSMC for integration into the InFO-PoP package, which is then sent back to Apple for assembly into the new MacBook Neo laptops. While offering an 8 GB laptop in 2026 might seem controversial, the design choices behind the SoC and the goal of keeping unit costs low are what prevent Apple from providing more memory. Finally, these laptops run the Unix-based macOS, which is well optimized for memory management at this capacity, so users can still expect a satisfactory experience.

NVIDIA GeForce RTX 3060 Could Return Mid-March

The NVIDIA GeForce RTX 3060, a mid-range GPU now two generations old, is reportedly returning to NVIDIA's supply this month. According to Chinese Board Channels, NVIDIA is planning a mid-March restock of the "Ampere" GPU, aligning with earlier rumors that suggested a Q1 2026 revival. Interestingly, it is unclear which version of the RTX 3060 will be reintroduced, whether the original 12 GB model with a 192-bit wide memory bus or the newer 8 GB variant with a 128-bit bus. NVIDIA's decision to bring back this older SKU is puzzling, especially considering it is two generations old and comes amid memory supply chain shortages. However, this older SKU uses GDDR6 memory, which might be more readily available, as the newer GDDR7 is being consumed by modern "Blackwell" GPUs and the upcoming "Rubin CPX" accelerators.

Why NVIDIA has chosen the RTX 3060 instead of a newer model like the RTX 4060 remains uncertain. Speculatively, it could be because the RTX 4060 is based on the same NVIDIA 4N foundry node at TSMC as the current RTX 5060, while the RTX 3060, along with the rest of the GeForce "Ampere" generation, is built on the Samsung 8N (8 nm DUV) foundry node. Additionally, Board Channels notes that GeForce RTX 3060 models from various brands will start arriving soon, which suggests NVIDIA's add-in card partners are doing much of the heavy lifting to bring back this SKU, with NVIDIA only supplying the GPU die and memory as an installation kit. AICs could start marketing this GPU again or simply add it quietly to their websites. We are waiting a few more days to see how the re-launch unfolds and which SKUs we end up getting. Finally, the most important factor for this GPU, given that modern alternatives exist, is pricing, which will dictate its sales.

AMD Ryzen AI 400 Comes With Up to 12 Usable PCIe 4.0 Lanes, GPUs Limited to x8 Connection

On Monday, AMD announced its latest Ryzen AI 400 Series and Ryzen AI PRO 400 Series desktop processors, based on the "Gorgon Point" silicon and powered by "Zen 5" cores. This generation follows the Ryzen 8000G series, known as "Phoenix Point." However, it has been revealed that the Ryzen AI 400 series reduces the number of usable PCIe lanes compared to the previous Ryzen 8000G generation. The new top SKU offers 16 native PCIe 4.0 lanes, but only 12 are available to the rest of the system, as four lanes are reserved for the chipset link that connects the AM5 socket to the motherboard chipset. Lower-tier chips may provide as few as 10 usable lanes, which is insufficient to run a discrete GPU at a full x16 link in the PCIe 4.0 slot on an AM5 motherboard. Once a user installs an M.2 PCIe NVMe SSD, only eight lanes remain for a discrete graphics card, meaning the GPU will operate in x8 mode instead of x16.
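The lane math from the paragraph above, spelled out:

```python
NATIVE_LANES = 16   # top "Gorgon Point" SKU
CHIPSET_LINK = 4    # reserved for the AM5 chipset connection
M2_SSD_LANES = 4    # one PCIe 4.0 x4 NVMe drive

usable = NATIVE_LANES - CHIPSET_LINK   # 12 lanes exposed to the user
gpu_link = usable - M2_SSD_LANES       # 8 lanes left for the graphics card
print(f"usable: x{usable}, GPU link: x{gpu_link}")  # usable: x12, GPU link: x8
```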

Interestingly, AMD hasn't fully utilized the "Gorgon Point" silicon in the desktop Ryzen AI 400G series. For example, the top model, the Ryzen AI 7 450G, is configured with four "Zen 5" cores and four "Zen 5c" cores, making an eight-core configuration. The fully unlocked "Gorgon Point" silicon in laptops has 12 cores in total, with four "Zen 5" and eight "Zen 5c" cores, a configuration similar to the mobile "Strix Point" design. AMD's approach with the iGPU is also worth noting: the top 450G model comes with only 8 iGPU compute units, half the CUs physically present on the silicon, and most other models in the series come with just 4 CUs.

Memory Makers Shift to Hourly Contracts as AI Demand Continues to Climb

The memory procurement market has adopted a new business model built on hourly contracts, where quoted prices are valid for only a single hour, necessitating a fresh quote with every change. Memory makers like SK hynix, Samsung, and Micron are responding to the massive demand for their memory with contract types that force OEMs to decide on DRAM purchases within a very short timeframe. In effect, memory makers are requiring much faster contract settlements, as surging demand is changing product pricing literally by the hour. Large PC OEMs, among the biggest customers, now have to ship PCs priced against one quote while their future products are subject to prices that fluctuate hourly. How sustainable and stable this market will be remains to be seen.

Interestingly, the DRAM customer base is splitting into two camps. A short list of deep-pocketed customers, including large cloud providers, major automakers, and top smartphone firms such as Apple and Samsung Electronics, retains priority access to DRAM and the strongest pricing leverage. Memory manufacturers like SK hynix and Micron are said to prioritize these relationships above all else, favoring buyers who can prepay or settle in cash. For the vast remainder, more than 190,000 small and medium enterprises, the situation is harsher. Many lack the cash flow and bargaining leverage to absorb rapid price jumps. As costs climb, some firms are revising demand forecasts downward to avoid margin erosion. Demand outside the hyperscaler and data center sector may be revised downward at many companies, as consumers grow less keen on products that become more expensive by the day.

ASUS Raises GeForce RTX 50 Series "Blackwell" Pricing in China, Radeon Pricing Unchanged

ASUS has reportedly updated its product pricing in the Chinese market to reflect the memory component shortage across the semiconductor industry. According to Board Channels, ASUS is adjusting the pricing for the GeForce RTX 50 Series "Blackwell" GPUs with GDDR7 memory, while the pricing strategy for AMD Radeon RX 9000/7000 and other series remains unchanged. At the very top, ASUS is increasing the price of its RTX 5090 D v2 SKUs by about 500 yuan, which is approximately $72 at the time of writing. Other SKUs like the GeForce RTX 5080, RTX 5070 Ti, regular RTX 5070, and RTX 5060 Ti with 16 GB of VRAM are experiencing an increase of anywhere from $14.50 to $45. Interestingly, some older and low-end SKUs like the popular GT 1030 and GT 710 series, which provide basic graphics output for many prebuilt PCs, are also seeing a price increase of up to $8. AMD's Radeon series reportedly remains unchanged, which can be explained by these GPUs already having gone through a price increase cycle and GDDR6 memory being in better supply than GDDR7.
You can check out the entire list with proposed price changes below.

EA Working on Javelin Anticheat Port for Windows-on-Arm

EA has posted a job listing seeking an engineer to port its Javelin Anticheat, a kernel-level anticheat solution, to the Windows-on-Arm (WoA) platform. This is a significant indicator of where the industry is headed, signaling that the biggest game developers are officially porting their game engines, games, and anticheat solutions to Arm-based PCs running Windows 11. Interestingly, this coincides with NVIDIA's introduction of its N1/N1X SoCs with Arm-based CPU cores, which are expected to launch in the first half of this year. These will offer consumers 20 CPU cores, consisting of 10 Cortex-X925 and 10 Cortex-A725 cores based on the Armv9.2 ISA, along with a low-power-optimized "Blackwell" GPU featuring 6,144 CUDA cores. Adding to the growing ecosystem of WoA solutions, NVIDIA will compete with Qualcomm's recent Snapdragon X2 Elite and X2 Plus SoCs.

If readers recall Valve's efforts to bring Windows games to Linux, one of the biggest issues was kernel-level anti-cheat solutions, which simply couldn't work outside the standard Windows platform. A similar situation is now occurring even between Windows variants, as EA needs to develop a specialized solution that works on non-x86 deployments such as Windows-on-Arm. It will likely be a few months before an official release, but the job listing suggests that some internal work is already in progress, and the senior engineer EA is seeking would bring these pieces together to get the new platform working without issues.