


Apple macOS Tahoe 26.3.1 "Updates" M5 SoC With New "Super Cores"

5 March 2026 at 22:03
We previously reported that Apple's M5 Pro/Max series of SoCs incorporates an additional core tier alongside the configuration the company has used in its processors for years: the performance core has been renamed the "Super Core," a new middle-tier "M-Core" design sits below it, and the Efficiency Core remains unchanged. With the macOS Tahoe 26.3.1 update, Apple is extending this nomenclature to the regular M5 SoC as well. After the update, the M5's bigger performance cores are reported as Super Cores, so the chip now shows four super cores and six efficiency cores, where it was previously described as a four performance-core and six efficiency-core design.

The regular M5 SoC has no "M-Core" tier sitting between the super cores and efficiency cores, while the M5 Pro and M5 Max have six Super Cores and 12 M-Cores. The M-Core is a 7-wide out-of-order execution CPU that delivers roughly 70% of the P-core's performance at slightly lower power. Interestingly, the efficiency core is completely absent from the new M5 Pro/Max SoCs, leaving a combination of performance and middle-class cores; only the regular M5 retains efficiency cores in its CPU package. This macOS update is meant only for the M5-powered MacBook Pro, which has been shipping with older macOS versions that lack the Tahoe 26.3.1 update. The latest MacBook Air and MacBook Pro models equipped with M5, M5 Pro, and M5 Max SoCs will likely show the new naming out of the box, as Apple presumably applies all OS updates before shipping. Below are screenshots courtesy of Andrew Cunningham of Ars Technica, showing the new nomenclature on the left and the old on the right.

NVIDIA Stops China-Focused H200 "Hopper" GPU Production

5 March 2026 at 20:37
NVIDIA has reportedly halted production of its China-focused H200 "Hopper" GPU at TSMC's facilities, according to multiple reports. The company has built up an inventory of 250,000 H200 GPUs, which will be available in the Chinese market for select applications that do not compromise United States national security. After NVIDIA was granted export rights to China for its H200 accelerators, the company began stockpiling these GPUs to supply AI labs across China. However, China has also restricted what its domestic companies and AI labs can import, meaning that the import of H200 GPUs is still prohibited unless a company receives a letter of exemption from Beijing. This has resulted in NVIDIA using its TSMC N5 5 nm node capacity to create about 250,000 units, which are now stored in a warehouse awaiting export approval from the U.S. administration and import approval from Chinese customs for AI labs.

Interestingly, the Financial Times and Reuters note that NVIDIA will now "reallocate" capacity from H200 production to the new "Rubin." However, these two GPU generations do not use the same manufacturing node or packaging technology. For "Hopper," NVIDIA uses TSMC's 5 nm node with CoWoS-S packaging, while "Rubin" uses a 3 nm node with CoWoS-L packaging. These reports likely refer to some conversion of manufacturing capacity involving either the node or packaging capacity that NVIDIA has secured. It is unlikely that the 5 nm semiconductor node can be converted into a 3 nm node without significant line remodeling and changes to manufacturing equipment. However, packaging can be adjusted more easily, which is likely what these reports are indicating.

CXMT LPCAMM2 Memory Appears in Lenovo ThinkBook Laptop

5 March 2026 at 19:58
In response to memory shortages, PC OEMs are exploring alternative manufacturers and suppliers for this increasingly scarce and valuable resource; customers now need to make purchasing decisions within an hour, or even prepay suppliers, to secure DRAM. Recently, Lenovo has started sourcing memory from the Chinese supplier CXMT for some of its laptop models, and is now rolling out LPCAMM2 memory to mainstream laptops after introducing it with the ThinkPad P1 Gen 7 back in 2024. LPCAMM2 is a newer memory standard that combines the performance of LPDDR5X with the upgradeability of a regular SO-DIMM. The ThinkBook 16+ is likely Lenovo's first consumer device to feature LPCAMM2, offering up to 32 GB of LPDDR5X-8533 memory paired with an Intel Core Ultra X7 385H and its Arc B390 iGPU, using the first CXMT memory modules.

Late last year, CXMT introduced its DDR5-8000 and LPDDR5X-10667 memory modules at the 2025 China International Semiconductor Expo. This development has likely encouraged many OEMs to seek alternatives to traditional suppliers like SK hynix, Samsung, and Micron, whose supply has been very limited outside of AI accelerator demand. Even Apple is reportedly considering partnerships with Chinese semiconductor manufacturers CXMT and YMTC for its upcoming iPhone 18 series and possibly other products like MacBooks and Mac desktops. With suppliers such as Kioxia, Samsung, and SK hynix raising prices amid a significant industry shortage, Apple is seeing pressure on its profit margins while holding its products at the same MSRP. To diversify its supply chain, Apple is reportedly looking into sourcing DRAM from CXMT and NAND Flash from YMTC to reduce its reliance on South Korean and Japanese suppliers.

Intel Begins Open-Source Xe3P GPU Driver Enablement

5 March 2026 at 19:15
Intel has quietly initiated open-source efforts to lay the groundwork for its next-generation Xe3P graphics within the Mesa OpenGL "Iris" and Vulkan "Anvil" drivers. According to a report from Phoronix, these efforts are not immediately focused on making the driver functional but rather on establishing code paths that can be developed for this graphics IP in the future. This means preliminary support is still a few weeks away, as additional work is needed behind the scenes. By the time Xe3P GPUs are released, open-source driver support should be ready.

We expect to see the first versions of Xe3P GPUs this year, as this IP will take on various forms. Some will be featured in the upcoming "Nova Lake" desktop processors for the consumer market, anticipated later this year. Early open-source enablement suggests "Nova Lake-P" processors will include Xe3P-LPG for integrated graphics. Additionally, "Nova Lake-P" processors will incorporate multiple new IPs like Xe3P-LPM for media processing, which handles decoding and encoding, and Xe3P-LPD for display output processing. Finally, the Xe3P IP will also be part of Intel's AI-focused "Crescent Island" inference GPU, which will feature 160 GB of onboard LPDDR5X. We are still awaiting performance claims for this Xe3P GPU, so we need to be patient a little longer.

Apple MacBook Neo Capped at 8 GB RAM by A18 Pro InFO-PoP Packaging

5 March 2026 at 13:53
Yesterday, Apple announced its newest low-cost MacBook Neo, starting at $599 in the United States, or about $499 for education and students. Some online criticism emerged regarding Apple's decision to offer a laptop with only 8 GB of RAM in 2026, with no options for higher capacity. However, the 8 GB cap is a consequence of packaging decisions made for the A18 Pro chip at TSMC's facilities. Inside the MacBook Neo, Apple reuses the iPhone 16 Pro's chip, which leaves TSMC with 8 GB of LPDDR5X memory already attached directly above the A18 Pro SoC using Integrated Fan-Out Package-on-Package (InFO-PoP), a 3D wafer-level fan-out package. This approach places memory directly above the SoC die, enabling a smaller PCB design without an LPDDR5X module taking up over 100 mm² of board area.

Therefore, Apple's MacBook Neo configurations are limited to what the A18 Pro SoC is originally packaged with: 8 GB LPDDR5X modules that are shipped directly to TSMC for integration into the InFO-PoP package, which is later shipped back to Apple for assembly into these new MacBook Neo laptops. While offering 8 GB laptops in 2026 might seem controversial, the design choices behind the SoC and the goal of keeping unit costs low are what prevent Apple from providing more memory capacity. Finally, these laptops run the Unix-based macOS, which is optimized for good memory management at this capacity, so users can still expect a satisfactory experience.

NVIDIA GeForce RTX 3060 Could Return Mid-March

5 March 2026 at 13:11
The NVIDIA GeForce RTX 3060, a mid-range GPU now two generations old, is reportedly returning to NVIDIA's supply this month. According to China's Board Channels, NVIDIA is planning a mid-March restock of the "Ampere" GPU, aligning with earlier rumors of a Q1 2026 revival. Interestingly, it is unclear which version of the RTX 3060 will be reintroduced: the original 12 GB model with a 192-bit memory bus, or the newer 8 GB variant with a 128-bit bus. NVIDIA's decision to bring back this older SKU is puzzling, especially considering it is two generations old and comes amid memory supply chain shortages. However, this older SKU uses GDDR6 memory, which may be more readily available now that GDDR7 is being consumed by modern "Blackwell" GPUs and the upcoming "Rubin CPX" accelerators.

Why NVIDIA has chosen the RTX 3060 instead of a newer model like the RTX 4060 remains uncertain. Speculatively, it could be because the RTX 4060 is built on the same NVIDIA 4N foundry node at TSMC as the current RTX 5060, while the RTX 3060, along with the rest of the GeForce "Ampere" generation, is built on the Samsung 8N (8 nm DUV) node. Additionally, Board Channels notes that GeForce RTX 3060 models from various brands will start arriving soon, which means NVIDIA's add-in card partners are doing much of the heavy lifting to bring back this SKU, with NVIDIA only supplying the GPU die and memory as an installation kit. AICs could start marketing this GPU again or just quietly add it to their websites. We will have to wait a few more days to see how the re-launch unfolds and which SKUs we end up getting. Finally, the most important factor in considering this GPU when modern alternatives exist is pricing, which will dictate its sales.

AMD Ryzen AI 400 Comes With Up to 12 Usable PCIe 4.0 Lanes, GPUs Limited to x8 Connection

4 March 2026 at 23:24
On Monday, AMD announced its latest Ryzen AI 400 Series and Ryzen AI PRO 400 Series desktop processors, based on the "Gorgon Point" silicon and powered by "Zen 5" cores. This generation follows the Ryzen 8000G series, known as "Phoenix Point." However, it has been revealed that the Ryzen AI 400 series reduces the number of usable PCIe lanes compared to the previous Ryzen 8000G generation. The new top SKU offers 16 native PCIe 4.0 lanes, but only 12 are available to the rest of the system, as four lanes are reserved for the chipset link connecting the AM5 socket to the motherboard chipset. Lower-tier chips may provide as few as 10 usable lanes, which is not enough to run a discrete GPU at full x16 in the PCIe 4.0 slot of an AM5 motherboard. Once a user installs an M.2 PCIe NVMe SSD, only eight lanes remain for a discrete graphics card, meaning the GPU will operate in x8 mode instead of x16.
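The lane budget described above works out as simple subtraction; a minimal sketch using the figures from the report (illustrative accounting, not an official AMD breakdown):

```python
# Illustrative PCIe 4.0 lane budget for the top Ryzen AI 400 SKU,
# per the report's figures (not an official AMD breakdown).
native_lanes = 16   # native PCIe 4.0 lanes on the SoC
chipset_link = 4    # reserved for the AM5 chipset link
usable = native_lanes - chipset_link

nvme_ssd = 4        # one M.2 PCIe NVMe drive at x4
gpu_lanes = usable - nvme_ssd

print(usable)       # lanes available to the rest of the system
print(gpu_lanes)    # lanes left for a discrete GPU -> runs at x8, not x16
```

Since PCIe links negotiate down to the widest supported power-of-two width, eight remaining lanes put the GPU in x8 mode.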

Interestingly, AMD hasn't fully utilized the "Gorgon Point" silicon in the desktop Ryzen AI 400G series. For example, the top model, the Ryzen AI 7 450G, is configured with four "Zen 5" cores and four "Zen 5c" cores, making up an eight-core configuration. The fully unlocked "Gorgon Point" silicon in laptops has 12 cores in total: four "Zen 5" and eight "Zen 5c" cores, a configuration similar to "Strix Point." AMD's approach with the iGPU is also worth noting: the top 450G processor model comes with only 8 iGPU compute units, half the CUs physically available on the silicon, and most other models in the series come with just 4 CUs.

Memory Makers Shift to Hourly Contracts as AI Demand Continues to Climb

4 March 2026 at 15:55
The memory procurement market has adopted a new business model: hourly contracts, in which a quoted price is valid for only a single hour and every change requires a new quote. Memory makers like SK hynix, Samsung, and Micron are responding to massive demand for their memory solutions with contract types that force OEMs to decide on DRAM purchases within a very short window. Memory makers are thus requiring much faster contract settlements, as the rapid rise in demand causes product pricing to change literally by the hour. Large PC OEMs, among the biggest customers, now have to ship PCs priced against one quote while their future products remain exposed to hourly price swings. How sustainable and stable this market will be remains to be seen.

Interestingly, the DRAM customer market is splitting into two camps. A short list of deep-pocketed customers, including large cloud providers, major automakers, and top smartphone firms such as Apple and Samsung Electronics, retains priority access to DRAM and the strongest pricing leverage. Memory manufacturers like SK hynix and Micron are said to prioritize these relationships above all else, favoring buyers who can prepay or settle in cash. For the vast remainder, more than 190,000 small and medium enterprises, the situation is harsher: many lack the cash flow and bargaining leverage to absorb rapid price jumps. As costs climb, some firms are revising demand forecasts downward to avoid margin erosion. Demand outside the hyperscaler and data-center sector may be revised downward across many companies, as consumers grow less keen on spending more for products that get more expensive by the day.

ASUS Raises GeForce RTX 50 Series "Blackwell" Pricing in China, Radeon Pricing Unchanged

4 March 2026 at 15:29
ASUS has reportedly updated its product pricing in the Chinese market to reflect the memory component shortage across the semiconductor industry. According to Board Channels, ASUS is adjusting pricing for GeForce RTX 50 Series "Blackwell" GPUs with GDDR7 memory, while pricing for the AMD Radeon RX 9000/7000 and other series remains unchanged. At the very top, ASUS is raising the price of its RTX 5090D v2 SKUs by about 500 yuan, approximately $72 at the time of writing. Other SKUs like the GeForce RTX 5080, RTX 5070 Ti, regular RTX 5070, and RTX 5060 Ti 16 GB are seeing increases of anywhere from $14.50 to $45. Interestingly, some older low-end SKUs like the popular GT 1030 and GT 710 series, which provide basic graphics output for many prebuilt PCs, are also getting a price bump of up to $8. AMD's Radeon series reportedly remains unchanged, which can be explained by those GPUs having already gone through a price-increase cycle and by GDDR6 memory being in better supply than GDDR7.
You can check out the entire list with proposed price changes below.

EA Working on Javelin Anticheat Port for Windows-on-Arm

4 March 2026 at 13:34
EA posted a job listing seeking an engineer to port its Javelin Anticheat, a kernel-level anticheat solution, to the Windows-on-Arm (WoA) platform. This is a significant indicator of where the industry is headed, confirming that the biggest game developers are officially porting their game engines, games, and anticheat solutions to Arm-based PCs running the Windows 11 operating system. Interestingly, this coincides with NVIDIA's introduction of its N1/N1X SoCs with Arm-based CPU cores, which are expected to launch in the first half of this year. These will offer consumers 20 CPU cores, consisting of 10 Cortex-X925 and 10 Cortex-A725 CPU cores based on the Armv9.2 ISA, along with a "Blackwell" GPU optimized for low power settings with 6,144 CUDA cores. Adding to the growing ecosystem of WoA solutions, NVIDIA will be competing with Qualcomm's recent Snapdragon X2 Elite and X2 Plus SoCs.

If readers recall Valve's efforts to run regular Windows games on Linux, one of the biggest obstacles was kernel-level anti-cheat, which simply couldn't work outside a standard Windows environment. A similar situation now exists even within Windows itself, as EA needs a specialized build that works on non-x86 deployments such as standard Windows-on-Arm. An official deployment is likely still a few months away, but the job listing suggests some internal work is already in progress, and the senior engineer EA is seeking would bring the remaining pieces together to make the new platform work without issues.

(PR) Intel Board Chair Frank Yeary Steps Down, Craig Barratt Takes Over

4 March 2026 at 01:24
Intel Corporation today announced that its board of directors has elected Dr. Craig H. Barratt as independent chair, effective following the company's Annual Stockholders' Meeting on May 13, 2026. Barratt will succeed Frank D. Yeary, who is retiring from the board and will not stand for reelection at the Annual Meeting. Yeary has served as a director since 2009 and as chair since 2023.

"On behalf of the board and the entire company, I want to thank Frank for his commitment to Intel and his strong leadership as chair during one of the most consequential periods in Intel's history," said Lip-Bu Tan, CEO, Intel. "Frank led the effort to bring me in as the company's CEO, encouraged disciplined board oversight, and reinforced strong board governance. With his and the board's support, I have been empowered to take decisive actions to strengthen our financial foundation, advance our process roadmap and position the company for long-term competitiveness. His leadership helped guide Intel through a period of transformation and onto firmer footing for the next phase."

MSI GeForce RTX 5090D v2 Lightning Appears with 24 GB VRAM

4 March 2026 at 00:21
The MSI GeForce RTX 5090D v2 Lightning has officially appeared on the Chinese market, according to a Bilibili user named "Hardware Patrick Star." This China-exclusive SKU adapts the new GeForce RTX 5090D v2 with MSI's overclocking enhancements and an all-in-one liquid cooler, allowing the card to reach new performance heights. The RTX 5090D v2 retains the GB202 family's compute power, with 21,760 "Blackwell" CUDA cores and a 575 W TGP, but cuts memory to 24 GB of GDDR7 on a 384-bit bus, down from the previous 32 GB on a 512-bit bus. MSI noted that the GeForce RTX 5090D Lightning was limited to 1,300 units; since the China-exclusive SKU counts as a completely different graphics card, another 1,300 units of the RTX 5090D v2 Lightning may exist for the Chinese market. The Bilibili user's card is numbered 909, indicating that much of the run has already been distributed to retail channels.

We reviewed MSI's regular GeForce RTX 5090 Lightning graphics card and found that the GPU can reach up to 1,000 W with the OC BIOS loaded, observing an average GPU clock of 3,218 MHz thanks to the water cooling. Independent testing might show the RTX 5090D v2 Lightning achieving similar performance, but since it's a China-focused SKU, we will likely only see reviews from Chinese hardware outlets. Earlier hands-on testing reported by EXPreview suggests mixed results for buyers. In pure gaming, the RTX 5090D and RTX 5090D v2 are often nearly identical, with frame rate differences usually within a percent or two, so most current 4K titles will not noticeably suffer from the missing 8 GB. For AI workloads and model inference, the gap is more significant, with single-digit to low-double-digit performance drops where the extra memory matters.

Apple M5 Pro and M5 Max Debut New "M-Core" Tier and SoIC 2.5D Packaging

3 March 2026 at 22:25
Apple today launched its most advanced silicon designs yet with the introduction of the M5 Pro and M5 Max processors for MacBook Pro laptops. Both SoCs feature an 18-core CPU with six new "Super Cores" and 12 "M-Cores." The main difference between the M5 Pro and M5 Max lies in the size of the integrated GPU and the maximum memory capacity Apple can equip these SoCs with. With these two chips, Apple has added another core tier to its lineup: the "M-Core," which sits between the Super Core and the Efficiency Core. Essentially, Apple renamed the performance core to Super Core and introduced the M-Core tier below it. Interestingly, the efficiency core is completely absent from the new SoCs, leaving a combination of performance and middle-class cores. For context, the regular M5 SoC has four Super Cores and six efficiency cores.

Inside these new SoCs, the six super cores run at 4.61 GHz, while the M-cores run at 4.38 GHz. The M-core is a 7-wide out-of-order execution CPU that delivers roughly 70% of the P-core's performance at slightly lower power. This new core tier is expected to boost the multithreaded performance of the M5 Pro/Max processors by up to 20%, according to preliminary estimates found on Chinese Baidu forums. For the M5 Pro, Apple includes 16 MB of cache for the super cores, 16 MB for the M-cores, and 24 MB of memory-side cache. The memory of choice is LPDDR5X running at 9,600 MT/s, with up to 64 GB of capacity. In the M5 Max, the core caches remain the same, but the memory-side cache grows to 48 MB and capacity rises to up to 128 GB. Both SoCs feature GPU cores running at 1.62 GHz, with a 20-core iGPU in the M5 Pro and a 40-core iGPU in the M5 Max.

NVIDIA Lowers HBM4 Specs for "Vera Rubin" VR200 as Memory Suppliers Miss 22 TB/s Target

3 March 2026 at 16:52
NVIDIA has reportedly lowered its performance requirements for the HBM4 memory used in "Rubin" GPUs, as SK hynix and Samsung are reportedly struggling to meet the ambitious performance targets set by NVIDIA. According to a new note from SemiAnalysis, NVIDIA is reducing its specification requirements for the upcoming GPU generation. Originally, NVIDIA targeted a total bandwidth of 22 TB/s for the Rubin chip, but memory suppliers seem to be having difficulty meeting these requirements. Initial shipments are expected to achieve closer to 20 TB/s, which translates to approximately 10 Gbps per pin for HBM4. This indicates that NVIDIA's aggressive upgrade plan for "Vera Rubin" is facing a setback, and the final performance will differ slightly.

Interestingly, NVIDIA's initial target for the "Vera Rubin" VR200 NVL72 system was 13 TB/s in March 2025, which was later upgraded to 20.5 TB/s by September. At CES 2026, NVIDIA confirmed that the VR200 NVL72 system is now operating at 22 TB/s of bandwidth. Compared to AMD's Instinct MI455X accelerator, which has 19.6 TB/s, NVIDIA initially had lower system bandwidth. They addressed this by using faster DRAM and improving interconnects between CPUs, GPUs, and the entire system. However, as memory makers like SK hynix and Samsung struggle to meet NVIDIA's performance requirements, we will see HBM4 speeds of about 20 TB/s for the entire "Vera Rubin" system.

NVIDIA GeForce v595.71 Driver Reportedly Restricts Voltage on RTX 50 Series GPUs

3 March 2026 at 15:12
NVIDIA released its GeForce 595.71 WHQL Game Ready driver yesterday to address issues with the previous 595.59 WHQL version. The trouble doesn't seem to be over yet, however, as multiple users running the latest driver report that it restricts GPU voltages across the RTX 50 series of "Blackwell" graphics cards. According to these reports, the v595.71 driver causes a significant performance drop across multiple titles, all stemming from a capped GPU core voltage that reduces clock frequency. Wccftech's testing confirmed that an MSI GeForce RTX 5090 SUPRIM X used to run in a 1.020-1.030 V range, yielding about 3,015-3,030 MHz in FurMark stress testing on the older v591.86 driver with a manual overclock applied.

However, without any change in settings, the GPU now runs at a lower voltage range between 1.005 V and 1.010 V, with occasional drops to 1.0 V. This results in boost frequencies below 3,000 MHz, degrading performance while also lowering power draw. NVIDIA may be experimenting with lower voltage caps to limit how high the GPU can boost, so it draws less power and reduces the risk of the fragile 12V-2x6 connector overheating. Yesterday's GeForce 595.71 WHQL release notes said the previous driver's issues were resolved, including fans not spinning or not being detected at all, but made no mention of intentional or unintentional voltage regulation within the driver. Hence, we are left waiting for an official company response.

Entry-Level PC Segment Might Disappear by 2028, Claims Gartner

2 March 2026 at 22:13
Rising memory and storage costs are pricing entry-level PC buyers out of the market, and the segment may disappear entirely. According to analyst firm Gartner, the sub-$500 PC sector might vanish by 2028. Gartner's analysis indicates that memory, which made up a relatively small 16% of a PC's total bill of materials (BOM) in 2025, is expected to rise to nearly a quarter of the PC's cost at 23%, making the entry-level segment unsustainable. "This sharp increase removes vendors' ability to absorb costs, making low-margin entry-level laptops nonviable. Ultimately, we expect the sub-$500 entry-level PC segment will disappear by 2028," said Ranjit Atwal, Senior Director Analyst at Gartner. He added, "In addition, rising AI PC prices will delay the projected 50% market penetration of AI PCs until 2028."

We have witnessed multiple price increases across many PC components such as DRAM, NAND Flash, and GPUs. With manufacturers unable to produce PCs at any tangible profit levels in the sub-$500 PC sector, it might entirely disappear from the mainstream PC market. The concept of budget builds might become obsolete, with the majority of PCs ending up in categories above that threshold, near or beyond four figures. Gartner estimates that there will be about a 130% increase in combined DRAM and NAND Flash pricing by the end of this year, increasing PC prices by about 17% compared to 2025 levels. This situation will push consumer and enterprise demand toward premium PCs.

Phison Seeks Customer Prepayments as NAND Flash Prices Surge 500% Over Six Months

2 March 2026 at 21:13
With NAND Flash pricing skyrocketing, up nearly 500% over the past six months, Phison has reportedly started requiring some customers to make prepayments to control supply. Even customers who haven't yet placed a Phison order will need to send funds as a form of credit toward a set amount of Phison controllers, SSDs, or other storage products. In a notice to customers, Phison stated that the rapid increase in NAND Flash demand has pushed various parts of the storage supply chain toward alternative payment methods with a focus on quicker settlements, meaning customers must either accept faster contract settlements or use other forms of payment such as prepayment credits.

Phison is a company primarily focused on making SSD controllers, which are manufactured as logic devices at TSMC, Samsung, or other fabs, not NAND Flash. Its latest E28 controller is produced on TSMC's 6 nm node, a mature class of semiconductor nodes, which suggests Phison's own supply chain is largely intact. However, if Phison is arranging NAND Flash purchases on behalf of customers, that would explain the push for faster settlements; specific arrangements likely vary by customer and should be confirmed case by case. In some cases, Phison provides SSD makers with reference designs and handles part of the design and supply-chain logistics in-house, which makes the situation more understandable: Phison would need fast contract settlement to secure the best possible price for the NAND Flash that ultimately pairs with a Phison controller. The full statement follows.

Intel Arc Pro B70 Pro-Viz GPU Tested with BMG-G31 Die

2 March 2026 at 19:25
Intel has confirmed the existence of its larger Arc Pro B70 "Battlemage" graphics card, designed for professional visualization and AI workloads, through its LLM Scaler software, which included a small performance test run in a non-ideal scenario. The company plans to release this GPU within the current quarter, so the official launch and availability could come in about a month. The upcoming Arc Pro B70 and Arc Pro B65 GPUs are part of this release, utilizing the long-rumored BMG-G31 GPU die intended for higher-end models.

Starting with the more advanced Arc Pro B70, Intel plans a BMG-G31 configuration featuring 32 Xe2 cores and 32 GB of GDDR6 memory on a 256-bit bus. This setup translates to 4,096 FP32 cores in the full configuration, doubling the core count and memory capacity of the current single-GPU Arc Pro B60. For the smaller Arc Pro B65, Intel has scaled the BMG-G31 die down to 20 Xe2 cores, for a total of 2,560 FP32 cores. While this matches the core configuration of the Arc Pro B60, the B65 comes with 32 GB of GDDR6 memory, 8 GB more than the Arc Pro B60. As dual-GPU configurations were common with the Arc Pro B60, we might also see dual-GPU PCBs with the Arc Pro B70 if Intel's partners like Maxsun follow suit with their Arc Pro B60 Dual-GPU card.
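The FP32 core counts follow directly from the Xe2 core layout, where each Xe core packs eight vector engines that are 16 FP32 lanes wide; a quick sketch of that arithmetic:

```python
# FP32 "core" counts implied by the Xe2 core figures above.
# Each Xe2 core contains 8 vector engines (XVEs), each 16 FP32 lanes wide.
lanes_per_xe_core = 8 * 16  # = 128 FP32 lanes per Xe2 core

arc_pro_b70 = 32 * lanes_per_xe_core  # full BMG-G31: 4,096 FP32 cores
arc_pro_b65 = 20 * lanes_per_xe_core  # cut-down die:  2,560 FP32 cores
print(arc_pro_b70, arc_pro_b65)
```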

NVIDIA Releases GeForce 595.71 WHQL Game Ready Drivers

2 March 2026 at 18:36
NVIDIA has released its latest GeForce 595.71 WHQL Game Ready Driver, addressing issues from the previous 595.59 WHQL version, which caused many headaches for gamers who installed it. With the new 595.71 WHQL, NVIDIA is providing support and optimizations for games such as Resident Evil Requiem and Marathon. Alongside these games, the new 595.71 WHQL fixes problems found in 595.59 WHQL, including a bug where one or more GPU fans stopped spinning after the update. Thankfully, this has been fixed, and gamers were spared the headaches of potentially damaging their expensive GPUs. Additionally, hardware monitoring utilities can now once again recognize all GPU fans, allowing users to continue monitoring and fine-tuning fan profiles without issues. Interestingly, this comes as another game ready driver release, and not a hotfix version. NVIDIA also fixed some game artifacts that were appearing specifically with the GeForce RTX 50 series, such as green artifacts in Total War: THREE KINGDOMS and black bars appearing in The Ascent.
DOWNLOAD: NVIDIA GeForce 595.71 WHQL

Intel Launches Xeon 6+ "Clearwater Forest" with 288 E-Cores on 18A Process

2 March 2026 at 18:21
Intel used its MWC conference in Barcelona to showcase its most core-dense Xeon 6+ processor, codenamed "Clearwater Forest." As one of Intel's most complex chiplet designs, the package combines 12 compute chiplets manufactured on the Intel 18A node with three active base tiles on Intel 3 and two I/O tiles on Intel 7. In this configuration, each compute tile contains six modules of four "Darkmont" efficiency cores, providing 24 E-cores per tile and a maximum of 288 "Darkmont" E-cores on a single socket. A two-socket system therefore approaches 576 cores. The design connects clusters with a high-bandwidth on-chip fabric and stacks dies using Foveros Direct 3D, while EMIB links connect the tiles in a 2.5D arrangement.

Each "Darkmont" E-core comes with a 64 KB instruction cache, a wider front end, and a larger out-of-order window to sustain more in-flight work. Execution resources and the number of execution ports have been increased to improve parallel integer and vector throughput. Physically, clusters are grouped in four-core units sharing about 4 MB of L2 cache per group, and the package-level last-level cache can exceed a gigabyte, with about 1,152 MB of combined last-level cache across the package. "Clearwater Forest" supports the existing Xeon server platform socket, 12 memory channels, and broad I/O, including 96 PCIe 5.0 lanes and 64 CXL 2.0 lanes. Memory speed targets push toward DDR5-8000.

Microsoft Shader Execution Reordering Brings 90% Performance Increase on Intel Arc B-Series, 80% on NVIDIA "Blackwell" GPUs

2 March 2026 at 13:31
Microsoft recently updated its Agility SDK to version 1.619, bringing DirectX Shader Model 6.9 alongside new DirectX 12 improvements. Now, Microsoft's latest demo of Shader Execution Reordering (SER) confirms a massive performance uplift across several GPUs: up to a 90% improvement on Intel Arc B-Series, and more than 80% on NVIDIA "Blackwell" according to independent benchmarks. With SER, the API gives applications the ability to dynamically sort rays for highly optimized parallel execution, improving performance by a large margin. In Microsoft's own testing, Intel's Arc B-Series GPUs, which include "Battlemage" discrete GPUs and Xe3-based integrated GPUs in "Panther Lake," achieved a 90% framerate increase in the technology demonstration, suggesting that ray tracing performance still has plenty of optimization headroom.

Meanwhile, the company also tested the NVIDIA GeForce RTX 4090 with SER, which scored a 40% improvement over the default ray ordering of the previous execution model. Independent testing from Osvaldo Pinali Doederlein on X showed the GeForce RTX 5080 "Blackwell" GPU gaining about 80% in this demo, which gives confidence that games adopting the technology will provide gamers with a massive performance boost. Microsoft built the D3D12RaytracingHelloShaderExecutionReordering demo as a minimal demonstration of SER, so anyone can test the performance improvement on their own hardware.
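To illustrate the idea behind SER in plain terms, here is a toy Python sketch of the reordering concept (this is not the actual D3D12/HLSL API, and the shader names are made up for illustration): grouping rays by the shader they will invoke lets each batch execute coherently, without the divergence that hurts GPU throughput.

```python
from collections import defaultdict

def shade_coherently(rays, shader_table):
    """Toy model of shader execution reordering: bucket rays by the
    shader they hit, then run each bucket as one coherent batch."""
    buckets = defaultdict(list)
    for ray in rays:
        buckets[ray["hit_shader"]].append(ray)
    results = []
    # Every ray in a bucket runs the same shader, which is the kind of
    # execution coherence SER aims for on real hardware.
    for shader_id, batch in buckets.items():
        shader = shader_table[shader_id]
        results.extend(shader(r) for r in batch)
    return results

# Hypothetical shaders, for illustration only.
shaders = {"glass": lambda r: ("glass", r["id"]),
           "metal": lambda r: ("metal", r["id"])}
rays = [{"id": 0, "hit_shader": "metal"},
        {"id": 1, "hit_shader": "glass"},
        {"id": 2, "hit_shader": "metal"}]
print(shade_coherently(rays, shaders))
# [('metal', 0), ('metal', 2), ('glass', 1)]
```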

(PR) AMD Launches Ryzen AI 400 Series Processors for Mobile and Desktop

2 March 2026 at 12:35
At Mobile World Congress 2026, AMD announced an expanded Ryzen AI portfolio with the launch of the AMD Ryzen AI 400 Series and Ryzen AI PRO 400 Series desktop processors. The new processors deliver powerful on-device AI acceleration and next-generation performance, enabling users to run AI applications and LLMs locally and tackle compute-intensive applications, including those for design and engineering, with ease. Additionally, AMD is expanding the Ryzen AI 400 Series mobile portfolio to include workstations.

With these additions, Ryzen AI 400 Series processors enable original equipment manufacturers (OEMs) to offer next-gen AI PCs across high-performance desktops, laptops and mobile workstations optimized for modern workloads.

AMD Ryzen 5 5500X3D Launches in China, Keeping AM4 Socket and "Zen 3" Generation Alive

2 March 2026 at 12:17
AMD has expanded the availability of its Ryzen 5 5500X3D to the Chinese market, giving new life to a processor that first appeared in Latin America last year. There was no formal launch event or major announcement. Instead, the chip appeared through retail channels quietly, with a listing price of 1,199 RMB (roughly $175). At its core, the Ryzen 5 5500X3D is a six-core, 12-thread "Zen 3" processor designed for the AM4 platform. It runs at a 3.0 GHz base frequency and can boost up to 4.0 GHz. What sets it apart from standard Ryzen 5 models of the same generation is its expanded cache configuration. With a combined 99 MB of L2 and L3 cache, it targets gaming workloads that benefit from reduced memory latency and improved data access patterns, as seen with the other X3D SKUs.

AMD lists the processor at a 105 W TDP, and buyers should note that it does not include a bundled cooler and lacks integrated graphics. At $175, this pricing positions the 5500X3D as one of the more accessible X3D options available. AMD has revived its AM4 socket with yet another "Zen 3" chip, this time an X3D variant whose extra cache keeps it competitive with modern designs. With support for DDR4 memory, PCIe 4.0, and a wide range of AM4 motherboards, including X570 and B550, the 5500X3D offers existing users a relatively inexpensive drop-in upgrade. For those running Ryzen 3 or Ryzen 5 chips from the 5000 series, it could be a very good upgrade without switching to a completely new platform and more expensive DDR5 memory.

Intel Publishes "Granite Rapids-WS" Xeon 600 Turbo Frequencies, AVX-512 and AMX Slash Boost Speeds

1 March 2026 at 00:07
In early February, Intel finally updated its HEDT sector with the latest "Granite Rapids-WS" Xeon 600 Series processors for workstations. The company has now published detailed turbo frequency tables specifying each core's boost clock under workloads like SSE, AVX2, AVX-512, and AMX. This means that during a continuous workload such as AMX, these CPUs can only sustain the frequencies defined in the tables below. At the very top of the new "Granite Rapids-WS" stack is the Xeon 698X, featuring 86 cores and 172 threads, backed by 336 MB of L3 cache. The chip runs at a 2.0 GHz base clock, boosting up to 4.8 GHz with Turbo Boost Max 3.0, or 4.6 GHz under Turbo Boost 2.0. The CPU is fully unlocked for overclocking, which is still relatively rare in the Xeon workstation space.

In non-AVX workloads, this CPU can boost up to 4.8 GHz, while its lowest-performing core, numbered 86, sits at 3.0 GHz. AVX2 workloads cause a significant frequency drop, with the base frequency falling to 1.7 GHz and the slowest core reaching only 2.9 GHz when boosting across the 86-core design. AVX-512 turbo frequencies go lower still, with this flagship SKU running at a base clock of just 1.3 GHz and 2.5 GHz across its 86 cores. The most demanding scenario is with AMX enabled, which results in a base frequency of only 1.1 GHz and just 2.0 GHz across all cores at once. This significant reduction reflects how power-hungry dense vector and matrix instruction processing is, placing a very heavy load on the CPU.
More information about other SKUs and details on AVX2, AVX-512, and AMX turbo frequencies follows.
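Tabulating the Xeon 698X figures quoted above makes the step-downs easier to compare (the GHz values come from this article; the percentage math is our own):

```python
# Xeon 698X turbo figures quoted above, in GHz:
# (base clock, slowest-core boost) per instruction class.
turbo = {
    "SSE/non-AVX": (2.0, 3.0),
    "AVX2":        (1.7, 2.9),
    "AVX-512":     (1.3, 2.5),
    "AMX":         (1.1, 2.0),
}

base_ref, boost_ref = turbo["SSE/non-AVX"]
for mode, (base, boost) in turbo.items():
    # Drop relative to the non-AVX figures; AMX lands at a 45% lower
    # base clock and roughly a third off the all-core boost.
    print(f"{mode}: base -{1 - base / base_ref:.0%}, "
          f"all-core -{1 - boost / boost_ref:.0%}")
```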

Intel "Bartlett Lake-S" Flagship Appears, Won't Boot on Consumer Motherboards

28 February 2026 at 14:47
Intel's flagship "Bartlett Lake-S" processorβ€”Core 9 273 PQEβ€”has reached enthusiasts who are testing whether any consumer motherboard can boot the CPU and utilize its 12 P-Core gaming performance. According to Overclock.net user "Talon2016," who managed to obtain a sample of the LGA-1700 flagship CPU SKU, the processor won't boot using a consumer ASUS ROG Maximus Z790 Apex motherboard. These CPUs are designed for edge and embedded deployments with specialized platforms that lie outside the consumer sector. This high-TDP PQE variant has a base power of 125 W, powering a 12 P-Core variant with 24 threads and a base frequency of 3.4 GHz. This model can boost all 12 cores to 5.3 GHz, while a single thread can reach up to 5.9 GHz independently for tasks requiring intensive single-threaded performance. It is equipped with 36 MB of L3 cache and an integrated GPU with 32 EUs of Xe-LP graphics.

Unfortunately, regardless of the SKU or consumer motherboard choice, the platform will not work and the CPU will not boot, as Intel has restricted "Bartlett Lake-S" to keep it away from consumers. Companies like ASRock have confirmed that the "Bartlett Lake-S" Core 200E will not be supported on consumer motherboards and will only be used in the embedded and edge computing sector. You can technically buy and use this CPU for any Windows or Linux task, including gaming, but you will have to acquire an industrial-grade motherboard or a mini-PC built for the platform. Gaming support will also be limited, as Intel explicitly will not bring optimizations like APO/IPO to the platform; instead, it will be treated as a generic x86-64 Intel CPU, just like any other processor. Extracting maximum gaming performance could also be problematic, as compatibility issues may arise, given that Intel has envisioned other applications for this platform.

NVIDIA Pulls GeForce 595.59 WHQL Game Ready Driver After Widespread Bug Reports

27 February 2026 at 10:58
NVIDIA has officially pulled its latest GeForce 595.59 WHQL Game Ready driver from the downloads page as user reports of stability issues continue to pile up. Reportedly, users are experiencing fan detection issues on their GPU coolers, with only a single fan working, and some have also run into clock stability problems. On NVIDIA's official GeForce Forums, users have been complaining about driver stability, and the company has advised anyone experiencing symptoms to roll back to the previously stable 591.86 WHQL driver. The GeForce 595.59 WHQL Game Ready driver was launched as an optimization package to get Resident Evil Requiem and Marathon running smoothly, but it instead turned into a disaster, as the community's reports show.
NVIDIA: "February 26th, 11am PT Update: We have discovered a bug in the Game Ready and Studio 595.59 WHQL drivers and have removed the downloads temporarily while our team investigates. For users that have already installed this driver, and are experiencing issues with fan control, please roll back to 591.86 WHQL. NVIDIA app users can reinstall their previous driver by clicking the three dots in the Drivers tab."
Update 06:57 UTC: We have removed the broken driver version from our downloads section.

Early AMD FSR 4.1 DLL Update Reportedly Leaks with Minor Visual Improvements

26 February 2026 at 23:11
Early access to AMD Radeon Software's "Vanguard" driver testing program has reportedly revealed a new Radeon FSR 4.1 DLL file, the next update for AMD's FSR 4 technology. According to the latest leak, AMD is preparing the FSR 4.1 update, which should bring visual enhancements, performance enhancements, or both. Some Reddit PC enthusiasts are applying workarounds to run the file on RDNA 3 hardware, even though AMD officially doesn't support FSR 4 on the RDNA 3 generation due to missing instructions on the older microarchitecture. Running these files can produce visible quality gains, but the results are experimental, varying widely by title and system setup. Even when a leaked DLL carries a digital signature, running unofficial binaries can trigger instability, break driver integrity checks, or conflict with future official updates.

However, the enthusiast community has run the experiment, and early side-by-side comparisons show small improvements in fine detail and edge definition when the leaked FSR 4.1 binary is forced into titles that previously used FSR 4.0.3. Testers describe sharper foliage and fabric textures and less ghosting, while others report inconsistent results and artifacts, suggesting the update is still a work in progress. The update could have been expected to land alongside the AMD Software Adrenalin 26.2.2 WHQL drivers that launched today, as the DLL file was found in the 26.2.2 beta, but since it remains experimental, the next Adrenalin update may instead deliver FSR 4.1 as an official package.

NVIDIA Confirms Supply Constraints May Limit Gaming GPU Availability

26 February 2026 at 17:19
During the company's latest Q4 earnings call, NVIDIA CFO Colette Kress confirmed that the gaming sector may struggle. In a short but very important note, she stated, "Looking ahead, while end demand for our products remains strong and channel inventory levels are healthy, we expect supply constraints to be the headwind to Gaming in Q1 and beyond." The statement is rather vague but conveys the message that supply constraints will definitely impact the GeForce RTX 50 series lineup in the current quarter and possibly beyond. NVIDIA's current product inventory is in good shape, meaning both silicon from TSMC and secured GDDR7 memory are sufficient for the time being, but once inventory levels start to deplete, availability will become a problem.

Team Green has massive capacity secured at TSMC's facilities for manufacturing "Blackwell" GPUs, so no production issues stem from that end. However, the memory makers NVIDIA works with are supply constrained in delivering GDDR7, leaving NVIDIA with little to allocate outside its high-margin server sector. As NVIDIA supplies its AIC partners with both memory and GPU dies, having no memory modules to bundle with the GPUs becomes a supply bottleneck, leaving the company waiting for weeks at a time for fresh memory inventory. Hence, NVIDIA expects gamer demand to remain strong, but the situation may worsen slightly as inventory levels deplete.

Intel Arc GPU Graphics Drivers 101.8531 Beta Released

26 February 2026 at 16:56
Intel has released its latest 101.8531 Non-WHQL Arc GPU graphics drivers, offering day-one game support for titles like Marathon, Resident Evil Requiem, and the World of Warcraft: Midnight DLC expansion pack. Intel notes that with this driver version, users of Intel Arc "Battlemage" and Arc "Alchemist" integrated and discrete GPUs will see support and performance improvements across other games as well, which are being further optimized. For example, on "Panther Lake," this beta driver delivers a 35% FPS increase in The Witcher 3 at 1080p with high settings, while Arc "Alchemist" sees a Resident Evil Requiem FPS boost of up to 40% on average at 1080p with ultra settings. Interestingly, Intel is optimizing new games for its older products, a promising sign for anyone considering the newer "Panther Lake" chips for gaming. With Intel planning to launch Core Ultra G3 SoCs for handhelds in a few months, consistent driver optimization is quite noteworthy.

DOWNLOAD: Intel Arc Graphics Driver 101.8531 Beta

NVIDIA Ships First "Vera Rubin" VR200 Samples to Customers

26 February 2026 at 12:15
NVIDIA reported its full-year 2025 results with massive revenue of $215.9 billion, with $68.1 billion coming in the fourth quarter. The company's earnings call after the results were published contained some interesting information and confirmed that the first "Vera Rubin" VR200 racks are shipping to customers as samples, with volume shipping to commence in the second half of 2026. NVIDIA confirmed that the upcoming platform, which includes the Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch, will be powering the next-generation trillion-parameter models with only one-fourth of the GPUs compared to the previous-generation "Blackwell" and will reduce inference costs by up to 10 times.
NVIDIA CFO Colette Kress: "We shipped our first Vera Rubin samples to customers earlier this week, and we remain on track to commence production shipments in the second half of the year. Based on its modular, cable-free tray design, Rubin will deliver improved resiliency and serviceability relative to Blackwell. We expect every cloud model builder to deploy Vera Rubin."

AMD Ryzen Z1 Extreme SoC Update Arrives for ASUS ROG Ally After Six Months

25 February 2026 at 19:29
A few days ago, we reported that AMD was seemingly ending driver support for its Ryzen Z1 Extreme SoC, just two and a half years after its launch. Since Lenovo issued product guidance that it would no longer provide updates, and ASUS's own ROG Ally handheld console had not received an update in over six months, the situation looked dire. However, ASUS today released a new driver update for the ROG Ally, which had been stuck with SoC drivers from August 2025. This changes the picture from a platform completely abandoned for half a year to one on a periodic update cadence that will hopefully continue without further surprises.

Initially, we couldn't determine who was to "blame" for this irregular driver update cycle, as either AMD or the OEMs could have been slow with driver updates. Because AMD offers a configurable TDP (cTDP) for the Z1 Extreme with values ranging from 9 to 30 W, OEMs can get the SoC in various configurations, each of which needs to be tested and verified before an official driver is distributed. Adding to the mess, Lenovo Korea has confirmed that its own driver update plan for the product has stopped, leaving users to switch to other platforms or to Linux-based operating systems that carry their own drivers in order to extract maximum longevity. Hence, the entire situation is now even more complicated.

Samsung is Transforming Old 2D NAND Fabs Into Modern HBM4 Production

25 February 2026 at 19:00
Samsung is officially ending production of its 2D NAND flash storage this year, and the company will repurpose its old production lines to better fit AI-driven demand. According to The Elec Korea, Samsung plans to officially stop 2D NAND production at its Hwaseong site, where Line 12 carries the aging technology. Instead of completely abandoning the facility, which houses plenty of chip-making tools, Samsung will repurpose it for DRAM metallization, the process of forming the metal interconnect pathways within the DRAM that connect memory cells. Interestingly, Hwaseong Line 12 has a monthly production capacity of 80,000 to 100,000 12-inch wafers, a significant volume currently dedicated solely to 2D NAND flash, a technology rendered obsolete by 3D NAND.

Continuing the Line 12 legacy will be Samsung's 6th-generation 10 nm-class 1c DRAM, a technology used for HBM4, and Samsung expects the total wafer capacity for 1c DRAM to reach about 200,000 wafers per month in the second half of the year. Adapting the old 2D NAND Flash production site will definitely help, and Samsung will run this production along with Pyeongtaek Line 3 and Line 4.

HWiNFO v8.42 Update Brings Better Intel "Nova Lake" Processor Support

25 February 2026 at 12:40
The popular hardware diagnostics utility HWiNFO launched its latest v8.42 version on February 24th. Interestingly, one of the main features of this release is improved support for Intel "Nova Lake" processors, despite this CPU generation being months away from commercial launch. The tool can now distinguish between different Intel processors and even run diagnostics on "Nova Lake-S" engineering samples. The utility likely flags "Nova Lake" by its unique processor ID, which has already surfaced in early GCC and LLVM compiler patches that enable these processors ahead of launch. With the launch of v8.42, the tool also gains NPU stress testing, but only in the Pro version.
Below is the complete changelog.

Apple's 2026 MacBook Pro Refresh Brings Dynamic Island, OLED Screens, and New Touch Gestures

25 February 2026 at 03:01
Apple is preparing a massive refresh cycle for its 2026 MacBook Pro laptops, with a major redesign front and center. According to Bloomberg's Mark Gurman, one of the most reliable sources of Apple news, the company is preparing to implement its Dynamic Island feature on both the MacBook Pro 14 and MacBook Pro 16. Alongside Dynamic Island, which replaces the traditional notch on today's MacBooks, Apple is also implementing OLED display technology to replace the Mini LED displays found on the current generation.

For the Dynamic Island, Apple will bring over much of the functionality from its iPhone models, including status updates and a front camera cutout, but in a different shape. While the iPhone uses the Dynamic Island to host Face ID sensors, the MacBook Pro version should only include a camera cutout with software support from the OS. However, the most interesting part of the report is the touchscreen capability of the OLED panels. Apple is reportedly optimizing its operating system to unlock new touch gestures, where each touch invokes a new panel or interface. The design will reportedly not mimic the iPad, instead acting as another input aid alongside the keyboard and mouse.

NVIDIA Hiring Engineers to Optimize Proton and Vulkan API Performance on Linux

24 February 2026 at 22:23
NVIDIA has posted multiple job openings that hint at the company's plans for gaming on Linux. According to the now-removed listings, NVIDIA is hiring engineers to diagnose CPU and GPU performance bottlenecks on Linux when running the Proton compatibility layer and the Vulkan graphics API. This suggests that NVIDIA is either refining its product support for the massive wave of gamers transitioning to Linux or preparing for an entirely new platform. For example, as NVIDIA is currently preparing N1/N1X SoCs for laptops, the company could create dedicated handheld chips for devices like Valve's Steam Deck, which currently runs on an AMD SoC. With multiple handheld vendors on the market, NVIDIA could power a new handheld with its laptop N1/N1X chips under Linux.

The job descriptions clearly indicate that the work will cover everything from the game engine and translation layers such as Proton down to drivers and hardware interaction. This focus suggests that efforts will not be limited to profiling but will also include proposing API usage changes, building repeatable test cases, and collaborating with translation-layer and distribution maintainers to implement fixes. Anyone using NVIDIA graphics under Linux stands to benefit, as polishing the software stack brings definitive quality-of-life improvements to games: fewer stutters, better frame pacing, and reduced CPU overhead in titles that rely on Vulkan or run under Proton, which translates Windows-specific API calls so games can run on Linux.