
Intel CEO Confirms Ongoing Product Collaboration with NVIDIA

Intel's CEO, Lip-Bu Tan, has confirmed that the collaboration with NVIDIA is ongoing, and we are about to see the results of the partnership announced late last year. Yesterday, Carnegie Mellon University awarded an Honorary Doctorate in Science and Technology to NVIDIA CEO Jensen Huang for his outstanding contributions to the fields of accelerated computing and AI. Intel's Lip-Bu Tan had the honor of placing the doctoral hood on him. In a post on X, the Intel CEO confirmed that Intel and NVIDIA are still working together to "develop exciting new products," indicating that the long-promised integration of NVIDIA GeForce RTX GPUs within Intel SoCs is still in progress. What began as an initial investment from NVIDIA into Intel and an announcement of product collaboration is evolving into a much deeper integration of both companies into a unified ecosystem.

First, we look forward to seeing Intel chips that integrate third-party GPU IP, this time from NVIDIA, with its GeForce RTX graphics embedded in an Intel-branded package. Similar to the now almost forgotten "Kaby Lake G" collaboration with AMD, Intel plans to integrate third-party graphics into its x86 SoC, codenamed "Serpent Lake," which is scheduled to be the first joint collaboration between NVIDIA and Intel in a single chip package. Second, we anticipate customized x86 Xeon server processors for NVIDIA, which Intel has been producing for large hyperscalers like Amazon for years. NVIDIA is also integrating Intel Xeon processors alongside its custom "Grace" and "Vera" CPUs, with designs customized by Intel for their HGX AI server nodes.

SK hynix Eyes Intel EMIB 2.5D Packaging for HBM Memory

SK hynix is collaborating with Intel to utilize its Embedded Multi-die Interconnect Bridge (EMIB) 2.5D packaging technology for HBM memory. As SK hynix aims to diversify its supply chain and customers are increasingly considering Intel Foundry, the South Korean memory giant is exploring research and development efforts with Intel on 2.5D packaging technology. Intel's premier 2.5D packaging technology is its EMIB, which interconnects multiple silicon dies using bridges embedded in a packaging substrate. SK hynix is interested in integrating this technology into its HBM memory, presumably to bring its HBM4 memory modules up to standard for EMIB integration, should its AI chip partners choose Intel Foundry for advanced packaging of their next-generation solutions.

We have already covered that small silicon EMIB bridges, available in variants like EMIB-M with embedded MIM capacitors and EMIB-T with TSVs, provide low-cost, high-density shoreline connections ideal for logic-to-logic and logic-to-HBM interfaces. However, until now, SK hynix has been using TSMC and its CoWoS 2.5D packaging technology. As CoWoS is gradually reaching its limits and customers are seeking alternative packaging methods, EMIB is emerging as a strong candidate to continue the scaling of chiplets in many directions beyond the traditional reticle limit of 830 mm² of silicon area.

AMD Radeon GPU Drivers 26.5.1 Break Blender Cycles Path-Tracing Engine

AMD's latest Adrenalin Edition 26.5.1 WHQL Drivers are causing problems for Blender users, according to the latest Blender issue tracker. Blender 5.1.1 is crashing with these drivers, particularly when using the Cycles path-tracing render engine. The issue appears to be a ROCm runtime mismatch. Blender 5.1.1 uses ROCm 6 runtimes, but the latest AMD driver 26.5.1 only includes ROCm 7, leading to compatibility issues that prevent the application from running smoothly. Multiple users have reported crashes, and we now understand the background.

Sahar A. Kashi, a developer involved in the AMD ecosystem, confirmed that when a factory reset is performed and the driver is installed from scratch, the amdhip64_6.dll file is removed, causing Blender Cycles to attempt loading ROCm 7 kernels. However, since Blender 5.1.1 is compiled only for ROCm 6, this runtime mismatch results in application crashes. To resolve this issue on AMD GPU systems, users should install the older 26.4.1 drivers and keep them until Blender 5.2 is released and the Cycles path-tracing render engine achieves full ROCm 7 compatibility.
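The runtime mismatch can be illustrated with a minimal sketch. The DLL name comes from the report above; the helper function and the file listings are purely hypothetical, not Blender or driver code:

```python
# Minimal sketch: Cycles in Blender 5.1.1 needs the ROCm 6 HIP runtime
# (amdhip64_6.dll on Windows, per the report). This helper is illustrative.
def rocm6_runtime_present(driver_files):
    """Return True if the ROCm 6 HIP runtime DLL is among the driver files."""
    return any(name.lower() == "amdhip64_6.dll" for name in driver_files)

# Driver 26.4.1 ships the ROCm 6 runtime; 26.5.1 reportedly ships only ROCm 7.
print(rocm6_runtime_present(["amdhip64_6.dll", "amdhip64.dll"]))  # True
print(rocm6_runtime_present(["amdhip64_7.dll"]))                  # False
```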

AMD's DGF SuperCompression Shrinks Geometry File Size by 22%

Late last year, AMD released a paper introducing its new Dense Geometry Format (DGF) 3D graphics technology, designed to compete with NVIDIA's RTX Mega Geometry. Today, the company is unveiling DGF SuperCompression (DGFS), a new compression technique within the DGF graphics stack that reduces game geometry file sizes by an average of 22% compared to standard DGF when GDeflate is applied to the asset stream. The result is significantly smaller asset sizes, allowing for larger games without the need for expanded storage. As games now often require hundreds of gigabytes of installation space, compressing storage formats is a key strategy to fit more games on a PC without immediately needing additional drives, and a 22% cut through proprietary compression makes installation sizes considerably more manageable.

However, it's important to note that since DGFS is a compression format for DGF, the mesh stream array produced by DGFS must be decoded before the game can load it, much like a ZIP file can't be used until it's unzipped. DGFS data therefore resides only in PC storage, not in memory, as it is not consumed directly by the GPU. AMD has developed a CPU-based decoding process that decompresses DGFS assets in real time during asset streaming, which AMD claims "should be sufficiently fast." GPU-based decoding solutions are also possible, suggesting we might see faster implementations once the format reaches software SDKs.
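As a rough illustration of the arithmetic, assuming the claimed ~22% average reduction holds; the stream sizes below are hypothetical:

```python
# Hypothetical DGF geometry stream with the claimed ~22% average DGFS
# reduction applied on top of GDeflate compression.
def dgfs_size_gb(dgf_stream_gb, reduction=0.22):
    return round(dgf_stream_gb * (1 - reduction), 1)

print(dgfs_size_gb(50))   # 39.0 -> about 11 GB saved on a 50 GB stream
print(dgfs_size_gb(100))  # 78.0
```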

AMD Radeon RX 9070 XT Drops Below MSRP in China

Chinese GPU distributors are now selling some AMD Radeon RX graphics cards below the MSRP set by AMD. According to Channel Gate, AMD graphics cards in China are currently being sold at a loss, with many channel distributors actually losing money on some Radeon models. For example, the entry-level Radeon RX 7650 GRE, which has an MSRP of 2,099 RMB ($308), is now being sold for about 1,740 RMB ($256), including sales tax. This represents a 17% decrease compared to the MSRP, making it an attractive price point for buyers, but not so much for sellers who have a stockpile of these graphics cards. Moving on to the more recent RDNA 4 generation, the Radeon RX 9060 XT 8 GB edition has an MSRP of 2,499 RMB ($367) but is now being sold at 2,250 RMB ($330). This is another drop of 10% compared to the MSRP, offering gamers a better deal.

The smallest difference is seen with AMD's top RDNA 4 GPU, the Radeon RX 9070 XT. This model has an MSRP of 4,999 RMB ($735) in China but is now trading at a slight discount of less than one percentage point at 4,950 RMB ($727). While this may not seem significant, it indicates that the card is finally trading at prices that AMD encouraged its board partners to adopt for the latest RDNA 4 family. However, these changes are only now starting to appear in China.
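The quoted discounts check out with simple arithmetic (prices in RMB, as above):

```python
# Checking the street-price discounts quoted above (prices in RMB).
def discount_pct(msrp, street):
    return round((msrp - street) / msrp * 100, 1)

print(discount_pct(2099, 1740))  # 17.1 -> Radeon RX 7650 GRE, ~17%
print(discount_pct(2499, 2250))  # 10.0 -> Radeon RX 9060 XT 8 GB, ~10%
print(discount_pct(4999, 4950))  # 1.0  -> Radeon RX 9070 XT, about 1%
```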

DLSS 5: Gamers Prefer Original Visuals, Many Await Real-World Results

When NVIDIA announced its Deep Learning Super Sampling (DLSS) 5 technology, which the company describes as the first real-time neural rendering technology to bring photorealism to game textures, the response from gamers was not entirely positive. Many were unhappy with the results, and the backlash grew online. That's why we asked TechPowerUp readers in a poll to share their opinions on the technology, and found a consistent theme in the results. After collecting nearly 20,000 votes, a majority of 58% of gamers expressed that AI should not alter games at all. They prefer to keep their favorite titles original and intact, as envisioned by the original game studio, without any changes to lighting or photorealism. They want no alterations, meaning AI should not change character faces, apply realistic material rendering, or add any other modifications.

Interestingly, the second-largest group is still undecided, waiting to see real-world results in their favorite AAA titles. When DLSS 5 is implemented in major releases, about 28% of gamers who have seen NVIDIA's DLSS 5 demos believe that the final release will shape their opinion, whether positive or negative. This indicates that the technology has not yet fully resonated with the gaming community, who require further convincing. A smaller segment, 8.1% of those polled, believes that DLSS 5-enabled titles actually look better than native rendering, showing some optimism that the technology can improve visuals. Finally, about 6.4% of respondents are willing to accept visual changes if they lead to a significant FPS boost. As with DLSS and other upscaling methods, gamers now expect to see a substantial performance increase from neural rendering, and the situation appears similar with DLSS 5.

PC Motherboard Sales Face Sharp 25%+ Decline Amid Weak Demand

PC motherboard sales are on track for some of the biggest corrections in recent times as manufacturers struggle with weak demand, according to a DigiTimes report. The AI data center buildout has quickly spilled over into the consumer DIY PC space: severe silicon shortages across the industry have driven DRAM and CPU demand so high that DDR4 and DDR5 memory kit prices have risen significantly, and regular CPUs have also seen large price increases. Caught in the middle of this shortage, PC motherboard makers are seeing their unit sales revised down significantly. The report notes that all Taiwanese motherboard makers have significantly lowered their 2026 shipment targets, with some experiencing more than a 25% decrease in projected unit sales.

Interestingly, it's not only CPU and memory shortages driving this lowered demand; there are indications that consumers have slowed down their NVIDIA GPU upgrade cycles, which is impacting new motherboard sales. Particularly with the "Blackwell" GPU generation, consumers began purchasing PCIe 5.0 motherboards to achieve the greatest performance increase. However, as these GPUs became rarer and more expensive due to the global DRAM shortage, consumers have become reluctant to upgrade. ASUS is projected to sell about 10 million motherboards in 2026, while MSI and GIGABYTE are now projecting sales of less than 10 million units each. This represents about a 25% yearly decrease from 2025 sales. The worst position is estimated for ASRock, which is expected to see a 30% decrease according to the report.

Microsoft Is Testing a Windows 11 Feature That Maxes Out CPU Speed for Faster App Launches

According to Windows Central, Microsoft is working on a new feature for Windows 11 called "Low Latency Profile," as part of the Windows K2 effort. This feature aims to make app launches noticeably faster by pushing the CPU to its maximum boost frequency in very short bursts. Reportedly, the feature boosts the CPU to its maximum frequency for 1-3 seconds, resulting in noticeably smoother app launches during testing. When launching Microsoft applications like Edge and Outlook, known as "in-box" apps, the result is about a 40% faster application launch. Other parts of the OS, such as the Start Menu and context menus, may respond up to 70% faster. Overall, the Windows 11 operating system is expected to receive a significant responsiveness boost, though at the cost of the CPU briefly reaching its maximum frequency.

Currently, this feature within the Windows K2 effort is automatic, with no clear indication if it can be turned on or off. Running a CPU at its maximum frequency is somewhat unusual, as the purpose of an operating system is to minimize strain on the PC, leaving headroom for heavier applications to load. However, since the boost is only applied in short bursts of up to three seconds, it is expected that the performance benefits and overall smoothness will outweigh potential issues. These issues include elevated CPU frequency during lighter tasks and general OS usage, which could result in slightly higher temperatures overall. For laptop users, this might lead to faster battery drain, but it is likely that the Windows K2 effort will account for this, with minimal impact.

Palit Confirms Next-Gen GALAX HOF and KFA2 GPUs Already in Development

After the unification of Palit's sub-companies, GALAX and KFA2, under one roof, Palit has confirmed that the company is already developing next-generation GALAX Hall-of-Fame (HOF) graphics cards, as well as regular GALAX and KFA2-branded GPU models. The company claims that the current generation of GPUs is here to stay, but what's especially interesting is the note about next-generation GPUs being in development. This means that the upcoming NVIDIA GeForce RTX 60-Series "Rubin" gaming GPUs will also be available in GALAX HOF and KFA2 variants, allowing extreme overclockers to access more HOF generations in the future. As Palit is one of NVIDIA's biggest add-in card partners, we expect that future product launches will remain NVIDIA-exclusive.

Additionally, Palit has emphasized that the leader of Galax Brazil, Ronaldo from TecLab, will continue with GALAX to work on the GPUs and enhance GALAX branding and market presence in every possible way.

PCIe 8.0 Targets 1 TB/s Bandwidth and May Need a New Connector

PCI-SIG has released a small update on its upcoming PCIe 8.0 standard, with the draft milestone reaching version 0.5. Perhaps the most intriguing aspect of this draft update is not the performance itself, but the exploration of a new connector technology to support this high-bandwidth protocol. Last year, we learned that PCI-SIG plans to implement a 256.0 GT/s raw bit rate and 1 TB/s of bidirectional bandwidth in the x16 lane configuration. We had assumed that the protocol would continue using the familiar connector technology seen in previous PCIe updates. However, it turns out that the current connector might be a limiting factor, prompting the search for a replacement for the traditional PCIe electrical connection.

The traditional PCIe connector is a copper-based edge connector carrying up to 16 lanes between a graphics card and its slot, with a full x16 link delivering the maximum bandwidth the platform supports. At a 256 GT/s raw bit rate, an x16 link provides about 1 TB/s of bidirectional bandwidth, eight times that of the PCIe 5.0 platforms used with modern GPUs and CPUs. This indicates that the current physical layer facilitating communication between a GPU and a motherboard is nearing saturation with the advent of PCIe 8.0, necessitating the consideration of an alternative connection method.
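The headline numbers can be sanity-checked with simple arithmetic. This ignores FLIT/FEC encoding overhead, so the figures are approximate upper bounds rather than exact effective throughput:

```python
# PCIe 8.0 draft figures: 256 GT/s per lane over a x16 link.
raw_gtps = 256                       # raw bit rate per lane, GT/s
lanes = 16
per_dir_gb_s = raw_gtps * lanes / 8  # bits -> bytes: 512 GB/s each direction
bidirectional_gb_s = per_dir_gb_s * 2
print(bidirectional_gb_s)            # 1024.0 -> about 1 TB/s, as stated

# Generational speedup over PCIe 5.0 (32 GT/s per lane):
print(raw_gtps / 32)                 # 8.0
```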

Google Chrome Silently Downloads 4 GB AI Model on Your PC Without Consent

Google Chrome is reportedly downloading a 4 GB AI model onto user PCs without consent, prior information, or any way for less technical users to discover it independently. According to Alexander Hanff, who publishes a blog called "That Privacy Guy," Google Chrome is installing a 4 GB Gemini Nano model locally without user consent. The researcher discovered that Google Chrome downloads and installs the local AI model automatically, without any user input. Google Chrome initiates this process by creating an "OptGuideOnDeviceModel" folder, which contains a "weights.bin" file that is exactly 4 GB. This file is used for Google's Gemini Nano model, which handles on-device scam detection, AI-assisted writing, and other tasks. The entire process takes about 15 minutes to complete, all without the user's knowledge.

Why does this happen? Google Chrome automatically scans your device to assess whether it can run local AI models and only triggers the download when AI features are active. There is no specific checkbox in the browser settings indicating a 4 GB local AI model download. Interestingly, users who deleted this 4 GB model found that Google Chrome redownloaded it repeatedly, continuing the cycle. The only way to prevent or disable the download is by disabling Chrome's AI features through the "chrome://flags" settings, using enterprise policy settings in your organization, or simply uninstalling Chrome to stop the automatic downloads.
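For illustration only, here is a sketch of the kind of check a user could run over a listing of their Chrome profile files. The folder name "OptGuideOnDeviceModel" and the "weights.bin" file come from the report above; the helper function, the dict-based file listing, and the version subfolder are assumptions:

```python
# Sketch of a check over a listing of Chrome profile files. Folder and file
# names come from the report; everything else here is illustrative.
def gemini_nano_model_size(files):
    """files: dict mapping relative paths to sizes in bytes.
    Returns the size of a Gemini Nano weights file if present, else None."""
    for path, size in files.items():
        if "OptGuideOnDeviceModel" in path and path.endswith("weights.bin"):
            return size
    return None

profile = {"OptGuideOnDeviceModel/2024.1/weights.bin": 4 * 1024**3}
print(gemini_nano_model_size(profile))  # 4294967296 -> the ~4 GB model
```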

AMD to Expand EPYC Lineup With Specialized CPUs for AI, HPC, and Cloud

AMD's CPU offerings will soon expand into more categories, specializing in different tasks and use cases. During the latest Q1 earnings call, AMD CEO Dr. Lisa Su confirmed that customers are increasingly interested in specialized EPYC solutions and that future CPU generations will cater to different customer needs. This means that agentic AI will receive a dedicated EPYC CPU SKU, HPC workloads will have their own EPYC CPU SKU, AI training and inference will get a separate EPYC SKU, and cloud workloads will have a distinct SKU. This expansion of task-specialized CPU designs will go beyond the "Venice" CPU platform, which is expected to be released this year. At CES 2026, AMD confirmed that EPYC "Venice" will be a highly dense CPU package with up to 256 cores and 512 threads of "Zen 6c" cores, while the regular "Zen 6" configuration will have a maximum of 96 cores and 192 threads.

With this generation of EPYC CPUs based on the "Zen 6" core IP, AMD will promote "Venice" CPUs broadly, while the "Venice" CPU generation will be complemented by the "Verano" CPUs. These appear to use the same "Zen 6" microarchitecture but have optimizations specifically designed for AI infrastructure. Contrary to previous assumptions that "Verano" would be a "Zen 7" IP, it is now confirmed to be part of the 6th-generation EPYC CPU family. This indicates that AMD is starting its CPU customization for different workloads as early as this year. As the server CPU market is projected to reach $120 billion by the end of this decade, less than 3.5 years away in 2030, it makes sense to have different CPU SKUs for each task and capture a significant portion of that revenue. Server CPUs are also expected to grow at a 35% compound annual growth rate, which is impressive for a relatively mature market.

Terafab's Cost Could Reach $119 Billion as First Phase Starts at $55 Billion

Elon Musk recently announced an ambitious project called Terafab, which he plans to build on Tesla's campus in eastern Travis County, Austin, Texas. Thanks to recent court hearings in Grimes County, Texas, we have learned that the first phase of the Terafab project is expected to cost $55 billion. However, if additional phases are constructed, this figure could rise to $119 billion. While this is an astonishingly high amount, it seems justified given Terafab's ambitious goals. Semiconductor manufacturing is one of the greatest marvels of the modern world, with only a few companies competing for the top spot. The engineering and scientific expertise required for building modern semiconductor manufacturing facilities is so rare that it is concentrated in just a few places around the globe.

The goal of Terafab is to consolidate the entire chip manufacturing process under one roof. The plant is expected to integrate several stages of semiconductor production at a single site, including logic fabrication, memory, packaging, testing, and mask production. This setup is unusual, as these steps are typically spread across multiple specialized facilities and companies. The idea behind Terafab is that consolidating these processes could accelerate development by allowing engineers to design, test, and revise chips with fewer delays, essentially enabling rapid prototyping. This contrasts with the traditional, lengthy process of manufacturing chips at one site, packaging them at another, and testing them at yet another facility. A typical semiconductor fab for nodes below 3 nm costs over $20 billion, but that only covers silicon manufacturing. Terafab's goal to handle everything will push that cost astronomically higher.

Valve Releases Steam Controller CAD Files for Modders

Valve's recent launch of the Steam Controller was successful from day one, and the company is now releasing something for the modding community to begin their adventures. Today, Valve has released 3D CAD files, providing modders with a 3D model for their needs. This means anyone can now access Steam Controller and Puck drawings in STP and STL formats, along with engineering drawings that highlight critical features and areas to avoid when modding. This allows anyone to design accessories with known clearances. The community's efforts should soon bear fruit, as Valve recognizes that its community is full of enthusiasts with a wealth of expertise who create new accessories daily. This will include 3D prints from creative individuals, as well as potential accessories to enhance the Steam Controller experience for some users.

Yesterday, Valve also reassured customers that a restock of the Steam Controller is happening soon, after the initial release sold out in just 30 minutes despite the controller's $99 price, with traffic so heavy that the payment processing system briefly froze. Those who missed out are currently facing reseller prices of around $300, but Valve says another stock drop is coming. With the 3D files now in the hands of the modding community, we should start seeing some interesting designs on social media.

AMD Says Agentic AI Could Put More CPUs Than GPUs in Compute Nodes

AMD reported impressive first-quarter 2026 earnings, and during the earnings call, CEO Dr. Lisa Su shared some intriguing insights about the agentic AI era. This era is driving CPU usage to unprecedented levels, to the extent that the number of CPUs in a single compute node is becoming almost equal to the number of GPUs. In response to a question from an analyst, Dr. Lisa Su explained that the traditional setup of one CPU paired with four or even eight GPUs is shifting towards a one-to-one ratio of CPUs to GPUs. This change indicates a surge in CPU demand due to the agentic features, which require large language models to utilize the host CPU for continuous updates and orchestration of these agents. Previously, CPUs primarily served as hosts to initiate GPU operations for training and inferencing AI models. However, as AI becomes more agentic, the CPU's role is becoming significantly more important.
AMD's Lisa Su: "We certainly see the movement towards where, in the past, the CPU-to-GPU ratio was primarily just as a host node, in like a 1:4 or 1:8 configuration node, now changing and getting closer to a 1:1 configuration, or even -- you can even imagine, if you get lots and lots of agents, that you could have more CPUs than GPUs..."

(PR) AMD Reports First Quarter 2026 Financial Results

AMD today announced financial results for the first quarter of 2026. First quarter revenue was $10.3 billion, gross margin was 53%, operating income was $1.5 billion, net income was $1.4 billion and diluted earnings per share was $0.84. On a non-GAAP(*) basis, gross margin was 55%, operating income was $2.5 billion, net income was $2.3 billion and diluted earnings per share was $1.37.

"We delivered an outstanding first quarter, driven by accelerating demand for AI infrastructure, with Data Center now the primary driver of our revenue and earnings growth," said Dr. Lisa Su, AMD chair and CEO. "We are seeing strong momentum as inferencing and agentic AI drive increasing demand for high-performance CPUs and accelerators. Looking ahead, we expect server growth to accelerate meaningfully as we scale supply to meet demand. Customer engagement around MI450 Series and Helios is strengthening, with leading customer forecasts exceeding our initial expectations and a growing pipeline of large-scale deployments providing us with increasing visibility into our growth trajectory."

MemTest86 Adds Preliminary LPCAMM2 Testing Support

PassMark's MemTest86 update version 11.7 (Build 1000) has added preliminary testing for the LPCAMM2 memory type, primarily for Intel's "Meteor Lake" and "Arrow Lake" chips. This means that the latest memory form factor can now be tested using a standardized testing methodology. As LPCAMM2 adoption rises, it is only logical that MemTest86 adds support for the form factor, which we have started seeing on Lenovo ThinkBook 14+ and 16+ and Framework Laptop 13 Pro models, with modules from major DRAM manufacturers such as CXMT and Samsung. CXMT's LPCAMM2 memory uses LPDDR5X-8533 modules, while Samsung's modules run LPDDR5X-9600 memory stacks. Interestingly, with further support from SK hynix, Samsung, Micron, and now CXMT, LPCAMM2 is aiming to become a universal memory standard. Since (LP)CAMM2 is expected to be the main form factor for the upcoming DDR6 memory, having a memory testing tool ready is essential.

Valve Confirms Steam Controller Restock is Happening Soon

Valve's highly anticipated release of the Steam Controller was so successful that the entire stock sold out in just 30 minutes. The demand has been nothing short of exceptional, even for a controller priced at $99 in the United States, €99 in European Union countries, £85 in the UK, $149 CAD in Canada, and $149 AUD in Australia. At around 17:00 UTC on May 4, the Steam Controller officially went on sale through Valve's website. The site experienced such a high volume of customers that the payment processing system froze, causing many to encounter errors before both the website and payment processor resumed functioning. To prepare, some customers loaded their Steam Wallets with funds days in advance, but the high demand still prevented them from securing a unit immediately. For those who managed to purchase one, congratulations. However, those who didn't are turning to resellers, who are charging about $300 for the controller. Valve is reassuring customers not to worry, as another stock drop is coming soon.
Valve: "Steam Controller ran out faster than we anticipated, and we hate that not everyone who wanted one was able to get it. We're working on getting more in stock and will have an update on expected timeline soon."

Apple Eyes Intel and Samsung Foundries for Chip Production in the U.S.

Companies like Apple are always seeking top-tier manufacturing for their Apple Silicon products, which range from the A-Series chips in iPhones and low-power MacBooks to the more powerful M-Series SoCs that power iPads and higher-end Macs. Manufacturing these custom processors has traditionally been handled by TSMC, but Bloomberg now reports that Apple has been in talks with both Intel Foundry and Samsung Foundry to manufacture some of its chips in the coming months. It has been known for some time that Apple is exploring Intel's 18A-P process design kits (PDKs). Apple has used version 0.9.1 of the PDK designed for Intel's 18A-P node. With performance, density, power, and other metrics meeting expectations, Intel could become Apple's source for advanced node production by 2027.

Additionally, Apple is reportedly waiting for Intel to release the 18A-P PDK version 1.0, which is on track to launch in the first half of 2026 or may have already been released to partners. Once available, Apple plans to start with the lowest-end M-series chip, used in MacBook Air and iPad Pro devices, as previously mentioned. This node is particularly interesting due to its performance characteristics, as the 18A-P can deliver a 9% performance increase at the same power level or achieve 18% power savings at the same performance level compared to the standard 18A. This is exactly what Apple is looking for. Coupled with better thermal conductivity, these designs should offer improved heat dissipation and performance compared to what Apple currently achieves with TSMC's 3 nm process in the M5 SoC.

Microsoft Pulls Windows 11 "No Worries" 32 GB RAM Recommendation After Backlash

Microsoft recently published a note stating that gaming PCs running Windows 11 should be equipped with 32 GB of RAM as a "no worries" update for systems handling demanding tasks like Discord alongside AAA titles running in the background. However, after our post reached millions of readers online, gamers reacted negatively to Microsoft's seemingly excessive requirement amid the worst DRAM shortage ever recorded. In response, Microsoft deleted the entire blog post. Now, clicking on the old link redirects to the Windows Learning Center, which features general blog posts with tips and tricks on enhancing your Windows 11 experience, with no trace of the original post.

As readers may recall, Microsoft promised to make Windows 11 a much-improved operating system with better performance, more UI uniformity, and reduced RAM consumption. The company is reportedly using feedback from its Insiders group, user telemetry analytics, and customer focus groups to ensure that Windows 11 is efficient, thoughtfully designed, and stable. Earlier this year, Microsoft pledged to address many user complaints, such as poor memory optimization within its flagship operating system, but these fixes have yet to be implemented. In the meantime, having more RAM is the only way to keep operations running smoothly. However, when the company began recommending 32 GB as a "no worries" upgrade, those pledges started to look like empty promises, giving enthusiasts reason to consider alternatives. Now, Microsoft appears to have recognized the mistake and has deleted its previous blog post, indicating that the company is actively listening to user feedback online.

HDD and SSD Shortages Drive Customers to Sign 5-Year Supply Contracts

Hard Disk Drives (HDDs) and Solid State Drives (SSDs) are among the most sought-after commodities in computing today, as the expansion of AI data centers consumes everything in its path. According to Seagate, Sandisk, and Western Digital, demand is so high that customers are signing long-term supply agreements lasting up to five years. This duration is significant because customers are now planning their contracts around demand expansion, which is not only substantial but will also bring better balance to the supply chain. With customers driving steady demand, HDD and SSD manufacturers know exactly how much spinning rust or NAND Flash to produce to meet this demand. Over time, this is a positive development for a supply chain that will adapt with expanding production capacity. However, it poses a short-term challenge for PC gamers.

For example, at the start of this year, we reported that HDD prices have soared by an average of 46% since mid-September. These changes have made spinning rust an expensive commodity, but this is minor compared to NAND Flash prices, which have increased 500% in just a few months. The expansion of AI data centers has depleted any remaining inventory of HDDs and SSDs, leaving the consumer PC market to compete for the few remaining units available for gaming PCs. Interestingly, HDDs contain almost no silicon for storage purposes, so their significant price increases are a supply chain issue unrelated to the semiconductor industry. Apart from the controllers that use silicon, HDD platters are made from materials that are not currently in short supply. However, high demand keeps their prices elevated.
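A quick worked example of what those percentages mean, using an illustrative baseline price of 100 units; note in particular that a 500% increase means six times the original price, not five:

```python
# A +46% move multiplies a price by 1.46, while +500% multiplies it by 6
# (a common misreading is "5x"). Baseline of 100 is illustrative.
def after_increase(price, pct):
    return round(price * (1 + pct / 100), 1)

print(after_increase(100, 46))   # 146.0 -> HDDs since mid-September
print(after_increase(100, 500))  # 600.0 -> NAND Flash, six times the old price
```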

DDR6 Development Aims for Commercial Shipments in 2028

It seems we are not too far from the next-generation Double Data Rate 6 (DDR6) memory for desktops and servers, as memory manufacturers are working with JEDEC to establish a new standard. According to South Korean media outlet The Elec, major memory makers such as SK hynix, Samsung, and Micron have reportedly begun designing DDR6 in their labs and are gradually coordinating module development with substrate manufacturers. This collaborative effort is taking place under the supervision of JEDEC, the industry authority that oversees standard development and ensures a common foundation for design. Manufacturers have had access to JEDEC's first DDR6 draft since 2024, but it still lacks concrete specifications such as finalized voltage ranges, signal usage, power envelopes, and pinout design. That is expected to change as manufacturers accelerate standard development.

Last year, we reported that the major players mentioned above had already moved past the prototype stage and embarked on rigorous validation cycles. Perhaps the most interesting aspect is the designated starting throughput of 8,800 MT/s, with plans to scale up to a staggering 17,600 MT/s, effectively doubling the ceiling of today's DDR5. This increase is driven by DDR6's 4×24-bit sub-channel architecture, a departure from DDR5's current 2×32-bit sub-channel structure that requires entirely new approaches to signal integrity. To overcome the physical limits DIMM form factors face at higher speeds, the industry is betting on CAMM2 technology. Early indications suggest that server platforms will lead the change, with high-end notebooks following once manufacturing ramps up.
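Those headline rates translate into per-module bandwidth straightforwardly: peak bytes per second equal transfers per second times the data-bus width in bytes. The sketch below is a back-of-the-envelope calculation assuming all 96 bits of the reported 4×24-bit layout carry data; whether the 24-bit sub-channels reserve some of that width for ECC is not yet finalized in the draft standard.

```python
def peak_bandwidth_gbps(transfers_mt_s: float, data_bits: int) -> float:
    """Peak transfer rate in GB/s: MT/s times bus width in bytes, scaled to GB."""
    return transfers_mt_s * (data_bits / 8) / 1000

# DDR5 module: 2 x 32-bit data sub-channels = 64 data bits
ddr5_ceiling = peak_bandwidth_gbps(8800, 64)    # 70.4 GB/s

# DDR6 module (assumed): 4 x 24-bit sub-channels = 96 bits, all carrying data
ddr6_launch = peak_bandwidth_gbps(8800, 96)     # 105.6 GB/s
ddr6_target = peak_bandwidth_gbps(17600, 96)    # 211.2 GB/s

print(ddr5_ceiling, ddr6_launch, ddr6_target)
```

If the 24-bit sub-channels turn out to include ECC bits, the effective data width, and therefore the usable bandwidth, would be lower than these figures.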

Intel Arc GPU Graphics Drivers 101.8737 Beta Released

Intel today released its latest Arc GPU graphics drivers, version 101.8737 Beta, bringing game-ready support for two new releases: the RPG Heroes of Might & Magic: Olden Era and the open-world title Neverness to Everness. Perhaps the most interesting part of this beta release is the series of fixes for the Pragmata game with the DirectX 12 API, which had resulted in application crashes when loading into the game menu on Intel Arc GPUs. This occurred on everything from "Panther Lake," "Lunar Lake," and "Meteor Lake" integrated graphics all the way to discrete Arc "Alchemist" and "Battlemage" solutions. That problem is now resolved, but some issues remain. For example, Fortnite might crash on "Wildcat Lake" systems during launch, and discrete GPUs might experience graphics corruption in games like Call of Duty: Black Ops 6 and Dune: Awakening. For a full list of known issues, check out the changelog below.
DOWNLOAD: Intel Arc Graphics Driver 101.8737 Beta.

Windows 11 Gamer Base Grows as Linux Slips in Steam Survey Data

The Windows 11 install base seems to be expanding, contradicting the overall sentiment surrounding Microsoft's highly controversial operating system. According to the latest Steam Hardware and Software Survey results from April, Windows 11 now accounts for 67.74% of the Steam gamer base, up 0.89 percentage points from March. This growth comes at the expense of the other operating systems noted in the Steam survey, primarily Linux and macOS. Interestingly, the April data puts Linux-based operating systems at 4.52%, a significant 0.81-point decrease from March. Most Linux distributions saw a decline in user share, with only Debian Linux, Ubuntu 24.04 LTS, and Fedora Linux 43 recording a meaningful uptick; the rest declined in April, suggesting a broader correction among gamers worldwide.

For Windows, both Windows 11 and Windows 10, which reached end of life back in October 2025, recorded increases, and the overall share of Windows-based gaming PCs grew by 1.14 percentage points in April. Windows now accounts for 93.47% of all gaming PCs, meaning that Linux and macOS remain relatively small next to Microsoft's dominance. Among gamers especially, switching to a different OS remains difficult despite recent growth rates. Even with Windows 11's issues, the majority of gamers stay on the platform because it offers the best game compatibility and the lowest learning curve of the platforms mentioned.

AMD Ryzen AI Max+ PRO 495 "Gorgon Halo" APU Appears with Radeon 8065S

AMD's upcoming APU refresh with the Ryzen AI Max 400 series is divided into "Gorgon Point" and "Gorgon Halo." Today, we see one of the first "Gorgon Halo" APUs appearing in online benchmark databases. The AMD Ryzen AI Max+ PRO 495 "Gorgon Halo" APU, featuring 16 cores and 32 threads based on the current "Zen 5" CPU architecture, has landed in the PassMark testing database. These cores can reach a boost frequency of up to 5.2 GHz, which is about a 100 MHz improvement over the current "Strix Halo" APU generation. Complementing the CPU setup is the RDNA 3.5 GPU architecture, now in the form of a Radeon 8065S, which appears to be an overclocked version of the current Radeon 8060S. This new Radeon 8065S iGPU runs at 3.0 GHz, while the current Radeon 8060S runs at about 2.9 GHz. No increase in cores is expected here, and the "Gorgon Halo's" integrated graphics should continue with the 40 RDNA 3.5 CUs.

In terms of performance, AMD has squeezed out a modest gain thanks to the higher boost frequency. PassMark's comparisons now list the new Ryzen AI Max+ PRO 495 "Gorgon Halo" APU about 4% ahead in multicore and about 3% ahead in single-core benchmarks compared to the AMD Ryzen AI Max+ PRO 395 "Strix Halo" APU. Another significant aspect is the integrated memory configuration. With the previous "Strix Halo," the maximum memory configuration was 128 GB, while the latest "Gorgon Halo" listing shows 192 GB of LPDDR5X memory, suggesting that AMD has updated its integrated memory controller to support a higher maximum capacity.

Steam Controller Goes Official on May 4 with $99 Price Tag

Valve has officially confirmed that its highly anticipated Steam Controller will go on sale globally on May 4. It will be priced at $99 in the United States, €99 in European Union countries, Β£85 in the UK, $149 CAD in Canada, and $149 AUD in Australia, marking a truly global launch. Designed as a universal control device, the Steam Controller aims to be a versatile gamepad for the broader Steam ecosystem, supporting PCs, laptops, Steam Deck, Steam Machine, and even the Steam Frame VR headset. While maintaining familiar core controls, Valve is clearly focusing on additional inputs, including dual trackpads, a gyro, Grip Sense, and four rear grip buttons, all of which can be customized through Steam Input.

Interestingly, Valve has revealed more details about some of the core technology behind the Steam Controller, with perhaps the most intriguing being magnetic thumbsticks built around TMR technology. Valve claims they offer a better feel, improved responsiveness, and much greater durability. The sticks also feature capacitive touch support for motion-based controls, meaning your commands can now be expressed in multiple ways. There is also a new puck accessory that handles both wireless connectivity and charging, snapping onto the controller magnetically to serve as a dock and transmitter in one.

Update 17:00 UTC, May 4: Steam Controller is now officially available!