


Intel Arc GPU Graphics Drivers 101.8737 Beta Released

4 May 2026 at 16:00
Intel today released its latest Arc GPU graphics drivers, version 101.8737 Beta, bringing game-ready support for two new releases: the RPG Heroes of Might & Magic: Olden Era and the open-world title Neverness to Everness. Perhaps the most interesting part of this beta release is the series of fixes for Pragmata under the DirectX 12 API, which previously crashed when loading into the game menu on Intel Arc GPUs. The crash occurred on everything from "Panther Lake," "Lunar Lake," and "Meteor Lake" integrated graphics all the way to discrete Arc "Alchemist" and "Battlemage" solutions. That problem is now resolved, but some issues remain. For example, Fortnite might crash during launch on "Wildcat Lake" systems, and discrete GPUs might experience graphics corruption in games like Call of Duty: Black Ops 6 and Dune: Awakening. For a full list of known issues, check out the changelog below.
DOWNLOAD: Intel Arc Graphics Driver 101.8737 Beta.

Windows 11 Gamer Base Grows as Linux Slips in Steam Survey Data

4 May 2026 at 15:35
The Windows 11 install base appears to be expanding, contradicting the overall sentiment surrounding Microsoft's highly controversial operating system. According to the latest Steam Hardware and Software Survey results for April, Windows 11 now accounts for 67.74% of the Steam gamer base, an increase of 0.89 percentage points over March. This growth comes at the expense of the other operating systems in the Steam survey, primarily Linux and macOS. Interestingly, the April data puts Linux-based operating systems at 4.52%, a significant 0.81-percentage-point decrease from March. Only Debian, Ubuntu 24.04 LTS, and Fedora Linux 43 recorded a meaningful uptick; the remaining Linux distributions declined in April, indicating some market-wide corrections among gamers worldwide.

For Windows, both Windows 11 and Windows 10, which reached end of life back in October 2025, recorded increases, and the overall share of Windows-based gaming PCs grew by 1.14 percentage points in April. Windows now accounts for 93.47% of all gaming PCs, meaning Linux and macOS remain relatively small compared to the dominance of Microsoft's OS. Among gamers especially, switching to a different OS remains problematic despite Linux's recent growth: even with Windows 11's issues, the majority of gamers stay on the platform because it offers the best game compatibility and the lowest learning curve of the platforms mentioned.
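The percentage-point arithmetic above can be sanity-checked by back-computing the implied March figures from the April shares and the stated month-over-month changes; a short Python sketch:

```python
# April 2026 Steam survey shares (%) and month-over-month changes
# (percentage points) as quoted above.
april = {"Windows 11": 67.74, "Linux": 4.52, "Windows overall": 93.47}
delta = {"Windows 11": +0.89, "Linux": -0.81, "Windows overall": +1.14}

# Back-compute the implied March 2026 shares.
march = {name: round(april[name] - delta[name], 2) for name in april}
print(march)  # {'Windows 11': 66.85, 'Linux': 5.33, 'Windows overall': 92.33}
```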

AMD Ryzen AI Max+ PRO 495 "Gorgon Halo" APU Appears with Radeon 8065S

4 May 2026 at 12:05
AMD's upcoming APU refresh with the Ryzen AI Max 400 series is divided into "Gorgon Point" and "Gorgon Halo." Today, we see one of the first "Gorgon Halo" APUs appearing in online benchmark databases. The AMD Ryzen AI Max+ PRO 495 "Gorgon Halo" APU, featuring 16 cores and 32 threads based on the current "Zen 5" CPU architecture, has landed in the PassMark testing database. These cores can reach a boost frequency of up to 5.2 GHz, which is about a 100 MHz improvement over the current "Strix Halo" APU generation. Complementing the CPU setup is the RDNA 3.5 GPU architecture, now in the form of a Radeon 8065S, which appears to be an overclocked version of the current Radeon 8060S. This new Radeon 8065S iGPU runs at 3.0 GHz, while the current Radeon 8060S runs at about 2.9 GHz. No increase in cores is expected here, and the "Gorgon Halo's" integrated graphics should continue with the 40 RDNA 3.5 CUs.

In terms of performance, AMD has managed to extract better results thanks to the higher boost frequency. PassMark's comparisons now list the new Ryzen AI Max+ PRO 495 "Gorgon Halo" APU about 4% ahead in multicore and about 3% ahead in single-core benchmarks compared to the AMD Ryzen AI Max+ PRO 395 "Strix Halo" APU. Another significant aspect is the integrated memory configuration. With the previous "Strix Halo," the maximum memory configuration was 128 GB, while the latest "Gorgon Halo" listing shows 192 GB of LPDDR5X memory, suggesting that AMD has updated its integrated memory controller to increase the maximum memory capacity.

AMD Readies Full Open-Source HDMI 2.1 Support for Linux

2 May 2026 at 16:32
If readers recall, AMD has been trying to get the HDMI Forum, the governing body behind the development of the HDMI standard, to approve open-source HDMI 2.1 support on Linux but faced strong rejection. However, today the situation appears to be different. An AMD Linux developer hinted that the company is preparing full HDMI 2.1 support for the AMDGPU driver, bringing a complete open-source implementation after years of work. Helping this effort is Valve, whose Steam Machine runs on the SteamOS Linux operating system and uses AMD graphics. Late last year, we reported that Valve was reportedly attempting to persuade the HDMI Forum to approve AMD's efforts to bring this implementation to the open-source Linux stack, but we haven't received an update since.

Today, the situation looks a bit different as AMD has submitted the first set of Linux kernel patches, focusing on the Fixed Rate Link (FRL) feature, exclusive to the HDMI 2.1 standard. This feature enables higher bandwidth over the port, effectively supporting 4K at 120 Hz and 5K at 240 Hz resolutions on AMD GPUs running Linux-based operating systems. As these resolutions require higher data bandwidth, it is necessary to use the newer HDMI 2.1 standard over the currently supported HDMI 2.0 in AMDGPU open-source graphics drivers.

Microsoft Now Recommends 32 GB RAM as a "No Worries" Upgrade for Windows 11

1 May 2026 at 21:21
Microsoft has published an updated support document outlining what the company believes the best Windows 11 gaming PCs have in common. Among the listed specifications, Microsoft makes an interesting note about RAM capacity. The document suggests that 16 GB is the baseline for a modern PC, describing it as a good "starting point." However, the company recommends that gamers aim beyond this baseline, now suggesting 32 GB as a "no worries" upgrade for gaming PCs. This higher RAM capacity makes running multiple workloads much easier, which is naturally true, but it raises questions about the feasibility of Microsoft's future plans to bring more optimizations to the operating system.

Most TechPowerUp readers are PC enthusiasts who understand how a RAM-limited system performs and know that more RAM is almost always better. However, in an era where the DRAM shortage makes it difficult for enthusiasts to simply buy more RAM, many are turning to leaner operating systems like Linux-based distributions, or even macOS, an entirely different platform. Microsoft now recommends a 32 GB capacity, stating it "helps if you run Discord, browsers, or streaming tools alongside your games." The extra memory also gives newer titles more breathing room as memory demands continue to rise. It may also indicate that Microsoft's operating system is more RAM-hungry than it used to be, an awkward message at a challenging time for RAM upgrades.

AMD Ryzen AI Halo Mini-PC to Arrive in June

1 May 2026 at 19:28
AMD previewed its Ryzen AI Halo mini-PC during the CES 2026 showcase, and the machine is set to be released as soon as June. According to a Reddit user, AMD presented a Ryzen AI Halo box during AMD AI Dev Day, showcasing the system in its full glory. This machine is powered by a Ryzen AI Max+ 395 APU, featuring a 16-core/32-thread "Zen 5" CPU, a large integrated GPU based on the RDNA 3.5 graphics architecture with 40 compute units, and a Microsoft Copilot+ ready NPU with 50 TOPS. It supports up to 128 GB of unified LPDDR5X memory, is compatible with Windows 11 and Linux, and comes with pre-loaded AI models optimized for the hardware. At AI Dev Day, AMD demonstrated the device running Ubuntu, likely the preferred OS for many of the AI developers this system targets.

AMD has developed an innovative cooling solution for the "Strix Halo" SoC, which includes a baseplate, a network of direct-touch flat heatpipes, an aluminium channel heatsink, and two lateral airflow blowers. AMD stated that the Ryzen AI Halo AI developer platform will be available from Q2 2026, which matches the supposed June launch. The price of this 128 GB model remains the biggest mystery, but don't expect it to come cheap. Below are some of the first real-life pictures, showcasing the design illuminated by a programmable RGB strip surrounding the box.

NVIDIA GeForce RTX 3060 12 GB Returns in June With AIC Partners ASUS, MSI, Colorful, and GALAX

1 May 2026 at 15:43
Chinese Board Channels sources now confirm that NVIDIA's upcoming resurrection of the GeForce RTX 3060 12 GB edition will take place in June, with many of NVIDIA's existing add-in-card (AIC) partners assisting in relaunching this five-year-old GPU. Interestingly, there are rumors that GALAX, recently integrated into the Palit group, will be among these partners, alongside NVIDIA's usual launch partners ASUS, Colorful, and MSI. It will be interesting to see whether these AIC partners design new PCBs for the GeForce RTX 3060 12 GB relaunch or reuse their older designs, which they probably stopped producing years ago. We have already reported that NVIDIA is reintroducing the GeForce RTX 3060 12 GB SKU with a 192-bit wide memory bus.

For this, NVIDIA will once again use Samsung's 8 nm DUV node, as it has in the past. The entire NVIDIA "Ampere" architecture lineup was produced on the 8 nm DUV node, and its return after several years was unexpected. We also reported that the rumored RTX 5050 9 GB edition is reportedly on hold, as NVIDIA is pausing the transition from its 8 GB RTX 5050 "Blackwell" version to a 9 GB model due to the reintroduction of the GeForce RTX 3060 12 GB "Ampere" GPU. Since both of these GPUs compete in the budget segment, the company will reportedly only release the older GeForce RTX 3060 12 GB SKU as its primary entry-level design.

Noctua Explains Why chromax.black Fan Releases Take So Long

1 May 2026 at 12:55
Austrian fan maker Noctua has published a technical blog explaining why it sometimes takes a long time to release the dark-edition chromax.black fans after the initial beige-and-brown design is out. The company compares the engineering required for a new color code to painting a Formula 1 car, rather than the simple repaint you would give a wooden fence. Noctua is known for its scientific testing, rigorous performance evaluations, and highly detailed lab experiments, which make its fans worthwhile and have built a massive following within the enthusiast community over the years. The company applies that same rigor to manufacturing, so even a simple color change is not taken lightly. For example, Noctua produces its fans using injection molding, where plastic is melted and forced into a steel mold; when a new pigment is introduced, the entire calculation can be disrupted.

Noctua designs its fans to tight tolerances to maximize airflow performance: the impeller blades run with a tip clearance of only a few tenths of a millimeter to the fan frame, about 0.5 mm on 120 mm fans and about 0.7 mm on 140 mm fans. Introducing a third-party pigment into this process could alter that tip clearance and potentially interfere with the Sterrox liquid-crystal polymer (LCP) material Noctua uses in its fans. Color pigments carry their own particle imperfections, which directly affect how the material behaves in the injection mold. This invalidates the hundreds of thousands of performance tests that Noctua conducts in the lab and significantly delays each chromax.black product launch.

Apple Reportedly Gives Up on Vision Pro After Disappointing Refresh

30 April 2026 at 00:19
According to sources close to MacRumors, Apple is abandoning the development of the next-generation Vision Pro headset after the product failed to capture significant market share. With sales of "only" 600,000 units, Apple has not seen this product line take off as its other products have. Launched in February 2024, the Vision Pro headset debuted with a steep $3,499 price tag. In October 2025, about a year and a half later, Apple updated the Vision Pro with its latest 3 nm M5 SoC, typically used in MacBooks, but even this refresh failed to generate significant interest and orders from users. Although the system offers a technically impressive solution, the market has reacted poorly, particularly due to the high price point Apple set.

Technically, the headset features a micro-OLED 3D display system with 23 million pixels and weighs between 750 and 800 grams, depending on the headband choice. However, users have complained about the device's weight and its distribution, which often concentrates on a single pressure point around the nose. No headband option has been able to completely alleviate this issue. Additionally, the price point is too high for consumers, especially for a technology that is relatively new to the Apple ecosystem. As a result, MacRumors tipsters suggest that Apple is close to completely abandoning the project.

Intel Prepares HBM Killer: HB3DM Memory Stacks with Z-Angle Technology

29 April 2026 at 22:27
Intel and SoftBank, through their subsidiary Saimemory, have been developing an alternative technology to the popular high-bandwidth memory (HBM) to provide more bandwidth and capacity for memory modules used with powerful AI accelerators. At VLSI 2026 in June, Saimemory is scheduled to present a paper on the newly developed HB3DM memory, which is based on Z-Angle Memory (ZAM) technology. This name refers to the vertical (Z-axis) stacking of dies, similar to traditional HBM. However, Intel aims to achieve impressive results using state-of-the-art manufacturing technology. The first generation of HB3DM will feature a total of nine layers, stacked using a hybrid bonding technique for 3D chip placement. At the base will be a logic layer that manages data movement within the chip, with eight DRAM layers on top for data storage. Each layer will include about 13,700 TSVs for hybrid bonding.

In terms of capacity, HB3DM will offer about 1.125 GB per layer, translating to 10 GB per memory module. Intel can achieve approximately 0.25 Tb/s of memory bandwidth per mmΒ², and for a 10 GB module with a 171 mmΒ² die area, we can expect around 5.3 TB/s per module. These impressive figures could quickly overshadow competing HBM4 memory, as HB3DM offers much higher bandwidth. HBM4 provides speeds of around 2 TB/s per stack, less than half of what HB3DM will deliver. However, HB3DM is limited by capacity, with only 10 GB available, whereas HBM4 can reach up to 48 GB per stack. Intel may increase the number of layers in production as HB3DM progresses, but for now, it is emerging as a bandwidth leader.
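The ~5.3 TB/s figure follows directly from the quoted bandwidth density and die area, remembering that Tb/s is terabits while TB/s is terabytes; a quick check in Python:

```python
# Per-module bandwidth check for HB3DM, using the figures quoted above.
density_tbps_per_mm2 = 0.25   # Tb/s (terabits) per mm^2
die_area_mm2 = 171.0          # module die area

total_terabits = density_tbps_per_mm2 * die_area_mm2   # 42.75 Tb/s
total_terabytes = total_terabits / 8                   # bits -> bytes
print(f"{total_terabytes:.2f} TB/s")  # 5.34 TB/s, matching the ~5.3 TB/s figure
```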

Intel 18A-P Node Delivers 9% Performance Increase, 18% Power Savings

29 April 2026 at 21:36
Intel's next-generation 18A-P node is ready, and the company has tested it, showcasing some impressive results. At the VLSI 2026 Symposium in Honolulu, Hawaii, Intel will present new research on the capabilities of the upcoming 18A-P node. According to the paper, 18A-P can deliver a 9% performance increase at the same power level, or 18% power savings at the same performance level, compared to the standard 18A. There is more to this than meets the eye: gains of this size would normally only be expected from a full node transition, such as 18A to 14A, yet they are already available with 18A-P, albeit without any density improvements. This makes the 18A-P node a very attractive option for external customers who want the transistor density of the 18A node found in "Panther Lake," but with significantly better characteristics.

For reference designs, Intel uses an Arm core sub-block to test frequency and power scaling. The new 18A-P node can yield much better results on paper, but one of the most interesting improvements is in manufacturing, specifically in something called skew corners. When a node is manufactured, no two transistors are identical due to the inherent physics of the manufacturing process, especially at today's scale. These variations are measured between fast and slow "corners," meaning faster and slower transistors. The skew refers to how wide the performance and power gap is between these transistors. Intel has managed to improve the skew corners on the 18A-P node by 30% compared to the standard 18A, meaning that power and performance characteristics are now more predictable, especially for parametric yields. This means that chip functions are now more predictable, and Intel has to deal with far fewer variations with the new node.
Below is Intel's paper abstract about the 18A-P node.
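The skew-corner improvement can be pictured with a toy Monte Carlo sketch (purely illustrative; the nominal delay, sigmas, and sample count are invented and are not Intel's data): per-transistor delays vary around a nominal value, skew is the gap between the slowest and fastest samples, and tightening the variation by 30% narrows that gap accordingly.

```python
# Toy model of "skew corners": per-transistor delays vary around a nominal
# value; the skew is the spread between the slow and fast corners.
import random

def corner_skew(sigma, n=100_000, seed=42):
    """Spread between the slowest and fastest simulated transistor delays."""
    rng = random.Random(seed)
    delays = [rng.gauss(1.0, sigma) for _ in range(n)]
    return max(delays) - min(delays)

base = corner_skew(sigma=0.10)      # stand-in for 18A process variation
improved = corner_skew(sigma=0.07)  # 30% tighter variation, as with 18A-P
print(f"skew narrowed by {1 - improved / base:.0%}")  # skew narrowed by 30%
```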

Intel Stock Surges to All-Time High on Foundry Revival and Strong CPU Demand

29 April 2026 at 20:21
Intel's stock is one of the best-performing semiconductor-related names in 2026, with the company's share price reaching a new all-time high of $94.10 per share at the time of writing. This is remarkable news, considering that about a year ago, Intel's stock hit a decade-low of $17.67 per share. This marks a growth of more than 400% in a single year for a company that is one of the most strategically important in the United States' sovereign semiconductor manufacturing. Intel's rise began with investment from the United States government, aimed at supporting the only company left in the U.S. conducting R&D and advanced silicon manufacturing. Since then, Intel has been on an upward trajectory, and the share price shows no signs of slowing down.

Contributing to this success is the revival of Intel's Foundry business, which is on track to attract many external customers. Intel Foundry recently achieved a significant milestone by improving yields across all major foundry nodes currently in high-volume manufacturing. This includes the Intel 4, Intel 3, and 18A nodes, which power the majority of Intel's product portfolio. In the latest Q1 2026 earnings call, Intel CFO David Zinsner noted that the company continues to improve yields on its older nodes, such as Intel 4 and Intel 3, while refining the yield of the current top-performing 18A node to reduce waste and increase the number of functional chips, even in larger dies. Additionally, Tesla signed on as Intel's first major 14A customer for Elon Musk's Terafab AI chip complex in Austin, indicating that the foundry's success in attracting external clients is just beginning.

Blind Test Shows Gamers Prefer NVIDIA DLSS 4.5 Over AMD FSR 4.1

29 April 2026 at 15:50
Back in February, ComputerBase conducted a large blind test comparing in-game footage generated using the latest upscaling technologies: AMD's FSR 4.0 and NVIDIA's Deep Learning Super Sampling 4.5. The testing has since been updated, and community votes have been processed, revealing that AMD's updated upscaling technology, FSR 4.1, shows significant improvement over FSR 4.0. However, it still trails behind NVIDIA's DLSS 4.5 in visual quality. In the latest ComputerBase testing, the following games were upscaled using FSR 4.0, FSR 4.1, and DLSS 4.5: Anno 117: Pax Romana, ARC Raiders, Assassin's Creed Shadows, Call of Duty: Black Ops 7, Kingdom Come: Deliverance II, Resident Evil Requiem, and The Last of Us Part I. In all of these games, the ComputerBase review concluded that DLSS 4.5 was the top performer, a view confirmed by the community in a separate blind test vote.

Similar to the previous test, the ComputerBase team conducted the comparison using videos labeled with three options, without revealing which rendering method was used, to ensure a fully blind test. This resulted in a community verdict with two notable outcomes. First, NVIDIA's DLSS 4.5 remains the leader in image quality, with 6 out of 7 games showing the best results using DLSS 4.5. The only game where AMD's FSR upscaling came out on top was Resident Evil Requiem, where DLSS 4.5 placed second behind FSR 4.1. Overall, DLSS 4.5 is seen as providing sharper visual details and more consistent frame generation compared to AMD's FSR upscaling.

Palit Confirms: GALAX, KFA2, and HOF Branding to Continue

29 April 2026 at 15:03
Today, we reported that GALAX is ending its operations as an independent company and integrating into its parent company, Palit. However, users were left wondering whether Palit would stop offering GALAX-branded products, which have significant recognition among gamers. The official company response is that the branding will continue to be active. This means that GALAX-branded Hall-of-Fame (HOF) GPUs for extreme overclocking, the KFA2 brand for Europe, and other GALAX-branded products will remain available on the market. In simple terms, this is just a corporate structure change, with Palit consolidating its ventures under one roof as the parent company. Ongoing customer commitments, including RMA, warranty claims, and general support, will now be handled by Palit, while the design and development of new GPUs under the GALAX brand will continue.
Below is a complete statement from Palit, followed by a statement from GALAX.

GALAX Shuts Down, Famous GPU Vendor Taken Over by Palit After 30 Years

29 April 2026 at 11:39
GALAX, the legendary maker of Hall of Fame (HOF) GeForce GPUs known for their exceptional overclocking capabilities, is officially closing its operations after 30 years in business. GALAX, along with its KFA2 brand for the European market, will now be closed, with existing product inquiries managed by Palit, one of the largest GPU add-in card (AIC) manufacturers and a significant NVIDIA partner. Founded in Hong Kong in 1994, GALAX distinguished itself by creating high-performance designs with NVIDIA GeForce GPUs, particularly known for its HOF series. These iconic white-themed designs feature massive VRM circuitry for overclocking and higher-binned dies suitable for LN2 extreme overclocking scenarios. Over the past few generations, multiple GPU overclocking world records were achieved with GALAX HOF cards, and the brand has maintained that design language throughout the years.

After more than 30 years in business, GALAX is closing its operations, and these will be transferred to Palit, which will take full responsibility for "all activities and commitments related to the brand." This includes RMA services, warranty claims, product launches, and more. Interestingly, the announcement does not mention that the GALAX branding will be phased out. Only the actual company operations will be integrated into Palit. It's possible that Palit will retain the GALAX branding and its HOF name, which is well-known for high-performance overclocking among enthusiasts. It's worth noting that GALAX and its sister brand KFA2 have been operating for years with Palit's support as the parent company, so it's uncertain if the brand will continue its market presence under different management. GALAX and KFA2 have been sub-companies of Palit, and management claims that now is the time to unite all of Palit's brands under one roof.

Update 11:05 UTC: Palit confirmed that the current GALAX branding will continue to be present on the market.

EU Now Requires USB-C Charging for New Laptops Up to 100 W

28 April 2026 at 19:48
The European Union has officially imposed a new rule requiring laptops with a power rating of 100 W or less to use a USB-C port for charging. The rule takes effect today, Tuesday, April 28. The European Commission has been exploring ways to reduce electronic waste and has been planning this step since imposing a similar rule on smartphones in 2024. As readers may recall, modern smartphones have largely shipped with USB-C ports since the Commission mandated a unified connector for all newly sold smartphones, replacing the multitude of connectors that created significant e-waste across Europe. The new legislation extends that approach to laptops, which also contribute significantly to the problem.

However, there are exceptions to this rule. The USB-C Power Delivery specification can deliver up to 240 W through a single port, but gaming laptops sometimes require more power. Models that exceed 100 W may continue to use the typical barrel power connector, whereas any laptop rated at 100 W or less must adopt USB-C as its primary charging connector. From today, laptops that do not meet the European Commission's standards cannot legally be sold in the European Union. The rule does not apply to computers sold on the second-hand market; only new devices entering the EU zone must comply.
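The rule as described reduces to a simple threshold check; a hypothetical Python sketch (the function name and example wattages are illustrative, not from the regulation text):

```python
# Illustrative sketch of the EU charging rule described above: laptops
# rated at 100 W or less must charge over USB-C; higher-rated models
# (e.g. gaming laptops) may keep a barrel connector.
def must_use_usb_c(power_rating_w):
    """Return True if the rule requires USB-C charging for this laptop."""
    return power_rating_w <= 100

print(must_use_usb_c(65))   # True  - typical ultrabook
print(must_use_usb_c(240))  # False - high-power gaming laptop is exempt
```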

Microsoft's Shader Model 6.10 Opens Direct Access to GPU AI Engines

28 April 2026 at 19:15
Microsoft has released the Shader Model 6.10 preview, included in the new Agility SDK 1.720-preview build. This preview introduces a compelling feature: direct control of GPUs' dedicated AI engines. According to the developer blog, Shader Model 6.10 features a new, streamlined linear algebra matrix API that exposes the matrix operations of popular gaming GPUs from AMD, Intel, and NVIDIA. Modern GPUs have dedicated hardware for processing AI workloads, typically matrix multiplication and accumulation, and modern machine-learning-based upscaling relies on this hardware, whether it's Tensor cores from NVIDIA, XMX cores from Intel, or the AI accelerators in AMD GPUs, each with its own access method. To unify access, Microsoft is introducing a new linalg::Matrix class that exposes these matrix operations to the shader language, allowing neural rendering operations to run on GPUs from multiple vendors with a single programming effort.

As the developer behind the DirectX 12 API, Microsoft is observing a significant increase in graphics features utilizing neural network-based rendering techniques to enhance image quality, which will require more matrix units in modern gaming GPUs. By providing a unified layer of abstraction for programming and executing neural rendering operations, Microsoft hopes that Shader Model 6.10 will become the standard for every GPU maker. Interestingly, the feature is supported across all NVIDIA RTX hardware, as it all includes Tensor cores. Intel support is planned for an upcoming release, with B-series GPUs expected to be compatible. On the AMD side, only RDNA 4-based Radeon RX 9000 series GPUs support the feature, with no support planned for older models like the RX 7000 series and below.
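The core operation these AI engines accelerate, and which the new linalg::Matrix class is meant to expose, is a fused matrix multiply-accumulate, D = A·B + C. A plain-Python reference version (purely illustrative; the actual HLSL API and types differ) looks like this:

```python
# Reference matrix multiply-accumulate, D = A @ B + C -- the operation
# that Tensor cores, XMX units, and AMD's AI accelerators run in hardware.
def matmul_accumulate(A, B, C):
    rows, inner, cols = len(A), len(B), len(B[0])
    assert all(len(row) == inner for row in A), "inner dimensions must match"
    return [
        [sum(A[i][k] * B[k][j] for k in range(inner)) + C[i][j]
         for j in range(cols)]
        for i in range(rows)
    ]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 1], [1, 1]]
print(matmul_accumulate(A, B, C))  # [[20, 23], [44, 51]]
```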

NVIDIA Officially Launches GeForce RTX 5070 Laptop GPU with 12 GB GDDR7 Memory

28 April 2026 at 18:24
NVIDIA has officially launched its new GeForce RTX 5070 12 GB laptop edition GPU, featuring higher-capacity GDDR7 memory. This confirms earlier rumors about an upgraded memory configuration. In a quiet release, NVIDIA announced its decision to use 24 Gb (3 GB) GDDR7 memory modules, which offer a 50% increase in memory capacity compared to the current 16 Gb (2 GB) GDDR7 configurations. As demand for GPU memory remains high, NVIDIA can balance the supply of 16 Gb GDDR7 modules by utilizing a new batch of 24 Gb GDDR7 modules from partners like SK hynix, Samsung, and Micron. The company describes this move as a way to ensure a sufficient supply of 16 Gb GDDR7 memory for the remaining GeForce RTX 50-Series "Blackwell" GPUs, maintaining healthy supply levels. The new GPU SKU complements the existing RTX 5070 8 GB Laptop Edition model, providing gamers with more laptop configurations to choose from.
NVIDIA: "Demand for GeForce RTX GPUs remains strong, and memory supply is constrained. In order to maximize memory availability, we are releasing the GeForce RTX 5070 Laptop GPU 12 GB configuration with 24 Gb G7 memory. This gives our partners access to an additional pool of memory to complement the 16 Gb G7 supply that currently ships with most GeForce GPUs. The 12 GB configuration will exist alongside the current 8 GB configuration, and allows our partners to bring a broader range of GeForce RTX 5070 laptops to consumers."

Steam Controller Goes Official on May 4 with $99 Price Tag

27 April 2026 at 21:49
Valve has officially confirmed that its highly-anticipated Steam Controller will go on sale globally on May 4. It will be priced at $99 in the United States, €99 in European Union countries, £85 in the UK, $149 CAD in Canada, and $149 AUD in Australia, marking a truly global launch. Designed as a universal control device, the Steam Controller aims to be a versatile gamepad for the broader Steam ecosystem, supporting PCs, laptops, Steam Deck, Steam Machine, and even the Steam Frame VR headset. While maintaining familiar core controls, Valve is clearly focusing on additional inputs, including dual trackpads, a gyro, Grip Sense, and four rear grip buttons, all of which can be customized through Steam Input.

Interestingly, Valve has revealed more details about some of the core technology behind the Steam Controller, with perhaps the most intriguing being magnetic thumbsticks built around TMR technology. Valve claims they offer a better feel, improved responsiveness, and much greater durability. They also add capacitive touch support for motion-based controls, meaning your commands can now be expressed in multiple ways. There is also a new puck accessory that handles both wireless connectivity and charging, snapping onto the controller magnetically to serve as a dock and transmitter in one.

Intel Turns Scrap Dies Into Usable CPUs as Demand Keeps Rising

27 April 2026 at 21:29
We have been reporting on the intense demand currently sweeping the CPU sector. AMD and Intel have sold out their inventories, and lead times now stretch to weeks. However, Intel, which manufactures a large part of its CPU portfolio in-house, is handling the situation differently. According to Intel, and confirmed by industry analyst Ben Bajarin, the company is repurposing some of the dies found at the very edge of the silicon wafer, which would otherwise have become scrap, into products that customers are eager to buy. This allows even dies flawed by factors like silicon node yield to find new life through repurposing. Intel's customers are so eager for CPUs that parts which would typically be discarded are now being sold as independent SKUs. No silicon is left behind.

For example, when Intel produces a Xeon 6 "Granite Rapids" CPU on the Intel 3 node, the compute die can accommodate up to 44 cores, but some are disabled for yield and power reasons. Compute dies with defective cores are repurposed into SKUs with lower core counts, since manufacturing the die costs Intel money and wasting it would be a significant loss of resources. Normally, a die that can't yield a reasonable number of working cores would simply be discarded. Amidst one of the highest CPU shortages ever recorded, Intel is doing the opposite: packaging these defective or scrap dies into lower-class SKUs, as any CPU availability is crucial and customers are eagerly purchasing them. Imagine a die from the rounded edge of the wafer, with only a few working P-cores, being packaged into a CPU that a hyperscaler integrates into its offering.
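The repurposing described above is essentially binning by working core count. A simplified, hypothetical Python sketch (the tier thresholds are invented for illustration; Intel's actual SKU segmentation differs):

```python
# Hypothetical die-binning sketch: a "Granite Rapids" compute die has up
# to 44 physical cores; dies with defective cores are fused down and sold
# as lower-core-count SKUs instead of being scrapped.
def bin_die(working_cores, tiers=(44, 36, 28, 20, 12)):
    """Return the largest SKU core count this die can satisfy, or None."""
    for tier in tiers:
        if working_cores >= tier:
            return tier
    return None  # below the lowest tier: historically scrapped

print(bin_die(44))  # 44 - fully working die, top SKU
print(bin_die(31))  # 28 - partially defective die, mid-tier SKU
print(bin_die(8))   # None - the kind of die Intel now also tries to sell
```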

Intel "Wildcat Lake" Outruns Apple's MacBook Neo and Its Successor in First Benchmarks

27 April 2026 at 17:19
Intel recently unveiled its "Wildcat Lake" Core 300 series of laptop processors, designed to cover basic CPU and GPU needs in the entry-level market. Combining two "Cougar Cove" P-cores with four "Darkmont" LPE cores, these processors should provide sufficient CPU power for everyday tasks. Paired with the NPU 5, the Xe display and media engine, and a GPU featuring up to two Xe3 cores, the SoC should also handle some very light entry-level gaming with ease. Today, the first PassMark benchmark results appeared, showing what this CPU is capable of and how it compares to industry-leading solutions in its price range, such as Apple's newest MacBook Neo.

In the latest benchmark run, PassMark recorded the Intel Core 5 320 "Wildcat Lake" with its two P-cores running at up to 4.6 GHz and its LPE cores at a maximum turbo frequency of 3.4 GHz. The SoC scored about 4,047 points in the single-threaded test and approximately 15,222 points in the multithreaded test, placing it just above Apple's first-generation M1 SoC in both results. The design also surpasses Apple's A18 Pro SoC in the current MacBook Neo, as well as the A19 Pro SoC expected to power the second-generation MacBook Neo scheduled for 2027. The chips are based on different architectures, but they serve the same purpose: providing customers with an affordable SoC/laptop design that delivers solid computing power for all basic tasks.

Steam Controller to Arrive on May 4 in Japan

27 April 2026 at 16:42
Valve's Steam Controller is reportedly just a few days away from launch, as the company is preparing for a May 4 release in Japan. According to a now-deleted post by 4Gamer, the Japanese launch is expected on May 4 at a price of $99. This price includes the controller, a charging cable, and a 2.4 GHz dongle for wireless connectivity to a PC. The removed 4Gamer post cited an exact launch time of May 4 at 15:00 local time in Japan, which means the launch timing will differ for the Western hemisphere. This effectively sets the launch date exactly a week from the time of writing, confirming rumors of an imminent release. As a reminder, Valve's Steam Controller will feature elements that justify its $99 price point, including four programmable buttons, dual touchpads, Hall effect sensors, HD rumble for haptic feedback, decent battery life, and connectivity options that include Bluetooth and a separate 2.4 GHz dongle.

Below are some pictures from the 4Gamer article, which has now been taken down.

No New Intel Arc Gaming GPUs? Xe3 Skipped, Xe4 "Druid" Uncertain

27 April 2026 at 12:09
Intel's Arc gaming GPU roadmap is in a weird state, as the company has reportedly been reshuffling a significant portion of its dedicated Arc desktop GPU launches for the upcoming quarters. Currently, Intel offers its second-generation "Battlemage" architecture, based on the Xe2 IP, as a dedicated desktop gaming GPU in the form of the B580 mid-range graphics card. For those interested in Xe3-based "Celestial" or Xe3P-based "Crescent Island" dedicated GPU variants, Intel leaker Jaykihn has confirmed that there won't be an update anytime soon. Even the next-generation Xe4 "Druid" is being reconsidered for dedicated gaming GPUs. Intel has previously confirmed that it will continue GPU development, but desktop gamers might not be the primary focus, leaving notebook users in a better position.

Intel currently offers new GPU IP through integrated graphics, such as the Arc B390 iGPU found in "Panther Lake" processors, which uses Xe3 IP. However, desktop dedicated GPUs are still based on Xe2, and even for a maxed-out "Battlemage" configuration, Intel only offers the Arc Pro B70 and Arc Pro B65 graphics cards. These cards max out the BMG-G31 Xe2 silicon but are intended for professional users. A recent driver update added the ability to play games on these cards, but they are still primarily designed for AI workloads and professional visualization, and they are priced higher than what the average gamer would want. Gamers are still seeking clarity about future updates, and the lack of recent rumors regarding an additional Arc gaming discrete GPU is concerning. Below are older roadmaps, and we expect to see an updated version sometime in the near future as Intel confirms changes.

Windows 11 Updates Can Now Be Skipped and Even Paused Indefinitely

27 April 2026 at 10:40
Microsoft is... giving users exactly what they want? In the latest Windows Insider preview for Windows 11, Microsoft is rolling out an updated Windows Update experience that gives users more control over their update process. This includes the option to skip updates entirely during the out-of-box experience (OOBE) menu, allowing you to set up a PC without applying updates first. Previously, new PC configurations and installations might have been several months behind on Windows 11 updates, and Microsoft would force users to apply these updates immediately. This led to prolonged OS setup times and user frustration. Now, for those who need to move to a new desktop quickly and apply updates at a later date, Microsoft is finally offering an option during OOBE to install the OS first and apply updates later.

Additionally, Microsoft is now allowing users to pause Windows 11 updates in the Settings app with a dedicated calendar that extends up to 35 days. This ensures ample time to schedule an update session without disrupting your workflow. The 35-day mark is not a hard stop, either: users can re-pause updates for another 35 days after the original period expires, essentially allowing updates to be paused indefinitely. There is no limit on how many times the pause date can be reset, giving you full control over the entire update process. This could be especially useful when a new update series arrives and you want to wait and see whether any OS issues or known problems surface before applying it. For example, in the latest Windows 11 April update, some PCs might experience a BitLocker trigger, but this will be resolved in a future fix. Users can wait a few days for that fix to arrive and then install updates all at once, without being forced to do so immediately.