NVIDIA Feynman GPUs Push Power Semi Content To $191,000, 17 Times Increase Over Blackwell As Industry Embraces 800V DC Architectures

As compute requirements grow in AI datacenters, so do the power requirements, with per-rack power semiconductor content estimated to be 17x higher with NVIDIA's Feynman.

NVIDIA Feynman Racks Estimated To Feature 17x Higher Power Semi Costs Per Rack Versus Blackwell

NVIDIA Feynman GPUs bring several groundbreaking features and will launch in 2028, after Rubin. The company has been working hard to deliver more efficient AI solutions, but as compute demands have grown, power requirements have risen tremendously. Morgan Stanley Research has published a chart that visualizes the total power semi content of three AI rack solutions from NVIDIA. Starting with the baseline Blackwell or B200, […]
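
As a quick back-of-envelope check, and purely illustrative: assuming the $191,000 figure and the 17x multiplier in the headline refer to the same per-rack power semiconductor content, the implied Blackwell baseline (not stated in this excerpt) works out as follows.

```python
# Back-of-envelope check of the headline figures (assumption: both numbers
# describe per-rack power semiconductor content against the same Blackwell baseline).
feynman_power_semi_usd = 191_000      # reported per-rack content for Feynman
multiplier_vs_blackwell = 17          # reported increase over Blackwell

implied_blackwell_usd = feynman_power_semi_usd / multiplier_vs_blackwell
print(f"Implied Blackwell per-rack power semi content: ~${implied_blackwell_usd:,.0f}")
# -> roughly $11,200 per rack under these assumptions
```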

Read full article at https://wccftech.com/nvidia-feynman-gpus-push-power-semi-content-17-times-higher-vs-blackwell/

NVIDIA Fast-Forwarded Co-Packaged Optics Five Years Ahead of Schedule, Arriving First With Its Feynman GPUs

NVIDIA's Feynman GPUs will be the first to feature Co-Packaged Optics, but this wasn't always the plan until the AI giant decided to switch gears.

Co-Packaged Optics Were Many Years Away, But NVIDIA Decided To Move Ahead With Its Feynman GPUs

CPO, or Co-Packaged Optics (Silicon Photonics), is the next-generation solution that reduces reliance on copper and harnesses light to transfer signals. These optics are packaged alongside hardware accelerators such as GPUs and will be a key solution for next-gen AI factories, offering improved interconnect latency and creating high-bandwidth connections between CPU and GPU. If we go by the original plans, […]

Read full article at https://wccftech.com/nvidia-fast-forwarded-co-packaged-optics-five-years-ahead-arriving-with-feynman-gpus/

Intel’s ZAM Memory Threatens HBM’s AI Throne With 2x The Bandwidth of HBM4, More Capacity & Low Thermal Constraints

Intel's Z-Angle Memory (ZAM) is approaching completion as it races to take a bite out of the AI boom, challenging HBM as a viable alternative.

Intel's ZAM Challenges HBM As A Big Memory Innovation In The High-Bandwidth, High-Capacity Segment, Offering 2x The Speed of HBM4

Z-Angle Memory, or ZAM, has been stirring up a lot of talk in the memory segment. The upcoming memory standard is being developed by Intel and SoftBank and aims to offer a low-power, high-density replacement for HBM. Now, new details have been shared that provide more insight into ZAM memory. For starters, the new memory […]
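
For scale, a minimal sketch of how the 2x-of-HBM4 claim can be read, using assumed HBM4-class stack figures (roughly a 2048-bit interface at 8 Gb/s per pin); the excerpt gives no absolute bandwidth numbers for ZAM, so everything below is illustrative.

```python
# Illustrative only: per-stack bandwidth = interface width x per-pin rate.
# The HBM4-class figures below are assumptions, not numbers from the article.
interface_width_bits = 2048        # assumed HBM4-class stack interface width
per_pin_rate_gbps = 8              # assumed per-pin data rate in Gb/s

hbm4_stack_bw_gbs = interface_width_bits * per_pin_rate_gbps / 8   # GB/s
zam_implied_bw_gbs = 2 * hbm4_stack_bw_gbs                         # per the 2x claim

print(f"Assumed HBM4 stack bandwidth: ~{hbm4_stack_bw_gbs / 1000:.1f} TB/s")
print(f"Implied ZAM bandwidth at 2x:  ~{zam_implied_bw_gbs / 1000:.1f} TB/s")
```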

Read full article at https://wccftech.com/intel-zam-memory-threatens-hbms-ai-throne-with-2x-the-bandwidth-of-hbm4/

Sub-1nm Process Technology Won’t Arrive Till 2034, IMEC Logic Roadmap Highlights 2D FETs For 0.2nm & Sub-0.2nm Nodes By 2043-2046

Moore's Law has slowed down, but progress continues in logic development as a new roadmap points to sub-1nm process nodes around 2034.

It Will Be Years Before Process Technologies Go Sub-1nm, But They Are In Development: 0.7nm By 2034 & <0.2nm By 2046

Process technologies have slowed down as we transition into the Angstrom era. While newer nodes continue to offer uplifts, they are getting more expensive to produce, as the machinery needed to achieve newer designs comes at higher costs. Furthermore, the reliance on chiplets through advanced packaging solutions has reduced the need to shift to newer nodes immediately, as […]

Read full article at https://wccftech.com/sub-1nm-process-node-technology-wont-arrive-till-2034-logic-roadmap-2dfets-sub-0-2nm-2046/

TSMC’s A16 ‘1.6nm’ Node Promises 10% Speed Boost or 20% Power Cut Over 2nm, With Backside Power Hitting Production by Q4 2026

TSMC's next-generation A16, or 1.6nm, process tech will be the start of its "Angstrom Era" journey, delivering improved performance/power profiles versus 2nm.

Angstrom Era For TSMC - A16 Process Tech Offers Better Performance/Power Profiles Versus 2nm While Adding Backside Power

At the 2026 VLSI Symposium, TSMC will be presenting its A16 process node technology. A16 is part of TSMC's Angstrom-era family of nodes, which includes A14 and the recently announced A13 & A12. In a preview provided by VLSI of the upcoming paper titled "T1.5", TSMC restates the performance/power profile advantages. One of the biggest features of A16 will […]
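
Foundry comparisons like this are usually quoted at matched conditions; below is a minimal sketch of that reading, assuming the 10% figure is speed at iso-power and the 20% figure is power at iso-speed against an N2 baseline normalized to 1.0 (the excerpt gives no absolute clocks or watts).

```python
# Illustrative framing of the "10% speed or 20% power" claim (assumptions:
# the two figures are iso-power and iso-speed points versus a normalized N2 baseline).
n2_speed, n2_power = 1.0, 1.0

a16_speed_at_iso_power = n2_speed * 1.10   # ~10% faster at the same power
a16_power_at_iso_speed = n2_power * 0.80   # ~20% lower power at the same speed

print(f"A16 speed at iso-power: {a16_speed_at_iso_power:.2f}x of N2")
print(f"A16 power at iso-speed: {a16_power_at_iso_speed:.2f}x of N2")
```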

Read full article at https://wccftech.com/tsmc-a16-node-promises-speed-boost-power-cut-over-2nm-backside-power-production-q4-2026/

Agentic AI Pushes CPUs to Pack 400 GB of Memory, 4x More Than Today, as DRAM Shortage Spirals Toward 2027

Whether it's CPUs or GPUs, running agentic AI requires lots of memory, and this demand is spiraling to unseen levels as DRAM constraints persist.

CPUs Running Agentic AI Will Be Equipped With Up To 400 GB of Memory, Further Crushing The DRAM Supply Chain

Memory makers are earning big profits but are still unable to meet demand. We have seen reports of major manufacturers rapidly expanding their production facilities, but these are yet to become operational, and Samsung itself has stated that 2027 will be worse for the DRAM industry than 2026, so it's looking like a […]
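
A quick back-of-envelope read of the headline, assuming the 4x multiplier and the 400 GB figure describe the same per-CPU memory attach; the implied current figure is derived here only for illustration.

```python
# Back-of-envelope: what "400 GB, 4x more than today" implies for current
# per-CPU memory attach (assumption: both figures refer to the same metric).
projected_per_cpu_gb = 400    # headline figure for agentic-AI CPUs
multiplier_vs_today = 4       # headline "4x more than today"

implied_today_gb = projected_per_cpu_gb / multiplier_vs_today
print(f"Implied current per-CPU memory attach: ~{implied_today_gb:.0f} GB")
# -> ~100 GB per CPU under these assumptions
```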

Read full article at https://wccftech.com/agentic-ai-pushes-cpus-to-pack-400-gb-of-memory-4x-more-than-today/

Intel & AMD Work On APX, The Next Major Step In The Evolution of x86 Architectures, Adds More Performance Without Requiring More Die Area & Power

APX, or Advanced Performance Extensions, is the next evolution of x86 as Intel & AMD co-develop new standards for the architecture.

APX Expands The x86 Instruction Set, Bringing Faster Performance & New Features That Will Benefit Both Intel And AMD's Next-Gen Chips

Two days ago, we talked about ACE (AI Compute Extensions), a unified instruction set that aims to increase matrix-multiply performance for next-gen x86 chips. ACE is just one part of the grander scheme in which both Intel and AMD are working together to evolve the x86 architecture under a single unified framework through the recently established […]

Read full article at https://wccftech.com/intel-amd-work-on-apx-the-next-major-step-in-the-evolution-of-x86-architectures/
