Samsung and AMD are assembling the backbone of next-gen AI with HBM4
Samsung and AMD are tightening the loop around AI infrastructure. The new memorandum of understanding (MOU) signals something more integrated: memory, logic, packaging, and system architecture being tuned as a single stack.
The signing ceremony was attended by Dr. Lisa Su, Chair and CEO of AMD, and Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics.
At the center sits HBM4. Samsung is pushing its 1c DRAM node paired with a 4nm logic base die, targeting up to 13 Gbps per pin and roughly 3.3 TB/s of bandwidth.
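Those two numbers are consistent with each other if you assume the 2048-bit interface defined in the JEDEC HBM4 standard; Samsung has not published the exact stack configuration behind the figures, so treat this as a back-of-envelope sketch rather than a confirmed spec:

```python
# Rough check on the quoted HBM4 figures.
# Assumption: a 2048-bit (2048-pin) interface per stack, per the JEDEC HBM4
# standard. The per-pin rate (13 Gbps) comes from the article itself.

def hbm_bandwidth_tbps(pin_speed_gbps: float, bus_width_bits: int) -> float:
    """Aggregate stack bandwidth in TB/s.

    Each pin moves pin_speed_gbps gigabits per second; divide by 8 to get
    bytes, and by 1000 to convert GB/s to TB/s.
    """
    return pin_speed_gbps * bus_width_bits / 8 / 1000

print(f"{hbm_bandwidth_tbps(13.0, 2048):.2f} TB/s")  # 3.33 TB/s
```

13 Gbps across 2048 pins works out to about 3.33 TB/s, which matches the "roughly 3.3 TB/s" claim.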
AMD’s next Instinct part, the MI455X, is being positioned as the primary consumer of that bandwidth. If Samsung can hold yield rates on that 4nm base die while stacking aggressively, AMD gets a cleaner path to scale without the usual power penalties.
Then there is “Venice,” AMD’s 6th Gen EPYC platform. DDR5 is still in play here, but the tuning matters. High-capacity, high-efficiency DIMMs become critical when CPUs are orchestrating GPU-heavy nodes.
“Samsung and AMD share a commitment to advancing AI computing, and this agreement reflects the growing scope of our collaboration,” said Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics. “From industry-leading HBM4 and next-generation memory architectures to cutting-edge foundry and advanced packaging, Samsung is uniquely positioned to deliver unrivaled turnkey capabilities that support AMD’s evolving AI roadmap.”

Memory latency and consistency start to show up in scheduling efficiency, especially in mixed workloads where inference and training coexist. Meanwhile, AMD is clearly thinking beyond individual accelerators.
Helios is a rack-scale design philosophy; GPUs, CPUs, and memory are being co-optimized as a single unit. That changes how vendors plug in. Instead of shipping components, they are effectively shaping system behavior at the silicon level.
You can see it in how the stack lines up:
- HBM4 feeding MI455X with sustained bandwidth
- EPYC “Venice” handling orchestration with DDR5 tuned for consistency
- Helios tying it together into a predictable, scalable rack unit
“Powering the next generation of AI infrastructure requires deep collaboration across the industry,” said Dr. Lisa Su, Chair and CEO of AMD. “We are thrilled to expand our work with Samsung, bringing together their leadership in advanced memory with our Instinct GPUs, EPYC CPUs and rack-scale platforms. Integration across the full computing stack, from silicon to system to rack, is essential to accelerating AI innovation that translates into real-world impact at scale.”

The post Samsung and AMD are assembling the backbone of next-gen AI with HBM4 appeared first on Sammy Fans.