
This PCIe AI Accelerator Card Can Run 700B LLMs Locally With 384 GB Memory at Just 240W, Less Than Half The Power of RTX PRO 6000 Blackwell

A circuit board labeled HTX301 Evaluation Platform features an HTX301 chip in the center.

A Taiwanese company has announced a new PCIe AI accelerator card that can run 700B LLMs locally at just 240W, removing the need for large GPU clusters.

Taiwanese Company Unveils Its PCIe AI Accelerator That Devalues Large-Scale AI Installations By Running 700B LLMs on a Single Card

Skymizer, a Taiwan-based company specializing in AI software and hardware, has announced its brand-new solution, the HTX301. The HTX301 is designed for on-prem AI, offering a PCIe add-in-card design that delivers large-scale AI performance at a sub-250W TDP. Some of the highlights of the card include: The company says that the […]

Read full article at https://wccftech.com/this-pcie-ai-accelerator-card-packs-384-gb-memory-run-700b-llms-240w/

AMD Launches MI350P, Its First PCIe β€œInstinct” In Four Years – Packs CDNA 4 GPU With 4.6 PFLOPs AI Compute, 144 GB HBM3E at 600W

The image shows an AMD Instinct MI350P graphics card against a dark, abstract background.

AMD has announced its brand-new Instinct MI350P PCIe GPU accelerator, its first PCIe design in years, aimed at AI workloads.

The Instinct MI350P PCIe GPU Takes the MI350X Chip and Cuts It in Half for 128 CUs, 144 GB HBM3E & 600W Power

With the Instinct MI350P PCIe GPU, AMD gives enterprise users an option to expand their AI computing capabilities without having to invest in expensive infrastructure. The PCIe form factor makes the MI350P an easy drop-in solution that delivers plenty of performance in a standard dual-slot, server-focused design. Designed to help […]

Read full article at https://wccftech.com/amd-mi350p-first-pcie-instinct-in-four-years-cdna-4-gpu-4-6-pflops-ai-144-gb-hbm3e-600w/
