AMD Sees Agent Computers as the Next Step in AI PCs
AMD has published a guide for running OpenClaw locally on Windows through two distinct hardware paths it calls "RyzenClaw" and "RadeonClaw," both built around its own silicon and designed to keep AI agent workloads off the cloud entirely. The push is part of AMD's broader "Agent Computer" narrative, in which it argues that not every AI workload belongs in a data center: people and businesses want control over their data, affordable always-on AI without usage limits, and the confidence that their models are running locally rather than on someone else's infrastructure. The setup runs through WSL2 with LM Studio handling local LLM inference via llama.cpp, and supports Memory.md through local embeddings, with no cloud dependency required. AMD says the environment can be configured in under an hour, targeting early adopters and developers experimenting with personal AI agents.
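Because LM Studio's local server speaks the OpenAI-compatible chat-completions protocol, an agent like OpenClaw can target it the same way it would target a cloud API. A minimal sketch of what that request looks like follows; the base URL is LM Studio's usual local default and the model identifier is a placeholder, neither taken from AMD's guide:

```python
# Sketch: pointing an agent at a local LM Studio server instead of a cloud API.
# The endpoint path and payload shape follow the OpenAI-compatible
# chat-completions convention that LM Studio's local server exposes.

def build_chat_request(base_url, model, messages, max_tokens=512):
    """Return the (url, payload) pair for an OpenAI-style chat completion call."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    payload = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "stream": False,
    }
    return url, payload

url, payload = build_chat_request(
    "http://localhost:1234/v1",   # LM Studio's typical local endpoint
    "qwen-example",               # placeholder model id, not from AMD's guide
    [{"role": "user", "content": "Summarize today's notes."}],
)
# An agent would POST `payload` to `url` -- no cloud round-trip involved.
```

The point of the sketch is that nothing agent-side needs to change when inference moves on-device; only the base URL does.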
The RyzenClaw path is built around a Ryzen AI Max+ system with 128 GB of unified memory; AMD specifically recommends reserving 96 GB as variable graphics memory for this use case. Running the Qwen 3.5 35B A3B model, that configuration delivers around 45 tokens per second, processes 10,000 input tokens in roughly 19.5 seconds, supports a 260K token context window, and can run up to six agents concurrently. AMD is positioning this as capable of "agent swarm" experimentation on consumer hardware. RadeonClaw takes a different approach, pairing OpenClaw with the Radeon AI PRO R9700, a workstation-class card with 32 GB of VRAM. That setup is considerably faster, reaching around 120 tokens per second with the same model and processing 10,000 input tokens in about 4.4 seconds. The tradeoff is a smaller 190K token context window and support for only two concurrent agents, compared to six on the Ryzen AI Max+ path.
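The published figures also imply a large gap in prompt-processing (prefill) speed that the raw numbers don't state directly. A quick sketch of the arithmetic, using only values from the article:

```python
# Back-of-the-envelope comparison of the two paths using AMD's published figures.
# Prefill throughput is derived from the 10,000-token input-processing times;
# decode speed, context window, and agent limits are taken from the article.

paths = {
    "RyzenClaw (Ryzen AI Max+, 128 GB unified)": {
        "decode_tok_s": 45, "prefill_s_per_10k": 19.5,
        "context_tokens": 260_000, "max_agents": 6,
    },
    "RadeonClaw (Radeon AI PRO R9700, 32 GB VRAM)": {
        "decode_tok_s": 120, "prefill_s_per_10k": 4.4,
        "context_tokens": 190_000, "max_agents": 2,
    },
}

# Derive tokens-per-second prefill rates from the quoted 10,000-token timings.
prefill = {
    name: round(10_000 / p["prefill_s_per_10k"])
    for name, p in paths.items()
}
# ~513 tok/s for RyzenClaw vs ~2273 tok/s for RadeonClaw: roughly a 4.4x gap,
# traded against context length and concurrent-agent headroom.
```

In other words, the R9700 path is the clear choice for latency-sensitive single-agent work, while the unified-memory path buys context and concurrency instead of speed.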

























































































































































































































































































































































































































































































































