“I’ve been saving for months to get the Corsair Dominator 64GB CL30 kit,” one beleaguered PC builder wrote on Reddit. “It was about $280 when I looked,” said u/RaidriarT. “Fast forward to today on PCPartPicker, they want $547 for the same kit? A nearly 100% increase in a couple months?”



They can ALL be run on RAM, theoretically. I bought 128GB so I can run GLM 4.5 with the experts offloaded to CPU, with a custom trellis/K quant mix; but this is a ‘personal use’ tinkerer setup basically no one but hobbyists will touch.
Qwen Next is good at that because it has a very low active parameter count.
…But they aren’t actually deployed that way. They’re basically always deployed on cloud GPU boxes that serve dozens/hundreds of people at once, in parallel.
AFAIK the only major model actually developed for CPU inference is one of the esoteric Gemma releases, aimed at mobile. And the bitnet experiments, which aren’t very big so far.
(In case it’s not obvious, this is my special interest, and I’m happy to ramble on about how to set up ‘niche gaming rig hybrid models’ for anyone interested).
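(To make that concrete, here's a minimal sketch of a hybrid CPU/GPU launch through the llama-cpp-python bindings. The filename, layer split, and thread count are placeholders rather than a tuned config; llama.cpp's own CLI exposes the same idea through `--n-gpu-layers`, plus a tensor-override flag if you want to pin just the expert weights to CPU.)

```python
# Minimal sketch of hybrid inference: a few layers on the GPU, the rest
# (including the big MoE expert weights) served from system RAM by the CPU.
# The model filename and the numbers below are illustrative, not a tuned setup.
from llama_cpp import Llama

llm = Llama(
    model_path="GLM-4.5-Q4_K_M.gguf",  # hypothetical local GGUF quant
    n_gpu_layers=8,    # only as many layers as actually fit in VRAM
    n_ctx=8192,        # context length; longer contexts eat more RAM
    n_threads=12,      # physical core count usually works best for CPU decode
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why do MoE models tolerate CPU offload well?"}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```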
I for one would enjoy triggering your unskippable cutscenes in setting up local CPU-based AI if it can work on Linux with an older AMD card.
Don’t have funds for anything fancy, but I would be interested in playing around with it. Been wanting to get something like that set up for Home Assistant.
Plenty of folks do AMD. A popular homelab setup is the 32GB AMD MI50 GPU, which is quite cheap on eBay. Even Intel is fine these days!
But what’s your setup, precisely? CPU, RAM, and GPU.
Looks like I’m running an AMD Ryzen 5 2600 CPU, AMD Radeon RX 570 GPU, and 32GB RAM
Mmmmm… I would wait a few days and try a GGUF quantization of Kimi Linear once it’s better supported: https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct
Otherwise you can mess with Qwen 3 VL now, in the native llama.cpp UI. But be aware that Qwen is pretty sycophantic, like ChatGPT: https://huggingface.co/unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF/blob/main/Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL.gguf
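(For a rough idea of what loading that exact quant looks like, here's a sketch using llama-cpp-python, assuming a build recent enough to know the Qwen3-VL architecture and using it text-only, with no vision projector wired up. The layer split and context size are guesses for this class of hardware, not a tuned configuration.)

```python
# Rough sketch: fetch the Q4_K_XL quant linked above from Hugging Face and
# load it with llama-cpp-python. The numbers are guesses for an RX 570 +
# 32GB RAM box, not a tuned configuration.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Qwen3-VL-30B-A3B-Instruct-GGUF",
    filename="Qwen3-VL-30B-A3B-Instruct-UD-Q4_K_XL.gguf",
    n_gpu_layers=4,   # an RX 570 has little VRAM, so most layers stay in RAM
    n_ctx=4096,
)

print(llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello from a budget rig."}],
    max_tokens=64,
)["choices"][0]["message"]["content"])
```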
If you’re interested, I can work out an optimal launch command. But to be blunt, with that setup, you’re kinda better off using free LLM APIs with a local chat UI.
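(And if the API route sounds better: any OpenAI-compatible endpoint can be driven by the standard openai client, or plugged into a local chat UI that speaks the same protocol. The endpoint, key, and model name below are placeholders, not a recommendation of a specific provider.)

```python
# Sketch of the "free LLM API + local front end" route. Any OpenAI-compatible
# provider works; base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_KEY_HERE",
)

resp = client.chat.completions.create(
    model="some-free-model",  # whatever the provider exposes
    messages=[{"role": "user", "content": "Turn off the living room lights."}],
)
print(resp.choices[0].message.content)
```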