Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU
11 points by anju-kushwaha
CableNinja:
I've been trying to run models locally, effectively followed this guide (before the guide existed), and have not had any success. Llama builds fine, but when I start it up, it just spins its progress bar indefinitely. I let it sit for 3 days and nada.
Running on an 8-core VM with 12 GB RAM and an AMD RX 5500 XT (8 GB) passed through. ROCm is built, and llama is built with the correct flags.
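
For reference, "the correct flags" here means roughly the following. This is a sketch from memory, not my exact commands, assuming a recent llama.cpp tree (older trees used -DLLAMA_HIPBLAS=ON instead of -DGGML_HIP=ON); the model path and prompt are placeholders:

    # Build with HIP/ROCm support; gfx1012 is the RX 5500 XT (Navi 14)
    cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1012
    cmake --build build --config Release -j 8

    # Sanity check: -ngl 0 keeps every layer on the CPU. If this runs,
    # the hang is somewhere in the ROCm path, not in llama.cpp itself.
    ./build/bin/llama-cli -m model.gguf -ngl 0 -p "hello"

    # gfx1012 has never been on ROCm's official support list; the
    # commonly reported workaround is to masquerade as gfx1030, though
    # it's hit-or-miss on RDNA1 cards:
    HSA_OVERRIDE_GFX_VERSION=10.3.0 ./build/bin/llama-cli -m model.gguf -ngl 99 -p "hello"

If the CPU-only run works and the GPU run still spins forever, that points at the ROCm runtime rather than llama.cpp.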