One command, and you are running models, even on the ROCm drivers, without having to know anything about it.
If llama.cpp provides such a UX, they failed terribly at communicating that. Starting with the name. llama.cpp: that's a C++ library! Ollama is the wrapper. That's the mental model. I don't want to build my own program! I just want to have fun :-P
I started with Ollama, and it was great. But I moved to llama.cpp to have more up-to-date fixes. I still use Ollama to pull and list my models because it's so easy. I then built my own set of scripts to populate a separate cache directory of hardlinks so llama-swap can load the GGUFs into llama.cpp.
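For anyone curious, here is a minimal sketch of what such a script can look like. It assumes Ollama's current on-disk layout (manifests under ~/.ollama/models/manifests, blobs stored as sha256-<hash> files under ~/.ollama/models/blobs) and that jq is installed; the layout and media-type strings may differ between Ollama versions, so treat it as illustrative only:

  #!/usr/bin/env bash
  # Hardlink Ollama's hashed blobs into a cache of <model>-<tag>.gguf files
  # that llama-swap / llama.cpp can point at directly.
  OLLAMA=~/.ollama/models
  DEST=~/gguf-cache
  mkdir -p "$DEST"
  for manifest in "$OLLAMA"/manifests/registry.ollama.ai/library/*/*; do
    name=$(basename "$(dirname "$manifest")")
    tag=$(basename "$manifest")
    # The weights are the layer with the "image.model" media type.
    digest=$(jq -r '.layers[] | select(.mediaType=="application/vnd.ollama.image.model") | .digest' "$manifest")
    blob="$OLLAMA/blobs/${digest/:/-}"   # on disk the digest's colon becomes a dash
    [ -f "$blob" ] && ln -f "$blob" "$DEST/$name-$tag.gguf"
  done

Hardlinks keep the deduplication benefit: no extra disk space is used, and the Ollama copy stays usable from Ollama itself.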
1. MIT-style licenses are "do what you want" as long as you provide a single line of attribution. Including building a big closed-source business around it.
2. MIT-style licenses are "do what you want" under the law, but they carry moral, GPL-like obligations to think about the "community."
To my knowledge Georgi Gerganov, the creator of llama.cpp, has only complained about attribution when it was missing. As an open-source developer, he selected a permissive license and has not complained about other issues, only the lack of credit. It seems he treats the MIT license as the first kind.
The article has other good points not related to licensing that are worth knowing, like the performance issues and the simplicity that make me consider llama.cpp.
At the time I dropped it for LMStudio, which to be fair was not fully open source either, but at least exposed the model folder and integrated with HF rather than a proprietary model garden for no good reason.
This is the reason I stopped using it. I think they might be doing it for deduplication; however, it makes it impossible to use the same model with other tools. Every other tool can just point to the same existing GGUF and go. Whether it's their intention or not, it makes it difficult to try out other tools. Model files are quite large, as you know, and storage and download can become issues. (They are for me.)
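For contrast, this is roughly what "point at the file and go" looks like with llama.cpp (path and port are placeholders):

  # llama.cpp can serve any GGUF already on disk, wherever it lives
  llama-server -m ~/models/qwen2.5-7b-instruct-q4_k_m.gguf --port 8080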
pacman -Ss ollama | wc -l
16
pacman -Ss llama.cpp | wc -l
0
pacman -Ss lmstudio | wc -l
0
Maybe some day. I will switch once we have a good user experience for simple features.
A new model is released on HF or the Ollama registry? One `ollama pull` and it's available. It's underwhelming? `ollama rm`.
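To be fair, recent llama.cpp builds have a roughly comparable flow for Hugging Face-hosted GGUFs; a hedged sketch (the repo name is just an example, and flag spellings and the cache location can vary by version and platform):

  # download from Hugging Face on first run, then reuse the local cache
  llama-server -hf ggml-org/gemma-3-1b-it-GGUF
  # underwhelming? delete the cached file (default cache is ~/.cache/llama.cpp on Linux)
  rm ~/.cache/llama.cpp/*gemma-3-1b-it*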
Both llama.cpp and ollama are great and focused on different things and yet complement each other (both can be true at the same time!)
Ollama has great UX and also supports inference via MLX, which has better performance on Apple silicon than llama.cpp.
I'm using llama.cpp, ollama, lm studio, mlx etc etc depending on what is most convenient for me at the time to get done what I want to get done (e.g. a specific model config to run, mcp, just try a prompt quickly, …)
- vLLM https://vllm.ai/ ?
- oMLX https://github.com/jundot/omlx ?
NO, it is not simpler or even as simple as Ollama.
There are multiple options -- llama-server and the CLI -- and it's not obvious which model to use.
With Ollama, it's one file. And you get the models from their site; you can browse an easy list.
I don't have the time to go through 20 billion Hugging Face models and decide which one is for me.
Thanks, but I'm sticking with Ollama
Due to this post I had to search a bit and it seems that llama.cpp recently got router support[1], so I need to have a look at this.
My main use for this is a Discord bot where I have different models for different features, like replying to messages with images/video or pure text, and non-reply generation of sentiment and image descriptions. These all perform best with different models, and it has been very convenient for the server to just swap models in and out on request (the request side of that is sketched below).
[1] https://huggingface.co/blog/ggml-org/model-management-in-lla...
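Whatever ends up doing the loading (Ollama today, llama-swap, or the new llama.cpp router), the bot side stays the same: per-request model selection over the OpenAI-style chat endpoint. A hedged sketch, with the port and model names as placeholders:

  # sentiment / text-only pass
  curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
    -d '{"model": "qwen2.5-7b-instruct", "messages": [{"role": "user", "content": "Classify the sentiment of: ..."}]}'
  # image replies hit a different model; the server swaps it in on demand
  curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" \
    -d '{"model": "qwen2.5-vl-7b", "messages": [{"role": "user", "content": "Describe the attached image."}]}'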
% ramalama run qwen3.5-9b
Error: Manifest for qwen3.5-9b:latest was not found in the Ollama registry

It's a joke... but also not really? I mean, VLC is "just" an interface to play videos. Videos are content files one interacts with, mostly play/pause and a few other functions like seeking. Because there are different video formats, VLC relies on codecs to decode the videos, basically delegating the "hard" part to the codecs.
Now... what's the difference here? A model is a codec, the interactions are sending text/image/etc to it, output is text/image/etc out. It's not even radically bigger in size as videos can be huge, like models.
I'm confused as to why this isn't a solved problem, especially (and yes, I'm being a bit sarcastic here, can't help myself) in a time where "AI" supposedly made all the smart, wise developers who rely on it 10x or even 1000x more productive.
Weird.
In contrast to Ollama, this is a self-contained library, not a server.
I wrote some quick notes on this blogpost, just to jot down how we think about good open-source citizenship: https://www.nobodywho.ai/posts/notes-on-friends-dont-let-fri...
So given that, as the author states, Ollama runs the LLMs inefficiently, what is the tool that runs them most efficiently on limited hardware?
The fact that they are trying to make money is normal - they are a company. They need to pay the bills.
I agree that they should improve communication, but I assume it is still a small company with a lot of different requests, and some things might be overlooked.
Overall I like the software and services they provide.
The progression follows the pattern cleanly:
1. Launch on open source, build on llama.cpp, gain community trust
2. Minimize attribution, make the product look self-sufficient to investors
3. Create lock-in, proprietary model registry format, hashed filenames that don’t work with other tools
4. Launch closed-source components, the GUI app
5. Add cloud services, the monetization vector

When I'm using Ollama, I honestly don't care about performance; I'm looking to try out a model and then, if it seems good, move it onto a more dedicated stack built specifically for it.
FWIW, llama.cpp does almost everything Ollama does, and does it better, with the exception of model management. But, be real, you can just ask a model to write an API of your preferred shape and Qwen will handle it without issue.
It is a parasitic stack that redirects investment into service wrappers while leaving core infrastructure underfunded
We have to suffer with limits and quotas as if we are living in the Soviet Union
What is the llama-cpp alternative?
llama.cpp was already public by March 10, 2023. Ollama-the-company may have existed earlier through YC Winter 2021, but that is not the same thing as having a public local-LLM runtime before llama.cpp. In fact, Ollama’s own v0.0.1 repo says: “Run large language models with llama.cpp” and describes itself as a “Fast inference server written in Go, powered by llama.cpp.” Ollama’s own public blog timeline then starts on August 1, 2023 with “Run Llama 2 uncensored locally,” followed by August 24, 2023 with “Run Code Llama locally.” So the public record does not really support any “they were doing local inference before llama.cpp” narrative.
And that is why the attribution issue matters. If your public product is, from day one, a packaging / UX / distribution layer on top of upstream work, then conspicuous credit is not optional. It is part of the bargain. “We made this easier for normal users” is a perfectly legitimate contribution. But presenting that contribution in a way that minimizes the upstream engine is exactly what annoys people.
The founders’ pre-LLM background also points in the same direction. Before Ollama, Jeffrey Morgan and Michael Chiang were known for Kitematic, a Docker usability tool acquired by Docker on March 13, 2015. So the pattern that fits the evidence is not “they pioneered local inference before everyone else.” It is “they had prior experience productizing infrastructure, then applied that playbook to the local-LLM wave once llama.cpp already existed.”
So my issue is not that Ollama is a wrapper. Wrappers can be useful. My issue is that they seem to have taken the social upside of open-source dependence without showing the level of visible credit, humility, and ecosystem citizenship that should come with it. The product may have solved a real UX problem, but the timeline makes it hard to treat them as if they were the originators of the underlying runtime story.
They seem very good at packaging other people’s work, and not quite good enough at sounding appropriately grateful for that fact.
I've been using LM Studio since I moved to macOS, so that's fine, I guess.
This is the game. We shouldn't delude ourselves into thinking there are alternative ways to become profitable around open source, there aren't. You effectively end up in this trap and there's no escape and then you have to compromise on everything to build the company, return the money, make a profit. You took people's money, now you have to make good, there's no choice. And anyone who thinks differently is deluded. Open source only goes one way. To the enterprise. Everything else is burning money and wasting time. Look at Docker. Textbook example of the enormous struggle to capture the value of a project that had so much potential, defined an industry and ultimately failed. Even the reboot failed. Sorry. It did.
This stuff is messy. Give them some credit. They give you an epic open source project. Be grateful for that. And now if you want to move on, move on. They don't need a hard time. They're already having a hard time. These guys are probably sweating bullets trying to make it work while their investors breathe down their necks waiting for the payoff. Let them breathe.
Good luck to you ollama guys!
Clients get disappointed, alternatives have better services, and more are popping out monthly. If they continue that way, nothing good will happen, unfortunately :(
At the top there could have been a link to llama.cpp workflows equivalent to Ollama's.
I wish the OP had gone back and rewritten this as a human; I agree with not using Ollama, but I don't like reading slop.