by syntaxing
4 subcomments
- I’m really excited about lmster and can't wait to try it out. It’s essentially what I want from Ollama. Ollama has deviated so much from its original core principles, and it has been broken and slow to update model support. There’s this “vendor sync” (essentially updating ggml) that I’ve been waiting on for weeks.
- These days I don't feel the need to use anything other than llama.cpp server as it has a pretty good web UI and router mode for switching models.
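A minimal sketch of hitting that server's OpenAI-compatible endpoint, assuming llama-server on its default port 8080; the model name is illustrative, and (as I understand router mode) the "model" field is what selects which model serves the request:
```python
# Sketch: query llama.cpp's llama-server via its OpenAI-compatible API.
# Assumes the server is on localhost:8080 (its default); the model name
# is illustrative and depends on what you have loaded.
import json
import urllib.request

payload = json.dumps({
    "model": "qwen2.5-coder-7b",  # illustrative name
    "messages": [{"role": "user", "content": "Say hi in one word."}],
    "temperature": 0.7,
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    out = json.loads(resp.read())
print(out["choices"][0]["message"]["content"])
```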
by minimaxir
1 subcomment
- LMStudio introducing a command line interface makes things come full circle.
by thousand_nights
3 subcomments
- Man, they really butchered the user interface. The "dark" mode now isn't even dark, it's just grey, and it looks more like a whitespace-maxxed children's toy than a tool for professionals.
- How does LM Studio differ from Ollama?
Why would I use one rather than the other?
The impression I get is that LM Studio is basically an Ollama-type solution with an IDE included -- is that a fair approximation?
Things change so fast in the AI space that I really cannot keep up :(
- LM Studio is awesome for how easily you can get started with local models. Nice UX, no need to tweak every detail, but it gives you the options to do so if you want.
- This release introduces parallel requests with continuous batching for high-throughput serving, an all-new non-GUI deployment option, a new stateful REST API, and a refreshed user interface.
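A quick way to exercise the parallel serving: fire several requests at the OpenAI-compatible endpoint concurrently and watch the wall-clock time. A sketch, assuming LM Studio's server on its default localhost:1234 with a model already loaded (the model name is illustrative):
```python
# Sketch: send concurrent chat requests to exercise continuous batching.
# Assumes LM Studio's OpenAI-compatible server on localhost:1234 (default).
import json
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://127.0.0.1:1234/v1/chat/completions"

def ask(prompt):
    payload = json.dumps({
        "model": "local-model",  # illustrative
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }).encode()
    req = urllib.request.Request(
        URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

prompts = [f"Count to {n}." for n in range(1, 9)]
start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(ask, prompts))
print(f"8 requests in {time.time() - start:.1f}s")
```
With continuous batching working, the eight requests should finish in far less than eight times the single-request latency.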
by saberience
14 subcomments
- What’s the main use-case for this?
I get that I can run local models, but all the paid-for (remote) models are superior.
So is the use-case just for people who don’t want to use big tech’s models? Is this just for privacy conscious people? Or is this just for “adult” chats, ie porn bots?
Not being cynical here, just wanting to understand the genuine reasons people are using it.
- Finally a UI that is not so ugly. Now I'm only wondering whether I can somehow set things up so that LM Studio and llamabarn/Ollama share the same LLM models (so that I don't have to waste storage on duplicates).
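For GGUF-based tools, one copy plus symlinks can work; a sketch, where every path is an assumption about your particular setup (note that Ollama's content-addressed blob store generally can't be shared this way):
```python
# Sketch: symlink a shared GGUF directory into LM Studio's models folder so
# two GGUF-based tools read the same files. All paths are assumptions about
# a typical setup.
import os
from pathlib import Path

shared = Path.home() / "models" / "gguf"                        # your single copy
lms = Path.home() / ".lmstudio" / "models" / "shared" / "gguf"  # assumed layout

lms.parent.mkdir(parents=True, exist_ok=True)
if not lms.exists():
    os.symlink(shared, lms, target_is_directory=True)
print(f"{lms} -> {shared}")
```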
- I've been using LM Studio for a while, this is a nice update. For what I need, running a local model is more than adequate. As long as you have sufficient RAM, of course.
by doanbactam
0 subcomments
- I've been using Ollama for local dev, but the model management here seems easier to use, and the new UI looks much cleaner than previous versions. Has anyone benchmarked the server mode against Ollama yet? Switching environments is a pain if the API compatibility isn't solid.
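In lieu of published numbers, a rough single-stream comparison is easy to script against both OpenAI-compatible endpoints; a sketch, assuming default ports and illustrative model names (a fair benchmark would also pin quantization, context size, and warm-up):
```python
# Rough single-stream latency comparison between LM Studio and Ollama via
# their OpenAI-compatible endpoints. Default ports assumed (1234 / 11434);
# model names are illustrative and must match what each server has loaded.
import json
import time
import urllib.request

SERVERS = {
    "lmstudio": ("http://127.0.0.1:1234/v1/chat/completions", "local-model"),
    "ollama":   ("http://127.0.0.1:11434/v1/chat/completions", "llama3.1"),
}
PROMPT = "Explain continuous batching in two sentences."

for name, (url, model) in SERVERS.items():
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "max_tokens": 128,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    start = time.time()
    with urllib.request.urlopen(req) as resp:
        out = json.loads(resp.read())
    dt = time.time() - start
    toks = out.get("usage", {}).get("completion_tokens", 0)
    print(f"{name}: {dt:.1f}s" + (f", {toks / dt:.1f} tok/s" if toks else ""))
```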
- My complaint is that LM Studio insists on installing as admin on my Mac, for no apparent reason, and they refuse to say why.
by desipenguin
0 subcomments
- Does this version support only M-series Macs?
Download page (https://lmstudio.ai/download) shows only `M Series` in the running dropdown
- Is there an iOS/Android app that supports the LM Studio API(s) endpoints? That seems to be the "missing" client, especially now with llmster (tbh I haven't looked very hard)
- this is not open source
- Personally, I would not run LM Studio anywhere outside my local network, as it still doesn't support adding an SSL cert. You can layer a proxy server on top of it, but for a tool that's meant to be easy to set up, built-in support seems like a quick win, and I don't see any reason not to add it.
https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1...
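Until then, terminating TLS yourself is straightforward; a minimal sketch of a TLS-terminating forwarder, assuming LM Studio on its default localhost:1234 and a cert/key pair you generated yourself (it buffers whole responses, so streaming won't work -- a stopgap, not production-grade):
```python
# Stopgap TLS termination in front of LM Studio's plain-HTTP server.
# Assumptions: LM Studio on 127.0.0.1:1234 (its default) and a
# cert.pem/key.pem pair you generated (e.g. with openssl).
import ssl
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://127.0.0.1:1234"

class TLSForwarder(BaseHTTPRequestHandler):
    def _forward(self):
        length = int(self.headers.get("Content-Length") or 0)
        body = self.rfile.read(length) if length else None
        headers = {k: v for k, v in self.headers.items()
                   if k.lower() != "host"}
        req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                     headers=headers, method=self.command)
        try:
            resp = urllib.request.urlopen(req)
            status = resp.status
        except urllib.error.HTTPError as e:  # pass upstream errors through
            resp, status = e, e.code
        data = resp.read()
        self.send_response(status)
        for k, v in resp.headers.items():
            if k.lower() not in ("transfer-encoding", "connection"):
                self.send_header(k, v)
        self.end_headers()
        self.wfile.write(data)

    do_GET = do_POST = _forward

server = ThreadingHTTPServer(("0.0.0.0", 8443), TLSForwarder)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("cert.pem", "key.pem")
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```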
- Hijacking this: what is the best local model (and tool to run it) for programming if I only have a 256 GB SSD on a Mac? I'm very used to Codex, and while I get that nothing local will ever be this smart, is there any coding model like it that isn't too heavy on space?
- lmster is what was lacking in LM Studio (yes, they have lms, but it lacks so much of the functionality the GUI version has).
But it's a bit too little, too late; people running this can probably already set up llama.cpp easily enough.
LM Studio also has some overhead, like Ollama; llama.cpp or MLX alone are always faster.
by chocobaby15
1 subcomment
- When are you guys going to offer cloud inference as well?
by ai_critic
1 subcomment
- What exactly is the difference between lms and lmster?
by Der_Einzige
2 subcomments
- Why is it that there are ZERO truly prosumer LLM front ends from anyone you can pay?
The closest thing we have to an LLM front end where you can actually CONTROL your model (i.e. advanced sampling settings) is oobabooga/sillytavern - both ultimately UIs designed mostly for "roleplay/cooming". It's the same shit with image gen and ComfyUI too!!!
LM Studio purported to be something like those two, but it has NEVER properly supported more than a small fraction of the settings LLMs use, and thus it's DOA for prosumers/pros.
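(To be fair to the non-GUI route: llama.cpp's own server does expose most of those sampling knobs on its native /completion endpoint. A sketch with illustrative values, assuming llama-server on its default localhost:8080.)
```python
# Sketch: llama-server's native /completion endpoint takes fine-grained
# sampling controls, unlike most GUI front ends. Values are illustrative;
# the server is assumed to be on localhost:8080 (its default).
import json
import urllib.request

payload = json.dumps({
    "prompt": "Once upon a time",
    "n_predict": 64,
    "temperature": 0.8,
    "top_k": 40,
    "top_p": 0.95,
    "min_p": 0.05,
    "repeat_penalty": 1.1,
    "seed": 42,
}).encode()

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```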
- I'm glad that Claude Code and Moltbot are killing this whole genre of software, since apparently VC-backed developers can't be trusted to make it.
- Does it work with NPUs?
by MarginalGainz
0 subcomments
- [dead]
- edit: disregard, the new version did not respect the old version's developer mode setting
- Is the GUI still unable to connect to an instance of lm-studio running elsewhere?
by huydotnet
2 subcomments
- I was hoping for the /v1/messages endpoint to use with Claude Code without any extra proxies :(
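The proxy people reach for is essentially a small translation shim; a sketch of the idea, assuming LM Studio's OpenAI-compatible server on its default localhost:1234 (deliberately simplified: string-only message content, no streaming or tool use, and only the common Anthropic response fields reconstructed):
```python
# Sketch of a /v1/messages -> /v1/chat/completions translation shim.
# Assumes LM Studio's OpenAI-compatible server on localhost:1234 (default).
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://127.0.0.1:1234/v1/chat/completions"

class MessagesShim(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/messages":
            self.send_error(404)
            return
        req = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Anthropic carries the system prompt in a top-level field;
        # OpenAI-style APIs expect it as the first message.
        messages = ([{"role": "system", "content": req["system"]}]
                    if req.get("system") else []) + req["messages"]
        payload = json.dumps({
            "model": req.get("model", "local-model"),  # illustrative default
            "messages": messages,
            "max_tokens": req.get("max_tokens", 1024),
        }).encode()
        upstream = urllib.request.Request(
            UPSTREAM, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(upstream) as resp:
            out = json.loads(resp.read())
        # Re-wrap the OpenAI-style response in Anthropic's shape.
        body = json.dumps({
            "type": "message",
            "role": "assistant",
            "content": [{"type": "text",
                         "text": out["choices"][0]["message"]["content"]}],
            "stop_reason": "end_turn",
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8787), MessagesShim).serve_forever()
```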