LM Studio support is broken
in progress
Daniel Nguyen
Thanks for the report. LM Studio is no longer on my "official" support list, as I ran into quite a few issues with it, but I will try to fix this one.
Dmitry Matora
Daniel Nguyen what can you suggest as an alternative for running multiple models simultaneously and quickly switching between them on the client side?
Daniel Nguyen
Dmitry Matora Not sure about your workflow, but I mostly use Ollama now. I'm adding support for local vision models (llava) and embeddings :D
Dmitry Matora
Daniel Nguyen from what I see, Ollama is lacking this: https://github.com/ollama/ollama/issues/2109
There are multiple cases where you want multiple models running at the same time: switching between models quickly to see different answers to the same question, multiple family members using different models on the same local server simultaneously, etc.
Daniel Nguyen
Dmitry Matora I see. Did you try the latest version? They added concurrency features (still experimental, though).
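(For context: in recent Ollama releases the experimental concurrency is configured through environment variables on the server; a minimal sketch, with values chosen only for illustration:)

```shell
# Allow up to 2 different models resident in memory at once,
# and up to 4 parallel requests per loaded model.
# Defaults and exact behavior may vary by Ollama version.
OLLAMA_MAX_LOADED_MODELS=2 OLLAMA_NUM_PARALLEL=4 ollama serve
```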
Dmitry Matora
Daniel Nguyen I tried to import Dolphin GGUF models into Ollama; both the mistral and llama3 imports crashed with an exception. Seems like LM Studio is the only choice.
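(For anyone else hitting this: the standard way to import a local GGUF file into Ollama is a Modelfile with a `FROM` line pointing at the file, then `ollama create`. A minimal sketch; the filename and model name below are placeholders:)

```shell
# Minimal Modelfile referencing a local GGUF file (placeholder path)
echo 'FROM ./dolphin-model.gguf' > Modelfile
# Register it with Ollama -- this import step is where the crash occurred
ollama create my-dolphin -f Modelfile
# Then run it like any other local model
ollama run my-dolphin
```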
Daniel Nguyen
Dmitry Matora good to know. I will add official support for LM Studio then.