Custom Server uses incorrect URL for Load Models
in progress
D
Dan Sully
If I create a custom OpenAI-compatible server with the Chat Endpoint of:
That works for making requests. However, if I try to "Load Models", BoltAI requests:
instead of the correct: https://my.host/api/v1/beta/models
C
Christopher Lane
For me the issue is somewhat different: the models endpoint returns a 200 response, but no list is retrieved. For context, I'm using llama.cpp with llama-swap to switch between models. In Open WebUI the model list loads correctly.
[INFO] Request 127.0.0.1 "GET /v1/models HTTP/1.1" 200 1600 "Python/3.12 aiohttp/3.11.11" 74.41µs
[INFO] Request 127.0.0.1 "GET /v1/models HTTP/1.1" 200 1600 "BoltAI/163 CFNetwork/1410.4 Darwin/22.6.0" 52.514µs
{"data":[{"created":1748871394,"id":"gemma3-27b","object":"model","owned_by":"llama-swap"},{"created":1748871394,"id":"devstral-24b","object":"model","owned_by":"llama-swap"},{"created":1748871394,"id":"mistral-24b","object":"model","owned_by":"llama-swap"}]}
C
Christopher Lane
OK, this was an issue on the llama-swap side; it's now fixed. Thanks!
https://github.com/mostlygeek/llama-swap/pull/154
D
Dan Sully
Yes, that works, although I tried it before seeing your comment here. I put in the full URL, which caused the app to crash whenever I went to the Providers settings. I had to manually update the SQLite database to fix it.
You might want to strip off everything but the "path" portion of the input in that field.
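The input normalization Dan suggests could look something like this. This is just a sketch, not BoltAI's actual implementation, and the function name is made up:

```python
from urllib.parse import urlsplit

def normalize_models_path(value: str) -> str:
    """If the user pastes a full URL into the model-listing field,
    keep only its path; otherwise return the value unchanged."""
    value = value.strip()
    if value.startswith(("http://", "https://")):
        return urlsplit(value).path
    return value

print(normalize_models_path("https://my.host/api/v1/beta/models"))
# /api/v1/beta/models
print(normalize_models_path("/api/v1/beta/models"))
# /api/v1/beta/models
```

Accepting both forms would also have avoided the crash, since a full URL in that field would be silently reduced to its path.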
Daniel Nguyen
Dan Sully: Thanks. I'll check.
Daniel Nguyen
I added the ability to customize the model listing endpoint. Can you check if it works for you?
The value is the relative path to the domain.
For example if the chat endpoint is https://my.host/api/v1/beta/openai/chat/completions, the model listing config would be:
/api/v1/beta/models
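The resolution Daniel describes (relative path joined to the chat endpoint's origin) could be sketched like this; the function name is illustrative, not BoltAI's actual code:

```python
from urllib.parse import urlsplit, urlunsplit

def resolve_models_url(chat_endpoint: str, models_path: str) -> str:
    """Combine the chat endpoint's scheme and host with a relative
    model-listing path to build the full models URL."""
    parts = urlsplit(chat_endpoint)
    return urlunsplit((parts.scheme, parts.netloc, models_path, "", ""))

print(resolve_models_url(
    "https://my.host/api/v1/beta/openai/chat/completions",
    "/api/v1/beta/models",
))  # https://my.host/api/v1/beta/models
```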
(relative to https://my.host)
Daniel Nguyen
in progress
J
Jonathan Rico
Same issue: BoltAI does not load the model list, and I cannot add it manually.
This is my URL:
and here is the model URL:
C
Christopher Lane
I'm experiencing similar issues. When "Support model listing?" is checked, refreshing the list doesn't return anything. The "Default model" selector is broken too.
"GET /v1/models HTTP/1.1" 200 913 "BoltAI/162 CFNetwork/1410.4 Darwin/22.6.0" 55.509µs