Custom Server uses incorrect URL for Load Models
in progress
Dan Sully
If I create a Custom OpenAI-compatible server with a Chat Endpoint of:
it works for making requests. However, when I try to "Load Models", BoltAI requests:
instead of the correct: https://my.host/api/v1/beta/models
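For reference, a quick way to confirm what the models endpoint should return is to request it directly. This is a minimal sketch, assuming the base URL https://my.host/api/v1/beta from my setup and a placeholder bearer API key:

```python
import json
import urllib.request

# Assumed values from my custom server setup; substitute your own.
BASE_URL = "https://my.host/api/v1/beta"
API_KEY = "sk-placeholder"

# OpenAI-compatible servers expose model listing at {BASE_URL}/models.
req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# A compliant response wraps the models in a "data" array.
for model in body.get("data", []):
    print(model["id"])
```

This is the URL I would expect "Load Models" to hit; instead BoltAI appears to drop the path prefix from the Chat Endpoint.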
Daniel Nguyen
in progress
Jonathan Rico
Same issue: BoltAI does not load the model list, and I cannot add models manually.
This is my URL:
and here is the model URL:
Christopher Lane
I'm experiencing similar issues. When "Support model listing?" is checked, refreshing the list doesn't return anything. The "Default model" selector is broken too.
"GET /v1/models HTTP/1.1" 200 913 "BoltAI/162 CFNetwork/1410.4 Darwin/22.6.0" 55.509µs