Native Support for LiteLLM OpenAI Proxy Server
in progress
Darien Kindlund
This is NOT a traditional HTTP/HTTPS/SOCKS proxy. It's a custom proxy that lets users load-balance their traffic across multiple API endpoints (such as multiple Azure OpenAI endpoints in different regions that serve the same model type).
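For context, the routing behavior described above can be sketched with LiteLLM's Python Router: two Azure deployments in different regions are registered under one shared model name, and requests to that name are load-balanced between them. This is a minimal sketch, not the actual proxy configuration; all endpoint URLs, deployment names, keys, and the API version below are placeholders.

```python
from litellm import Router

# Two Azure OpenAI deployments in different regions, both advertised
# under the single model name "gpt-4o". All values are placeholders.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o-eastus",
                "api_base": "https://example-eastus.openai.azure.com",
                "api_key": "AZURE_KEY_EASTUS",
                "api_version": "2024-02-15-preview",
            },
        },
        {
            "model_name": "gpt-4o",
            "litellm_params": {
                "model": "azure/gpt-4o-westeurope",
                "api_base": "https://example-westeurope.openai.azure.com",
                "api_key": "AZURE_KEY_WESTEUROPE",
                "api_version": "2024-02-15-preview",
            },
        },
    ]
)

# LiteLLM picks one of the two deployments for each request.
response = router.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```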
Ideally, when a BoltAI user provides credentials for their LiteLLM proxy, BoltAI should enumerate all the models the user is able to access and make them available through the model drop-downs, without requiring the user to configure each model separately (a sketch of this enumeration follows the links below):
Docs:
API:
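Since the LiteLLM proxy exposes an OpenAI-compatible /v1/models endpoint, the enumeration described above can be sketched with a plain HTTP call. The proxy address and key below are placeholders, not real values.

```python
import requests

PROXY_BASE = "http://localhost:4000"  # placeholder LiteLLM proxy address
API_KEY = "sk-placeholder-key"        # placeholder proxy credential

# Ask the proxy which models this key is allowed to use.
resp = requests.get(
    f"{PROXY_BASE}/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()

# Each entry's "id" is a model name a client could list in a drop-down.
for model in resp.json()["data"]:
    print(model["id"])
```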
Daniel Nguyen
in progress
Sorry for the delay. You can do this in the beta version of BoltAI. I will release a stable version soon.
Here is a quick video guide: https://share.cleanshot.com/JdGBmGgN
Daniel Nguyen
under review