This is NOT a traditional HTTP/HTTPS/SOCKS proxy. It is a custom API proxy that lets users load-balance their traffic across multiple API endpoints (for example, several Azure OpenAI endpoints in different regions that serve the same model).
Ideally, when a BoltAI user provides credentials for their LiteLLM proxy, BoltAI should enumerate all the models the proxy advertises to that user and make them available in the model drop-downs, without requiring the user to configure each model separately:
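The enumeration step above can be sketched against LiteLLM's OpenAI-compatible `GET /v1/models` endpoint. This is a minimal sketch, not BoltAI's actual implementation; the `base_url`, `api_key`, and helper names are illustrative assumptions:

```python
import json
import urllib.request

def parse_models(payload: dict) -> list[str]:
    # The response follows the OpenAI list shape:
    # {"object": "list", "data": [{"id": "<model-name>", ...}, ...]}
    return sorted(m["id"] for m in payload.get("data", []))

def list_models(base_url: str, api_key: str) -> list[str]:
    # base_url and api_key are the user-supplied proxy credentials,
    # e.g. base_url="https://litellm.example.com" (hypothetical).
    req = urllib.request.Request(
        f"{base_url.rstrip('/')}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_models(json.load(resp))
```

The returned model IDs could then populate the drop-downs directly, one entry per advertised model.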
Docs:
API: