First off, massive thanks for the huge 1.12 update. At this point I often use the OpenAI API just to use this app, not the other way around haha.
I have some more feature requests, starting with this one:
There are several other API providers besides OpenAI that are fully compatible with OpenAI's API. The only things that need to change are the base URL itself (e.g. api.openai.com) and the API key.
They often support the same models as OpenAI, but sometimes other models are offered too.
For example, you could spin up your own inference server through this:
I have tried this, and it works by changing the API Host under Settings > Models > OpenAI > Advanced > API Host.
(This one does not require an API key by default, so it can be made to work by just leaving the API key field blank.)
I have used some other providers too, and they also work after replacing the API Host and API Key fields.
But currently only one such provider can be used at a time, since the OpenAI model page is static and more instances of it cannot be added.
Most of these providers expose URLs like https://<base url>/v1/chat/completions, with other paths for other modalities.
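To illustrate what I mean, here is a minimal Python sketch of the pattern: the endpoint path per modality is fixed by OpenAI's API, and switching providers only changes the base URL (and the key sent in the header). The modality names and the helper function are just my own illustration, not anything from the app:

```python
# Standard OpenAI API paths per modality. Providers that are
# OpenAI-compatible reuse these paths under their own base URL;
# individual providers may implement only a subset of them.
OPENAI_PATHS = {
    "chat": "/v1/chat/completions",
    "transcription": "/v1/audio/transcriptions",
    "image": "/v1/images/generations",
    "models": "/v1/models",
}

def endpoint(base_url: str, modality: str) -> str:
    """Join a provider's base URL with the standard OpenAI path."""
    return base_url.rstrip("/") + OPENAI_PATHS[modality]

# Only the base URL differs between providers:
print(endpoint("https://api.openai.com", "chat"))
# -> https://api.openai.com/v1/chat/completions
print(endpoint("http://localhost:8080", "chat"))
# -> http://localhost:8080/v1/chat/completions
```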
If this feature were to be considered, perhaps the flow could be: set up the base URL and API key first, then a button to add all OpenAI modalities (transcription, chat completions, image generation, etc.), and then an option to add custom models if the user wants to. It may not be necessary to fetch /v1/models for custom providers, since not all of them have a well-defined model list; letting the user add models themselves would be enough.
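To make the proposal concrete, here is a rough sketch of what a per-provider settings entry could look like. All field names here are hypothetical, not the app's actual settings schema:

```python
from dataclasses import dataclass, field

# Hypothetical data model for a user-added OpenAI-compatible provider.
# Every field name is an assumption for illustration only.
@dataclass
class CustomProvider:
    name: str
    base_url: str
    api_key: str = ""  # blank for local servers that need no key
    # Which OpenAI modalities the user enabled for this provider:
    modalities: list = field(default_factory=lambda: ["chat"])
    # Models entered manually by the user, no /v1/models fetch needed:
    custom_models: list = field(default_factory=list)

# Multiple providers could then coexist instead of one static page:
providers = [
    CustomProvider("OpenAI", "https://api.openai.com", api_key="sk-example"),
    CustomProvider("local server", "http://localhost:8080",
                   custom_models=["my-local-model"]),
]
```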
Let me know if you would consider this feature, and I can share an endpoint and my API key there for testing. I can also test custom builds if you want.