Unable to use Inline AI with offline model
complete
Ralf
Amazing🔝🙏🙏🙏
Daniel Nguyen
complete
I've released v1.29.4 to fix the issue with the GPT trigger. It will always use OpenAI. To use a different AI service, please use the Inline Assistant instead.
Static
Daniel Nguyen Maybe handle the error better if it only affects ChatGPT. Currently I'm still getting an API error followed by a long message that includes the following:
NSLocalizedDescription=The network connection was lost., NSErrorFailingURLStringKey=https://api.openai.com/v1/chat/completions, NSErrorFailingURLKey=https://api.openai.com/v1/chat/completions, _kCFStreamErrorDomainKey=1}
I don't think the end user cares about this. Perhaps a cleaner error box that says something like "Please enter an OpenAI API key to use this feature" would be better. For now I have removed the shortcut trigger since I am using offline models.
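To illustrate the suggestion: here's a minimal Swift sketch of intercepting the raw NSError and showing an actionable message instead. The helper name and the hasOpenAIKey check are hypothetical, not BoltAI's actual code.

```swift
import Foundation

// Hypothetical helper (not BoltAI's actual code): turn a raw
// URLSession/NSError failure into a message an end user can act on.
func friendlyMessage(for error: Error, hasOpenAIKey: Bool) -> String {
    // If the feature requires an OpenAI key and none is set, say so
    // directly instead of surfacing a network-level error dump.
    guard hasOpenAIKey else {
        return "Please enter an OpenAI API key in Settings to use this feature."
    }
    let nsError = error as NSError
    switch (nsError.domain, nsError.code) {
    case (NSURLErrorDomain, NSURLErrorNetworkConnectionLost),
         (NSURLErrorDomain, NSURLErrorNotConnectedToInternet):
        return "The network connection was lost. Please check your connection and try again."
    default:
        // Fall back to the short localized description only.
        return "Request failed: \(nsError.localizedDescription)"
    }
}
```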
Ralf
Perfect, thx a lot 🙏
Daniel Nguyen
Can you use an Inline Assistant for this use case? https://boltai.com/docs/ai-inline/inline-assistant
The GPT: trigger is really for OpenAI models. It's very limited and doesn't support other parameters such as a custom system prompt or GPT parameters. Using the Inline Assistant is recommended.
Ralf
Daniel Nguyen
Thanks, Daniel!
The Inline Assistant is definitely solid, and I see why it’s recommended. That said, it’d be awesome if the GPT trigger could still work independently of the default model. Right now, if I set the default to something other than OpenAI, the GPT trigger just stops working, which is super limiting.
It’d be great if the GPT trigger could always default to OpenAI, no matter what the default model is. That way, we get the flexibility of using custom models and still keep the GPT trigger functional.
Any chance this could be improved?
Cheers!
Ralf
Daniel Nguyen
Ralf: Ah you're right. I've fixed it and will release soon. It should always be OpenAI.
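For anyone following along, the behavior being fixed could be modeled roughly like this: a minimal Swift sketch (hypothetical type and property names, not BoltAI's source) where each inline trigger can pin its own AI service, so the GPT trigger resolves to OpenAI even when the app-wide default is a local model.

```swift
// Hypothetical sketch of the described fix: resolve the AI service
// per trigger, so the GPT trigger stays pinned to OpenAI even when
// the app-wide default model is something else (e.g. a local LLM).
enum AIService {
    case openAI
    case local(modelName: String)
}

struct InlineTrigger {
    let keyword: String
    /// nil means "follow the app-wide default model"
    let pinnedService: AIService?
}

func resolveService(for trigger: InlineTrigger, appDefault: AIService) -> AIService {
    // A pinned service always wins over the app-wide default.
    trigger.pinnedService ?? appDefault
}

// The GPT trigger always resolves to OpenAI:
let gptTrigger = InlineTrigger(keyword: "gpt:", pinnedService: .openAI)
// A custom trigger could follow the default, which might be a local model:
let summarize = InlineTrigger(keyword: "summarize:", pinnedService: nil)
```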
Daniel Nguyen
under review
Ralf
I have the same problem. It would be perfect if I could select the Inline AI command independently of the default model, and if I could also select a local LLM for Inline AI.