Changelog
Follow along with the latest improvements and updates.
Hi everyone. It’s Daniel again 👋
My goal has always been to make BoltAI the best AI client for Mac. Thanks to your feedback, I’ve made a lot of improvements to the BoltAI v2 app. Update to the latest version v2.0.10 (build 30) and let me know what you think.
Here are the most notable changes.
## Subfolders
BoltAI now supports subfolders. Move and reorder folders with drag-and-drop, or use the context menu.

## Chat Inspector Pane
Similar to v1, the Chat Inspector Pane allows you to configure a chat's advanced settings.
In v2, I improved the UX to make the configuration profile clearer. By default, the chat follows the global configuration you set in App Settings. When you customize the LLM parameters, for example, the chat switches to the Chat Configuration profile and is no longer affected by changes to the global configuration.
Within a project folder, you can switch between Folder Profile and Chat Profile.
In the Info tab, you can find the canonical deep link to the chat. Use this link to reference the chat in your notes or other apps. When triggered, it opens that exact chat in a new chat companion window.

## Memory Tool
The Memory Tool allows the LLM to remember helpful information between conversations, making its responses more relevant and personalized.
Make sure you're using a model with tool-use capability and enable the plugin in the Tools popover. When applicable, the LLM will automatically use and update memories to improve its responses.
You can also teach the model to remember something new by simply saying it. For example: “Remember that I’m an indie developer and would prefer a simple solution to engineering problems”.
You can manage the memories in Settings > Plugins > Memories.
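If you're curious how tool use makes this possible, here is a minimal, purely illustrative sketch of the kind of function-style tool definition a memory plugin might expose to the model. The tool name, fields, and schema are hypothetical, not BoltAI's internals.

```swift
// Hypothetical sketch of a memory tool definition in the common
// function-calling schema. Names ("save_memory", "content") are
// illustrative only, not BoltAI's actual implementation.
let saveMemoryToolJSON = """
{
  "type": "function",
  "function": {
    "name": "save_memory",
    "description": "Persist a short fact about the user for future conversations.",
    "parameters": {
      "type": "object",
      "properties": {
        "content": { "type": "string", "description": "The fact to remember." }
      },
      "required": ["content"]
    }
  }
}
"""
// The definition is sent with the chat request; when the model decides a
// fact is worth keeping, it emits a tool call that the client stores
// locally (surfaced in Settings > Plugins > Memories).
```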

## Web Fetch
The Web Fetch Tool gives the LLM the ability to read the content of a web page. It's pretty straightforward: enable it in the Tools popover and ask the model about a particular web page.
With the Web Fetch tool, you can use a local model while still accessing up-to-date content from the internet. This allows you to enjoy the benefits of current information without compromising your privacy.
The Web Fetch tool is automatically enabled when using a search tool such as Brave Search.
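Conceptually, a web-fetch tool boils down to downloading the page and handing the model a plain-text version of it. The sketch below shows that general idea only; it is not BoltAI's implementation, and a real tool would use a proper HTML parser.

```swift
import Foundation

// Minimal conceptual sketch of a web-fetch tool: download the page,
// strip markup, and return plain text for the model to read.
func fetchPageText(_ url: URL) async throws -> String {
    let (data, _) = try await URLSession.shared.data(from: url)
    let html = String(decoding: data, as: UTF8.self)
    // Crude tag stripping for illustration; use an HTML parser in practice.
    return html
        .replacingOccurrences(of: "<[^>]+>", with: " ", options: .regularExpression)
        .replacingOccurrences(of: "\\s+", with: " ", options: .regularExpression)
        .trimmingCharacters(in: .whitespacesAndNewlines)
}
```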

## Other quality-of-life improvements
- Recent Dictations. Find your recent dictations in the app menu bar. Useful when auto-paste fails.
- New Developer Settings. Allows resetting the local database and re-syncing everything.
- Instant Chat Bar auto-attaches clipboard snippets or the current selection (requires Accessibility permission).
- Reply-finished notifications let BoltAI alert you as soon as a background response wraps up; toggle this in Settings → General.
- Set AI Service Order. Change the order of configured AI Services in the service selection popover.
- Project Default Plugins. You can now configure the default plugins for a project.
- Other bug fixes and improvements.
## And that’s it for this week
I'm committed to making BoltAI v2 the best AI chat app for Mac, and I need your help to shape its future.
Please keep the feedback coming 🙏
Hi everyone, BoltAI v2 is finally here ✨
First of all, thank you for your continued support, and sorry for the lack of communication from my end. I’ve been putting my 100% attention into v2 development over the last few months and wasn’t able to share much of the progress.
With that said, I’m very happy with the result. After months of effort, I’m super excited to share that the BoltAI v2 beta is ready for public release. It’s a lot better than v1, which is already pretty good, I think.
Let’s get to it.

---
## Download Link for BoltAI v2
Here is the download link if you want to jump straight to the product: https://updates.boltai.com/dmg/BoltAI-2.0.5.dmg
Note that it’s a completely new app, with a different app bundle ID. You can import data from v1 to v2 to speed up the configuration process (keyboard shortcut: Command + Shift + I).
Since v2 doesn’t have full feature parity with v1 yet, you may want to use both v1 and v2 in parallel.
## What's new
Here are some of the notable improvements:
### Cloud Sync Ready
You can choose to sync all your data with BoltAI Cloud for easy access. BoltAI Cloud is hosted on Supabase with strict row-level security (RLS) enforcement. Your data is encrypted at rest and in transit.
BoltAI supports end-to-end encryption for your API keys. You can set your own passphrase to encrypt your API keys. This way, nobody can see your API keys, not even me as the developer.
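For the curious, here is a rough sketch of the general approach to passphrase-based encryption of a secret using Apple's CryptoKit: derive a key from the passphrase and seal the secret with AES-GCM. This is purely illustrative and not BoltAI's actual scheme; a production design would use a slow KDF such as PBKDF2 or Argon2 and careful salt handling.

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: derive a symmetric key from a passphrase and
// encrypt a secret with AES-GCM. Not BoltAI's actual implementation.
func encryptSecret(_ secret: String, passphrase: String, salt: Data) throws -> Data {
    let key = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: SymmetricKey(data: Data(passphrase.utf8)),
        salt: salt,
        info: Data("api-key".utf8),
        outputByteCount: 32
    )
    let sealed = try AES.GCM.seal(Data(secret.utf8), using: key)
    return sealed.combined!  // nonce + ciphertext + auth tag
}

func decryptSecret(_ blob: Data, passphrase: String, salt: Data) throws -> String {
    let key = HKDF<SHA256>.deriveKey(
        inputKeyMaterial: SymmetricKey(data: Data(passphrase.utf8)),
        salt: salt,
        info: Data("api-key".utf8),
        outputByteCount: 32
    )
    let box = try AES.GCM.SealedBox(combined: blob)
    let plain = try AES.GCM.open(box, using: key)
    return String(decoding: plain, as: UTF8.self)
}
```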
Cloud Sync is completely optional. You can skip it and use BoltAI 100% locally.
### Faster UI, better UX
I rebuilt v2 from the ground up with a stronger foundation. It targets macOS 13+ and only uses modern Apple APIs, making the whole app snappier.
You can quickly navigate between chats in v2 with standard keyboard shortcuts:
- ⌘1 to ⌘9 to select the first 9 chats in the sidebar. Demo: https://share.cleanshot.com/6KFJx21j
- ⌘[ and ⌘] to go back / go forward.
- ⌘[ and ⌘] to select previous / next item in the sidebar. Demo: https://share.cleanshot.com/y2NBxcHb
I rebuilt the chat renderer completely. It’s faster and looks more modern now. I tested it with a very large chat (43k+ messages), and it doesn’t affect performance at all. Give it a try and let me know what you think.
The model selection popover also got a major redesign. Now you can quickly switch to another service and model with just a keyboard shortcut.
### New “Instant Chat Bar”

Following your feedback, I reworked v1's "Instant Command" and made it even better: a cleaner design and a native look and feel, while remaining just as powerful. Press Control + Space to trigger it.
In Settings, you can enable more advanced features such as shake to activate, auto-attach clipboard content, and so on.
Demo:
- Instant Chat https://share.cleanshot.com/ZYc76RH0
- Shake to Chat https://share.cleanshot.com/kNRXWqwX
### New “Instant Dictation”

I reworked the Inline Whisper feature from v1, and now you can use it with local AI models too! In v2, I decided to use the Parakeet model as it’s fast, consumes less RAM, and can work 100% offline.
Like in v1, you can choose to copy the output to the clipboard or paste it directly into the app.
### Tighter AI provider integration
BoltAI v2 continues to support a wide range of AI services, now including subscriptions such as Claude Code or GitHub Copilot.
It works even better in v2, where all provider responses are unified to give you the same experience. In v1, there were a lot of inconsistencies between providers.
I integrated deeper with each AI service provider in v2 to take full advantage of model capabilities.
For example, when using the OpenAI provider, you can also use native tools such as Web Search, Image Generation, Code Interpreter, and more. Go to Settings > Plugins to configure native provider tools.

### More Secure
BoltAI v1 supported some great out-of-app features such as AI Command, AI Inline, and File Sync. Some of these required turning off sandboxing (Apple's default security model). In v2, I reworked this and made the app sandboxed by default. This way, for future features like AI agents, you can rest assured that the app won't touch your files without asking first.
And a lot more…
I’m curious to hear your thoughts on the new app.
## Future Roadmap
BoltAI v2 is still in active development. It has ~80% feature parity with v1 now.
I will continue to port features from v1 to v2 in the coming months. Please let me know which features in v1 you want to use in v2 first. I’ll prioritize them.
Top-priority features I’m porting to v2 right now are: MCP support, AI Command, and other features for teams. Stay tuned.
Please bookmark and share your feedback on this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9
## FAQs:
Is my v1 license valid for v2?
Yes. BoltAI v2 continues to follow the same licensing model as v1: a Perpetual License with 1 year of updates. Think of it as v1.99.
However, in v2 I’ll introduce a subscription plan for Cloud Sync (optional) as it incurs real recurring expenses. Note that the Cloud Sync feature is completely optional. You can turn sync off and keep using it like v1.
In the first beta, there is no license validation yet. Use it for free while in beta!
Will v1 keep receiving updates / bug-fixes?
I’ll continue to fix critical bugs in v1 until all features are ported to v2. New features and major updates will be focused on v2.
How do I migrate my data?
To import data from v1, open the BoltAI v2 app, press Command + Shift + I, and follow the instructions.
Note that in the first release, AI Commands & AI Assistants are not supported yet. You can import: AI Services, Folders, Chats, Prompts and Memories.
How do I report bugs?
Please report bugs for v2 using this board: https://feedback.boltai.com/?b=646b16f66b8d963816ca5dc9
## And that’s it, for now
See you in the next release 👋
## v1.36.5 (build 169)
- Added support for OpenAI's latest model gpt-5-pro
- Support gpt-5 without Org Verification (turn streaming off)
- Supports Responses API for o3-pro, o1-pro, and other deep research models
- Azure OpenAI Service: added support for GPT-5 and other models
- Switched to max_completion_tokens instead of max_tokens (see the sketch after this list)
- Fetch model list directly from GitHub Copilot
- Improved MCP security
- Improved OpenRouter error handling
- New implementation of OpenRouter's API key validation
- Fixed the issue where gpt-5 model family doesn't work with temperature and top_p parameters
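For reference, the max_completion_tokens change is just a renamed field in the request body. Below is a rough sketch of a Chat Completions payload; the model name, token count, and placeholder API key are examples only.

```swift
import Foundation

// Sketch: newer OpenAI models expect "max_completion_tokens" in Chat
// Completions requests where older models used "max_tokens".
// Model name and values below are examples only.
let body: [String: Any] = [
    "model": "gpt-5",
    "messages": [["role": "user", "content": "Hello"]],
    "max_completion_tokens": 1024,   // previously: "max_tokens": 1024
    "stream": false
]

var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer YOUR_API_KEY", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try! JSONSerialization.data(withJSONObject: body)
```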
## v1.36.0
- [New] Added support for OpenAI's new Responses API (o1-pro & o3-pro models)
- [New] LM Studio provider now supports Tool Use
- [New] Added MCP logging (use Console.app to view detailed logs)
- [Improved] New implementation of MCP server initialization
- [Improved] Switched to OpenAI-compatible API endpoint for Google AI provider
- [Improved] Improved model list fetching on app startup
- [Fix] Not showing AI response for some local AI models
- [Fix] Invalid JSON response when using Google AI
- [Fix] option+return doesn't trigger the alt AI Command
- [Fix] Incorrectly parsed some MCP tool call parameters
- [Fix] Spaces aren't escaped in MCP server args
## v1.35.3

- Claude 4 models support Reasoning Effort params
- New voice models: GPT-4o Transcribe and GPT-4o mini Transcribe
- Support for AWS's ProcessAWSCredentialIdentityResolver
- Added support for all AWS regions
- WAL mode enabled by default for faster and better database performance
- Improved Inspector pane rendering with reduced flickering
- New message rendering & icon set with a focus on content
- Customizable model listing endpoint for OpenAI-compatible servers (default: /v1/models); see the sketch after this list
- Attempted fix for the Message Editor window issue
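For context, OpenAI-compatible servers conventionally expose the model list at GET /v1/models, returning a JSON object with a data array of model ids. Here is a rough sketch of fetching that list; the base URL is just an example of a local server.

```swift
import Foundation

// Sketch: fetch the model list from an OpenAI-compatible server.
// The listing path defaults to /v1/models but can now be customized.
struct ModelList: Decodable {
    struct Model: Decodable { let id: String }
    let data: [Model]
}

func listModels(baseURL: URL) async throws -> [String] {
    let endpoint = baseURL.appendingPathComponent("v1/models")
    let (data, _) = try await URLSession.shared.data(from: endpoint)
    return try JSONDecoder().decode(ModelList.self, from: data).data.map(\.id)
}

// Example usage (base URL is illustrative):
// let ids = try await listModels(baseURL: URL(string: "http://localhost:1234")!)
```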
## v1.34.1: Model Context Protocol support

- New: BoltAI is now an MCP client. MCP servers allow you to extend BoltAI's capabilities with custom tools and commands
- New: Added support for GitHub Copilot and Jina DeepSearch
- New: Added OpenAI's new models: o3, o4-mini, GPT-4.1, GPT-4.1 mini, GPT-4.1 nano
- New: Support more UI customization options: App Sidebar & Chat Input UI
- Improved: Automatically pull model list from Google AI
What's new?
Model Context Protocol
The most significant change of this release is MCP support.
An MCP server, short for Model Context Protocol server, is a lightweight program that exposes specific capabilities to AI models via a standardized protocol, enabling AI to interact with external data sources and tools.
MCP servers allow you to extend BoltAI's capabilities with custom tools and commands.
To learn more about using MCP servers in BoltAI, read more at https://boltai.com/docs/plugins/mcp-servers
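To make the idea a bit more concrete, here is a heavily simplified sketch of what an MCP client does with a stdio-based server: launch it as a child process and exchange JSON-RPC 2.0 messages over stdin/stdout, starting with an initialize handshake. The server path and message details are illustrative; BoltAI's own configuration is covered in the docs linked above.

```swift
import Foundation

// Heavily simplified sketch of an MCP client talking to a stdio server.
// The server binary path is hypothetical; BoltAI handles this through its
// MCP server configuration UI.
let server = Process()
server.executableURL = URL(fileURLWithPath: "/usr/local/bin/example-mcp-server")
let input = Pipe()
let output = Pipe()
server.standardInput = input
server.standardOutput = output
try! server.run()

// The handshake starts with a JSON-RPC "initialize" request.
let initialize: [String: Any] = [
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": [
        "protocolVersion": "2024-11-05",
        "capabilities": [String: Any](),
        "clientInfo": ["name": "example-client", "version": "0.1"]
    ] as [String: Any]
]
var message = try! JSONSerialization.data(withJSONObject: initialize)
message.append(0x0A) // messages are newline-delimited JSON over stdio
input.fileHandleForWriting.write(message)
```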
New AI Providers
BoltAI now supports two new AI service providers: GitHub Copilot and Jina DeepSearch. Note: an active Copilot subscription is required.
OpenAI's new models
This release adds support for the latest models: o3, o4-mini, and the GPT-4.1 model family.
Note that to stream the o3 model, you will need to verify your organization. Alternatively, you can disable streaming for o3 in Settings > Advanced > Feature Flags.
More customizations
Go to Settings > Appearance to personalize your app. You can now customize the App Sidebar and Chat Input Box. Enjoy.
See you in the next update 👋
## v1.33.0

- New: Added support for Claude 3.7 Sonnet
- Improved: Added support for reasoning models on OpenRouter
- Improved: Better cache breakpoint handling for Anthropic models
- Other bug fixes and improvements
What's new?
The most notable change in this release is support for Anthropic's new Claude 3.7 model. This new model allows you to set a custom "thinking budget" in tokens. You can decide how hard the model thinks.
BoltAI simplifies this by using the same Reasoning Effort setting:
- Low: 4K thinking tokens
- Medium: 16k thinking tokens
- High: 32k thinking tokens
It's recommended to start with medium (16k tokens) as it's the optimal setting when using Claude 3.7 extended thinking.
## v1.32.3 (build 150)

- New: Added support for GitHub Marketplace, Pico AI Homelab (MLX), and Cerebras AI.
- New: Fetch model lists for Custom AI services.
- Improved: o1 models now support streaming.
- Improved: Copy reasoning content to clipboard.
- Improved: Better error handling for AI services.
- Improved: Better Advanced Voice Mode.
## v1.32.0 (build 146)

- New: Added support for OpenAI's reasoning_effort parameter
- New: Added support for OpenRouter's reasoning content.
- Fixed: Projects not showing all AI models.
## v1.31.0 (build 144)

- New: Added support for citations when using Perplexity or Web Search AI Plugins.
- New: Show AI Plugin input params if available.
- New: Chats within projects now support drag-and-drop and other operations.
- Fixed: o1 doesn't work with custom GPT parameters.
- Fixed: Cannot set up a new Anthropic service