Bug Reports

BoltAI Does Not Respect XDG Base Directory Spec
On *NIX systems there is an open specification, the XDG Base Directory Specification (https://specifications.freedesktop.org/basedir-spec/latest/), that defines a standard set of directories applications should use for a user's configuration, state, and data files instead of cluttering the home directory with yet another dotdir, which may or may not contain sensitive data.

Currently, MCP configuration on macOS gets dropped in ~/.boltai regardless of whether a preferred XDG location exists. A good number of apps fall back to a home dotdir when no known XDG location is present on the machine, and it would be nice if BoltAI at least attempted to read/store its data in the spec's location before writing to ~/.boltai.

The most fitting place to check first for a user's MCP config would be ~/.local/share/boltai/mcp.json ($XDG_DATA_HOME/boltai/mcp.json), which conforms with the following: "$XDG_DATA_HOME defines the base directory relative to which user-specific data files should be stored. If $XDG_DATA_HOME is either not set or empty, a default equal to $HOME/.local/share should be used."

$XDG_DATA_HOME (~/.local/share/boltai) is preferred over $XDG_CONFIG_HOME (~/.config/boltai) because it is common for people to keep their config directory in a git repository and share it publicly on something like GitHub. Because the MCP file will probably contain tokens/auth, it is better treated as private "user data" than as a public config file.
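A minimal sketch of that lookup order, assuming a Swift implementation (the function name, file layout, and fallback behavior here are illustrative, not BoltAI's actual code):

    import Foundation

    // Resolve where to read/write mcp.json: prefer $XDG_DATA_HOME (or its
    // ~/.local/share default per the spec) and only fall back to the legacy
    // ~/.boltai dotdir so existing installs keep working.
    func mcpConfigURL() -> URL {
        let fm = FileManager.default
        let home = fm.homeDirectoryForCurrentUser

        // Per the spec: use $XDG_DATA_HOME, or $HOME/.local/share when unset or empty.
        let dataHome: URL
        if let env = ProcessInfo.processInfo.environment["XDG_DATA_HOME"], !env.isEmpty {
            dataHome = URL(fileURLWithPath: env, isDirectory: true)
        } else {
            dataHome = home.appendingPathComponent(".local/share", isDirectory: true)
        }

        let xdgLocation = dataHome.appendingPathComponent("boltai/mcp.json")
        let legacyLocation = home.appendingPathComponent(".boltai/mcp.json")

        // Prefer the XDG location; keep honoring an existing ~/.boltai/mcp.json
        // only when no XDG file has been created yet.
        if fm.fileExists(atPath: xdgLocation.path) || !fm.fileExists(atPath: legacyLocation.path) {
            return xdgLocation
        }
        return legacyLocation
    }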
under review

Cannot make more than 5 tool calls per request without erroring out
Using 1.36.4, tried with both Anthropic and OpenAI models.

I tried out the BoltAI MCP capability with an MCP server that crawls a graph database and retrieves information based on what the user asks for. This usually takes a few calls for the LLM to "feel around" and see what data is out there and how it is structured. I consistently found that the provider returns an error after the 5th tool call, right when the LLM seems to get its bearings.

To abstract and isolate the issue, I set up the memory knowledge graph MCP server and instructed the model to fill the memory with 20 fun facts about a random topic. It consistently fails on the 6th call because the previous tool calls aren't provided in the request. Here are two instances of this happening:

MCP config:

    {
      "mcpServers": {
        "memory": {
          "args": ["-y", "@modelcontextprotocol/server-memory"],
          "command": "npx",
          "env": {
            "MEMORY_FILE_PATH": "/Users/zack/.local/state/mcp/memory.json"
          }
        }
      }
    }

OpenAI error message:

    Sorry, OpenAI has rejected your request. Here is the error message: Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.

Anthropic error message:

    Sorry, your request has been rejected. Here is the error message: {"error":{"message":"Provider returned error","code":400,"metadata":{"raw":"{"message":"messages.0.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_018SdtQtQUurgmV1ECHwXXVD. Each tool_result block must have a corresponding tool_use block in the previous message."}","provider_name":"Amazon Bedrock"}},"user_id":"redacted"}
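For reference, a minimal sketch of the message ordering the providers expect when the history is rebuilt for the next request. The identifiers, tool name, and dictionary-based payload here are illustrative assumptions, not BoltAI's internal types:

    // Every "tool" message must be preceded by the assistant message that issued
    // the matching tool call; dropping earlier assistant/tool pairs from the
    // history is what produces the 400 errors quoted above.
    let toolCall: [String: Any] = [
        "id": "call_1",
        "type": "function",
        "function": ["name": "create_entities", "arguments": "{\"entities\": []}"]
    ]

    let messages: [[String: Any]] = [
        ["role": "user", "content": "Fill the memory with 20 fun facts about a random topic."],
        // The assistant turn that issued the tool call must remain in the history...
        ["role": "assistant", "content": "", "tool_calls": [toolCall]],
        // ...so this tool result can point back to it via tool_call_id.
        ["role": "tool", "tool_call_id": "call_1", "content": "ok"]
        // The same pairing has to be preserved for calls 2..N; if the pairs for
        // calls 1-5 are pruned before the 6th call, the provider rejects the request.
    ]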