Cannot make more than 5 tool calls per request without erroring out
in progress
Zack
Using 1.36.4 and tried with Anthropic and OpenAI models:
I tried out the BoltAI MCP capability with an MCP server that crawls a graph database and retrieves information based on what the user asks for. This usually takes a few calls for the LLM to "feel around" and see what data is out there and how it is structured. I found that the provider sends an error message after the 5th tool call, right when the LLM seems to get its bearings.
To abstract and isolate the issue, I set up the memory knowledge graph MCP server and instructed the model to fill the memory with 20 fun facts about a random topic. It consistently fails on the 6th call because the previous tool calls aren't included in the request. Here are two instances of this happening:
MCP:
{
  "mcpServers": {
    "memory": {
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "command": "npx",
      "env": {
        "MEMORY_FILE_PATH": "/Users/zack/.local/state/mcp/memory.json"
      }
    }
  }
}
OpenAI Error Message:
Sorry, OpenAI has rejected your request. Here is the error message: Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.
Anthropic Error Message:
Sorry, your request has been rejected. Here is the error message: {"error":{"message":"Provider returned error","code":400,"metadata":{"raw":"{"message":"messages.0.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_018SdtQtQUurgmV1ECHwXXVD. Each tool_result block must have a corresponding tool_use block in the previous message."}","provider_name":"Amazon Bedrock"}},"user_id":"redacted"}
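For anyone comparing the two, both providers are complaining about the same constraint: a 'tool' message is only valid when the immediately preceding assistant message contains the matching tool call. A rough illustration of the message shapes (OpenAI-style Chat Completions format, not the exact payload BoltAI builds):

# Valid: the assistant's tool_calls entry is still in the context,
# so the tool result has a parent to answer.
valid = [
    {"role": "user", "content": "Fill the memory with 20 fun facts"},
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "create_entities", "arguments": "{...}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "ok"},
]

# Invalid: the assistant tool_calls entry was trimmed out, so the tool
# result no longer answers anything and the provider returns a 400.
invalid = [
    {"role": "user", "content": "Fill the memory with 20 fun facts"},
    {"role": "tool", "tool_call_id": "call_1", "content": "ok"},
]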
Daniel Nguyen
in progress
Zack
Bump - this feels very limiting since tool call limits are a lot higher on other chat clients
Daniel Nguyen
Zack: Sorry for the delay. Can you share your Context Limit setting? It's likely the culprit. BoltAI treats each tool call / tool result as an entry in the context. Try increasing it to 50 to see if that fixes it.
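For context, a simplified sketch of why a low limit can break a request halfway through (illustrative Python, not the actual BoltAI code):

def trim_context(messages, limit):
    # Current behavior, roughly: keep only the last `limit` entries, where
    # every tool call and every tool result counts as its own entry.
    return messages[-limit:]

# With a small limit the window can start on a "tool" message whose matching
# assistant "tool_calls" entry was just dropped, which is exactly what the
# OpenAI and Bedrock errors above are complaining about.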
I reworked this in v2 and will release soon!
Zack
Daniel Nguyen Ah, that looks to be the case. My only challenge with the Context Limit setting affecting tool calls is that it's a bit unintuitive: not deliberately setting that value higher will break an ongoing request while it is calling tools. In my opinion, it should check/trim the context window with each outbound user request, then let the context grow as much as it needs to finish the task the user asked for, and trim again with the next user message. That way there is a balance between doing what the user wanted and keeping the context window small where it can.
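Just to sketch what I mean (rough Python pseudocode, not BoltAI's actual API; call_model and execute_tool are made-up stand-ins):

def trim_on_user_turn(messages, limit):
    # Proposed behavior: trim only when a new user message goes out, and
    # never let the window start on an orphaned tool result.
    start = max(len(messages) - limit, 0)
    while start > 0 and messages[start]["role"] == "tool":
        start -= 1  # walk back so every tool result keeps its tool_calls parent
    return messages[start:]

def run_tool_loop(messages, call_model, execute_tool, limit=50):
    # Trim once at the start of the user turn...
    messages = trim_on_user_turn(messages, limit)
    # ...then let the context grow as much as the task needs.
    while True:
        reply = call_model(messages)
        messages.append(reply)
        if not reply.get("tool_calls"):
            return messages
        for call in reply["tool_calls"]:
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": execute_tool(call)})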