Using BoltAI 1.36.4, tested with both Anthropic and OpenAI models:
I tried out the BoltAI MCP capability with an MCP server that crawls a graph database and retrieves information based on what the user asks for. It usually takes the LLM a few calls to "feel around" and see what data is out there and how it is structured. I consistently found that the provider returns an error after the 5th tool call, right when the LLM seems to get its bearings.
To abstract and isolate the issue, I set up the memory knowledge graph MCP server and instructed the model to fill the memory with 20 fun facts about a random topic. It consistently fails on the 6th call because the previous tool calls aren't included in the request. Here are two instances of this happening:
MCP:
{
  "mcpServers" : {
    "memory" : {
      "args" : [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "command" : "npx",
      "env" : {
        "MEMORY_FILE_PATH" : "/Users/zack/.local/state/mcp/memory.json"
      }
    }
  }
}
OpenAI Error Message:
Sorry, OpenAI has rejected your request. Here is the error message: Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.
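For context, this error indicates the request history is malformed on the client side, not a model failure. In the OpenAI Chat Completions format, every message with role "tool" must immediately follow an assistant message whose `tool_calls` array contains a matching `id`. A minimal sketch (the validator function and the `call_1`/`call_6` ids are hypothetical, for illustration only) of what the provider appears to be checking:

```python
# Sketch of the ordering rule OpenAI enforces: a role "tool" message is
# only valid if the assistant message before it issued a tool_call with
# the same id. If the client drops that assistant turn from the history,
# the provider rejects the request with a 400.

def validate_history(messages):
    """Return the index of the first orphaned tool message, or None if valid."""
    known_ids = set()
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant":
            known_ids = {tc["id"] for tc in msg.get("tool_calls", [])}
        elif msg["role"] == "tool":
            if msg["tool_call_id"] not in known_ids:
                return i  # tool result with no matching tool_calls -> 400
        else:
            known_ids = set()
    return None

# A valid pair: the assistant issues the call, the tool answers it.
ok = [
    {"role": "user", "content": "Store a fun fact."},
    {"role": "assistant", "tool_calls": [{"id": "call_1", "type": "function",
        "function": {"name": "create_entities", "arguments": "{}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "done"},
]

# What the failing 6th request seems to look like: the assistant turn
# carrying the matching tool_calls was dropped from the history.
broken = [
    {"role": "user", "content": "Store a fun fact."},
    {"role": "tool", "tool_call_id": "call_6", "content": "done"},
]

print(validate_history(ok), validate_history(broken))  # None 1
```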
Anthropic Error Message:
Sorry, your request has been rejected. Here is the error message: {"error":{"message":"Provider returned error","code":400,"metadata":{"raw":"{"message":"messages.0.content.0: unexpected tool_use_id found in tool_result blocks: toolu_vrtx_018SdtQtQUurgmV1ECHwXXVD. Each tool_result block must have a corresponding tool_use block in the previous message."}","provider_name":"Amazon Bedrock"}},"user_id":"redacted"}
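Anthropic enforces the same constraint with different message shapes: each `tool_result` content block in a user message must correspond to a `tool_use` block in the immediately preceding assistant message. A sketch of that check (the function and the `toolu_example` id are hypothetical, for illustration):

```python
# Sketch of the pairing rule in the Anthropic Messages API: a tool_result
# block is orphaned if the previous assistant message has no tool_use
# block with the same id, which is exactly the 400 quoted above.

def find_orphan_tool_results(messages):
    """Return the ids of tool_result blocks with no matching tool_use."""
    orphans = []
    prev_tool_use_ids = set()
    for msg in messages:
        blocks = msg["content"] if isinstance(msg["content"], list) else []
        if msg["role"] == "user":
            for b in blocks:
                if (b.get("type") == "tool_result"
                        and b["tool_use_id"] not in prev_tool_use_ids):
                    orphans.append(b["tool_use_id"])
        prev_tool_use_ids = {b["id"] for b in blocks if b.get("type") == "tool_use"}
    return orphans

# History truncated as in the bug: the assistant turn containing the
# matching tool_use block was dropped, so the first message in the
# request starts with an unexplained tool_result.
broken = [
    {"role": "user", "content": [
        {"type": "tool_result", "tool_use_id": "toolu_example", "content": "ok"},
    ]},
]

print(find_orphan_tool_results(broken))  # ['toolu_example']
```

Both checks point at the same client-side cause: after the 5th call, BoltAI appears to send the accumulated tool results without the assistant turns that requested them.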