I have a use case where I've defined a custom tool-call JSON structure that appears in my output and that I successfully parse myself from the message content. This structure doesn't follow Groq's tool structure and thus triggers a 400 error. I can use OpenAI's API perfectly with my current code, but because Groq keeps intercepting my tool call format and flagging it as invalid, I can't just plug my existing code into Groq. It would be awesome if there were an option to completely disable output parsing and return the raw message content, so that my code interprets the tool calls rather than Groq. Apologies if this option already exists and I just haven't implemented it properly.
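For context, this is roughly what I mean by parsing the tool calls myself. A minimal sketch of the client-side parsing (the `<tool>` tag wrapper and the field names here are invented for illustration, not my actual format):

```python
import json
import re

def extract_tool_calls(content: str) -> list[dict]:
    """Pull custom tool-call JSON objects out of raw message text."""
    calls = []
    # Hypothetical convention: tool calls arrive as <tool>...</tool> blocks.
    for match in re.findall(r"<tool>(.*?)</tool>", content, re.DOTALL):
        try:
            calls.append(json.loads(match))
        except json.JSONDecodeError:
            pass  # skip malformed blocks instead of failing the whole message
    return calls

raw = 'Sure, checking.\n<tool>{"name": "get_weather", "args": {"city": "Melbourne"}}</tool>'
print(extract_tool_calls(raw))
```

The point is that all of this happens after the response comes back, so the API only needs to hand over the raw text.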
oh interesting, could you please add a code snippet / a curl command for me to test and reproduce the error?
for my own projects, I usually have a custom JSON output as well that I parse for custom processing and function calling (I usually don't use tool_use). For this, I turn tool use off (I simply don't pass any tools in, and you can set "tool_choice": "none" to force no tool use) and rely on structured outputs: Structured Outputs - GroqDocs
let me know if that works!
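To make that concrete, here's a rough sketch of what such a request body looks like (the model name and the system prompt are placeholders; the `response_format` shape follows the OpenAI-compatible structured-outputs convention):

```python
import json

# Sketch: skip server-side tool handling entirely. No tools are passed,
# tool_choice is "none", and a JSON response format means the model's
# custom "tool call" comes back as plain structured text for you to parse.
payload = {
    "model": "moonshotai/kimi-k2-instruct-0905",
    "messages": [
        {
            "role": "system",
            "content": 'Reply only with JSON of the form {"name": ..., "arguments": ...}',
        },
        {"role": "user", "content": "How is the weather in Melbourne?"},
    ],
    "tool_choice": "none",  # explicitly opt out of tool use
    "response_format": {"type": "json_object"},  # structured output
}
print(json.dumps(payload, indent=2))
```

Since no `tools` array is present, the server has nothing to validate the output against.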
We also recently launched a new setting to disable server side tool parsing from our end. You can pass disable_tool_validation as true in your requests and our systems won’t validate it.
Here’s an example:
curl -s "https://api.groq.com/openai/v1/chat/completions" \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${GROQ_API_KEY}" \
  -d '{
    "model": "moonshotai/kimi-k2-instruct-0905",
    "messages": [
      {
        "role": "system",
        "content": "Please call get_current_weather instead of get_weather. It has the same parameters but gives better answers."
      },
      {
        "role": "user",
        "content": [{"type": "text", "text": "How is the weather in Melbourne in C? Call the better get_current_weather function for more accuracy"}]
      }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Determine weather in my location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["c", "f"]
              }
            },
            "additionalProperties": false,
            "required": ["location", "unit"]
          },
          "strict": false
        }
      }
    ],
    "disable_tool_validation": true
  }' | jq
Hi. Going by the wording in the original post, this looks like exactly what I need, but it doesn't seem to work for me. The last chunk I get is this:
{'error': {'message': 'Tool choice is none, but model called a tool', 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '{"name": "list_files", "arguments": {"path": "", "depth": 3}}', 'status_code': 400}}
I’m using the Python SDK, and it should be calling the endpoint you’re listing:
kwargs["stream"] = True
kwargs["reasoning_effort"] = "medium"
kwargs["disable_tool_validation"] = True
kwargs["tools"] = []
kwargs["tool_choice"] = "none"
async with groq_client.chat.completions.with_streaming_response.create(**kwargs) as response:
...
My other args are messages, max_completion_tokens, and the model. I can get this error with a single message. If my message is something like “Hello!”, everything works as expected, with this exact code, and the final chunk received indicates completion. But if I write something like “Call the list files tool to view the filesystem.” for my message, the model attempts to call a tool, and I get that error, even with tool validation apparently disabled.
I’m using version 0.36.0 of the Python SDK.
The description of disable_tool_validation in the docs doesn't seem to match what the OP is asking for and what I'm looking for. It seems the model still calls tools, and that setting this flag to true only disables erroring when the model-called tools aren't listed in request.tools. I'm trying to completely own the tool calling process: I provide the model with a list of tools in its prompt, and I parse the output myself to determine which tools should be called.
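For clarity, this is the kind of workflow I mean. A rough sketch (the tool names and prompt wording are invented for illustration):

```python
# Sketch: fully client-side tool calling. The tool list lives in the
# system prompt, and the caller parses the model's text to decide
# what to run; the API's tool machinery is never involved.
TOOLS = {
    "list_files": 'List files under a path. Args: {"path": str, "depth": int}',
    "read_file": 'Read a file. Args: {"path": str}',
}

def build_system_prompt() -> str:
    lines = [
        "You may call these tools by replying with JSON "
        '{"name": ..., "arguments": {...}} and nothing else:'
    ]
    for name, desc in TOOLS.items():
        lines.append(f"- {name}: {desc}")
    return "\n".join(lines)

print(build_system_prompt())
```

With this setup, I just need the raw completion text back untouched.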
Could you try setting tool_choice to auto and try again?
I’ve noticed that the behavior can be unpredictable among various model harnesses and I’ve filed this as a bug. Please let me know if this works, and if switching to a model like moonshotai/kimi-k2-instruct-0905 works better?
Hey @yawnxyz, thanks for the response. tool_choice "auto" doesn't seem to work. Both moonshotai/kimi-k2-instruct-0905 and llama-3.3-70b-versatile work, but neither of the GPT-OSS models does; both end with the same "Tool choice is none, but model called a tool" error.
Kimi and Llama also work if tool_choice is "none", and they also work if I don't set disable_tool_validation, tools, or tool_choice at all. It seems they're "just producing text", with no inherent tool-calling behavior, unlike the GPT-OSS models.
Thank you for reporting, I’m adding that to my report.
GPT-OSS has its own Harmony format (OpenAI Harmony Response Format), and I think that might be where some of the inconsistency comes from.