Hey guys, we are trying to integrate gpt-oss-120b into our platform through LangChain's ChatGroq, but unfortunately tool binding is not working as we expect. The model completely ignores the tools and never calls either of them. We can see in LangSmith that the tools are there to be called, but they are ignored. The model is instructed to use the tools both in the prompt and in bind_tools(). We tried the same setup with r1 and it worked successfully.
Anyone having similar issues with gpt-oss-120b regarding tool binding?
Hi there, sorry to hear that you're running into issues with tool calling. We just shipped some fixes for tool calling that should make it a lot more reliable. Can you try again and let me know how it goes?
Hi @benank, thanks for the reply. I just tried a couple of times this morning, but sadly the results are the same as yesterday. The mandatory requirement to select a tool (LangChain's tool_choice "any") is still ignored and no tool is selected.
Hi,
I just ran our internal test suite against gpt-oss and I am having similar issues. The tool_choice API parameter seems to be completely ignored by the model.
When tool_choice is "none", the model still calls tools.
When forced to call one specific function, it calls a different one. I can provide the prompts if it would help, but it is really easy to reproduce and happens every time.
You know what, here are the requests:
Here it ignores the function specified in tool_choice:
{
"model": "openai/gpt-oss-120b",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "What is the status of my order 66DBR7RRF?" }
],
"max_tokens": 2048,
"stream": false,
"tools": [
{
"type": "function",
"function": {
"description": "Get order state based on order number",
"name": "get_order_state",
"parameters": {
"type": "object",
"required": ["identifier", "identifier_type"],
"properties": {
"identifier": {
"type": "string",
"description": "Order identifier value"
},
"identifier_type": {
"type": "string",
"format": "enum",
"enum": ["order_number", "invoice_number"]
}
}
}
}
},
{
"type": "function",
"function": {
"description": "Retrieves general information that can be presented to the user.",
"name": "search_knowledge_base",
"parameters": {
"type": "object",
"required": ["query"],
"properties": {
"query": {
"type": "string",
"description": "The query to search the knowledge base for."
}
}
}
}
}
],
"tool_choice": {
"type": "function",
"function": { "name": "search_knowledge_base" }
},
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"reasoning_effort": "low"
}
Here it ignores the "required" tool_choice value:
{
"model": "openai/gpt-oss-120b",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "What is the status of my order 66DBR7RRF?" },
{
"role": "assistant",
"tool_calls": [
{
"id": "fc_aa461f11-4747-4032-924d-b66a0f5f389b",
"type": "function",
"function": {
"name": "get_order_state",
"arguments": "{\"order_number\":\"66DBR7RRF\"}"
}
}
]
},
{
"role": "tool",
"tool_call_id": "fc_aa461f11-4747-4032-924d-b66a0f5f389b",
"content": "Nothing found."
}
],
"max_tokens": 512,
"stream": false,
"tools": [
{
"type": "function",
"function": {
"description": "Get order state based on order number",
"name": "get_order_state",
"parameters": {
"type": "object",
"required": ["identifier", "identifier_type"],
"properties": {
"identifier": {
"type": "string",
"description": "Order identifier value"
},
"identifier_type": {
"type": "string",
"format": "enum",
"enum": ["order_number", "invoice_number"]
}
}
}
}
},
{
"type": "function",
"function": {
"description": "Retrieves general information that can be presented to the user.",
"name": "search_knowledge_base",
"parameters": {
"type": "object",
"required": ["query"],
"properties": {
"query": {
"type": "string",
"description": "The query to search the knowledge base for."
}
}
}
}
}
],
"tool_choice": "required",
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"reasoning_effort": "low"
}
Thanks for these detailed requests! I am able to reproduce the issues that you're seeing. I've been working with our team on these for the last few days, and I have some updates to share.
Issue #1: Tool choice set to a specific function
Our team doesn't have any fixes yet for this issue. We've done a lot of testing on this and it seems like a model-specific issue. While we could add some changes to improve this, we'd need to benchmark them and it might involve modifying the system prompt. One way to work around it would be to adjust the system prompt to include something like: "You MUST call this tool: search_knowledge_base". However, we're still investigating and I'll let you know if we have any updates here.
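For anyone who wants to try that workaround in the meantime, here is a minimal sketch of the request shape: the forced tool is repeated in the system prompt in addition to tool_choice. The wording is just the example from this thread, and whether it actually changes model behavior is exactly what is still being investigated; endpoint and auth handling are omitted.

```python
import json

# Sketch of the suggested workaround: name the required tool in the
# system prompt as well as in tool_choice. Tools array omitted for brevity.
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a helpful assistant. "
                "You MUST call this tool: search_knowledge_base"
            ),
        },
        {"role": "user", "content": "What is the status of my order 66DBR7RRF?"},
    ],
    "tool_choice": {
        "type": "function",
        "function": {"name": "search_knowledge_base"},
    },
}

body = json.dumps(payload)
```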
Issue #2: Tool choice "required" is being ignored in certain cases
We recently shipped some fixes and it looks like the second issue has been resolved - I am not able to reproduce it anymore. It correctly calls a tool with the request you provided. Are you still seeing any issues?
Hi,
and thanks for the update.
I am still seeing the bug with "required"; here is another request body:
{
"model": "openai/gpt-oss-120b",
"messages": [
{ "role": "system", "content": "You are a helpful assistant." },
{ "role": "user", "content": "What is the status of my order 66DBR7RRF?" },
{
"role": "assistant",
"tool_calls": [
{
"id": "fc_662bd9df-93b8-48a4-9b1e-b272a7d03935",
"type": "function",
"function": {
"name": "get_order_state",
"arguments": "{\"identifier\":\"66DBR7RRF\",\"identifier_type\":\"order_number\"}"
}
}
]
},
{
"role": "tool",
"tool_call_id": "fc_662bd9df-93b8-48a4-9b1e-b272a7d03935",
"content": "Nothing found."
}
],
"max_tokens": 512,
"stream": false,
"tools": [
{
"type": "function",
"function": {
"description": "Get order state based on order number",
"name": "get_order_state",
"parameters": {
"type": "object",
"required": ["identifier", "identifier_type"],
"properties": {
"identifier": {
"type": "string",
"description": "Order identifier value"
},
"identifier_type": {
"type": "string",
"format": "enum",
"enum": ["order_number", "invoice_number"]
}
}
}
}
},
{
"type": "function",
"function": {
"description": "Retrieves general information that can be presented to the user.",
"name": "search_knowledge_base",
"parameters": {
"type": "object",
"required": ["query"],
"properties": {
"query": {
"type": "string",
"description": "The query to search the knowledge base for."
}
}
}
}
}
],
"tool_choice": "required",
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"reasoning_effort": "low"
}
I observe tool_choice=required being ignored; I frequently don't get a tool call, just a plain text response. Is there any update on this issue?
Any updates here? OpenAI models are generally reliable for tool calling, so is this an implementation issue, or is the model itself unable to initiate tool calls?
I have given up on groq because of this issue. I need tool_choice=required to work!
PLEASE please fix it, I love the speed of groq!
Hi everyone, thanks for your patience on this issue.
I've been investigating this over the last week and it looks like we have a bug in our handling of tool_choice=required for the GPT-OSS models. On all other models, we return a 400 error when the model doesn't call a tool when it's required. On GPT-OSS models, we don't return an error, but we should.
I understand that this doesn't solve the issue where the model doesn't call tools even when tool_choice is required. A model's ability to call tools is based on its post-training and user prompting. On Groq's side, we aren't able to force a model to call a tool/function, even when you set tool_choice=required.
In the most recent example above (thank you so much for these reproductions - it makes it much easier to investigate), this is what the model sees:
- It sees the entire prompt given to it. This includes the system prompt, previous chat messages, tool calls and tool call results, and tool definitions.
- The system prompt has the most weight here, but it doesn't instruct the model on how to use tools or how to think about handling user input
- The model sees that it already tried to use a tool to find the order state, but nothing was found
Given all this information, the model doesn't see any reason to call another tool. It already called the only relevant tool and got a result. There are no prompts or instructions that tell the model that it should either search again (call the same tool) or call the search_knowledge_base tool. If you are expecting the model to search the knowledge base, it needs to know when to do so. Right now, it doesn't know that you might expect it to search more, or to try getting the order state again. It doesn't have any information on when to do this.
My recommendation for this specific case would be:
- Add instructions in the system prompt to tell the model what to do when the order status isn't found. What are you expecting it to do? Should it call the tool again, or call a different tool? Be explicit.
- Add more details to your tool definitions. Usually three sentences is a good length. This will tell the model more about each tool - what it does, when to use it, and the inputs that are expected. For example, you might modify the knowledge search tool to include something like "Use this tool if you are unable to find order state."
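As an illustration of that second point, a richer definition for the knowledge-base tool from the requests above might look like this. The wording is mine, not an official recommendation; adjust it to your own domain:

```python
# A more descriptive tool definition: what it does, when to use it,
# and what input is expected (roughly the "three sentences" guideline).
search_knowledge_base = {
    "type": "function",
    "function": {
        "name": "search_knowledge_base",
        "description": (
            "Retrieves general information from the knowledge base that can "
            "be presented to the user. Use this tool if you are unable to "
            "find the order state with get_order_state, or when the user "
            "asks a general question. The query should be a short "
            "natural-language phrase describing what the user wants to know."
        ),
        "parameters": {
            "type": "object",
            "required": ["query"],
            "properties": {
                "query": {
                    "type": "string",
                    "description": "The query to search the knowledge base for.",
                }
            },
        },
    },
}
```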
Please let me know if you have any other questions or example requests you'd like me to look into! I understand that this can be frustrating when you expect the model to call tools and it doesn't. There's a lot going on here under the hood - setting tool_choice=required is more like "please call a tool", and our backend should validate that a tool was called, but will not force the model to call a tool. A lot of this comes down to the model's capability to call tools and the prompting structure.
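Given that tool_choice=required is effectively "please call a tool" rather than a hard guarantee, one defensive pattern on the client side is to retry when the response contains no tool call and fall back to the text answer after a few attempts. This is purely illustrative; chat_fn stands in for whatever function makes your actual API call:

```python
def call_with_tool_retry(chat_fn, request, max_attempts=3):
    """Retry a chat request until the response contains a tool call.

    chat_fn is a placeholder for your real API call; it should return a
    dict shaped like a chat completion message. If the model never calls
    a tool, the last (text-only) response is returned as a fallback.
    """
    last = None
    for _ in range(max_attempts):
        last = chat_fn(request)
        if last.get("tool_calls"):
            return last
    return last

# Usage with a fake caller that only produces a tool call on the
# second attempt, simulating the flaky behavior described above:
responses = iter([
    {"content": "plain text, no tool call"},
    {"tool_calls": [{"function": {"name": "get_order_state"}}]},
])
result = call_with_tool_retry(lambda req: next(responses),
                              {"tool_choice": "required"})
```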
Hi everyone!
Structured Outputs is failing for me (getting 400) on GPT-OSS 120B and GPT-OSS 20B. I am getting: "Tool choice is none, but model called a tool". It was working on 21st Aug but failing on 22nd Aug.
Summary (regression?)
Starting August 22, 2025 (IST), all of my LangChain agents that use Groq Structured Outputs (response_format: { type: "json_schema" }) with openai/gpt-oss-120b and openai/gpt-oss-20b began failing with HTTP 400:
BadRequestError: Error code: 400
{'error': {
'message': 'Tool choice is none, but model called a tool',
'type': 'invalid_request_error',
'code': 'tool_use_failed',
'failed_generation': '{"name": "<|constrain|>schema", "arguments": ...}'
}}
This looks like the platform is trying to invoke an internal tool (the schema constraint tool) even though the Structured Outputs docs explicitly say "streaming and tool use are not currently supported". (console.groq.com)
Separately, the API reference documents that tool_choice defaults to none when no tools are present, which seems to collide with the internal tool call shown above. (console.groq.com)
Environment
- LangChain: langchain, langchain-core, langchain-groq (latest)
- Provider: Groq
- Model: openai/gpt-oss-120b (listed as supporting Structured Outputs) (console.groq.com)
- No streaming, no user-declared tools in this step, deterministic seed set.
Minimal repro (LangChain, Python)
from langchain_groq import ChatGroq
from langchain_core.runnables import RunnableSequence
from langchain_core.output_parsers import PydanticOutputParser
from pydantic import BaseModel
from langchain_core.prompts import PromptTemplate

class Output(BaseModel):
    capital: str

prompt_template = PromptTemplate.from_template("What is the Capital of {country}?")

model = ChatGroq(
    model="openai/gpt-oss-120b",
    temperature=0.0,
    max_retries=3,
    reasoning_format="parsed",
    model_kwargs={"seed": 0},
)

agent: RunnableSequence = (
    prompt_template
    | model.bind(
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "schema", "schema": Output.model_json_schema()},
        },
        reasoning_effort="low",
        # Tried tool_choice="auto" and "required" as well - still fails
    )
    | PydanticOutputParser(pydantic_object=Output)
)

output = agent.invoke({"country": "India"})
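For anyone debugging without LangChain: my understanding (a reconstruction on my part, not captured from the wire) is that this boils down to a raw chat-completions body roughly like the following, with response_format set and no user-declared tools, which fails the same way:

```python
import json

# Approximate raw request body for the repro above (my reconstruction):
# Structured Outputs via response_format, no "tools" key at all.
payload = {
    "model": "openai/gpt-oss-120b",
    "messages": [
        {"role": "user", "content": "What is the Capital of India?"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "schema",
            "schema": {
                "type": "object",
                "properties": {"capital": {"type": "string"}},
                "required": ["capital"],
            },
        },
    },
    "reasoning_effort": "low",
    "seed": 0,
    "temperature": 0.0,
}

body = json.dumps(payload)
```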
Observed vs expected
- Expected: Structured output that matches the schema (or a schema-mismatch error), with no tool calls, because the SO docs say tool use isn't supported. (console.groq.com)
- Observed: 400 error stating "Tool choice is none, but model called a tool" and showing a failed generation for "<|constrain|>schema".
What I've already tried
- Forcing tool_choice="auto" and "required" in the same .bind(...) call - still fails with the same 400.
- Removing reasoning_format and reasoning_effort - still fails.
- Swapping to JSON Object Mode (response_format={"type": "json_object"}) - still failing.
- Using LangChain's .with_structured_output(..., method="json_mode") - still failing. (LangChain)
Why I suspect a platform bug
- SO docs: "tool use not supported with Structured Outputs," yet the backend is attempting a tool call for schema enforcement. (console.groq.com)
- API docs: default tool_choice is none when no tools are provided. If the backend internally calls a tool in this mode, it explains the exact error. (console.groq.com)
- A recent thread reports tool_choice being ignored on GPT-OSS models - possibly related to the same path. (community.groq.com)
Request for guidance
- Can Groq confirm this regression on openai/gpt-oss-120b Structured Outputs as of Aug 22, 2025?
- Is the internal schema-enforcement tool expected to run for SO even though docs say tool use isn't supported? If so, what tool_choice should clients set?
- Is there a recommended temporary workaround on GPT-OSS?
- Any ETA or changelog note I should track for a fix? (I'm monitoring the docs & changelog.) (console.groq.com)
References
- Groq Structured Outputs doc (notes limitations + supported models). (console.groq.com)
- Groq API Reference (behavior of tool_choice defaults). (console.groq.com)
- Community thread about tool_choice issues on GPT-OSS models. (community.groq.com)
If Groq engineers need more specifics (request IDs, timestamps, org/project), I can share them privately. Thanks!
I've been running into the exact same problem since this morning. Everything had been working fine since launch, but starting today I'm getting the following error:
BadRequestError: Error code: 400 {'error': { 'message': 'Tool choice is none, but model called a tool', 'type': 'invalid_request_error', 'code': 'tool_use_failed', 'failed_generation': '{"name": "<|constrain|>schema", "arguments": ...}' }}
I'm using the OpenAI SDK.
Just wanted to report the same issue: all our Groq structured output use cases are down, since it does not generate structured outputs and only fails with HTTP 400.
Also reporting the same issue - structured outputs is failing with GPT-OSS 120B and GPT-OSS 20B with the same errors as described above (400, Tool choice is none, but model called a tool).
Reporting the same issue: GPT-OSS-120B gives the same error. Tools are None and tool_choice is "none", and it is failing.
Getting the same error for structured output generation: `400 Tool choice is none, but model called a tool`.
Tried both the Groq SDK and the OpenAI SDK.
I have set tool_choice to "auto" (even though this shouldn't be needed, per Groq's documentation).
Getting a ton of the following error:
{
"request_id": "req_01k397em1jez3rs0gd7gcevbw2",
"created_at": "2025-08-22T15:48:11.442Z",
"error": {
"message": "Tool choice is none, but model called a tool",
"type": "invalid_request_error",
"param": "",
"code": "tool_use_failed"
}
}
My production use case is getting hammered by this bug.
I'd estimate that 2/3 to 3/4 of my requests are failing with this error.
Hi everyone, thank you for the detailed information and reproductions. I'm working with the team now to resolve this. I will post an update shortly.
We've identified the issue and are working on a fix now.
We have an active incident for this now: https://groqstatus.com/