OpenAI GPT-OSS models don't support structured output

https://console.groq.com/docs/structured-outputs

Feature request: support structured output (json_schema) when using the OpenAI-compatible API.

We just added support for this! Have fun!
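For anyone landing here later, this is roughly what a json_schema request to an OpenAI-compatible endpoint looks like. A minimal sketch: the model id, field names, and schema below are illustrative, so check the linked Groq docs for the authoritative request shape.

```python
# Sketch: build the "response_format" payload for a structured-output request
# against an OpenAI-compatible chat completions API. Schema is illustrative.
def build_response_format(name: str, schema: dict) -> dict:
    # strict json_schema response_format, as described in the Groq docs
    return {
        "type": "json_schema",
        "json_schema": {"name": name, "strict": True, "schema": schema},
    }

recipe_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "servings": {"type": "integer"},
    },
    "required": ["title", "servings"],
    "additionalProperties": False,
}

payload = {
    "model": "openai/gpt-oss-120b",  # illustrative model id
    "messages": [{"role": "user", "content": "Give me a recipe as JSON."}],
    "response_format": build_response_format("recipe", recipe_schema),
}
print(payload["response_format"]["type"])  # json_schema
```

The `payload` dict is what you would send as the JSON body of the chat completions call.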


When I use OpenAI OSS with Groq in LangGraph, is structured output supported?
Or am I doing something wrong on the Groq side? Both use the same code: with the OSS model it falls back to Pydantic parsing, while with GPT-4.1 it uses native structured output.

We just updated LangChain for json_schema support — langchain/libs/partners/groq/langchain_groq/chat_models.py at master · langchain-ai/langchain · GitHub
This part should show how json_schema takes precedence; could you try it out?

```
if method == "json_schema":
```

Some applications require that incompatible parameters (e.g., unsupported methods) be handled.

Can I define the output format the same way OpenAI does with structured outputs? For example, in LangGraph when I use `with_structured_output`, ChatGPT ensures the output is always in the specified format. But in the case above, we first extract the result and then try to parse the output, which can fail if the LLM doesn't produce the expected format.
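For reference, the usual LangChain pattern binds a Pydantic schema so the provider enforces the format server-side rather than parsing after the fact. A hedged sketch, where the model id and the `Answer` fields are illustrative (the commented lines show the shape of the call, not output I've verified against Groq):

```python
from pydantic import BaseModel

class Answer(BaseModel):
    summary: str
    confidence: float

# Illustrative usage (requires a real API key, so shown as comments):
# llm = ChatGroq(model="openai/gpt-oss-120b", groq_api_key="...")
# structured_llm = llm.with_structured_output(Answer, method="json_schema")
# result = structured_llm.invoke("Summarize ...")  # -> an Answer instance

# The JSON Schema that gets sent to the provider is derived from the model:
schema = Answer.model_json_schema()
print(sorted(schema["required"]))  # ['confidence', 'summary']
```

With `method="json_schema"` the schema is enforced at the API level, so you get back a validated `Answer` object instead of raw text you have to parse yourself.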

```
from langchain_openai import ChatOpenAI
from langchain_groq import ChatGroq

# --- OpenAI ---
if model_type.lower() == "openai":
    required_keys = ["model_name", "openai_api_key"]
    _check_required_keys(required_keys, kwargs)
    llm = ChatOpenAI(**kwargs)

# --- Groq ---
elif model_type.lower() == "groq":
    required_keys = ["model_name", "groq_api_key"]
    _check_required_keys(required_keys, kwargs)
    llm = ChatGroq(**kwargs)

# --- Unknown ---
else:
    raise ValueError(f"Unknown model_type: {model_type}")

if structured_schema:
    llm = llm.with_structured_output(structured_schema)
return llm
```

It works fine with OpenAI, but when I use Groq with the OpenAI open-source model it fails.
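Given the json_schema precedence change mentioned earlier in the thread, one thing worth checking is which `method` the `with_structured_output` call falls back to for each backend. A hedged sketch of making that explicit (the helper name is mine, and whether `ChatGroq` accepts `method="json_schema"` depends on your installed `langchain-groq` version):

```python
# Illustrative helper: choose the structured-output method per backend,
# so Groq is asked for json_schema explicitly instead of relying on a default.
# This mirrors the thread's suggestion; it is not an official LangChain API.
def pick_structured_method(model_type: str) -> str:
    if model_type.lower() in ("openai", "groq"):
        return "json_schema"
    return "function_calling"

# Usage inside the factory above would then be roughly:
# llm = llm.with_structured_output(structured_schema,
#                                  method=pick_structured_method(model_type))
print(pick_structured_method("groq"))  # json_schema
```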

Yes, our system internally checks against your given JSON Schema using Zod, and if the model is unable to output the correct JSON schema then we fail the tool call. So from an API standpoint, either the entire API call fails, or, if it doesn't, the output is guaranteed to conform to your given schema. We've added more details on how it works in our docs: Structured Outputs - GroqDocs
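Even with that server-side guarantee, a cheap client-side check before trusting the parsed result costs little. A stdlib-only sketch (the field names are illustrative, not tied to any schema in this thread):

```python
import json

# Illustrative expected fields and their Python types.
REQUIRED = {"title": str, "servings": int}

def check(raw: str) -> dict:
    """Parse a JSON string and verify the illustrative required fields."""
    data = json.loads(raw)
    for key, typ in REQUIRED.items():
        if not isinstance(data.get(key), typ):
            raise ValueError(f"schema mismatch on {key!r}")
    return data

print(check('{"title": "Soup", "servings": 2}')["servings"])  # 2
```

This kind of guard turns a silent downstream failure into an immediate, descriptive error if the provider's guarantee ever doesn't hold for your client version.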

You guys are amazing! We just switched our AWS Bedrock inference over here just for this feature with OSS (we are heavy users). Would love to see the new Kimi K2 Thinking on Groq too (the structured-output feature isn't necessary for it).


Thank you, I’m glad it worked for you!!

Yes, we're working on K2 Thinking, but we don't have an ETA for you yet.