Hi there! We don’t currently have any plans to update our SDKs to support the Responses API. Is there any reason you want it in the Groq SDK instead of using the OpenAI SDK pointed at the Groq endpoint? For example, to use the Responses API with Groq in Python, you can use the OpenAI SDK like this:

import openai

client = openai.OpenAI(
    api_key="your-groq-api-key",
    base_url="https://api.groq.com/openai/v1",
)

response = client.responses.create(
    model="llama-3.3-70b-versatile",
    input="Tell me a fun fact about the moon in one sentence.",
)

print(response.output_text)
That approach looks right! Yes, we’d likely only introduce the Responses API in our own SDK if we had features that OpenAI doesn’t. Right now the two are the same, so using OpenAI’s SDK should work. Our chat completions API is meant to be compatible with the OpenAI Chat Completions API, but we do have a few differences (as you’ve seen). So you’re absolutely right on that.
Ok excellent. Thank you! In that case I’m all set. I will say: one pro of adding the Responses API to your SDK is that if you do eventually diverge from the OpenAI Responses spec, the change would flag an error in my Python static type checker (mypy, currently), which would help us understand that something changed and react accordingly. Right now, since we’re using the OpenAI spec and Python SDK, we’re somewhat “flying blind” about what Groq is doing, because OpenAI allows servers to attach arbitrary additional attributes to response objects. If Groq extends the API, I would only see the addition on my side as an extra field in the pydantic object, which wouldn’t show up in any code hinting, so it wouldn’t be easy to know there was something new. tl;dr, adding this to your SDK would let me know programmatically whenever the Groq API extends or modifies the OpenAI Responses API spec.
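For reference, here is one runtime workaround for the “flying blind” problem. This is a minimal sketch, assuming the OpenAI Python SDK’s pydantic v2 models are configured with extra="allow" (which current versions appear to be), so any server fields outside the published spec are collected in model_extra:

import openai

client = openai.OpenAI(
    api_key="your-groq-api-key",
    base_url="https://api.groq.com/openai/v1",
)

response = client.responses.create(
    model="llama-3.3-70b-versatile",
    input="Tell me a fun fact about the moon in one sentence.",
)

# Pydantic v2 models with extra="allow" silently collect unrecognized
# server fields in model_extra instead of raising an error, which is
# exactly why a static type checker never sees them. Logging them at
# runtime at least makes any divergence from the spec visible.
undocumented = response.model_extra or {}
if undocumented:
    print(f"Fields outside the OpenAI spec: {sorted(undocumented)}")

It’s not a substitute for having the fields in the type hints, but it would surface extensions as soon as they show up in a real response.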