Groq Python SDK Support for Responses API

Hi! I'm working on https://github.com/mozilla-ai/any-llm

Is there a plan for the Groq Python SDK to support the OpenAI Responses API? https://github.com/groq/groq-python

There is no Issues tab in that repo so I’m not sure where to ask this question.

Hi there! We don't currently have any plans to update our SDKs to support the Responses API. Is there any reason you want it in the Groq SDK instead of using the OpenAI SDK pointed at the Groq endpoint? For example, to use the Responses API with Groq in Python, you can use the OpenAI SDK like this:

```python
import openai

client = openai.OpenAI(
    api_key="your-groq-api-key",
    base_url="https://api.groq.com/openai/v1",
)

response = client.responses.create(
    model="llama-3.3-70b-versatile",
    input="Tell me a fun fact about the moon in one sentence.",
)
print(response.output_text)
```

@benank Thanks! That's what I ended up doing: any-llm/src/any_llm/providers/groq/groq.py at main · mozilla-ai/any-llm · GitHub. In that case, does this mean the Groq SDK would only introduce its own responses function if it diverges from the OpenAI spec? I assume that for the Completions API, Groq has its own SDK so that there is proper typing for extensions like reasoning (groq-python/src/groq/types/chat/chat_completion_message.py at main · groq/groq-python · GitHub), which is an extension not included in the OpenAI completion spec. Please let me know if I'm confused or misunderstanding something :sweat_smile:
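For what it's worth, the value of that provider-owned typing can be sketched in a few lines. The class and field names below are illustrative stand-ins, not the actual groq-python types, but they show what a declared extension buys you with mypy and editor autocomplete:

```python
# Sketch: a provider SDK declares its extension (here, `reasoning`) as a
# typed, optional field. Static checkers and autocomplete then know it exists.
# These names are illustrative, not the real groq-python classes.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatCompletionMessage:
    role: str
    content: Optional[str] = None
    # Groq-style extension: typed here, but absent from the OpenAI
    # completion spec, so the OpenAI SDK's models would not declare it.
    reasoning: Optional[str] = None


msg = ChatCompletionMessage(role="assistant", content="hi", reasoning="step 1...")
print(msg.reasoning)  # → step 1...
```

Because `reasoning` is declared on the type, removing or renaming it in a later SDK release would surface immediately as a type error in downstream code.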

That approach looks right! Yes, we'd likely only introduce our own responses SDK if we add features that OpenAI doesn't have. Right now the APIs should be the same, so using OpenAI's SDK should work. Our chat completions API is meant to be compatible with the OpenAI completions API, but we do have a few differences (as you've seen). So you're absolutely right :wink: on that.

Ok, excellent. Thank you! In that case I'm all set :slight_smile:

I will say: one pro of adding the Responses API to your SDK is that if you do eventually diverge from the OpenAI Responses spec, my Python static type checker (mypy, currently) would flag an error, which would help us understand that something changed and react accordingly. Right now, since we're using the OpenAI spec and Python SDK, we're sort of "flying blind" on what Groq is doing, because OpenAI allows arbitrary additional attributes on its objects. If Groq extends the API, I would only see the change on my side as an extra field in the pydantic object that wouldn't show up in any code hinting, so it wouldn't be easy to know there was something new.

tl;dr: adding this to your SDK would let me know, in a programmatic way, whenever the Groq API extends or modifies the OpenAI Responses API spec.
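To illustrate the "flying blind" point: pydantic-style models that allow extra attributes accept unknown fields silently, so a static checker never sees a provider extension. The snippet below is a plain-Python stand-in that mimics that behavior; the class name and the hypothetical `new_groq_field` are invented for illustration, not taken from the real OpenAI SDK:

```python
# Stand-in for a pydantic model with extra fields allowed (as in the
# OpenAI SDK's response objects). Unknown keyword arguments are stashed
# in an untyped dict rather than rejected, so mypy has nothing to flag.
from typing import Any, Dict


class ResponseLike:
    output_text: str

    def __init__(self, output_text: str, **extra: Any) -> None:
        self.output_text = output_text
        # pydantic v2 exposes unrecognized fields via `model_extra`;
        # this dict plays that role here.
        self.model_extra: Dict[str, Any] = dict(extra)


# A hypothetical provider extension arrives without any error or warning:
resp = ResponseLike(output_text="hi", new_groq_field="surprise!")
print(resp.model_extra.get("new_groq_field"))  # → surprise!
```

The extension is reachable only through the untyped extras dict, which is exactly why it never shows up in code hinting or as a mypy error.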