GPT-oss-120b Reasoning Tokens Not Counted in Responses API Usage Statistics

I’m using the Groq Responses API with GPT-OSS models and noticed that reasoning tokens are not being counted in the usage statistics, even when reasoning is enabled and working correctly.

Environment:

Issue:
When making requests with reasoning enabled (reasoning: { effort: 'high' }), the API:

  1. ✅ Successfully generates reasoning content (visible in the output array with type: 'reasoning')
  2. ✅ Counts total tokens in usage.input_tokens and usage.output_tokens
  3. ❌ Always returns 0 for usage.input_tokens_details.reasoning_tokens and usage.output_tokens_details.reasoning_tokens

Example Request:
{
  "model": "openai/gpt-oss-120b",
  "input": "What is 2 + 2? Think step by step.",
  "instructions": "You are a helpful assistant.",
  "reasoning": { "effort": "high" },
  "temperature": 0.7
}

Actual Response (abbreviated):
{
  "output": [
    {
      "type": "reasoning",
      "content": [{"type": "reasoning_text", "text": "Let me calculate…"}]
    }
  ],
  "usage": {
    "input_tokens": 599,
    "output_tokens": 40,
    "total_tokens": 639,
    "input_tokens_details": {
      "cached_tokens": 0,
      "reasoning_tokens": 0  // ← Should be non-zero
    },
    "output_tokens_details": {
      "cached_tokens": 0,
      "reasoning_tokens": 0  // ← Should be non-zero
    }
  }
}

Expected Behavior:
The reasoning_tokens fields should contain the actual count of tokens used for reasoning, separate from the
regular response tokens.
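
With correct counts I'd expect to be able to split out the answer tokens, continuing the sketch above (visible_tokens is just my own variable name, not an API field):

usage = response.usage
reasoning_tokens = usage.output_tokens_details.reasoning_tokens
visible_tokens = usage.output_tokens - reasoning_tokens  # tokens in the final answer only

# Today reasoning_tokens is always 0, so visible_tokens == output_tokens
# even though the output array clearly contains a reasoning item.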

Questions:

  1. Is this a known limitation of the current beta implementation?
  2. Are there plans to implement proper reasoning token counting?
  3. Is there anything I need to configure differently to get reasoning tokens counted?

Thank you for your help!

Thanks for this report and the detailed information! This looks like an issue on our end - I'm working with the team now to fix it.

I’ll post here again once it’s fixed!

Brilliant - thank you!