
I noticed tool use failing more often than before recently, while there have been no related code changes on our side. The model is llama-3.3-70b-versatile.

It is generating a valid JSON response, but wrapping it in XML-like function tags (<function=json>...</function>), which causes tool use to fail because the parser expects pure JSON.

A sample error response:

url: 'https://api.groq.com/openai/v1/chat/completions',
  requestBodyValues: [Object],
  statusCode: 400,
  responseHeaders: [Object],
  responseBody: `{"error":{"message":"Failed to call a function. Please adjust your prompt. See 'failed_generation' for more details.","type":"invalid_request_error","code":"tool_use_failed","failed_generation":"\\u003cfunction=json\\u003e{ … }\\u003c/function\\u003e"}}\n`,
  isRetryable: false,
  data: [Object]
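Until this is fixed server-side, one client-side workaround is to unwrap the tagged payload from `failed_generation` before parsing it. This is a hypothetical sketch (the function name and regex are mine, not part of any SDK), assuming the wrapper always has the `<function=NAME>...</function>` shape seen in the error body:

```typescript
// Hypothetical workaround: strip the XML-like <function=...>...</function>
// wrapper the model sometimes emits around its tool-call arguments,
// then parse the inner text as JSON. Falls back to parsing the raw
// string when no wrapper is present.
function unwrapFunctionTag(failedGeneration: string): unknown {
  const match = failedGeneration.match(/^<function=[^>]*>([\s\S]*)<\/function>$/);
  const inner = match ? match[1] : failedGeneration;
  return JSON.parse(inner);
}

// Example with the wrapped form seen in the error response:
const raw = '<function=json>{ "city": "Tokyo" }</function>';
console.log(unwrapFunctionTag(raw)); // logs: { city: 'Tokyo' }
```

This only recovers the payload from the 400 error's `failed_generation` field; it does not stop the API from returning the `tool_use_failed` error in the first place.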


Thank you for reporting this issue; I'll submit a bug report.

I'll email you to ask for some specific information about your API request.

