Groq allows a hallucinated tool call to be sent back as a response, but does not allow that same hallucinated tool call to appear in the message history of a subsequent `litellm.completion()` request.
I'm using `groq/llama-3.1-8b-instant` through LiteLLM. I created a tool (a Python function `get_weather`), asked "What's the weather in Tokyo?", and allowed this tool to be used via `litellm.completion(model=self.model, messages=self.messages)`. My Python function had an issue, so I reported that back as a tool message: `self.messages.append({"role": "tool", "tool_call_id": tc.id, "content": "Function ... failed"})`.
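For clarity, a rough sketch of the flow up to this point (the tool schema and helper names are my reconstruction, not my exact code; the actual `litellm.completion` call is commented out since it needs a Groq API key):

```python
# Tool schema for get_weather, in the OpenAI function-calling format
# that LiteLLM passes through to Groq.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

# import litellm
# response = litellm.completion(
#     model="groq/llama-3.1-8b-instant",
#     messages=messages,
#     tools=tools,
# )
# tc = response.choices[0].message.tool_calls[0]
#
# The assistant's message (including its tool call) is appended to the
# history as-is; my get_weather implementation then fails, so I report
# that failure back as a tool message:
# messages.append(response.choices[0].message.model_dump())
# messages.append({
#     "role": "tool",
#     "tool_call_id": tc.id,
#     "content": "Function get_weather failed",
# })
```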
So far, so good. However, the LLM then hallucinates a call to a tool named `brave_search`, which doesn't exist, and my next request fails:
BadRequestError: litellm.BadRequestError: GroqException - {"error":{"message":"tool call validation failed: attempted to call tool 'brave_search' which was not in request.tools","type":"invalid_request_error","code":"tool_use_failed","failed_generation":"The get_weather function is not available. However, we can use another function that fetches current weather for the desired location.\n\n<function=brave_search>{\"query\": \"Tokyo weather today\"}</function>"}}
On my side of the code, calling a function that doesn't exist is handled; however, I append every LLM response to the message history as-is. Because this disallowed tool call is now in the message history, Groq rejects my request. I'm not sure whether this happens inside Groq's API or on your side, but to me it seems odd that the whole request is blocked (couldn't the invalid call simply be ignored in the chat history?). If you do add a check for this, it would make more sense at the LLM-response stage (which I can imagine is challenging to handle).
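For reference, the workaround I'm using on my side: before each request I strip any tool calls whose names aren't in my registered tool set, along with their now-orphaned tool responses. A sketch (the function name and structure are mine, not a LiteLLM API):

```python
def sanitize_messages(messages, allowed_tools):
    """Remove assistant tool calls whose names are not in allowed_tools,
    plus any tool-role responses that answer one of the removed calls."""
    dropped_ids = set()
    cleaned = []
    for msg in messages:
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            kept = []
            for tc in msg["tool_calls"]:
                if tc["function"]["name"] in allowed_tools:
                    kept.append(tc)
                else:
                    dropped_ids.add(tc["id"])
            msg = {**msg, "tool_calls": kept}
            if not kept:
                # No surviving tool calls; drop the key, and drop the
                # whole message if it carries no content either.
                msg.pop("tool_calls")
                if not msg.get("content"):
                    continue
        elif msg.get("role") == "tool" and msg.get("tool_call_id") in dropped_ids:
            continue  # orphaned response to a removed hallucinated call
        cleaned.append(msg)
    return cleaned
```

Then `self.messages = sanitize_messages(self.messages, {"get_weather"})` right before each `litellm.completion()` call keeps the hallucinated `brave_search` call out of the history.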
Sorry, my code is currently too messy; it would take some time to produce a small standalone example demonstrating this bug.