Sometimes it responds with only reasoning tokens and no final response. I can reproduce it consistently with a prompt that includes tools. Has anyone else seen this? Any recommendations?
Hi,
I have the same problem as you. Sometimes the LLM output contains "…… …." and also reasoning. I am going to try the answer in this topic: [Bug] GPT-OSS-120B: Reasoning tokens and gibberish output appearing in responses despite configuration to hide reasoning
Try it as well and let me know if it works for you.
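In the meantime, a workaround I'm experimenting with: post-process the raw completion and keep only the final channel of the harmony format. This is a minimal sketch, assuming the raw output still contains the `<|channel|>` / `<|message|>` markers (the sample string below is made up, not actual model output):

```python
import re

# Hypothetical raw completion text that leaked harmony channel markers.
raw = (
    "<|channel|>analysis<|message|>Thinking about which tool to call...<|end|>"
    "<|start|>assistant<|channel|>final<|message|>Here is the actual answer.<|return|>"
)

def extract_final_channel(text: str) -> str:
    """Return only the text from the harmony 'final' channel, if present."""
    match = re.search(
        r"<\|channel\|>final<\|message\|>(.*?)(?:<\|return\|>|<\|end\|>|$)",
        text,
        re.DOTALL,
    )
    return match.group(1).strip() if match else ""

print(extract_final_channel(raw))  # -> "Here is the actual answer."
```

If the model never emits a final channel at all (the original issue above), this returns an empty string, which at least makes the failure easy to detect and retry.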