I would very much like to see support for Z.ai’s GLM-4.5 and GLM-4.5V models. In my testing, GLM-4.5 was very impressive: it is one of the first open models I have seen that approaches or matches the tool-calling ability of OpenAI’s models. That makes GLM-4.5 an extremely good model for agentic use cases; its only drawback is speed. There are not yet any fast providers for it, which makes it harder to use. Groq could fill this gap and enable strong agentic toolchains with fast inference.
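For illustration, here is a minimal sketch of the kind of agentic tool-calling request this would enable through Groq’s OpenAI-compatible endpoint. The model id `glm-4.5` is an assumption on my part, since the model is not currently offered; everything else follows the standard chat-completions tool-calling format.

```python
# Hypothetical sketch: agentic tool calling against a GLM-4.5 model on Groq.
# The model id "glm-4.5" is assumed -- the model is not currently available.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible endpoint
)

# A single example tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.5",  # hypothetical model id
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# With a strong tool-calling model, this should contain a well-formed call
# to get_weather rather than a free-text guess.
print(response.choices[0].message.tool_calls)
```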
GLM-4.5V is one of the first frontier-level multimodal models that is open source. Groq’s multimodal lineup is currently limited to the two Llama 4 models, and their performance falls well short of frontier level. Mistral’s Pixtral Large would be another good multimodal addition, but given the performance I saw from GLM-4.5, I expect GLM-4.5V to be the most valuable multimodal model to implement right now.
Thanks for the suggestion - I personally haven’t spent much time with GLM-4.5V, but I’ll bring it up for consideration!