Groq API user for the last 1.5 years (since 6/23/2024).
No new models for the last 6 months.
Highly disturbing.
We expect the Groq team to respect their users by informing us in advance, so that we have enough time to transition to alternatives.
I agree, really confused why we aren’t getting access to any of the newer models. 2025 was an open source model renaissance.
Kimi 2.5, Deepseek 3.2, GLM, MiniMax, the rest of the Qwen 3 line. There is so much out there. I love Groq and want to keep using it, please.
I think they were focused on selling the company. Which ended up happening… NVIDIA.
@Vasiliy_Goncharenko @osoggy guys, what are the other alternatives you are considering?
Do you think Fireworks is a good bet if the Groq team doesn't do anything?
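For anyone weighing a migration: Groq's API is OpenAI-compatible, and most alternatives (Fireworks included) expose OpenAI-compatible endpoints too, so switching is mostly a base-URL and model-name change. A minimal sketch; the endpoint URLs and model names below are assumptions, so verify them against each provider's docs before relying on them:

```python
# Sketch: provider-agnostic client config for OpenAI-compatible APIs.
# URLs and model names are illustrative assumptions -- check each
# provider's documentation for the current values.
PROVIDERS = {
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",
        "model": "llama-3.3-70b-versatile",
    },
    "fireworks": {
        "base_url": "https://api.fireworks.ai/inference/v1",
        "model": "accounts/fireworks/models/llama-v3p3-70b-instruct",
    },
}

def client_config(provider: str, api_key: str) -> dict:
    """Return kwargs suitable for constructing an OpenAI-style client
    (e.g. openai.OpenAI(base_url=..., api_key=...)) plus the model name
    to pass per-request."""
    p = PROVIDERS[provider]
    return {"base_url": p["base_url"], "api_key": api_key, "model": p["model"]}
```

With a config like this, swapping providers in application code becomes a one-line change rather than a rewrite, which takes some of the sting out of deprecations.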
@yawnxyz - You guys should respond here. It's honestly annoying at this point.
Indeed! We want Kimi 2.5! Cerebras doesn't have the latest open-source models either!
Thanks for the feedback. We hear you on the pace of new model releases. We’re tracking interest in newer models and will share updates when we have something to announce.
We care a lot about developers building on Groq and we’ll keep focusing on delivering a great experience on GroqCloud.
Sorry, can't believe this: today, deprecation of Llama 4, without a real alternative. WTF? It feels more like NVIDIA is slowly shutting Groq down.
Hey, I love you guys and I love groq
RIP Groq. Another win for Nvidia, another loss for the open-source community. You just killed the best sandbox for vision AI; don't be surprised when the developers leave with it.
Deprecation of Kimi-K2-Instruct with no good alternative is a massive issue.
We have been using Groq in our SaaS for about 5 months in production. It's highly disappointing to see no updates on the newer open-source models. We really wanted to use MiniMax, GLM, Qwen, and other models. It seems like the Groq team is quietly dropping support for devs like us and only looking at enterprise customers and their datacenters. RIP.
I started using Groq at the beginning of the year, and most of the LLM models were already pretty old. Now another four months have passed, and not only is there nothing new, but we have fewer models after deprecations. This is a field where things move and improve very quickly.
I get that you want to research and pick the most robust thing, but in the meantime we’re stuck with ancient models. Picking any model released this year with a blindfold would be better than GPT-OSS-120b.
I just came here expecting all the top models since they were acquired by Nvidia, and I found a dust bin! Glad they have Nvidia, because they just pissed away their community here for sure.