We need Kimi K2.5 ASAP!

Before I spend 100k on another provider, PLEASE add the new Kimi model! Its coding abilities are far better than any other open-source model’s! Waiting for this!!

13 Likes

Yes please!! I hope the team is working on it!!

Kimi K2.5 feels like the best LLM at the moment. I’m seriously considering dropping my Claude Max sub; being able to run Kimi K2.5 through the Groq API would 100% make me drop it.
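
For what it’s worth, since the Groq API is OpenAI-compatible, switching over would be trivial on my end. A minimal sketch with the Groq Python SDK; the `moonshotai/kimi-k2.5` model ID is just my guess, since nothing is listed yet:

```python
import os
from groq import Groq

# Reads GROQ_API_KEY from the environment.
client = Groq(api_key=os.environ["GROQ_API_KEY"])

# Hypothetical model ID -- Groq hasn't published one for K2.5.
resp = client.chat.completions.create(
    model="moonshotai/kimi-k2.5",
    messages=[
        {"role": "user", "content": "Write a binary search in Python."}
    ],
)
print(resp.choices[0].message.content)
```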

10 Likes

Yes! Would love to see Kimi K2.5 soon!

8 Likes

One more vote for being able to run Kimi K2.5

6 Likes

If they don’t add Kimi-K2.5 soon, the company’s completely dead :frowning:

1 Like

I hope I’m wrong, but since NVIDIA’s investment I suspect this service will eventually disappear and become just a feature of the graphics cards we buy from NVIDIA. That’s why we’re seeing this decline in available models.

1 Like

That wasn’t part of the deal. Only the IP and the teams were; the cloud division is being sold off separately, hence the limbo.

Wdym? We’re not going to have the Groq cloud service eventually? :frowning:

Thanks for the suggestion. We’re tracking interest in Kimi K2.5, and we’ll share an update if we plan to support it.

6 Likes

wow, groq is really dead

1 Like

It would be very beneficial to you to support it. You have the speed; Opus 4.5 and GPT 5.2 have the brains… imagine if we could have both?

My understanding is that Kimi K2.5 is a massive model, and Groq’s inference chips may not be able to handle something that big without massive parallelism, which could cost them a lot of $$ to operate. So either the company has really slowed down since the acquisition, or it’s just completely fading. Rough math below.
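
Back-of-envelope sketch, assuming the published ~1T total parameters for K2 (K2.5 presumably similar) and the commonly cited ~230 MB of on-chip SRAM per LPU; this ignores KV cache and activations entirely:

```python
# Rough estimate: LPUs needed just to hold Kimi K2-class weights in SRAM.
# Assumptions: ~1e12 total parameters (published for K2; K2.5 presumably
# similar), 1 byte per parameter at 8-bit, ~230 MB SRAM per LPU.
params = 1.0e12
bytes_per_param = 1.0       # 8-bit quantized weights
sram_per_chip = 230e6       # bytes of on-chip SRAM per LPU

chips = params * bytes_per_param / sram_per_chip
print(f"~{chips:,.0f} LPUs for weights alone")   # ~4,348
```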

Yes!! Just came here to request that Kimi K2.5 be added!

1 Like

Is it good for copywriting?

Add Kimi K2.5 for sure! This is not a question.

One more vote from me as well!

yo hurry up and add this now.

1 Like

hey - count this comment as me expressing interest

We run a massive piece of enterprise software that requires exceptionally low latency and a model as robust as Kimi K2 Instruct. Not only is the deprecation unwarranted, but Groq’s suggestion to use GPT-OSS-120B as a replacement is asinine and insulting. Do they even understand the models they’re hosting? If another service offers similarly low latency, I’m ready to make the switch. If this is how they choose to run things, Groq clearly isn’t up to enterprise standards as a provider anymore.

1 Like

And instead, we’re getting a notice that K2 is being removed and that we should just use GPT-OSS-120B.

That’s… certainly a choice. I was hoping to see the OLD K2-Instruct removed and K2.5 put on the freed-up infra. Doesn’t seem like that’s the plan :confused:

If prices need to be adjusted a bit, go for it. Or at least break the deafening silence.
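
In the meantime, here’s roughly how I’m handling the removal on my end: check what Groq still serves and fall back to what the notice suggests (not because it’s an equivalent model, it isn’t). The model IDs are the ones from Groq’s model list, as far as I can tell:

```python
from groq import Groq

client = Groq()  # uses GROQ_API_KEY from the environment

PREFERRED = "moonshotai/kimi-k2-instruct"   # being deprecated
FALLBACK = "openai/gpt-oss-120b"            # what the notice suggests

# List the models Groq currently serves and pick accordingly.
available = {m.id for m in client.models.list().data}
model = PREFERRED if PREFERRED in available else FALLBACK

resp = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": "ping"}],
)
print(model, "->", resp.choices[0].message.content)
```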

2 Likes