Do you plan to offer qwen 3.5?

This model (35B A3B) is truly great and a clear open-source winner right now. I’d love to see it at Groq speed.

+1 for Qwen3.5

Even something like the Qwen3.5 9B would be great given its multimodal capabilities.

Totally agree! This model is rather small and MoE-based, which guarantees low latency, yet it performs better than gpt-oss-120b and even larger models. It has built-in reasoning, which can be disabled if desired. So this model could outperform and potentially even replace the current gpt-oss-120b and gpt-oss-20b models.

Exactly. It would be blazing fast with Groq’s inference, and I can totally see myself replacing OSS 120B if they decide to host any variant of Qwen3.5.

I would like to see that too! Qwen/Qwen3.5-35B-A3B · Hugging Face

+1 here!!
With Groq’s fast inference it would be amazing.

+1 as well. Would love to see Qwen3.5-27B too.

+1
Would love to see both the 27B and the 35B become available and join the production set, not just preview. I feel like the current Qwen3-32B has garnered enough of a user base for this to be possible.