The enforced migration from kimi-k2-instruct-0905 to openai/gpt-oss-120b on the Groq platform constitutes a significant technological downgrade rather than an upgrade.
Key data points:
- Capacity Halved: The context window shrinks from 256k tokens to 128k tokens, making it impossible to analyze large repositories or long legal documents in a single pass.
- Performance Drop: Agentic coding capability (SWE-bench Verified) declines from 69.2% (Kimi-K2) to 62.4% (GPT-OSS).
- Architecture Scaling: Kimi-K2 uses a 1-trillion-parameter architecture (32B active), whereas GPT-OSS is limited to 117 billion parameters (5.1B active), sized specifically to fit on a single NVIDIA H100 GPU.
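To make the capacity point concrete, here is a rough sketch of what halving the window means for a single prompt. The ~4-characters-per-token heuristic, the 600 KB document size, and the reserved output budget are illustrative assumptions, not measured values; real tokenizers differ per model.

```python
# Rough illustration of the impact of halving the context window.
# Assumes the common ~4-characters-per-token heuristic (an approximation).

KIMI_K2_CONTEXT = 256_000   # tokens (kimi-k2-instruct-0905)
GPT_OSS_CONTEXT = 128_000   # tokens (openai/gpt-oss-120b)

def fits_in_context(text_chars: int, context_tokens: int,
                    chars_per_token: float = 4.0,
                    reserved_output: int = 8_000) -> bool:
    """Return True if a document of `text_chars` characters is likely to
    fit in the prompt, leaving `reserved_output` tokens for the reply."""
    estimated_tokens = text_chars / chars_per_token
    return estimated_tokens <= context_tokens - reserved_output

# A ~600 KB repository dump or legal brief (~150k tokens by the heuristic):
doc_chars = 600_000
print(fits_in_context(doc_chars, KIMI_K2_CONTEXT))  # True  -> fits at 256k
print(fits_in_context(doc_chars, GPT_OSS_CONTEXT))  # False -> no longer fits
```

Any workload that previously fit in one call now has to be chunked, with the usual loss of cross-document reasoning that entails.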
By replacing a high-efficiency SOTA model with a hardware-constrained alternative, the platform risks neutralizing its own competitive advantage. The move also feeds concerns that innovation is being strategically suppressed to protect legacy hardware monopolies.
Please keep running kimi-k2-instruct-0905 until kimi-k2.5 is available.