
Kimi K2.5: The Chinese Model Redefining the Boundaries of Open‑Source Performance

A comprehensive analysis of Moonshot AI's Kimi K2.5: trillion‑parameter architecture, parallel agent swarm, performance that challenges GPT‑5.4 at a fraction of the cost.


AI DayaHimour Team

April 9, 2026


The January Shock: When the Frontier Is Pushed

On 27 January 2026, Chinese startup Moonshot AI released its Kimi K2.5 model. What set it apart was not merely benchmark numbers, but an entirely new architectural concept: “Agent Swarm”. The model can coordinate up to 100 sub‑agents to execute 1,500 tool steps in parallel, cutting the runtime of complex tasks by 4.5× compared to traditional single‑path agents. Crucially, it is the first open‑source model that genuinely reaches the performance “frontier”.
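Moonshot has not published the swarm's internals, but the description implies a fan-out/fan-in pattern: a coordinator dispatches sub-agents in parallel and merges their results. A minimal sketch of that pattern, assuming a hypothetical `run_subagent` task (the real system would run full tool-use loops per sub-agent):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    # Placeholder for one sub-agent's tool-use loop (search, browse, code...).
    # In the real system each sub-agent may execute many tool steps.
    return f"result for {task!r}"

def agent_swarm(tasks: list[str], max_agents: int = 100) -> list[str]:
    # Fan out up to `max_agents` sub-agents in parallel, then gather their
    # results in order for the coordinator to merge into a final answer.
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        futures = [pool.submit(run_subagent, t) for t in tasks]
        return [f.result() for f in futures]

results = agent_swarm([f"subtask {i}" for i in range(8)])
```

The claimed 4.5× speedup would come from exactly this kind of parallelism: wall-clock time approaches that of the slowest sub-task rather than the sum of all of them.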

Expert Architecture: A Trillion Parameters with Surgical Efficiency

K2.5 is built on a mixture‑of‑experts (MoE) architecture with a total of one trillion parameters, but activates only 32 billion of them per token during inference. This design gives the model enormous knowledge memory while keeping computational efficiency high enough to make deployment affordable.

Core specifications: 61 layers including one dense layer, 384 experts with 8 activated per token, a 256,000‑token context window, a 160,000‑token vocabulary, and a MoonViT visual encoder of 400 million parameters. Pretraining was performed on 15 trillion tokens of mixed text and image data, making it a genuinely multimodal model from the ground up.
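The gap between total and active parameters follows directly from top-k routing: every token is scored against all 384 experts, but only the 8 highest-scoring ones actually run. A back-of-envelope sketch of that routing (the scores here are random stand-ins, not a real gating network):

```python
import random

N_EXPERTS, TOP_K = 384, 8
TOTAL_PARAMS, ACTIVE_PARAMS = 1_000e9, 32e9

# Only a small fraction of the weights participate in each token's forward pass.
active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS   # 0.032, i.e. 3.2%

def route(token_scores: list[float], top_k: int = TOP_K) -> list[int]:
    # Pick the indices of the top_k highest-scoring experts for this token.
    ranked = sorted(range(len(token_scores)), key=token_scores.__getitem__, reverse=True)
    return ranked[:top_k]

scores = [random.random() for _ in range(N_EXPERTS)]
chosen = route(scores)   # 8 expert indices out of 384
```

So each token pays the compute cost of a ~32B model while drawing on the knowledge capacity of a 1T one, which is what makes the deployment economics described below possible.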

Performance on Benchmarks

The following results are documented from independent platforms such as LiveBench and Artificial Analysis:

Main Benchmarks — April 2026

AIME 2025 96.1%
HMMT 2025 95.4%
GPQA Diamond 87.6%
MathVista 90.1%
SWE-Bench Verified 76.8%
LiveCodeBench 85.0%
HLE (with tools) 50.2%

On the SWE‑bench Verified test in particular, the 76.8% score is the highest ever recorded for an open‑source model, surpassing DeepSeek V3.2 by a noticeable margin.

Agent Comparison: When the Swarm Outperforms

Agent Performance Comparison — Kimi K2.5 Agent Swarm vs. Competitors

BrowseComp

Kimi K2.5 Agent Swarm 76.8
Kimi K2.5 66.6
Claude Opus 4.5 32.8

Wide Search

Kimi K2.5 Agent Swarm 79.0
Kimi K2.5 73.7
Claude Opus 4.5 76.2

In-house Bench

Kimi K2.5 Agent Swarm 68.3
Kimi K2.5 41.6
Claude Opus 4.5 49.8

These numbers show that switching Kimi K2.5 into “Agent Swarm” mode turns it into a different class of large‑scale search tool, surpassing Claude Opus 4.5 by a wide margin on BrowseComp (76.8 vs. 32.8). Competition on Wide Search is more balanced.

Global Ranking: LMSYS Arena

The LMSYS Chatbot Arena relies on blind human voting between the outputs of two models shown simultaneously and is one of the most objective benchmarks for conversational quality. In the March 2026 update, Kimi K2.5 ranked ninth globally in the Code Arena with an Elo score of 1809, becoming the first Chinese model (alongside GLM‑5) to enter the top‑ten list. The leading models scored between 1941 and 2002 points, a gap of 130–190 points: a real difference, but one that can be narrowed.
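An Elo gap translates directly into a blind-vote win rate: under the Elo model, the expected score of the lower-rated model is 1 / (1 + 10^(Δ/400)). Plugging in the arena scores above:

```python
def expected_score(r_a: float, r_b: float) -> float:
    # Expected score (win probability, ignoring draws) of model A
    # against model B under the standard Elo model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# K2.5 at 1809 Elo vs. the leaders at 2002 and 1941:
vs_top = expected_score(1809, 2002)      # ~0.25
vs_tenth_ranked_gap = expected_score(1809, 1941)  # ~0.32
```

In other words, the current gap means K2.5 would be preferred in roughly a quarter to a third of blind head-to-head votes against the leaders, which is behind but within striking distance.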

Artificial Analysis: Intelligence vs. Cost

The Artificial Analysis platform measures agent performance on complex real‑world tasks and combines performance with actual cost. Key findings for K2.5:

Intelligence Index: 47 points vs. an average of 26 points for similar models. On the GDPval‑AA scale (for agent‑cognitive tasks such as analysis and presentations), it scored 1309 Elo points, surpassing GLM‑4.7, DeepSeek V3.2, and Gemini 3 Pro.

Total Cost: $371 to run the full intelligence test suite, more than 4× less than Claude Opus 4.5 or GPT‑5.2.

Hallucination Rate: just 64%, compared to 74% for the previous Kimi K2 Thinking. The model prefers to abstain from answering rather than fabricate unverified information, a positive trait in reliability‑critical contexts.

Speed: 40.8 tokens per second vs. an average of 51, with a clear tendency towards verbosity (89 million tokens in the test suite vs. an average of 15 million). This can be valuable for in‑depth analysis but raises costs in interactive applications.

Pricing: The Price War That Changed the Equation

Official pricing via Alibaba Cloud: $0.60 per million input tokens, $3.00 for output. Via OpenRouter: $0.38 input, $1.72 output. With cached‑input discounts, the effective input price can drop below $0.25 for repetitive use cases.
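To make these rates concrete, here is a sketch of a per-request cost estimate using the Alibaba Cloud list prices quoted above (the 50% cache-hit rate and the $0.25 cached-input price are illustrative assumptions, not published figures):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             price_in: float = 0.60, price_out: float = 3.00,
             cached_frac: float = 0.0, cached_price: float = 0.25) -> float:
    # Prices are per million tokens; cached input tokens are billed
    # at the discounted rate, the rest at the full input rate.
    fresh = input_tokens * (1 - cached_frac)
    cached = input_tokens * cached_frac
    return (fresh * price_in + cached * cached_price + output_tokens * price_out) / 1e6

# 1M input tokens + 200K output tokens at list prices:
base = cost_usd(1_000_000, 200_000)                       # $1.20
with_cache = cost_usd(1_000_000, 200_000, cached_frac=0.5)  # $1.025
```

The same workload at Claude Opus-class pricing would cost roughly an order of magnitude more, which is the arithmetic behind the "9× cheaper" comparison below.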

The most striking comparison: K2.5 is about 9× cheaper than Claude Opus 4.5, and roughly 37× cheaper than Claude Opus 4.6. Given that it outperforms these two models on several agent benchmarks, the equation becomes top‑tier performance at a fraction of the cost.

The Complete Kimi Family

Model          Total Params   Active Params   Use Case               Price (per M tokens)
Kimi K2 Lite   ~30B           30B             Simple tasks           Free
Kimi K2        ~100B          100B            General use            $0.30/$0.10
Kimi K2.5      1T             32B             Complex tasks          $1.20/$0.40
Kimi K2 Pro    ~2T            64B             Specialized research   $6.00/$2.00

Real Limitations

The 256,000‑token context window is the most prominent constraint, falling short of the millions of tokens offered by Gemini, Claude, and even Llama 4 Maverick. This limits its use for tasks requiring processing of huge documents or entire code repositories in a single context.
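Until longer contexts arrive, oversized inputs have to be windowed before they reach the model. A minimal chunker under a 256K-token budget (the 4-characters-per-token ratio is a rough English-text heuristic, not the actual tokenizer's behavior):

```python
def chunk_text(text: str, max_tokens: int = 256_000,
               chars_per_token: int = 4) -> list[str]:
    # Approximate the token budget in characters and split on that boundary.
    # A production pipeline would split on semantic boundaries (files,
    # sections) and use the real tokenizer to count tokens.
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "x" * 3_000_000        # roughly 750K tokens: too big for one 256K window
parts = chunk_text(doc)      # splits into 3 chunks of at most ~1.02M characters
```

This is exactly the overhead that million-token-context models let users skip, which is why the window size matters for repository-scale work.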

On the ARC‑AGI‑2 test for abstract reasoning, it scored only 12% at an evaluation cost of $0.28, a figure that points to a gap in inferring entirely new abstract patterns. This challenge is shared by most current‑generation models, though the leading closed models score noticeably higher.

Independent reports also suggest that workflows heavily reliant on tool calling may need more human supervision compared to OpenAI or Anthropic models, especially in sensitive production environments.

Fully Available: MIT Without Restrictions

The model was released under the MIT open‑source license, permitting use, modification, and commercial distribution. It is accessible via kimi.com and the mobile app, via API from Fireworks AI, Together AI, Nebius, and Cloudflare Workers AI, or by downloading the weights from Hugging Face: the full version is about 595 GB, and a dynamically quantized compressed version about 240 GB.
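Hosted access is typically through an OpenAI-compatible chat-completions endpoint. A request sketch against OpenRouter (the model slug `moonshotai/kimi-k2.5` is an assumption to verify against the provider's model list before use):

```python
import json
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenAI-compatible endpoint

payload = {
    "model": "moonshotai/kimi-k2.5",  # hypothetical slug: check the provider's docs
    "messages": [
        {"role": "user", "content": "Summarize the MoE design of Kimi K2.5."}
    ],
    "max_tokens": 512,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # replace with a real key
        "Content-Type": "application/json",
    },
)
# response = urllib.request.urlopen(req)  # uncomment with a real key to send
```

The same payload shape works against Fireworks AI and Together AI, since both expose OpenAI-compatible endpoints; only the base URL and model slug change.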

Questions Seeking Answers

What Kimi K2.5 brings to the stage is deeper than mere benchmarks: if an open‑source model with this level of performance in programming, mathematics, and agent‑based search is available at this cost, where does the actual value of closed models lie? Part of the answer lies in two areas: quality of explanations and human‑guided reasoning (where the Claude models excel), and handling of extremely large contexts (where Gemini excels). The deeper question is whether these performance gaps will persist or narrow, given that the team behind K2.5 now has a freely available model it can improve continuously, at a pace not subject to immediate profit pressures.

Kimi K2.5, Moonshot AI, agent swarm, LMSYS, Artificial Analysis, LiveBench, model comparison, 2026


Related Articles

Kling 3.0: A Full Cinematic Journey at Your Fingertips – Kuaishou's Most Powerful Intelligent Video Model
Discover Kling 3.0, released on 5 February 2026: videos up to 15 seconds at 4K resolution, native multilingual audio, and multi‑shot cinematic control that brings creativity to life. (Apr 4, 2026)

Gemma 4: Google's Most Powerful Open‑Source Model — Comprehensive Analysis of the New 2026 Model Family
Google launches Gemma 4 in four sizes from 2B to 31B, built on Gemini 3 technology, with an Apache 2.0 open commercial license. Supports 140 languages, a 256K context window, and runs on mobile devices with unprecedented efficiency. (Apr 4, 2026)

MiniMax M2.7: The Chinese Model That Challenges Claude Opus 4.6 at 20× Lower Cost – Complete Analysis
MiniMax launches M2.7, a 230‑billion‑parameter MoE model (only 10 billion active) specially designed for smart agents and programming. It scores 56.22% on SWE‑Pro and reaches 90% of Claude Opus 4.6 performance at only 7% of the cost. (Apr 2, 2026)