Chinese AI company DeepSeek has launched two new models, DeepSeek-V3.2 and DeepSeek-V3.2 Speciale, claiming performance on par with frontier systems such as OpenAI’s GPT-5, Anthropic’s Claude Sonnet 4.5, and Google’s Gemini 3 Pro.

The company said V3.2 delivers near state-of-the-art results across coding, tool use and other benchmark tasks, while the Speciale variant recorded gold-medal scores at the 2025 International Mathematical Olympiad (IMO) and International Olympiad in Informatics (IOI), underscoring its technical depth. Both models remain available under an open-source licence, a key strategy behind DeepSeek’s rapid global adoption.

DeepSeek said V3.2 is powered by three major innovations: DeepSeek Sparse Attention (DSA), a mechanism designed to cut computational costs while preserving performance; a scalable reinforcement learning framework; and a large-scale agentic task synthesis pipeline. DSA splits attention into two stages, a lightweight indexer that scores earlier tokens and a fine-grained attention step that processes only the top-scoring ones. It is optimised for long-context processing and was the only architectural change introduced during continued pretraining.
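The two-stage idea can be sketched roughly as follows. This is a minimal NumPy illustration, not DeepSeek’s implementation: the indexer dimension, the top-k value and the single-head, single-query setting are all assumptions made for brevity.

```python
import numpy as np

def sparse_attention(q, k, v, idx_q, idx_k, top_k=64):
    """Illustrative DSA-style attention for one query position.

    idx_q, idx_k: small "indexer" projections used only to score
    which past tokens are worth attending to (hypothetical sizes).
    q, k, v: full-dimensional projections used for the real attention,
    restricted to the top_k highest-scoring positions.
    """
    # Stage 1: cheap indexer scores over all past tokens.
    scores = idx_k @ idx_q                      # (L,)
    keep = np.argsort(scores)[-top_k:]          # indices of the top-k tokens

    # Stage 2: full attention over the selected tokens only.
    logits = (k[keep] @ q) / np.sqrt(q.shape[0])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ v[keep]

# Toy usage: 1,024 past tokens, 128-dim heads, 32-dim indexer.
L, d, d_idx = 1024, 128, 32
rng = np.random.default_rng(0)
out = sparse_attention(
    q=rng.normal(size=d), k=rng.normal(size=(L, d)), v=rng.normal(size=(L, d)),
    idx_q=rng.normal(size=d_idx), idx_k=rng.normal(size=(L, d_idx)),
)
print(out.shape)  # (128,)
```

The cost saving comes from the second stage: full attention runs over 64 tokens rather than all 1,024, while the cheap first stage still glances at the whole context.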

The models are built on the DeepSeek-V3 Mixture-of-Experts (MoE) transformer, which holds 671 billion parameters in total but activates only about 37 billion of them for each token it processes.
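The gap between total and active parameters comes from expert routing: each token is sent to only a few expert subnetworks, so most weights sit idle on any given forward pass. The toy sketch below shows the mechanism; the expert count, routing rule and tiny dimensions are illustrative assumptions, not V3.2’s actual configuration.

```python
import numpy as np

def moe_layer(x, experts, router_w, k=2):
    """Toy Mixture-of-Experts layer: route a token to its top-k experts.

    Only the selected experts run, so the parameters touched per token
    are a small fraction of the layer's total.
    """
    logits = router_w @ x                      # one score per expert
    top = np.argsort(logits)[-k:]              # indices of the k chosen experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                       # softmax over the chosen experts
    # Weighted sum of the chosen experts' outputs; the rest stay idle.
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 64, 8
# Each "expert" is a tiny linear map standing in for a feed-forward block.
weights = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_experts)]
experts = [lambda x, W=W: W @ x for W in weights]
router_w = rng.normal(size=(n_experts, d))

y = moe_layer(rng.normal(size=d), experts, router_w, k=2)
print(y.shape)  # (64,) — only 2 of the 8 experts were evaluated for this token
```

Scaled up, the same logic is what lets a 671-billion-parameter model run with the per-token compute of a much smaller dense network.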

DeepSeek also introduced updates to its chat template, including a redesigned tool-calling format and a new “thinking with tools” mode that lets the model interleave its reasoning with tool calls, a change aimed at improving reasoning accuracy.
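DeepSeek’s exact template lives in its model documentation; purely as a generic illustration of the idea, the trace below shows reasoning segments interleaved with a tool call. Every field name here is hypothetical and should not be read as DeepSeek’s actual schema.

```python
# Hypothetical message trace illustrating "thinking with tools": the model
# reasons, calls a tool, reads the result, then reasons again before
# answering. Field names are invented for illustration only and are not
# DeepSeek's actual chat-template schema.
conversation = [
    {"role": "user", "content": "What is 3^20, minus its largest prime factor?"},
    {"role": "assistant",
     "thinking": "I should compute 3^20 exactly rather than estimate.",
     "tool_call": {"name": "python", "arguments": {"code": "print(3**20)"}}},
    {"role": "tool", "name": "python", "content": "3486784401"},
    {"role": "assistant",
     "thinking": "3^20 = 3486784401; its only prime factor is 3.",
     "content": "3^20 is 3,486,784,401; subtracting its largest prime "
                "factor, 3, gives 3,486,784,398."},
]
```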

The startup rose to prominence earlier this year after the release of its DeepSeek-V3 and DeepSeek-R1 models, which drew global attention for achieving performance close to OpenAI’s latest systems while remaining fully open-source, a rarity among leading AI developers.