DeepSeek V3.1 Launches Quietly, But Signals Big Strides for Open-Source AI
The standout feature is its 128,000-token context length.

DeepSeek quietly released V3.1, an upgraded version of its large language model, on August 19, 2025, via its official WeChat group. Despite the low-key announcement, the update is making waves in the AI community for pushing the boundaries of open-source performance.
The standout feature is the 128,000-token context window, which lets V3.1 handle lengthy technical documents, extended conversations, and retrieval tasks more effectively.
Only last week, Anthropic rolled out long-context support for Claude Sonnet 4, now capable of processing up to 1 million tokens—a 5x increase—on the Anthropic API. This breakthrough allows developers to feed in full codebases (over 75,000 lines of code) or analyse dozens of documents in a single prompt.
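For a rough sense of what a 128,000-token window means in practice, a document can be token-counted before it is sent to the model. The sketch below is a minimal illustration rather than an official workflow: the Hugging Face repository id and the file path are assumptions.

```python
# Minimal pre-flight check that a long document fits a 128K-token window.
# The repo id and file path below are illustrative assumptions.
from transformers import AutoTokenizer

CONTEXT_LIMIT = 128_000  # V3.1's reported context length

tokenizer = AutoTokenizer.from_pretrained(
    "deepseek-ai/DeepSeek-V3.1",  # assumed repository name
    trust_remote_code=True,
)

with open("long_spec.md", encoding="utf-8") as f:
    document = f.read()

n_tokens = len(tokenizer.encode(document))
print(f"{n_tokens} tokens; fits in context: {n_tokens <= CONTEXT_LIMIT}")
```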
With 685 billion total parameters, V3.1 builds on the previous V3's architecture, but its Mixture-of-Experts design activates only 37 billion parameters per token, keeping inference costs low and efficiency high.
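To see why only a fraction of the parameters is active at once, consider how top-k expert routing works in a Mixture-of-Experts layer. The PyTorch sketch below is purely illustrative and is not DeepSeek's implementation; the expert count, top-k value, and dimensions are placeholders.

```python
# Illustrative top-k Mixture-of-Experts routing (not DeepSeek's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # router scores each expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):  # x: (tokens, d_model)
        weights, idx = self.gate(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the k selected experts run for each token; the rest stay idle,
        # which is why active parameters per token are a fraction of the total.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(5, 64)).shape)  # torch.Size([5, 64])
```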
V3.1 shows marked improvements in coding, logic, and math. In community tests it outperformed peers on Python and Bash tasks and posted high accuracy on problem-solving and mathematical reasoning, continuing to surpass rivals such as Qwen2.5 72B on benchmarks like AIME and MATH-500.
Released under the MIT License, DeepSeek V3.1 is available on Hugging Face, with full support for Safetensors. Training costs came in at just $5.6 million, a fraction of the budget for closed models. Developers already see it as a turning point—an open model that’s beginning to rival GPT-4o and Claude 3.5 Sonnet.
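For developers who want to try it, the weights can be pulled straight from Hugging Face. The snippet below is a minimal sketch, assuming the repository id and that the checkpoint loads through the transformers library; a model of this size realistically requires multi-GPU hardware or a quantized deployment.

```python
# Sketch of loading the open weights from Hugging Face.
# The repo id is an assumption; a 685B-parameter model needs serious hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3.1"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # shard layers across available GPUs
    trust_remote_code=True,  # the repo may ship custom architecture code
)

prompt = "Explain Mixture-of-Experts routing in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```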
Reportedly, the update also removed references to R1, sparking speculation about a shift away from reasoning-focused research. The release was not announced on the company's public platforms, including its official X account.
Earlier this year, the startup released a minor update to its flagship R1 reasoning model on the developer platform Hugging Face.