
DeepSeek V4 Open Source Launch Puts Pressure on Closed AI Models

A screenshot of the DeepSeek chat interface featuring the "Start chatting with Instant" header and a toggle between 'Instant' and 'Expert' modes. The input bar displays buttons for 'DeepThink' and 'Search,' representing the core capabilities of the DeepSeek V4 open source launch.
With the DeepSeek V4 open source launch, the platform introduces a streamlined interface designed for high-performance reasoning and real-time search, positioning itself as a powerful alternative to proprietary models.

DeepSeek dropped preview versions of its V4 model on Friday, releasing two open-source variants that the company claims can match leading closed-source systems from Google, OpenAI, and Anthropic. The Hangzhou-based startup is giving developers free access to download, run, and modify the code — but several structural barriers limit who can fully exploit that freedom. The release arrives under a cloud of IP allegations and hardware constraints that complicate its appeal well beyond China’s borders.

The DeepSeek V4 Open Source Launch: Two Models, One Big Claim

According to South China Morning Post, DeepSeek released two distinct variants: V4-Pro, carrying 1.6 trillion parameters — the company’s largest model by that metric — and the lighter V4-Flash at 284 billion parameters. Both versions ship with a 1 million token context window, a critical feature determining how much information a model can process in a single session.

DeepSeek says the expanded context window is delivered at “world-leading” cost efficiency — a claim the company has made before, and one that has drawn both admiration and skepticism from Western researchers. Like its predecessor V3, V4 is fully open source: developers can download the weights, run the models locally, and adapt them for their own applications.

As CNBC notes, this release comes more than a year after DeepSeek’s R1 reasoning model rattled global tech markets with its reported cost efficiency, having been built with far fewer resources than comparable U.S. systems. V4 is positioned as the full follow-up to that moment.

Concrete Strengths — and Where the Model Still Falls Short

According to Al Jazeera, DeepSeek claims V4-Pro outperforms every rival open-source model on mathematics and coding benchmarks. The only system it trails is Google’s Gemini 3.1-Pro — a closed-source, proprietary model — making V4-Pro the highest-ranked openly available model by those measures, at least on company-reported tests.

DeepSeek also highlights improvements in reasoning and what it calls “agentic” capabilities — the model’s ability to carry out complex, multi-step tasks autonomously, without human intervention at each stage, as Greenwich Time explains. This positions V4 directly against tools like ChatGPT Codex and Claude Code, where coding-centric agent workflows have driven strong adoption.

Lian Jye Su, chief analyst at technology research firm Omdia, told Greenwich Time: “Based on the benchmark results, it does appear DeepSeek V4 is going to be very competitive against its U.S. rivals.” That assessment, however, comes with a caveat the benchmarks themselves cannot answer: who can legally and practically deploy it.

Critically, what DeepSeek released on Friday is a preview, not a final production build. Enterprise teams evaluating V4 for integration into products or workflows are doing so without a stability guarantee — a meaningful limitation for any organization that needs dependable API behavior at scale.

The External Pressures Reshaping the Picture

Shortly after DeepSeek published V4, Huawei announced “full support” for the models across its Ascend chip lineup, according to South China Morning Post. That backing is significant inside China, where access to Nvidia hardware remains restricted under U.S. export controls — but it also signals that optimized deployment of V4 may be most straightforward on infrastructure that Western enterprises cannot easily access or procure.

The release also lands against a backdrop of serious IP allegations. In February, Anthropic accused DeepSeek and two other China-based AI labs of running what it called “industrial-scale campaigns” to illicitly extract capabilities from its Claude models. According to AP News, Anthropic described the technique — known as distillation — as “training a less capable model on the outputs of a stronger one.” DeepSeek has not publicly addressed the accusation, and no legal action has been confirmed.

For compliance-driven organizations — financial institutions, healthcare providers, or any enterprise operating under data sovereignty rules — those unresolved allegations introduce adoption risk that benchmark scores alone cannot offset. Open-source licensing grants access; it does not resolve questions of provenance.

Marina Zhang, an associate professor at the University of Technology Sydney, described the V4 rollout as a landmark moment for China’s AI sector, particularly as global competition intensifies around self-reliance in critical technologies, according to AP News. That framing — national self-sufficiency — also helps explain why Huawei’s immediate hardware pledge matters as much strategically as it does technically.

What Remains Unanswered

The V4 release is a preview, and DeepSeek has not announced a timeline for a stable production version. Enterprise buyers waiting on final weights, API pricing, or service-level commitments have no clarity yet. The gap between “open source available” and “enterprise ready” is still wide.

The IP dispute with Anthropic also remains live. If Western governments or regulators act on those concerns โ€” through procurement restrictions, usage guidelines, or export-related measures โ€” the practical addressable market for V4 outside China could shrink considerably, regardless of how well it performs on coding and math tests.

According to The Verge, DeepSeek says V4 marks a major improvement over prior models, especially in coding โ€” a capability now central to the commercial value of AI agents. Whether that technical progress translates into adoption among Western developers will depend less on parameter counts and more on how the legal, regulatory, and geopolitical friction surrounding the model resolves over the coming months.

Frequently Asked Questions

What are the specific hardware requirements for optimal deployment of DeepSeek V4?

DeepSeek has not published detailed hardware requirements for the preview builds. What is known is that Huawei has announced full support for both models across its Ascend chip lineup, which suggests the most straightforward optimized deployment path runs through Ascend hardware. Outside China, where that infrastructure is difficult to procure, teams will need to evaluate the open weights on their own hardware.

How does DeepSeek plan to address the IP allegations made by Anthropic?

DeepSeek has not publicly addressed Anthropic’s allegations, and no legal action has been confirmed. Until the company responds or regulators act, the dispute remains an open adoption risk, particularly for compliance-driven organizations operating under data sovereignty rules.

When can we expect a final production build of DeepSeek V4 with stability guarantees?

DeepSeek has not announced a timeline for a stable production release. Friday’s versions are previews, and enterprise buyers waiting on final weights, API pricing, or service-level commitments have no clarity yet.

Laszlo Szabo / NowadAIs

Laszlo Szabo is an AI technology analyst with more than six years covering artificial intelligence developments, specializing in large language models, ML benchmarking, and AI industry analysis.
