DeepSeek trained its AI for $294,000, far below US costs
The company disclosed the information in a research paper


Sep 18, 2025, 06:57 pm

What's the story

Chinese artificial intelligence (AI) company DeepSeek has revealed that its R1 model was trained at a significantly lower cost than what US competitors have reported. The company made the disclosure in a peer-reviewed article recently published in the academic journal Nature. The revelation is likely to spark fresh debates about China's position in the global AI race and its transparency regarding technology access amid export restrictions.

Cost comparison

Costs starkly contrast with US counterparts

The training of DeepSeek's reasoning-focused R1 model cost $294,000 and used 512 NVIDIA H800 chips. That stands in stark contrast to the "much more" than $100 million that OpenAI CEO Sam Altman said was spent on "foundational model training" in 2023, although his company has not published detailed figures for its releases. Training costs for the large language models behind AI chatbots are generally high because powerful chip clusters must run for weeks or months.

Scrutiny

Controversy over chip usage

DeepSeek's claims about its development costs and the technology it used have been challenged by US companies and officials. NVIDIA designed the H800 chips for the Chinese market after the US banned exports of its more powerful H100 and A100 AI chips to China in October 2022. DeepSeek, however, has maintained that it uses only lawfully acquired H800 chips, not H100s.

Admission

DeepSeek admits to using banned chips

In a supplementary information document accompanying the Nature article, DeepSeek acknowledged for the first time that it owns A100 chips and had used them in the preparatory stages of development. The researchers wrote, "Regarding our research on DeepSeek-R1, we utilized the A100 GPUs to prepare for the experiments with a smaller model." After this initial phase, R1 was trained for a total of 80 hours on the cluster of 512 H800 chips.
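
Assuming the $294,000 figure refers to that same 80-hour run, a rough back-of-the-envelope calculation (not stated in the paper, simply combining the figures above) gives:

512 chips × 80 hours = 40,960 chip-hours
$294,000 ÷ 40,960 chip-hours ≈ $7.18 per chip-hour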

Methodology

Unique approach to training R1 model

The DeepSeek team also described its approach to training the R1 model: a reinforcement learning method in which the model is rewarded for reaching correct answers, much as humans learn from experience and mistakes, rather than being shown human-annotated examples of reasoning. This helped the team sidestep some of the expensive computation and scaling challenges that usually come with teaching AI models human-like reasoning, and marks a notable step toward making advanced AI systems more efficient and accessible.
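
The Nature paper frames this in general terms. For readers curious what reward-driven training means in practice, the sketch below is a minimal, purely illustrative example in plain Python: the task, names, and reward rule are hypothetical, and it is not DeepSeek's method or code, only the general idea of reinforcing outputs that earn a reward.

# Toy illustration of reward-driven (reinforcement) learning.
# NOT DeepSeek's code; the task, action names, and reward rule are hypothetical.
# Idea: instead of showing the model worked-out reasoning, score its answers
# with a reward and nudge the policy toward higher-reward outputs.

import math
import random

ACTIONS = ["guess_a", "guess_b", "guess_c"]   # stand-ins for candidate answers
CORRECT = "guess_b"                           # hypothetical ground truth

# Policy: a softmax over per-action preferences (a stand-in for model weights).
prefs = {a: 0.0 for a in ACTIONS}

def sample_action():
    # Convert preferences to probabilities and sample one action.
    z = sum(math.exp(v) for v in prefs.values())
    probs = {a: math.exp(v) / z for a, v in prefs.items()}
    r, cum = random.random(), 0.0
    for a, p in probs.items():
        cum += p
        if r <= cum:
            return a, probs
    return a, probs

def reward(action):
    # Reward signal: +1 for a correct answer, 0 otherwise.
    return 1.0 if action == CORRECT else 0.0

LEARNING_RATE = 0.1
for step in range(500):
    action, probs = sample_action()
    r = reward(action)
    # Policy-gradient-style update: reinforce the sampled action in proportion
    # to the reward it earned.
    for a in ACTIONS:
        grad = (1.0 if a == action else 0.0) - probs[a]
        prefs[a] += LEARNING_RATE * r * grad

print(max(prefs, key=prefs.get))  # converges to the rewarded answer

Running the loop drives the preference for the rewarded answer upward without the model ever being shown why that answer is right, which is the core intuition behind reward-based training, scaled up enormously in systems like R1.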