OpenAI, Anthropic, Google unite against AI piracy in China
What's the story
In a rare show of unity, OpenAI, Anthropic, and Google have teamed up to tackle the issue of Chinese competitors replicating their advanced artificial intelligence (AI) models. The three tech giants are sharing information through the Frontier Model Forum, an industry nonprofit they founded with Microsoft in 2023. The collaboration aims to detect adversarial distillation attempts that violate their terms of service.
Security concern
Imitation models pose economic and national security risks
The collaboration highlights a major concern for US AI firms. They fear that some users, particularly in China, are creating imitation versions of their products. These could undercut them on price and siphon away customers while posing a national security risk. According to US officials, unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit.
Accusations
OpenAI's memo to Congress
OpenAI has confirmed its involvement in the information-sharing effort on adversarial distillation through the Frontier Model Forum. The company recently sent a memo to Congress accusing Chinese firm DeepSeek of trying to free-ride on the capabilities developed by OpenAI and other US frontier labs.
Technique explained
Understanding distillation in AI
Distillation is a process in which a larger or more capable "teacher" AI model trains a smaller "student" model to replicate its capabilities. This is often far cheaper than building an original model from scratch. While some forms of distillation are accepted and even encouraged by AI labs, the technique has become controversial when used by third parties, especially in adversary nations such as China or Russia, to replicate proprietary work without authorization.
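The core idea can be illustrated with the standard knowledge-distillation loss: the student is trained to match the teacher's full, temperature-softened output distribution rather than just its top answer. The sketch below is a minimal, self-contained illustration of that loss function, not the pipeline of any lab mentioned above; the function names and example logits are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature.

    Higher temperatures flatten the distribution, exposing more of the
    teacher's "dark knowledge" about near-miss answers.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A student minimizing this loss learns to mimic the teacher's
    behavior without access to the teacher's weights or training data.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits match the teacher's incurs (near-)zero loss;
# a diverging student incurs a positive loss.
teacher = [4.0, 1.0, 0.5]
aligned_student = [4.0, 1.0, 0.5]
diverging_student = [0.5, 4.0, 1.0]

print(distillation_loss(teacher, aligned_student))
print(distillation_loss(teacher, diverging_student))
```

Adversarial distillation works on the same principle, except the "teacher" signal comes from querying a proprietary model's API at scale, which is why the labs monitor for bulk extraction of outputs.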
Economic impact
Economic implications for US AI firms
Most models created by Chinese labs are open-weight, meaning their trained model parameters are published for anyone to download and run. This makes them cheaper to use, posing an economic challenge for US AI companies that have kept their models proprietary. Those companies hope customers will pay for access to their products, helping offset the hundreds of billions of dollars they have spent on data centers and other infrastructure.
Model scrutiny
Controversy surrounding DeepSeek's R1 model
DeepSeek's surprise release of the R1 reasoning model in January 2025 drew significant scrutiny. Soon after, Microsoft and OpenAI investigated whether the Chinese start-up had improperly exfiltrated large amounts of data from US firms' models to create R1. In February, OpenAI warned US lawmakers that DeepSeek had continued using increasingly sophisticated tactics to extract results from US models, despite OpenAI's heightened efforts to prevent misuse of its products.