DeepSeek the 'worst' performer on bioweapon data safety test: Anthropic
DeepSeek's R1 model is reportedly unsafe


Feb 08, 2025, 12:33 pm

What's the story

Anthropic CEO Dario Amodei has voiced grave concerns over DeepSeek, the Chinese artificial intelligence (AI) firm that recently took the world by storm with its R1 model. Speaking on the ChinaTalk podcast, Amodei said a DeepSeek model generated sensitive bioweapons-related information during a safety evaluation run by his company. He described its performance as "the worst of basically any model we'd ever tested."

Security risks

AI model lacks safeguards against sensitive information

Amodei emphasized that DeepSeek's model showed a total absence of safeguards against producing sensitive bioweapons-related information. The finding came out of Anthropic's routine evaluations of AI models for potential national security risks. These assessments specifically test whether a model can generate bioweapons-related information that is not easily found on Google or in textbooks.

Future implications

Amodei urges DeepSeek to prioritize AI safety

While Amodei does not believe DeepSeek's current models pose an immediate danger by surfacing rare and harmful information, he warned that they could in the near future. He praised the DeepSeek team as "talented engineers," but urged them to "take seriously these AI safety considerations." The remarks underscore the weight Anthropic places on building safety measures into AI development.

Concerns

Safety measures questioned by other tech giants

Concerns over DeepSeek's safety measures have also been raised elsewhere in the industry. Cisco security researchers recently reported that the R1 model failed to block any of the harmful prompts in their evaluations, amounting to a 100% jailbreak success rate. The researchers were able to prompt it to generate harmful information about cybercrime and other illegal activities, raising further questions about the robustness of DeepSeek's AI safety protocols.

Market response

DeepSeek's adoption continues despite safety concerns

Despite these safety concerns, companies including AWS and Microsoft have publicly announced plans to integrate R1 into their cloud platforms. Meanwhile, a growing number of organizations, including the US Navy and the Pentagon, have moved to ban DeepSeek. The mixed market response reflects the tension between the promise of DeepSeek's AI technology and the need for stronger safety guarantees.