China's AI models match US ones in 'frontier risks', study finds
What's the story
A recent study has found that Chinese artificial intelligence (AI) models are approaching the "frontier risk" levels of their US counterparts. The term "frontier risks" refers to the potential dangers advanced AI systems could pose to public safety and social stability. The research was conducted by Concordia AI, a Beijing-based consultancy specializing in AI safety.
Risk assessment
Risks of misuse and loss of control
The study noted that recent advancements in AI models from DeepSeek and other Chinese companies have increased the risk of these systems being misused by malicious actors or escaping human control. This has raised alarms among experts about possible catastrophic consequences, including, in extreme scenarios, the destruction of humanity. The research analyzed 50 leading AI models and found that Chinese models are now on par with their US counterparts in terms of such risks.
Safety enhancement
Hope findings help improve model safety
Fang Liang, head of AI safety and governance at Concordia AI, said the team hopes its findings will help these companies improve the safety of their models. The study highlights the need for further research and development to mitigate the potential risks associated with advanced AI systems.
Model scrutiny
DeepSeek's R1 model flagged for cyberattack risk
The study also flagged DeepSeek's flagship R1 model, last updated in May, as having the highest risk score for use in cyberattacks. This finding underscores the potential vulnerabilities of AI systems and the need for stringent safety measures to prevent their misuse.