Why South Korea's new AI rules are facing backlash
What's the story
South Korea has introduced a comprehensive set of artificial intelligence (AI) regulations, one of the first of its kind in the world. The new laws require companies to label AI-generated content, while clearly artificial outputs such as cartoons or artwork may instead carry invisible digital watermarks. However, the rules have drawn criticism from both tech start-ups and civil society groups: start-ups argue that the regulations are too stringent, while civil society groups contend that they don't go far enough to protect users.
Compliance requirements
AI Basic Act mandates risk assessments and safety reports
The AI Basic Act, which came into effect last week, requires companies providing AI services to conduct risk assessments and document decision-making processes for "high-impact AI" systems used in medical diagnosis, hiring, and loan approvals. If a human makes the final decision, a system may not fall under this category. Developers of extremely powerful AI models must also submit safety reports, though no models worldwide currently meet the relevant thresholds.
Regulatory goals
South Korea's AI regulations aim to promote industry
The new laws are part of South Korea's ambition to become one of the world's top three AI powers, alongside the US and China. Government officials say the regulations are focused mainly on promoting the industry rather than restricting it. However, a survey in December found that 98% of AI start-ups were unprepared for compliance, fueling widespread frustration in the sector.
Criticism
Concerns over competitive imbalance and limited protection
Critics have raised concerns that the regulations create a competitive imbalance: all Korean companies face regulation regardless of size, while only foreign firms that meet certain thresholds, such as Google and OpenAI, must comply. Civil society groups have also criticized the new legislation for offering limited protection to people harmed by AI systems, arguing that it mainly protects "users" such as hospitals, financial companies, and public institutions that deploy AI.
Regulatory strategy
South Korea's unique approach to AI regulation
South Korea has opted for a flexible, principles-based framework for AI regulation, unlike the EU's strict risk-based model or the sector-specific approaches taken by the US and UK. The strategy rests on "trust-based promotion and regulation," according to Melissa Hyesun Yoon, a law professor at Hanyang University specializing in AI governance. The Ministry of Science and ICT hopes the laws will "remove legal uncertainty" and create "a healthy and safe domestic AI ecosystem."