California governor rejects major AI safety legislation: Here's why
California Governor Gavin Newsom has vetoed a major piece of Artificial Intelligence (AI) safety legislation, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). His decision was driven by several factors, including concerns about the bill's impact on AI firms, California's leading position in the sector, and the legislation's broad scope. He laid out his objections to SB 1047 in a detailed veto message.
Newsom's concerns about AI safety bill
Newsom said the bill fails to consider whether an AI system is deployed in high-stakes environments, involves critical decisions, or handles sensitive data. He noted that the bill imposes stringent standards even on basic functions, so long as they are part of a large enough system. "I do not believe this is the best approach to protecting the public from real threats posed by the technology," he said.
Governor warns of potential false sense of security
Newsom argued the bill could give the public a false sense of security about controlling fast-moving technology. He noted that smaller, specialized models could prove just as risky, if not more so, than the large models SB 1047 targets, and warned that the bill could stifle innovation that benefits everyone. Despite the veto, Newsom maintains that safety protocols and penalties for violators are needed, but wants an approach to AI systems grounded in evidence.
Senator Wiener expresses disappointment over veto
Senator Scott Wiener, the main author of the bill, expressed his disappointment over Newsom's decision. He said it's a setback for those who believe in keeping an eye on big corporations that make crucial decisions affecting public safety and welfare. "This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers," he said.
SB 1047: A look at the proposed AI safety measures
SB 1047, passed by the state legislature in late August, aimed to set the strictest AI rules in the US. It targeted AI companies operating in California and would have applied to models costing over $100 million to train or more than $10 million to fine-tune. The bill also required developers to build in "kill switch" safeguards and follow testing protocols to reduce the risk of catastrophic events such as cyberattacks or pandemics.