California Gov. Gavin Newsom vetoed a controversial artificial intelligence safety bill on Sept. 29.
Six things to know:
1. Senate Bill 1047 would have required tech companies developing large AI models to implement certain safeguards, including creating written safety and security protocols, conducting annual third-party safety audits, and ensuring models could be promptly shut down if needed. The bill also would have established a new state entity to oversee the creation of AI models.
2. In a Sept. 29 letter to senators, Mr. Newsom acknowledged the importance of regulating AI, but said he felt the bill did not adequately address its complexities. He noted that the bill's exclusive focus on the most expensive and large AI models could give the public a false sense of security.
3. Smaller, specialized models, including those that use medical data, may be as dangerous as or even more dangerous than larger models, Mr. Newsom said, adding that the bill could curtail innovation.
"While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," he said.
4. The governor told The Wall Street Journal that he is collaborating with top AI researchers to develop new legislation for regulating AI.
5. As California is home to 32 of the world's 50 leading AI companies, the legislation would have set a precedent for how the technology is regulated nationwide. Major technology companies, including Meta, Google, Microsoft and OpenAI, opposed the bill, voicing concerns about its vague standards and potential impact on innovation.
6. The bill's author, Democratic State Senator Scott Wiener, called the veto a "setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and future of the planet," in a Sept. 29 post on X.