New California law requires AI chatbots to disclose their identity
The legislation was signed by Governor Gavin Newsom

Oct 14, 2025
01:03 pm

What's the story

California has become the first state in the US to pass a law requiring artificial intelligence (AI) chatbots to disclose their non-human nature. The legislation, authored by state senator Steve Padilla and signed by Governor Gavin Newsom, mandates that operators of companion chatbots implement measures to prevent users from being misled into thinking they are interacting with a human.

Disclosure requirement

Clear notification required

The new law requires chatbot operators to provide a "clear and conspicuous notification" that their product is AI-generated and not human. This means that if a reasonable person could be misled into thinking they are interacting with a human, the operator must clearly state that they are not. The legislation also requires some companion chatbot operators to submit annual reports starting next year.

Safety reports

Annual reports on suicidal ideation

The annual reports must be submitted to the Office of Suicide Prevention, detailing the measures chatbot operators take "to detect, remove, and respond to instances of suicidal ideation by users." The Office will then publish this data on its website. This part of the legislation is aimed at ensuring that operators take their responsibility for user safety seriously.

Responsible AI

Governor Newsom's statement on responsible AI development

In a statement, Governor Newsom stressed the need for responsible AI and technology development to protect children. He said, "Emerging technology like chatbots and social media can inspire, educate, and connect—but without real guardrails, technology can also exploit, mislead, and endanger our kids." The new law is part of a broader package of measures aimed at improving online safety for children in California.