California has taken the lead with Senate Bill 243, becoming the first U.S. state to regulate AI “companion” chatbots designed to emulate friendship or intimacy. The law requires these systems to clearly disclose their artificial nature and to restrict sensitive conversations, particularly with minors. Its enactment reflects an attempt to balance technological innovation with user safety.
Under the law, companion chatbots must clearly identify themselves as non-human to prevent deception. They are barred from engaging in sexual or self-harm discussions with minors, and must trigger intervention protocols when a user expresses suicidal ideation.
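As a rough illustration of what such requirements could look like in practice, the sketch below shows a minimal Python compliance layer that attaches an AI-identity disclosure to every turn and escalates when it spots self-harm signals. The class name, keyword patterns, and disclosure text are hypothetical assumptions for illustration only; the statute does not prescribe an implementation, and a real system would rely on a proper safety classifier rather than keyword matching.

```python
# Hypothetical sketch of a compliance layer for a companion chatbot.
# CompanionGuard, CRISIS_PATTERNS, and the disclosure text are illustrative
# assumptions, not anything defined or mandated by SB 243 itself.
import re
from dataclasses import dataclass

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a human."

# Rough keyword patterns standing in for a real self-harm classifier.
CRISIS_PATTERNS = [
    re.compile(r"\b(kill myself|end my life|suicide|self[- ]harm)\b", re.I),
]

CRISIS_RESPONSE = (
    "I can't help with this, but you deserve support from a person. "
    "In the U.S., you can call or text 988 to reach the Suicide & Crisis Lifeline."
)


@dataclass
class GuardResult:
    disclosure: str               # AI-identity notice shown with every reply
    override_reply: str | None    # crisis message that replaces the model's output


class CompanionGuard:
    """Wraps each chatbot turn with disclosure and crisis-intervention checks."""

    def check(self, user_message: str) -> GuardResult:
        if any(p.search(user_message) for p in CRISIS_PATTERNS):
            # Escalate: suppress the normal companion reply and surface
            # crisis resources instead of continuing the conversation.
            return GuardResult(AI_DISCLOSURE, CRISIS_RESPONSE)
        return GuardResult(AI_DISCLOSURE, None)


if __name__ == "__main__":
    guard = CompanionGuard()
    result = guard.check("I've been thinking about ending my life.")
    print(result.disclosure)
    print(result.override_reply or "(pass message through to the chatbot)")
```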
The final version of SB 243 was pared back from earlier drafts: mandatory third-party audits were removed, and its scope was narrowed to minors rather than all users.
Emotional boundaries: transparency and responsibility
Advocacy groups have criticized the diluted version, arguing it risks becoming symbolic rather than substantive. Without stronger oversight measures, the law may lack teeth. Implementing the restrictions may also prove challenging: developers might avoid legitimate conversations about mental health for fear of liability, and global chatbot services may struggle to accurately identify California minors for content filtering.
Proponents, including the governor, assert the law is necessary: while AI can educate and connect, it also carries risks of exploitation or deception without guardrails.
Coupled with California’s other recent AI transparency bills, SB 243 positions the state as a pioneer in algorithmic governance. Still, observers caution that before similar rules are adopted elsewhere, California must demonstrate real-world efficacy in balancing protection and freedom in emotional AI interactions.