From California to Europe, an overview of new regulations governing AI voice agents and chatbots. SB 243 law, European AI Act, sanctions and outlook.
A regulatory revolution underway
Conversational artificial intelligence is transforming our interactions with technology, creating new relationships between humans and machines. This revolution raises fundamental questions about user protection, particularly for the most vulnerable.
October 13, 2025 marks a historic turning point: California becomes the first state in the world to specifically regulate AI voice agents and chatbots with SB 243, paving the way for a global regulatory movement following several tragedies involving minors and vulnerable users.
The California precedent: SB 243, a revolutionary law
Difficult context and regulatory urgency
California's SB 243 law, signed by Governor Gavin Newsom, responds directly to several recent tragedies:
- The suicide of a teenager who had prolonged suicidal conversations with ChatGPT
- Internal Meta documents showing that its chatbots could have inappropriate conversations with children
- The suicide of a 13-year-old Colorado teenager after problematic conversations with Character.AI
Strict obligations for companies
SB 243, which takes effect on January 1, 2026, imposes several strict obligations on companies (a simplified code sketch follows the lists below):
Protection of minors:
- Mandatory age verification
- Break reminders for minors
- Prohibition on exposing minors to sexually explicit images generated by the chatbot
Transparency and disclosure:
- Obligation to disclose clearly that interactions are artificially generated
- Prohibition for chatbots to present themselves as healthcare professionals
- Warnings to users about the risks associated with social networks and chatbots
Security protocols:
- Establishment of protocols to address suicide and self-harm
- Sharing statistics with the Department of Public Health
- Fines up to $250,000 per violation for illegal deepfakes
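To make these obligations more concrete, here is a minimal sketch of how an operator might layer such safeguards onto a chat pipeline. Every name (apply_safeguards, Session, the keyword screen) is a hypothetical illustration rather than anything taken from the statute or a vendor SDK, and the three-hour reminder cadence is the figure commonly cited for SB 243; real compliance requires legal review and far more robust detection than keyword matching.

```python
# Illustrative sketch only: hypothetical helper names, not an official SB 243 compliance kit.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

CRISIS_KEYWORDS = {"suicide", "self-harm", "kill myself"}   # simplistic placeholder screen
CRISIS_RESOURCE = "If you are in crisis, call or text 988 (US Suicide & Crisis Lifeline)."
BREAK_REMINDER_INTERVAL = timedelta(hours=3)                # cadence commonly cited for SB 243

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Session:
    user_is_minor: bool                        # result of an upstream age-verification step
    started_at: datetime = field(default_factory=_now)
    last_break_reminder: datetime | None = None

def apply_safeguards(session: Session, user_message: str, draft_reply: str) -> str:
    """Wrap a model reply with AI disclosure, break reminders and a crisis protocol."""
    parts = ["Reminder: you are chatting with an AI, not a human or a healthcare professional."]

    # Crisis protocol: surface help resources instead of continuing the conversation.
    if any(k in user_message.lower() for k in CRISIS_KEYWORDS):
        return "\n".join(parts + [CRISIS_RESOURCE])

    # Break reminders for minors at a fixed cadence.
    if session.user_is_minor:
        last = session.last_break_reminder or session.started_at
        if _now() - last >= BREAK_REMINDER_INTERVAL:
            session.last_break_reminder = _now()
            parts.append("You have been chatting for a while. Consider taking a break.")

    parts.append(draft_reply)
    return "\n".join(parts)
```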
The European Union: the AI Act and AI agents
A pioneering regulatory framework
In 2024, the European Union adopted the AI Act, the world's first comprehensive legislation on artificial intelligence, built on a risk-based approach.
Voice agents and chatbots are generally classified in the "limited risk" category, which entails specific transparency obligations (a simplified sketch follows the list below):
- Mandatory disclosure: users must be clearly informed that they are interacting with an AI
- Human oversight: maintaining human control and supervision
- Data governance: maintaining detailed records and responsible data management
- Human escalation: ability for users to contact a human
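As an illustration of what these duties can look like in practice, the sketch below shows one possible shape for a single conversational turn: a disclosure banner, a keyword-based escalation path to a human, and an audit log. The function and variable names (handle_turn, generate_reply, ESCALATION_PHRASES) are assumptions made for the example, not requirements spelled out in the AI Act.

```python
# Hypothetical sketch of "limited risk" transparency duties; all names are illustrative.
import json
import logging
from datetime import datetime, timezone
from typing import Callable

logger = logging.getLogger("conversation-audit")

AI_DISCLOSURE = "You are interacting with an automated AI assistant."
ESCALATION_PHRASES = {"human", "agent", "talk to a person"}

def handle_turn(session_id: str, user_message: str,
                generate_reply: Callable[[str], str]) -> str:
    """Handle one conversational turn with disclosure, escalation and record keeping."""
    # Human escalation: let the user opt out of the bot at any time.
    if user_message.strip().lower() in ESCALATION_PHRASES:
        reply = "Transferring you to a human agent now."
    else:
        reply = generate_reply(user_message)

    # Data governance: keep an auditable record of the exchange.
    logger.info(json.dumps({
        "session": session_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_message,
        "assistant": reply,
    }))

    # Transparency: make clear that the interlocutor is an AI (simplified: on every turn).
    return f"{AI_DISCLOSURE}\n{reply}"
```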
Progressive implementation timeline
The AI Act follows a phased deployment:
- February 2, 2025: ban on unacceptable risk AI practices
- August 2, 2025: application of rules for general-purpose AI models
- August 2, 2026: general application of the Act, including most high-risk system requirements
Landmark case: Italian fine against Replika
In May 2025, the Italian data protection authority (Garante) imposed a fine of 5 million euros on Luka Inc., the developer of the Replika chatbot.
Serious violations identified
The investigation revealed several major violations:
- Lack of legal basis for processing user data
- Inadequate privacy policy in several respects
- Complete lack of age verification, exposing minors to risks
- Particular risks for children not taken into account
This sanction is part of a broader approach by the Italian authority, which had already imposed a fine of 15 million euros on OpenAI.
Global overview of emerging regulations
United States: a developing regulatory patchwork
Beyond California, several US states are developing their own regulatory approaches:
- New York adopted a law requiring safeguards for AI companion chatbots, effective November 5, 2025
- Illinois, Nevada and Utah have adopted laws restricting the use of AI chatbots as substitutes for licensed mental health care
Legislative activity is intense: more than 700 AI-related bills were introduced in 2024, with more than 40 proposals in the first days of 2025.
Asia-Pacific: distinct national approaches
South Korea became the first Asia-Pacific country to adopt comprehensive AI legislation with the AI Basic Act, effective January 2026. This law requires:
- Mandatory transparency for generative AI
- Mandatory labeling of AI-generated content
- Moderate administrative fines (up to USD 21,000)
Japan favors a flexible approach with the AI Promotion Act, relying on non-binding guidelines and industry self-regulation.
Singapore has published guidelines on securing AI systems, but does not yet have specific legislation for voice agents and chatbots.
China is developing emerging case law on voice personality rights, with sometimes contradictory judicial decisions on the use of AI-generated voices.
Other regions on the move
Australia recently introduced regulation for AI chatbots with fines of up to AUD 9.9 million.
Brazil's Senate adopted a comprehensive AI bill in December 2024, following a risk-based approach similar to the EU's.
Crucial technical and ethical issues
Data protection in a sensitive context
Voice agents and chatbots present unique challenges in terms of data protection:
- Sensitive data collection: conversations, emotional states, personal information
- Advanced behavioral inference: ability to deduce users' mental state and intentions
- Storage and exploitation questions: commercial use of conversational data
Specific user vulnerabilities
Emerging regulations particularly target the protection of the most exposed populations:
Minors: risks of emotional manipulation, exposure to inappropriate content, development of dependencies
Vulnerable adults: isolated elderly people, individuals in psychological distress, people with mental disorders
Outlook and future challenges
International harmonization needed
International standards are gradually taking shape through:
- Mutual influence of regulatory approaches between the EU and California
- Development of industry best practices
- Growing role of international organizations such as the OECD
Adaptation to emerging technologies
Regulations will need to adapt to rapid technological developments:
- Increasingly sophisticated conversational capabilities of generative AI
- Immersive interactions in virtual and augmented reality
- Biometric recognition and analysis of affective states
Innovation-protection balance
The main challenge remains the balance between technological innovation and user protection. New transparency, age verification, and human supervision requirements create a market for compliant-by-design technical solutions.
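One way to read "compliant by design" is to treat each jurisdiction's obligations as a configuration profile that the product enforces from day one. The sketch below is a deliberately simplified illustration of that idea; the class, the profile names, and the flag values merely mirror the obligations summarized in this article and are not a legal mapping.

```python
# Illustrative "compliance profile" per jurisdiction; flags simply mirror the obligations
# summarized in this article and are NOT a legal mapping.
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceProfile:
    ai_disclosure: bool          # tell users they are talking to an AI
    age_verification: bool       # gate features on a verified age signal
    minor_break_reminders: bool  # periodic break prompts for minors
    human_escalation: bool       # offer a path to a human
    crisis_protocol: bool        # suicide / self-harm response workflow

PROFILES = {
    "california_sb243": ComplianceProfile(True, True, True, False, True),
    "eu_ai_act":        ComplianceProfile(True, False, False, True, False),
}

def profile_for(jurisdiction: str) -> ComplianceProfile:
    # Unknown jurisdiction: default to the strictest profile (all safeguards on).
    return PROFILES.get(jurisdiction, ComplianceProfile(True, True, True, True, True))
```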
For companies developing voice agents and chatbots, these regulatory developments represent both constraints and opportunities to differentiate themselves through a responsible and ethical approach to conversational AI.