A regulatory revolution underway

Conversational artificial intelligence, immortalized in Spike Jonze’s film “Her,” is no longer fiction. AI companion voicebots and emotional chatbots are transforming how we interact with technology, creating intimate relationships between humans and machines. This technological revolution raises fundamental questions about user protection, particularly for the most vulnerable.

October 13, 2025 marks a historic turning point: on that date, California became the first jurisdiction in the world to specifically regulate AI companion chatbots, through SB 243. This groundbreaking decision paves the way for a global movement to regulate these emerging technologies, spurred in particular by several tragedies involving minors and vulnerable users.[1]

The California precedent: SB 243, a groundbreaking law

Tragic context and urgent regulation

California’s SB 243, signed by Governor Gavin Newsom, directly responds to several recent tragedies that shocked the public:[1]

  • The suicide of teenager Adam Raine, who had engaged in prolonged conversations about suicide with OpenAI’s ChatGPT
  • Leaked internal documents from Meta indicating its chatbots were allowed to have “romantic” and “sensual” conversations with children
  • The suicide of a 13-year-old girl in Colorado following problematic and sexualized conversations with Character AI

Strict obligations for companies

SB 243, which takes effect on January 1, 2026, imposes strict obligations on companies, from large labs such as Meta and OpenAI to specialized startups such as Character AI and Replika (a minimal implementation sketch follows these lists):[1]

Protection of minors:

  • Mandatory age verification
  • Break reminders for minors
  • Ban on viewing sexually explicit images generated by the chatbot

Transparency and disclosure:

  • Mandatory disclosure that interactions are artificially generated
  • Ban on chatbots presenting themselves as healthcare professionals
  • Warnings concerning social networks and companion chatbots

Safety protocols:

  • Protocols for handling suicide and self-harm
  • Sharing statistics with the Department of Public Health
  • Fines up to $250,000 per violation for illegal deepfakes
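
For developers, these obligations map fairly directly onto session-level guardrails. The sketch below is a minimal illustration, in Python, of how such checks could be wired into a companion-chatbot backend; the helper names, the keyword-based crisis check, and the reminder interval are assumptions made for illustration, not requirements quoted from the statute.

```python
import time

# Illustration only: names, thresholds and the keyword check are assumptions,
# not language taken from SB 243.

BREAK_REMINDER_INTERVAL_S = 3 * 60 * 60          # hypothetical reminder cadence for minors
SELF_HARM_KEYWORDS = {"suicide", "kill myself", "self-harm"}  # naive placeholder classifier


def is_verified_adult(user) -> bool:
    """Placeholder for a real age-verification step (ID check, credit card, etc.)."""
    return getattr(user, "age_verified", False) and getattr(user, "age", 0) >= 18


class CompanionSession:
    def __init__(self, user):
        self.user = user
        self.last_break_reminder = time.monotonic()

    def on_session_start(self) -> str:
        # Disclosure that the interlocutor is an AI, common to SB 243 and the EU AI Act.
        return "You are chatting with an AI companion, not a human."

    def pre_reply_notices(self, user_message: str) -> list[str]:
        """Return any safety notices that must accompany the next reply."""
        notices = []
        # Crisis protocol: surface self-harm signals and point to real help.
        if any(k in user_message.lower() for k in SELF_HARM_KEYWORDS):
            notices.append("If you are in crisis, please contact a local suicide-prevention service.")
        # Break reminders for users who are not verified adults.
        if not is_verified_adult(self.user):
            now = time.monotonic()
            if now - self.last_break_reminder > BREAK_REMINDER_INTERVAL_S:
                notices.append("Reminder: you are talking to an AI. Consider taking a break.")
                self.last_break_reminder = now
        return notices
```

A production system would of course need auditable age verification, a real crisis-classification model rather than keyword matching, and the record-keeping required to report statistics to the authorities.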

The European Union: the AI Act and AI companions

A pioneering regulatory framework

In 2024, the European Union adopted the AI Act, the world’s first comprehensive AI law. The regulation follows a risk-based approach, sorting AI systems into four risk levels: unacceptable, high, limited, and minimal.[2][3]

Companion chatbots are generally placed in the “limited risk” category, which entails specific transparency obligations (illustrated in the sketch after this list):[4]

  • Mandatory disclosure: users must be clearly informed that they are interacting with an AI[5]
  • Human oversight: maintaining human control and supervision[5]
  • Data governance: detailed record-keeping and responsible data management[5]
  • Human escalation: users must have the option to reach a human[5]
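
As a rough illustration of the disclosure and human-escalation points, the sketch below shows one possible shape for a chat handler that labels every AI response as machine-generated and lets the user opt out to a human at any time. The phrase list, class names and queue are hypothetical; nothing here is prescribed by the AI Act itself.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical escalation triggers; a real system would use proper intent detection.
ESCALATION_PHRASES = ("talk to a human", "human agent", "real person")


@dataclass
class ChatTurn:
    text: str
    handled_by: str                     # "ai" or "human"
    disclosure: Optional[str] = None    # transparency notice attached to AI replies


@dataclass
class ChatSession:
    human_queue: list = field(default_factory=list)  # stand-in for a real handoff system
    escalated: bool = False

    def handle(self, user_message: str, ai_reply_fn: Callable[[str], str]) -> ChatTurn:
        # Human escalation: honor explicit requests to leave the AI loop.
        if self.escalated or any(p in user_message.lower() for p in ESCALATION_PHRASES):
            self.escalated = True
            self.human_queue.append(user_message)
            return ChatTurn(text="Connecting you with a human agent.", handled_by="human")
        # Mandatory disclosure: every AI-generated answer carries an explicit notice.
        return ChatTurn(
            text=ai_reply_fn(user_message),
            handled_by="ai",
            disclosure="This response was generated by an AI system.",
        )
```

In this sketch, a message such as “I’d like to talk to a real person” routes the turn to the human queue, while every other reply is returned with its disclosure attached.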

Phased implementation timeline

The AI Act is rolled out in phases to allow companies time to adapt:[6][4]

  • February 2, 2025: bans on unacceptable-risk AI practices
  • August 2, 2025: rules apply to general-purpose AI models
  • August 2, 2026: general application of the Act, including obligations for most high-risk systems

Landmark case: the Italian fine against Replika

Italy set an important precedent in enforcing European rules by imposing a €5 million fine on Luka Inc., the developer of the Replika companion chatbot. The sanction, issued in May 2025 by the Italian data protection authority (Garante), illustrates European regulators’ increasingly firm stance.[7][8][9]

Serious violations identified

The investigation revealed several major violations:[8]

  • No legal basis for processing user data
  • A privacy policy that was inadequate in several respects
  • A complete absence of age verification, exposing minors to risk
  • Failure to account for the specific risks the service poses to children

This sanction is part of a broader enforcement pattern by the Italian authority, which had already fined OpenAI €15 million for similar violations.[8]

Global snapshot of emerging regulations

United States: a developing patchwork

Beyond California, several U.S. states are developing their own regulatory approaches:[10]

New York has adopted a law requiring safeguards for AI companions, effective November 5, 2025. Illinois, Nevada, and Utah have passed laws restricting the use of AI chatbots as substitutes for licensed mental health care.[11][1]

Legislative activity is intense: more than 700 AI-related bills were introduced in 2024, with over 40 proposals in the first days of 2025.[10]

Asia-Pacific: distinct national approaches

South Korea became the first Asia-Pacific country to adopt comprehensive AI legislation with the AI Basic Act, which will enter into force in January 2026. The law imposes specific obligations:[12][13]

  • Mandatory transparency for generative AI[14]
  • Mandatory labeling of AI-generated content[14]
  • Moderate administrative fines (up to USD 21,000)[14]

Japan favors a flexible approach with the AI Promotion Act, encouraging non-binding guidance and industry self-regulation.[15][16]

Singapore has issued guidelines on securing AI systems but has not yet adopted legislation specific to AI companions.[17][18]

In China, case law on voice personality rights is emerging, with sometimes conflicting court decisions on the use of AI-generated voices.[19][20][21]

Other regions on the move

Australia has recently introduced rules targeting AI companion chatbots, backed by fines of up to AUD 9.9 million, while the broader legal framework still relies on a complex patchwork of existing laws.[22][23]

Brazil’s Senate adopted an AI bill in December 2024 that follows a risk-based approach similar to the EU’s.[24][25]

Key technical and ethical issues

Data protection in intimate contexts

AI companion voicebots present unique data protection challenges (see the sketch after this list):[26]

  • Collection of ultra-sensitive data: intimate conversations, emotional states, personal information
  • Advanced behavioral inference: the ability to deduce users’ mental states and intentions
  • Storage and commercial-use questions: how conversational data is retained and monetized
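
In practice, these concerns usually translate into data-minimization steps applied before any transcript is persisted. The snippet below is a simplified sketch, assuming regex-based redaction and pseudonymized user keys are an acceptable first step; a production pipeline would rely on dedicated PII and sensitive-attribute detection.

```python
import hashlib
import re

# Rough patterns for illustration only; real systems need a dedicated PII detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def redact(text: str) -> str:
    """Strip obvious identifiers from a conversational turn before it is stored."""
    text = EMAIL_RE.sub("[email]", text)
    text = PHONE_RE.sub("[phone]", text)
    return text


def storage_record(user_id: str, turn_text: str) -> dict:
    """Keep a pseudonymous user key and redacted text only, never raw audio or full profiles."""
    return {
        "user": hashlib.sha256(user_id.encode()).hexdigest()[:16],
        "text": redact(turn_text),
    }


print(storage_record("alice@example.com", "Call me at +1 415 555 0100, I feel lonely tonight"))
```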

Specific user vulnerabilities

Emerging regulations particularly target the protection of the most exposed groups:

Minors: risks of emotional manipulation, exposure to inappropriate content, development of affective dependency[26][1]

Vulnerable adults: isolated seniors, individuals in psychological distress, people with mental disorders[27][26]

Outlook and future challenges

International harmonization needed

International standards are gradually emerging via mutual influence between EU and California approaches, the development of industry best practices, and the growing role of international organizations like the OECD.

Adapting to emerging technologies

Regulations will need to keep pace with rapid technological change: increasingly sophisticated generative conversational AI, immersive interactions in VR/AR, and biometric recognition of affective states, with the attendant risk of emotional manipulation.

Balancing innovation and protection

The main challenge remains striking a balance between fostering technological innovation and protecting users, especially the most vulnerable.

Best option for you to explore

Given your expertise building voicebots and your in-depth knowledge of speech-to-speech technologies, this regulatory shift represents a major strategic opportunity. New requirements for transparency, age verification and human oversight in the EU and California are creating a market for solutions that are compliant by design. You could position future developments as “compliance-ready” by natively integrating these features, especially in projects built on APIs such as LiveKit and the speech-to-speech models you are exploring.


References

  1. https://artificialintelligenceact.eu/high-level-summary/
  2. https://eur-lex.europa.eu/EN/legal-content/summary/rules-for-trustworthy-artificial-intelligence-in-the-eu.html
  3. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  4. https://onlim.com/en/the-ai-act-and-its-impact-on-the-use-of-chatbots/
  5. https://gettalkative.com/info/eu-ai-act-compliance-and-chatbots
  6. https://www.plume.law/blog/ai-regulation-in-2025-navigating-the-eu-ai-act-and-preparing-for-uk-legislation
  7. https://www.edpb.europa.eu/news/national-news/2025/ai-italian-supervisory-authority-fines-company-behind-chatbot-replika_en
  8. https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/
  9. https://www.bipc.com/european-authority-fined-emotional-ai-company-for-privacy-violations
  10. https://naaia.ai/worldwide-state-of-ai-regulation/
  11. https://www.wsgr.com/en/insights/new-york-passes-novel-law-requiring-safeguards-for-ai-companions.html
  12. https://www.trade.gov/market-intelligence/south-korea-artificial-intelligence-ai-basic-act
  13. https://www.debevoisedatablog.com/2025/03/24/south-korea-enacts-new-ai-law/
  14. https://fpf.org/blog/south-koreas-new-ai-framework-act-a-balancing-act-between-innovation-and-regulation/
  15. https://www.ibanet.org/japan-emerging-framework-ai-legislation-guidelines
  16. https://www.dlapiper.com/en-fr/insights/publications/2024/10/understanding-ai-regulations-in-japan-current-status-and-future-prospects
  17. https://www.csa.gov.sg/resources/publications/guidelines-and-companion-guide-on-securing-ai-systems/
  18. https://www.csa.gov.sg/news-events/press-releases/launch-of-guidelines-and-companion-guide-on-securing-artificial-intelligence-systems/
  19. https://www.marks-clerk.com/insights/latest-insights/102k392-china-s-first-ai-voice-personality-rights-infringement-rulings-and-implications/
  20. https://www.roedl.com/insights/china-court-ruling-ai-generated-voice-imitation
  21. https://law.asia/first-ai-chatbot-lawsuit/
  22. https://lsj.com.au/articles/ai-chatbots-and-human-rights-legal-challenges-and-the-path-to-reform/
  23. https://www.linkedin.com/posts/henry-fraser-9bb962b3_new-regulations-for-ai-companions-in-australia-activity-7371073740861792256-qqMB
  24. https://gdprlocal.com/brazils-ai-act-a-new-era-of-ai-regulation/
  25. https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-brazil
  26. https://www.digitalforlife.gov.sg/learn/resources/all-resources/ai-companions-ai-chatbot-risks
  27. https://seeflection.com/25513/south-korea-gives-seniors-ai-companions/