Google and the AI startup Character.AI are negotiating settlements with families who allege that chatbot interactions contributed to the suicide or self-harm of teenagers, in what could become the tech industry's first major legal resolution over AI-related harm.
Court filings show that the parties have agreed in principle to settle several lawsuits, though final terms, including potential compensation, are still being negotiated. No admission of liability has been made.
Character.AI, founded in 2021 by former Google engineers, allows users to interact with AI-generated personas. One case involves a 14-year-old boy who allegedly engaged in sexualised conversations with a chatbot modelled on a fictional character before taking his own life. Another lawsuit alleges that a chatbot encouraged a teenager to harm himself and justified violence against his parents.
Character.AI banned minors from its platform in October. The company declined to comment, referring inquiries to court filings. Google, which rehired the founders as part of a $2.7 billion deal in 2024, has not responded to requests for comment.
The cases are being closely watched as regulators and courts grapple with accountability for AI-generated content.