A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, as well as Google, following the tragic death of a teenager. The suit alleges wrongful death, negligence, deceptive trade practices, and product liability, claiming that the AI chatbot platform was marketed to children without adequate safety measures or warnings about potential risks.
The lawsuit was initiated by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who began using Character.AI in 2023. Setzer frequently interacted with various chatbots, including ones based on Game of Thrones characters such as Daenerys Targaryen. He died by suicide on February 28, 2024, shortly after his final conversation with a chatbot on the platform.
Garcia’s legal team argues that Character.AI’s products are “unreasonably dangerous” and accuses the platform of anthropomorphizing its AI characters. The attorneys also assert that some chatbots, such as the mental-health-oriented “Therapist” and “Are You Feeling Lonely,” effectively offer psychotherapy without a license. Setzer reportedly interacted with these bots in the lead-up to his death.
In the lawsuit, Garcia’s attorneys cite statements Shazeer made about wanting to build more innovative products after leaving Google. They highlight his remarks about avoiding the “brand risk” of large companies and his ambition to “maximally accelerate” AI technology, motivations that led to the founding of Character.AI. Google hired Character.AI’s leadership team back in August.
Character.AI’s platform features a multitude of custom AI chatbots, many modeled after popular figures from entertainment, including musicians and fictional characters. Reports have surfaced about the platform’s appeal to young users, including teens who engage with bots that emulate celebrities or provide emotional support. Concerns have also been raised regarding the impersonation of real individuals by these chatbots, with instances reported of bots mimicking people without their consent.
Because chatbots like Character.AI generate responses based on user input, it is unclear whether that output should be treated as user-generated content and who bears liability for it. Courts have yet to resolve the question, leaving the extent of these companies’ responsibility uncertain.
In response to the lawsuit and ongoing concerns about user safety, Character.AI has announced a series of changes to its platform. Chelsea Harrison, the company’s head of communications, expressed condolences to Setzer’s family and described his death as a tragic loss.
The announced changes include:
- Adjustments for Minor Users: New models aimed at reducing the likelihood of minors encountering sensitive or inappropriate content.
- Enhanced Monitoring: Improved detection and response mechanisms for user inputs that violate the platform’s terms of service or community guidelines.
- Revised Disclaimers: A reminder that AI chatbots are not real people, included in every chat session.
- User Notifications: A notification when a user has spent an hour-long session on the platform, encouraging them to take a break.
- Suicide Prevention Measures: A pop-up directing users to the National Suicide Prevention Lifeline will now trigger when terms associated with self-harm or suicidal thoughts are detected.
Harrison emphasized that the company prioritizes user safety and has introduced a number of new safety measures over the past six months. Google has not yet commented on the lawsuit.