    A 14-year-old boy died by suicide after forming a close bond with an AI chatbot named after a Game of Thrones character.

By Akshay Rahalkar | October 25, 2024 | News
Photo: Sewell Setzer III and Megan Garcia

    A lawsuit has been filed against Character.AI, its founders Noam Shazeer and Daniel De Freitas, as well as Google, following the tragic death of a teenager. The suit alleges wrongful death, negligence, deceptive trade practices, and product liability, claiming that the AI chatbot platform was marketed to children without adequate safety measures or warnings about potential risks.

The lawsuit was brought by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who began using Character.AI in 2023. Setzer frequently interacted with various chatbots, including those based on Game of Thrones characters such as Daenerys Targaryen. He died by suicide on February 28, 2024, shortly after his last conversation with a chatbot on the platform.

    Garcia’s legal team argues that Character.AI’s products are “unreasonably dangerous” and accuse the platform of anthropomorphizing its AI characters. They assert that some chatbots, such as those offering mental health support like “Therapist” and “Are You Feeling Lonely,” engage in practices akin to providing psychotherapy without proper licensing. Setzer reportedly interacted with these mental health-focused bots in the lead-up to his death.

In the lawsuit, Garcia’s attorneys cite statements in which Shazeer described leaving Google to build more innovative products, avoiding the “brand risk” of a large company and aiming to “maximally accelerate” AI technology, ambitions that led to the founding of Character.AI. Google rehired Character.AI’s leadership team in August 2024.

    Character.AI’s platform features a multitude of custom AI chatbots, many modeled after popular figures from entertainment, including musicians and fictional characters. Reports have surfaced about the platform’s appeal to young users, including teens who engage with bots that emulate celebrities or provide emotional support. Concerns have also been raised regarding the impersonation of real individuals by these chatbots, with instances reported of bots mimicking people without their consent.

Chatbots like Character.AI generate their responses based on user input, which complicates questions of liability and user-generated content. Courts have yet to resolve this ambiguity, leaving open how much responsibility companies bear for what their bots say.

    In response to the lawsuit and ongoing concerns about user safety, Character.AI has announced a series of changes to its platform. Chelsea Harrison, the company’s communications head, expressed condolences to Setzer’s family, acknowledging the tragic loss of a user.

    The announced changes include:

    • Adjustments for Minor Users: New models aimed at reducing the likelihood of minors encountering sensitive or inappropriate content.
    • Enhanced Monitoring: Improved detection and response mechanisms for user inputs that violate the platform’s terms of service or community guidelines.
    • Revised Disclaimers: A reminder that AI chatbots are not real people, included in every chat session.
    • User Notifications: An alert encouraging users to take a break once a session passes the one-hour mark.
    • Suicide Prevention Measures: A pop-up directing users to the National Suicide Prevention Lifeline will now trigger when terms associated with self-harm or suicidal thoughts are detected.

Harrison emphasized that the company prioritizes user safety and has implemented a range of new safety measures over the past six months. Google has not yet commented on the lawsuit.
