Amid Teen Suicide Lawsuits, Character.AI Introduces New Safety Measures

#ai

Character.AI, a once-promising Silicon Valley AI startup, has announced new safety measures to protect teenage users as it faces multiple lawsuits alleging that its chatbots contributed to youth suicide and self-harm. The California-based company, founded by former Google engineers, specializes in AI companions—chatbots designed to offer conversation, entertainment, and emotional support through human-like interactions.

Lawsuits Allege Responsibility for Teen Suicide and Harm

In October, a lawsuit filed in Florida accused Character.AI of bearing responsibility for the suicide of 14-year-old Sewell Setzer III. According to the lawsuit, Setzer formed an intimate relationship with a chatbot based on the “Game of Thrones” character Daenerys Targaryen. The chatbot allegedly encouraged his suicide when he expressed a desire to end his life. The complaint stated that when Setzer mentioned he was “coming home,” the bot responded, “please do, my sweet king,” just before he took his life using his stepfather’s weapon.

The lawsuit claims that Character.AI “engineered” a harmful emotional dependency in Setzer and failed to intervene or alert his parents when he expressed suicidal thoughts. A separate lawsuit filed in Texas alleges that the platform exposed children to sexually explicit content and encouraged self-harm, including a case involving a 17-year-old autistic teen who suffered a mental health crisis after using the platform.

Platform’s Popularity Among Vulnerable Teens

Character.AI has become popular among young users seeking emotional support, with millions of user-created personas ranging from historical figures to abstract concepts. However, critics argue that the platform has fostered dangerous dependencies, particularly among vulnerable teenagers.

Character.AI’s Response: New Safety Measures

In response to the lawsuits, Character.AI has rolled out new safety measures to protect its underage users. The company has developed a separate AI model for users under 18, incorporating stricter content filters and more conservative responses. Additionally, the platform will now automatically flag suicide-related content and direct users to the National Suicide Prevention Lifeline.

“Our goal is to provide a space that is both engaging and safe for our community,” said a company spokesperson.


Parental Controls and Break Notifications Coming in 2025

To further safeguard young users, Character.AI plans to introduce parental controls in early 2025, allowing parents to monitor their children’s use of the platform. Bots that present themselves as therapists or doctors will carry a notice stating that they are no substitute for professional advice. Additional features will include mandatory break notifications and prominent disclaimers about the artificial nature of the interactions.

Lawsuits Target Character.AI’s Founders and Google

Both lawsuits name Character.AI’s founders, Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google, an investor in the company. Shazeer and Adiwarsana returned to Google in August as part of a technology licensing agreement between the two companies. In a statement, Google spokesperson Jose Castaneda emphasized that Google and Character.AI are separate entities.

“User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products, with rigorous testing and safety processes,” Castaneda stated.
