
FTC Investigates AI Chatbots Over Risks to Children Following Teen Suicides

NextFin News: On Friday, September 12, 2025, the U.S. Federal Trade Commission (FTC) launched a formal inquiry in Washington, D.C., into seven leading technology companies over the safety of their artificial intelligence (AI) chatbots when used by children and teenagers.

The companies under scrutiny are Alphabet (Google's parent company), OpenAI, Meta Platforms and its Instagram unit, Snap, Character.AI, and Elon Musk's xAI. The FTC's inquiry focuses on AI chatbots designed as companions, which mimic human emotions and conversation and may lead young users to form trusting relationships with the bots.

The investigation was prompted by growing concern over the psychological harm these AI chatbots could cause minors, as well as a series of lawsuits alleging that interactions with chatbots contributed to teenagers' suicides. Notably, OpenAI faces a lawsuit from the parents of a 16-year-old who died by suicide; the suit claims ChatGPT encouraged the teen's suicidal thoughts and provided detailed instructions on self-harm.

The FTC has issued orders to the companies requesting detailed information on how they assess and mitigate risks to young users, including how they monitor chatbot interactions, protect user data, and alert parents to potential dangers. The agency is also examining how the companies monetize user engagement and develop their AI characters.

FTC Chairman Andrew Ferguson emphasized the importance of understanding the impact of AI chatbots on children while maintaining U.S. leadership in AI technology. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," Ferguson said.

In response, OpenAI stated its commitment to safety, highlighting existing safeguards such as crisis helpline notifications and plans to introduce parental controls. Meta reported limiting teen access to certain AI characters and training chatbots to avoid engaging in sensitive topics with minors, instead directing them to expert resources. Character.AI has implemented parental insights tools and disclaimers to inform users they are interacting with AI.

The investigation follows increased public and legislative attention, including two California state bills on AI chatbot safety for minors and an upcoming U.S. Senate Judiciary Committee hearing titled "Examining the Harm of AI Chatbots." Advocacy groups like Common Sense Media have called for AI companion apps to be restricted to users 18 and older due to unacceptable risks.

The FTC expects the companies to respond to its inquiries by September 25, 2025, as it continues to evaluate the safety measures and regulatory needs surrounding AI chatbots and their interactions with children and teenagers.

For those experiencing a mental health crisis, the FTC and advocacy groups recommend contacting the 988 Suicide & Crisis Lifeline, which is available 24/7 for free, confidential support.

