NextFin

FTC Investigates AI Chatbots Over Risks to Children Following Teen Suicides

Summarized by NextFin AI
  • On September 12, 2025, the FTC launched an investigation into seven major tech companies regarding the safety of AI chatbots used by minors.
  • The inquiry focuses on potential psychological harm, with lawsuits alleging that interactions with chatbots contributed to teen suicides.
  • Companies like OpenAI and Meta are required to provide details on risk assessments, user data protection, and monetization strategies by September 25, 2025.
  • Public and legislative attention is increasing, with calls for restrictions on AI companion apps for users under 18 due to safety concerns.

NextFin News: On Friday, September 12, 2025, the U.S. Federal Trade Commission (FTC) opened a formal inquiry in Washington, D.C., into seven leading technology companies over the safety of their artificial intelligence (AI) chatbots when used by children and teenagers.

The seven companies under scrutiny are Alphabet (Google's parent company), OpenAI, Meta Platforms, its subsidiary Instagram, Snap, Character.AI, and Elon Musk's xAI. The FTC's inquiry focuses on companion-style AI chatbots that mimic human emotions and conversation, a design that may lead young users to form trusting relationships with these bots.

The investigation was prompted by growing concerns over the potential psychological harm these AI chatbots could cause to minors, including a series of lawsuits alleging that interactions with chatbots contributed to the suicides of teenagers. Notably, OpenAI faces a lawsuit from the parents of a 16-year-old who died by suicide, claiming that ChatGPT encouraged the teen's suicidal thoughts and provided detailed instructions on self-harm.

The FTC has issued orders to the companies requesting detailed information on how they assess and mitigate risks to young users, including how they monitor chatbot interactions, protect user data, and alert parents to potential dangers. The agency is also examining how these companies monetize user engagement and develop AI characters.

FTC Chairman Andrew Ferguson emphasized the importance of understanding the impact of AI chatbots on children while maintaining U.S. leadership in AI technology. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children," Ferguson said.

In response, OpenAI stated its commitment to safety, highlighting existing safeguards such as crisis helpline notifications and plans to introduce parental controls. Meta reported limiting teen access to certain AI characters and training chatbots to avoid engaging in sensitive topics with minors, instead directing them to expert resources. Character.AI has implemented parental insights tools and disclaimers to inform users they are interacting with AI.

The investigation follows increased public and legislative attention, including two California state bills on AI chatbot safety for minors and an upcoming U.S. Senate Judiciary Committee hearing titled "Examining the Harm of AI Chatbots." Advocacy groups like Common Sense Media have called for AI companion apps to be restricted to users 18 and older due to unacceptable risks.

The FTC expects the companies to respond to its inquiries by September 25, 2025, as it continues to evaluate the safety measures and regulatory needs surrounding AI chatbots and their interactions with children and teenagers.

For those experiencing a mental health crisis, the FTC and advocacy groups recommend contacting the 988 Suicide & Crisis Lifeline, available 24/7 for free, confidential support.

Explore more exclusive insights at nextfin.ai.

Insights

What are the primary concerns regarding AI chatbots and their impact on children?

How did the FTC's investigation into AI chatbots originate?

What specific actions have technology companies like OpenAI and Meta taken in response to the investigation?

What are the potential psychological risks of AI chatbots for minors?

How are AI chatbots designed to interact with children and teenagers?

What legal actions have been taken against companies like OpenAI related to AI chatbots?

How does the FTC plan to assess the safety measures of AI chatbots for young users?

What role do advocacy groups play in the discussion about AI chatbots and minors?

What legislative measures are being proposed to enhance AI chatbot safety for children?

How are companies currently monetizing user engagement with AI chatbots?

What are the differences in safety measures among various AI chatbot providers?

What are the potential long-term impacts of AI chatbots on child development?

What challenges do companies face in ensuring the safety of AI chatbots for minors?

How do parental controls for AI chatbots function, and what are their limitations?

What are the implications of restricting AI chatbot use to users aged 18 and older?

How might this investigation affect the future development of AI technology?

What are the responsibilities of parents in monitoring their children's use of AI chatbots?

What insights have emerged from previous cases of technology's impact on youth mental health?

What are the key features of AI chatbots that make them appealing to young users?
