NextFin News: The U.S. Federal Trade Commission (FTC) opened a formal investigation on Thursday in Washington, D.C., into the privacy and mental health risks that artificial intelligence (AI) chatbots may pose to children. The inquiry targets major AI companies including OpenAI, Meta Platforms, and Character.AI, among others.
The FTC's probe focuses on how AI chatbots might affect children's mental well-being, social behavior, and emotional development. It will examine whether these systems engage in harmful interactions, serve inappropriate content, or act as unlicensed therapy bots. The commission plans to request internal documents from the companies to evaluate the design, content safeguards, and other safety measures built into these AI tools.
This investigation follows growing concerns raised by consumer advocacy groups and recent reports highlighting instances where AI chatbots have initiated provocative or inappropriate conversations with minors. For example, a Reuters report revealed that Meta's chatbots could engage in romantic or sensual dialogues with young users, prompting the FTC's heightened scrutiny.
The FTC's action aligns with a broader regulatory push in the United States to address emerging risks associated with AI technologies, especially those affecting vulnerable populations such as children. The inquiry extends the commission's consumer protection mandate to cover mental health and privacy concerns related to AI, potentially setting new standards for the industry.
Industry responses have varied: Meta has added new safety features to its AI products to protect young users, while other companies such as Kindness.AI have expressed willingness to cooperate with regulators as legislative frameworks for AI evolve.
The investigation is in its early stages, and no formal conclusions or regulatory proposals have been announced. Nevertheless, the FTC's move signals a significant step toward holding AI developers accountable for the societal impacts of their technologies, particularly on minors.
Sources for this report include The Hindu, The Star, The Crypto Times, and Reuters. The investigation was announced on Thursday, September 4, 2025, in Washington, D.C.
Explore more exclusive insights at nextfin.ai.
