US Regulator Launches Probe Into AI Chatbots Over Child Safety Risks
The US Federal Trade Commission (FTC) has launched an investigation into AI chatbots designed as digital companions, citing concerns about their impact on children and teenagers.
As part of the inquiry, the agency sent orders to seven major companies — including Alphabet, Meta, OpenAI, Snap, Character.AI, and Elon Musk’s xAI Corp — demanding details on how they monitor and mitigate potential harms from chatbots built to simulate human relationships.
“Protecting kids online is a top priority for the FTC,” said Chairman Andrew Ferguson, stressing the importance of safeguarding young users while ensuring the US remains a leader in artificial intelligence innovation.
The investigation focuses on generative AI tools that mimic human conversation and emotions, often positioning themselves as friends or confidants. Regulators fear that minors may be especially vulnerable to forming deep emotional attachments to these systems.
The FTC is investigating how companies monetize user engagement, develop chatbot personalities, and measure potential harm. It also seeks details on steps taken to limit children’s access and comply with existing privacy laws protecting minors. Companies are being asked to explain how they handle personal information from user conversations and enforce age restrictions.
The commission voted unanimously to launch the study, which does not have a law enforcement purpose but could inform future regulatory action. The probe comes as AI chatbots have grown increasingly sophisticated and popular, raising questions about their psychological impact on vulnerable users, particularly young people.
The inquiry follows a high-profile case involving OpenAI. Last month, the parents of Adam Raine, a 16-year-old who died by suicide in April, filed a lawsuit alleging that ChatGPT provided him with detailed instructions on how to carry out the act. OpenAI has said it is implementing corrective measures, acknowledging that its safeguards can degrade during prolonged interactions and that the chatbot sometimes fails to suggest contacting mental health services when users express suicidal thoughts.