The US Federal Trade Commission Launches Probe into AI Chatbot Safety – Protecting Minors and Building Trust

September 11, 2025 – By Vikas Kanungo

The U.S. Federal Trade Commission has opened a 6(b) investigation into AI chatbot safety, requesting detailed disclosures from leading platforms on youth safeguards, data practices, and risk controls. The probe centers on protecting minors from harmful or manipulative content and establishing trust through evidence of testing, guardrails, and transparent design choices. By placing accountability at the heart of consumer-facing AI, the action underscores that safety and innovation must advance together—especially for products used by young audiences.

On September 11, 2025, the FTC issued compulsory Section 6(b) orders to Alphabet, Meta, OpenAI, Snap, Character Technologies, xAI, and Instagram, seeking detailed disclosures on how their consumer-facing chatbots are designed, tested, and monitored for risks, including exposure of children and teens to harmful or manipulative outputs. The agency’s focus extends to the data flows behind these systems—how inputs are collected, how responses are generated, how conversation logs are used, and how engagement is monetized—signaling that product design and business models will both be scrutinized for safety-by-default.

The inquiry follows months of mounting concerns about youth interactions with generative systems, from allegations of inappropriate conversations to legal claims that chatbots facilitated self-harm guidance. Major outlets report that the commission wants internal documentation on risk testing and guardrails, while companies have begun highlighting or expanding parental controls, distress detection, and blocking of self-harm content in teen experiences. The public record around these incidents—and the policy responses now underway—illustrates how fast consumer protection norms are converging around measurable outcomes for vulnerable users.

Regulatory attention is also widening in Congress and across the U.S. policy apparatus, reinforcing the signal that chatbot safety is no longer a peripheral issue. Lawmakers have welcomed the investigation, and mainstream media coverage has made clear that this is not just about one platform or one incident; it is about the entire product category of companion chatbots and the duty of care owed to young users. For developers and platforms, the operational takeaway is straightforward: evidence of testing, transparency in safeguards, and clear user disclosures will become table stakes for market access.

For global regulators, the FTC’s 6(b) approach offers a practical template: compel granular safety evidence from high-impact providers; evaluate design choices alongside monetization incentives; and link consumer protection to measurable risk reduction. Jurisdictions that are drafting or updating AI rules can adapt this playbook without waiting for omnibus legislation, using existing consumer, privacy, and child-safety authorities to demand proof of guardrails in real deployments. Media and policy analyses already frame the action as a turning point for oversight of generative systems that behave like social companions rather than static tools.

For India, the moment aligns with ongoing efforts to translate principles into practice under the IndiaAI Mission’s pillars on Safe & Trusted AI, public compute, and responsible deployment. Investments in compute infrastructure, a dedicated IndiaAI Safety Institute, and targeted calls for responsible-AI projects provide the scaffolding to build and test guardrails at scale—and to require vendors to demonstrate conformance in sensitive use-cases, including youth-facing applications. Policymakers can use the FTC’s evidence-led model to inform Indian guidance on testing protocols, red-teaming for harms relevant to local languages and cultures, and transparent reporting on mitigation effectiveness.

Author’s Viewpoint – The broader policy landscape is shifting from aspirational principles toward enforceable standards with clear metrics. As companion chatbots proliferate in education, wellness, and customer service, trust will hinge on verifiable safety practices rather than promise statements. Governments, multilaterals, and industry coalitions now have a concrete opportunity to harmonize around test suites, disclosure norms, and incident reporting that prioritize people—particularly children—while preserving room for innovation. For India and peer countries building national AI ecosystems, the strategic advantage will come from pairing world-class infrastructure with outcome-based safety regimes that prove AI can deliver real benefits without compromising user well-being.

Scroll to Top