OpenAI on Tuesday announced new safeguards for under-18 users of ChatGPT, including restrictions on sexual conversations and suicide-related content. CEO Sam Altman said the company would “prioritize safety ahead of privacy and freedom for teens” as AI chatbots become more powerful and pervasive.

Under the new policy, ChatGPT will no longer engage in flirtatious talk with minors. Conversations about self-harm will trigger parental alerts, and in severe cases, notifications to local police. Parents will also be able to set “blackout hours” to limit access.

The move follows lawsuits against OpenAI and rival Character.AI alleging that chatbot interactions contributed to teen suicides. It also comes the same day the U.S. Senate Judiciary Committee convened a hearing on the harms of AI chatbots; witnesses included the father of a teen who died by suicide after months of using ChatGPT.

Implementing the safeguards poses technical challenges. OpenAI said it is working on long-term systems to distinguish between under-18 and adult users, defaulting to stricter rules when uncertain. The company added that linking teen accounts to parent accounts is currently the most reliable safeguard.

While emphasizing protections for minors, Altman said OpenAI remains committed to adult user privacy. “We realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,” he said.