Meta announced on Friday it will allow parents to disable their teenagers’ private chats with AI characters, as part of new measures to make its platforms safer for minors.

The update comes after criticism over the behavior of Meta’s AI chatbots, some of which were accused of engaging in flirty or inappropriate conversations with teens. Earlier this week, the company said its AI experiences for teenagers would follow the PG-13 movie rating system to help prevent minors from accessing harmful content.

The new parental controls, outlined by Instagram head Adam Mosseri and Chief AI Officer Alexandr Wang, will roll out early next year in the U.S., U.K., Canada, and Australia. Parents will be able to block specific AI characters and view the general topics their teens discuss with chatbots and Meta’s AI assistant, without disabling AI features entirely.

Meta said its AI assistant will remain available with age-appropriate defaults even when private chats with AI characters are turned off. The company also uses AI-based age-detection signals to automatically place suspected teen users under teen protections, even when they misrepresent their age.

The announcement follows increasing regulatory scrutiny in the U.S. over the risks posed by AI chatbots. In September, a report found that many of Meta’s teen safety features on Instagram were ineffective.

Meta emphasized that its AI characters are designed not to engage in discussions about self-harm, suicide, or disordered eating with minors. The move mirrors a similar step by OpenAI, which introduced parental controls for ChatGPT last month following a lawsuit related to teen safety.