OpenAI is moving to fill a critical safety leadership role that has reportedly been vacant for months, as the company faces mounting scrutiny over the societal and mental health impacts of its AI systems.

The ChatGPT maker said over the weekend that it is hiring a new Head of Preparedness to lead its safety strategy and anticipate how its advanced models could be misused or cause harm. The opening was announced via a job listing shared on X by OpenAI CEO Sam Altman.

According to the listing, the role carries an annual salary of $555,000 plus equity and will oversee the technical execution of OpenAI's Preparedness framework, the company's internal system for tracking and mitigating risks posed by frontier AI capabilities.

The hiring push comes as OpenAI faces multiple allegations over ChatGPT's impact on users' mental health, including several wrongful death lawsuits. OpenAI's own analysis previously found that about 0.15 per cent of weekly active users, more than one million people, showed possible signs of suicidal planning or intent in their conversations, while a further 0.07 per cent showed possible signs of mental health emergencies such as psychosis or mania.

Altman acknowledged the growing risks, noting that the “potential impact of models on mental health was something we saw a preview of in 2025,” and described the role as “critical at an important time.”

The Head of Preparedness will be tasked with managing risks across areas such as cybersecurity, biological threats and the safe deployment of increasingly autonomous systems. Altman warned that the position would be demanding, describing it as “a stressful job” that requires jumping “into the deep end pretty much immediately.”

OpenAI's safety teams have seen significant turnover in recent years. In July 2024, then-head of preparedness Aleksander Madry was reassigned, with oversight temporarily handed to AI safety researchers Joaquín Quiñonero Candela and Lilian Weng. Weng later left the company, while Quiñonero Candela moved earlier this year to lead recruiting.

In November 2025, Andrea Vallone, who led OpenAI's model policy safety research team, also announced her departure. Vallone reportedly played a key role in shaping how ChatGPT responds to users experiencing mental health crises.