OpenAI has warned that its forthcoming artificial intelligence models could present a “high” cybersecurity risk as their capabilities continue to advance rapidly.

In a blog post on Wednesday, the company said future systems might be able to generate working zero-day remote exploits against well-defended infrastructure, or assist in sophisticated intrusions into enterprise and industrial systems with real-world consequences.

As a result, OpenAI said it is significantly increasing investment in defensive security research, including tools that help cyber defenders audit code, identify vulnerabilities, and strengthen their security workflows.

To mitigate risks, the company said it is relying on a combination of access controls, infrastructure hardening, egress controls, and enhanced monitoring. OpenAI also plans to launch a new program offering tiered access to more advanced capabilities for qualified organizations working specifically on cyber defense.

The Microsoft-backed company will also establish a Frontier Risk Council, an advisory group of experienced cybersecurity practitioners who will work directly with OpenAI’s internal teams. The council will initially focus on cybersecurity but later expand to other high-risk capability domains.

The announcement comes as governments and security experts continue to scrutinize how increasingly powerful AI models could be misused by attackers or deployed as defensive tools.