AI startup Anthropic has unveiled Project Glasswing, a cybersecurity initiative aimed at using advanced artificial intelligence to detect and address software vulnerabilities at scale.

The project brings together major technology firms and critical infrastructure organisations to test a powerful unreleased AI model known internally as Claude Mythos Preview. The system is designed with advanced coding and reasoning capabilities to analyse vulnerabilities in operating systems, web browsers and other widely used software.

According to early testing results, the model has already identified several high-risk flaws, including vulnerabilities that had remained undetected for years.

Access to the model is restricted to a closed group of partners due to concerns about potential misuse. Participating organisations include Microsoft, Amazon Web Services and Google.

Microsoft said the initiative represents a significant shift in cybersecurity strategy, allowing organisations to address threats at a scale beyond human capacity. AWS noted that it has begun integrating the model into its internal systems to strengthen security operations, while Google emphasised the importance of industry-wide collaboration in tackling emerging threats.

Anthropic said Project Glasswing is focused on defensive cybersecurity, particularly in response to the rise of AI-driven attacks. The system can identify vulnerabilities and, in some cases, simulate how they could be exploited, a capability that highlights both its defensive value and its potential for misuse.

The initiative also aims to support open-source software security. Anthropic has committed up to $100 million in usage credits and $4 million in funding to improve vulnerability detection and patching in widely used open-source projects.

The launch comes amid growing concerns among experts that increasingly powerful AI systems could accelerate both cyber defence and cybercrime.