Artificial intelligence continues to reshape the cyber threat landscape at pace. Over the last two years, we have seen AI dramatically reduce the barrier to entry for threat actors, accelerate vulnerability discovery, and compress the time between disclosure and exploitation. Today, a new development is emerging that may prove equally consequential, this time on the defensive side.
Under an initiative known as Project Glasswing, a consortium of some of the world’s largest technology companies is using advanced large language models (LLMs), most notably Claude Mythos, to proactively identify vulnerabilities in software before it is deployed. While this shift promises clear benefits for software resilience over the long term, it also introduces near-term risks that organisations, insurers and regulators cannot ignore.
Project Glasswing is a coordinated effort by major technology providers to embed AI directly into secure software development. Using Claude Mythos (an advanced LLM reported to excel at coding, reasoning and security analysis), participants are scanning vast codebases to identify exploitable weaknesses earlier, faster and at greater scale than traditional approaches allow.
Anthropic (the company behind Claude Mythos and the organiser of Project Glasswing) has stated that Claude Mythos has already identified thousands of high‑severity vulnerabilities, including instances across major operating systems and browsers. While the model itself is not publicly accessible, the implications are clear: AI is now capable of finding vulnerabilities far more quickly than human‑led processes alone.
At first glance, this appears unequivocally positive. Over time, fewer exploitable weaknesses should reach production environments, reducing systemic risk and improving digital resilience. However, the immediate consequences are more nuanced.
With great power comes great responsibility. Access to the model has been deliberately restricted, and kept out of the public domain, because it can also craft working exploits for the vulnerabilities it finds. That restriction underlines the damage this class of tooling could do to the industry should it fall into the wrong hands.
One of the most important trends highlighted in recent threat analysis is the collapse in mean time to exploit. Since 2018, the average time between a vulnerability being disclosed and weaponised has fallen from years to hours. As AI accelerates vulnerability discovery even further, that window is likely to continue narrowing.
This means organisations must respond to critical patches faster than ever before. Traditional monthly or quarterly patching cycles are increasingly misaligned with the speed of modern threat actors, many of whom now rely on automation rather than deep technical expertise.
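As a rough, purely illustrative sketch (the cadences and exploit times below are assumptions for illustration, not figures drawn from the threat data above), the fraction of disclosures an attacker can weaponise before the next scheduled patch run makes this mismatch concrete:

```python
def exploited_before_patch_fraction(patch_cycle_days: float,
                                    time_to_exploit_days: float) -> float:
    """Fraction of disclosures weaponised before the next patch run,
    assuming disclosure dates fall uniformly within the patch cycle."""
    # The next scheduled run arrives after U ~ Uniform(0, cycle) days;
    # the attacker wins whenever time-to-exploit is shorter than U.
    return max(0.0, 1.0 - time_to_exploit_days / patch_cycle_days)

# Hypothetical figures: a six-hour time to exploit against common cadences.
monthly = exploited_before_patch_fraction(30, 0.25)  # ≈ 0.99
weekly = exploited_before_patch_fraction(7, 0.25)    # ≈ 0.96
```

Under these assumptions, even moving from monthly to weekly patching barely dents the attacker's head start once weaponisation happens in hours, which is why out-of-band emergency patching for actively exploited flaws matters more than cadence alone.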
Recent incidents reinforce this reality. During the week of 13–19 April 2026 alone, ransomware and disruptive cyber events affected education providers, healthcare software platforms, global aviation systems and major digital entertainment brands. In several cases, initial access exploited known weaknesses where remediation had not kept pace with exploitation.
AI’s dual‑use nature remains a defining feature of today’s threat landscape. On one hand, initiatives like Project Glasswing promise cleaner code and a healthier digital ecosystem over time. On the other, the same advances reduce the effort required to identify, chain and deploy exploits.
This has two important consequences:

- Exploit windows will continue to shrink, so the speed of remediation, not just its thoroughness, increasingly determines an organisation's exposure.
- The barrier to entry keeps falling, allowing less sophisticated actors to deploy capabilities that once required deep technical expertise.
There is also a broader strategic concern. As defensive AI rapidly uncovers long‑hoarded zero‑day vulnerabilities, there is a risk of short‑term destabilisation. Nation states and sophisticated threat actors may seek to deploy previously weaponised vulnerabilities before they are discovered and patched, leading to a temporary spike in high‑impact campaigns.
For organisations, the message is clear: cyber resilience now depends less on whether vulnerabilities exist and more on how quickly they can be identified, prioritised and mitigated.
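One common way to make "prioritised" operational is an exploit-aware triage score that ranks findings by how likely they are to be attacked first. The sketch below is a hypothetical illustration: the field names, weights and CVE identifiers are invented for the example, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # base severity, 0-10
    exploit_public: bool   # is a working exploit already circulating?
    internet_facing: bool  # is the affected asset reachable externally?

def triage_score(f: Finding) -> float:
    score = f.cvss
    if f.exploit_public:
        score *= 2.0       # weaponised flaws jump the queue
    if f.internet_facing:
        score *= 1.5       # external exposure widens the attacker pool
    return score

findings = [
    Finding("CVE-0000-0001", 9.8, False, False),
    Finding("CVE-0000-0002", 7.5, True, True),
    Finding("CVE-0000-0003", 8.1, True, False),
]
# Highest score first: the exploited, internet-facing flaw outranks
# the "critical" one that nobody is attacking yet.
queue = sorted(findings, key=triage_score, reverse=True)
```

The point of the sketch is the ordering, not the weights: a lower-severity vulnerability with a public exploit on an exposed asset can be a more urgent fix than a higher-severity one with neither.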
Strong fundamentals matter more than ever:

- Rapid, risk-based patching that prioritises vulnerabilities under active exploitation.
- An accurate, up-to-date inventory of assets and software dependencies, so new disclosures can be mapped to real exposure quickly.
- Hardened configurations and strong identity controls, including multi-factor authentication, to limit what any single exploited flaw can reach.
Beyond prevention, defence-in-depth controls play an essential role when compromise does occur:

- Network segmentation to contain lateral movement.
- Endpoint detection and response, backed by continuous monitoring, to identify intrusions early.
- Tested, offline backups and rehearsed incident response plans to shorten recovery.
These are no longer best practices reserved for the most mature organisations; they are rapidly becoming baseline expectations in a world shaped by AI-accelerated threats.
While robust controls reduce risk, no organisation can eliminate it entirely, particularly as the pace of technological change intensifies. This is where cyber insurance plays a critical role, not just as a financial backstop, but as part of a broader resilience framework.
At Brit, we believe cyber insurance should actively support insureds both before and after an incident. Key components include:

- Pre-incident services such as risk assessments, threat intelligence and tabletop exercises.
- Rapid access to incident response expertise, including forensic, legal and crisis communications support.
- Financial protection covering business interruption, data restoration and third-party liability.
As initiatives like Project Glasswing reshape how vulnerabilities are discovered and mitigated, cyber insurance must evolve alongside them, supporting faster decisions, better preparedness and more resilient outcomes.
Project Glasswing represents a significant milestone in the application of AI to defensive security engineering. Over time, it may materially improve software integrity across the digital economy. In the meantime, however, organisations face a period of compressed timelines, heightened exposure and accelerated decision‑making.
Navigating this transition requires a combination of sound cyber hygiene, adaptive security controls and responsive risk transfer. In an environment where vulnerabilities are found faster than ever, resilience, not perfection, will define success.