A bug hidden for 27 years
OpenBSD is one of the operating systems most widely trusted for critical infrastructure worldwide, known for its obsessive attention to security: decades of manual review, thousands of expert eyes on the code.
Claude found a vulnerability that had been there for 27 years.
During the same period it identified a bug in FFmpeg, a library used by billions of devices for video and audio processing, that had passed five million automated tests without being detected, along with a series of vulnerabilities in the Linux kernel and FreeBSD, including a remote code execution flaw that lets an attacker write arbitrary content onto the system stack.
These are not lab-built exploits. They are flaws in software running on servers, smartphones, financial infrastructure, hospitals. The fact that they were found by an AI model is a paradigm shift in information security.
What Project Glasswing is and why it was created
Project Glasswing is the initiative through which Anthropic decided to use these capabilities proactively and collaboratively.
The goal is to protect critical global software — the foundation of global digital infrastructure — by using Claude to find vulnerabilities before malicious actors do.
The underlying idea is simple: the AI that makes it possible to find these vulnerabilities is the same AI that could be used to exploit them. Better to use it defensively, in a coordinated and transparent way, with responsible disclosures to the maintainers of the affected software.
Who is involved and how much is being invested
The list of Glasswing partners reads like a roll call of the world's leading technology players: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.
These are not symbolic endorsements. The numbers committed are concrete: 100 million dollars in AI credits made available to the project, 2.5 million to the Linux Foundation and the Open Source Security Foundation, 1.5 million to the Apache Software Foundation.
These organizations manage the code that underpins the internet: the Linux kernel, Apache, dozens of foundational libraries used by virtually every software application in existence. Bringing AI into this ecosystem is a decision with cascading effects on all software worldwide.
What it has found so far
Beyond the cases already mentioned, the capabilities Claude demonstrated within Glasswing are technically significant: buffer overflows, use-after-free vulnerabilities, and heap corruption, the most dangerous categories of flaws in modern systems.
What stands out is not just the ability to find these bugs, but doing so on hardened systems with all modern protections active: address space layout randomization (ASLR), stack protection (canaries), and non-executable data regions (DEP/NX).
At the time the results were published, over 99% of the vulnerabilities found were still unpatched. That means those flaws sat there for months, sometimes years, exploitable by anyone who found them first.
Why Anthropic publishes instead of keeping it secret
It is a legitimate question. If you have an AI model capable of finding vulnerabilities in the world's most secure systems, why announce it publicly?
Anthropic's answer is consistent with its philosophy: transparency is safer than secrecy. The capabilities already exist — hiding them does not eliminate them, it just makes them less predictable. Making them public allows the industry to prepare and coordinate defenses.
There is also a practical consideration: an AI that finds vulnerabilities can be used by anyone with access to sufficiently advanced models. Anthropic's strategy is to choose proactive defense over obscurity.
The signal for businesses
Project Glasswing is not only relevant for information security teams. It is a signal about the speed at which AI is changing sectors and processes that seemed stable.
If AI is already capable of this in software security, what is happening in financial analysis, legal review, and industrial automation? The answer is: much more than most businesses are currently considering.
The risk is not technological. It is strategic. Those who start understanding how to integrate these tools today build an advantage that compounds over time. Those who wait face a gap that becomes increasingly difficult to close. At Maverick AI we help companies understand where to start and measure the return from the first project.