News · 6 min read · Published on 2026-04-07

Project Glasswing: Anthropic and big tech join forces for software security

Anthropic has launched Project Glasswing with AWS, Google, Microsoft, Apple, NVIDIA and other major technology companies to protect critical global software with AI. Here is what it is, how much is being invested, and why it matters for businesses.

In a nutshell

Project Glasswing is Anthropic's initiative, involving the world's major technology players, to use Claude to find vulnerabilities in critical open source software: 100 million dollars in resources, a bug that went undetected for 27 years, and a clear signal about the speed at which AI is changing the rules.

A bug hidden for 27 years

OpenBSD is an operating system relied on for critical infrastructure worldwide, known for its obsessive attention to security: decades of manual review, thousands of expert eyes on the code.

Claude found a vulnerability that had been there for 27 years.

During the same period it identified a bug in FFmpeg, a library used by billions of devices for video and audio processing, that had passed five million automated tests undetected. It also found a series of vulnerabilities in the Linux kernel and FreeBSD, including a remote code execution flaw that allows an attacker to write arbitrary content onto the system stack.

These are not lab-built exploits. They are flaws in software running on servers, smartphones, financial infrastructure, hospitals. The fact that they were found by an AI model is a paradigm shift in information security.

What Project Glasswing is and why it was created

Project Glasswing is the initiative through which Anthropic decided to use these capabilities proactively and collaboratively.

The goal is to protect critical global software — the foundation of global digital infrastructure — by using Claude to find vulnerabilities before malicious actors do.

The underlying idea is simple: the AI that makes it possible to find these vulnerabilities is the same AI that could be used to exploit them. Better to use it defensively, in a coordinated and transparent way, with responsible disclosures to the maintainers of the affected software.

Who is involved and how much is being invested

The list of Glasswing partners reads like a roll call of the world's leading technology players: AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.

These are not symbolic endorsements. The numbers committed are concrete: 100 million dollars in AI credits made available to the project, 2.5 million to the Linux Foundation and the Open Source Security Foundation, 1.5 million to the Apache Software Foundation.

These organizations manage the code that underpins the internet: the Linux kernel, Apache, dozens of foundational libraries used by virtually every software application in existence. Bringing AI into this ecosystem is a decision with cascading effects on all software worldwide.


What it has found so far

Beyond the cases already mentioned, the capabilities Claude has demonstrated within Glasswing are technically significant: buffer overflows, use-after-free vulnerabilities, and heap corruption, the most dangerous categories of flaws in modern systems.
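To make the first of those categories concrete, here is a minimal, purely illustrative C sketch of a stack buffer overflow and its standard fix. The function names are our own for illustration; this is not code from Glasswing or from any of the audited projects.

```c
#include <string.h>

/* Illustrative only. A classic stack buffer overflow: strcpy() performs
 * no bounds check, so a name longer than the destination buffer
 * overwrites adjacent stack memory -- exactly the class of flaw
 * described above. */
void copy_name_unsafe(char *dst, const char *name) {
    strcpy(dst, name);              /* vulnerable: no length check */
}

/* The standard fix: check the destination size before copying.
 * Returns 0 on success, -1 if the name does not fit. */
int copy_name_safe(char *dst, size_t dstlen, const char *name) {
    size_t need = strlen(name) + 1; /* include the terminating NUL */
    if (need > dstlen)
        return -1;                  /* reject instead of overflowing */
    memcpy(dst, name, need);
    return 0;
}
```

Flaws of this shape are trivial to write and notoriously hard to spot in large codebases, which is why automated and AI-assisted review matters.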

What stands out is not just the ability to find these bugs, but finding them on hardened systems with all modern protections active: address space layout randomization, stack protection, and non-executable memory enforcement.

At the time the results were published, over 99% of the vulnerabilities found were still unpatched. That means that for months, sometimes years, those flaws were there, exploitable by anyone who found them first.

Why Anthropic publishes instead of keeping it secret

It is a legitimate question. If you have an AI model capable of finding vulnerabilities in the world's most secure systems, why announce it publicly?

Anthropic's answer is consistent with its philosophy: transparency is safer than secrecy. The capabilities already exist — hiding them does not eliminate them, it just makes them less predictable. Making them public allows the industry to prepare and coordinate defenses.

There is also a practical consideration: an AI that finds vulnerabilities can be used by anyone with access to sufficiently advanced models. Anthropic's strategy is to choose proactive defense over obscurity.

The signal for businesses

Project Glasswing is not only relevant for information security teams. It is a signal about the speed at which AI is changing sectors and processes that seemed stable.

If AI is already capable of this in software security, what is happening in financial analysis, legal review, and industrial automation? The answer is: much more than most businesses are currently considering.

The risk is not technological, it is strategic. Those who start learning how to integrate these tools today build an advantage that compounds over time; those who wait face a gap that becomes increasingly difficult to close. At Maverick AI we help companies understand where to start and measure the return from the first project.

AI is accelerating faster than it seems. Are you ready?

Maverick AI works with companies in PE, pharma, fashion, manufacturing and consulting to implement Claude. If you want to understand where AI can add value in your organisation, book a call.

Book a call

Frequently Asked Questions

What is Project Glasswing?

Project Glasswing is an Anthropic initiative that uses Claude to find security vulnerabilities in critical open source software worldwide. Partners include AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, JPMorgan Chase, Linux Foundation and Palo Alto Networks. The project has made 100 million dollars in AI resources available and donated 4 million to open source foundations.

Why did Anthropic make these capabilities public?

Anthropic chose transparency for a practical reason: hiding these capabilities does not eliminate them. Advanced AI models are being developed by other actors as well. Making Claude's capabilities in cybersecurity public allows the industry to prepare and coordinate defenses. It is the logic of responsible vulnerability disclosure applied to AI.

Does this mean open source software is unsafe?

Not exactly. The point of Glasswing is not that these systems are dangerous, but that AI analysis capabilities have surpassed manual analysis for certain types of vulnerabilities. Finding a bug does not automatically mean being able to exploit it: responsible disclosure to maintainers allows flaws to be corrected before malicious actors can use them.

Is this relevant only to technology companies?

No. The open source software that Glasswing helps protect is used in almost every sector: banking, hospitals, industrial production, public infrastructure. For businesses, the most relevant message is about the speed of AI progress: if AI has reached these capabilities in software security, something similar is happening in almost every domain.

How should businesses react?

With clarity, not panic. Glasswing is a signal about the trajectory of AI. The sensible reaction is to start concretely understanding how AI can transform your organization's processes. Maverick AI works with companies to identify the most promising use cases and launch concrete implementations with Claude.

Want to learn more?

Contact us to find out how we can help your company with tailored AI solutions.

Anthropic implementation partner in Italy. We work with companies in PE, pharma, fashion, manufacturing and consulting.

