A 27-year-old bug found in hours
OpenBSD is an operating system trusted in servers and critical infrastructure, with a security reputation built over thirty years. Yet hiding inside it was a bug nobody had ever found: a code error that could crash a system remotely.
It had been there for 27 years.
It was not found by a penetration testing team. It was found by Claude Mythos, Anthropic's research model, within Project Glasswing — an initiative launched with AWS, Google, Microsoft, Cisco, Apple, NVIDIA and other partners to protect critical global software.
The same story played out with FFmpeg, a library used in nearly every video application: a 16-year-old bug that had escaped five million automated tests, found by Claude in hours.
The message is not 'AI is incredible'. The message is: the software you use every day — on your servers, on your employees' computers, in your management systems — almost certainly has vulnerabilities that no one has found yet.
What zero-day vulnerabilities are, without jargon
A vulnerability is an error in a piece of software's code. It can be a trivial bug or a critical flaw that allows someone to enter the system, steal data, or shut everything down.
When the flaw is known to the vendor, it is fixed with an update. When it is unknown to both the vendor and the security community, it is called a zero-day. The name refers to the number of days the vendor has had to fix it: zero. Defenders have no head start, because the patch does not yet exist.
The Claude Mythos data captures an uncomfortable reality: at the time of publication, over 99% of the vulnerabilities found were still open. That means for months — sometimes years — those flaws were there, exploitable by anyone who found them first.
Why this changes the rules for all businesses
Until now, the dominant mental model for security was fairly simple: install an antivirus, train employees on phishing, make backups. If you use widely-used software and keep it updated, you are reasonably safe.
This model held as long as finding vulnerabilities was difficult, costly, and required rare expertise. An experienced penetration tester takes weeks to analyze a complex system.
AI scales. Claude Mythos analyzed OpenBSD at a cost of about $20,000 per thousand scans and found dozens of issues. What previously took a senior team months now takes an AI model hours.
The problem cuts both ways: AI can be used for defense, but it can just as easily be used for attack. The window in which a flaw remains unexploited is shrinking.
Want to understand how AI changes your company's security?
30 minutes to discuss your specific case.
What to do now, concretely
You do not need to become a cybersecurity expert. You need three habits that many companies still do not have.
The first is enabling automatic updates wherever possible. The window between the release of a patch and the moment it is exploited by attackers has shrunk to a few days. Managing updates manually is a luxury that SMEs can no longer afford in many contexts.
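As one concrete illustration (assuming a Debian or Ubuntu server; other systems have their own equivalents), unattended security updates can be enabled with two lines in an APT configuration file:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The same principle applies to Windows Update policies and macOS software update settings: make automatic the default, and treat manual handling as the exception.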
The second is reducing the attack surface. Every service exposed on the internet, every software installed but not used, is a potential entry point. A periodic audit of what is actually active is worth more than many expensive tools.
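A minimal sketch of this kind of audit in Python (the port list is hypothetical; adapt it to your own environment): check which of a set of ports are actually accepting connections on a host. Anything that answers and is not a service you knowingly run deserves a closer look.

```python
import socket

def scan_open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 when the connection succeeds
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Common service ports; adjust to what your own systems should expose.
    candidates = [22, 80, 443, 3306, 5432, 8080]
    print("Open on localhost:", scan_open_ports("127.0.0.1", candidates))
```

This is a local sanity check, not a replacement for a proper external scan: run it from outside your network as well, since what is reachable from the internet is what matters to an attacker.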
The third is having a plan for 'when it happens', not just 'if it happens'. Verified backups, incident response procedures, vendor contact details. You do not need an internal CISO — you need to know who to call and what to do in the first hours.
AI as a defense tool, not just an attack tool
The flip side is that the same technology used to find flaws can be used for defense.
Project Glasswing is making 100 million dollars in AI credits available to open source security organizations. The goal is to do systematic vulnerability research before others do.
For businesses, this translates into a concrete perspective: AI can do vulnerability scanning on proprietary codebases, analyze security configurations, and identify anomalous behavior in logs. It does not replace a security team, but it dramatically lowers the cost of doing things that were previously only accessible to large companies.
A realistic scenario for a small business is not 'build our own Claude Mythos'. It is using AI to do simpler but equally useful things: analyzing your own systems for risky configurations, monitoring anomalies, training employees with personalized simulations.
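A toy example of the "monitoring anomalies" idea, with synthetic log lines and a hypothetical threshold (a real deployment would read your actual authentication logs and tune the numbers): count failed login attempts per source IP and flag anything above the threshold.

```python
import re
from collections import Counter

# Matches the IP in lines like: "Failed password for root from 203.0.113.9 port 4022"
FAILED_LOGIN = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def flag_suspicious_ips(log_lines, threshold=5):
    """Return a dict of IPs with more failed-login lines than `threshold`."""
    counts = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n > threshold}

if __name__ == "__main__":
    # Synthetic sample: one noisy IP, one benign failed attempt.
    sample = ["Failed password for root from 203.0.113.9 port 4022"] * 8
    sample += ["Failed password for alice from 198.51.100.3 port 5050"]
    print(flag_suspicious_ips(sample))
```

Twenty lines of script will not stop a determined attacker, but the point stands: this class of monitoring used to require dedicated tooling, and an AI assistant can now help a small team write and adapt it in an afternoon.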
The next step for your company
Cybersecurity in 2026 is no longer just a technical problem. It is a business strategy problem. Every uncorrected flaw is an operational, reputational, and — under NIS2 and GDPR regulations — potentially legal risk.
Understanding how AI changes this landscape does not require becoming technical. It requires asking the right questions: what are my critical systems? Who manages the updates? What happens if someone gets into our data tomorrow?
Maverick AI helps companies answer these questions and understand where AI can make a difference. If you want to understand where to start, book a call.