Anthropic Launches 'Mythos' Frontier AI Model for Elite Cybersecurity Initiative



By admin | Apr 07, 2026 | 3 min read


On Tuesday, Anthropic unveiled a preview of its latest frontier model, Mythos, which a select group of partner organizations will use for cybersecurity applications. In a previously leaked internal memo, the AI startup described the model as one of its "most powerful" to date.

This limited release is part of a new security effort named Project Glasswing, under which more than 40 partner organizations will use the model for "defensive security work" and to secure essential software. Although Mythos was not trained specifically for cybersecurity tasks, the preview will be used to scan both proprietary and open-source software for code vulnerabilities.

Anthropic asserts that over recent weeks, Mythos has detected "thousands of zero-day vulnerabilities, many of them critical." The company noted that many of these security flaws are one to two decades old.

Mythos is a general-purpose model in Anthropic's Claude AI lineup, with strong agentic coding and reasoning capabilities. The company's frontier models are its most advanced, highest-performance offerings, built for complex tasks such as agent-building and coding.

The organizations granted early access to Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the project, these partners are expected to eventually share insights gained from using the model, allowing the broader technology industry to benefit. Anthropic confirmed that this preview will not be made widely available.

The company also mentioned it has held "ongoing discussions" with federal officials regarding Mythos's use. These talks are likely complicated by an ongoing legal dispute with the Trump administration, stemming from the Pentagon labeling Anthropic a supply-chain risk. This designation followed the AI lab's refusal to permit autonomous targeting or surveillance of U.S. citizens.

Information about Mythos first emerged last month due to a data security incident reported by Fortune. A draft blog post concerning the model—originally codenamed "Capybara"—was found in an unsecured cache of documents within a publicly accessible data lake. Anthropic later attributed this leak to "human error," after it was initially discovered by security researchers.

The leaked document stated, "'Capybara' is a new name for a new tier of model: larger and more intelligent than our Opus models - which were, until now, our most powerful." It further described the model as "by far the most powerful AI model we've ever developed."

According to the leak, Anthropic claimed its new model significantly outperformed its current public models in areas like "software coding, academic reasoning, and cybersecurity." The document also warned that if weaponized by malicious actors to discover and exploit bugs—rather than fix them—the model could pose a serious cybersecurity threat. Mythos is being deployed specifically for defensive purposes to address vulnerabilities.

Last month, the company inadvertently exposed nearly 2,000 source code files and over half a million lines of code due to an error during the launch of version 2.1.88 of its Claude Code software package. In its subsequent cleanup efforts, Anthropic accidentally caused the takedown of thousands of code repositories on GitHub.

