By Thaddeus Ng
On 26 March 2026, Fortune reported that Anthropic had left close to 3,000 unpublished assets accessible through a misconfigured content management system (CMS). Among them was a draft blog post naming an unreleased AI model, provisionally named “Claude Mythos” or “Capybara”. (Disclosure: Straits Interactive’s proprietary Gen-AI platform, Capabara, is not among Anthropic’s leaked assets.) The draft showed Claude Mythos achieving exceptional scores on benchmarks spanning coding, academic reasoning, and cybersecurity tasks, ranking above existing models.
Five days later, the Claude Code Node Package Manager (npm) package shipped externally with an internal source map bundled in, exposing roughly 512,000 lines of code across 1,906 files to public registries. In a single week, Anthropic suffered two separate packaging failures that leaked critical competitive intelligence. The irony is hard to miss: the up-and-coming Claude model, positioned to advance cybersecurity through Project Glasswing, compromised that very purpose in its public debut.
What Was Exposed?
The first incident, involving Anthropic’s CMS, exposed a draft blog post describing the new model, Claude Mythos, as a “step change” in capability while flagging its potential cybersecurity risks. Close to 3,000 unpublished assets were publicly accessible without any form of authentication.
The second incident was discovered by security researcher Chaofan Shou, who found that Claude Code version 2.1.88 contained a source map file pointing to the complete, unreleased codebase. By the morning of 31 March 2026, the leaked codebase had been mirrored across GitHub and analysed by thousands of developers (and, by extension, AI models). Those who examined the mirrored repository identified internal architecture, including a self-healing memory system and an undisclosed background-agent feature named KAIROS.
Anthropic responded with an immediate Digital Millennium Copyright Act (DMCA) sweep affecting more than 8,000 repositories, characterising the incident as “a release packaging issue caused by human error, not a security breach”.
How It Happened
While both incidents were the result of human error, there are lessons in examining how each leak occurred. In the case of the exposed draft blog, the backend tool Anthropic used to manage drafted content made that content publicly accessible, with no password required, by default. The lapse was one of oversight: staff never changed the CMS’s default setting.
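To make the failure mode concrete, consider a hypothetical sketch of this kind of configuration. It is not Anthropic’s actual CMS, whose internals have not been published; it simply shows how an optional, permissive-by-default flag invites exactly this oversight.

```typescript
// Hypothetical CMS configuration sketch -- not Anthropic's actual system.
// The danger is a permissive default: if no one sets `requireAuth`,
// every draft is world-readable.
interface DraftStoreConfig {
  requireAuth?: boolean; // optional, so forgetting it is easy
}

function resolveConfig(cfg: DraftStoreConfig) {
  return {
    // Unsafe default: drafts are public unless someone opts out.
    requireAuth: cfg.requireAuth ?? false,
  };
}

// Staff deploy with the defaults untouched...
const deployed = resolveConfig({});
console.log(deployed.requireAuth); // false -- drafts are publicly accessible

// The fix is twofold: set the flag explicitly, and prefer tools
// whose safe value is the default.
const hardened = resolveConfig({ requireAuth: true });
console.log(hardened.requireAuth); // true
```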
Similarly, the Claude Code source leak came down to a default setting that was never disabled. Before developers release software code to the public, they “minify” it, compressing it into a smaller, harder-to-read format so it downloads and runs faster. A source map is then generated alongside it to “reverse” the compression for future debugging. Anthropic’s build toolchain uses Bun, which automatically generates and includes source map files unless an engineer explicitly disables them. When the compressed code finally shipped to the public, the setting had not been disabled, so the “key” to reversing the compression shipped with it, meaning anyone could reconstruct the full readable code.
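A minimal sketch of what the safe version of that setting looks like in a Bun build script follows. The file names are illustrative, and exact default behaviour varies across Bun versions; the point is that the source-map choice is one explicit line.

```typescript
// Minimal Bun build script (file names are illustrative).
// The decisive line is `sourcemap`: shipping a .map file alongside the
// minified bundle lets anyone reconstruct the readable source.
const result = await Bun.build({
  entrypoints: ["./src/cli.ts"],
  outdir: "./dist",
  minify: true,      // "compress" the code for release
  sourcemap: "none", // explicitly disable source maps in public builds
});

if (!result.success) {
  // Surface build diagnostics and fail the release step.
  for (const log of result.logs) console.error(log);
  process.exit(1);
}
```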
Both incidents reflect failures of release-management hygiene: lapses in the processes and governance that coordinate releases ended up compromising operational security (opsec).
Takeaways for Governance Professionals
Both failure modes trace back to default settings, one in Anthropic’s content management system and one in its build pipeline, which makes them easy for any organisation shipping AI-enabled software both to replicate and to fix.
Two critical actions for AI governance professionals follow from this case study:
First, pre-publication content storage systems must uphold the same security controls as production systems. Pre-release assets constitute competitive intelligence, and their leakage can jeopardise operations. Further, data leakage of any form, be it a pre-publication blog post or, in more severe cases, personal data, can harm an organisation’s reputation and erode consumer trust. Governance professionals must ensure that the privacy settings of an organisation’s CMS reflect its release protocols.
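One way to operationalise that check is a scheduled probe confirming that draft endpoints reject unauthenticated requests. A minimal sketch, using an entirely illustrative URL and runnable on Node.js 18+ or Bun:

```typescript
// Hypothetical smoke test: verify that unpublished CMS assets are not
// reachable without credentials. The endpoints are illustrative.
const DRAFT_ENDPOINTS = [
  "https://cms.example.com/api/drafts",
  "https://cms.example.com/api/assets?status=unpublished",
];

async function checkDraftAccess(): Promise<void> {
  for (const url of DRAFT_ENDPOINTS) {
    // Deliberately send no Authorization header.
    const res = await fetch(url);
    if (res.status === 200) {
      // A 200 without credentials means drafts are world-readable.
      console.error(`FAIL: ${url} is publicly accessible (HTTP 200)`);
      process.exitCode = 1;
    } else {
      console.log(`OK: ${url} rejected unauthenticated access (HTTP ${res.status})`);
    }
  }
}

checkDraftAccess();
```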
Second, organisations should establish a release governance policy that mandates independent verification of package contents prior to publication. Disabling source-map files in a release build is a simple setting change with potentially significant consequences when missed. Release checklists can be built into standard operating procedures so that human-in-the-loop verification gates inspect the actual package contents before publication. Automating parts of this process can be productive, but it becomes problematic without proper human oversight and governance.
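As a sketch of such a verification gate, the script below (assuming an npm-published package and Node.js 18+) asks npm what would actually go into the tarball and blocks the release if any source map is present. A reviewer would run it, or review its output, before `npm publish`.

```typescript
// Pre-publish gate: list what `npm pack` would put in the tarball and
// fail the release if any source-map file slipped in.
import { execSync } from "node:child_process";

const out = execSync("npm pack --dry-run --json", { encoding: "utf8" });
const [tarball] = JSON.parse(out); // npm reports one entry per packed package

const leaked = tarball.files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => p.endsWith(".map"));

if (leaked.length > 0) {
  console.error("Release blocked: source maps found in package contents:");
  for (const p of leaked) console.error(`  ${p}`);
  process.exit(1);
}

console.log(`OK: ${tarball.files.length} files inspected, no source maps found.`);
```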
Frontier AI capability does not insulate an organisation from basic release-engineering mistakes, the kind that have plagued opsec across the software industry for more than a decade. In the pursuit of mature AI governance, management teams must realise that the gold standard will be defined not by advances in model capability, but by fundamental discipline in simple pipeline processes.
Sources: Fortune, NeuralTrust, Weights & Biases, Anthropic, Futurism, The Hacker News, Axios