
Anthropic has moved to contain the damage after core information about its artificial intelligence (AI) coding tool was leaked due to an employee mistake.
GitHub, the code-hosting platform used by developers, said Tuesday that Anthropic had filed a mass takedown request covering 8,100 repositories that had cloned and stored the source code of Claude Code, the company's flagship development tool.
The takedown is seen as an effort by Anthropic to keep its proprietary code from spreading further across the internet. The leak risked exposing critical trade secrets and engineering know-how to competitors.
"No sensitive customer data or credentials were leaked or exposed," an Anthropic spokesperson said in a statement. "This was a product release packaging issue caused by human error, not a security breach, and we are implementing measures to prevent a recurrence."
According to Chaofan Shou, chief technology officer of U.S. security firm Puzleland, and other researchers, Claude Code's source code was published a day earlier through npm, a package registry used by developers worldwide. The exposed code totaled more than 512,000 lines across 1,900 files and quickly spread to code-sharing platforms including GitHub. Anthropic attributed the incident to an employee error in preparing a distribution package.
The leak marks the second exposure of Anthropic's core assets within the past week. On May 25, a configuration error in the company's content management system temporarily exposed specifications for Claude Mythos, a next-generation AI model that has not yet been officially released.
Anthropic maintains that sensitive customer data and authentication credentials were not affected by either incident. However, industry observers have raised concerns that the back-to-back incidents could weigh on the company's initial public offering, planned for the fourth quarter of this year.
