
Hackers Target Anthropic Mythos AI, Expose Supply Chain Weakness

By: Joahn G
Last updated: April 23, 2026
  • The hacking group ShinyHunters recently released screenshots of user panels, AI dashboards, and cost analytics as evidence that they accessed internal servers of Anthropic’s Mythos model.

  • The group appears to have entered the system through a third-party vendor, exposing a major weak spot in Anthropic’s supply chain.

  • Mythos can reportedly carry out complex cyberattacks on its own, without human input, which is what makes the alleged breach so alarming.


Hackers reportedly breached Anthropic’s internal systems tied to Claude Mythos, the company’s powerful AI model designed to hunt for security flaws.

Anthropic is currently investigating the claim, noting the hackers may have gained access through a third-party vendor network.

Attack Hunter Becomes a Victim

On Wednesday, a well-known hacking group named ShinyHunters made bold claims on their leak site. They claim to have broken into internal systems tied to Anthropic’s closely guarded Mythos model. The group posted screenshots showing user management panels and experiment dashboards. They even shared shots of cost and performance analytics.

For now, Anthropic says it is looking into the claims. ShinyHunters has a history of breaching major tech companies and their supply chains; the group was also responsible for the Luxshare ransomware attack that stole data from Apple and Nvidia.

Anthropic’s best guess? The hackers likely got in through a vendor’s network. Notably, the company says it has found no evidence that its core systems were compromised. Even so, the fact that the incident happened at all is alarming.

What makes this breach a significant concern? Mythos is very different from a chatbot like ChatGPT. It’s a frontier model built to spot vulnerabilities in software. The UK’s AI Safety Institute recently tested it.

Testers found it could complete a 32-step cyberattack simulation, a first for any AI model. Tasks like these usually take human experts days to finish. If a hacker got good at using this tool, they could automate chaos.

How the Hackers Gained Access

Bloomberg looked into it, and it turned out the group didn’t use anything sophisticated, just simple tricks. One user had legitimate access as an employee of a third-party contractor. Using that foothold, the attackers guessed the web address of the model. They basically walked through an open door.

Anthropic had only released Mythos to a few big names like Apple and Goldman Sachs, under an initiative it calls “Project Glasswing.” The goal was to let friendly companies find bugs before the bad guys did. But the plan hit a snag immediately: the unauthorized group got in on the very same day the testing was announced.

Right now, the hacking group says it isn’t causing trouble. Reports suggest the members are just exploring the tech, and they aren’t running dangerous security prompts because they don’t want to get caught.

But their very presence inside the system proves a scary point. AI infrastructure is the new frontier for cybercrime. Even without a classic hack, the risk of insider or supply chain access is huge.

Mythos Might Not Be What They Hyped It to Be

We should take a breath and look at the facts, though. The breach sounds scary, but the model itself might not be as mighty as advertised. Some security experts believe Anthropic overhyped Mythos.

Testers from Mozilla and AWS ran the model through its paces. They found it very fast, and it saved engineers a lot of time, but it found nothing a good human hacker wouldn’t find anyway. One expert dismissed the scare as “misleading hype.”

Here’s the thing: Attackers don’t need a fancy model like Mythos. They’re already using cheap open-source tools to achieve the same thing. The real story here is the failure of controlled release. It shows that keeping a powerful AI secret is incredibly hard, especially when you hand it out to vendors and contractors. Welcome to the next battleground.

About the Author

Joahn G

Cyber Threat Journalist

Joahn is a cyber threat journalist dedicated to tracking the evolving landscape of digital risks. His reporting focuses on ransomware gangs, data breach incidents, and state-sponsored cyber operations. By analyzing threat actor motives and tactics, he provides timely intelligence that helps readers understand and anticipate the security challenges of tomorrow.
