Anthropic investigating claim of unauthorised access to Mythos AI tool

There is currently no suggestion that malicious actors have managed to get hold of the model, and Anthropic says it does not have evidence its systems are affected.
But the report of access by unauthorised users raises questions about the ability of large AI companies to stop their advanced AI models from getting into the wrong hands.
The incident was “most likely through misuse of access rather than a classic hack,” according to Raluca Saceanu, chief executive of cyber-security company Smarttech247.
Anthropic has released the Mythos model to selected tech and financial companies to help them harden their systems against the kinds of vulnerabilities the model is reportedly able to exploit.
But that relies on those companies making sure their own access is tightly controlled.
The person already had permission to view Anthropic’s AI models through work they had done for a third-party contractor, according to Bloomberg.
The outlet also reported that the group has been using the model since gaining access – though not for hacking, as its members do not want to be detected.
“When powerful AI tools are accessed or used outside their intended controls, the risk is not just a security incident but the spread of capabilities that could be used for fraud, cyber abuse, or other malicious activity,” Saceanu said.
