What risk is posed by Anthropic's potential cyberweapon super-AI?



SAN FRANCISCO: Questions are being raised about the security precautions protecting a new AI model that is said to be extremely powerful at finding software vulnerabilities and could be used to launch major cyberattacks with a global impact.

Financial news service Bloomberg reported on Tuesday that a small number of unauthorised people with knowledge of Anthropic systems gained access to the model known as Claude Mythos Preview. Anthropic said it was looking into the report.

Earlier in April, Anthropic said it had developed an AI that was able to find security gaps in various major software programs, some of which had gone undetected for decades.

In the wrong hands, the AI model could lead to the development of dangerous cyberweapons.

Anthropic has no plans to release Mythos and has so far granted access to selected companies and organisations so they can fix vulnerabilities in their software. The developers of the Firefox web browser later announced that they were able to patch 271 security holes with Mythos.

According to Bloomberg, an employee of an external Anthropic service provider who had access to the AI firm's systems was among the unauthorised users. The users were also helped by knowledge of how Anthropic stored previous models.

The company told Bloomberg it had so far seen no evidence that there had been access to the model outside the service provider's systems.

Announcing Mythos, Anthropic said it could discover "thousands" of serious vulnerabilities in widely used operating systems and web browsers. In the video software FFmpeg, the model tracked down a loophole that had been left unpatched for 16 years.

AI-built software exploits in a few hours

Mythos Preview was also able, within a few hours, to develop programmes to exploit these vulnerabilities, which experts said would have taken them several weeks.

In a test, an early version of the software was given the task of breaking out of a shielded computer environment and reporting this to the tester. According to Anthropic, the software bypassed the security precautions, gained broader internet access for itself and emailed the tester, who was taken by surprise while sitting in a park with a sandwich.

The company did not specifically train the model to do all this, it said. With rapid progress in artificial intelligence, such capabilities could soon be available to online attackers, Anthropic warned.

In a partnership called Project Glasswing, companies are being given access to Mythos to find security flaws. Partners include Apple, Amazon, Microsoft, NVIDIA, the Linux Foundation, the IT security firms CrowdStrike and Palo Alto Networks, and the network specialist Cisco.

Worries in the finance industry

News of an AI that can find and exploit weaknesses in major software previously considered secure has caused concern in the financial industry, among others.

Joachim Nagel, the president of Germany's central bank, the Bundesbank, warned of significant risks to the financial sector, pointing to new and complex threats from autonomous AI agents engaging in harmful behaviour.

"Early identification and containment of such risks is of crucial importance for financial stability, as the current discussion about Anthropic's Mythos makes clear," Nagel said.

The AI model appeared to be a double-edged sword because it could be used not only to improve digital security systems but also to exploit their vulnerabilities for malicious purposes.

"We must prevent the misuse of this technology," Nagel said. At the same time, he added, all relevant institutions would have to have access to the technology in order to avoid distortions in competition.

Anthropic is best known for the AI software Claude, which competes with OpenAI's ChatGPT.

The company recently made headlines over a dispute with the Pentagon: Anthropic refused to allow its AI to be used in autonomous weapons or for mass surveillance in the US.

The Defense Department then declared Anthropic a supply chain risk, largely blocking the company's path to doing business with the US government. Anthropic is taking legal action against this. – dpa
