Anthropic Says AI Can Hack Smart Contracts After Spotting $4.6M in Exploits

Miravi
Thread author
Aug 31, 2024
Anthropic has shown that powerful AI systems can find weaknesses in blockchain apps and turn them into profitable attacks worth millions of dollars, raising fresh concerns about how exposed DeFi really is.

In a recent study with MATS and Anthropic Fellows, the company tested AI agents on a benchmark called SCONE-bench (Smart CONtracts Exploitation), built from 405 smart contracts that were actually hacked between 2020 and 2025.

When they ran 10 leading models in a simulated environment, the agents managed to exploit just over half of the contracts, with the simulated value of stolen funds reaching about $550.1m.

To reduce the chance that models were simply recalling past incidents, the team then looked only at 34 contracts that were exploited after March 1, 2025, the latest knowledge cutoff for these systems.

Opus 4.5 And GPT-5 Located $4.6M In Value From New Exploit Targets

On that cleaner set, Claude Opus 4.5, Claude Sonnet 4.5 and GPT-5 still produced working exploits on 19 contracts, worth a combined $4.6m in simulated value. Opus 4.5 alone accounted for about $4.5m.

Anthropic then tested whether these agents could uncover brand new problems rather than replay old ones. On Oct. 3, 2025, Sonnet 4.5 and GPT-5 were run, again in simulation, against 2,849 recently deployed Binance Smart Chain contracts that had no known vulnerabilities.

Between them, the agents found two zero-day bugs and generated attacks worth $3,694 in simulated value, with GPT-5's run costing about $3,476 in API fees.
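For readers unfamiliar with the bug class involved, the flaws these agents hunt are often simple logic errors rather than exotic cryptographic breaks. The toy Python sketch below is purely illustrative and is not taken from the study or from any real contract: it models a reentrancy-style bug, where a ledger releases funds before zeroing the caller's balance, so an attacker-controlled callback can withdraw the same balance twice.

```python
class ToyLedger:
    """Toy stand-in for a vulnerable smart contract (illustrative only)."""

    def __init__(self):
        self.balances = {}  # user -> deposited amount
        self.vault = 0      # total funds held

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.vault += amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0:
            # BUG: funds leave the vault before the balance is zeroed,
            # so a re-entrant callback still sees the stale balance.
            self.vault -= amount
            callback(self)            # attacker-controlled hook
            self.balances[user] = 0   # too late: already re-entered


def attack(ledger, _depth=[0]):
    # Re-enter withdraw() once before the balance is cleared.
    if _depth[0] < 1:
        _depth[0] += 1
        ledger.withdraw("mallory", attack)


ledger = ToyLedger()
ledger.deposit("alice", 100)
ledger.deposit("mallory", 50)
ledger.withdraw("mallory", attack)

# Mallory deposited 50 but pulled 100 out of the vault, leaving
# only 50 behind to back Alice's 100 deposit.
print(ledger.vault)  # prints 50
```

A safe version would follow the checks-effects-interactions pattern: zero the balance first, then release funds. Spotting that ordering mistake in deployed bytecode is exactly the kind of pattern-matching task the study's agents were scored on.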
 
People talk about AI as if it were some new digital criminal, but in reality it only reflects what many humans carry inside: the inability to let go of their shadow. The machine doesn’t invent greed or deceit, it merely amplifies them when someone uses it like a hunting dog. The irony is that some still expect technology to redeem them, when they haven’t even learned to redeem themselves. The evolution we need is not in the code, but in the will to stop repeating the same vices with ever more sophisticated tools.

AI doesn’t commit crimes: it simply holds up the mirror. The problem is that many, when they look at it, discover their shadow is larger than themselves… and prefer to blame the glass rather than their own face. 😉
 
It's so over. AI can detect and exploit tiny vulnerabilities human beings would never pick up on.
 