AI identifies vulnerabilities in 92% of exploited DeFi contracts.


New research indicates that a specialized AI security agent significantly outperforms a general-purpose coding agent at identifying exploited DeFi vulnerabilities.


Key points:

  • A dedicated AI security agent identified vulnerabilities in 92% of 90 compromised DeFi contracts ($96.8 million in exploit value), compared with 34% and $7.5 million for a standard GPT-5.1-based coding agent running on the same foundational model.
  • According to the report, the gap came from the domain-specific security methodology layered on top of the model, not from differences in the underlying AI's capability.
  • The results come as earlier research from Anthropic and OpenAI shows that AI agents can carry out full smart contract exploits at minimal cost, raising concerns that offensive AI capabilities are advancing faster than defensive ones.

A dedicated AI security agent identified vulnerabilities in 92% of compromised DeFi contracts in a new open-source benchmark.

The report, published Thursday by AI security firm Cecuro, assessed 90 real-world smart contracts exploited between October 2024 and early 2026, accounting for $228 million in confirmed losses. The specialized system flagged vulnerabilities tied to $96.8 million in exploit value, compared with only 34% detection and $7.5 million in coverage from a baseline GPT-5.1-based coding agent.
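For a rough sense of how those two headline figures relate, here is a minimal sketch of the scoring arithmetic. The field names and records below are illustrative only, not the benchmark's actual schema.

```python
# Illustrative only: each record stands for one exploited contract, with the
# dollar value lost and whether the agent flagged the underlying vulnerability.
exploits = [
    {"contract": "0xabc...", "loss_usd": 5_000_000, "flagged": True},
    {"contract": "0xdef...", "loss_usd": 1_200_000, "flagged": False},
    # ... the published dataset covers 90 contracts in total
]

# Share of exploited contracts where the vulnerability was flagged.
detection_rate = sum(e["flagged"] for e in exploits) / len(exploits)

# Dollar value of exploits whose vulnerabilities were flagged.
value_covered = sum(e["loss_usd"] for e in exploits if e["flagged"])

print(f"detection rate: {detection_rate:.0%}")
print(f"exploit value covered: ${value_covered:,.0f}")
```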


Both systems operated on the same underlying model. The report attributes the difference to the application layer: domain-specific methodology, structured review phases, and DeFi-oriented security heuristics layered on top of the model.

The findings arrive amid growing concern that AI is facilitating crypto-related crime. Separate research from Anthropic and OpenAI has shown that AI agents are now capable of executing end-to-end exploits on most known vulnerable smart contracts, with exploit capabilities reportedly doubling roughly every 1.3 months. The typical cost of an AI-driven exploit attempt is about $1.22 per contract, sharply lowering the barrier to large-scale scanning.
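To put those figures in perspective, here is a back-of-the-envelope sketch. It only extrapolates the averages cited above and assumes, purely for illustration, that the doubling trend holds.

```python
# Illustrative arithmetic based on the figures cited in the research:
# ~$1.22 average cost per AI-driven exploit attempt, and exploit capability
# reportedly doubling roughly every 1.3 months.
cost_per_contract = 1.22          # USD per contract scanned
contracts_scanned = 10_000
doubling_period_months = 1.3

scan_cost = cost_per_contract * contracts_scanned
print(f"cost to scan {contracts_scanned:,} contracts: ${scan_cost:,.0f}")

# Relative capability after 12 months, if the doubling trend were to continue.
growth = 2 ** (12 / doubling_period_months)
print(f"relative capability after a year: ~{growth:.0f}x")
```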

Previous CoinDesk articles highlighted how malicious actors, including North Korea, have started employing AI to enhance hacking operations and automate aspects of the exploitation process, emphasizing the growing divide between offensive and defensive capabilities.

Cecuro contends that many teams depend on general-purpose AI tools or one-time audits for security, a strategy that the benchmark suggests may overlook high-value, complex vulnerabilities. Several contracts within the dataset had previously undergone professional audits prior to being compromised.

The benchmark dataset, evaluation framework, and baseline agent have been released as open source on GitHub. The company said it has not published its full security agent, citing concerns that similar tools could be repurposed for offensive use.