
2 posts tagged with "security auditing"

Smart contract security audits and analysis


$606M in 18 Days: Why Upgrade-Introduced Bugs Are DeFi's New Top Attack Vector

· 12 min read
Dora Noda
Software Engineer

In just 18 days this April, attackers drained $606 million from DeFi. That single stretch exceeded the whole of Q1 2026's losses by a factor of 3.7 and made the month the worst since the February 2025 Bybit heist. Two protocols — Drift on Solana and Kelp DAO on Ethereum — accounted for 95 percent of the damage. Both had been audited. Both passed static analysis. Both shipped routine upgrades that quietly invalidated the assumptions their auditors had verified.

This is the new face of DeFi risk. The catastrophic exploits of 2026 are no longer about reentrancy bugs or integer overflows that fuzzers can spot in CI. They are about upgrade-introduced vulnerabilities: subtle changes to bridge configurations, oracle sources, admin roles, or messaging defaults that turn previously safe code into an open door — without any single line of Solidity looking obviously wrong.

If you build, custody, or simply hold assets in DeFi, the takeaway from April 2026 is uncomfortable: a clean audit report dated three months ago is no longer evidence that a protocol is safe today.

The April Pattern: Configuration, Not Code

To understand why "upgrade-introduced" deserves its own category, look at how the two largest exploits actually unfolded.

Drift Protocol — $285 million, April 1, 2026. Solana's largest perp DEX lost more than half its TVL after attackers spent six months running a social-engineering campaign against the team. Once trust was established, they used Solana's "durable nonces" feature — a UX convenience designed to let users pre-sign transactions for later submission — to trick Drift Security Council members into authorizing what they thought were routine operational signatures. Those signatures eventually handed admin control to the attackers, who whitelisted a fake collateral token (CVT), deposited 500 million units of it, and withdrew $285 million in real USDC, SOL, and ETH. The Solana feature was working as designed. Drift's contracts were doing what their admins instructed. The attack lived entirely in the gap between what the multisig signers thought they were approving and what they actually were.

Kelp DAO — $292 million, April 18, 2026. Attackers attributed by LayerZero to North Korea's Lazarus Group compromised two RPC nodes underpinning Kelp's cross-chain rsETH bridge, swapped the binaries running on them, and used a DDoS to force a verifier failover. The malicious nodes then told LayerZero's verifier that a fraudulent transaction had occurred. The exploit only worked because Kelp ran a 1-of-1 verifier configuration — meaning a single LayerZero-operated DVN had unilateral authority to confirm cross-chain messages. According to LayerZero, that 1-of-1 setup is the default in its quickstart guide and is currently used by roughly 40 percent of protocols on the network. In 46 minutes, an attacker drained 116,500 rsETH — about 18 percent of the entire circulating supply — and stranded wrapped collateral across 20 chains. Aave, which lists rsETH, was forced into a liquidity crisis as depositors raced for the exit.
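The quorum math behind that default is easy to model. Below is a toy Python sketch, not LayerZero's actual verification logic, and with illustrative DVN names, showing why a 1-of-1 configuration makes a single compromised verifier sufficient while an X-of-N quorum forces an attacker to compromise several independent operators:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DvnAttestation:
    dvn_id: str
    payload_hash: str

def message_verified(attestations, required_dvns, threshold):
    """Toy model of an X-of-N verifier: a cross-chain message is accepted
    only when at least `threshold` of the required DVNs attest to the
    same payload hash."""
    votes = {}
    for att in attestations:
        if att.dvn_id in required_dvns:
            votes.setdefault(att.payload_hash, set()).add(att.dvn_id)
    return any(len(signers) >= threshold for signers in votes.values())

# With a 1-of-1 configuration, one compromised DVN is sufficient:
assert message_verified(
    [DvnAttestation("lz-dvn", "0xfraud")], {"lz-dvn"}, threshold=1
)

# With a 2-of-3 quorum, the same lone attacker fails:
assert not message_verified(
    [DvnAttestation("lz-dvn", "0xfraud")],
    {"lz-dvn", "dvn-b", "dvn-c"},
    threshold=2,
)
```

The point of the sketch is that the threshold is pure configuration: nothing in the message-passing code changes between the safe and unsafe deployments.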

Neither attack required a smart-contract bug. Both required understanding how a configuration — multisig signing flows, default DVN counts, RPC redundancy — had been silently elevated from "operational detail" to "load-bearing security assumption."

Why Static Audits Miss This Class of Bug

The traditional DeFi audit is optimized for the wrong threat model. Firms like CertiK, OpenZeppelin, Trail of Bits, and Halborn excel at line-by-line code review and at running invariant tests against a frozen contract version. That catches reentrancy, access-control mistakes, integer overflows, and OWASP-style failures.

But the upgrade-introduced bug class has three properties that defeat that workflow:

  1. It lives in composed runtime behavior, not source code. A bridge's safety depends on its messaging layer's verifier configuration, the DVN set, the RPC redundancy of those DVNs, and the slashing exposure of those operators. None of that is in the Solidity an auditor reads.

  2. It is introduced by changes, not by initial deployment. Kelp's bridge presumably looked fine when LayerZero v2 was first integrated. The DVN count became dangerous only as TVL grew large enough to be worth attacking and as Lazarus invested in compromising RPC infrastructure.

  3. It requires behavioral differential testing — answering "was invariant X preserved under the new code path?" — which none of the major audit firms productize as a scheduled, post-upgrade service. You get a one-time audit at version 1.0, and a separate one-time audit at version 1.1, but no continuous statement that upgrading from 1.0 to 1.1 doesn't break properties that 1.0 relied on.
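Point 3, behavioral differential testing, can be illustrated with a deliberately simplified Python sketch. The vault, the "fast exit" whitelist, and the invariant are all hypothetical; the point is that the same invariant check is replayed against both versions, and only the upgrade-introduced code path breaks it:

```python
def vault_v1_withdraw(state, user, amount):
    """v1: withdrawals are capped by the user's recorded balance."""
    if state["balances"].get(user, 0) < amount:
        raise ValueError("insufficient balance")
    state["balances"][user] -= amount
    state["total"] -= amount
    return state

def vault_v2_withdraw(state, user, amount):
    """v2: a hypothetical 'fast exit' path that skips the balance check
    for whitelisted users -- exactly the kind of change a diff audit
    must flag, since no single line looks obviously wrong."""
    if user not in state["whitelist"] and state["balances"].get(user, 0) < amount:
        raise ValueError("insufficient balance")
    state["balances"][user] = state["balances"].get(user, 0) - amount
    state["total"] -= amount
    return state

def invariant_no_negative_balances(state):
    return all(bal >= 0 for bal in state["balances"].values())

def differential_check(withdraw_fn):
    """Replay one adversarial scenario and report whether the invariant held."""
    state = {"balances": {"alice": 10}, "whitelist": {"mallory"}, "total": 10}
    try:
        withdraw_fn(state, "mallory", 10**6)
    except ValueError:
        pass
    return invariant_no_negative_balances(state)

assert differential_check(vault_v1_withdraw) is True   # v1 preserves the invariant
assert differential_check(vault_v2_withdraw) is False  # v2 silently breaks it
```

In practice the scenarios would be generated by fuzzing or symbolic execution against a forked deployment, but the structure is the same: one invariant suite, two versions, and a verdict on the diff rather than on either version in isolation.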

The Q1 2026 statistics put a number on the gap. DeFi recorded $165.5 million in losses across 34 incidents in the entire quarter. April alone produced $606 million in 12 incidents. The deployment side scaled — over $40 billion in new TVL was added in Q1 — while audit capacity, incident response, and post-deployment validation stayed roughly flat. Something had to give.

Three Forces Making 2026 the Year This Bites at Scale

1. Upgrade cadence has accelerated at every layer

Every L1 and L2 is iterating faster. Ethereum's Pectra upgrade is in active rollout, Fusaka and Glamsterdam are in design, and Solana, Sui, and Aptos all ship execution-layer changes on multi-week cycles. Each chain-level upgrade can subtly shift gas semantics, signature schemes, or transaction ordering in ways that ripple into application-layer assumptions. Drift's exploit is a clean example — a Solana feature (durable nonces) intended for UX convenience became the carrier for an admin takeover.

2. Restaking compounds the upgrade surface area

The restaking stack — EigenLayer (still over 80 percent of the market), Symbiotic, Karak, Babylon, Solayer — adds a third dimension to the problem. A single LRT like rsETH sits atop EigenLayer, which sits atop native ETH staking. Each layer ships its own upgrades on its own schedule. A change to EigenLayer's slashing semantics has implicit consequences for every operator and every LRT consuming that operator's validation. When Kelp's bridge was drained, the contagion immediately threatened EigenLayer's TVL, because the same depositors had three-layer rehypothecation exposure they had never been forced to model. EigenCloud's roadmap, with its imminent EigenDA, EigenCompute, and EigenVerify expansions, will only widen that surface.

3. AI-driven DeFi activity moves faster than human review

Agent stacks like XION, Brahma Console, and Giza now interact with upgraded contracts at machine speed. Where a human treasurer might wait days after a contract upgrade before re-engaging, an agent backtests it, integrates it, and routes capital through it within hours. Any upgrade that quietly breaks an invariant gets stress-tested by adversarial flow before a human auditor can re-review it.

The Defensive Architecture Beginning to Emerge

The encouraging news is that the security-research community has not been idle. April 2026's losses have catalyzed concrete proposals across four fronts.

Continuous formal verification. Certora's long-running collaboration with Aave — funded as a continuous-verification grant rather than a one-shot engagement — is now a template. The Certora Prover automatically re-runs invariant proofs every time a contract changes, surfacing breakages before merge. Halmos and HEVM offer alternative open-source paths to the same goal. When formal verification recently caught a vulnerability in an integration with Ethereum's Electra upgrade that traditional audits had missed, it was not an outlier; it was a preview.

Upgrade-diff audit services. Spearbit, Zellic, and Cantina have started piloting paid services that audit the diff between two contract versions, not the new version in isolation. The model treats each upgrade as a new attestation and explicitly examines whether prior invariants are preserved. The Ethereum Foundation's $1M audit subsidy program, launched April 14, 2026, with a partner roster including Certora, Cyfrin, Dedaub, Hacken, Immunefi, Quantstamp, Sherlock, Spearbit, Zellic, and Zokyo, is partly aimed at expanding capacity for exactly this kind of work.
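A diff-based audit usually begins with a mechanical question: what did the upgrade add to the externally callable surface? A hedged Python sketch of that first pass, with invented ABI fragments for illustration:

```python
def abi_surface(abi):
    """Summarize a contract ABI as the set of externally callable
    signatures plus their state mutability -- enough to diff versions."""
    surface = set()
    for item in abi:
        if item.get("type") != "function":
            continue
        args = ",".join(inp["type"] for inp in item.get("inputs", []))
        surface.add((f'{item["name"]}({args})', item.get("stateMutability")))
    return surface

def upgrade_diff(abi_v1, abi_v2):
    """Return what an upgrade added and removed. Newly added state-changing
    entry points deserve the closest scrutiny in a diff audit."""
    s1, s2 = abi_surface(abi_v1), abi_surface(abi_v2)
    return {"added": s2 - s1, "removed": s1 - s2}

v1 = [{"type": "function", "name": "withdraw",
       "inputs": [{"type": "uint256"}], "stateMutability": "nonpayable"}]
v2 = v1 + [{"type": "function", "name": "setVerifier",
            "inputs": [{"type": "address"}], "stateMutability": "nonpayable"}]

diff = upgrade_diff(v1, v2)
assert diff["added"] == {("setVerifier(address)", "nonpayable")}
assert diff["removed"] == set()
```

Surface diffing is only the triage step; the attestation work described above then asks whether the prior version's invariants still hold across every changed path.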

Chaos engineering and runtime monitoring. OpenZeppelin Defender and emerging tools are wiring forked-mainnet simulations into CI pipelines, allowing protocols to replay adversarial scenarios against every proposed upgrade. The discipline is borrowed directly from Web2 SRE practice — and is overdue in DeFi.

Time-locked upgrade escrows. The Compound Timelock v3 pattern, where every governance-approved upgrade sits in a public queue for a fixed delay before execution, gives the community time to spot issues that internal review missed. It does not prevent upgrade-introduced bugs, but it does buy time for them to be discovered before exploitation.
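The pattern itself is small enough to sketch. Below is a minimal Python model of queue-then-execute, assuming a fixed delay and a publicly visible queue; Compound's real implementation lives in Solidity and this is only an analogy:

```python
class Timelock:
    """Minimal queue-then-execute sketch: every approved upgrade sits in
    a public queue for `delay` seconds before it is allowed to run."""
    def __init__(self, delay):
        self.delay = delay
        self.queue = {}  # proposal id -> earliest execution time

    def schedule(self, proposal_id, now):
        self.queue[proposal_id] = now + self.delay

    def execute(self, proposal_id, now):
        eta = self.queue.get(proposal_id)
        if eta is None:
            raise RuntimeError("proposal was never queued")
        if now < eta:
            raise RuntimeError("timelock delay has not elapsed")
        del self.queue[proposal_id]
        return True

    def cancel(self, proposal_id):
        # The community's escape hatch if public review finds a problem.
        self.queue.pop(proposal_id, None)

lock = Timelock(delay=48 * 3600)
lock.schedule("upgrade-v1.1", now=0)
try:
    lock.execute("upgrade-v1.1", now=3600)  # too early: rejected
except RuntimeError:
    pass
assert lock.execute("upgrade-v1.1", now=48 * 3600)
```

The security value lives entirely in the delay window: the contract cannot tell a good upgrade from a bad one, but the public queue gives every watcher a guaranteed head start.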

The TradFi Comparison: Continuous Audit Is the Norm Outside DeFi

Traditional finance solved the analogous problem decades ago. SOC 2 Type II, the standard most institutional service providers are held to, is not a one-time attestation; it is a six-to-twelve-month continuous-audit window. Basel III's counterparty-risk framework requires banks to update their capital models as exposures change, not annually. A custody bank that upgraded a settlement system would not be allowed to operate on a "we audited v1; v2 was just a small change" basis.

DeFi's prevailing culture — "audit once, deploy forever, re-audit only on major rewrites" — is the practice TradFi explicitly rejected after the 2008 crisis. At the current loss rate, the industry is on track for $2 billion or more in annual upgrade-exploit losses. That is large enough to attract regulators who already view DeFi auditing standards as substandard, and it is large enough to make continuous validation a precondition for institutional capital.

What This Means for Builders, Depositors, and Infrastructure

For protocol teams, the operational mandate is straightforward, even if it is not cheap: every upgrade must be treated as a new release that re-derives, not inherits, its security guarantees. That means scheduled re-audits on a diff basis, formal-verification specs that travel with every governance proposal, and meaningful timelocks before execution. It means publishing — Aave-style — a quantified cascade-risk framework that names which protocols you depend on and what your exposure looks like when one of them fails.

For depositors, the lesson is that "this protocol was audited" is no longer a useful signal on its own. The right question is "when was the last continuous-verification run, against what invariants, and on what version of the deployed code?" Protocols that cannot answer that should be priced accordingly.

For infrastructure providers — RPC operators, indexers, custodians — the Kelp incident is a direct warning. The compromise lived in two RPC nodes whose binaries were silently swapped. Anyone running infrastructure that participates in cross-chain verification (DVNs, oracle nodes, sequencers) is now part of the security model whether they signed up to be or not. Reproducible builds, attested binaries, multi-operator quorums above 1-of-1 defaults, and signed-binary verification at startup are no longer optional.
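Signed-binary verification at startup is the cheapest of those controls to adopt. A minimal Python sketch, assuming the attested digest comes from a reproducible-build or release-provenance record; the digest source and node binary here are placeholders:

```python
import hashlib
import hmac

def verify_binary(binary_bytes, attested_digest):
    """Refuse to start if the running binary does not match the digest
    published in a signed release attestation."""
    actual = hashlib.sha256(binary_bytes).hexdigest()
    # hmac.compare_digest gives a constant-time comparison.
    if not hmac.compare_digest(actual, attested_digest):
        raise SystemExit("binary digest mismatch -- refusing to start")
    return True

release = b"...node binary bytes..."
good_digest = hashlib.sha256(release).hexdigest()
assert verify_binary(release, good_digest)

tampered = release + b"\x00patched"
try:
    verify_binary(tampered, good_digest)
    raise AssertionError("tampered binary must not pass")
except SystemExit:
    pass
```

A check like this would not have stopped the RPC operators from being targeted, but it would have turned a silent binary swap into a loud startup failure.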

Chain-level upgrades — Pectra and Fusaka on Ethereum, parallel-execution rollouts on Solana and Aptos, Glamsterdam's throughput targets — will keep widening the surface. The protocols and infrastructure operators who survive 2026 will be the ones who adopted continuous validation early enough that their next routine upgrade is also their next provable security checkpoint.

BlockEden.xyz operates production RPC, indexer, and node infrastructure across Sui, Aptos, Ethereum, Solana, and a dozen other chains. We treat every protocol upgrade — at the chain layer or the application layer — as a new security event, not a maintenance task. Explore our enterprise infrastructure to build on a foundation designed to survive the upgrade cadence ahead.


Exploring User Perceptions of Security Auditing in the Web3 Ecosystem

· 6 min read
Dora Noda
Software Engineer

For professionals in the Web3 space, a security audit is not just a technical necessity but a critical milestone in a project's lifecycle. However, a groundbreaking study from the University of Macau and Pennsylvania State University—based on in-depth interviews with 20 users and an analysis of over 900 Reddit posts—reveals a stark reality: a significant gap exists between the industry's auditing practices and the end-user's actual perceptions, trust models, and behavioral decisions.

This report is more than an academic discussion; it serves as an intelligence briefing for all Web3 practitioners. It identifies the pain points in the current audit ecosystem and provides a clear strategic roadmap for leveraging audits more effectively to build trust and guide user behavior.

Core Insights: How Do Users Perceive Your "Security Certificate"?

The study systematically reveals users' cognitive biases and behavioral patterns throughout the audit information chain:

1. The "Tunnel Vision" Effect in Information Acquisition

The primary, and often sole, channel through which users access audit information is the project's official website. All interviewees confirmed this behavior pattern.

  • Strategic Implication: Your website is the main battlefield for communicating the value of an audit. Do not assume users will dig deeper into an audit firm’s website or cross-reference information on-chain. How audit information is presented on your site directly shapes the user's first impression and trust foundation.

2. The Bipolarization of Perceived Information Value

Users generally find the information value of current audit reports to be insufficient, which manifests in two ways:

  • Insufficient Value for Experts: Technically proficient users feel that many reports are “hurried, formulaic, and repetitive,” lacking depth and meaningful insights.
  • Prohibitively High Barrier for Novices: Non-technical users are overwhelmed by professional jargon and code, making comprehension difficult. An external review of audit firm websites reinforces this: more than a third of firms lack detailed descriptions of their service processes, and most inadequately disclose their auditors’ professional expertise.
  • Strategic Implication: The current one-size-fits-all PDF report format is failing to meet the needs of different user segments. Projects and audit firms must consider layered, interactive disclosure strategies—concise summaries, visual risk assessments, and full technical details for expert scrutiny.

3. The Fragility of the Trust Model: Reliance on Reputation Amidst Widespread Skepticism

Users cite an audit firm’s “reputation” as the primary criterion for judging quality, but this trust model is fragile.

  • The Ambiguity of Reputation: Many interviewees could not name more than one audit firm, suggesting that users’ perception of reputation is vague and easily influenced.
  • Fundamental Doubts about Independence: Because audit services are paid for by the project, users widely question their impartiality. One interviewee summarized: “It’s unlikely they’ll openly criticize or ‘bring down’ their clients.” Reddit discussions echo similar skepticism.
  • Strategic Implication: User trust is not built on technical details but on perceptions of independence and impartiality. Proactively increasing audit process transparency—such as disclosing workflows with clients—is more critical than simply publishing a technical report.

4. The True Value of an Audit: "Proof of Effort"

Despite doubts about effectiveness and fairness, there is near-universal consensus: the act of undergoing an audit itself is a powerful signal of a project’s commitment to security and responsibility.

  • One participant explained: it shows “that the application is serious about its security and at least willing to invest in an audit.”
  • Strategic Implication: An audit is not just a technical safeguard but also a crucial marketing and trust-building tool. Its symbolic meaning far outweighs how much of the content users actually understand. Teams should emphasize their investment in independent audits in marketing and community communications.

5. User Decision-Making Behavior: Binary and Asymmetrical

  • Focus on "Presence," Not "Quality": Users spend very little time reviewing audit information—typically less than 10 minutes. They care more about whether an audit exists than about its details.
  • Asymmetrical Influence: Positive audit results significantly boost community confidence. Negative results do generate concern but have limited deterrent effects for high-risk users.
  • Strategic Implication: The binary “Audited/Not Audited” status is the single most influential variable in user decision-making. Projects should ensure this status is clearly visible. Audit firms, in turn, can design their report conclusions to be more impactful for user decision-making.

Future-Facing Design and Strategic Transformation

Based on these insights, the study provides a clear action plan for practitioners:

  1. For Audit Firms: Reshape Reports and Service Models
  • From Static to Interactive: Move away from traditional PDF reports toward interactive web platforms with layered data, clickable code snippets, and built-in feedback mechanisms.
  • Embrace Radical Transparency: Proactively disclose audit methodologies, key processes, and even client interactions (minus core secrets) to demonstrate independence and impartiality.
  • Drive Industry Standardization: The absence of standards erodes industry credibility. Firms should help establish uniform practices, risk classifications, and reporting norms—and educate the community.
  2. For Project Teams: Integrate Audits into UX & Communication Strategy
  • Optimize Information Presentation: Clearly display audit information on your website. A concise “Audit Summary” page that links to the full report is more effective than a simple PDF link.
  • Leverage "Proof of Effort": Frame the completion of a third-party audit as a core trust milestone in marketing, community AMAs, and whitepapers.
  • Embrace an Educational Role: Partner with auditors to co-host security education events. This raises awareness while boosting trust in both the project and the audit brand.
  3. For Community and Ecosystem Builders: Harness the Power of Collective Intelligence
  • Empower the Community: Support technical experts or KOLs in providing third-party interpretations and reviews of audit reports.
  • Explore DAO Governance: Experiment with models where audits are commissioned or overseen by a DAO. This approach can strengthen independence and credibility through community voting and incentives.

In conclusion, this research sounds a clear warning: the Web3 industry can no longer treat auditing as an isolated technical function. Practitioners must confront the gap between current practices and user perception, placing user experience and trust-building at the center. Only by increasing transparency, optimizing communication, and driving standardization can we collectively build a safer and more trustworthy decentralized future.