OWASP SC Top 10: 2026 Shows $905M in Losses Across 122 Incidents—Did DeFi Security Actually Get Worse?

I’ve spent the last three years hunting vulnerabilities in smart contracts—finding critical bugs in three major DeFi protocols before they could be exploited. Every discovery reinforces an uncomfortable truth: we’re in an arms race we might be losing.

The 2026 OWASP Reality Check

The OWASP Smart Contract Top 10: 2026 just dropped, and the numbers are sobering. It’s built on an analysis of 122 deduplicated security incidents from 2025, representing $905.4 million in smart contract losses alone. Not total crypto losses—just smart contract vulnerabilities.

What troubles me most? This happened despite the DeFi auditing industry reaching maturity. Firms like Trail of Bits, OpenZeppelin, and Certora are conducting thousands of audits annually. Yet the losses haven’t decreased—if anything, the attacks have become more sophisticated.

The New Threat: Proxy & Upgradeability Vulnerabilities

The 2026 ranking introduces SC10: Proxy & Upgradeability Vulnerabilities as an entirely new category. This isn’t just another vulnerability class—it signals a fundamental shift in the threat landscape.

What makes proxy vulnerabilities particularly insidious:

Storage Collisions: When upgrade patterns don’t maintain consistent storage layouts, attackers can overwrite critical variables like owner addresses. I’ve seen this allow complete protocol takeovers.

Initialization Bugs: Uninitialized proxies or reinitializers that don’t properly lock after use. The Wormhole case showed how catastrophic forgotten initialization can be.

Malicious Upgrades: Compromised admin keys or weak governance allowing attackers to upgrade contracts with malicious code. This is every protocol’s nightmare scenario.

Delegatecall Risks: Proxies that delegatecall into implementations containing selfdestruct could historically be destroyed outright. (EIP-6780 has since narrowed selfdestruct’s semantics, but careless delegatecall targets remain dangerous.)
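The storage-collision failure mode is easiest to see in miniature. Here’s a toy Python model of a proxy’s slot-based storage (all names and layouts invented for illustration): an upgrade that shifts the variable layout lets a routine-looking setter clobber the owner slot.

```python
# Toy model of proxy storage: the proxy owns one slot->value mapping that
# survives upgrades, while each implementation version assumes its own layout.
# All names and layouts here are invented for illustration.

storage = {}  # proxy storage, shared across implementation versions

# V1 layout: slot 0 = owner, slot 1 = totalSupply
def v1_initialize(owner):
    storage[0] = owner
    storage[1] = 0

# V2 (buggy upgrade): a new `fee` variable was added at slot 0, shifting the
# declared position of owner -- but the proxy's existing data didn't move.
def v2_set_fee(fee):
    storage[0] = fee  # writes where V1 stored the owner address

v1_initialize("0xALICE")
v2_set_fee(30)

print(storage[0])  # 30 -- the owner address is gone; takeover is now trivial
```

This is why upgrade-safe patterns (storage gaps, append-only layouts, tooling that diffs layouts between versions) exist: the proxy’s data never moves, only the code’s assumptions about it do.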

Attack Evolution: Beyond Simple Bugs

The 2026 ranking reflects something deeply concerning: attackers are no longer targeting simple code bugs. They’re chaining vulnerabilities together in multi-stage attacks:

  1. Flash loan for massive capital (no collateral required)
  2. Oracle manipulation through low-liquidity DEX trades
  3. Exploit mispriced assets in lending protocols
  4. Repay flash loan, pocket millions

All within a single atomic transaction.
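As a rough illustration of those four steps (all numbers invented; a toy constant-product pool, and a lender that naively prices collateral at spot), the whole attack fits in a few lines of Python:

```python
# Toy simulation of the four-step attack: flash loan -> pump a thin pool's
# spot price -> borrow against inflated collateral -> repay the loan.
# Pool sizes, LTV, and fee are illustrative; no real protocol is modeled.

def spot_price(pool):
    token, usdc = pool
    return usdc / token  # USDC per TOKEN

def swap_usdc_for_token(pool, usdc_in):
    token, usdc = pool
    k = token * usdc                      # constant-product invariant x*y=k
    token_out = token - k / (usdc + usdc_in)
    pool[0], pool[1] = token - token_out, usdc + usdc_in
    return token_out

pool = [1_000.0, 1_000.0]                 # thin DEX pool: price starts at $1
loan = 10_000.0                           # 1. flash loan, no collateral needed
tokens = swap_usdc_for_token(pool, loan)  # 2. buy TOKEN, pumping spot to $121
collateral = tokens * spot_price(pool)    # 3. lender values TOKEN at spot...
borrowed = 0.5 * collateral               #    ...and lends at 50% LTV
profit = borrowed - loan * 1.0009         # 4. repay loan + 0.09% fee
print(f"profit: ~${profit:,.0f}")         # roughly $45,000, atomically
```

Nothing here requires a code bug in any single contract: every component behaves exactly as written. The exploit lives in the composition.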

Oracle manipulation alone accounted for $8.8 million in losses in 2025. Yet over 60% of new DeFi deployments still rely on single-source oracles despite decentralized alternatives like Chainlink being readily available.

Are Traditional Audits Fundamentally Insufficient?

Here’s the question that keeps me up at night: If we have mature auditing firms conducting thousands of audits, why are losses increasing rather than decreasing?

Consider the Balancer incident from November 2025. The protocol suffered a $128 million exploit despite having 11 audits from 4 top-tier firms (OpenZeppelin, Trail of Bits, Certora, ABDK). The attacker manipulated internal vault logic to drain liquidity across multiple blockchains.

The problem? Traditional code audits analyze contracts in isolation. They don’t model:

  • Adversarial multi-protocol interactions
  • Economic attack scenarios
  • Governance failure modes
  • Cross-chain attack vectors
  • Novel vulnerability compositions

Audits provide a snapshot of code correctness at a point in time. They can’t predict how protocols will be integrated with others, how governance will be compromised, or how multiple “secure” components can be exploited when composed adversarially.

What Actually Works?

Based on my research and field experience, here’s what shows promise:

Formal Verification: Mathematical proofs of security properties. Certora’s work on Uniswap V4 found vulnerabilities missed by manual audits. Expensive and complex, but effective for critical invariants.

Economic Simulation: Stress-testing protocols under adversarial economic conditions. Most audits skip this entirely.

Continuous Monitoring: Real-time detection of suspicious transactions. Security isn’t a one-time event; it’s a process.

Bug Bounties: Incentivize white-hat researchers to find vulnerabilities before black-hats do. Immunefi and similar platforms have prevented hundreds of millions in losses.

Design for Minimal Upgradeability: The best proxy vulnerability is the one that doesn’t exist. Consider immutable core contracts with upgradeable periphery only.

The Hard Question

Did DeFi security actually get worse in 2025, or are we just getting better at measuring losses we’ve always had?

I lean toward the former. Protocol complexity has exploded. Composability—DeFi’s superpower—has become an attack surface. Every protocol integration multiplies the potential attack vectors exponentially.

We need to move beyond checkbox security (“we got audited by X”) toward defense-in-depth: formal verification + economic modeling + continuous monitoring + bug bounties + security-first culture + minimal upgradeability.

The current approach isn’t working. $905.4 million in losses from 122 incidents proves it.

What’s your experience? Are we building on fundamentally fragile foundations, or is this the inevitable cost of innovation?

:locked: Trust but verify, then verify again.



This hits close to home. I spent the last month auditing a DeFi protocol that had already been audited by two other firms—and I still found three critical vulnerabilities, including a storage collision risk in their proxy pattern.

The Auditor’s Dilemma

Sophia, you’re absolutely right about the fundamental limitations. As someone who does this work daily, I need to be honest about what audits can and cannot provide:

What audits CAN do:

  • Identify known vulnerability patterns (reentrancy, integer overflow, access control issues)
  • Verify code correctness against specifications
  • Check for common security anti-patterns
  • Test edge cases in isolated contract behavior

What audits CANNOT do:

  • Predict novel attack vectors (attackers are creative)
  • Model adversarial multi-protocol interactions
  • Guarantee security against future integrations
  • Catch governance and economic attack scenarios
  • Provide ongoing security (code changes post-audit)

The Balancer Case: A Wake-Up Call

The Balancer incident is particularly sobering for those of us in the auditing space. 11 audits from 4 top firms, yet still exploited for $128M. This isn’t a failure of those firms—it’s a systemic limitation of the audit model.

Trail of Bits did their most recent Balancer audit in September 2022. The November 2025 exploit happened over 3 years later. How many integrations happened in that time? How many protocol upgrades? How many new composability patterns were introduced?

The audit scope likely covered the contract code at a specific point in time, with specific assumptions about how it would be used. But protocols evolve. They get integrated with dozens of other protocols. Attack surfaces multiply exponentially.

Beyond Point-in-Time Snapshots

Here’s what I’m seeing work in practice:

1. Layered Security Approach

Don’t rely on a single audit. Layer multiple security measures:

  • 2-3 audits from different firms (different perspectives = different bugs found)
  • Formal verification for critical invariants
  • Continuous bug bounty programs
  • Real-time monitoring and alerting
  • Economic simulation and stress testing

2. Audit + Formal Verification Combo

Aave V4 is doing this right. They spent $1.5 million on security:

  • Multiple manual audit rounds (ChainSecurity, Trail of Bits, Blackthorn)
  • Formal verification with Certora from the earliest design stages
  • Ongoing security testing

Formal verification found issues that manual audits missed. Why? Because formal methods prove that specified properties hold under all possible states and inputs, not just the test cases auditors happened to think of.

3. Continuous Security, Not Checkbox Compliance

The protocols surviving long-term treat security as an ongoing process:

:memo: Pre-deployment: Multiple audits + formal verification
:magnifying_glass_tilted_left: Launch: Bug bounty program with meaningful rewards
:shield: Ongoing: Real-time transaction monitoring, automated circuit breakers
:light_bulb: Post-incident: Transparent post-mortems and architectural improvements

The Oracle Problem

Your point about 60% of new deployments still using single-source oracles is mind-blowing. We have Chainlink providing decentralized, manipulation-resistant price feeds. We have TWAP (time-weighted average price) patterns that are harder to manipulate.

Yet developers keep using pool.getPrice() from a single low-liquidity DEX because:

  1. It’s easier to integrate
  2. It’s cheaper (no Chainlink subscription cost)
  3. They assume “our protocol is too small to attract attackers”

That third assumption? Catastrophically wrong. Flash loan attackers scan for these vulnerabilities automatically. Protocol size doesn’t matter when the attack is profitable and can be executed in a single transaction.

Practical Advice for Developers

If you’re building a protocol today:

:white_check_mark: DO:

  • Use Chainlink or similar decentralized oracles for price data
  • Implement TWAP even if using on-chain DEX prices
  • Add circuit breakers that pause on unusual activity
  • Deploy immutable core contracts when possible
  • Use timelocks (48-72 hours) for any upgradeable components
  • Run formal verification on critical logic
  • Launch with meaningful bug bounty ($100K+ for critical findings)

:cross_mark: DON’T:

  • Trust single-source price oracles
  • Use proxy patterns unless absolutely necessary
  • Deploy upgradeable contracts without timelock governance
  • Assume “audited = secure”
  • Skip formal verification for critical invariants
  • Ignore economic attack scenarios in testing

Testing for Chained Exploits

One tool that’s been helpful: Foundry’s fork testing combined with Echidna for property-based testing.

I can now simulate flash loan scenarios, oracle manipulations, and multi-protocol interactions in local tests before deployment. It’s not perfect, but it catches issues traditional unit tests miss.

Example: Testing “can an attacker profit from manipulating our oracle then borrowing?” requires modeling both the DEX state AND the lending protocol state in adversarial conditions. Traditional audits rarely go this deep.
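A Python stand-in for that kind of property test (this is not Echidna or Foundry, just the shape of the check; pool sizes, LTV, and fee are invented) scans flash-loan sizes against the property “no loan size lets the attacker profit”:

```python
# Property-style check: for a toy constant-product pool and a lender that
# trusts spot price, scan flash-loan sizes and ask whether ANY of them lets
# the attacker walk away with a profit. All parameters are illustrative.

def attack_profit(loan, token=1_000.0, usdc=1_000.0, ltv=0.5, fee=0.0009):
    k = token * usdc
    token_out = token - k / (usdc + loan)        # tokens bought with the loan
    spot = (usdc + loan) / (token - token_out)   # manipulated spot price
    borrowed = ltv * token_out * spot            # loan against inflated collateral
    return borrowed - loan * (1 + fee)           # net of flash-loan repayment

profitable = [L for L in range(1_000, 100_001, 1_000) if attack_profit(L) > 0]
print(f"{len(profitable)} of 100 tested loan sizes are profitable")
# A non-empty list is a counterexample: the "attacker can't profit" property fails.
```

A unit test would try one or two loan sizes and pass; the property-style sweep shows the attack is profitable across almost the entire range.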

The Hard Truth

Security is getting more complex faster than our tools are improving. Every new protocol adds compositional attack surface. Every integration multiplies risk.

We need better tools, better methodologies, and more realistic expectations about what audits can achieve.

The industry needs to stop marketing “audited by X” as a security guarantee. It’s not. It’s one component of defense-in-depth.

Test twice, deploy once. :shield:



As someone building a DeFi protocol, this conversation is both validating and terrifying. We’re living this reality every day.

The Upgradeability Paradox

Here’s the impossible choice we faced six months ago:

Option A: Immutable Contracts

  • :white_check_mark: No proxy vulnerabilities
  • :white_check_mark: Maximum trust from users
  • :white_check_mark: Simpler architecture
  • :cross_mark: Can’t fix bugs post-deployment
  • :cross_mark: Can’t adapt to market changes
  • :cross_mark: One critical bug = rebuild from scratch

Option B: Upgradeable Contracts

  • :white_check_mark: Can fix bugs quickly
  • :white_check_mark: Can adapt to new DeFi primitives
  • :white_check_mark: Can respond to market conditions
  • :cross_mark: Admin keys = centralization risk
  • :cross_mark: Governance attack surface
  • :cross_mark: Storage collision risks
  • :cross_mark: User trust issues

We chose Option B with heavy safeguards, but I still lose sleep over it.

Our Security Architecture

Here’s what we implemented (sharing in case helpful):

1. 72-Hour Timelock on All Upgrades

Any contract upgrade proposal goes through:

  • Proposal submission
  • 72-hour waiting period (publicly visible)
  • Community review
  • Multi-sig execution (5-of-9)

This saved us once. A proposal was submitted that looked innocent but would have allowed draining user funds. The community caught it during the timelock period.
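The flow above reduces to a queue with an enforced execution delay. A minimal sketch (the 72 hours mirrors our setup; everything else is illustrative, and a real on-chain timelock like OpenZeppelin’s TimelockController does considerably more):

```python
# Minimal timelock sketch: proposals queue with an "earliest execution" time,
# and execution before the delay elapses is rejected. Illustrative only.

DELAY = 72 * 3600  # 72-hour public review window, in seconds

class Timelock:
    def __init__(self):
        self.eta = {}  # proposal id -> earliest allowed execution time

    def propose(self, proposal_id, now):
        self.eta[proposal_id] = now + DELAY  # visible to the community from here

    def execute(self, proposal_id, now):
        if proposal_id not in self.eta or now < self.eta[proposal_id]:
            raise RuntimeError("timelock: review window still open")
        del self.eta[proposal_id]
        return "executed"

tl = Timelock()
tl.propose("upgrade-strategy-v2", now=0)
try:
    tl.execute("upgrade-strategy-v2", now=3_600)  # one hour in: blocked
except RuntimeError as err:
    print(err)
print(tl.execute("upgrade-strategy-v2", now=DELAY))  # window elapsed: allowed
```

The security value isn’t the delay itself; it’s that the delay makes every pending upgrade public long enough for reviewers (and exiting users) to act.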

2. Immutable Core, Upgradeable Periphery

Our core vault logic is immutable. The math for deposit/withdrawal calculations can never change. But:

  • Strategy contracts are upgradeable (need to adapt to yield opportunities)
  • Fee collection is upgradeable (need flexibility)
  • Access control is upgradeable (need to add/remove integrations)

This limits the blast radius of a governance attack. Even if governance is compromised, core user funds logic can’t be changed.

3. Oracle Defense in Depth

We learned the hard way. We started with a single-source oracle (a Uniswap V2 pool); in our first month, someone tried to manipulate it.

Now we use:

  • Chainlink as primary price source
  • Uniswap V3 TWAP as secondary (30-minute window)
  • Circuit breaker if primary/secondary diverge >5%
  • Emergency pause if manipulation detected
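The primary/secondary divergence check is simple enough to sketch (the 5% threshold matches the setup above; function names are illustrative):

```python
# Sketch of the oracle circuit breaker: use the primary (Chainlink) price,
# but refuse to proceed if it diverges from the secondary (TWAP) by >5%.
# Function names and the threshold are illustrative.

def checked_price(primary, secondary, max_divergence=0.05):
    divergence = abs(primary - secondary) / secondary
    if divergence > max_divergence:
        raise RuntimeError("circuit breaker: oracle divergence, pausing")
    return primary  # primary source, sanity-checked against the secondary

print(checked_price(100.0, 102.0))  # ~2% apart: accepted
try:
    checked_price(100.0, 180.0)     # secondary far off: refuse to price anything
except RuntimeError as err:
    print(err)
```

Failing closed like this trades liveness for safety: a paused protocol loses some fees, a mispriced one loses its TVL.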

Cost? ~$3K/month in Chainlink feeds. Worth every penny compared to a $2M exploit like NewGold Protocol.

The Flash Loan Wake-Up Call

Three months ago, we got hit by an attempted flash loan attack. Attacker’s strategy:

  1. Flash loan 10M USDC from Aave
  2. Buy our governance token on thin DEX liquidity
  3. Artificially pump price 300%
  4. Borrow maximum from our lending pool using inflated collateral
  5. Dump governance token
  6. Repay flash loan
  7. Keep borrowed assets

What saved us: TWAP oracle + circuit breaker. The 30-minute TWAP didn’t reflect the price spike, so borrowing limits weren’t inflated. Circuit breaker paused the protocol when it detected the price deviation.

Cost of that defense? About 20 hours of development and $3K/month in oracle costs.

Cost of not having it? Would have been ~$1.2M based on our TVL.
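Why the 30-minute TWAP held is plain arithmetic (a rough sketch; real Uniswap V3 TWAPs average tick values rather than raw prices, but the damping effect is the same):

```python
# A single-block 300% spike barely moves a 30-minute average. Prices invented.

prices = [1.00] * 30   # 30 one-minute observations at $1.00
prices[-1] = 4.00      # attacker pumps spot +300% in the final block

twap = sum(prices) / len(prices)
print(f"spot: ${prices[-1]:.2f}, 30-min TWAP: ${twap:.2f}")
# TWAP is ~$1.10, so collateral limits computed from it stay sane.
```

To move the TWAP meaningfully, the attacker would have to hold the manipulated price across many blocks, paying arbitrageurs the whole way, which breaks the single-transaction flash-loan economics.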

The Real Cost of Security

Sarah’s point about Aave spending $1.5M on security resonates. Here’s what our security budget looked like:

  • 2 audits (Trail of Bits + OpenZeppelin): $180K
  • Bug bounty (ongoing): $250K pool
  • Formal verification (Certora, limited scope): $75K
  • Oracle infrastructure (Chainlink): $36K/year
  • Monitoring & incident response: $50K setup + $10K/month
  • Insurance (Nexus Mutual coverage): $45K/year

Total Year 1: ~$700K

For a protocol with $25M TVL, that’s 2.8% of TVL spent on security. Expensive? Yes. Cheaper than a hack? Absolutely.

Attack Evolution: Personal Experience

The sophistication is increasing rapidly. We track attempted attacks (thanks to monitoring):

2024 attacks: Basic front-running, simple reentrancy attempts
2025 attacks: Flash loans + oracle manipulation combos
2026 attacks: Multi-protocol compositions, governance poisoning attempts

Recent example: Attacker tried to:

  1. Manipulate our oracle through a connected DEX
  2. Use inflated prices to borrow from us
  3. Simultaneously exploit a related protocol’s liquidity
  4. Chain the profits across three different DeFi protocols

All of it designed to execute in a single transaction: if any step had failed, the entire attack would have reverted.

This level of sophistication requires defenders to model the entire DeFi ecosystem, not just our own contracts.

What I Wish I Knew Earlier

1. Budget 3-5% of TVL for security from day one

Trying to retrofit security is 10x more expensive than building it in.

2. Don’t launch upgradeable without timelock governance

No matter how much you trust your team. Humans make mistakes. Keys get compromised.

3. Treat audits as minimum viable security

Audits are your starting point, not your finish line. You need continuous security.

4. Community monitoring is underrated

Some of our best security catches came from community members reviewing transactions during timelock periods.

5. Economic attacks are harder to defend than code bugs

Sophia’s point about traditional audits missing economic attacks is spot-on. We need tools that simulate adversarial economic conditions, not just test code correctness.

The Uncomfortable Question

Are we building DeFi on fundamentally fragile foundations?

Honestly? Kind of. The composability that makes DeFi powerful also makes it incredibly fragile. Every protocol integration multiplies attack surface exponentially.

But I’m not ready to give up on upgradeable contracts. We just need:

  • Stronger governance safeguards
  • Better economic modeling tools
  • More realistic expectations from users
  • Security-first culture, not move-fast-and-break-things

The protocols that survive long-term will be those that treat security as a process, not a one-time audit checkbox.



Reading this thread is making me realize how much I don’t know—and honestly, it’s a bit scary. I’ve been building DeFi frontends for two years, but the security complexity you’re all discussing feels overwhelming.

The Newcomer’s Perspective

When I started in Web3, I thought “audited by [big firm name]” meant a protocol was safe to use. This thread is making me realize that’s… not really true?

Some questions I’m struggling with:

1. How can regular users (or even junior devs like me) actually assess if a protocol is safe?

I can read Solidity code reasonably well now. But understanding oracle manipulation, flash loan attacks, and proxy vulnerabilities? That’s a whole different level. And I’m a developer—what about non-technical users?

2. Should we just avoid upgradeable contracts entirely?

Diana’s point about the upgradeability paradox resonates. But from a user perspective, I’d rather trust an immutable contract that can’t be changed (even if it has limitations) than an upgradeable one where the team could theoretically drain my funds.

Is that naive? Am I missing something important?

3. If Balancer had 11 audits and still got exploited, how is anyone supposed to stay safe?

This feels like a systemic problem, not a “this one protocol messed up” problem. If top-tier audit firms can’t catch these vulnerabilities, what chance do smaller protocols have?

Things That Confuse Me

Audit Reports:
I’ve read a few audit reports trying to understand them better. But they’re mostly “Issue: Medium Severity, Status: Acknowledged” with technical explanations I don’t fully grasp.

How do you translate “storage collision in proxy pattern” to “this could lose all user funds” for non-experts?

Security Marketing:
Every protocol’s homepage says “Audited by X” prominently. After reading Sarah’s breakdown of what audits can’t do, this feels like misleading marketing.

Should protocols be required to clearly state audit limitations? Like “Audited by X on [date], scope limited to [Y], does not cover [Z]”?

The Cost of Security:
Diana mentioned spending $700K on security for a $25M TVL protocol. That’s great if you have VC funding. But what about smaller community-driven projects that can’t afford multiple audits, formal verification, and Chainlink feeds?

Are they just… doomed to be insecure? Does DeFi only work for well-funded projects?

What I Wish Existed

1. A Simple Security Checklist for Users

Something like:

  • :white_check_mark: Multiple audits from different firms (within last 6 months)
  • :white_check_mark: Bug bounty program with meaningful rewards
  • :white_check_mark: Timelock governance (48+ hours)
  • :white_check_mark: Decentralized oracles (Chainlink/TWAP)
  • :white_check_mark: Immutable core contracts or strong upgrade safeguards
  • :cross_mark: Single-source price oracle
  • :cross_mark: Upgradeable without timelock
  • :cross_mark: Admin keys without multi-sig

Would this actually help assess risk? Or am I oversimplifying?

2. Better Security Communication

I want protocols to be honest about their security posture. Not just “We’re audited!” but:

  • When were audits done?
  • What was the scope?
  • What vulnerabilities were found and how were they fixed?
  • What security assumptions are being made?
  • What are the known risks?

3. More Accessible Security Education

Resources for developers moving from Web2 to Web3 that explain:

  • Why single-source oracles are dangerous (with concrete examples)
  • How flash loan attacks actually work (step-by-step)
  • When to use upgradeable vs immutable contracts
  • How to test for economic attacks, not just code correctness

Am I Being Too Paranoid?

Sometimes I wonder if I’m overthinking this. Millions of people use DeFi protocols daily without issues. The vast majority of transactions complete successfully.

But then I read about $905M in losses from 122 incidents, and I think… maybe paranoia is warranted?

How do you balance “DeFi is powerful and innovative” with “DeFi is fundamentally risky and complex”?

Questions for the Experts

For Sophia: When you audit a protocol, what are the top 3 red flags that make you immediately concerned?

For Sarah: If you were teaching a bootcamp on smart contract security, what’s the most important concept you’d want every new developer to understand?

For Diana: Knowing what you know now, if you were starting your protocol from scratch today, what would you do differently?

I’m here to learn. Thanks for sharing your expertise—it’s genuinely helpful, even if it’s also terrifying. :sweat_smile:



This is the most important security conversation I’ve seen in months. Let me address some systemic issues and potential solutions.

The Compositional Security Problem

Emma’s confusion is completely justified. We’re asking users and even developers to reason about security properties that emerge from the composition of multiple protocols, not from individual contracts.

Here’s the core problem:

Protocol A might be “secure” in isolation.
Protocol B might be “secure” in isolation.
But A ∘ B (A composed with B) creates new attack vectors that neither audit caught.

This is fundamentally a compositional security problem. Traditional audits analyze contracts in isolation. But DeFi’s value comes from composability—protocols building on protocols.

Why Formal Verification Matters

Sarah mentioned Certora’s work on Uniswap V4. Let me explain why this caught bugs manual audits missed.

Manual audits check:
“Does this code do what the spec says?”
“Are there known vulnerability patterns?”
“Can I think of edge cases that break it?”

Formal verification proves:
“Under ALL possible states and inputs, property X MUST hold true”
“Mathematically impossible for invariant Y to be violated”

Example invariant for a lending protocol:

∀ users, ∀ times: total_borrows ≤ total_deposits

Formal verification proves this holds under all conditions, including:

  • Flash loans
  • Oracle manipulation
  • Reentrancy
  • Governance attacks
  • Novel attack compositions

If the proof succeeds, you have mathematical certainty, at least relative to the model and its stated assumptions. If it fails, you get a counterexample showing exactly how to violate the invariant.
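To make the contrast concrete, here’s a toy version of that idea in Python. This is not a real prover; it exhaustively explores short operation sequences against a deliberately buggy lending pool (withdraw ignores outstanding borrows), and every name and number is invented:

```python
# Exhaustively explore short op sequences against a toy lending pool and
# report the first sequence that violates total_borrows <= total_deposits.
# The bug is deliberate: withdraw never checks outstanding borrows.

from itertools import product

def violates_invariant(ops):
    deposits = borrows = 0
    for op, amt in ops:
        if op == "deposit":
            deposits += amt
        elif op == "borrow" and borrows + amt <= deposits:
            borrows += amt
        elif op == "withdraw" and amt <= deposits:  # BUG: ignores borrows
            deposits -= amt
        if borrows > deposits:   # the invariant we want to always hold
            return True
    return False

moves = [(op, amt) for op in ("deposit", "borrow", "withdraw") for amt in (5, 10)]
counterexample = next(
    (seq for seq in product(moves, repeat=3) if violates_invariant(seq)), None
)
print(counterexample)
# (('deposit', 5), ('borrow', 5), ('withdraw', 5)) -- exactly the kind of
# trace a failed proof hands back as a counterexample.
```

A hand-written test suite would only catch this if someone thought to withdraw while a borrow was open; systematic state exploration finds it mechanically.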

Protocol-Level Security Solutions

We need to move security up the stack from application-level to protocol-level. Some promising directions:

1. Intent-Based Architecture

Instead of users signing transactions that execute arbitrary code, they sign intents describing desired outcomes. Solvers compete to fulfill intents while guaranteeing security properties.

This makes flash loan + oracle manipulation attacks much harder because the solver is responsible for validating the economic outcome matches the user’s intent.

2. Encrypted Mempools

MEV and front-running are fundamental security problems. Solutions like Flashbots Protect, Secret Network, and Aztec use encryption to hide transaction details until after inclusion.

Can’t front-run what you can’t see.

3. Account Abstraction with Recovery

Most hacks exploit admin key compromise. Account abstraction (EIP-4337) allows:

  • Multi-sig by default
  • Social recovery
  • Spending limits
  • Time-delayed high-value transactions

These don’t prevent smart contract bugs, but they dramatically reduce governance attack surface.

4. Zero-Knowledge State Transitions

Protocols like StarkNet and zkSync use validity proofs (STARKs and SNARKs) to prove that state transitions are valid; privacy-focused systems like Aztec go further and hide transaction details entirely.

This provides:

  • Provable correctness (invalid state transitions are rejected by construction)
  • Compression (cheaper L2 transactions)
  • Privacy, in designs like Aztec (can’t analyze what you can’t see)

Answering Emma’s Questions

“Should we avoid upgradeable contracts entirely?”

My architectural recommendation:

Core Logic Layer: IMMUTABLE
  ├─ Math (deposit/withdraw calculations)
  ├─ Invariants (security properties)
  └─ Core accounting

Adapter Layer: UPGRADEABLE with 72h timelock
  ├─ Strategy contracts
  ├─ External integrations
  └─ Fee collection

Governance Layer: TIMELOCKED + MULTISIG
  ├─ Parameter adjustments
  ├─ Adapter upgrades
  └─ Emergency pause

This gives you:

  • Safety: Core logic can never be changed, even by governance attack
  • Flexibility: Adapters can be upgraded to integrate new protocols
  • Security: 72-hour timelock gives community time to review and exit if needed
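The blast-radius argument can be sketched in a few lines of Python (names invented; in Solidity the “immutable” guarantee comes from deploying the core with no proxy at all):

```python
# Sketch of "immutable core, upgradeable periphery": user accounting lives in
# Core, whose rules can never change; the Vault's strategy can be swapped
# (in production, only after a timelock). All names are illustrative.

class Core:
    """Deployed without a proxy: these rules are frozen forever."""
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, amount):
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient balance")  # invariant lives in Core
        self.balances[user] -= amount

class Vault:
    """Upgradeable periphery: routes idle funds to a swappable strategy."""
    def __init__(self, core, strategy):
        self.core, self.strategy = core, strategy

    def upgrade_strategy(self, new_strategy):
        self.strategy = new_strategy  # even a hostile strategy can't rewrite Core

core = Core()
core.deposit("alice", 100)
vault = Vault(core, strategy=lambda funds: funds)      # passive placeholder
vault.upgrade_strategy(lambda funds: funds * 1.05)     # governance swaps strategy
core.withdraw("alice", 100)                            # Core rules unchanged
print(core.balances["alice"])  # 0
```

Even a fully compromised governance can only swap the strategy layer; it cannot change how Core accounts for user balances.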

“How can users assess protocol safety?”

Risk framework I use:

Critical Red Flags (avoid):
:police_car_light: No audit, or only a single audit more than 6 months old
:police_car_light: Upgradeable contracts without timelock
:police_car_light: Single-source price oracle
:police_car_light: Admin keys controlled by single wallet
:police_car_light: No bug bounty program

Yellow Flags (caution):
:warning: New protocol (<6 months live)
:warning: Low TVL (<$10M, thin liquidity = manipulation risk)
:warning: Complex multi-protocol integrations
:warning: Significant upgrades not re-audited
:warning: Governance concentrated in a small holder group

Green Flags (better):
:white_check_mark: Multiple audits + formal verification
:white_check_mark: Immutable core or strong upgrade safeguards
:white_check_mark: Decentralized oracles (Chainlink/TWAP)
:white_check_mark: Substantial bug bounty (>$100K for critical)
:white_check_mark: Battle-tested (>1 year, >$50M TVL, no major incidents)
:white_check_mark: Open source + verified contracts

“What about smaller projects that can’t afford $700K security budgets?”

This is a real problem. Some options:

Security As A Service:

  • Shared formal verification infrastructure (Certora)
  • Audit-as-a-service for common patterns
  • Open-source security tooling (Slither, Mythril, Echidna)
  • Community security reviews

Design for Security:

  • Use battle-tested contract templates (OpenZeppelin)
  • Minimize custom logic
  • Prefer immutability over upgradeability
  • Use established oracle infrastructure
  • Start small, grow gradually as you can afford better security

Progressive Decentralization:

  • Launch with strong admin controls + timelock
  • Gradually reduce admin power as protocol proves itself
  • Transfer control to governance only after extensive testing

The Research Frontier

Academia is working on compositional security analysis. Some promising areas:

Formal Methods for DeFi:

  • Proving security properties across multi-protocol compositions
  • Automated vulnerability discovery through model checking
  • Economic attack modeling and simulation

Language-Level Security:

  • Type systems that enforce security properties
  • Move language’s resource-oriented design (Aptos/Sui)
  • Scilla’s functional approach (Zilliqa)

Economic Security Research:

  • Game-theoretic analysis of mechanism design
  • Cryptoeconomic security proofs
  • MEV mitigation at protocol level

The Uncomfortable Truth

Sophia asked: “Are we building on fragile foundations?”

Yes and no.

Yes: Current DeFi architecture has fundamental security limitations. Composability creates exponential attack surface. Upgradeability introduces governance risk. Economic attacks are harder to prevent than code bugs.

No: We’re still early. Security tooling is improving rapidly. Formal verification is becoming more accessible. Protocol designs are maturing. The industry is learning from each exploit.

The protocols that survive long-term will be those that:

  1. Prioritize security from day one, not as an afterthought
  2. Use defense-in-depth (multiple security layers)
  3. Design for minimal upgradeability
  4. Invest in formal verification for critical invariants
  5. Treat security as an ongoing process, not a one-time audit
  6. Build security-first culture across entire team

Actionable Recommendations

For developers:

  • Learn formal verification basics (Certora, K Framework)
  • Study past exploits in depth (rekt.news)
  • Practice attack-oriented thinking
  • Test adversarial scenarios, not just happy paths

For protocols:

  • Budget 3-5% of TVL for ongoing security
  • Immutable core + upgradeable periphery pattern
  • 72-hour timelocks minimum for governance
  • Meaningful bug bounties (>$100K critical)

For users:

  • Diversify across multiple protocols (don’t ape into a single protocol)
  • Favor battle-tested over cutting-edge
  • Exit during timelock period if uncomfortable with upgrade
  • Use DeFi insurance (Nexus Mutual) for large positions

We’re in a security arms race. Attackers are getting more sophisticated. Defenders need to level up faster.

