The $17B Question: Should DeFi Protocols Be Legally Liable for Hacks?

The Legal Framework for DeFi Liability Is Being Written Right Now — And Most Builders Aren’t Paying Attention

Since 2020, over $17 billion has been stolen from DeFi protocols. January 2026 alone saw more than $400 million in losses. The Bybit hack in early 2025 resulted in roughly $1.5 billion stolen — the largest single crypto theft in history.

The question regulators are now asking is no longer “should we regulate DeFi?” It’s “who pays when things go wrong?”

The Current Legal Landscape

As of early 2026, here’s where we stand:

United States:

  • The SEC has consistently argued that DeFi protocols offering financial services must comply with securities laws, regardless of decentralization
  • The CFTC has brought enforcement actions against DeFi protocols for offering unregistered derivatives
  • Class action lawsuits against exploited protocols are now routine — Mango Markets, Euler Finance, and others have faced litigation
  • The GENIUS Act (stablecoin regulation) was signed into law in mid-2025 but doesn’t directly address hack liability

European Union:

  • MiCA (Markets in Crypto-Assets) is fully in effect as of 2025
  • It imposes operational resilience requirements on crypto-asset service providers
  • DeFi protocols that are “sufficiently decentralized” remain in a gray area
  • The EU is actively developing a framework for DeFi-specific regulation

Singapore & Hong Kong:

  • Both jurisdictions require licensing for DeFi protocols serving their residents
  • Singapore’s MAS has signaled that protocol developers may bear liability for security failures

Three Legal Theories Being Tested

1. Product Liability
The argument: DeFi protocols are products. When a product is defective (hackable), the manufacturer (the developers) is liable.

This theory is gaining traction in US courts. The precedent from traditional software liability is mixed — software has historically been treated as a service, not a product, which limits strict liability claims. But the “code is law” argument cuts both ways: if the code defines the product, then bugs in the code are product defects.

2. Fiduciary Duty
The argument: Protocol developers and DAOs that control upgradeable contracts owe a fiduciary duty to depositors.

This is particularly relevant for protocols with admin keys, governance-controlled parameters, or upgradeable proxies. If you can change the code post-deployment, you arguably have a duty of care to the users who trusted the previous version.

The Ooki DAO case established that DAOs can be held liable as unincorporated associations. This means governance token holders could theoretically be on the hook for hack losses.

3. Negligence
The argument: Protocol teams that fail to implement reasonable security measures (audits, monitoring, circuit breakers) are negligent.

This is the most straightforward theory and the one most likely to succeed. As security best practices become more established, the “standard of care” for DeFi developers is becoming clearer. A protocol that launches without an audit in 2026 is almost certainly negligent. But what about a protocol that got audited but not formally verified? Where’s the line?

The Insurance Analogy

I think the eventual framework will mirror how we handle liability in other industries:

  • Mandatory security standards (like building codes or food safety regulations)
  • Required insurance or bonding (like contractor bonds or medical malpractice insurance)
  • Safe harbor provisions for protocols that meet minimum standards
  • Strict liability for protocols that fail to meet minimums

The GENIUS Act’s approach to stablecoins — requiring reserves, audits, and operational standards — provides a template. I expect similar frameworks for lending, DEX, and bridge protocols within 18-24 months.

What Builders Should Do Now

:balance_scale: Document everything: Security decisions, audit reports, risk assessments, incident response plans. In litigation, documentation is everything.

:clipboard: Establish governance procedures: Clear processes for security upgrades, parameter changes, and emergency responses. Ad-hoc governance looks terrible in court.

:classical_building: Consider legal structure: A properly structured legal entity (foundation, LLC wrapper for the DAO) can limit personal liability for contributors.

The $17B in cumulative DeFi losses is not just a security problem — it’s a legal time bomb. The protocols that survive the coming regulatory wave will be the ones that took liability seriously before they were forced to.

What do you think? Should protocol developers face legal liability for hacks? And if so, where should the line be drawn?

Rachel, this is the most important governance question facing DAOs in 2026, and I think the DAO community is woefully unprepared for it.

The Governance Liability Problem

The Ooki DAO precedent is terrifying for anyone holding governance tokens. If a DAO is an unincorporated association, and the DAO controls an upgradeable protocol that gets hacked, then every governance token holder who voted on security-relevant proposals could theoretically be liable.

Let’s think about what that means in practice:

Scenario: A DAO-governed lending protocol gets exploited for $50M

Who’s liable?

  • The original development team? They wrote the code but may have transferred control to the DAO.
  • The DAO governance voters? They approved the parameters and upgrades.
  • The delegates? Many DAOs have professional delegates who vote on behalf of token holders.
  • The multisig signers? They executed the governance decisions on-chain.

Under current US law, all of them could potentially face liability. That’s not a stable governance model.

Why This Breaks DAO Governance

If governance participation creates legal liability, rational actors will:

  1. Stop voting — Why risk personal liability by participating in governance?
  2. Delegate to liability shields — Push voting power to entities with limited liability structures
  3. Avoid security decisions — Nobody wants to be the person who voted “yes” on the parameter change that enabled an exploit
  4. Centralize control — Paradoxically, liability fear pushes DAOs toward centralized control by professional entities that can carry insurance

This is exactly the opposite of what decentralization is supposed to achieve.

A Governance Framework That Could Work

I’ve been working with several DAOs on a liability-conscious governance model:

1. Security Council with Limited Liability

  • A dedicated security committee with clear authority and legal structure
  • Members carry D&O insurance
  • Authority to make emergency decisions (pauses, parameter changes) without full governance vote
  • Clear scope and accountability

2. Tiered Governance Participation

  • Casual voters (retail token holders): Protected by limited liability provisions
  • Active delegates: Required to incorporate or operate through a legal entity
  • Core contributors: Employment or contractor relationships with the DAO’s legal entity

3. Security Standards as Governance Requirements

  • Mandatory audit requirements before deploying upgrades
  • Minimum monitoring and incident response capabilities
  • Required insurance coverage proportional to TVL

4. Liability Cap Mechanisms

  • DAO treasury insurance pools specifically for hack coverage
  • Governance-approved risk parameters with documented rationale
  • Clear escalation procedures for security incidents

The Uncomfortable Truth

Rachel’s right that the legal framework is being written now. But I worry that it’s being written by people who don’t understand DAOs, and the DAO community isn’t showing up to the conversation.

Every major DAO should have legal counsel advising on liability exposure. Every governance proposal that touches security should include a legal impact assessment. And every DAO contributor should understand that “decentralized” doesn’t mean “nobody is responsible.”

The $17B in losses isn’t just a security failure or a legal problem — it’s a governance failure. We built systems that control billions of dollars but designed governance models that assumed nothing would go wrong.

Rachel’s legal analysis is solid, but I want to push back on one framing: the question shouldn’t just be “should protocols be liable?” It should be “what standard of security should create a safe harbor?”

The Security Standards Problem

Right now, there’s no universally accepted definition of “reasonable security” for DeFi protocols. This matters enormously for the negligence theory Rachel outlined.

In traditional software, we have:

  • ISO 27001 for information security management
  • SOC 2 for service organization controls
  • PCI DSS for payment card data
  • NIST frameworks for cybersecurity

In DeFi, we have… nothing. No standard. No certification. No minimum bar.

This means a court trying to determine whether a protocol was “negligent” has no benchmark. Was getting one audit sufficient? Two? Did they need formal verification? Real-time monitoring? Bug bounties? Nobody knows, and the answer will be determined retroactively by judges who may not understand the technology.

What a DeFi Security Standard Should Look Like

Based on the attack data from January 2026 and the broader trend, here’s what I’d propose as a tiered security standard:

Tier 1: Minimum Viable Security (Required for all protocols)

  • At least one independent security audit from a recognized firm
  • Bug bounty program with meaningful payouts
  • Basic monitoring and alerting
  • Documented incident response plan
  • Timelock on governance actions

Tier 2: Enhanced Security (Required for protocols with over $50M TVL)

  • Two independent audits (different firms)
  • Competitive audit or formal verification
  • Real-time monitoring with automatic pause capabilities
  • Economic attack modeling
  • Insurance or dedicated security reserve fund

Tier 3: Maximum Security (Required for protocols with over $500M TVL)

  • Formal verification of core invariants
  • Continuous fuzzing campaigns
  • Dedicated security team
  • Third-party economic simulation
  • Comprehensive insurance coverage
  • Regular re-audits after upgrades
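These thresholds lend themselves to a simple lookup. A minimal Python sketch of the tiering logic, assuming the $50M and $500M TVL boundaries proposed above (the function and table names are illustrative, not part of any existing standard):

```python
def required_security_tier(tvl_usd: float) -> int:
    """Map a protocol's TVL to the required security tier.

    Thresholds follow the tiered proposal above:
      Tier 1 (minimum viable security) - all protocols
      Tier 2 (enhanced security)       - TVL over $50M
      Tier 3 (maximum security)        - TVL over $500M
    """
    if tvl_usd > 500_000_000:
        return 3
    if tvl_usd > 50_000_000:
        return 2
    return 1


# Illustrative summary of each tier's requirements, per the proposal above.
TIER_REQUIREMENTS = {
    1: ["independent audit", "bug bounty", "basic monitoring",
        "incident response plan", "governance timelock"],
    2: ["two independent audits", "competitive audit or formal verification",
        "real-time monitoring with auto-pause", "economic attack modeling",
        "insurance or security reserve"],
    3: ["formal verification of core invariants", "continuous fuzzing",
        "dedicated security team", "third-party economic simulation",
        "comprehensive insurance", "re-audits after upgrades"],
}
```

Making the tier a pure function of TVL is the point: a court (or a regulator) could apply it mechanically, with no retroactive judgment about what "reasonable" meant.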

The Safe Harbor Argument

If protocols that meet Tier 2 or Tier 3 standards still get hacked, should they be liable? I’d argue no — or at least that liability should be significantly limited.

This mirrors how other industries work:

  • A hospital that follows medical standards of care isn’t liable for unpredictable complications
  • A building that meets code isn’t liable for a once-in-a-century earthquake
  • A bank that follows KYC/AML procedures isn’t liable for sophisticated fraud

The Bybit hack is instructive here, even though Bybit is a centralized exchange rather than a DeFi protocol: they had extensive security measures, multiple audits, and sophisticated key management, and were still compromised through a supply chain attack on a wallet UI component. Should Bybit be liable for a North Korean state-sponsored attack that bypassed every reasonable security measure? I’d argue the answer is nuanced.

:shield: The path forward isn’t blanket liability or blanket immunity — it’s a clear standard that defines what “reasonable security” looks like, and safe harbor provisions for protocols that meet it.

Without that standard, the legal uncertainty will chill innovation far more than any hack ever could.

As someone who builds DeFi protocols, this thread is both necessary and terrifying. Let me share the protocol builder’s perspective on liability.

The Builder’s Dilemma

Every DeFi developer I know is acutely aware of the security risks. We don’t ship code hoping it gets hacked. We spend months on security, sacrifice speed-to-market, and lose sleep over potential vulnerabilities. And yet the framing of “should protocols be liable” feels like it puts builders in an impossible position.

Here’s why:

Software has bugs. Always. Every piece of software ever written has bugs. Microsoft, Google, Apple — companies with billions in resources and decades of experience — ship critical vulnerabilities regularly. The difference is that when Chrome has a zero-day, Google ships a patch and users update. When a DeFi protocol has a zero-day, someone loses $50M, irreversibly.

DeFi is adversarial by default. Traditional software operates in an environment where most users are benign. DeFi operates in an environment where sophisticated attackers are actively trying to exploit your code 24/7, with millions of dollars of incentive to succeed. This is a fundamentally different security model than anything else in software engineering.

Composability creates unbounded attack surfaces. When I deploy a lending protocol, I can audit my code. I can formally verify my invariants. But I cannot control every protocol that integrates with mine, every oracle I depend on, or every token that gets listed. The attack surface is the entire DeFi ecosystem, not just my codebase.

Where I Actually Agree With Liability

That said, I think liability makes sense in specific cases:

  1. Gross negligence — Protocols that launch without any audit, without monitoring, with known vulnerabilities. Ship-and-forget protocols that collect fees but invest nothing in security.

  2. Fraud and deception — Teams that claim security measures they didn’t actually implement. Fake audit reports, misrepresented security budgets, phantom bug bounties.

  3. Admin key exploitation — If a protocol team uses admin privileges to drain funds (rug pulls), that’s straightforward fraud and should absolutely carry liability.

  4. Failure to respond — Teams that know about a vulnerability, have the ability to pause or mitigate, and choose not to act.

Where Liability Would Be Destructive

Holding builders liable for sophisticated attacks that bypass reasonable security measures would:

  • Push DeFi development to anonymous teams (harder to sue, harder to hold accountable)
  • Prevent innovation in novel DeFi mechanisms (too risky to try anything new)
  • Concentrate the industry among well-funded entities that can afford maximum security and legal defense
  • Eliminate open-source DeFi development (who contributes to a protocol they might be sued for?)

The open-source question is particularly important. Many critical DeFi components are developed by independent contributors. If an open-source developer contributes a library that’s later used in a protocol that gets hacked, are they liable? That would effectively kill open-source DeFi.

What I Want From Regulation

Rachel’s right that regulation is coming. Here’s what I want it to look like:

  1. Clear, achievable security standards (Sophia’s tiered approach makes sense)
  2. Safe harbor for protocols that meet standards and respond appropriately to incidents
  3. Open-source protections — Contributors to open-source code should not face liability for downstream usage
  4. Proportional liability — Liability should scale with the degree of control. Truly immutable, fully decentralized protocols should have less liability than admin-controlled upgradeable systems
  5. Insurance infrastructure — Regulatory support for building a robust DeFi insurance market

The worst outcome would be a blanket liability regime that drives all DeFi development offshore or underground. The best outcome is a framework that rewards responsible development while providing recourse for genuinely negligent behavior.

Great discussion. Let me bring this down to the startup founder reality, because the liability question has very different implications depending on your stage and structure.

The Founder’s Liability Nightmare

I’ve co-founded three crypto startups and advise a dozen more. The liability question keeps every founder I know up at night, and it’s already changing behavior in ways that aren’t all positive.

What’s Already Happening

1. Jurisdiction Shopping Is Accelerating
Six months ago, I knew three teams relocating from the US to Switzerland, Singapore, or Dubai specifically because of liability concerns. Now it’s closer to a dozen. The US’s approach to DeFi liability — unclear standards, enforcement-by-litigation, and retroactive application of securities laws — is pushing builders to friendlier jurisdictions.

2. Anonymous Teams Are Making a Comeback
The 2022-2023 trend toward doxxed, transparent teams is reversing. Multiple projects I’ve seen launching in 2026 have anonymous or pseudonymous founders specifically because of liability exposure. This is terrible for the ecosystem — anonymous teams are harder to hold accountable for legitimate failures.

3. Insurance Costs Are Becoming Prohibitive
One of my portfolio companies explored D&O insurance that would cover smart contract exploit liability. The quotes came back at 8-12% of coverage annually. For a protocol insuring its full $50M TVL, that’s $4-6M per year in insurance premiums alone. That’s not viable for any startup.
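For context, the arithmetic behind those quotes, assuming the policy covers the protocol’s full TVL (the function name is illustrative):

```python
def annual_premium(coverage_usd: int, rate_bps: int) -> float:
    """Annual insurance premium at a rate quoted in basis points of coverage.

    8% = 800 bps, 12% = 1200 bps.
    """
    return coverage_usd * rate_bps / 10_000


coverage = 50_000_000  # insuring the full $50M TVL
low = annual_premium(coverage, 800)    # 8% of $50M  = $4M/year
high = annual_premium(coverage, 1200)  # 12% of $50M = $6M/year
```

Compare that to what most seed-stage protocols raise in total, and the problem is obvious.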

The Startup-Specific Problem

Diana made an excellent point about open-source liability. Let me extend it to the startup context:

Most DeFi startups are 5-15 person teams. They’re using:

  • Open-source libraries they didn’t write
  • Oracles they don’t control
  • Infrastructure (RPCs, indexers) they don’t operate
  • Tokens and assets they didn’t create

If a hack exploits a vulnerability in an OpenZeppelin library that the startup imported, who’s liable? The startup that used it? OpenZeppelin? The specific contributor who wrote the vulnerable function?

The composability that makes DeFi powerful also makes liability attribution nearly impossible.

What Would Actually Help Startups

1. Progressive Liability Linked to TVL

  • Under $10M TVL: Minimal liability requirements (single audit, basic monitoring)
  • $10M-$100M: Enhanced requirements (Sophia’s Tier 2)
  • Over $100M: Full requirements (Sophia’s Tier 3)

This lets startups launch and grow without being crushed by compliance costs on day one.

2. DAO LLC Wrappers With Clear Liability Limits
Wyoming, Tennessee, and several other states now offer DAO LLC structures. But the liability protections of these structures haven’t been tested in court for hack scenarios. We need legislative clarity that these wrappers actually protect members.

3. Industry-Funded Insurance Pool
A collective insurance pool funded by protocol fees (similar to FDIC but for DeFi) could spread the risk across the ecosystem. Each protocol contributes based on TVL and risk profile. If any member protocol gets hacked, the pool covers losses up to a limit.

4. Safe Harbor Legislation
Rachel mentioned the GENIUS Act model. I’d love to see a “DeFi Safe Harbor Act” that explicitly protects protocols meeting defined security standards from class action lawsuits. The SEC’s Hester Peirce proposed something similar for token offerings — we need the equivalent for protocol security.

The Bottom Line

The current state is the worst possible outcome: unclear liability, no safe harbors, no standards, and no affordable insurance. This creates maximum uncertainty with minimum protection for either builders or users.

Rachel’s right that the framework is being written now. The crypto industry needs to engage with lawmakers proactively, propose reasonable standards, and advocate for safe harbors — before the framework gets written without our input.