North Korea Has Stolen $2B+ From Crypto in 18 Months—Is the Industry Funding a Nuclear Weapons Program?

The numbers are getting harder to ignore.

In February 2025, the Lazarus Group—North Korea’s state-sponsored hacking unit—drained $1.5 billion from Bybit in the largest single crypto heist in history. The FBI formally attributed the attack to DPRK within weeks. Then on April 1, 2026, $285 million was stolen from Drift Protocol on Solana using a combination of social engineering and a legitimate Solana feature called “durable nonces.” TRM Labs attributed this one to North Korea as well.

That’s $1.8 billion from just two attacks. Factor in the dozens of smaller incidents TRM Labs has tracked—this was reportedly the eighteenth DPRK-linked attack in 2026 alone, with over $300 million stolen this year—and the cumulative figure exceeds $6.75 billion all-time according to multiple blockchain analytics firms.

The Uncomfortable Reality

A 2024 UN Panel of Experts report estimated that North Korea funds approximately 40% of its weapons of mass destruction programs through cryptocurrency theft. Let that number settle. Two-fifths of a nuclear weapons program is being bankrolled by exploits against DeFi protocols, cross-chain bridges, and exchange infrastructure.

Every protocol that ships with inadequate security, every multi-sig that relies on human operational security without time-bound transaction validity, every bridge that prioritizes speed over verification—these aren’t just financial risks. They’re potential funding channels for weapons proliferation.

The Attack Pattern Is Consistent and Evolving

What makes Lazarus Group particularly dangerous is the sophistication trajectory:

  • 2022-2023: Exploiting code vulnerabilities in bridges (Ronin $625M, Harmony Horizon $100M)
  • 2024: Moving to social engineering combined with smart contract exploitation
  • 2025: Bybit hack—compromising multi-sig signers through sophisticated social engineering, then executing malicious transactions disguised as routine operations
  • 2026: Drift Protocol—using legitimate protocol features (durable nonces) as attack vectors. The attacker spent weeks staging the attack: withdrawing from Tornado Cash on March 11, deploying CVT tokens, socially engineering security council signers, then executing in under 60 seconds

The pattern shows clear adaptation. When code audits got better, they moved to social engineering. When hot wallet security improved, they targeted multi-sig operational procedures. When multi-sig awareness increased, they exploited legitimate features that decouple signing from execution.

The Industry Response Is Not Proportional

Here’s what troubles me most: the DeFi industry’s security investment is nowhere near proportional to the threat.

  • Protocols still launch with $50K-$150K audits focused on code bugs while $285M is stolen through operational security failures
  • Multi-sig governance structures assume signers review transactions at execution time—durable nonces break this assumption entirely
  • Cross-chain bridges remain the highest-value targets, yet bridge security standards are still voluntary
  • Bug bounty programs max out at $1M-$5M while individual exploits now routinely exceed $100M

The cost-benefit calculation for nation-state attackers is overwhelmingly favorable. If you can fund a nuclear weapons program by exploiting DeFi protocols with a few dozen engineers, why wouldn’t you?

Questions for the Community

  1. Should the industry treat DPRK-attributed exploits differently? If we know stolen funds are going to weapons programs, does the community have a moral obligation to do more than standard incident response?

  2. Is “move fast and break things” security culture morally defensible when the consequence isn’t just lost user funds but potential weapons proliferation?

  3. What concrete security standards should become mandatory before protocols go live with significant TVL? Time-bound transaction validity? Decentralized sequencer requirements? Mandatory operational security audits beyond code audits?

  4. Should there be industry-wide coordination against state-sponsored attackers—shared threat intelligence, standardized incident response, or even a DeFi security DAO with teeth?

I’m not asking whether DPRK will stop attacking crypto (they won’t). I’m asking whether our collective tolerance for “good enough” security makes every protocol developer partially responsible for what happens with stolen funds.

The best hack is the one that never happens. Right now, we’re making it far too easy.

This thread hits at something I’ve been warning about in every regulatory briefing I’ve given this year.

The “DeFi funds nuclear weapons” framing isn’t just a community talking point—it’s already being used in Congressional testimony. I was on a call last month where a Senate staffer explicitly cited the Lazarus Group’s cumulative theft figures as justification for bringing DeFi protocols under the Bank Secrecy Act. The political power of “your DeFi protocol funded North Korean missiles” cannot be overstated.

The Regulatory Angle Nobody’s Discussing

Here’s what the industry needs to understand: OFAC sanctions compliance is not optional, and ignorance is not a defense. When the Drift Protocol attacker funded their initial operation through Tornado Cash—a sanctioned protocol—every entity that facilitated downstream movement of those funds has potential OFAC liability.

The current regulatory framework has three tools for this:

  1. Sanctions (OFAC): Blocking property of designated entities (already used against Tornado Cash, Lazarus-linked wallets)
  2. AML/KYC requirements: Being expanded to cover DeFi “control persons” under proposed FinCEN rules
  3. Criminal prosecution: DOJ has active Lazarus Group indictments, and any knowing facilitation of fund movement is conspiracy territory

Where I Agree and Disagree With Sophia

I agree that the security response is disproportionate to the threat. But I’d push back on the “mandatory security standards” framing. Who enforces them? The SEC tried to regulate through enforcement and it was a disaster. Mandating security standards through legislation creates compliance theater—protocols check boxes without actually improving security.

What would work is liability. If protocol developers faced civil liability for preventable exploits that fund sanctioned entities, the economic incentive shifts overnight. You don’t need to mandate specific security practices—just make the consequences of failure proportional to the damage.

The uncomfortable truth: governments will use North Korea as the wedge issue to regulate DeFi aggressively. The industry’s choice isn’t “regulate or don’t regulate”—it’s “self-regulate credibly or get regulated by people who don’t understand the technology.” We’re running out of time to choose option A.

As someone who builds cross-chain infrastructure for a living, this thread is personal.

Bridges remain the single highest-value target for state-sponsored attackers, and I understand exactly why. Bridges are trust concentration points—they hold massive TVL, they connect disparate security domains, and they’re operationally complex. The Ronin bridge exploit ($625M) that funded North Korea was possible because 5 of 9 validator keys were compromised. Ronin had nine validators. Nine.

The Bridge Problem Is Structural, Not Just Technical

Sophia’s point about attack pattern evolution is critical. Let me add the bridge-specific dimension:

Generation 1 bridges (2021-2022): Lock-and-mint with trusted validator sets. Attack surface = compromise N-of-M validators. Ronin, Harmony Horizon—classic examples.

Generation 2 bridges (2023-2024): Light client verification, optimistic bridges with fraud proofs. Attack surface shifts to smart contract logic and challenge period manipulation.

Generation 3 bridges (2025-2026): ZK-proof-based verification, intent-based cross-chain execution. Attack surface moves to operational security (multi-sig governance, sequencer manipulation, social engineering of signers).

The Drift exploit is particularly alarming for bridge builders because durable nonces essentially allow pre-signed transactions that never expire. If any cross-chain message relay system uses a similar pattern—where signed messages can be stored and executed at an attacker’s convenience—the same attack vector applies.
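To make that generalization concrete, here is a minimal, protocol-agnostic sketch of the vulnerable pattern. This is an illustration, not any specific relay's code: an HMAC stands in for a real signature scheme, and all names are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stand-in for a real signature scheme


def sign(payload: bytes) -> bytes:
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).digest()


def verify(payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(payload), sig)


class DurableRelay:
    """Vulnerable pattern: stores signed messages and executes them whenever."""

    def __init__(self):
        self.queue = []

    def submit(self, payload: bytes, sig: bytes):
        self.queue.append((payload, sig))

    def execute(self, index: int) -> bool:
        payload, sig = self.queue[index]
        # No expiry, no context check: a signature obtained weeks ago
        # (e.g. via social engineering) still verifies at the attacker's
        # convenience.
        return verify(payload, sig)


relay = DurableRelay()
relay.submit(b"transfer:funds:attacker", sign(b"transfer:funds:attacker"))
# ...arbitrary time passes; the stored message never goes stale...
assert relay.execute(0)  # still executes
```

The point of the sketch is what is *absent*: nothing in `execute` asks when the signature was produced or whether the world has changed since.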

What We’re Doing About It

In our bridge implementation, we’ve moved to:

  • Time-bound message validity: Every cross-chain message expires after a configurable window (default: 30 minutes). No durable anything.
  • Execution-context verification: Signers don’t just sign the transaction—they sign the context (block height range, timestamp bounds, preceding state root). Replaying out of context fails verification.
  • Rotating key ceremonies: Validator keys are rotated quarterly with hardware security module (HSM) requirements. No key lives long enough to be socially engineered.
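The first two items above can be sketched in a few lines. This is an illustration of the idea under stated assumptions (an HMAC stands in for real signatures, and the field names are hypothetical), not our production code:

```python
import hashlib
import hmac
from dataclasses import dataclass

KEY = b"validator-key"  # stand-in for a real validator signing key


@dataclass(frozen=True)
class Context:
    not_before: int  # earliest valid execution time (unix seconds)
    not_after: int   # expiry, e.g. signing time plus a 30-minute window
    min_height: int  # block height range the signer observed
    max_height: int


def sign_with_context(payload: bytes, ctx: Context) -> bytes:
    # The signature commits to the execution context, not just the payload.
    return hmac.new(KEY, payload + repr(ctx).encode(), hashlib.sha256).digest()


def verify_in_context(payload: bytes, ctx: Context, sig: bytes,
                      now: int, height: int) -> bool:
    expected = hmac.new(KEY, payload + repr(ctx).encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    # Replaying the message outside its signed window or height range fails.
    return (ctx.not_before <= now <= ctx.not_after
            and ctx.min_height <= height <= ctx.max_height)


ctx = Context(not_before=1_000, not_after=1_000 + 30 * 60,
              min_height=500, max_height=520)
sig = sign_with_context(b"rotate-operator", ctx)
assert verify_in_context(b"rotate-operator", ctx, sig, now=1_200, height=510)
assert not verify_in_context(b"rotate-operator", ctx, sig, now=90_000, height=510)
```

A signature that commits to its own validity window is worthless to an attacker who obtained it weeks in advance.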

These aren’t novel ideas. They’re basic security hygiene that the industry should have standardized years ago. The fact that a $285M exploit in 2026 used indefinitely valid signed messages is a collective failure.

To answer Sophia’s question #4: yes, we need industry-wide coordination. Not a DAO—those move too slowly. What we need is a shared threat intelligence network similar to what traditional finance has with FS-ISAC (Financial Services Information Sharing and Analysis Center). Real-time indicators of compromise, shared analysis of attack patterns, coordinated blacklisting of attacker wallets across chains. The infrastructure exists. The willingness to collaborate doesn’t.

Let me bring a different angle to this—the market impact.

I run trading bots across multiple chains and track on-chain flow data obsessively. When the Drift exploit happened on April 1, I watched $285M in stolen SOL get bridged to Ethereum in real-time on my dashboard. The sell pressure was visible within minutes. SOL dropped 4.2% in the hour following the exploit, and Drift’s governance token cratered 67%.

Here’s what most people don’t realize: DPRK doesn’t hodl. Chainalysis tracking shows Lazarus Group converts stolen crypto to fiat within 30-90 days through a sophisticated laundering chain: Tornado Cash / cross-chain bridges → privacy coins → OTC desks in jurisdictions with weak AML → Chinese yuan conversion → North Korean front companies. Every major exploit creates sustained sell pressure across multiple assets.

The Hidden Cost to Every Trader

The Bybit $1.5B hack didn’t just hurt Bybit users. The massive ETH sell-off that followed depressed the entire Ethereum ecosystem for weeks. My rough estimate: the market-wide impact of Bybit-related selling was 3-5x the direct exploit amount when you account for cascading liquidations, sentiment-driven selling, and reduced market-making activity.

The same pattern played out with Drift—SOL ecosystem tokens saw $800M+ in market cap evaporation beyond the direct $285M theft. That’s money out of every SOL holder’s pocket.

My Controversial Take

I’m going to be blunt: the industry’s response to state-sponsored hacking is pathetically reactive because there’s no economic incentive to be proactive. Protocol founders get rich from TVL and token launches. Security is a cost center. The developers who ship fast win the market; the developers who ship secure lose to faster competitors.

Until the market punishes insecure protocols before they get exploited—through lower TVL, lower token valuations, required security ratings—nothing changes. Insurance protocols like Nexus Mutual should be 10x their current size, and “uninsurable” should be a red flag that kills a protocol’s TVL overnight.

We literally have on-chain proof that stolen DeFi money funds nuclear weapons, and the market’s reaction is a 48-hour price dip followed by “ngmi” memes. I’ve been in this space since 2017 and this is the one thing that makes me genuinely question whether the industry deserves the legitimacy it seeks.

Brilliant thread. I want to zoom out from the immediate Drift/Bybit discussion and talk about why this is an architectural problem, not just an operational one.

The Root Cause Is Trust Concentration

Every major DPRK-attributed exploit shares a common pattern: trust is concentrated in a small number of human operators. Ronin: 5 of 9 validators. Bybit: multi-sig signers. Drift: security council members. The technical details differ, but the attack surface is always the same—compromise the humans who control the keys.

This is fundamentally an architecture problem. We’ve built “decentralized” protocols that concentrate critical trust in 3-9 human beings. From a nation-state attacker’s perspective, this is easier to exploit than traditional finance, where comparable access would require compromising multiple institutions with independent compliance, HR, and physical security layers.

The Durable Nonce Problem Is Deeper Than You Think

Ben’s point about durable nonces is critical, but the issue generalizes beyond Solana. Any system where:

  1. A signed message can be stored indefinitely
  2. Execution context is not verified at execution time
  3. Signers cannot revoke previously signed messages

…is vulnerable to the same class of attack. This includes:

  • Ethereum meta-transactions with no deadline parameter
  • Gasless relay networks where signed messages are queued
  • Cross-chain message protocols where signed attestations persist
  • DAO governance proposals with no expiration

The fix is conceptually simple but requires protocol-level changes: every signed action must include an expiration timestamp, and execution must verify the action is within its validity window. ERC-4337 (account abstraction) partially addresses this for Ethereum, but most DeFi protocols haven’t adopted it.
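A minimal sketch of that fix, covering all three conditions above: the action carries a deadline checked at execution time, and a per-signer nonce floor lets signers revoke previously signed messages. An HMAC stands in for a real signature scheme; all names are hypothetical.

```python
import hashlib
import hmac

KEY = b"signer-key"  # stand-in for a real signature scheme


def sign_action(payload: bytes, nonce: int, deadline: int) -> bytes:
    material = payload + nonce.to_bytes(8, "big") + deadline.to_bytes(8, "big")
    return hmac.new(KEY, material, hashlib.sha256).digest()


class Executor:
    """Executes signed actions only inside their validity window."""

    def __init__(self):
        self.clock = 0       # simulated current time
        self.min_nonce = {}  # signer -> lowest nonce still accepted

    def execute(self, signer: str, payload: bytes, nonce: int,
                deadline: int, sig: bytes) -> bool:
        material = (payload + nonce.to_bytes(8, "big")
                    + deadline.to_bytes(8, "big"))
        expected = hmac.new(KEY, material, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, sig):
            return False  # bad signature
        if self.clock > deadline:
            return False  # action expired
        if nonce < self.min_nonce.get(signer, 0):
            return False  # signer revoked it
        return True

    def revoke_up_to(self, signer: str, nonce: int):
        # Signer invalidates everything they signed with a lower nonce.
        self.min_nonce[signer] = nonce


ex = Executor()
sig = sign_action(b"payout", nonce=1, deadline=100)
ex.clock = 50
assert ex.execute("alice", b"payout", 1, 100, sig)      # inside window
ex.revoke_up_to("alice", 2)
assert not ex.execute("alice", b"payout", 1, 100, sig)  # revoked by signer
ex.clock = 150
assert not ex.execute("bob", b"payout", 1, 100, sig)    # expired
```

Either defense alone would have been enough to make an indefinitely stored pre-signed message useless.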

What Decentralization Should Actually Mean for Security

Here’s my controversial position: the crypto industry claims “decentralization” as a security property but doesn’t actually implement it where it matters. True decentralization for protocol governance means:

  • Threshold signature schemes where no subset of signers below the threshold can reconstruct the key (not just multi-sig where M-of-N means compromising M individuals)
  • Time-locked execution where governance actions require mandatory delay with public visibility (Compound’s Timelock is a good model)
  • Distributed validator technology where each “validator” is itself distributed across multiple independent operators
  • Social consensus mechanisms where sufficiently large exploits trigger automatic pause and community vote before fund movement

None of these are theoretical. They all exist as implementations. The industry just doesn’t use them because they add friction, increase gas costs, and slow down governance.
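As one concrete instance, the time-locked execution pattern can be sketched as follows. This is modeled loosely on Compound’s Timelock (mandatory delay, then a grace window), with a simulated clock and illustrative parameters, not a drop-in implementation:

```python
class Timelock:
    """Governance actions queue publicly, then wait out a mandatory delay."""

    DELAY = 2 * 24 * 3600   # 2-day mandatory delay before execution
    GRACE = 14 * 24 * 3600  # window after eta in which execution is allowed

    def __init__(self):
        self.now = 0      # simulated clock
        self.queued = {}  # action -> earliest execution time (eta)

    def queue(self, action: str):
        # The pending action is publicly visible for the whole delay period,
        # giving the community time to inspect (or veto) it.
        self.queued[action] = self.now + self.DELAY

    def execute(self, action: str) -> bool:
        eta = self.queued.get(action)
        if eta is None or self.now < eta:
            return False  # unknown action, or delay not yet elapsed
        if self.now > eta + self.GRACE:
            return False  # stale: must be re-queued and re-reviewed
        del self.queued[action]
        return True


tl = Timelock()
tl.queue("upgrade-bridge-contract")
assert not tl.execute("upgrade-bridge-contract")  # blocked during the delay
tl.now += Timelock.DELAY
assert tl.execute("upgrade-bridge-contract")      # runs once the delay elapses
```

The friction is the feature: an attacker who compromises signers still has to leave the malicious action sitting in public view for the entire delay.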

To Sophia’s question about moral responsibility: I think the framing should shift from “are developers morally responsible” to “are developers technically capable of doing better.” The answer is unambiguously yes. The tooling exists. The research exists. The patterns are well-documented. Every protocol that launches without time-bound transaction validity and distributed key management in 2026 is making a choice to accept preventable risk. Whether that choice carries moral weight given what we now know about who exploits these vulnerabilities—I’ll leave that to the community to decide.