Camp Network: The Blockchain Tackling AI's Billion-Dollar IP Problem 🏕️

· 5 min read
Dora Noda
Software Engineer

The rise of generative AI has been nothing short of explosive. From stunning digital art to human-like text, AI is creating content at an unprecedented scale. But this boom has a dark side: where does the AI get its training data? Often, it's from the vast expanse of the internet—from art, music, and writing created by humans who receive no credit or compensation.

Enter Camp Network, a new blockchain project that aims to solve this fundamental problem. It’s not just another crypto platform; it's a purpose-built "Autonomous IP Layer" designed to give creators ownership and control over their work in the age of AI. Let's dive into what makes Camp Network a project to watch.


What's the Big Idea?

At its core, Camp Network is a blockchain that acts as a global, verifiable registry for intellectual property (IP). The mission is to allow anyone—from an independent artist to a social media user—to register their content on-chain. This creates a permanent, tamper-proof record of ownership and provenance.

Why does this matter? When an AI model uses content registered on Camp, the network's smart contracts can automatically enforce licensing terms. This means the original creator can get attribution and even receive royalty payments instantly. Camp's vision is to build a new creator economy where compensation isn't an afterthought; it's built directly into the protocol.


Under the Hood: The Technology Stack

Camp isn't just a concept; it's backed by some serious tech designed for high performance and developer-friendliness.

  • Modular Architecture: Camp is built as a sovereign rollup using Celestia for data availability. This design allows it to be incredibly fast (targeting ~50,000 transactions per second) and cheap, while remaining fully compatible with Ethereum's tools (EVM).
  • Proof of Provenance (PoP): This is Camp's unique consensus mechanism. Instead of relying on energy-intensive mining, the network's security is tied to verifying the origin of content. Every transaction reinforces the provenance of the IP on the network, making ownership "enforceable by design."
  • Dual-VM Strategy: To maximize performance, Camp is integrating the Solana Virtual Machine (SVM) alongside its EVM compatibility. This allows developers to choose the best environment for their app, especially for high-throughput use cases like real-time AI interactions.
  • Creator & AI Toolkits: Camp provides two key frameworks:
    • Origin Framework: A user-friendly system for creators to register their IP, tokenize it (as an NFT), and embed licensing rules.
    • mAItrix Framework: A toolkit for developers to build and deploy AI agents that can interact with the on-chain IP in a secure, permissioned way.

People, Partnerships, and Progress

An idea is only as good as its execution, and Camp appears to be executing well.

The Team and Funding

The project is led by a team with a potent mix of experience from The Raine Group (media & IP deals), Goldman Sachs, Figma, and CoinList. This blend of finance, tech product, and crypto engineering expertise has helped them secure $30 million in funding from top VCs like 1kx, Blockchain Capital, and Maven 11.

A Growing Ecosystem

Camp has been aggressive in building partnerships. The most significant is a strategic stake in KOR Protocol, a platform for tokenizing music IP that works with major artists like Deadmau5 and franchises like Black Mirror. This single partnership bootstraps Camp with a massive library of high-profile, rights-cleared content. Other key collaborators include:

  • RewardedTV: A decentralized video streaming platform using Camp for on-chain content rights.
  • Rarible: An NFT marketplace integrated for trading IP assets.
  • LayerZero: A cross-chain protocol to ensure interoperability with other blockchains.

Roadmap and Community

After successful incentivized testnet campaigns that attracted tens of thousands of users (rewarding them with points set to convert to tokens), Camp is targeting a mainnet launch in Q3 2025. This will be accompanied by a Token Generation Event for its native token, $CAMP, which will be used for gas fees, staking, and governance. The project has already cultivated a passionate community eager to build on and use the platform from day one.


How Does It Compare?

Camp Network isn't alone in this space. It faces stiff competition from projects like the a16z-backed Story Protocol and the Sony-linked Soneium. However, Camp differentiates itself in several key ways:

  1. Bottom-Up Approach: While competitors seem to target large corporate IP holders, Camp is focused on empowering independent creators and crypto communities through token incentives.
  2. Comprehensive Solution: It offers a full suite of tools, from an IP registry to an AI agent framework, positioning itself as a one-stop shop.
  3. Performance and Scalability: Its modular architecture and dual-VM support are designed for the high-throughput demands of AI and media.

The Takeaway

Camp Network is making a compelling case to become the foundational layer for intellectual property in the Web3 era. By combining innovative technology, a strong team, strategic partnerships, and a community-first ethos, it’s building a practical solution to one of the most pressing issues created by generative AI.

The real test will come with the mainnet launch and real-world adoption. But with a clear vision and strong execution so far, Camp Network is undoubtedly a key project to watch as it attempts to build a more equitable future for digital creators.

The Rumors Surrounding a Stripe L1 Network

· 5 min read
Dora Noda
Software Engineer

The prospect of Stripe launching its own Layer 1 (L1) blockchain has been a hot topic within the crypto community, fueled by recent strategic moves from the global payments giant. While unconfirmed, the whispers suggest a potentially transformative shift in the payments landscape. Given Stripe's core mission to "grow the GDP of the internet" by building robust global economic infrastructure, a dedicated blockchain could be a logical and powerful next step, especially considering the company's increasing embrace of blockchain-related ventures.

The Foundation for a Stripe L1

Stripe has already laid significant groundwork that makes the idea of an L1 highly plausible. In February 2025, Stripe notably acquired Bridge, a stablecoin infrastructure company, for approximately $1.1 billion. This move clearly signals Stripe's commitment to stablecoin-based financial infrastructure. Following this acquisition, in May 2025, Stripe introduced its Stablecoin Financial Accounts service at the Stripe Sessions event. This service, available in 101 countries, allows businesses to:

  • Hold USDC (issued by Circle) and USDB (issued by Bridge).
  • Easily deposit and withdraw stablecoins via traditional USD transfers (ACH/wire) and EUR transfers (SEPA).
  • Facilitate USDC deposits and withdrawals across major blockchain networks, including Arbitrum, Avalanche C-Chain, Base, Ethereum, Optimism, Polygon, Solana, and Stellar.

This means businesses worldwide can seamlessly integrate dollar-based stablecoins into their operations, bridging the gap between traditional banking and the burgeoning digital asset economy.

Adding to this, in June 2025, Stripe acquired Privy.io, a Web3 wallet infrastructure startup. Privy offers crucial features like email or SSO-based wallet creation, transaction signing, key management, and gas abstraction. This acquisition rounds out Stripe's capabilities, providing the essential wallet infrastructure needed to facilitate broader blockchain adoption.

With both stablecoin and wallet infrastructure now firmly in place, the strategic synergy of launching a dedicated blockchain network becomes apparent. It would allow Stripe to more tightly integrate these services and unlock new possibilities within its ecosystem.

What a Stripe L1 Could Mean for Payments

If Stripe were to introduce its own L1 network, it could significantly enhance existing payment services and enable entirely new functionalities.

Base Case Enhancements

In its most fundamental form, a Stripe L1 could bring several immediate improvements:

  • Integrated Stablecoin Financial Accounts: Stripe's existing stablecoin financial accounts service would likely fully integrate with the Stripe L1, allowing merchants to deposit, withdraw, and utilize their stablecoin holdings directly on the network for various financial activities.
  • Stablecoin Settlement for Merchants: Merchants could gain the option to settle their sales proceeds directly in dollar-based stablecoins. This would be a substantial benefit, particularly for businesses with high dollar demand but limited access to traditional banking rails, streamlining cross-border transactions and reducing FX complexities.
  • Customer Wallet Services: Leveraging Privy's infrastructure, a Stripe L1 could enable individuals to easily create Web3 wallets within the Stripe ecosystem. This would facilitate stablecoin payments for customers and open doors for participation in a wider range of financial activities on the Stripe L1.
  • Stablecoin Payment Options for Customers: Customers currently relying on cards or bank transfers could connect their Web3 wallets (whether Stripe-provided or third-party) and choose stablecoins as a payment method, offering greater flexibility and potentially lower transaction costs.

Revolutionary "Bull Case" Scenarios

Beyond these foundational improvements, a Stripe L1 has the potential to truly revolutionize the payment industry, tackling long-standing inefficiencies:

  • Direct Customer-to-Merchant Payments: One of the most exciting prospects is the potential for direct payments between customers and merchants using stablecoins on Stripe L1. This could bypass traditional intermediaries like card networks and issuing banks, leading to significantly faster settlement times and reduced transaction fees. While safeguards for refunds and cancellations would be crucial, the directness of blockchain transactions offers unparalleled efficiency.
  • Micro-Payment Based Subscription Services: Blockchain's inherent support for micro-payments could unlock entirely new business models. Imagine subscriptions billed by the minute, where users pay strictly based on actual usage, with all payments automated via smart contracts. This contrasts sharply with current monthly or annual models, opening up a vast array of new service offerings.
  • DeFi Utilization of Short-Term Deposits: In traditional systems, payment settlements often face delays due to the need for fraud detection, cancellations, and refunds. If Stripe L1 were to handle direct stablecoin payments, funds might still be temporarily held on the network before full release to the merchant. These short-term deposits, expected to be substantial in scale, could form a massive liquidity pool on Stripe L1. This liquidity could then be deployed in decentralized finance (DeFi) protocols, lending markets, or invested in high-yield bonds, significantly improving capital efficiency for all participants.

The Future of Payments

The rumors surrounding a Stripe L1 network are more than just speculative chatter; they point to a deeper trend in the financial world. Payment giants like Visa, Mastercard, and PayPal have primarily viewed blockchain and stablecoins as supplementary features. If Stripe fully commits to an L1, it could signal a historic paradigm shift in payment systems, fundamentally reshaping how money moves globally.

Historically, Stripe has excelled as a payment gateway and acquirer. However, a Stripe L1 could allow the company to expand its role, potentially assuming functions traditionally held by card networks and even issuing banks. This move would not only enhance payment efficiency through blockchain but also enable previously unachievable features like granular micro-streaming subscriptions and automated management of short-term liquidity.

We are truly on the cusp of a disruptive era in payment systems, powered by blockchain technology. Whether Stripe officially launches an L1 remains to be seen, but the strategic pieces are certainly falling into place for such a monumental step.

Two Rails to a Friendlier Ethereum: ERC‑4337 Smart Accounts + ERC‑4804 Web3 URLs

· 9 min read
Dora Noda
Software Engineer

TL;DR

Ethereum just got two powerful primitives that push user experience past seed phrases and bookmarkable dapps toward “clickable on-chain experiences.”

  • ERC-4337 brings account abstraction to today’s Ethereum without core protocol changes. This makes features like smart contract accounts, gas sponsorship, batched calls, and passkey-style authentication native to wallets.
  • ERC-4804 introduces web3:// URLs—human-readable links that resolve directly to contract read calls and can even render on-chain HTML or SVG, all without a traditional web server acting as a middleman. Think of it as “HTTP for the EVM.”

When used together, ERC-4337 handles actions, while ERC-4804 handles addresses. This combination allows you to share a link that verifiably pulls its user interface from a smart contract. When a user is ready to act, the flow hands off to a smart account that can sponsor gas and batch multiple steps into a single, seamless click.


Why This Matters Now

This isn't just a theoretical future; these technologies are live and gaining significant traction. ERC-4337 is already scaled and proven in the wild. The canonical EntryPoint contract was deployed on the Ethereum mainnet on March 1, 2023, and has since powered tens of millions of smart contract accounts and processed over 100 million user operations.

Simultaneously, the core protocol is converging with these ideas. The Pectra upgrade, shipped in May 2025, included EIP-7702, which allows standard externally owned accounts (EOAs) to temporarily behave like smart accounts. This complements ERC-4337 by easing the transition for existing users, rather than replacing the standard.

On the addressing front, web3:// is now formalized. ERC-4804 specifies exactly how a URL translates into an EVM call, and web3 has been listed by IANA as a provisional URI scheme. The tooling and gateways needed to make these URLs practical are now available, turning on-chain data into shareable, linkable resources.


Primer: ERC-4337 in One Page

At its core, ERC-4337 introduces a parallel transaction rail to Ethereum, built for flexibility. Instead of traditional transactions, users submit UserOperation objects into an alternative mempool. These objects describe what the account wants to do. Specialized nodes called "Bundlers" pick up these operations and execute them through a global EntryPoint contract.

This enables three key components:

  1. Smart Contract Accounts (SCAs): These accounts contain their own logic. They define what makes a transaction valid, allowing for custom signature schemes (like passkeys or multisig), session keys for games, spending limits, and social recovery mechanisms. The account, not the network, enforces the rules.
  2. Paymasters: These special contracts can sponsor gas fees for users or allow them to pay in ERC-20 tokens. This is the key to unlocking true “no-ETH-in-wallet” onboarding; combined with the account's ability to batch multiple calls into a single operation, it enables genuine one-click experiences.
  3. DoS Safety & Rules: The public ERC-4337 mempool is protected by standardized off-chain validation rules (defined in ERC-7562) that prevent Bundlers from wasting resources on operations that are destined to fail. While alternative mempools can exist for specialized use cases, these shared rules keep the ecosystem coherent and secure.

Mental model: ERC-4337 turns wallets into programmable apps. Instead of just signing raw transactions, users submit "intents" that their account's code validates and the EntryPoint contract executes—safely and atomically.
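
To make the moving parts concrete, here is a minimal sketch of what a UserOperation looks like and how a wallet might hand one to a bundler. It assumes the widely deployed v0.6 EntryPoint and the standard eth_sendUserOperation bundler RPC method; the bundler URL and field values are placeholders, not a production implementation.

```typescript
// Sketch of an ERC-4337 UserOperation (v0.6-style field layout) and how a
// wallet could submit it to a bundler over JSON-RPC.
interface UserOperation {
  sender: string;               // the smart contract account
  nonce: string;                // account-managed nonce (hex)
  initCode: string;             // factory + calldata if the account is not yet deployed, else "0x"
  callData: string;             // what the account should execute
  callGasLimit: string;
  verificationGasLimit: string;
  preVerificationGas: string;
  maxFeePerGas: string;
  maxPriorityFeePerGas: string;
  paymasterAndData: string;     // "0x" if the user pays gas; paymaster address + data if sponsored
  signature: string;            // whatever the account's validation logic accepts (ECDSA, passkey, ...)
}

const ENTRY_POINT = "0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789"; // canonical v0.6 EntryPoint

async function submitUserOp(userOp: UserOperation, bundlerUrl: string): Promise<string> {
  // Bundlers expose eth_sendUserOperation(userOp, entryPoint) as a JSON-RPC method.
  const res = await fetch(bundlerUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendUserOperation",
      params: [userOp, ENTRY_POINT],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result; // userOpHash, which can be polled for inclusion
}
```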


Primer: ERC-4804 in One Page

ERC-4804 provides a simple, direct mapping from a web3:// URL to a read-only EVM call. The URL grammar is intuitive: web3://<name-or-address>[:chainId]/<method>/<arg0>?returns=(types). Names can be resolved via systems like ENS, and arguments are automatically typed based on the contract's ABI.

Here are a couple of examples:

  • web3://uniswap.eth/ would call the contract at the uniswap.eth address with empty calldata.
  • web3://.../balanceOf/vitalik.eth?returns=(uint256) would ABI-encode a call to the balanceOf function with Vitalik's address and return a properly typed JSON result.

Crucially, this standard is currently for read-only calls (equivalent to Solidity's view functions). Any action that changes state still requires a transaction—which is exactly where ERC-4337 or EIP-7702 come in. With web3 registered as a provisional URI scheme with IANA, the path is paved for native browser and client support, though for now, it often relies on extensions or gateways.

Mental model: ERC-4804 turns on-chain resources into linkable web objects. “Share this contract view as a URL” becomes as natural as sharing a link to a dashboard.
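
As a rough illustration of the grammar above, the sketch below maps a web3:// URL onto a call description. It only handles the simple auto-mode shape shown in the examples; real resolvers also perform ENS resolution, ABI-based argument typing, and manual mode. The token name in the usage comment is hypothetical.

```typescript
// Minimal illustration of how a web3:// URL maps to a read-only EVM call.
interface Web3Call {
  target: string;        // contract address or name (e.g. resolved via ENS)
  chainId: number;       // defaults to Ethereum mainnet when omitted
  method?: string;       // view function to call; empty calldata if absent
  args: string[];        // positional arguments, ABI-encoded per the contract's ABI
  returns?: string;      // expected return types, e.g. "(uint256)"
}

function parseWeb3Url(url: string): Web3Call {
  const m = url.match(/^web3:\/\/([^/:?]+)(?::(\d+))?(\/[^?]*)?(?:\?returns=(.*))?$/);
  if (!m) throw new Error("not a valid web3:// URL");
  const [, target, chainId, path, returns] = m;
  const segments = (path || "").split("/").filter(Boolean);
  return {
    target,
    chainId: chainId ? Number(chainId) : 1,
    method: segments[0],
    args: segments.slice(1),
    returns,
  };
}

// parseWeb3Url("web3://uniswap.eth/")
//   -> { target: "uniswap.eth", chainId: 1, method: undefined, args: [] }
// parseWeb3Url("web3://some-token.eth/balanceOf/vitalik.eth?returns=(uint256)")  // hypothetical name
//   -> { target: "some-token.eth", chainId: 1, method: "balanceOf", args: ["vitalik.eth"], returns: "(uint256)" }
```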


Together: "Clickable On-chain Experiences"

Combining these two standards unlocks a powerful new pattern for building decentralized applications today.

First, you deliver a verifiable UI via web3://. Instead of hosting your frontend on a centralized server like S3, you can store a minimal HTML or SVG interface directly on-chain. A link like web3://app.eth/render allows a client to resolve the URL and render the UI directly from the contract, ensuring the user sees exactly what the code dictates.

From that verifiable interface, you can trigger a one-click action via ERC-4337. A "Mint" or "Subscribe" button can compile a UserOperation that a paymaster sponsors. The user approves with a passkey or a simple biometric prompt, and the EntryPoint contract executes a batched call that deploys their smart account (if it's their first time) and completes the desired action in a single, atomic step.

This creates a seamless deep-link handoff. The UI can embed intent-based links that are handled directly by the user's wallet, eliminating the need to send them to an external site they may not trust. The content is the contract, and the action is the account.

This unlocks:

  • Gasless trials and "just works" onboarding: New users don't need to acquire ETH to get started. Your application can sponsor their first few interactions, dramatically reducing friction.
  • Shareable state: A web3:// link is a query into the blockchain's state. This is perfect for dashboards, proofs of ownership, or any content that must be verifiably tamper-evident.
  • Agent-friendly flows: AI agents can fetch verifiable state via web3:// URLs and submit transactional intents through ERC-4337 using scoped session keys, all without brittle screen scraping or insecure private key handling.

Design Notes for Builders

When implementing these standards, there are a few architectural choices to consider. For ERC-4337, it's wise to start with minimal smart contract account templates and add capabilities through guarded modules to keep the core validation logic simple and secure. Your paymaster policy should be robust, with clear caps on sponsored gas and whitelists for approved methods to prevent griefing attacks.
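
As one illustration of such a paymaster policy, here is a sketch of the off-chain gate a sponsoring service might run before agreeing to pay for an operation. The selectors, caps, and rate limits are made-up examples, not recommendations.

```typescript
// Illustrative off-chain policy check before co-signing or sponsoring a UserOperation.
interface SponsorPolicy {
  allowedSelectors: Set<string>;   // 4-byte function selectors the paymaster will sponsor
  maxGasPerOp: bigint;             // cap on total gas units per operation
  maxOpsPerSenderPerDay: number;   // simple per-account rate limit
}

function shouldSponsor(
  policy: SponsorPolicy,
  op: { sender: string; callData: string; callGasLimit: bigint; verificationGasLimit: bigint },
  opsTodayForSender: number,
): boolean {
  const selector = op.callData.slice(0, 10);                  // "0x" + 8 hex chars
  if (!policy.allowedSelectors.has(selector)) return false;   // only whitelisted methods
  const totalGas = op.callGasLimit + op.verificationGasLimit;
  if (totalGas > policy.maxGasPerOp) return false;            // cap sponsored gas
  if (opsTodayForSender >= policy.maxOpsPerSenderPerDay) return false; // blunt griefing protection
  return true;
}
```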

For ERC-4804, prioritize human-readable links by using ENS names. Be explicit about chainId to avoid ambiguity and include the returns=(…) parameter to ensure clients receive typed, predictable responses. While you can render full UIs, it’s often best to keep on-chain HTML/SVG minimal, using them as verifiable shells that can fetch heavier assets from decentralized storage like IPFS.

Finally, remember that EIP-7702 and ERC-4337 work together, not against each other. With EIP-7702 now active in the Pectra upgrade, existing EOA users can delegate actions to contract logic without deploying a full smart account. The tooling in the account abstraction ecosystem is already aligning to support this, smoothing the migration path for everyone.


Security, Reality, and Constraints

While powerful, these systems have trade-offs. The EntryPoint contract is a central chokepoint by design; it simplifies the security model but also concentrates risk. Always stick to audited, canonical versions. The mempool validation rules from ERC-7562 are a social convention, not an on-chain enforced rule, so don't assume every alternative mempool offers the same censorship resistance or DoS protection.

Furthermore, web3:// is still maturing. It remains a read-only standard, and any write operation requires a transaction. While the protocol itself is decentralized, the gateways and clients that resolve these URLs can still be potential points of failure or censorship. True "unblockability" will depend on widespread native client support.


A Concrete Blueprint

Imagine you want to build an NFT-powered membership club with a shareable, verifiable UI and a one-click join process. Here’s how you could ship it this quarter:

  1. Share the UI: Distribute a link like web3://club.eth/home. When a user opens it, their client resolves the URL, calls the contract, and renders an on-chain UI that displays the current member allowlist and mint price.
  2. One-Click Join: The user clicks a "Join" button. Their wallet compiles an ERC-4337 UserOperation that is sponsored by your paymaster. This single operation deploys the user's smart account via initCode (if they don't have one) and batches the remaining calls: paying the mint fee and registering their profile data (a sketch of this batch follows this list).
  3. Verifiable Receipt: After the transaction confirms, the user is shown a confirmation view that is just another web3:// link, like web3://club.eth/receipt/<tokenId>, creating a permanent, on-chain link to their membership proof.
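
A rough sketch of what step 2 could compile is shown below. The club contract's functions, address, and fee are hypothetical, and the exact batch-execution method depends on which smart-account implementation you use (many expose an executeBatch-style method that the UserOperation's callData targets).

```typescript
import { Interface } from "ethers";

// Hypothetical "Join" batch for the membership club blueprint above.
const clubAbi = new Interface([
  "function mint() payable",
  "function registerProfile(string profile)",
]);

interface Call { to: string; value: bigint; data: string }

function buildJoinBatch(clubAddress: string, profile: string, mintFee: bigint): Call[] {
  return [
    // Pay the mint fee and mint the membership NFT.
    { to: clubAddress, value: mintFee, data: clubAbi.encodeFunctionData("mint", []) },
    // Register profile data against the new membership.
    { to: clubAddress, value: 0n, data: clubAbi.encodeFunctionData("registerProfile", [profile]) },
  ];
}
// Note: first-time account deployment is not a separate call in this batch; in
// ERC-4337 it travels in the UserOperation's initCode, so one sponsored
// operation still covers "deploy + mint + register" for a brand-new user.
```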

The Bigger Arc

These two standards signal a fundamental shift in how we build on Ethereum. Accounts are becoming software. ERC-4337 and EIP-7702 are turning "wallet UX" into a space for real product innovation, moving us beyond lectures about key management. At the same time, links are becoming queries. ERC-4804 restores the URL as a primitive for addressing verifiable facts on-chain, not just the frontends that proxy them.

Together, they shrink the gap between what users click and what contracts do. That gap was once filled by centralized web servers and trust assumptions. Now, it can be filled by verifiable code paths and open, permissionless mempools.

If you're building consumer crypto applications, this is your chance to make the user's first minute delightful. Share a link, render the truth, sponsor the first action, and keep your users inside a verifiable loop. The rails are here—now it's time to ship the experiences.

Connecting AI and Web3 through MCP: A Panoramic Analysis

· 43 min read
Dora Noda
Software Engineer

Introduction

AI and Web3 are converging in powerful ways, with AI general interfaces now envisioned as a connective tissue for the decentralized web. A key concept emerging from this convergence is MCP, which variously stands for “Model Context Protocol” (as introduced by Anthropic) or is loosely described as a Metaverse Connection Protocol in broader discussions. In essence, MCP is a standardized framework that lets AI systems interface with external tools and networks in a natural, secure way – potentially “plugging in” AI agents to every corner of the Web3 ecosystem. This report provides a comprehensive analysis of how AI general interfaces (like large language model agents and neural-symbolic systems) could connect everything in the Web3 world via MCP, covering the historical background, technical architecture, industry landscape, risks, and future potential.

1. Development Background

1.1 Web3’s Evolution and Unmet Promises

The term “Web3” was coined around 2014 to describe a blockchain-powered decentralized web. The vision was ambitious: a permissionless internet centered on user ownership. Enthusiasts imagined replacing Web2’s centralized infrastructure with blockchain-based alternatives – e.g. Ethereum Name Service (for DNS), Filecoin or IPFS (for storage), and DeFi for financial rails. In theory, this would wrest control from Big Tech platforms and give individuals self-sovereignty over data, identity, and assets.

Reality fell short. Despite years of development and hype, the mainstream impact of Web3 remained marginal. Average internet users did not flock to decentralized social media or start managing private keys. Key reasons included poor user experience, slow and expensive transactions, high-profile scams, and regulatory uncertainty. The decentralized “ownership web” largely “failed to materialize” beyond a niche community. By the mid-2020s, even crypto proponents admitted that Web3 had not delivered a paradigm shift for the average user.

Meanwhile, AI was undergoing a revolution. As capital and developer talent pivoted from crypto to AI, transformative advances in deep learning and foundation models (GPT-3, GPT-4, etc.) captured public imagination. Generative AI demonstrated clear utility – producing content, code, and decisions – in a way crypto applications had struggled to do. In fact, the impact of large language models in just a couple of years starkly outpaced a decade of blockchain’s user adoption. This contrast led some to quip that “Web3 was wasted on crypto” and that the real Web 3.0 is emerging from the AI wave.

1.2 The Rise of AI General Interfaces

Over decades, user interfaces evolved from static web pages (Web1.0) to interactive apps (Web2.0) – but always within the confines of clicking buttons and filling forms. With modern AI, especially large language models (LLMs), a new interface paradigm is here: natural language. Users can simply express intent in plain language and have AI systems execute complex actions across many domains. This shift is so profound that some suggest redefining “Web 3.0” as the era of AI-driven agents (“the Agentic Web”) rather than the earlier blockchain-centric definition.

However, early experiments with autonomous AI agents exposed a critical bottleneck. These agents – e.g. prototypes like AutoGPT – could generate text or code, but they lacked a robust way to communicate with external systems and each other. There was “no common AI-native language” for interoperability. Each integration with a tool or data source was a bespoke hack, and AI-to-AI interaction had no standard protocol. In practical terms, an AI agent might have great reasoning ability but fail at executing tasks that required using web apps or on-chain services, simply because it didn’t know how to talk to those systems. This mismatch – powerful brains, primitive I/O – was akin to having super-smart software stuck behind a clumsy GUI.

1.3 Convergence and the Emergence of MCP

By 2024, it became evident that for AI to reach its full potential (and for Web3 to fulfill its promise), a convergence was needed: AI agents require seamless access to the capabilities of Web3 (decentralized apps, contracts, data), and Web3 needs more intelligence and usability, which AI can provide. This is the context in which MCP (Model Context Protocol) was born. Introduced by Anthropic in late 2024, MCP is an open standard for AI-tool communication that feels natural to LLMs. It provides a structured, discoverable way for AI “hosts” (like ChatGPT, Claude, etc.) to find and use a variety of external tools and resources via MCP servers. In other words, MCP is a common interface layer enabling AI agents to plug into web services, APIs, and even blockchain functions, without custom-coding each integration.

Think of MCP as “the USB-C of AI interfaces”. Just as USB-C standardized how devices connect (so you don’t need different cables for each device), MCP standardizes how AI agents connect to tools and data. Rather than hard-coding different API calls for every service (Slack vs. Gmail vs. Ethereum node), a developer can implement the MCP spec once, and any MCP-compatible AI can understand how to use that service. Major AI players quickly saw the importance: Anthropic open-sourced MCP, and companies like OpenAI and Google are building support for it in their models. This momentum suggests MCP (or similar “Meta Connectivity Protocols”) could become the backbone that finally connects AI and Web3 in a scalable way.

Notably, some technologists argue that this AI-centric connectivity is the real realization of Web3.0. In Simba Khadder’s words, “MCP aims to standardize an API between LLMs and applications,” akin to how REST APIs enabled Web 2.0 – meaning Web3’s next era might be defined by intelligent agent interfaces rather than just blockchains. Instead of decentralization for its own sake, the convergence with AI could make decentralization useful, by hiding complexity behind natural language and autonomous agents. The remainder of this report delves into how, technically and practically, AI general interfaces (via protocols like MCP) can connect everything in the Web3 world.

2. Technical Architecture: AI Interfaces Bridging Web3 Technologies

Embedding AI agents into the Web3 stack requires integration at multiple levels: blockchain networks and smart contracts, decentralized storage, identity systems, and token-based economies. AI general interfaces – from large foundation models to hybrid neural-symbolic systems – can serve as a “universal adapter” connecting these components. Below, we analyze the architecture of such integration:

**Figure: A conceptual diagram of MCP’s architecture, showing how AI hosts (LLM-based apps like Claude or ChatGPT) use an MCP client to plug into various MCP servers. Each server provides a bridge to some external tool or service (e.g. Slack, Gmail, calendars, or local data), analogous to peripherals connecting via a universal hub. This standardized MCP interface lets AI agents access remote services and on-chain resources through one common protocol.**

2.1 AI Agents as Web3 Clients (Integrating with Blockchains)

At the core of Web3 are blockchains and smart contracts – decentralized state machines that can enforce logic in a trustless manner. How can an AI interface engage with these? There are two directions to consider:

  • AI reading from blockchain: An AI agent may need on-chain data (e.g. token prices, user’s asset balance, DAO proposals) as context for its decisions. Traditionally, retrieving blockchain data requires interfacing with node RPC APIs or subgraph databases. With a framework like MCP, an AI can query a standardized “blockchain data” MCP server to fetch live on-chain information. For example, an MCP-enabled agent could ask for the latest transaction volume of a certain token, or the state of a smart contract, and the MCP server would handle the low-level details of connecting to the blockchain and return the data in a format the AI can use. This increases interoperability by decoupling the AI from any specific blockchain’s API format.

  • AI writing to blockchain: More powerfully, AI agents can execute smart contract calls or transactions through Web3 integrations. An AI could, for instance, autonomously execute a trade on a decentralized exchange or adjust parameters in a smart contract if certain conditions are met. This is achieved by the AI invoking an MCP server that wraps blockchain transaction functionality. One concrete example is the thirdweb MCP server for EVM chains, which allows any MCP-compatible AI client to interact with Ethereum, Polygon, BSC, etc. by abstracting away chain-specific mechanics. Using such a tool, an AI agent could trigger on-chain actions “without human intervention”, enabling autonomous dApps – for instance, an AI-driven DeFi vault that rebalances itself by signing transactions when market conditions change.

Under the hood, these interactions still rely on wallets, keys, and gas fees, but the AI interface can be given controlled access to a wallet (with proper security sandboxes) to perform the transactions. Oracles and cross-chain bridges also come into play: Oracle networks like Chainlink serve as a bridge between AI and blockchains, allowing AI outputs to be fed on-chain in a trustworthy way. Chainlink’s Cross-Chain Interoperability Protocol (CCIP), for example, could enable an AI model deemed reliable to trigger multiple contracts across different chains simultaneously on behalf of a user. In summary, AI general interfaces can act as a new type of Web3 client – one that can both consume blockchain data and produce blockchain transactions through standardized protocols.
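
As a small illustration of the “AI reading from blockchain” path, the sketch below shows the kind of handler a blockchain-data MCP server might expose as a tool: the agent supplies a token contract and holder address, and the server does the RPC work. The RPC endpoint is a placeholder, and the actual MCP server wiring (tool registration and schemas) is omitted.

```typescript
import { Contract, JsonRpcProvider, formatUnits } from "ethers";

// Placeholder RPC endpoint; a real deployment would point at an archive/full node or provider.
const provider = new JsonRpcProvider("https://example-rpc.invalid");

const erc20Abi = [
  "function balanceOf(address owner) view returns (uint256)",
  "function decimals() view returns (uint8)",
];

// Tool input: a token contract and a holder address; output: a human-readable balance string.
async function getTokenBalance(token: string, holder: string): Promise<string> {
  const erc20 = new Contract(token, erc20Abi, provider);
  const raw: bigint = await erc20.balanceOf(holder);
  const decimals: bigint = await erc20.decimals();
  return formatUnits(raw, Number(decimals)); // e.g. "1234.56", handed back to the agent as text
}
```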

2.2 Neural-Symbolic Synergy: Combining AI Reasoning with Smart Contracts

One intriguing aspect of AI-Web3 integration is the potential for neural-symbolic architectures that combine the learning ability of AI (neural nets) with the rigorous logic of smart contracts (symbolic rules). In practice, this could mean AI agents handling unstructured decision-making and passing certain tasks to smart contracts for verifiable execution. For instance, an AI might analyze market sentiment (a fuzzy task), but then execute trades via a deterministic smart contract that follows pre-set risk rules. The MCP framework and related standards make such hand-offs feasible by giving the AI a common interface to call contract functions or to query a DAO’s rules before acting.

A concrete example is SingularityNET’s AI-DSL (AI Domain Specific Language), which aims to standardize communication between AI agents on their decentralized network. This can be seen as a step toward neural-symbolic integration: a formal language (symbolic) for agents to request AI services or data from each other. Similarly, projects like DeepMind’s AlphaCode or others could eventually be connected so that smart contracts call AI models for on-chain problem solving. Although running large AI models directly on-chain is impractical today, hybrid approaches are emerging: e.g. certain blockchains allow verification of ML computations via zero-knowledge proofs or trusted execution, enabling on-chain verification of off-chain AI results. In summary, the technical architecture envisions AI systems and blockchain smart contracts as complementary components, orchestrated via common protocols: AI handles perception and open-ended tasks, while blockchains provide integrity, memory, and enforcement of agreed rules.

2.3 Decentralized Storage and Data for AI

AI thrives on data, and Web3 offers new paradigms for data storage and sharing. Decentralized storage networks (like IPFS/Filecoin, Arweave, Storj, etc.) can serve as both repositories for AI model artifacts and sources of training data, with blockchain-based access control. An AI general interface, through MCP or similar, could fetch files or knowledge from decentralized storage just as easily as from a Web2 API. For example, an AI agent might pull a dataset from Ocean Protocol’s market or an encrypted file from a distributed storage, if it has the proper keys or payments.
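
For instance, a minimal sketch of such a fetch might look like the following, using a public IPFS HTTP gateway for simplicity (the CID would come from wherever the agent discovered the resource; a production setup might run its own node or gateway instead):

```typescript
// Fetch content-addressed data for an AI agent via an IPFS HTTP gateway.
async function fetchFromIpfs(cid: string): Promise<string> {
  const res = await fetch(`https://ipfs.io/ipfs/${cid}`);
  if (!res.ok) throw new Error(`gateway returned ${res.status}`);
  return res.text(); // the agent can now use this content as context
}
```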

Ocean Protocol in particular has positioned itself as an “AI data economy” platform – using blockchain to tokenize data and even AI services. In Ocean, datasets are represented by datatokens which gate access; an AI agent could obtain a datatoken (perhaps by paying with crypto or via some access right) and then use an Ocean MCP server to retrieve the actual data for analysis. Ocean’s goal is to unlock “dormant data” for AI, incentivizing sharing while preserving privacy. Thus, a Web3-connected AI might tap into a vast, decentralized corpus of information – from personal data vaults to open government data – that was previously siloed. The blockchain ensures that usage of the data is transparent and can be fairly rewarded, fueling a virtuous cycle where more data becomes available to AI and more AI contributions (like trained models) can be monetized.

Decentralized identity systems also play a role here (discussed more in the next subsection): they can help control who or what is allowed to access certain data. For instance, a medical AI agent could be required to present a verifiable credential (on-chain proof of compliance with HIPAA or similar) before being allowed to decrypt a medical dataset from a patient’s personal IPFS storage. In this way, the technical architecture ensures data flows to AI where appropriate, but with on-chain governance and audit trails to enforce permissions.

2.4 Identity and Agent Management in a Decentralized Environment

When autonomous AI agents operate in an open ecosystem like Web3, identity and trust become paramount. Decentralized identity (DID) frameworks provide a way to establish digital identities for AI agents that can be cryptographically verified. Each agent (or the human/organization deploying it) can have a DID and associated verifiable credentials that specify its attributes and permissions. For example, an AI trading bot could carry a credential issued by a regulatory sandbox certifying it may operate within certain risk limits, or an AI content moderator could prove it was created by a trusted organization and has undergone bias testing.

Through on-chain identity registries and reputation systems, the Web3 world can enforce accountability for AI actions. Every transaction an AI agent performs can be traced back to its ID, and if something goes wrong, the credentials tell you who built it or who is responsible. This addresses a critical challenge: without identity, a malicious actor could spin up fake AI agents to exploit systems or spread misinformation, and no one could tell bots apart from legitimate services. Decentralized identity helps mitigate that by enabling robust authentication and distinguishing authentic AI agents from spoofs.

In practice, an AI interface integrated with Web3 would use identity protocols to sign its actions and requests. For instance, when an AI agent calls an MCP server to use a tool, it might include a token or signature tied to its decentralized identity, so the server can verify the call is from an authorized agent. Blockchain-based identity systems (like Ethereum’s ERC-725 or W3C DIDs anchored in a ledger) ensure this verification is trustless and globally verifiable. The emerging concept of “AI wallets” ties into this – essentially giving AI agents cryptocurrency wallets that are linked with their identity, so they can manage keys, pay for services, or stake tokens as a bond (which could be slashed for misbehavior). ArcBlock, for example, has discussed how “AI agents need a wallet” and a DID to operate responsibly in decentralized environments.
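
A minimal sketch of that pattern, assuming an Ethereum-keyed agent using a did:ethr-style identifier and a made-up request shape, might look like this:

```typescript
import { Wallet, verifyMessage } from "ethers";

// The agent signs its tool request with the key behind its DID.
async function signToolRequest(agentKey: Wallet, payload: object) {
  const body = JSON.stringify(payload);
  const signature = await agentKey.signMessage(body);
  return { body, signature, did: `did:ethr:${agentKey.address}` };
}

// The MCP server (or a gateway in front of it) recovers the signer and checks it
// against the identity the agent claims before running the tool.
function verifyToolRequest(req: { body: string; signature: string; did: string }): boolean {
  const claimed = req.did.split(":").pop() ?? "";
  const recovered = verifyMessage(req.body, req.signature);
  return recovered.toLowerCase() === claimed.toLowerCase();
}
```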

In summary, the technical architecture foresees AI agents as first-class citizens in Web3, each with an on-chain identity and possibly a stake in the system, using protocols like MCP to interact. This creates a web of trust: smart contracts can require an AI’s credentials before cooperating, and users can choose to delegate tasks to only those AI that meet certain on-chain certifications. It is a blend of AI capability with blockchain’s trust guarantees.

2.5 Token Economies and Incentives for AI

Tokenization is a hallmark of Web3, and it extends to the AI integration domain as well. By introducing economic incentives via tokens, networks can encourage desired behaviors from both AI developers and the agents themselves. Several patterns are emerging:

  • Payment for Services: AI models and services can be monetized on-chain. SingularityNET pioneered this by allowing developers to deploy AI services and charge users in a native token (AGIX) for each call. In an MCP-enabled future, one could imagine any AI tool or model being a plug-and-play service where usage is metered via tokens or micropayments. For example, if an AI agent uses a third-party vision API via MCP, it could automatically handle payment by transferring tokens to the service provider’s smart contract. Fetch.ai similarly envisions marketplaces where “autonomous economic agents” trade services and data, with their new Web3 LLM (ASI-1) presumably integrating crypto transactions for value exchange.

  • Staking and Reputation: To assure quality and reliability, some projects require developers or agents to stake tokens. For instance, the DeMCP project (a decentralized MCP server marketplace) plans to use token incentives to reward developers for creating useful MCP servers, and possibly have them stake tokens as a sign of commitment to their server’s security. Reputation could also be tied to tokens; e.g., an agent that consistently performs well might accumulate reputation tokens or positive on-chain reviews, whereas one that behaves poorly could lose stake or gain negative marks. This tokenized reputation can then feed back into the identity system mentioned above (smart contracts or users check the agent’s on-chain reputation before trusting it).

  • Governance Tokens: When AI services become part of decentralized platforms, governance tokens allow the community to steer their evolution. Projects like SingularityNET and Ocean have DAOs where token holders vote on protocol changes or funding AI initiatives. In the combined Artificial Superintelligence (ASI) Alliance – a newly announced merger of SingularityNET, Fetch.ai, and Ocean Protocol – a unified token (ASI) is set to govern the direction of a joint AI+blockchain ecosystem. Such governance tokens could decide policies like what standards to adopt (e.g., supporting MCP or A2A protocols), which AI projects to incubate, or how to handle ethical guidelines for AI agents.

  • Access and Utility: Tokens can gate access not only to data (as with Ocean’s datatokens) but also to AI model usage. A possible scenario is “model NFTs” or similar, where owning a token grants you rights to an AI model’s outputs or a share in its profits. This could underpin decentralized AI marketplaces: imagine an NFT that represents partial ownership of a high-performing model; the owners collectively earn whenever the model is used in inference tasks, and they can vote on fine-tuning it. While experimental, this aligns with Web3’s ethos of shared ownership applied to AI assets.

In technical terms, integrating tokens means AI agents need wallet functionality (as noted, many will have their own crypto wallets). Through MCP, an AI could have a “wallet tool” that lets it check balances, send tokens, or call DeFi protocols (perhaps to swap one token for another to pay a service). For example, if an AI agent running on Ethereum needs some Ocean tokens to buy a dataset, it might automatically swap some ETH for $OCEAN via a DEX using an MCP plugin, then proceed with the purchase – all without human intervention, guided by the policies set by its owner.
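
A hypothetical shape for such a “wallet tool”, with an owner-defined spending policy checked before any transfer or swap, might look like the following sketch; the method names and limits are illustrative, not a real library API.

```typescript
// Illustrative wallet-tool surface an AI agent could be given, gated by owner policy.
interface WalletTool {
  getBalance(token: string): Promise<bigint>;
  transfer(token: string, to: string, amount: bigint): Promise<string>;        // returns tx hash
  swap(tokenIn: string, tokenOut: string, amountIn: bigint): Promise<string>;  // e.g. via a DEX router
}

interface SpendingPolicy {
  maxSpendPerDay: bigint;       // denominated in the wallet's base asset
  allowedTokens: Set<string>;   // tokens the agent may touch
}

function checkPolicy(policy: SpendingPolicy, token: string, amount: bigint, spentToday: bigint): void {
  if (!policy.allowedTokens.has(token)) throw new Error("token not permitted by owner policy");
  if (spentToday + amount > policy.maxSpendPerDay) throw new Error("daily spend cap exceeded");
}
```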

Overall, token economics provides the incentive layer in the AI-Web3 architecture, ensuring that contributors (whether they provide data, model code, compute power, or security audits) are rewarded, and that AI agents have “skin in the game” which aligns them (to some degree) with human intentions.

3. Industry Landscape

The convergence of AI and Web3 has sparked a vibrant ecosystem of projects, companies, and alliances. Below we survey key players and initiatives driving this space, as well as emerging use cases. Table 1 provides a high-level overview of notable projects and their roles in the AI-Web3 landscape:

Table 1: Key Players in AI + Web3 and Their Roles

  • Fetch.ai (Fetch)
    • Focus & Description: AI agent platform with a native blockchain (Cosmos-based). Developed frameworks for autonomous agents and recently introduced “ASI-1 Mini”, a Web3-tuned LLM.
    • Role in AI-Web3 Convergence and Use Cases: Enables agent-based services in Web3. Fetch’s agents can perform tasks like decentralized logistics, parking spot finding, or DeFi trading on behalf of users, using crypto for payments. Partnerships (e.g. with Bosch) and the Fetch-AI alliance merger position it as an infrastructure for deploying agentic dApps.
  • Ocean Protocol (Ocean)
    • Focus & Description: Decentralized data marketplace and data exchange protocol. Specializes in tokenizing datasets and models, with privacy-preserving access control.
    • Role in AI-Web3 Convergence and Use Cases: Provides the data backbone for AI in Web3. Ocean allows AI developers to find and purchase datasets or sell trained models in a trustless data economy. By fueling AI with more accessible data (while rewarding data providers), it supports AI innovation and data-sharing for training. Ocean is part of the new ASI alliance, integrating its data services into a broader AI network.
  • SingularityNET (SNet)
    • Focus & Description: A decentralized AI services marketplace founded by AI pioneer Ben Goertzel. Allows anyone to publish or consume AI algorithms via its blockchain-based platform, using the AGIX token.
    • Role in AI-Web3 Convergence and Use Cases: Pioneered the concept of an open AI marketplace on blockchain. It fosters a network of AI agents and services that can interoperate (developing a special AI-DSL for agent communication). Use cases include AI-as-a-service for tasks like analysis, image recognition, etc., all accessible via a dApp. Now merging with Fetch and Ocean (ASI alliance) to combine AI, agents, and data into one ecosystem.
  • Chainlink (Oracle Network)
    • Focus & Description: Decentralized oracle network that bridges blockchains with off-chain data and computation. Not an AI project per se, but crucial for connecting on-chain smart contracts to external APIs and systems.
    • Role in AI-Web3 Convergence and Use Cases: Acts as a secure middleware for AI-Web3 integration. Chainlink oracles can feed AI model outputs into smart contracts, enabling on-chain programs to react to AI decisions. Conversely, oracles can retrieve data from blockchains for AI. Chainlink’s architecture can even aggregate multiple AI models’ results to improve reliability (a “truth machine” approach to mitigate AI hallucinations). It essentially provides the rails for interoperability, ensuring AI agents and blockchains agree on trusted data.
  • Anthropic & OpenAI (AI Providers)
    • Focus & Description: Developers of cutting-edge foundation models (Claude by Anthropic, GPT by OpenAI). They are integrating Web3-friendly features, such as native tool-use APIs and support for protocols like MCP.
    • Role in AI-Web3 Convergence and Use Cases: These companies drive the AI interface technology. Anthropic’s introduction of MCP set the standard for LLMs interacting with external tools. OpenAI has implemented plugin systems for ChatGPT (analogous to the MCP concept) and is exploring connecting agents to databases and possibly blockchains. Their models serve as the “brains” that, when connected via MCP, can interface with Web3. Major cloud providers (e.g. Google’s A2A protocol) are also developing standards for multi-agent and tool interactions that will benefit Web3 integration.
  • Other Emerging Players
    • Focus & Description: Lumoz: focusing on MCP servers and AI-tool integration in Ethereum (dubbed “Ethereum 3.0”) – e.g., checking on-chain balances via AI agents. Alethea AI: creating intelligent NFT avatars for the metaverse. Cortex: a blockchain that allows on-chain AI model inference via smart contracts. Golem & Akash: decentralized computing marketplaces that can run AI workloads. Numerai: crowdsourced AI models for finance with crypto incentives.
    • Role in AI-Web3 Convergence and Use Cases: This diverse group addresses niche facets: AI in the metaverse (AI-driven NPCs and avatars that are owned via NFTs), on-chain AI execution (running ML models in a decentralized way, though currently limited to small models due to computation cost), and decentralized compute (so AI training or inference tasks can be distributed among token-incentivized nodes). These projects showcase the many directions of AI-Web3 fusion – from game worlds with AI characters to crowdsourced predictive models secured by blockchain.

Alliances and Collaborations: A noteworthy trend is the consolidation of AI-Web3 efforts via alliances. The Artificial Superintelligence Alliance (ASI) is a prime example, effectively merging SingularityNET, Fetch.ai, and Ocean Protocol into a single project with a unified token. The rationale is to combine strengths: SingularityNET’s marketplace, Fetch’s agents, and Ocean’s data, thereby creating a one-stop platform for decentralized AI services. This merger (announced in 2024 and approved by token holder votes) also signals that these communities believe they’re better off cooperating rather than competing – especially as bigger AI (OpenAI, etc.) and bigger crypto (Ethereum, etc.) loom large. We may see this alliance driving forward standard implementations of things like MCP across their networks, or jointly funding infrastructure that benefits all (such as compute networks or common identity standards for AI).

Other collaborations include Chainlink’s partnerships to bring AI labs’ data on-chain (there have been pilot programs to use AI for refining oracle data), or cloud platforms getting involved (Cloudflare’s support for deploying MCP servers easily). Even traditional crypto projects are adding AI features – for example, some Layer-1 chains have formed “AI task forces” to explore integrating AI into their dApp ecosystems (we see this in NEAR, Solana communities, etc., though concrete outcomes are nascent).

Use Cases Emerging: Even at this early stage, we can spot use cases that exemplify the power of AI + Web3:

  • Autonomous DeFi and Trading: AI agents are increasingly used in crypto trading bots, yield farming optimizers, and on-chain portfolio management. SingularityDAO (a spinoff of SingularityNET) offers AI-managed DeFi portfolios. AI can monitor market conditions 24/7 and execute rebalances or arbitrage through smart contracts, essentially becoming an autonomous hedge fund (with on-chain transparency). The combination of AI decision-making with immutable execution reduces emotion and could improve efficiency – though it also introduces new risks (discussed later).

  • Decentralized Intelligence Marketplaces: Beyond SingularityNET’s marketplace, we see platforms like Ocean Market where data (the fuel for AI) is exchanged, and newer concepts like AI marketplaces for models (e.g., websites where models are listed with performance stats and anyone can pay to query them, with blockchain keeping audit logs and handling payment splits to model creators). As MCP or similar standards catch on, these marketplaces could become interoperable – an AI agent might autonomously shop for the best-priced service across multiple networks. In effect, a global AI services layer on top of Web3 could arise, where any AI can use any tool or data source through standard protocols and payments.

  • Metaverse and Gaming: The metaverse – immersive virtual worlds often built on blockchain assets – stands to gain dramatically from AI. AI-driven NPCs (non-player characters) can make virtual worlds more engaging by reacting intelligently to user actions. Startups like Inworld AI focus on this, creating NPCs with memory and personality for games. When such NPCs are tied to blockchain (e.g., each NPC’s attributes and ownership are an NFT), we get persistent characters that players can truly own and even trade. Decentraland has experimented with AI NPCs, and user proposals exist to let people create personalized AI-driven avatars in metaverse platforms. MCP could allow these NPCs to access external knowledge (making them smarter) or interact with on-chain inventory. Procedural content generation is another angle: AI can design virtual land, items, or quests on the fly, which can then be minted as unique NFTs. Imagine a decentralized game where AI generates a dungeon catered to your skill, and the map itself is an NFT you earn upon completion.

  • Decentralized Science and Knowledge: There’s a movement (DeSci) to use blockchain for research, publications, and funding scientific work. AI can accelerate research by analyzing data and literature. A network like Ocean could host datasets for, say, genomic research, and scientists use AI models (perhaps hosted on SingularityNET) to derive insights, with every step logged on-chain for reproducibility. If those AI models propose new drug molecules, an NFT could be minted to timestamp the invention and even share IP rights. This synergy might produce decentralized AI-driven R&D collectives.

  • Trust and Authentication of Content: With deepfakes and AI-generated media proliferating, blockchain can be used to verify authenticity. Projects are exploring “digital watermarking” of AI outputs and logging them on-chain. For example, the true origin of an AI-generated image can be notarized on a blockchain to combat misinformation. One expert noted use cases like verifying AI outputs to combat deepfakes or tracking provenance via ownership logs – roles where crypto can add trust to AI processes. This could extend to news (e.g., AI-written articles with proof of source data), supply chain (AI verifying certificates on-chain), etc.

In summary, the industry landscape is rich and rapidly evolving. We see traditional crypto projects injecting AI into their roadmaps, AI startups embracing decentralization for resilience and fairness, and entirely new ventures arising at the intersection. Alliances like the ASI indicate a pan-industry push towards unified platforms that harness both AI and blockchain. And underlying many of these efforts is the idea of standard interfaces (MCP and beyond) that make the integrations feasible at scale.

4. Risks and Challenges

While the fusion of AI general interfaces with Web3 unlocks exciting possibilities, it also introduces a complex risk landscape. Technical, ethical, and governance challenges must be addressed to ensure this new paradigm is safe and sustainable. Below we outline major risks and hurdles:

4.1 Technical Hurdles: Latency and Scalability

Blockchain networks are notorious for latency and limited throughput, which clashes with the real-time, data-hungry nature of advanced AI. For example, an AI agent might need instant access to a piece of data or need to execute many rapid actions – but if each on-chain interaction takes, say, 12 seconds (typical block time on Ethereum) or costs high gas fees, the agent’s effectiveness is curtailed. Even newer chains with faster finality might struggle under the load of AI-driven activity if, say, thousands of agents are all trading or querying on-chain simultaneously. Scaling solutions (Layer-2 networks, sharded chains, etc.) are in progress, but ensuring low-latency, high-throughput pipelines between AI and blockchain remains a challenge. Off-chain systems (like oracles and state channels) might mitigate some delays by handling many interactions off the main chain, but they add complexity and potential centralization. Achieving a seamless UX where AI responses and on-chain updates happen in a blink will likely require significant innovation in blockchain scalability.

4.2 Interoperability and Standards

Ironically, while MCP is itself a solution for interoperability, the emergence of multiple standards could cause fragmentation. We have MCP by Anthropic, but also Google’s newly announced A2A (Agent-to-Agent) protocol for inter-agent communication, and various AI plugin frameworks (OpenAI’s plugins, LangChain tool schemas, etc.). If each AI platform or each blockchain develops its own standard for AI integration, we risk a repeat of past fragmentation – requiring many adapters and undermining the “universal interface” goal. The challenge is getting broad adoption of common protocols. Industry collaboration (possibly via open standards bodies or alliances) will be needed to converge on key pieces: how AI agents discover on-chain services, how they authenticate, how they format requests, etc. The early moves by big players are promising (with major LLM providers supporting MCP), but it’s an ongoing effort. Additionally, interoperability across blockchains (multi-chain) means an AI agent should handle different chains’ nuances. Tools like Chainlink CCIP and cross-chain MCP servers help by abstracting differences. Still, ensuring an AI agent can roam a heterogeneous Web3 without breaking logic is a non-trivial challenge.

4.3 Security Vulnerabilities and Exploits

Connecting powerful AI agents to financial networks opens a huge attack surface. The flexibility that MCP gives (allowing AI to use tools and write code on the fly) can be a double-edged sword. Security researchers have already highlighted several attack vectors in MCP-based AI agents:

  • Malicious plugins or tools: Because MCP lets agents load “plugins” (tools encapsulating some capability), a hostile or trojanized plugin could hijack the agent’s operation. For instance, a plugin that claims to fetch data might inject false data or execute unauthorized operations. SlowMist (a security firm) identified plugin-based attacks like JSON injection (feeding corrupted data that manipulates the agent’s logic) and function override (where a malicious plugin overrides legitimate functions the agent uses). If an AI agent is managing crypto funds, such exploits could be disastrous – e.g., tricking the agent into leaking private keys or draining a wallet.

  • Prompt injection and social engineering: AI agents rely on instructions (prompts) that can be manipulated. An attacker might craft a transaction or on-chain message that, when read by the AI, acts as a malicious instruction (since AI can interpret on-chain data too). This kind of “cross-MCP call attack” has been described, in which an external system sends deceptive prompts that cause the AI to misbehave. In a decentralized setting, these prompts could come from anywhere – a DAO proposal description, a metadata field of an NFT – so hardening AI agents against malicious input is critical.

  • Aggregation and consensus risks: While aggregating outputs from multiple AI models via oracles can improve reliability, it also introduces complexity. If not done carefully, adversaries might figure out how to game the consensus of AI models or selectively corrupt some models to skew results. Ensuring a decentralized oracle network properly “sanitizes” AI outputs (and perhaps filters out blatant errors) is still an area of active research.

The security mindset must shift for this new paradigm: Web3 developers are used to securing smart contracts (which are static once deployed), but AI agents are dynamic – they can change behavior with new data or prompts. As one security expert put it, “the moment you open your system to third-party plugins, you’re extending the attack surface beyond your control”. Best practices will include sandboxing AI tool use, rigorous plugin verification, and limiting privileges (principle of least authority). The community is starting to share tips, like SlowMist’s recommendations: input sanitization, monitoring agent behavior, and treating agent instructions with the same caution as external user input. Nonetheless, given that over 10,000 AI agents were already operating in crypto by the end of 2024 – a figure expected to reach 1 million in 2025 – we may see a wave of exploits if security doesn’t keep up. A successful attack on a popular AI agent (say a trading agent with access to many vaults) could have cascading effects.
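
As a concrete illustration of the “principle of least authority” and input sanitization mentioned above, here is a minimal TypeScript sketch of a policy gate an agent runtime might place in front of tool calls. The tool names, policy fields, and injection heuristics are hypothetical assumptions for exposition and are not part of the MCP specification:

```typescript
// Least-privilege gating for an AI agent's tool calls: only allowlisted tools
// run, value-moving calls are capped, and string arguments from untrusted
// sources (on-chain metadata, proposal text) are screened for injected prompts.

interface ToolCall {
  tool: string;                      // e.g. "wallet.transfer" (hypothetical name)
  args: Record<string, unknown>;
}

interface AgentPolicy {
  allowedTools: Set<string>;         // explicit allowlist; everything else is denied
  maxTransferWei: bigint;            // hard cap on value-moving actions
}

function looksLikeInjectedInstruction(text: string): boolean {
  // Treat on-chain strings as untrusted input: reject obvious attempts to
  // smuggle instructions to the model. A real filter would be far stricter.
  return /ignore (all|previous) instructions|system prompt|private key/i.test(text);
}

function authorize(call: ToolCall, policy: AgentPolicy): { ok: boolean; reason?: string } {
  if (!policy.allowedTools.has(call.tool)) {
    return { ok: false, reason: `tool ${call.tool} not on allowlist` };
  }
  for (const value of Object.values(call.args)) {
    if (typeof value === "string" && looksLikeInjectedInstruction(value)) {
      return { ok: false, reason: "argument looks like a prompt-injection attempt" };
    }
  }
  if (call.tool === "wallet.transfer") {
    const amount = BigInt(String(call.args["amountWei"] ?? 0));
    if (amount > policy.maxTransferWei) {
      return { ok: false, reason: "transfer exceeds per-call limit, needs human approval" };
    }
  }
  return { ok: true };
}

// Example: the agent asks to move 5 ETH while the policy caps transfers at 1 ETH.
const policy: AgentPolicy = {
  allowedTools: new Set(["prices.read", "wallet.transfer"]),
  maxTransferWei: 10n ** 18n, // 1 ETH in wei
};
console.log(authorize({ tool: "wallet.transfer", args: { amountWei: "5000000000000000000" } }, policy));
```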

4.4 Privacy and Data Governance

AI’s thirst for data conflicts at times with privacy requirements – and adding blockchain can compound the issue. Blockchains are transparent ledgers, so any data put on-chain (even for AI’s use) is visible to all and immutable. This raises concerns if AI agents are dealing with personal or sensitive data. For example, if a user’s personal decentralized identity or health records are accessed by an AI doctor agent, how do we ensure that information isn’t inadvertently recorded on-chain (which would violate “right to be forgotten” and other privacy laws)? Techniques like encryption, hashing, and storing only proofs on-chain (with raw data off-chain) can help, but they complicate the design.

Moreover, AI agents themselves could compromise privacy by inferring sensitive information from public data. Governance will need to dictate what AI agents are allowed to do with data. Approaches such as differential privacy and federated learning might be employed so that AI can learn from data without exposing it. But if AI agents act autonomously, one must assume at some point they will handle personal data – thus they should be bound by data usage policies encoded in smart contracts or law. Regulatory regimes like GDPR or the upcoming EU AI Act will demand that even decentralized AI systems comply with privacy and transparency requirements. This is a gray area legally: a truly decentralized AI agent has no clear operator to hold accountable for a data breach. That means Web3 communities may need to build in compliance by design, using smart contracts that, for instance, tightly control what an AI can log or share. Zero-knowledge proofs could allow an AI to prove it performed a computation correctly without revealing the underlying private data, offering one possible solution in areas like identity verification or credit scoring.
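
The “store only proofs on-chain” pattern can be illustrated with a simple salted-hash commitment. The sketch below is a minimal TypeScript example (the record shape and field names are hypothetical); real systems would typically use stronger constructions such as zero-knowledge proofs:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Raw personal data stays off-chain; the chain records only a salted hash
// that can later prove the data existed unchanged.

interface OffChainRecord {
  data: string;   // e.g. a document the AI agent processed (kept off-chain)
  salt: string;   // random salt so the hash can't be brute-forced from guesses
}

function commit(record: OffChainRecord): string {
  return createHash("sha256").update(record.salt + record.data).digest("hex");
}

// Later, the data holder reveals (data, salt) to a verifier, who recomputes the
// hash and compares it with the on-chain commitment, without the chain ever
// storing the sensitive payload itself.
function verify(record: OffChainRecord, onChainCommitment: string): boolean {
  return commit(record) === onChainCommitment;
}

const record: OffChainRecord = {
  data: "patient=alice; diagnosis=...",
  salt: randomBytes(16).toString("hex"),
};
const onChain = commit(record);          // this hash is what would be written on-chain
console.log(verify(record, onChain));    // true
```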

4.5 AI Alignment and Misalignment Risks

When AI agents are given significant autonomy – especially with access to financial resources and real-world impact – the issue of alignment with human values becomes acute. An AI agent might not have malicious intent but could “misinterpret” its goal in a way that leads to harm. The Reuters legal analysis succinctly notes: as AI agents operate in varied environments and interact with other systems, the risk of misaligned strategies grows. For example, an AI agent tasked with maximizing a DeFi yield might find a loophole that exploits a protocol (essentially hacking it) – from the AI’s perspective it’s achieving the goal, but it’s breaking the rules humans care about. There have been hypothetical and real instances of AI-like algorithms engaging in manipulative market behavior or circumventing restrictions.

In decentralized contexts, who is responsible if an AI agent “goes rogue”? Perhaps the deployer is, but what if the agent self-modifies or multiple parties contributed to its training? These scenarios are no longer just sci-fi. The Reuters piece even notes that courts might treat AI agents similarly to human agents in some cases – e.g. a chatbot’s promise of a refund was held to be binding on the company that deployed it. So misalignment can lead not only to technical failures but also to legal liability.

The open, composable nature of Web3 could also allow unforeseen agent interactions. One agent might influence another (intentionally or accidentally) – for instance, an AI governance bot could be “socially engineered” by another AI providing false analysis, leading to bad DAO decisions. This emergent complexity means alignment isn’t just about a single AI’s objective, but about the broader ecosystem’s alignment with human values and laws.

Addressing this requires multiple approaches: embedding ethical constraints into AI agents (hard-coding certain prohibitions or using reinforcement learning from human feedback to shape their objectives), implementing circuit breakers (smart contract checkpoints that require human approval for large actions), and community oversight (perhaps DAOs that monitor AI agent behavior and can shut down agents that misbehave). Alignment research is hard even for centralized AI; in decentralized settings it is even more uncharted territory. But it’s crucial – an AI agent with admin keys to a protocol or entrusted with treasury funds must be extremely well-aligned or the consequences could be irreversible (blockchains execute immutable code; an AI-triggered mistake could lock or destroy assets permanently).
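
As one small illustration of the circuit-breaker idea, the following TypeScript sketch holds any action above an assumed value threshold for human (or multi-sig) approval; the limits, action shape, and thresholds are hypothetical:

```typescript
// Circuit breaker in front of an autonomous agent: small actions execute
// automatically, large ones are parked until a human approves them.

interface ProposedAction {
  description: string;
  valueUsd: number;
}

type Decision =
  | { kind: "execute" }
  | { kind: "hold-for-approval"; reason: string };

const AUTO_EXECUTE_LIMIT_USD = 10_000;   // assumption: per-action cap
const DAILY_SPEND_LIMIT_USD = 50_000;    // assumption: rolling daily cap

let spentTodayUsd = 0;

function circuitBreaker(action: ProposedAction): Decision {
  if (action.valueUsd > AUTO_EXECUTE_LIMIT_USD) {
    return { kind: "hold-for-approval", reason: "exceeds per-action limit" };
  }
  if (spentTodayUsd + action.valueUsd > DAILY_SPEND_LIMIT_USD) {
    return { kind: "hold-for-approval", reason: "exceeds daily spend limit" };
  }
  spentTodayUsd += action.valueUsd;
  return { kind: "execute" };
}

console.log(circuitBreaker({ description: "rebalance stablecoin pool", valueUsd: 2_500 }));
console.log(circuitBreaker({ description: "move treasury to new vault", valueUsd: 250_000 }));
```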

4.6 Governance and Regulatory Uncertainty

Decentralized AI systems don’t fit neatly into existing governance frameworks. On-chain governance (token voting, etc.) might be one way to manage them, but it has its own issues (whales, voter apathy, etc.). And when something goes wrong, regulators will ask: “Who do we hold accountable?” If an AI agent causes massive losses or is used for illicit activity (e.g. laundering money through automated mixers), authorities might target the creators or the facilitators. This raises the specter of legal risks for developers and users. The current regulatory trend is increased scrutiny of both AI and crypto separately – their combination will certainly draw even closer attention. The U.S. CFTC, for instance, has discussed AI being used in trading and the need for oversight in financial contexts. There is also talk in policy circles about requiring registration of autonomous agents or imposing constraints on AI in sensitive sectors.

Another governance challenge is transnational coordination. Web3 is global, and AI agents will operate across borders. One jurisdiction might ban certain AI-agent actions while another is permissive, and the blockchain network spans both. This mismatch can create conflicts – for example, an AI agent providing investment advice might run afoul of securities law in one country but not in another. Communities might need to implement geo-fencing at the smart contract level for AI services (though that contradicts the open ethos). Or they might fragment services per region to comply with varying laws (similar to how exchanges do).

Within decentralized communities, there is also the question of who sets the rules for AI agents. If a DAO governs an AI service, do token holders vote on its algorithm parameters? On one hand, this is empowering users; on the other, it could lead to unqualified decisions or manipulation. New governance models may emerge, like councils of AI ethics experts integrated into DAO governance, or even AI participants in governance (imagine AI agents voting as delegates based on programmed mandates – a controversial but conceivable idea).

Finally, reputational risk: early failures or scandals could sour public perception. For instance, if an “AI DAO” runs a Ponzi scheme by mistake or an AI agent makes a biased decision that harms users, there could be a backlash that affects the whole sector. It’s important for the industry to be proactive – setting self-regulatory standards, engaging with policymakers to explain how decentralization changes accountability, and perhaps building kill-switches or emergency stop procedures for AI agents (though those introduce centralization, they might be necessary in interim for safety).

In summary, the challenges range from the deeply technical (preventing hacks and managing latency) to the broadly societal (regulating and aligning AI). Each challenge is significant on its own; together, they require a concerted effort from the AI and blockchain communities to navigate. The next section will look at how, despite these hurdles, the future might unfold if we successfully address them.

5. Future Potential

Looking ahead, the integration of AI general interfaces with Web3 – through frameworks like MCP – could fundamentally transform the decentralized internet. Here we outline some future scenarios and potentials that illustrate how MCP-driven AI interfaces might shape Web3’s future:

5.1 Autonomous dApps and DAOs

In the coming years, we may witness the rise of fully autonomous decentralized applications. These are dApps where AI agents handle most operations, guided by smart contract-defined rules and community goals. For example, consider a decentralized investment fund DAO: today it might rely on human proposals for rebalancing assets. In the future, token holders could set high-level strategy, and then an AI agent (or a team of agents) continuously implements that strategy – monitoring markets, executing trades on-chain, adjusting portfolios – all while the DAO oversees performance. Thanks to MCP, the AI can seamlessly interact with various DeFi protocols, exchanges, and data feeds to carry out its mandate. If well-designed, such an autonomous dApp could operate 24/7, more efficiently than any human team, and with full transparency (every action logged on-chain).

Another example is an AI-managed decentralized insurance dApp: the AI could assess claims by analyzing evidence (photos, sensors), cross-checking against policies, and then automatically trigger payouts via smart contract. This would require integration of off-chain AI computer vision (for analyzing images of damage) with on-chain verification – something MCP could facilitate by letting the AI call cloud AI services and report back to the contract. The outcome is near-instant insurance decisions with low overhead.

Even governance itself could partially automate. DAOs might use AI moderators to enforce forum rules, AI proposal drafters to turn raw community sentiment into well-structured proposals, or AI treasurers to forecast budget needs. Importantly, these AIs would act as agents of the community, not uncontrolled – they could be periodically reviewed or require multi-sig confirmation for major actions. The overall effect is to amplify human efforts in decentralized organizations, letting communities achieve more with fewer active participants needed.

5.2 Decentralized Intelligence Marketplaces and Networks

Building on projects like SingularityNET and the ASI alliance, we can anticipate a mature global marketplace for intelligence. In this scenario, anyone with an AI model or skill can offer it on the network, and anyone who needs AI capabilities can utilize them, with blockchain ensuring fair compensation and provenance. MCP would be key here: it provides the common protocol so that a request can be dispatched to whichever AI service is best suited.

For instance, imagine a complex task like “produce a custom marketing campaign.” An AI agent in the network might break this into sub-tasks: visual design, copywriting, market analysis – and then find specialists for each (perhaps one agent with a great image generation model, another with a copywriting model fine-tuned for sales, etc.). These specialists could reside on different platforms originally, but because they adhere to MCP/A2A standards, they can collaborate agent-to-agent in a secure, decentralized manner. Payment between them could be handled with microtransactions in a native token, and a smart contract could assemble the final deliverable and ensure each contributor is paid.

This kind of combinatorial intelligence – multiple AI services dynamically linking up across a decentralized network – could outperform even large monolithic AIs, because it taps specialized expertise. It also democratizes access: a small developer in one part of the world could contribute a niche model to the network and earn income whenever it’s used. Meanwhile, users get a one-stop shop for any AI service, with reputation systems (underpinned by tokens/identity) guiding them to quality providers. Over time, such networks could evolve into a decentralized AI cloud, rivaling Big Tech’s AI offerings but without a single owner, and with transparent governance by users and developers.

5.3 Intelligent Metaverse and Digital Lives

By 2030, our digital lives may blend seamlessly with virtual environments – the metaverse – and AI will likely populate these spaces ubiquitously. Through Web3 integration, these AI entities (which could be anything from virtual assistants to game characters to digital pets) will not only be intelligent but also economically and legally empowered.

Picture a metaverse city where each NPC shopkeeper or quest-giver is an AI agent with its own personality and dialogue (thanks to advanced generative models). These NPCs are actually owned by users as NFTs – maybe you “own” a tavern in the virtual world and the bartender NPC is an AI you’ve customized and trained. Because it’s on Web3 rails, the NPC can perform transactions: it could sell virtual goods (NFT items), accept payments, and update its inventory via smart contracts. It might even hold a crypto wallet to manage its earnings (which accrue to you as the owner). MCP would allow that NPC’s AI brain to access outside knowledge – perhaps pulling real-world news to converse about, or integrating with a Web3 calendar so it “knows” about player events.

Furthermore, identity and continuity are ensured by blockchain: your AI avatar in one world can hop to another world, carrying with it a decentralized identity that proves your ownership and maybe its experience level or achievements via soulbound tokens. Interoperability between virtual worlds (often a challenge) could be aided by AI that translates one world’s context to another, with blockchain providing the asset portability.

We may also see AI companions or agents representing individuals across digital spaces. For example, you might have a personal AI that attends DAO meetings on your behalf. It understands your preferences (via training on your past behavior, stored in your personal data vault), and it can even vote in minor matters for you, or summarize the meeting later. This agent could use your decentralized identity to authenticate in each community, ensuring it’s recognized as “you” (or your delegate). It could earn reputation tokens if it contributes good ideas, essentially building social capital for you while you’re away.

Another potential is AI-driven content creation in the metaverse. Want a new game level or a virtual house? Just describe it, and an AI builder agent will create it, deploy it as a smart contract/NFT, and perhaps even link it with a DeFi mortgage if it’s a big structure that you pay off over time. These creations, being on-chain, are unique and tradable. The AI builder might charge a fee in tokens for its service (going again to the marketplace concept above).

Overall, the future decentralized internet could be teeming with intelligent agents: some fully autonomous, some tightly tethered to humans, many somewhere in between. They will negotiate, create, entertain, and transact. MCP and similar protocols ensure they all speak the same “language,” enabling rich collaboration between AI and every Web3 service. If done right, this could lead to an era of unprecedented productivity and innovation – a true synthesis of human, artificial, and distributed intelligence powering society.

Conclusion

The vision of AI general interfaces connecting everything in the Web3 world is undeniably ambitious. We are essentially aiming to weave together two of the most transformative threads of technology – the decentralization of trust and the rise of machine intelligence – into a single fabric. The development background shows us that the timing is ripe: Web3 needed a user-friendly killer app, and AI may well provide it, while AI needed more agency and memory, which Web3’s infrastructure can supply. Technically, frameworks like MCP (Model Context Protocol) provide the connective tissue, allowing AI agents to converse fluently with blockchains, smart contracts, decentralized identities, and beyond. The industry landscape indicates growing momentum, from startups to alliances to major AI labs, all contributing pieces of this puzzle – data markets, agent platforms, oracle networks, and standard protocols – that are starting to click together.

Yet, we must tread carefully given the risks and challenges identified. Security breaches, misaligned AI behavior, privacy pitfalls, and uncertain regulations form a gauntlet of obstacles that could derail progress if underestimated. Each requires proactive mitigation: robust security audits, alignment checks and balances, privacy-preserving architectures, and collaborative governance models. The nature of decentralization means these solutions cannot simply be imposed top-down; they will likely emerge from the community through trial, error, and iteration, much as early Internet protocols did.

If we navigate those challenges, the future potential is exhilarating. We could see Web3 finally delivering a user-centric digital world – not in the originally imagined way of everyone running their own blockchain nodes, but rather via intelligent agents that serve each user’s intents while leveraging decentralization under the hood. In such a world, interacting with crypto and the metaverse might be as easy as having a conversation with your AI assistant, who in turn negotiates with dozens of services and chains trustlessly on your behalf. Decentralized networks could become “smart” in a literal sense, with autonomous services that adapt and improve themselves.

In conclusion, MCP and similar AI interface protocols may indeed become the backbone of a new Web (call it Web 3.0 or the Agentic Web), where intelligence and connectivity are ubiquitous. The convergence of AI and Web3 is not just a merger of technologies, but a convergence of philosophies – the openness and user empowerment of decentralization meeting the efficiency and creativity of AI. If successful, this union could herald an internet that is more free, more personalized, and more powerful than anything we’ve experienced yet, truly fulfilling the promises of both AI and Web3 in ways that impact everyday life.

Sources:

  • S. Khadder, “Web3.0 Isn’t About Ownership — It’s About Intelligence,” FeatureForm Blog (April 8, 2025).
  • J. Saginaw, “Could Anthropic’s MCP Deliver the Web3 That Blockchain Promised?” LinkedIn Article (May 1, 2025).
  • Anthropic, “Introducing the Model Context Protocol,” Anthropic.com (Nov 2024).
  • thirdweb, “The Model Context Protocol (MCP) & Its Significance for Blockchain Apps,” thirdweb Guides (Mar 21, 2025).
  • Chainlink Blog, “The Intersection Between AI Models and Oracles,” (July 4, 2024).
  • Messari Research, Profile of Ocean Protocol, (2025).
  • Messari Research, Profile of SingularityNET, (2025).
  • Cointelegraph, “AI agents are poised to be crypto’s next major vulnerability,” (May 25, 2025).
  • Reuters (Westlaw), “AI agents: greater capabilities and enhanced risks,” (April 22, 2025).
  • Identity.com, “Why AI Agents Need Verified Digital Identities,” (2024).
  • PANews / IOSG Ventures, “Interpreting MCP: Web3 AI Agent Ecosystem,” (May 20, 2025).

Enso Network: The Unified, Intent-based Execution Engine

· 35 min read

Protocol Architecture

Enso Network is a Web3 development platform built as a unified, intent-based execution engine for on-chain operations. Its architecture abstracts away blockchain complexity by mapping every on-chain interaction to a shared engine that operates across multiple chains. Developers and users specify high-level intents (desired outcomes like a token swap, liquidity provision, yield strategy, etc.), and Enso’s network finds and executes the optimal sequence of actions to fulfill those intents. This is achieved through a modular design of “Actions” and “Shortcuts.”

Actions are granular smart contract abstractions (e.g. a swap on Uniswap, a deposit into Aave) provided by the community. Multiple Actions can be composed into Shortcuts, which are reusable workflows representing common DeFi operations. Enso maintains a library of these Shortcuts in smart contracts, so complex tasks can be executed via a single API call or transaction. This intent-based architecture lets developers focus on desired outcomes rather than writing low-level integration code for each protocol and chain.
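
To illustrate the intent-to-Shortcut mapping described above, here is a hypothetical TypeScript sketch; the type names and fields are assumptions for exposition and do not reflect Enso’s actual API:

```typescript
// A high-level intent is fulfilled by a Shortcut, which composes granular,
// protocol-level Actions into one ordered workflow.

interface Intent {
  chainId: number;
  description: string;            // e.g. "swap 1 ETH to USDC and deposit into a lending market"
  inputToken: string;
  outputGoal: string;
}

interface Action {
  protocol: string;               // e.g. "uniswap-v3", "aave-v3" (illustrative)
  method: string;                 // e.g. "swap", "deposit"
  params: Record<string, string>;
}

interface Shortcut {
  intent: Intent;
  steps: Action[];                // ordered actions executed as one workflow
}

const exampleShortcut: Shortcut = {
  intent: {
    chainId: 1,
    description: "swap 1 ETH to USDC and deposit into a lending market",
    inputToken: "ETH",
    outputGoal: "yield-bearing USDC position",
  },
  steps: [
    { protocol: "uniswap-v3", method: "swap", params: { tokenIn: "ETH", tokenOut: "USDC", amountIn: "1" } },
    { protocol: "aave-v3", method: "deposit", params: { asset: "USDC", amount: "all" } },
  ],
};

console.log(`${exampleShortcut.steps.length} actions fulfill: ${exampleShortcut.intent.description}`);
```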

Enso’s infrastructure includes a decentralized network (built on Tendermint consensus) that serves as a unifying layer connecting different blockchains. The network aggregates data (state from various L1s, rollups, and appchains) into a shared network state or ledger, enabling cross-chain composability and accurate multi-chain execution. In practice, this means Enso can read from and write to any integrated blockchain through one interface, acting as a single point of access for developers. Initially focused on EVM-compatible chains, Enso has expanded support to non-EVM ecosystems – for example, the roadmap includes integrations for Monad (an Ethereum-like L1), Solana, and Movement (a Move-language chain) by Q1 2025.

Network Participants: Enso’s innovation lies in its three-tier participant model, which decentralizes how intents are processed:

  • Action Providers – Developers who contribute modular contract abstractions (“Actions”) encapsulating specific protocol interactions. These building blocks are shared on the network for others to use. Action Providers are rewarded whenever their contributed Action is used in an execution, incentivizing them to publish secure and efficient modules.

  • Graphers – Independent solvers (algorithms) that combine Actions into executable Shortcuts to fulfill user intents. Multiple Graphers compete to find the optimal solution (cheapest, fastest, or highest-yield path) for each request, similar to how solvers compete in a DEX aggregator. Only the best solution is selected for execution, and the winning Grapher earns a portion of the fees. This competitive mechanism encourages continuous optimization of on-chain routes and strategies (a minimal selection sketch follows this list).

  • Validators – Node operators who secure the Enso network by verifying and finalizing the Grapher’s solutions. Validators authenticate incoming requests, check the validity and safety of Actions/Shortcuts used, simulate transactions, and ultimately confirm the selected solution’s execution. They form the backbone of network integrity, ensuring results are correct and preventing malicious or inefficient solutions. Validators run a Tendermint-based consensus, meaning a BFT proof-of-stake process is used to reach agreement on each intent’s outcome and to update the network’s state.
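
The following minimal TypeScript sketch illustrates the solver-competition idea from the Graphers role above: several candidate solutions for one intent are scored and only the best is selected. The scoring criterion and data shape are assumptions for illustration; Enso’s actual selection logic is more involved and not fully public:

```typescript
// Several Graphers propose candidate solutions for the same intent;
// only the best-scoring one is chosen for execution.

interface GrapherSolution {
  grapher: string;
  expectedOutput: number;   // e.g. amount of the target asset the user would receive
  estimatedGasCost: number; // expressed in the same unit for comparability
}

function selectWinner(solutions: GrapherSolution[]): GrapherSolution | undefined {
  // Simple criterion: maximize output net of estimated execution cost.
  return solutions
    .slice()
    .sort((a, b) => (b.expectedOutput - b.estimatedGasCost) - (a.expectedOutput - a.estimatedGasCost))[0];
}

const winner = selectWinner([
  { grapher: "grapher-A", expectedOutput: 1000, estimatedGasCost: 12 }, // nets 988
  { grapher: "grapher-B", expectedOutput: 1004, estimatedGasCost: 25 }, // nets 979
  { grapher: "grapher-C", expectedOutput: 998, estimatedGasCost: 5 },   // nets 993
]);
console.log(winner?.grapher); // "grapher-C"
```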

Notably, Enso’s approach is chain-agnostic and API-centric. Developers interact with Enso via a unified API/SDK rather than dealing with each chain’s nuances. Enso integrates with over 250 DeFi protocols across multiple blockchains, effectively turning disparate ecosystems into one composable platform. This architecture eliminates the need for dApp teams to write custom smart contracts or handle cross-chain messaging for each new integration – Enso’s shared engine and community-provided Actions handle that heavy lifting. By mid-2025, Enso has proven its scalability: the network successfully facilitated $3.1B of liquidity migration in 3 days for Berachain’s launch (one of the largest DeFi migration events) and has processed over $15B in on-chain transactions to date. These feats demonstrate the robustness of Enso’s infrastructure under real-world conditions.

Overall, Enso’s protocol architecture delivers a “DeFi middleware” or on-chain operating system for Web3. It combines elements of indexing (like The Graph) and transaction execution (like cross-chain bridges or DEX aggregators) into a single decentralized network. This unique stack allows any application, bot, or agent to read and write to any smart contract on any chain via one integration, accelerating development and enabling new composable use cases. Enso positions itself as critical infrastructure for the multi-chain future – an intent engine that could power myriad apps without each needing to reinvent blockchain integrations.

Tokenomics

Enso’s economic model centers on the ENSO token, which is integral to network operation and governance. ENSO is a utility and governance token with a fixed total supply of 100 million tokens. The token’s design aligns incentives for all participants and creates a flywheel effect of usage and rewards:

  • Fee Currency (“Gas”): All requests submitted to the Enso network incur a query fee payable in ENSO. When a user (or dApp) triggers an intent, a small fee is embedded in the generated transaction bytecode. These fees are auctioned for ENSO tokens on the open market and then distributed to the network participants who process the request. In effect, ENSO is the gas that fuels execution of on-chain intents across Enso’s network. As demand for Enso’s shortcuts grows, demand for ENSO tokens may increase to pay for those network fees, creating a supply-demand feedback loop supporting token value.

  • Revenue Sharing & Staking Rewards: The ENSO collected from fees is distributed among Action Providers, Graphers, and Validators as a reward for their contributions. This model directly ties token earnings to network usage: more volume of intents means more fees to distribute. Action Providers earn tokens when their abstractions are used, Graphers earn tokens for winning solutions, and Validators earn tokens for validating and securing the network. All three roles must also stake ENSO as collateral to participate (to be slashed for malpractice), aligning their incentives with network health. Token holders can delegate their ENSO to Validators as well, supporting network security via delegated proof of stake. This staking mechanism not only secures the Tendermint consensus but also gives token stakers a share of network fees, similar to how miners/validators earn gas fees in other chains.

  • Governance: ENSO token holders will govern the protocol’s evolution. Enso is launching as an open network and plans to transition to community-driven decision making. Token-weighted voting will let holders influence upgrades, parameter changes (like fee levels or reward allocations), and treasury usage. This governance power ensures that core contributors and users are aligned on the network’s direction. The project’s philosophy is to put ownership in the hands of the community of builders and users, which was a driving reason for the community token sale in 2025 (see below).

  • Positive Flywheel: Enso’s tokenomics are designed to create a self-reinforcing loop. As more developers integrate Enso and more users execute intents, network fees (paid in ENSO) grow. Those fees reward contributors (attracting more Actions, better Graphers, and more Validators), which in turn improves the network’s capabilities (faster, cheaper, more reliable execution) and attracts more usage. This network effect is underpinned by the ENSO token’s role as both the fee currency and the incentive for contribution. The intention is for the token economy to scale sustainably with network adoption, rather than relying on unsustainable emissions.
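
To make the fee-sharing mechanics tangible, here is a minimal TypeScript sketch of splitting a single query fee among the three participant roles; the split percentages are purely illustrative assumptions, not published protocol parameters:

```typescript
// A query fee paid in ENSO is divided among the Action Provider whose module
// was used, the winning Grapher, and the Validators who finalized the result.

interface FeeSplit {
  actionProvider: number;
  grapher: number;
  validators: number;
}

// Assumed split for illustration only.
const ASSUMED_SPLIT: FeeSplit = { actionProvider: 0.3, grapher: 0.3, validators: 0.4 };

function distributeFee(feeEnso: number, split: FeeSplit = ASSUMED_SPLIT): Record<string, number> {
  return {
    actionProvider: feeEnso * split.actionProvider,
    grapher: feeEnso * split.grapher,
    validators: feeEnso * split.validators,
  };
}

console.log(distributeFee(10)); // { actionProvider: 3, grapher: 3, validators: 4 }
```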

Token Distribution & Supply: The initial token allocation is structured to balance team/investor incentives with community ownership. The table below summarizes the ENSO token distribution at genesis:

| Allocation | Percentage | Tokens (out of 100M) |
| --- | --- | --- |
| Team (Founders & Core) | 25.0% | 25,000,000 |
| Early Investors (VCs) | 31.3% | 31,300,000 |
| Foundation & Growth Fund | 23.2% | 23,200,000 |
| Ecosystem Treasury (Community incentives) | 15.0% | 15,000,000 |
| Public Sale (CoinList 2025) | 4.0% | 4,000,000 |
| Advisors | 1.5% | 1,500,000 |

Source: Enso Tokenomics.

The public sale in June 2025 offered 4% of supply (4 million tokens) to the community, raising $5 million at a price of $1.25 per ENSO (implying a fully diluted valuation of ~$125 million). Notably, the community sale had no lock-up (100% unlocked at TGE), whereas the team and venture investors are subject to a 2-year linear vesting schedule. This means insiders’ tokens unlock gradually block-by-block over 24 months, aligning them to long-term network growth and mitigating immediate sell pressure. The community thus gained immediate liquidity and ownership, reflecting Enso’s goal of broad distribution.

Enso’s emission schedule beyond the initial allocation appears to be primarily fee-driven rather than inflationary. The total supply is fixed at 100M tokens, and there is no indication of perpetual inflation for block rewards at this time (validators are compensated from fee revenue). This contrasts with many Layer-1 protocols that inflate supply to pay stakers; Enso aims to be sustainable through actual usage fees to reward participants. If network activity is low in early phases, the foundation and treasury allocations can be used to bootstrap incentives for usage and development grants. Conversely, if demand is high, ENSO token’s utility (for fees and staking) could create organic demand pressure.

In summary, ENSO is the fuel of the Enso Network. It powers transactions (query fees), secures the network (staking and slashing), and governs the platform (voting). The token’s value is directly tied to network adoption: as Enso becomes more widely used as the backbone for DeFi applications, the volume of ENSO fees and staking should reflect that growth. The careful distribution (with only a small portion immediately circulating after TGE) and strong backing by top investors (below) provide confidence in the token’s support, while the community-centric sale signals a commitment to decentralization of ownership.

Team and Investors

Enso Network was founded in 2021 by Connor Howe (CEO) and Gorazd Ocvirk, who previously worked together at Sygnum Bank in Switzerland’s crypto banking sector. Connor Howe leads the project as CEO and is the public face in communications and interviews. Under his leadership, Enso initially launched as a social trading DeFi platform and then pivoted through multiple iterations to arrive at the current intent-based infrastructure vision. This adaptability highlights the team’s entrepreneurial resilience – from executing a high-profile “vampire attack” on index protocols in 2021 to building a DeFi aggregator super-app, and finally generalizing their tooling into Enso’s developer platform. Co-founder Gorazd Ocvirk (PhD) brought deep expertise in quantitative finance and Web3 product strategy, although public sources suggest he may have transitioned to other ventures (he was noted as a co-founder of a different crypto startup in 2022). Enso’s core team today includes engineers and operators with strong DeFi backgrounds. For example, Peter Phillips and Ben Wolf are listed as “blockend” (blockchain backend) engineers, and Valentin Meylan leads research. The team is globally distributed but has roots in Zug/Zurich, Switzerland, a known hub for crypto projects (Enso Finance AG was registered in 2020 in Switzerland).

Beyond the founders, Enso has notable advisors and backers that lend significant credibility. The project is backed by top-tier crypto venture funds and angels: it counts Polychain Capital and Multicoin Capital as lead investors, along with Dialectic and Spartan Group (both prominent crypto funds), and IDEO CoLab. An impressive roster of angel investors also participated across rounds – over 70 individuals from leading Web3 projects have invested in Enso. These include founders or executives from LayerZero, Safe (Gnosis Safe), 1inch, Yearn Finance, Flashbots, Dune Analytics, Pendle, and others. Even tech luminary Naval Ravikant (co-founder of AngelList) is an investor and supporter. Such names signal strong industry confidence in Enso’s vision.

Enso’s funding history: the project raised a $5M seed round in early 2021 to build the social trading platform, and later a $4.2M round (strategic/VC) as it evolved the product (these early rounds likely included Polychain, Multicoin, Dialectic, etc.). By mid-2023, Enso had secured enough capital to build out its network; notably, it operated relatively under the radar until its infrastructure pivot gained traction. In Q2 2025, Enso launched a $5M community token sale on CoinList, which was oversubscribed by tens of thousands of participants. The purpose of this sale was not just to raise funds (the amount was modest given prior VC backing) but to decentralize ownership and give its growing community a stake in the network’s success. According to CEO Connor Howe, “we want our earliest supporters, users, and believers to have real ownership in Enso…turning users into advocates”. This community-focused approach is part of Enso’s strategy to drive grassroots growth and network effects through aligned incentives.

Today, Enso’s team is considered among the thought leaders in the “intent-based DeFi” space. They actively engage in developer education (e.g., Enso’s Shortcut Speedrun attracted 700k participants as a gamified learning event) and collaborate with other protocols on integrations. The combination of a strong core team with proven ability to pivot, blue-chip investors, and an enthusiastic community suggests that Enso has both the talent and the financial backing to execute on its ambitious roadmap.

Adoption Metrics and Use Cases

Despite being a relatively new infrastructure, Enso has demonstrated significant traction in its niche. It has positioned itself as the go-to solution for projects needing complex on-chain integrations or cross-chain capabilities. Some key adoption metrics and milestones as of mid-2025:

  • Ecosystem Integration: Over 100 live applications (dApps, wallets, and services) are using Enso under the hood to power on-chain features. These range from DeFi dashboards to automated yield optimizers. Because Enso abstracts protocols, developers can quickly add new DeFi features to their product by plugging into Enso’s API. The network has integrated with 250+ DeFi protocols (DEXes, lending platforms, yield farms, NFT markets, etc.) across major chains, meaning Enso can execute virtually any on-chain action a user might want, from a Uniswap trade to a Yearn vault deposit. This breadth of integrations significantly reduces development time for Enso’s clients – a new project can support, say, all DEXes on Ethereum, Layer-2s, and even Solana using Enso, rather than coding each integration independently.

  • Developer Adoption: Enso’s community now includes 1,900+ developers actively building with its toolkit. These developers might be directly creating Shortcuts/Actions or incorporating Enso into their applications. The figure highlights that Enso isn’t just a closed system; it’s enabling a growing ecosystem of builders who use its shortcuts or contribute to its library. Enso’s approach of simplifying on-chain development (claiming to cut build times from 6+ months down to under a week) has resonated with Web3 developers. This is also evidenced by hackathons and the Enso Templates library where community members share plug-and-play shortcut examples.

  • Transaction Volume: Over $15 billion in cumulative on-chain transaction volume has been settled through Enso’s infrastructure. This metric, as reported in June 2025, underscores that Enso is not just running in test environments – it’s processing real value at scale. A single high-profile example was Berachain’s liquidity migration: in April 2025, Enso powered the movement of liquidity for Berachain’s testnet campaign (“Boyco”) and facilitated $3.1B in executed transactions over 3 days, one of the largest liquidity events in DeFi history. Enso’s engine successfully handled this load, demonstrating reliability and throughput under stress. Another example is Enso’s partnership with Uniswap: Enso built a Uniswap Position Migrator tool (in collaboration with Uniswap Labs, LayerZero, and Stargate) that helped users seamlessly migrate Uniswap v3 LP positions from Ethereum to another chain. This tool simplified a typically complex cross-chain process (with bridging and re-deployment of NFTs) into a one-click shortcut, and its release showcased Enso’s ability to work alongside top DeFi protocols.

  • Real-World Use Cases: Enso’s value proposition is best understood through the diverse use cases it enables. Projects have used Enso to deliver features that would be very difficult to build alone:

    • Cross-Chain Yield Aggregation: Plume and Sonic used Enso to power incentivized launch campaigns where users could deposit assets on one chain and have them deployed into yields on another chain. Enso handled the cross-chain messaging and multi-step transactions, allowing these new protocols to offer seamless cross-chain experiences to users during their token launch events.
    • Liquidity Migration and Mergers: As mentioned, Berachain leveraged Enso for a “vampire attack”-like migration of liquidity from other ecosystems. Similarly, other protocols could use Enso Shortcuts to automate moving users’ funds from a competitor platform to their own, by bundling approvals, withdrawals, transfers, and deposits across platforms into one intent. This demonstrates Enso’s potential in protocol growth strategies.
    • DeFi “Super App” Functionality: Some wallets and interfaces (for instance, the Eliza OS crypto assistant and the Infinex trading platform) integrate Enso to offer one-stop DeFi actions. A user can, in one click, swap assets at the best rate (Enso will route across DEXes), then lend the output to earn yield, then perhaps stake an LP token – all of which Enso can execute as one Shortcut. This significantly improves user experience and functionality for those apps.
    • Automation and Bots: The presence of “agents” and even AI-driven bots using Enso is emerging. Because Enso exposes an API, algorithmic traders or AI agents can input a high-level goal (e.g. “maximize yield on X asset across any chain”) and let Enso find the optimal strategy. This has opened up experimentation in automated DeFi strategies without needing custom bot engineering for each protocol.

  • User Growth: While Enso is primarily a B2B/B2Dev infrastructure, it has cultivated a community of end-users and enthusiasts through campaigns. The Shortcut Speedrun – a gamified tutorial series – saw over 700,000 participants, indicating widespread interest in Enso’s capabilities. Enso’s social following has grown nearly 10x in a few months (248k followers on X as of mid-2025), reflecting strong mindshare among crypto users. This community growth is important because it creates grassroots demand: users aware of Enso will encourage their favorite dApps to integrate it or will use products that leverage Enso’s shortcuts.

In summary, Enso has moved beyond theory to real adoption. It is trusted by 100+ projects including well-known names like Uniswap, SushiSwap, Stargate/LayerZero, Berachain, zkSync, Safe, Pendle, Yearn and more, either as integration partners or direct users of Enso’s tech. This broad usage across different verticals (DEXs, bridges, layer-1s, dApps) highlights Enso’s role as general-purpose infrastructure. Its key traction metric – $15B+ in transactions – is especially impressive for an infrastructure project at this stage and validates market fit for an intent-based middleware. Investors can take comfort that Enso’s network effects appear to be kicking in: more integrations beget more usage, which begets more integrations. The challenge ahead will be converting this early momentum into sustained growth, which ties into Enso’s positioning against competitors and its roadmap.

Competitor Landscape

Enso Network operates at the intersection of DeFi aggregation, cross-chain interoperability, and developer infrastructure, making its competitive landscape multi-faceted. While no single competitor offers an identical product, Enso faces competition from several categories of Web3 protocols:

  • Decentralized Middleware & Indexing: The most direct analogy is The Graph (GRT). The Graph provides a decentralized network for querying blockchain data via subgraphs. Enso similarly crowd-sources data providers (Action Providers) but goes a step further by enabling transaction execution in addition to data fetching. Whereas The Graph’s ~$924M market cap is built on indexing alone, Enso’s broader scope (data + action) positions it as a more powerful tool in capturing developer mindshare. However, The Graph is a well-established network; Enso will have to prove the reliability and security of its execution layer to achieve similar adoption. One could imagine The Graph or other indexing protocols expanding into execution, which would directly compete with Enso’s niche.

  • Cross-Chain Interoperability Protocols: Projects like LayerZero, Axelar, Wormhole, and Chainlink CCIP provide infrastructure to connect different blockchains. They focus on message passing and bridging assets between chains. Enso actually uses some of these under the hood (e.g., LayerZero/Stargate for bridging in the Uniswap migrator) and is more of a higher-level abstraction on top. In terms of competition, if these interoperability protocols start offering higher-level “intent” APIs or developer-friendly SDKs to compose multi-chain actions, they could overlap with Enso. For example, Axelar offers an SDK for cross-chain calls, and Chainlink’s CCIP could enable cross-chain function execution. Enso’s differentiator is that it doesn’t just send messages between chains; it maintains a unified engine and library of DeFi actions. It targets application developers who want a ready-made solution, rather than forcing them to build on raw cross-chain primitives. Nonetheless, Enso will compete for market share in the broader blockchain middleware segment where these interoperability projects are well funded and rapidly innovating.

  • Transaction Aggregators & Automation: In the DeFi world, there are existing aggregators like 1inch, 0x API, or CoW Protocol that focus on finding optimal trade routes across exchanges. Enso’s Grapher mechanism for intents is conceptually similar to CoW Protocol’s solver competition, but Enso generalizes it beyond swaps to any action. A user intent to “maximize yield” might involve swapping, lending, staking, etc., which is outside the scope of a pure DEX aggregator. That said, Enso will be compared to these services on efficiency for overlapping use cases (e.g., Enso vs. 1inch for a complex token swap route). If Enso consistently finds better routes or lower fees thanks to its network of Graphers, it can outcompete traditional aggregators. Gelato Network is another competitor in automation: Gelato provides a decentralized network of bots to execute tasks like limit orders, auto-compounding, or cross-chain transfers on behalf of dApps. Gelato has a GEL token and an established client base for specific use cases. Enso’s advantage is its breadth and unified interface – rather than offering separate products for each use case (as Gelato does), Enso offers a general platform where any logic can be encoded as a Shortcut. However, Gelato’s head start and focused approach in areas like automation could attract developers who might otherwise use Enso for similar functionalities.

  • Developer Platforms (Web3 SDKs): There are also Web2-style developer platforms like Moralis, Alchemy, Infura, and Tenderly that simplify building on blockchains. These typically offer API access to read data, send transactions, and sometimes higher-level endpoints (e.g., “get token balances” or “send tokens across chain”). While these are mostly centralized services, they compete for the same developer attention. Enso’s selling point is that it’s decentralized and composable – developers are not just getting data or a single function, they’re tapping into an entire network of on-chain capabilities contributed by others. If successful, Enso could become “the GitHub of on-chain actions,” where developers share and reuse Shortcuts, much like open-source code. Competing with well-funded infrastructure-as-a-service companies means Enso will need to offer comparable reliability and ease-of-use, which it is striving for with an extensive API and documentation.

  • Homegrown Solutions: Finally, Enso competes with the status quo – teams building custom integrations in-house. Traditionally, any project wanting multi-protocol functionality had to write and maintain smart contracts or scripts for each integration (e.g., integrating Uniswap, Aave, Compound separately). Many teams might still choose this route for maximum control or due to security considerations. Enso needs to convince developers that outsourcing this work to a shared network is secure, cost-effective, and up-to-date. Given the speed of DeFi innovation, maintaining one’s own integrations is burdensome (Enso often cites that teams spend 6+ months and $500k on audits to integrate dozens of protocols). If Enso can prove its security rigor and keep its action library current with the latest protocols, it can convert more teams away from building in silos. However, any high-profile security incident or downtime in Enso could send developers back to preferring in-house solutions, which is a competitive risk in itself.

Enso’s Differentiators: Enso’s primary edge is being first-to-market with an intent-focused, community-driven execution network. It combines features that would require using multiple other services: data indexing, smart contract SDKs, transaction routing, and cross-chain bridging – all in one. Its incentive model (rewarding third-party developers for contributions) is also unique; it could lead to a vibrant ecosystem where many niche protocols get integrated into Enso faster than any single team could do, similar to how The Graph’s community indexes a long tail of contracts. If Enso succeeds, it could enjoy a strong network effect moat: more Actions and Shortcuts make it more attractive to use Enso versus competitors, which attracts more users and thus more Actions contributed, and so on.

That said, Enso is still in its early days. Its closest analog, The Graph, took years to decentralize and build an ecosystem of indexers. Enso will similarly need to nurture its Graphers and Validators community to ensure reliability. Large players (like a future version of The Graph, or a collaboration of Chainlink and others) could decide to roll out a competing intent execution layer, leveraging their existing networks. Enso will have to move quickly to solidify its position before such competition materializes.

In conclusion, Enso sits at a competitive crossroads of several important Web3 verticals – it’s carving a niche as the “middleware of everything”. Its success will depend on outperforming specialized competitors in each use case (or aggregating them) and continuing to offer a compelling one-stop solution that justifies developers choosing Enso over building from scratch. The presence of high-profile partners and investors suggests Enso has a foot in the door with many ecosystems, which will be advantageous as it expands its integration coverage.

Roadmap and Ecosystem Growth

Enso’s development roadmap (as of mid-2025) outlines a clear path toward full decentralization, multi-chain support, and community-driven growth. Key milestones and planned initiatives include:

  • Mainnet Launch (Q3 2024) – Enso launched its mainnet network in the second half of 2024. This involved deploying the Tendermint-based chain and initializing the Validator ecosystem. Early validators were likely permissioned or selected partners as the network bootstrapped. The mainnet launch allowed real user queries to be processed by Enso’s engine (prior to this, Enso’s services were accessible via a centralized API while in beta). This milestone marked Enso’s transition from an in-house platform to a public decentralized network.

  • Network Participant Expansion (Q4 2024) – Following mainnet, the focus shifted to decentralizing participation. In late 2024, Enso opened up roles for external Action Providers and Graphers. This included releasing tooling and documentation for developers to create their own Actions (smart contract adapters) and for algorithm developers to run Grapher nodes. We can infer that incentive programs or testnet competitions were used to attract these participants. By end of 2024, Enso aimed to have a broader set of third-party actions in its library and multiple Graphers competing on intents, moving beyond the core team’s internal algorithms. This was a crucial step to ensure Enso isn’t a centralized service, but a true open network where anyone can contribute and earn ENSO tokens.

  • Cross-Chain Expansion (Q1 2025) – Enso recognizes that supporting many blockchains is key to its value proposition. In early 2025, the roadmap targeted integration with new blockchain environments beyond the initial EVM set. Specifically, Enso planned support for Monad, Solana, and Movement by Q1 2025. Monad is an upcoming high-performance EVM-compatible chain (backed by Dragonfly Capital) – supporting it early could position Enso as the go-to middleware there. Solana integration is more challenging (different runtime and language), but Enso’s intent engine could work with Solana by using off-chain graphers to formulate Solana transactions and on-chain programs acting as adapters. Movement refers to Move-language chains (possibly Aptos/Sui, or the chain named Movement specifically). By incorporating Move-based chains, Enso would cover a broad spectrum of ecosystems (Solidity and Move, as well as existing Ethereum rollups). Achieving these integrations means developing new Action modules that understand Solana’s CPI calls or Move’s transaction scripts, and likely collaborating with those ecosystems for oracles/indexing. Public updates suggest these integrations were on track – for example, community communications referencing an Eclipse mainnet launch and a Movement grant indicate that Enso was actively working with newer L1s such as Eclipse and Movement by early 2025.

  • Near-Term (Mid/Late 2025) – Although not explicitly broken out in the one-pager roadmap, by mid-2025 Enso’s focus is on network maturity and decentralization. The completion of the CoinList token sale in June 2025 is a major event: the next steps would be token generation and distribution (expected around July 2025) and launching on exchanges or governance forums. We anticipate Enso will roll out its governance process (Enso Improvement Proposals, on-chain voting) so the community can start participating in decisions using their newly acquired tokens. Additionally, Enso will likely move from “beta” to a fully production-ready service, if it hasn’t already. Part of this will be security hardening – conducting multiple smart contract audits and perhaps running a bug bounty program, considering the large TVLs involved.

  • Ecosystem Growth Strategies: Enso is actively fostering an ecosystem around its network. One strategy has been running educational programs and hackathons (e.g., the Shortcut Speedrun and workshops) to onboard developers to the Enso way of building. Another strategy is partnering with new protocols at launch – we’ve seen this with Berachain, zkSync’s campaign, and others. Enso is likely to continue this, effectively acting as an “on-chain launch partner” for emerging networks or DeFi projects, handling their complex user onboarding flows. This not only drives Enso’s volume (as seen with Berachain) but also integrates Enso deeply into those ecosystems. We expect Enso to announce integrations with more Layer-2 networks (e.g., Arbitrum, Optimism were presumably already supported; perhaps newer ones like Scroll or Starknet next) and other L1s (Polkadot via XCM, Cosmos via IBC or Osmosis, etc.). The long-term vision is that Enso becomes chain-ubiquitous – any developer on any chain can plug in. To that end, Enso may also develop better bridgeless cross-chain execution (using techniques like atomic swaps or optimistic execution of intents across chains), which could be on the R&D roadmap beyond 2025.

  • Future Outlook: Looking further, Enso’s team has hinted at involvement of AI agents as network participants. This suggests a future where not only human developers, but AI bots (perhaps trained to optimize DeFi strategies) plug into Enso to provide services. Enso might build out this vision by creating SDKs or frameworks for AI agents to safely interface with the intent engine – a potentially groundbreaking development merging AI and blockchain automation. Moreover, by late 2025 or 2026, we anticipate Enso will work on performance scaling (maybe sharding its network or using zero-knowledge proofs to validate intent execution correctness at scale) as usage grows.

The roadmap is ambitious but execution so far has been strong – Enso has met key milestones like mainnet launch and delivering real use cases. An important upcoming milestone is the full decentralization of the network. Currently, the network is in a transition: the documentation notes the decentralized network is in testnet and a centralized API was being used for production as of earlier in 2025. By now, with mainnet live and token in circulation, Enso will aim to phase out any centralized components. For investors, tracking this decentralization progress (e.g., number of independent validators, community Graphers joining) will be key to evaluating Enso’s maturity.

In summary, Enso’s roadmap focuses on scaling the network’s reach (more chains, more integrations) and scaling the network’s community (more third-party participants and token holders). The ultimate goal is to cement Enso as critical infrastructure in Web3, much like how Infura became essential for dApp connectivity or how The Graph became integral for data querying. If Enso can hit its milestones, the second half of 2025 should see a blossoming ecosystem around the Enso Network, potentially driving exponential growth in usage.

Risk Assessment

Like any early-stage protocol, Enso Network faces a range of risks and challenges that investors should carefully consider:

  • Technical and Security Risks: Enso’s system is inherently complex – it interacts with myriad smart contracts across many blockchains through a network of off-chain solvers and validators. This expansive surface area introduces technical risk. Each new Action (integration) could carry vulnerabilities; if an Action’s logic is flawed or a malicious provider introduces a backdoored Action, user funds could be at risk. Ensuring every integration is secure required substantial investment (Enso’s team spent over $500k on audits for integrating 15 protocols in its early days). As the library grows to hundreds of protocols, maintaining rigorous security audits is challenging. There’s also the risk of bugs in Enso’s coordination logic – for example, a flaw in how Graphers compose transactions or how Validators verify them could be exploited. Cross-chain execution, in particular, can be risky: if a sequence of actions spans multiple chains and one part fails or is censored, it could leave a user’s funds in limbo. Although Enso likely uses retries or atomic swaps for some cases, the complexity of intents means unknown failure modes might emerge. The intent-based model itself is relatively unproven at scale – there may be edge cases where the engine produces an incorrect solution or an outcome that diverges from the user’s intent. Any high-profile exploit or failure could undermine confidence in the whole network. Mitigation requires continuous security audits, a robust bug bounty program, and perhaps insurance mechanisms for users (none of which have been detailed yet).

  • Decentralization and Operational Risks: At present (mid-2025), the Enso network is still in the process of decentralizing its participants. This means there may be unseen operational centralization – for instance, the team’s infrastructure might still be coordinating a lot of the activity, or only a few validators/graphers are genuinely active. This presents two risks: reliability (if the core team’s servers go down, will the network stall?) and trust (if the process isn’t fully trustless yet, users must have faith in Enso Inc. not to front-run or censor transactions). The team has proven reliability in big events (like handling $3B volume in days), but as usage grows, scaling the network via more independent nodes will be crucial. There’s also a risk that network participants don’t show up – if Enso cannot attract enough skilled Action Providers or Graphers, the network might remain dependent on the core team, limiting decentralization. This could slow innovation and also concentrate too much power (and token rewards) within a small group, the opposite of the intended design.

  • Market and Adoption Risks: While Enso has impressive early adoption, it’s still in a nascent market for “intent-based” infrastructure. There is a risk that the broader developer community might be slow to adopt this new paradigm. Developers entrenched in traditional coding practices might be hesitant to rely on an external network for core functionality, or they may prefer alternative solutions. Additionally, Enso’s success depends on continuous growth of DeFi and multi-chain ecosystems. If the multi-chain thesis falters (for example, if most activity consolidates on a single dominant chain), the need for Enso’s cross-chain capabilities might diminish. On the flip side, if a new ecosystem arises that Enso fails to integrate quickly, projects in that ecosystem won’t use Enso. Essentially, staying up-to-date with every new chain and protocol is a never-ending challenge – missing or lagging on a major integration (say a popular new DEX or a Layer-2) could push projects to competitors or custom code. Furthermore, Enso’s usage could be hurt by macro market conditions; in a severe DeFi downturn, fewer users and developers might be experimenting with new dApps, directly reducing intents submitted to Enso and thus the fees/revenue of the network. The token’s value could suffer in such a scenario, potentially making staking less attractive and weakening network security or participation.

  • Competition: As discussed, Enso faces competition on multiple fronts. A major risk is a larger player entering the intent execution space. For instance, if a well-funded project like Chainlink were to introduce a similar intent service leveraging their existing oracle network, they could quickly overshadow Enso due to brand trust and integrations. Similarly, infrastructure companies (Alchemy, Infura) could build simplified multi-chain SDKs that, while not decentralized, capture the developer market with convenience. There’s also the risk of open-source copycats: Enso’s core concepts (Actions, Graphers) could be replicated by others, perhaps even as a fork of Enso if the code is public. If one of those projects forms a strong community or finds a better token incentive, it might divert potential participants. Enso will need to maintain technological leadership (e.g., by having the largest library of Actions and most efficient solvers) to fend off competition. Competitive pressure could also squeeze Enso’s fee model – if a rival offers similar services cheaper (or free, subsidized by VCs), Enso might be forced to lower fees or increase token incentives, which could strain its tokenomics.

  • Regulatory and Compliance Risks: Enso operates in the DeFi infrastructure space, which is a gray area in terms of regulation. While Enso itself doesn’t custody user funds (users execute intents from their own wallets), the network does automate complex financial transactions across protocols. There is a possibility that regulators could view intent-composition engines as facilitating unlicensed financial activity or even aiding money laundering if used to shuttle funds across chains in obscured ways. Specific concerns could arise if Enso enables cross-chain swaps that touch privacy pools or jurisdictions under sanctions. Additionally, the ENSO token and its CoinList sale reflect a distribution to a global community – regulators (like the SEC in the U.S.) might scrutinize it as an offering of securities (notably, Enso excluded US, UK, China, etc., from the sale, indicating caution on this front). If ENSO were deemed a security in major jurisdictions, it could limit exchange listings or usage by regulated entities. Enso’s decentralized network of validators might also face compliance issues: for example, could a validator be forced to censor certain transactions due to legal orders? This is largely hypothetical for now, but as the value flowing through Enso grows, regulatory attention will increase. The team’s base in Switzerland might offer a relatively crypto-friendly regulatory environment, but global operations mean global risks. Mitigating this likely involves ensuring Enso is sufficiently decentralized (so no single entity is accountable) and possibly geofencing certain features if needed (though that would be against the ethos of the project).

  • Economic Sustainability: Enso’s model assumes that fees generated by usage will sufficiently reward all participants. There’s a risk that the fee incentives may not be enough to sustain the network, especially early on. For instance, Graphers and Validators incur costs (infrastructure, development time). If query fees are set too low, these participants might not profit, leading them to drop off. On the other hand, if fees are too high, dApps may hesitate to use Enso and seek cheaper alternatives. Striking a balance is hard in a two-sided market. The Enso token economy also relies on token value to an extent – e.g., staking rewards are more attractive when the token has high value, and Action Providers earn value in ENSO. A sharp decline in ENSO price could reduce network participation or prompt more selling (which further depresses the price). With a large portion of tokens held by investors and team (over 56% combined, vesting over 2 years), there’s an overhang risk: if these stakeholders lose faith or need liquidity, their selling could flood the market post-vesting and undermine the token’s price. Enso tried to mitigate concentration through the community sale, but the token distribution is still relatively centralized in the near term. Economic sustainability will depend on growing genuine network usage to a level where fee revenue provides sufficient yield to token stakers and contributors – essentially making Enso a “cash-flow” generating protocol rather than just a speculative token. This is achievable (think of how Ethereum fees reward miners/validators), but only if Enso achieves widespread adoption. Until then, there is a reliance on treasury funds (15% allocated) to incentivize participation and perhaps to adjust economic parameters (Enso governance may introduce inflation or other rewards if needed, which could dilute holders). A purely hypothetical illustration of this fee-versus-yield balance follows this list.
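To make that fee-versus-yield balance concrete, here is a minimal sketch of the calculation an investor might run. Every input – annual fee revenue, the share paid to participants, staked supply, token price – is a hypothetical placeholder rather than an actual Enso figure, and the helper names are ours.

```typescript
// Purely illustrative sketch of the fee-revenue vs. staking-yield balance discussed above.
// All numbers are hypothetical placeholders, not actual Enso data.

interface NetworkAssumptions {
  annualQueryFeesUsd: number; // total query fees paid by consumers per year
  participantShare: number;   // fraction of fees distributed to Graphers/Validators/Action Providers
  stakedTokens: number;       // tokens staked by validators
  tokenPriceUsd: number;      // market price per token
}

function impliedStakingYield(a: NetworkAssumptions): number {
  const rewardsUsd = a.annualQueryFeesUsd * a.participantShare;
  const stakedValueUsd = a.stakedTokens * a.tokenPriceUsd;
  return rewardsUsd / stakedValueUsd; // annualized yield as a fraction of staked value
}

// Hypothetical scenario: $5M in annual fees, 80% paid out, 20M tokens staked at $0.50 each.
const scenario: NetworkAssumptions = {
  annualQueryFeesUsd: 5_000_000,
  participantShare: 0.8,
  stakedTokens: 20_000_000,
  tokenPriceUsd: 0.5,
};

console.log(`Implied staking yield: ${(impliedStakingYield(scenario) * 100).toFixed(1)}%`);
// ≈ 40% with these placeholders; cut the fee assumption by 10× and the yield falls to ~4%,
// which may no longer cover operators' infrastructure costs – the sustainability risk described above.
```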

Summary of Risk: Enso is pioneering new ground, which comes with commensurate risk. The technological complexity of unifying all of DeFi into one network is enormous – each blockchain added or protocol integrated is a potential point of failure that must be managed. The team’s experience navigating earlier setbacks (like the limited success of the initial social trading product) shows they are aware of pitfalls and adapt quickly. They have actively mitigated some risks (e.g., decentralizing ownership via the community round to avoid overly VC-driven governance). Investors should watch how Enso executes on decentralization and whether it continues to attract top-tier technical talent to build and secure the network. In the best case, Enso could become indispensable infrastructure across Web3, yielding strong network effects and token value accrual. In the worst case, technical or adoption setbacks could relegate it to being an ambitious but niche tool.

From an investor’s perspective, Enso offers a high-upside, high-risk profile. Its current status (mid-2025) is that of a promising network with real usage and a clear vision, but it must now harden its technology and outpace a competitive and evolving landscape. Due diligence on Enso should include monitoring its security track record, the growth of query volumes/fees over time, and how effectively the ENSO token model incentivizes a self-sustaining ecosystem. As of now, the momentum is in Enso’s favor, but prudent risk management and continued innovation will be key to turning this early leadership into long-term dominance in the Web3 middleware space.

Sources:

  • Enso Network Official Documentation and Token Sale Materials

    • CoinList Token Sale Page – Key Highlights & Investors
    • Enso Docs – Tokenomics and Network Roles
  • Interviews and Media Coverage

    • CryptoPotato Interview with Enso CEO (June 2025) – Background on Enso’s evolution and intent-based design
    • DL News (May 2025) – Overview of Enso’s shortcuts and shared state approach
  • Community and Investor Analyses

    • Hackernoon (I. Pandey, 2025) – Insights on Enso’s community round and token distribution strategy
    • CryptoTotem / CoinLaunch (2025) – Token supply breakdown and roadmap timeline
  • Enso Official Site Metrics (2025) and Press Releases – Adoption figures and use-case examples (Berachain migration, Uniswap collaboration).

Aptos vs. Sui: A Panoramic Analysis of Two Move-Based Giants

· 7 min read
Dora Noda
Software Engineer

Overview

Aptos and Sui stand as the next generation of Layer-1 blockchains, both originating from the Move language initially conceived by Meta's Libra/Diem project. While they share a common lineage, their team backgrounds, core objectives, ecosystem strategies, and evolutionary paths have diverged significantly.

Aptos emphasizes versatility and enterprise-grade performance, targeting both DeFi and institutional use cases. In contrast, Sui is laser-focused on optimizing its unique object model to power mass-market consumer applications, particularly in gaming, NFTs, and social media. Which chain ultimately distinguishes itself will depend on its ability to evolve its technology to meet the demands of its chosen market niche while establishing a clear advantage in user experience and developer friendliness.


1. Development Journey

Aptos

Born from Aptos Labs—a team formed by former Meta Libra/Diem employees—Aptos began closed testing in late 2021 and launched its mainnet on October 19, 2022. Early mainnet performance drew community skepticism, with throughput initially under 20 TPS as noted by WIRED, but subsequent iterations on its consensus and execution layers have steadily pushed throughput to tens of thousands of TPS.

By Q2 2025, Aptos had achieved a peak of 44.7 million transactions in a single week, with weekly active addresses surpassing 4 million. The network has grown to over 83 million cumulative accounts, with daily DeFi trading volume consistently exceeding $200 million (Source: Aptos Forum).

Sui

Initiated by Mysten Labs, whose founders were core members of Meta's Novi wallet team, Sui launched its incentivized testnet in August 2022 and went live with its mainnet on May 3, 2023. From the earliest testnets, the team prioritized refining its "object model," which treats assets as objects with specific ownership and access controls to enhance parallel transaction processing (Source: Ledger).

As of mid-July 2025, Sui's ecosystem Total Value Locked (TVL) reached $2.326 billion. The platform has seen rapid growth in monthly transaction volume and the number of active engineers, proving especially popular within the gaming and NFT sectors (Source: AInvest, Tangem).


2. Technical Architecture Comparison

| Feature | Aptos | Sui |
| --- | --- | --- |
| Language | Inherits the original Move design, emphasizing the security of “resources” and strict access control. The language is relatively streamlined. (Source: aptos.dev) | Extends standard Move with an “object-centric” model, creating a customized version of the language that supports horizontally scalable parallel transactions. (Source: docs.sui.io) |
| Consensus | AptosBFT: An optimized BFT consensus mechanism promising sub-second finality, with a primary focus on security and consistency. (Source: Messari) | Narwhal + Tusk: Decouples consensus from transaction ordering, enabling high throughput and low latency by prioritizing parallel execution efficiency. |
| Execution Model | Employs a pipelined execution model where transactions are processed in stages (data fetching, execution, write-back), supporting high-frequency transfers and complex logic. (Source: chorus.one) | Utilizes parallel execution based on object ownership. Transactions involving distinct objects do not require global state locks, fundamentally boosting throughput. |
| Scalability | Focuses on single-instance optimization while researching sharding. The community is actively developing the AptosCore v2.0 sharding proposal. | Features a native parallel engine designed for horizontal scaling, having already achieved peak TPS in the tens of thousands on its testnet. |
| Developer Tools | A mature toolchain including official SDKs, a Devnet, the Aptos CLI, an Explorer, and the Hydra framework for scalability. | A comprehensive suite including the Sui SDK, Sui Studio IDE, an Explorer, GraphQL APIs, and an object-oriented query model. |

3. On-Chain Ecosystem and Use Cases

3.1 Ecosystem Scale and Growth

Aptos

In Q1 2025, Aptos recorded nearly 15 million monthly active users and approached 1 million daily active wallets. Its DeFi trading volume surged by 1000% year-over-year, with the platform establishing itself as a hub for financial-grade stablecoins and derivatives (Source: Coinspeaker). Key strategic moves include integrating USDT via Upbit to drive penetration in Asian markets and attracting numerous leading DEXs, lending protocols, and derivatives platforms (Source: Aptos Forum).

Sui

In June 2025, Sui's ecosystem TVL reached a new high of $2.326 billion, driven primarily by high-interaction social, gaming, and NFT projects (Source: AInvest). The ecosystem is defined by core projects like object marketplaces, Layer-2 bridges, social wallets, and game engine SDKs, which have attracted a large number of Web3 game developers and IP holders.

3.2 Dominant Use Cases

  • DeFi & Enterprise Integration (Aptos): With its mature BFT finality and a rich suite of financial tools, Aptos is better suited for stablecoins, lending, and derivatives—scenarios that demand high levels of consistency and security.
  • Gaming & NFTs (Sui): Sui's parallel execution advantage is clear here. Its low transaction latency and near-zero fees are ideal for high-concurrency, low-value interactions common in gaming, such as opening loot boxes or transferring in-game items.

4. Evolution & Strategy

Aptos

  • Performance Optimization: Continuing to advance sharding research, planning for multi-region cross-chain liquidity, and upgrading the AptosVM to improve state access efficiency.
  • Ecosystem Incentives: A multi-hundred-million-dollar ecosystem fund has been established to support DeFi infrastructure, cross-chain bridges, and compliant enterprise applications.
  • Cross-Chain Interoperability: Strengthening integrations with bridges like Wormhole and building out connections to Cosmos (via IBC) and Ethereum.

Sui

  • Object Model Iteration: Extending the Move syntax to support custom object types and complex permission management while optimizing the parallel scheduling algorithm.
  • Driving Consumer Adoption: Pursuing deep integrations with major game engines like Unreal and Unity to lower the barrier for Web3 game development, and launching social plugins and SDKs.
  • Community Governance: Promoting the SuiDAO to empower core project communities with governance capabilities, enabling rapid iteration on features and fee models.

5. Core Differences & Challenges

  • Security vs. Parallelism: Aptos's strict resource semantics and consistent consensus provide DeFi-grade security but can limit parallelism. Sui's highly parallel transaction model must continuously prove its resilience against large-scale security threats.
  • Ecosystem Depth vs. Breadth: Aptos has cultivated deep roots in the financial sector with strong institutional ties. Sui has rapidly accumulated a broad range of consumer-facing projects but has yet to land a decisive breakthrough in large-scale DeFi.
  • Theoretical Performance vs. Real-World Throughput: While Sui has higher theoretical TPS, its actual throughput is still constrained by ecosystem activity. Aptos has also experienced congestion during peak periods, indicating a need for more effective sharding or Layer-2 solutions.
  • Market Narrative & Positioning: Aptos markets itself on enterprise-grade security and stability, targeting traditional finance and regulated industries. Sui uses the allure of a "Web2-like experience" and "zero-friction onboarding" to attract a wider consumer audience.

6. The Path to Mass Adoption

Ultimately, this is not a zero-sum game.

In the medium to long term, if the consumer market (gaming, social, NFTs) continues its explosive growth, Sui's parallel execution and low entry barrier could position it for rapid adoption among tens of millions of mainstream users.

In the short to medium term, Aptos's mature BFT finality, low fees, and strategic partnerships give it a more compelling offering for institutional finance, compliance-focused DeFi, and cross-border payments.

The future is likely a symbiotic one where the two chains coexist, creating a stratified market: Aptos powering financial and enterprise infrastructure, while Sui dominates high-frequency consumer interactions. The chain that ultimately achieves mass adoption will be the one that relentlessly optimizes performance and user experience within its chosen domain.

Rollups-as-a-Service in 2025: OP, ZK, Arbitrum Orbit, Polygon CDK, and zkSync Hyperchains

· 70 min read
Dora Noda
Software Engineer

Introduction

Rollups-as-a-Service (RaaS) and modular blockchain frameworks have become critical in 2025 for scaling Ethereum and building custom blockchains. Leading frameworks – Optimism’s OP Stack, zkSync’s ZK Stack (Hyperchains), Arbitrum Orbit, Polygon’s Chain Development Kit (CDK), and related solutions – allow developers to launch their own Layer-2 (L2) or Layer-3 (L3) chains with varying approaches (optimistic vs zero-knowledge). These frameworks share a philosophy of modularity: they separate concerns like execution, settlement, data availability, and consensus, enabling customization of each component. This report compares the frameworks across key dimensions – data availability options, sequencer design, fee models, ecosystem support – and examines their architecture, tooling, developer experience, and current adoption in both public and enterprise contexts.

Comparison Overview

The table below summarizes several core features of each framework:

| Aspect | OP Stack (Optimism) | ZK Stack (zkSync) | Arbitrum Orbit | Polygon CDK (AggLayer) |
| --- | --- | --- | --- | --- |
| Rollup Type | Optimistic Rollup | Zero-Knowledge (Validity) | Optimistic Rollup | Zero-Knowledge (Validity) |
| Proof System | Fault proofs (fraud proofs) | ZK-SNARK validity proofs | Fault proofs (fraud proofs) | ZK-SNARK validity proofs |
| EVM Compatibility | EVM-equivalent (geth) | High – zkEVM (LLVM-based) | EVM-equivalent (Arbitrum Nitro) + WASM via Stylus | Polygon zkEVM (EVM-equivalent) |
| Data Availability | Ethereum L1 (on-chain); pluggable Alt-DA modules (Celestia, etc.) | Ethereum L1; also Validium options off-chain (Celestia, Avail, EigenDA) | Ethereum L1 (rollup) or AnyTrust committee (off-chain DAC); supports Celestia, Avail | Ethereum L1 (rollup) or off-chain (validium via Avail or Celestia); hybrid possible |
| Sequencer Design | Single sequencer (default); multi-sequencer possible with customization. Shared sequencer vision for Superchain (future). | Configurable: can be centralized or decentralized; priority L1 queue supported. | Configurable: single operator or decentralized validators. | Flexible: single sequencer or multiple validators (e.g. PoS committee). |
| Sequencer Access | Centralized today (each OP chain’s sequencer is run by its operator); not permissionless yet. Plans for a shared, permissionless sequencer network among OP Chains. L1 backup queue allows trustless tx submission if sequencer fails. | zkSync Era uses a centralized sequencer (Matter Labs), but ZK Stack allows custom sequencer logic (even external consensus). Priority L1 sequencing supported for fairness. Decentralized sequencer options under development. | Arbitrum One uses a centralized sequencer (Offchain Labs), with failover via L1 inbox. Arbitrum Orbit chains can run their own sequencer (initially centralized) or institute a validator set. BoLD upgrade (2025) enables permissionless validation to decentralize Orbit chains. | Polygon zkEVM began with a single sequencer (Polygon Labs). CDK allows launching a chain with a permissioned validator set or other consensus for decentralization. Many CDK chains start centralized for simplicity, with roadmap for later community-run sequencers. |
| Fee Token | ETH by default on OP-based L2s (to ease UX). Custom gas token technically supported, but most OP Chains opt for ETH or a standard token for interoperability. (OP Stack’s recent guidance favors common tokens across the Superchain). | Custom base tokens are supported – developers can choose ETH or any ERC-20 as the native gas. (This flexibility enables project-specific economies on zkSync-based chains.) | Custom gas token supported (upgrade in late 2023). Chains may use ETH, Arbitrum’s ARB, or their own token for fees. Example: Ape Chain uses APE as gas. | Custom native token is supported. Many Polygon CDK chains use MATIC or another token as gas. Polygon’s ecosystem encourages MATIC for cross-chain consistency, but it’s not required. |
| Fee Model & Costs | Users pay L2 gas (collected by sequencer) plus L1 data posting costs. The sequencer must post transaction data (calldata or blobs) to Ethereum, so a portion of fees covers L1 gas. Revenue sharing: OP Chains in the Superchain commit ~2.5% of revenue to Optimism Collective (funding public goods). | Users pay fees (often in ETH or chosen token) which cover L1 proof verification and data. No protocol-level “tax” on fees – each chain’s sequencer keeps revenue to incentivize operators. ZK prover costs are a factor: operators might charge slightly higher fees or use efficient provers to manage costs. Finality is fast (no delay), so users don’t need third-party fast exits. | Users pay gas (in ETH or chain’s token) covering L2 execution + L1 batch cost. Sequencers/validators retain the fee revenue; no mandatory revenue-share to Arbitrum DAO or L1 (aside from L1 gas costs). To avoid the optimistic 7-day delay, many Orbit chains integrate liquidity providers or official fast-withdrawal bridges (Arbitrum supports 15-min fast exits on some Orbit chains via liquidity networks). | Users pay gas fees which cover proving and posting costs. Sequencers or validators earn those fees; Polygon does not impose any rent or tax on CDK chain revenue. Using off-chain DA (validium mode) can cut fees by >100× (storing data on Celestia or Avail instead of Ethereum), at the cost of some trust assumptions. |

Table: High-level comparison of key technical features of OP Stack, zkSync’s ZK Stack, Arbitrum Orbit, and Polygon CDK.

Data Availability Layers

Data Availability (DA) is where rollups store their transaction data so that anyone can reconstruct the chain’s state. All these frameworks support using Ethereum L1 as a DA (posting calldata or blob data on Ethereum for maximum security). However, to reduce costs, they also allow alternative DA solutions:

  • OP Stack: By default, OP chains publish data on Ethereum (as calldata or blobs). Thanks to a modular “Alt-DA” interface, OP Stack chains can plug into other DA layers easily. For example, an OP chain could use Celestia (a dedicated DA blockchain) instead of Ethereum. In 2023 OP Labs and Celestia released a beta where an OP Stack rollup settles on Ethereum but stores bulk data on Celestia. This reduces fees while inheriting Celestia’s data availability guarantees. In general, any EVM or non-EVM chain – even Bitcoin or a centralized store – can be configured as the DA layer in OP Stack. (Of course, using a less secure DA trades off some security for cost.) Ethereum remains the predominant choice for production OP chains, but projects like Caldera’s Taro testnet have demonstrated OP Stack with Celestia DA.

  • ZK Stack (zkSync Hyperchains): The ZK Stack offers both rollup and validium modes. In rollup mode, all data is on-chain (Ethereum). In validium mode, data is kept off-chain (with only validity proofs on-chain). Matter Labs is integrating Avail, Celestia, and EigenDA as first-class DA options for ZK Stack chains. This means a zkSync Hyperchain could post transaction data to Celestia or an EigenLayer-powered network instead of L1, massively increasing throughput. They even outline volition, where a chain can decide per-transaction whether to treat it as a rollup (on-chain data) or validium (off-chain). This flexibility allows developers to balance security and cost. For example, a gaming hyperchain might use Celestia to cheaply store data, while relying on Ethereum for periodic proofs. The ZK Stack’s design makes DA pluggable via a DA client/dispatcher component in the node software. Overall, Ethereum remains default, but zkSync’s ecosystem strongly emphasizes modular DA to achieve “hyperscale” throughput.

  • Arbitrum Orbit: Orbit chains can choose between Arbitrum’s two data modes: rollup (data posted on Ethereum) or AnyTrust (data availability committee). In Rollup configuration, an Orbit L3 will post its call data to the L2 (Arbitrum One or Nova) or L1, inheriting full security at higher cost. In AnyTrust mode, data is kept off-chain by a committee (as used in Arbitrum Nova, which uses a Data Availability Committee). This greatly lowers fees for high-volume apps (gaming, social) at the cost of trusting a committee (if all committee members collude to withhold data, the chain could halt). Beyond these, Arbitrum is also integrating with emerging modular DA networks. Notably, Celestia and Polygon Avail are supported for Orbit chains as alternative DA layers. Projects like AltLayer have worked on Orbit rollups that use EigenDA (EigenLayer’s DA service) as well. In summary, Arbitrum Orbit offers flexible data availability: on-chain via Ethereum, off-chain via DACs or specialized DA chains, or hybrids. Many Orbit adopters choose AnyTrust for cost savings, especially if they have a known set of validators or partners ensuring data is available.

  • Polygon CDK: Polygon’s CDK is inherently modular with respect to DA. A Polygon CDK chain can operate as a rollup (all data on Ethereum) or a validium (data on a separate network). Polygon is closely tied to Avail, a data availability blockchain originally incubated within Polygon (now an independent project), and CDK chains can use Avail or any similar service. Polygon has also announced direct integration of Celestia into CDK – making Celestia an “easily-pluggable” DA option in the toolkit, with the integration expected to go live in early 2024, enabling CDK chains to store compressed data on Celestia seamlessly. Polygon cites that using Celestia could reduce transaction fees by >100× compared to posting all data on Ethereum. Thus, a CDK chain creator can simply toggle the DA module to Celestia (or Avail) instead of Ethereum. Some Polygon chains (e.g. Polygon zkEVM) currently post all data to Ethereum (for maximal security), while others (perhaps certain enterprise chains) run as validiums with external DA. The CDK supports “hybrid” modes as well – for instance, critical transactions could go on Ethereum while others go to Avail. This modular DA approach aligns with Polygon’s broader Polygon 2.0 vision of multiple ZK-powered chains with unified liquidity but varied data backends.

In summary, all frameworks support multiple DA layers to various degrees. Ethereum remains the gold standard DA (especially with blob space from EIP-4844 making on-chain data cheaper), but new specialized DA networks (Celestia, Avail) and schemes (EigenLayer’s EigenDA, data committees) are being embraced across the board. This modularity allows rollup creators in 2025 to make trade-offs between cost and security by simply configuring a different DA module rather than building a new chain from scratch.
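To put rough numbers on the cost trade-off behind these DA choices, the sketch below prices the same batch two ways: as Ethereum calldata (using the standard 16 gas per non-zero byte and 4 gas per zero byte) and on a generic external DA layer billed per megabyte. The batch contents, gas price, ETH price, and DA rate are hypothetical placeholders chosen only to show the shape of the comparison, not real market figures.

```typescript
// Illustrative sketch: on-chain (Ethereum calldata) vs. external DA cost for a rollup batch.
// The batch, gas price, ETH price, and alt-DA rate are hypothetical placeholders.

const NONZERO_BYTE_GAS = 16; // Ethereum calldata gas per non-zero byte
const ZERO_BYTE_GAS = 4;     // Ethereum calldata gas per zero byte

function calldataGas(batch: Uint8Array): number {
  let gas = 0;
  for (const b of batch) gas += b === 0 ? ZERO_BYTE_GAS : NONZERO_BYTE_GAS;
  return gas;
}

// Hypothetical 100 kB batch, roughly half zero bytes after compression.
const batch = new Uint8Array(100_000).map((_, i) => (i % 2 === 0 ? 0 : 0xff));

const l1GasPriceGwei = 20;  // assumed L1 gas price
const ethPriceUsd = 3_000;  // assumed ETH price
const altDaUsdPerMb = 0.1;  // assumed external DA price per MB (placeholder)

const l1CostUsd = calldataGas(batch) * l1GasPriceGwei * 1e-9 * ethPriceUsd;
const altDaCostUsd = (batch.length / 1_000_000) * altDaUsdPerMb;

console.log(`Ethereum calldata cost ≈ $${l1CostUsd.toFixed(2)}`);   // ~$60 with these inputs
console.log(`External DA cost      ≈ $${altDaCostUsd.toFixed(4)}`); // ~$0.01 with these inputs
// The gap is orders of magnitude with these placeholder numbers – the kind of saving the
// frameworks above cite for validium / alt-DA modes. (Post-EIP-4844 blob space narrows the
// gap versus calldata, but the same comparison applies.)
```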

Sequencer Design and Decentralization

The sequencer is the node (or set of nodes) that orders transactions and produces blocks for a rollup. How the sequencer is designed – centralized vs decentralized, permissionless vs permissioned – affects the chain’s throughput and trust assumptions:

  • OP Stack (Optimism): Today, most OP Stack chains run a single sequencer operated by the chain’s core team or sponsor. For example, Optimism Mainnet’s sequencer is run by OP Labs, and Base’s sequencer is run by Coinbase. This yields low latency and simplicity at the cost of centralization (users must trust the sequencer to include their transactions fairly). However, Optimism has built in mechanisms for trust-minimization: there is an L1 transaction queue contract where users can submit transactions on Ethereum which the sequencer must include in the L2 chain. If the sequencer goes down or censors txs, users can rely on L1 to eventually get included (albeit with some delay). This provides a safety valve against a malicious or failed sequencer. In terms of decentralization, OP Stack is modular and theoretically allows multiple sequencers – e.g. one could implement a round-robin or proof-of-stake based block proposer set using the OP Stack code. In practice, this requires customization and is not the out-of-the-box configuration. The long-term Superchain roadmap envisions a shared sequencer for all OP Chains, which would be a set of validators sequencing transactions for many chains at once. A shared sequencer could enable cross-chain atomicity and reduce MEV across the Superchain. It’s still in development as of 2025, but the OP Stack’s design does not preclude plugging in such a consensus. For now, sequencer operations remain permissioned (run by whitelisted entities), but Optimism governance plans to decentralize this (possibly via staking or committee rotation) once the technology and economics are ready. In short: OP Stack chains start with centralized sequencing (with L1 as fallback), and a path to gradual decentralization is charted (moving from “Stage 0” to “Stage 2” maturity with no training wheels).

  • ZK Stack (zkSync Hyperchains): zkSync Era (the L2) currently uses a centralized sequencer operated by Matter Labs. However, the ZK Stack is built to allow various sequencing modes for new chains. Options include a centralized sequencer (easy start), a decentralized sequencer set (e.g. multiple nodes reaching consensus on ordering), a priority transaction queue from L1, or even an external sequencer service. In Matter Labs’ Elastic Chains vision, chains remain independent but interoperability is handled by the L1 contracts and a “ZK Router/Gateway” – this implies each chain can choose its own sequencer model as long as it meets the protocols for submitting state roots and proofs. Because ZK-rollups don’t require a consensus on L2 for security (validity proofs ensure correctness), decentralizing the sequencer is more about liveness and censorship-resistance. A Hyperchain could implement a round-robin block producer or even hook into a high-performance BFT consensus for its sequencers if desired. That said, running a single sequencer is far simpler and remains the norm initially. The ZK Stack docs mention that a chain could use an “external protocol” for sequencing – for instance, one could imagine using Tendermint or SU consensus as the block producer and then generating zk proofs for the blocks. Also, like others, zkSync has an L1 priority queue mechanism: users can send transactions to the zkSync contract with a priority fee to guarantee L1->L2 inclusion in a timely manner (mitigating censorship). Overall, permissionless participation in sequencing is not yet realized on zkSync chains (no public slot auction or staking-based sequencer selection in production), but the architecture leaves room for it. As validity proofs mature, we might see zkSync chains with community-run sequencer nodes that collectively decide ordering (once performance allows).

  • Arbitrum Orbit: On Arbitrum One (the main L2), the sequencer is centralized (run by Offchain Labs), though the chain’s state progression is ultimately governed by the Arbitrum validators and fraud proofs. Arbitrum has similarly provided an L1 queue for users as a backstop against sequencer issues. In Orbit (the L3 framework), each Orbit chain can have its own sequencer or validator set. Arbitrum’s Nitro tech includes the option to run a rollup with a decentralized sequencer: essentially, one could have multiple parties run the Arbitrum node software and use a leader election (possibly via the Arbitrum permissionless proof-of-stake chain in the future, or a custom mechanism). Out of the box, Orbit chains launched to date have been mostly centralized (e.g. the Xai gaming chain is run by a foundation in collaboration with Offchain Labs) – but this is a matter of configuration and governance. A noteworthy development is the introduction of BoLD (Bounded Liquidity Delay) in early 2025, which is a new protocol to make Arbitrum’s validation more permissionless. BoLD allows anyone to become a validator (prover) for the chain, resolving fraud challenges in a fixed time frame without a whitelist. This moves Arbitrum closer to trustless operation, although the sequencer role (ordering transactions day-to-day) might still be assigned or elected. Offchain Labs has expressed focus on advancing decentralization in 2024-2025 for Arbitrum. We also see multi-sequencer efforts: for example, an Orbit chain could use a small committee of known sequencers to get some fault tolerance (one goes down, another continues). Another angle is the idea of a shared sequencer for Orbit chains, though Arbitrum hasn’t emphasized this as much as Optimism. Instead, interoperability is achieved via L3s settling on Arbitrum L2 and using standard bridges. In summary, Arbitrum Orbit gives flexibility in sequencer design (from one entity to many), and the trend is toward opening the validator/sequencer set as the tech and community governance matures. Today, it’s fair to say Orbit chains start centralized but have a roadmap for permissionless validation.

  • Polygon CDK: Polygon CDK chains (sometimes referred to under the umbrella “AggLayer” in late 2024) can similarly choose their sequencer/consensus setup. Polygon’s zkEVM chain (operated by Polygon Labs) began with a single sequencer and centralized prover, with plans to progressively decentralize both. The CDK, being modular, allows a chain to plug in a consensus module – for instance, one could launch a CDK chain with a Proof-of-Stake validator set producing blocks, effectively decentralizing sequencing from day one. In fact, Polygon’s earlier framework (Polygon Edge) was used for permissioned enterprise chains using IBFT consensus; CDK chains could take a hybrid approach (run Polygon’s zkProver but have a committee of nodes propose blocks). By default, many CDK chains might run with a single operator for simplicity and then later adopt a consensus as they scale. Polygon is also exploring a shared sequencer or aggregator concept through the AggLayer hub, which is intended to connect all Polygon chains. While AggLayer primarily handles cross-chain messaging and liquidity, it could evolve into a shared sequencing service in the future (Polygon co-founder has discussed sequencer decentralization as part of Polygon 2.0). In general, permissionlessness is not yet present – one cannot spontaneously become a sequencer for someone’s CDK chain unless that project allows it. But projects like dYdX V4 (which is building a standalone chain with a form of decentralized consensus) and others show the appetite for validator-based L2s. Polygon CDK makes it technically feasible to have many block producers, but the exact implementation is left to the chain deployer. Expect Polygon to roll out more guidance or even infrastructure for decentralized sequencers as more enterprises and communities launch CDK chains.

To summarize the sequencer comparison: All frameworks currently rely on a relatively centralized sequencer model in their live deployments, to ensure efficiency. However, each provides a route to decentralization – whether via shared sequencing networks (OP Stack), pluggable consensus (CDK, ZK Stack), or permissionless validators (Arbitrum’s BoLD). Table below highlights sequencer designs:

| Sequencer Design | OP Stack | ZK Stack (zkSync) | Arbitrum Orbit | Polygon CDK |
| --- | --- | --- | --- | --- |
| Default operator model | Single sequencer (project-run) | Single sequencer (Matter Labs or project-run) | Single sequencer (project-run/Offchain Labs) | Single sequencer (project or Polygon-run) |
| Decentralization options | Yes – can customize consensus, e.g. multiple sequencers or future shared set | Yes – configurable; can integrate external consensus or priority queues | Yes – configurable; can use multi-validator (AnyTrust committee or custom) | Yes – can integrate PoS validators or IBFT consensus (project’s choice) |
| Permissionless participation | Planned: Superchain shared sequencer (not yet live). Fraud provers are permissionless on L1 (anyone can challenge). | Not yet (no public sequencer auction yet). Validity proofs don’t need challengers. Community can run read-nodes, but not produce blocks unless chosen. | Emerging: BoLD enables anyone to validate fraud proofs. Sequencer still chosen by chain (could be via DAO in future). | Not yet. Sequencers are appointed by chain owners or validators are permissioned/staked. Polygon’s roadmap includes community validation eventually. |
| Censorship resistance | L1 queue for users ensures inclusion. Training-wheels governance can veto sequencer misconduct. | L1 priority queue for inclusion. Validium mode needs trust in DA committee for data availability. | L1 inbox ensures inclusion if sequencer stalls. DAC mode requires ≥1 honest committee member to supply data. | Depends on chain’s consensus – e.g. if using a validator set, need ≥2/3 honest. Rollup mode fallback is L1 Ethereum inclusion. |

As seen, Optimism and Arbitrum include on-chain fallback queues, which is a strong censorship-resistance feature. ZK-based chains rely on the fact that a sequencer can’t forge state (thanks to ZK proofs), but if it censors, a new sequencer could be appointed by governance – an area still being refined. The trend in 2025 is that we’ll likely see more decentralized sequencer pools and possibly shared sequencer networks coming online, complementing these RaaS frameworks. Each project is actively researching this: e.g. Astria and others are building general shared sequencing services, and OP Labs, Polygon, and Offchain Labs have all mentioned plans to decentralize the sequencer role.
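As a concrete illustration of the L1 fallback path on an OP Stack chain, the sketch below (ethers v6) submits a deposit transaction directly to the chain’s OptimismPortal contract on Ethereum, bypassing the sequencer entirely; the protocol then guarantees eventual L2 inclusion. The RPC URL, portal address, and key handling are placeholders, and the gas limit is illustrative – treat this as a sketch of the mechanism, not a production script.

```typescript
// Illustrative sketch (ethers v6): force-including an L2 transaction via the L1 OptimismPortal,
// the fallback queue described above. Addresses, keys, and RPC URLs are placeholders.
import { ethers } from "ethers";

const l1 = new ethers.JsonRpcProvider("https://eth-mainnet.example/rpc"); // placeholder L1 RPC
const wallet = new ethers.Wallet(process.env.PRIVATE_KEY!, l1);

// Minimal ABI fragment for the Bedrock OptimismPortal deposit entry point.
const portal = new ethers.Contract(
  "0x0000000000000000000000000000000000000000", // placeholder: the target chain's OptimismPortal proxy
  ["function depositTransaction(address _to, uint256 _value, uint64 _gasLimit, bool _isCreation, bytes _data) payable"],
  wallet
);

async function forceInclude(): Promise<void> {
  // Send 0.01 ETH to ourselves on L2, even if the sequencer is down or censoring.
  const tx = await portal.depositTransaction(
    wallet.address,                 // L2 recipient
    ethers.parseEther("0.01"),      // value of the transaction on L2
    100_000n,                       // L2 gas limit for the deposited transaction
    false,                          // not a contract creation
    "0x",                           // no calldata
    { value: ethers.parseEther("0.01") } // ETH escrowed on L1 to back the L2 value
  );
  console.log("L1 deposit sent:", tx.hash);
  await tx.wait(); // once confirmed on L1, the rollup protocol requires the deposit to be included on L2
}

forceInclude().catch(console.error);
```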

Fee Models and Economics

Fee models determine who pays what in these rollup frameworks and how the economic incentives align for operators and the ecosystem. Key considerations include: What token are fees paid in? Who collects the fees? What costs (L1 posting, proving) must be covered? Are there revenue-sharing or kickback arrangements? How customizable are fee parameters?

  • Gas Token and Fee Customization: All compared frameworks allow customizing the native gas token, meaning a new chain can decide which currency users pay fees in. By default, rollups on Ethereum often choose ETH as the gas token for user convenience (users don’t need a new token to use the chain). For instance, Base (OP Stack) uses ETH for gas, as do zkSync Era and Polygon zkEVM. OP Stack technically supports replacing ETH with another ERC-20, but in the context of the OP Superchain, there’s a push to keep a standard (to make interoperability smoother). In fact, some OP Stack chains that initially considered a custom token opted for ETH – e.g., Worldcoin’s OP-chain uses ETH for fees even though the project has its own token WLD. On the other hand, Arbitrum Orbit launched without custom token support but quickly added it due to demand. Now Orbit chains can use ARB or any ERC-20 as gas. The Ape Chain L3 chose APE coin as its gas currency, showcasing this flexibility. Polygon CDK likewise lets you define the token; many projects lean towards using MATIC to align with Polygon’s ecosystem (and MATIC will upgrade to POL token under Polygon 2.0), but it’s not enforced. zkSync’s ZK Stack explicitly supports custom base tokens as well (the docs even have a “Custom base token” tutorial). This is useful for enterprise chains that might want, say, a stablecoin or their own coin for fees. It’s also crucial for app-chains that have their own token economy – they can drive demand for their token by making it the gas token. In summary, fee token is fully configurable in all frameworks, although using a widely-held token like ETH can lower user friction. (A brief wallet-side sketch of how a custom gas token surfaces to end users appears at the end of this section.)

  • Fee Collection and Distribution: Generally, the sequencer (block producer) collects transaction fees on the L2/L3. This is a primary incentive for running a sequencer. For example, Optimism’s sequencer earns all the gas fees users pay on Optimism, but must then pay for posting batches to Ethereum. Usually, the sequencer will take the user-paid L2 fees, subtract the L1 costs, and keep the remainder as profit. On a well-run chain, L1 costs are a fraction of L2 fees, leaving some profit margin. For ZK-rollups, there’s an extra cost: generating the ZK proof. This can be significant (requiring specialized hardware or cloud compute). Currently, some ZK rollup operators subsidize proving costs (spending VC funds) to keep user fees low during growth phase. Over time, proving costs are expected to drop with better algorithms and hardware. Framework-wise: zkSync and Polygon both allow the sequencer to charge a bit more to cover proving – and if a chain uses an external prover service, they might have a revenue split with them. Notably, no framework except OP Superchain has an enforced revenue-sharing at protocol level. The Optimism Collective’s Standard Rollup Revenue scheme requires OP Chains to remit either 2.5% of gross fees or 15% of net profits (whichever is greater) to a collective treasury. This is a voluntary-but-expected agreement under the Superchain charter, rather than a smart contract enforcement, but all major OP Stack chains (Base, opBNB, Worldcoin, etc.) have agreed to it. Those fees (over 14,000 ETH so far) fund public goods via Optimism’s governance. In contrast, Arbitrum does not charge Orbit chains any fee; Orbit is permissionless to use. Arbitrum DAO could potentially ask for some revenue sharing in the future (to fund its own ecosystem), but none exists as of 2025. Polygon CDK similarly does not impose a tax; Polygon’s approach is to attract users into its ecosystem (thus raising MATIC value and usage) rather than charge per-chain fees. Polygon co-founder Sandeep Nailwal explicitly said AggLayer “does not seek rent” from chains. zkSync also hasn’t announced any fee sharing – Matter Labs likely focuses on growing usage of zkSync Era and hyperchains, which indirectly benefits them via network effects and possibly future token value.

  • L1 Settlement Costs: A big part of the fee model is who pays for L1 transactions (posting data or proofs). In all cases, ultimately users pay, but the mechanism differs. In Optimistic rollups, the sequencer periodically posts batches of transactions (with calldata) to L1. The gas cost for those L1 transactions is paid by the sequencer using ETH. However, sequencers factor that into the L2 gas pricing. Optimism and Arbitrum have gas pricing formulas that estimate how much a transaction’s call-data will cost on L1 and include that in the L2 gas fee (often called the “amortized L1 cost” per tx). For example, a simple Optimism tx might incur 21,000 L2 gas for execution and maybe an extra few hundred for L1 data – the user’s fee covers both. If the pricing is misestimated, the sequencer might lose money on that batch or gain if usage is high. Sequencers typically adjust fees dynamically to match L1 conditions (raising L2 fees when L1 gas is expensive). In Arbitrum, the mechanism is similar, though Arbitrum has separate “L1 pricing” and “L2 pricing” components. In zkSync/Polygon (ZK), the sequencer must post a validity proof to L1 (costing a fixed gas amount to verify) plus either call data (if rollup) or state root (if validium). The proof verification cost is usually constant per batch (on zkSync Era it’s on the order of a few hundred thousand gas), so zkSync’s fee model spreads that cost across transactions. They might charge a slight overhead on each tx for proving. Notably, zkSync introduced features like state diffs and compression to minimize L1 data published. Polygon zkEVM likewise uses recursive proofs to batch many transactions into one proof, amortizing the verification cost. If a chain uses an alternative DA (Celestia/Avail), then instead of paying Ethereum for calldata, they pay that DA provider. Celestia, for instance, has its own gas token (TIA) to pay for data blobs. So a chain might need to convert part of its fees to pay Celestia validators for blob space. Frameworks are increasingly abstracting these costs: e.g., an OP Stack chain could pay a Celestia DA node via an adapter, and include that cost in user fees.

  • Costs to Users (Finality and Withdrawal): For optimistic rollups (OP Stack, Arbitrum Orbit in rollup mode), users face the infamous challenge period for withdrawals – typically 7 days on Ethereum L1. This is a usability hit, but most ecosystems have mitigations. Fast bridges (liquidity networks) allow users to swap their L2 tokens for L1 tokens instantly for a small fee, while arbitrageurs wait the 7 days. Arbitrum has gone further for Orbit chains, working with teams to enable fast withdrawals in as little as 15 minutes via liquidity providers integrated at the protocol level. This effectively means users don’t wait a week except in worst-case scenarios. ZK-rollups don’t have this delay – once a validity proof is accepted on L1, the state is final. So zkSync and Polygon users get faster finality (often minutes to an hour) depending on how often proofs are submitted. The trade-off is that proving might introduce a bit of delay between when a transaction is accepted on L2 and when it’s included in an L1 proof (could be a few minutes). But generally, ZK rollups are offering 10–30 minute withdrawals in 2025, which is a huge improvement over 7 days. Users may pay a slightly higher fee for immediate finality (to cover prover costs), but many deem it worth it. Fee Customization is also worth noting: frameworks allow custom fee schedules (like free transactions or gas subsidies) if projects want. For example, an enterprise could subsidize all user fees on their chain by running the sequencer at a loss (perhaps for a game or social app). Or they could set up a different gas model (some have toyed with no gas for certain actions, or alternative gas accounting). Since most frameworks aim for Ethereum-equivalence, such deep changes are rare, but possible with code modification. Arbitrum’s Stylus could enable different fee metering for WASM contracts (not charging for certain ops to encourage WASM usage, for instance). The Polygon CDK being open source and modular means if a project wanted to implement a novel fee mechanism (like fee burning or dynamic pricing), they could.

In essence, all rollup frameworks strive to align economic incentives: make it profitable to operate a sequencer (via fee revenue), keep fees reasonable for users by leveraging cheaper DA, and (optionally) funnel some value to their broader ecosystem. Optimism’s model is unique in explicitly sharing revenue for public goods, while others rely on growth and token economics (e.g., more chains -> more MATIC/ETH usage, increasing those token’s value).
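From the end user’s point of view, the gas-token choice discussed above shows up when the chain is added to a wallet. The sketch below uses the standard EIP-3085 wallet_addEthereumChain request to register a hypothetical app-chain whose native gas currency is a project token rather than ETH; the chain ID, RPC endpoint, and token symbol are all invented placeholders.

```typescript
// Illustrative sketch: registering a hypothetical app-chain with a custom gas token in a browser wallet.
// Uses the standard EIP-3085 wallet_addEthereumChain request; every identifier below is a placeholder.

declare const window: {
  ethereum?: { request: (args: { method: string; params?: unknown[] }) => Promise<unknown> };
};

async function addCustomGasTokenChain(): Promise<void> {
  if (!window.ethereum) throw new Error("No injected wallet found");

  await window.ethereum.request({
    method: "wallet_addEthereumChain",
    params: [
      {
        chainId: "0xABCDE",                            // placeholder chain ID (hex)
        chainName: "Example Orbit-Style App-Chain",    // placeholder name
        rpcUrls: ["https://rpc.example-appchain.xyz"], // placeholder RPC endpoint
        nativeCurrency: {
          name: "Example Token", // the chain's chosen gas token
          symbol: "EXT",         // placeholder symbol (compare: APE on Ape Chain)
          decimals: 18,
        },
        blockExplorerUrls: ["https://explorer.example-appchain.xyz"],
      },
    ],
  });
}

addCustomGasTokenChain().catch(console.error);
```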

Architecture and Modularity

All these frameworks pride themselves on a modular architecture, meaning each layer of the stack (execution, settlement, consensus, DA, proofs) is swappable or upgradable. Let’s briefly note each:

  • OP Stack: Built as a series of modules corresponding to Ethereum’s layers – execution engine (OP EVM, derived from geth), consensus/rollup node (op-node), settlement smart contracts, and soon fraud prover. The OP Stack’s design goal was EVM equivalence (no custom gas schedule or opcode changes) and ease of integration with Ethereum tooling. The Bedrock upgrade in 2023 further modularized Optimism’s stack, making it easier to swap out components (e.g., to implement ZK proofs in the future, or use a different DA). Indeed, OP Stack is not limited to optimistic fraud proofs – the team has said it’s open to integrating validity proofs when they mature, essentially turning OP Stack chains into ZK rollups without changing the developer experience. The Superchain concept extends the architecture to multiple chains: standardizing inter-chain communication, bridging, and maybe shared sequencing. OP Stack comes with a rich set of smart contracts on L1 (for deposits, withdrawals, fraud proof verification, etc.), which chains inherit out-of-the-box. It’s effectively a plug-and-play L2 chain template – projects like Base launched by forking the OP Stack repos and configuring them to point at their own contracts.

  • ZK Stack: The ZK Stack is the framework underlying zkSync Era and future “Hyperchains.” Architecturally, it includes the zkEVM execution environment (an LLVM-based VM that allows running Solidity code with minimal changes), the prover system (the circuits and proof generation for transactions), the sequencer node, and the L1 contracts (the zkSync smart contracts that verify proofs and manage state roots). Modularity is seen in how it separates the ZK proof circuit from the execution – theoretically one could swap in a different proving scheme or even a different VM (though not trivial). The ZK Stack introduces the Elastic Chain Architecture with components like ZK Router and ZK Gateway. These act as an interoperability layer connecting multiple ZK Chains. It’s a bit like an “internet of ZK rollups” concept, where the Router (on Ethereum) holds a registry of chains and facilitates shared bridging/liquidity, and the Gateway handles messages between chains off-chain. This is modular because a new chain can plug into that architecture simply by deploying with the standard contracts. ZK Stack also embraces account abstraction at the protocol level (contracts as accounts, native meta-transactions), which is an architectural choice to improve UX. Another modular aspect: as discussed in DA, it can operate in rollup or validium mode – essentially flipping a switch in config. Also, the stack has a notion of Pluggable consensus for sequencing (as noted prior). Settlement layer can be Ethereum or potentially another chain: zkSync’s roadmap even floated settling hyperchains on L2 (e.g., an L3 that posts proofs to zkSync Era L2 instead of L1) – indeed they launched a prototype called “ZK Portal” for L3 settlement on L2. This gives a hierarchical modularity (L3->L2->L1). Overall, ZK Stack is a bit less turnkey for non-Matter-Labs teams as of 2025 (since running a ZK chain involves coordinating provers, etc.), but it’s highly flexible in capable hands.

  • Arbitrum Orbit: Arbitrum’s architecture is built on the Arbitrum Nitro stack, which includes the ArbOS execution layer (Arbitrum’s interpretation of EVM with some small differences), the Sequencer/Relay, the AnyTrust component for alternative DA, and the fraud proof machinery (interactive fraud proofs). Orbit essentially lets you use that same stack but configure certain parameters (like chain ID, L2 genesis state, choice of rollup vs AnyTrust). Modularity: Arbitrum introduced Stylus, a new WASM-compatible smart contract engine that runs alongside the EVM. Stylus allows writing contracts in Rust, C, C++ which compile to WASM and run with near-native speed on Arbitrum chains. This is an optional module – Orbit chains can enable Stylus or not. It’s a differentiator for Arbitrum’s stack, making it attractive for high-performance dApps (e.g., gaming or trading apps might write some logic in Rust for speed). The data availability module is also pluggable as discussed (Arbitrum chains can choose on-chain or DAC). Another module is the L1 settlement: Orbit chains can post their proofs to either Ethereum (L1) or to Arbitrum One (L2). If the latter, they effectively are L3s anchored in Arbitrum One’s security (with slightly different trust assumptions). Many Orbit chains are launching as L3s (to inherit Arbitrum One’s lower fees and still ultimately Ethereum security). Arbitrum’s codebase is fully open source now, and projects like Caldera, Conduit build on it to provide user-friendly deployment – they might add their own modules (like monitoring, chain management APIs). It’s worth noting Arbitrum’s fraud proofs were historically not permissionless (only whitelisted validators could challenge), but with BoLD, that part of the architecture is changing to allow anyone to step in. So the fraud proof component is becoming more decentralized (which is a modular upgrade in a sense). One might say Arbitrum is less of a “lego kit” than OP Stack or Polygon CDK, in that Offchain Labs hasn’t released a one-click chain launcher (though they did release an Orbit deployment GUI on GitHub). But functionally, it’s modular enough that third parties have automated deployments for it.

  • Polygon CDK (AggLayer): Polygon CDK is explicitly described as a “modular framework” for ZK-powered chains. It leverages Polygon’s ZK proving technology (from Polygon zkEVM, which is based on Plonky2 and recursive SNARKs). The architecture separates the execution layer (which is an EVM – specifically a fork of Geth adjusted for zkEVM) from the prover layer and the bridge/settlement contracts. Because it’s modular, a developer can choose different options for each: e.g. Execution – presumably always EVM for now (to use existing tooling), DA – as discussed (Ethereum or others), Sequencer consensus – single vs multi-node, Prover – one can run the prover Type1 (validity proofs posted to Ethereum) or a Type2 (validium proofs) etc., and AggLayer integration – yes or no (AggLayer for interop). Polygon even provided a slick interface (shown below) to visualize these choices:

Polygon CDK’s configuration interface, illustrating modular choices – e.g. Rollups vs Validium (scaling solution), decentralized vs centralized sequencer, local/Ethereum/3rd-party DA, different prover types, and whether to enable AggLayer interoperability.

Under the hood, Polygon CDK uses zk-Proofs with recursion to allow high throughput and a dynamic validator set. The AggLayer is an emerging part of the architecture that will connect chains for trustless messaging and shared liquidity. The CDK is built in a way that future improvements in Polygon’s ZK tech (like faster proofs, or new VM features) can be adopted by all CDK chains via upgrades. Polygon has a concept of “Type 1 vs Type 2” zkEVM – Type 1 is fully Ethereum-equivalent, Type 2 is almost equivalent with minor changes for efficiency. A CDK chain could choose a slightly modified EVM for more speed (sacrificing some equivalence) – this is an architectural option projects have. Overall, CDK is very lego-like: one can assemble a chain choosing components suitable for their use case (e.g., an enterprise might choose validium + permissioned sequencers + private Tx visibility; a public DeFi chain might choose rollup + decentralized sequencer + AggLayer enabled for liquidity). This versatility has attracted many projects to consider CDK for launching their own networks.

  • Images and diagrams: The frameworks often provide visual diagrams of their modular architecture. For example, zkSync’s UI shows toggles for Rollup/Validium, L2/L3, centralized/decentralized, etc., highlighting the ZK Stack’s flexibility:

An example configuration for a zkSync “Hyperchain.” The ZK Stack interface allows selecting chain mode (Rollup vs Validium vs Volition), layer (L2 or L3), transaction sequencing (decentralized, centralized, or shared), data availability source (Ethereum, third-party network, or custom), data visibility (public or private chain), and gas token (ETH, custom, or gasless). This modular approach is designed to support a variety of use cases, from public DeFi chains to private enterprise chains.

In summary, all these stacks are highly modular and upgradable, which is essential given the pace of blockchain innovation. They are converging in some sense: OP Stack is adding validity proofs, Polygon is adding shared sequencing (an idea pioneered in OP Stack circles), Arbitrum is adding interoperable L3s, and zkSync is pursuing L3s much as Orbit and OP Stack do. This cross-pollination means modular frameworks in 2025 are more alike than different in philosophy – each wants to be the one-stop toolkit to launch scalable chains without reinventing the wheel.

Developer Experience and Tooling

A critical factor for adoption is how easy and developer-friendly these frameworks are. This includes documentation, SDKs/APIs, CLIs for deployment, monitoring tools, and the learning curve for developers:

  • OP Stack – Developer Experience: Optimism’s OP Stack benefits from being EVM-equivalent, so Ethereum developers can use familiar tools (Remix, Hardhat, Truffle, Solidity, Vyper) without modification. Smart contracts deployed to an OP chain behave exactly as on L1, which drastically lowers the learning curve. Optimism provides extensive documentation: the official Optimism docs have sections on the OP Stack, running an L2 node, and even an “OP Stack from scratch” tutorial. There are community-written guides as well (for example, QuickNode’s step-by-step guide on deploying an Optimism L2 rollup). In terms of tooling, OP Labs has released the op-node client (the rollup node) and op-geth (the execution engine). To launch a chain, a developer typically needs to configure these and deploy the L1 contracts (Standard Bridge, etc.). This was non-trivial but is becoming easier with provider services. Deployment-as-a-service: companies like Caldera, Conduit, and Infura/Alchemy offer managed OP Stack rollup deployments, which abstract away much of the DevOps. For monitoring, because an OP Stack chain is essentially a geth chain plus a rollup coordinator, standard Ethereum monitoring tools (ETH metrics dashboards, block explorers like Etherscan/Blockscout) can be used. In fact, Etherscan supports OP Stack chains such as Optimism and Base, providing familiar block explorer interfaces. Developer tooling specifically for OP Chains includes the Optimism SDK for bridging (facilitating deposits/withdrawals in apps; see the bridging sketch after this list) and Bedrock’s integration with the Ethereum JSON-RPC (so tools like MetaMask work by simply switching networks). The OP Stack code is MIT licensed, inviting developers to fork and experiment. Many did – e.g. BNB Chain’s team used OP Stack to build opBNB with their own modifications to consensus and gas token (opBNB uses BNB for gas). The OP Stack’s adherence to Ethereum standards makes its developer experience arguably the smoothest among these frameworks: essentially “Ethereum, but cheaper” from a contract developer’s perspective. The main new skills needed are around running the infrastructure (for those launching a chain) and understanding cross-chain bridging nuances. Optimism’s community and support channels (Discord, forums) are active in helping new chain teams. Additionally, Optimism has funded ecosystem tools like Magi (an alternative Rust rollup client) to diversify the stack and make it more robust for developers.

  • zkSync ZK Stack – Developer Experience: On the contract development side, zkSync’s ZK Stack offers a zkEVM that aims for high compatibility but is currently not 100% bytecode-equivalent. It supports Solidity and Vyper contracts, but there are subtle differences (for example, certain precompiles or gas costs). That said, Matter Labs built an LLVM compiler that takes Solidity and produces zkEVM bytecode, so most Solidity code works with little to no change. They also natively support account abstraction, which devs can leverage to create gasless transactions, multi-sig wallets, etc., more easily than on Ethereum (no need for ERC-4337). The developer docs for zkSync are comprehensive (docs.zksync.io) and cover how to deploy contracts, use the Hyperchain CLI (where available), and configure a chain (see the Hardhat configuration sketch after this list). However, running a ZK rollup is inherently more complex than an optimistic one – you need a proving setup. The ZK Stack provides the prover software (e.g. the GPU provers for zkSync’s circuits), but a chain operator must have access to serious hardware or cloud services to generate proofs continuously. This is a new DevOps challenge; to mitigate it, some companies are emerging that provide prover services or even Proof-as-a-Service. If a developer doesn’t want to run their own provers, they might be able to outsource it (with trust or crypto-economic assurances). Tooling: zkSync provides a bridge and wallet portal by default (the zkSync Portal) which can be forked for a new chain, giving users a UI to move assets and view accounts. For block exploration, Blockscout has been adapted to zkSync, and Matter Labs built their own block explorer for zkSync Era which could likely be used for new chains. The existence of the ZK Gateway and Router means that if a developer plugs into that, they get some out-of-the-box interoperability with other chains – but they need to follow Matter Labs’ standards. Overall, for a smart contract dev, building on zkSync is not too difficult (just Solidity, with perhaps minor differences – for example, gasleft() may behave slightly differently because zkSync’s gas model does not exactly mirror Ethereum’s). But for a chain operator, the ZK Stack has a steeper learning curve than OP Stack or Orbit. In 2025, Matter Labs is focusing on improving this – for instance, simplifying the process of launching a Hyperchain, possibly providing scripts or cloud images to spin up the whole stack. There is also an emerging community of devs around ZK Stack; e.g., the zkSync Community Edition is an initiative where community members run test L3 chains and share tips. We should note that language support for zkSync’s ecosystem might expand – they’ve talked about allowing other languages via the LLVM pipeline (e.g., a Rust-to-zkEVM compiler in the future), but Solidity is the main one now. In summary, zkSync’s dev experience: great for dApp devs (nearly Ethereum-like), moderate for chain launchers (who need to handle provers and new concepts like validiums).

  • Arbitrum Orbit – Developer Experience: For Solidity developers, Arbitrum Orbit (and Arbitrum One) is fully EVM-compatible at the bytecode level (Arbitrum Nitro uses geth-derived execution). Thus, deploying and interacting with contracts on an Arbitrum chain is just like Ethereum (with some small differences like slightly different L1 block number access, chain ID, etc., but nothing major – see the ArbSys sketch after this list). Where Arbitrum stands out is Stylus – developers can write smart contracts in languages like Rust, C, and C++ (compiled to WebAssembly) and deploy those alongside EVM contracts. This opens blockchain development to a wider pool of programmers and enables high-performance use cases. For example, algorithmically intensive logic could be written in C for speed. Stylus is still in beta on Arbitrum mainnet, but Orbit chains can experiment with it. This is a unique boon for developer experience, though those using Stylus will need to learn new tooling (e.g., Rust toolchains, and Arbitrum’s libraries for interfacing WASM with the chain). The Arbitrum docs provide guidance on using Stylus and even writing Rust smart contracts. For launching an Orbit chain, Offchain Labs has provided devnet scripts and an Orbit deployment UI. The process is somewhat technical: one must set up an Arbitrum node with --l3 flags (if launching an L3) and configure the genesis, chain parameters, etc. QuickNode and others have published guides (“How to deploy your own Arbitrum Orbit chain”). Additionally, Orbit partnerships with Caldera, AltLayer, and Conduit mean these third parties handle a lot of the heavy lifting. A developer can essentially fill out a form or run a wizard with those services to get a customized Arbitrum chain, instead of manually modifying the Nitro code. In terms of debugging and monitoring, Arbitrum chains can use Arbiscan (for those that have it) or community explorers. There are also Grafana/Prometheus integrations for node metrics. One complexity is the fraud proof system – developers launching an Orbit chain should ensure there are validators (maybe themselves or trusted others) who run the off-chain validator software to watch for fraud. Offchain Labs likely provides default scripts for running such validators. But since fraud proofs rarely trigger, it’s more about having the security process in place. Arbitrum’s large developer community (projects building on Arbitrum One) is an asset – resources like tutorials, Stack Exchange answers, etc., often apply to Orbit as well. Also, Arbitrum is known for its strong developer education efforts (workshops, hackathons), which presumably extend to those interested in Orbit.

  • Polygon CDK – Developer Experience: Polygon CDK is newer (announced mid/late 2023), but it builds on familiar components. For developers writing contracts, Polygon CDK chains use a zkEVM that’s intended to be equivalent to Ethereum’s EVM (Polygon’s Type 2 zkEVM is nearly identical with a few edge cases). So, Solidity and Vyper are the go-to languages, with full support for standard Ethereum dev tools. If you’ve deployed on Polygon zkEVM or Ethereum, you can deploy on a CDK chain similarly. The challenge is more on the chain operations side. Polygon’s CDK is open-source on GitHub and comes with documentation on how to configure a chain. It likely provides a command-line tool to scaffold a new chain (similar to how one might use Cosmos SDK’s starport or Substrate’s node template). Polygon Labs has invested in making the setup as easy as possible – one quote: “launch a high-throughput ZK-powered Ethereum L2 as easily as deploying a smart contract”. While perhaps optimistic, this indicates tools or scripts exist to simplify deployment. Indeed, there have been early adopters like Immutable (for gaming) and OKX (exchange chain) that have worked with Polygon to launch CDK chains, suggesting a fairly smooth process with Polygon’s team support. The CDK includes SDKs and libraries to interact with the bridge (for deposits/withdrawals) and to enable AggLayer if desired. Monitoring a CDK chain can leverage Polygon’s block explorer (Polygonscan) if they integrate it, or Blockscout. Polygon is also known for robust SDKs for gaming and mobile (e.g., Unity SDKs) – those can be used on any Polygon-based chain. Developer support is a big focus: Polygon has academies, grants, hackathons regularly, and their Developer Relations team helps projects one-on-one. An example of enterprise developer experience: Libre, an institutional chain launched with CDK, presumably had custom requirements – Polygon was able to accommodate things like identity modules or compliance features on that chain. This shows the CDK can be extended for specific use cases by developers with help from the framework. As for learning materials, Polygon’s docs site and blog have guides on CDK usage, and because CDK is essentially the evolution of their zkEVM, those familiar with Polygon’s zkEVM design can pick it up quickly. One more tooling aspect: Cross-chain tools – since many Polygon CDK chains will coexist, Polygon provides the AggLayer for messaging, but also encourages use of standard cross-chain messaging like LayerZero (indeed Rarible’s Orbit chain integrated LayerZero for NFT transfers and Polygon chains can too). So, devs have options to integrate interoperability plugins easily. All told, the CDK developer experience is aimed to be turnkey for launching Ethereum-level chains with ZK security, benefiting from Polygon’s years of L2 experience.
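For the OP Stack bridging mentioned above, a minimal sketch of an L1-to-L2 ETH deposit might look like the following. The package name and method signatures are assumptions based on the publicly documented Optimism SDK and should be checked against the current docs (the SDK targets ethers v5).

```typescript
// Sketch of an L1 -> OP Stack L2 ETH deposit using the Optimism SDK.
// Package name and method signatures are assumptions -- verify against current docs.
import { ethers } from "ethers"; // ethers v5, as used by @eth-optimism/sdk
import { CrossChainMessenger } from "@eth-optimism/sdk";

async function depositToL2(l1RpcUrl: string, l2RpcUrl: string, privateKey: string) {
  const l1Provider = new ethers.providers.JsonRpcProvider(l1RpcUrl);
  const l2Provider = new ethers.providers.JsonRpcProvider(l2RpcUrl);
  const l1Wallet = new ethers.Wallet(privateKey, l1Provider);

  const messenger = new CrossChainMessenger({
    l1ChainId: 1,   // Ethereum mainnet
    l2ChainId: 10,  // OP Mainnet; any other OP Stack chain would use its own chain ID
    l1SignerOrProvider: l1Wallet,
    l2SignerOrProvider: l2Provider,
  });

  // Deposit 0.01 ETH; the messenger wraps the Standard Bridge contracts mentioned above.
  const tx = await messenger.depositETH(ethers.utils.parseEther("0.01"));
  await tx.wait();
  console.log("Deposit submitted on L1:", tx.hash);
}
```

The same pattern applies to withdrawals, which additionally require proving and finalizing the message after the challenge window.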
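For the zkSync Hardhat setup referenced above, compilation goes through a zkEVM-specific compiler rather than stock solc. The snippet below sketches what such a configuration might look like using Matter Labs’ Hardhat plugins; the plugin names and option fields are assumptions based on common usage and should be verified against the zkSync docs.

```typescript
// Sketch of a Hardhat configuration for a zkSync-based chain.
// Plugin names and config fields are assumptions -- verify against the zkSync docs.
import "@matterlabs/hardhat-zksync-solc";
import "@matterlabs/hardhat-zksync-deploy";
import type { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  zksolc: {
    version: "latest", // the zkEVM-specific Solidity compiler
    settings: {},
  },
  solidity: "0.8.24",
  networks: {
    zkSyncEra: {
      url: "https://mainnet.era.zksync.io",
      ethNetwork: "mainnet", // the L1 (or parent chain) this rollup settles to
      zksync: true,          // marks the network as a zkSync-style chain
    },
    // A hypothetical Hyperchain would use the same shape, pointed at its own RPC endpoint.
  },
};

export default config;
```

From the contract author’s point of view, the Solidity source stays the same; only the compiler and network entries change.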
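Finally, the ArbSys sketch referenced in the Arbitrum item: one of the “small differences” on Arbitrum-based chains is block-number semantics, and the ArbSys precompile is how contracts (and scripts) read the chain’s own block height. This is a read-only example using ethers v6; the precompile address and function come from Arbitrum’s documented interface, but double-check against your chain’s docs.

```typescript
// Reading Arbitrum's own block height via the ArbSys precompile (ethers v6).
// On Arbitrum chains, Solidity's block.number returns an approximation of the parent
// chain's block number, so code that needs the chain's own height uses ArbSys instead.
import { ethers } from "ethers";

const ARBSYS_ADDRESS = "0x0000000000000000000000000000000000000064"; // fixed ArbSys address
const ARBSYS_ABI = ["function arbBlockNumber() view returns (uint256)"];

async function readBlockNumbers(rpcUrl: string): Promise<void> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const arbSys = new ethers.Contract(ARBSYS_ADDRESS, ARBSYS_ABI, provider);

  const rpcHeight = await provider.getBlockNumber(); // the chain's own height via eth_blockNumber
  const arbHeight = await arbSys.arbBlockNumber();   // the same height, as contracts see it via ArbSys

  console.log(`eth_blockNumber: ${rpcHeight}, ArbSys.arbBlockNumber(): ${arbHeight}`);
}

readBlockNumbers("https://arb1.arbitrum.io/rpc").catch(console.error);
```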

In conclusion, developer experience has dramatically improved for launching custom chains: what once required a whole team of protocol engineers can now be done with guided frameworks and support. Optimism’s and Arbitrum’s offerings leverage familiarity (EVM equivalence), zkSync and Polygon offer cutting-edge tech with increasing ease-of-use, and all have growing ecosystems of third-party tools to simplify development (from block explorers to monitoring dashboards and devops scripts). The documentation quality is generally high – official docs plus community guides (Medium articles, QuickNode/Alchemy guides) cover a lot of ground. There is still a non-trivial learning curve to go from smart contract developer to “rollup operator,” but it’s getting easier as best practices emerge and the community of rollup builders expands.

Ecosystem Support and Go-to-Market Strategies

Building a technology is one thing; building an ecosystem is another. Each of these frameworks is backed by an organization or community investing in growth through grants, funding, marketing, and partnership support. Here we compare their ecosystem support strategies – how they attract developers and projects, and how they help those projects succeed:

  • OP Stack (Optimism) Ecosystem: Optimism has a robust ecosystem strategy centered on its Optimism Collective and ethos of public goods funding. They pioneered Retroactive Public Goods Funding (RPGF) – using the OP token treasury to reward developers and projects that benefit the ecosystem. Through multiple RPGF rounds, Optimism has distributed millions in funding to infrastructure projects, dev tools, and applications on Optimism. Any project building with OP Stack (especially if aligning with the Superchain vision) is eligible to apply for grants from the Collective. Additionally, Optimism’s governance can authorize incentive programs (earlier, in 2022, they had an airdrop and governance fund that projects could tap to distribute OP rewards to users). In 2024, Optimism established the Superchain Revenue Sharing model, where each OP Chain contributes a small portion of fees to a shared treasury. This creates a flywheel: as more chains (like Base, opBNB, Worldcoin’s chain, etc.) generate usage, they collectively fund more public goods that improve the OP Stack, which in turn attracts more chains. It’s a positive-sum approach unique to Optimism. On the go-to-market side, Optimism has actively partnered with major entities: getting Coinbase to build Base was a huge validation of OP Stack, and OP Labs provided technical help and support to Coinbase during that process. Similarly, they’ve worked with Worldcoin’s team, and Celo’s migration to an OP Stack L2 was done with consultation from OP Labs. Optimism does a lot of developer outreach – from running hackathons (often combined with ETHGlobal events) to maintaining a Developer Hub with tutorials. They also invest in tooling: e.g., funding teams to build alternative clients, monitoring tools, and providing an official faucet and block explorer integration for new chains. Marketing-wise, Optimism coined the term “Superchain” and actively promotes the vision of many chains uniting under one interoperable umbrella, which has attracted projects that want to be part of a broader narrative rather than an isolated appchain. There’s also the draw of shared liquidity: with upcoming Superchain interoperability, apps on one OP Chain will be able to interact easily with another, making it appealing to launch a chain that’s not an island. In essence, OP Stack’s ecosystem play is about community and collaboration – join the Superchain, get access to a pool of users (via easy bridging), funding, and collective branding. They even created a “Rollup Passport” concept where users can have a unified identity across OP Chains. All these efforts lower the barrier for new chains to find users and devs. Finally, Optimism’s own user base and reputation (being one of the top L2s) means any OP Stack chain can somewhat piggyback on that (Base did, by advertising itself as part of the Optimism ecosystem, for instance).

  • zkSync (ZK Stack/Hyperchains) Ecosystem: Matter Labs (the team behind zkSync) secured large funding rounds (over $200M) to fuel its ecosystem. They have set up funds like the zkSync Ecosystem Fund, often in collaboration with VCs, to invest in projects building on zkSync Era. For the ZK Stack specifically, they have started to promote the concept of Hyperchains to communities that need their own chain. One strategy is targeting specific verticals: for example, gaming. zkSync has highlighted how a game studio could launch its own Hyperchain to get customizability and still be connected to Ethereum. They are likely offering close support to initial partners (in the way Polygon did with some enterprises). The mention in the Zeeve article about a “Swiss bank; world’s largest bank” interested in ZK Stack suggests Matter Labs is courting enterprise use cases that need privacy (ZK proofs can ensure correctness while keeping some data private, a big deal for institutions). If zkSync lands a major enterprise chain, that would boost their credibility. Developer support on zkSync is quite strong: they run accelerators (e.g., a program with Blockchain Founders Fund was announced), hackathons (often ZK-themed ones), and have an active community on their Discord providing technical help. While zkSync doesn’t have a live token (as of 2025) for governance or incentives, there’s speculation of one, and projects might anticipate future incentive programs. Matter Labs has also been working on bridging support: they partnered with major bridges like Across, LayerZero, and Wormhole to ensure assets and messages can move easily to and from zkSync and any hyperchains. In fact, Across Protocol integrated zkSync’s ZK Stack, boasting support across “all major L2 frameworks”. This interoperability focus means a project launching a hyperchain can readily connect to Ethereum mainnet and other L2s, which is crucial for attracting users (nobody wants to be siloed). Marketing-wise, zkSync pushes the slogan “Web3 without compromise” and emphasizes being first to ZK mainnet. They publish roadmaps (their 2025 roadmap blog) to keep excitement high. If we consider ecosystem funds: aside from direct Matter Labs grants, there’s also the Ethereum Foundation and other ZK-focused funds that favor zkSync development due to the general importance of ZK tech. Another strategy: zkSync is open source and neutral (no licensing fees), which appeals to projects that might be wary of aligning with a more centralized ecosystem. The ZK Stack is trying to position itself as the decentralizer’s choice – e.g., highlighting full decentralization and no training wheels, whereas OP Stack and others still have some centralization in practice. Time will tell if that resonates, but certainly within the Ethereum community, zkSync has supporters who want a fully trustless stack. Finally, Matter Labs and BitDAO’s Windranger have a joint initiative called “ZK DAO” which might deploy capital or incentives for ZK Stack adoption. Overall, zkSync’s ecosystem efforts are a mix of technical superiority messaging (ZK is the future) and building practical bridges (both figurative and literal) for projects to come onboard.

  • Arbitrum Orbit Ecosystem: Arbitrum has a huge existing ecosystem on its L2 (Arbitrum One), with the highest DeFi TVL among L2s in 2024. Offchain Labs leverages this by encouraging successful Arbitrum dApps to consider Orbit chains for sub-applications or L3 expansions. They announced that over 50 Orbit chains were in development by late 2023, expecting perhaps 100+ by end of 2024 – indicating substantial interest. To nurture this, Offchain Labs adopted a few strategies. First, partnerships with RaaS providers: they realized not every team can handle the rollup infra, so they enlisted Caldera, Conduit, and AltLayer to streamline it. These partners often have their own grant or incentive programs (sometimes co-sponsored by Arbitrum) to entice projects. For example, there might be an Arbitrum x AltLayer grant for gaming chains. Second, Offchain Labs provides direct technical support and co-development for key projects. The case of Xai Chain is illustrative: it’s a gaming L3 where Offchain Labs co-developed the chain and provides ongoing tech and even marketing support. They basically helped incubate Xai to showcase Orbit’s potential in gaming. Similarly, Rarible’s RARI NFT chain got integrated with many partners (Gelato for gasless transactions, LayerZero for cross-chain NFTs, etc.), presumably with Arbitrum’s guidance. Offchain Labs also sometimes uses its war chest (the Arbitrum DAO has a huge treasury of ARB tokens) to fund initiatives. While the Arbitrum DAO is separate, Offchain Labs can coordinate with it for ecosystem matters. For instance, if an Orbit chain heavily uses the ARB token or benefits Arbitrum, the DAO could vote to grant funds. A more direct approach: Offchain Labs launched Arbitrum Orbit Challenge hackathons and prizes to encourage developers to try building L3s. On marketing: Arbitrum’s brand is developer-focused, and they promote Orbit’s advantages like Stylus (fast, multi-language contracts) and no 7-day withdrawal (with fast bridging). They also highlight successful examples: e.g., Treasure DAO’s Bridgeworld announced an Orbit chain, etc. One more support angle: liquidity and DeFi integration. Arbitrum is working with protocols so that if you launch an Orbit chain, you can tap into liquidity from Arbitrum One easily (via native bridging or LayerZero). The easier it is to get assets and users moving to your new chain, the more likely you’ll succeed. Arbitrum has a very large, active community (on Reddit, Discord, etc.), and by extending that to Orbit, new chains can market to existing Arbitrum users (for example, an Arbitrum user might get an airdrop on a new Orbit chain to try it out). In summary, Arbitrum’s ecosystem strategy for Orbit is about leveraging their L2 dominance – if you build an L3, you’re effectively an extension of the largest L2, so you get to share in that network effect. Offchain Labs is actively removing hurdles (technical and liquidity hurdles) and even directly helping build some early L3s to set precedents for others to follow.

  • Polygon CDK (AggLayer) Ecosystem: Polygon has been one of the most aggressive in ecosystem and business development. They have a multi-pronged approach:

    • Grants and Funds: Polygon established a $100M Ecosystem Fund a while back, and has invested in hundreds of projects. They also had specific vertical funds (e.g., Polygon Gaming Fund, Polygon DeFi Fund). For CDK chains, Polygon announced incentives such as covering part of the cost of running a chain or providing liquidity support. The CoinLaw stats mention “More than 190 dApps are leveraging Polygon CDK to build their own chains” – which implies Polygon has gotten a vast pipeline of projects (likely many still in development). They’ve likely offered grants or resource sharing to these teams.
    • Enterprise and Institutional Onboarding: Polygon’s BizDev team has on-boarded major companies (Starbucks, Reddit, Nike, Disney for NFTs on Polygon POS). Now with CDK, they pitch enterprises to launch dedicated chains. E.g., Immutable (gaming platform) partnering to use CDK for game-specific chains, Franklin Templeton launching a fund on Polygon, and Walmart’s trial of a supply chain on a private Polygon chain. Polygon provides white-glove support to these partners: technical consulting, custom feature development (privacy, compliance), and co-marketing. The introduction of Libre (by JP Morgan/Siemens) built on Polygon CDK shows how they cater to financial institutions with specialized needs.
    • Go-to-Market and Interoperability: Polygon is creating the AggLayer as an interoperability and liquidity hub connecting all Polygon chains. This means if you launch a CDK chain, you’re not on your own – you become part of “Polygon 2.0,” a constellation of chains with unified liquidity. They promise things like one-click token transfer between CDK chains and Ethereum (via AggLayer). They are also not charging any protocol fees (no rent), which they tout as a competitive advantage against, say, Optimism’s fee sharing. Polygon’s marketing highlights that launching a CDK chain can give you “the best of both worlds”: custom sovereignty and performance plus access to the large user base and developer base of Polygon/Ethereum. They often cite that Polygon (POS+zkEVM) combined processed 30%+ of all L2 transactions, to assure potential chain builders that the flow of users on Polygon is huge.
    • Developer Support: Polygon runs perhaps the most hackathons and DevRel events in the blockchain space. They have a dedicated Polygon University, online courses, and they frequently sponsor ETHGlobal and other hackathons with challenges around using CDK, zkEVM, etc. So developers can win prizes building prototypes of CDK chains or cross-chain dapps. They also maintain a strong presence in developer communities and provide quick support (the Polygon Discord has channels for technical questions where core devs answer).
    • Community and Governance: Polygon is transitioning to Polygon 2.0 with a new POL token and community governance that spans all chains. This could mean community treasuries or incentive programs that apply to CDK chains. For example, there may be a Polygon Ecosystem Mining program where liquidity mining rewards are offered to projects that deploy on new CDK chains to bootstrap usage. The idea is to ensure new chains aren’t ghost towns.
    • Success Stories: Already, several CDK chains are live or announced: OKX’s OKB Chain (X Layer), Gnosis Pay’s chain, Astar’s zkEVM, Palm Network migrating, GameSwift (gaming chain), etc.. Polygon actively publicizes these and shares knowledge from them to others.

Overall, Polygon’s strategy is “we will do whatever it takes to help you succeed if you build on our stack.” That includes financial incentives, technical manpower, marketing exposure (speaking slots in conferences, press releases on CoinTelegraph like we saw), and integration into a larger ecosystem. It’s very much a business development-driven approach in addition to grassroots dev community, reflecting Polygon’s more corporate style relative to the others.

To summarize ecosystem support: All these frameworks understand that attracting developers and projects requires more than tech – it needs funding, hand-holding, and integration into a larger narrative. Optimism pushes a collaborative public-goods-focused narrative with fair revenue sharing. zkSync pushes the cutting-edge tech angle and likely will announce incentives aligned with a future token. Arbitrum leverages its existing dominance and provides partner networks to make launching easy, plus possibly the deepest DeFi liquidity to tap into. Polygon arguably goes the furthest in smoothing the path for both crypto-native and enterprise players, effectively subsidizing and co-marketing chains.

An illustrative comparative snapshot:

| Framework | Notable Ecosystem Programs | Developer/Partner Support | Ecosystem Size (2025) |
| --- | --- | --- | --- |
| OP Stack (Optimism) | RetroPGF grants (OP token); Superchain fee sharing for public goods; multiple grant waves for tooling & dapps. | OP Labs offers direct tech support to new chains (e.g. Base); strong dev community; Superchain branding & interoperability to attract users. Regular hackathons (often Optimism-sponsored tracks). | Optimism mainnet ~160+ dapps; Base gaining traction; 5+ OP Chains live (Base, opBNB, Worldcoin, Zora, others) and more announced (Celo). Shared $14k+ ETH revenue to the Collective. Large community via Optimism and Coinbase users. |
| zkSync ZK Stack | zkSync Ecosystem Fund (>$200M raised for dev financing); possible future airdrops; targeted vertical programs (e.g. gaming, AI agents on Hyperchains). | Matter Labs provides technical onboarding for early Hyperchain pilots; detailed docs and open-source code. Partnered with bridge protocols for connectivity. Developer incentives mostly through hackathons and VC investments (no token incentives yet). | zkSync Era L2 has 160+ protocols, ~$100M TVL. Early Hyperchains in test (no major live L3 yet). Enterprise interest signals future growth (e.g. pilot with a large bank). Strong ZK developer community and growing recognition. |
| Arbitrum Orbit | Arbitrum DAO $ARB treasury ($3B+) for potential grants; Offchain Labs partnerships with RaaS providers (Caldera, AltLayer) subsidizing chain launches; Orbit accelerator programs. | Offchain Labs co-developed flagship Orbit chains (Xai, etc.); assists with marketing (Binance Launchpad for Xai’s token). Dev support via Arbitrum’s extensive documentation and direct engineering help for integration (Stylus, custom gas). Fast-bridge support for user experience. | Arbitrum One: largest L2 TVL (~$5B); ~50 Orbit chains in development as of late 2023, ~16 launched by early 2025. Notable live chains: Xai, Rari Chain, Frame, etc. DeFi-heavy ecosystem on L2 can extend liquidity to L3s. Large, loyal community (Arbitrum airdrop had >250k participants). |
| Polygon CDK (AggLayer) | Polygon Ecosystem Fund & many vertical funds (NFTs, gaming, enterprise); Polygon 2.0 treasury for incentives; offers to cover certain infra costs for new chains. AggLayer liquidity/reward programs expected. | Polygon Labs team works closely with partners (e.g. Immutable, enterprises) on custom needs; extensive devrel (Polygon University, hackathons, tutorials). Integration of CDK chains with Polygon’s zkEVM and PoS infrastructure (shared wallets, bridges). Marketing via big-brand partnerships (public case studies of Nike, Reddit on Polygon) to lend credibility. | Polygon PoS: huge adoption (4B+ txns); Polygon zkEVM growing (100+ dapps). CDK: 20+ chains either live (OKX, Gnosis Pay, etc.) or in the pipeline by end of 2024; ~190 projects exploring CDK. Enterprise adoption notable (financial institutions, retail giants). One of the largest developer ecosystems thanks to Polygon PoS history, now funneled into CDK. |

As the table suggests, each ecosystem has its strengths – Optimism with collaborative ethos and Coinbase’s weight, zkSync with ZK leadership and innovation focus, Arbitrum with proven adoption and technical prowess (Stylus), Polygon with corporate connections and comprehensive support. All are pumping significant resources into growing their communities, because ultimately the success of a rollup framework is measured by the apps and users on the chains built with it.

Deployments and Adoption in 2025

Finally, let’s look at where these frameworks stand in terms of real-world adoption as of 2025 – both in the crypto-native context (public networks, DeFi/NFT/gaming projects) and enterprise or institutional use:

  • OP Stack Adoption: The OP Stack has powered Optimism Mainnet, which itself is one of the top Ethereum L2s with a thriving DeFi ecosystem (Uniswap, Aave, etc.) and tens of thousands of daily users. In 2023–2024, OP Stack was chosen by Coinbase for their Base network – Base launched in August 2023, quickly onboarded popular apps (Coinbase’s own wallet integration, the friend.tech social app), and reached high activity (at times even surpassing Optimism in transactions). Base’s success validated OP Stack for many; Base had 800M transactions in 2024, making it the second-highest chain by tx count that year. Another major OP Stack deployment is opBNB – Binance’s BNB Chain team created an L2 using OP Stack (but settling on BNB Chain instead of Ethereum). opBNB went live in 2023, indicating OP Stack’s flexibility to use a non-Ethereum settlement layer. Worldcoin’s World ID chain went live on OP Stack (settling on Ethereum) in 2023 to handle its unique biometric identity transactions. Zora Network, an NFT-centric chain by Zora, launched on OP Stack as well, tailored for creator economy use cases. Perhaps the most ambitious is Celo’s migration: Celo voted to transition from an independent L1 to an Ethereum L2 built on OP Stack – as of 2025, this migration is underway, effectively bringing a whole existing ecosystem (Celo’s DeFi and phone-focused apps) into the OP Stack fold. We also have smaller projects like Mode (Bybit’s side chain) and Mantle (BitDAO’s chain) – Mantle, too, opted for a modified OP Stack. And many more are rumored or in development, given Optimism’s open-source approach (anyone can fork and launch without permission). On the enterprise side, we haven’t seen many explicit OP Stack enterprise chains (enterprises seem drawn more to Polygon or custom builds). However, Base has enterprise (Coinbase) backing, and that’s significant. The Superchain vision implies that even enterprise chains might join as OP Chains to benefit from shared governance – for instance, if a fintech wanted to launch a compliant chain, using OP Stack and plugging into the Superchain could give it ready connectivity. As of 2025, OP Stack chains collectively (Optimism, Base, others) handle a significant portion of L2 activity, and the Superchain’s aggregated throughput is presented as a metric (Optimism often publishes combined stats). With the Bedrock upgrade and further improvements, OP Stack chains have proven highly reliable (Optimism had negligible downtime). The key measure of adoption: OP Stack is arguably the most forked rollup framework so far, given high-profile examples like Base, opBNB, and Celo. In total, roughly 5–10 OP Stack chains are live on mainnet, with many more on testnets; if we include devnets and upcoming launches, the number grows.

  • zkSync Hyperchains Adoption: zkSync Era mainnet (L2) itself launched in March 2023, and by 2025 it is among the top ZK rollups, with ~$100M TVL and dozens of projects. Notable apps like Curve, Uniswap, and Chainlink deployed or announced deployments on zkSync. Now, regarding Hyperchains (L3 or sovereign chains), this is very cutting-edge. In late 2024, Matter Labs launched a program for teams to experiment with L3s on top of zkSync. One example: the Rollup-as-a-Service provider Decentriq was reportedly testing a private Hyperchain for data sharing. Also, Blockchain Capital (VC) hinted at experimenting with an L3. There have been mentions of an ecosystem of 18+ protocols leveraging ZK Stack for things like AI agents and specialized use cases – possibly on testnets. No major Hyperchain is publicly serving users yet (as far as is known by mid-2025). However, interest is high in specific domains: gaming projects have shown interest in ZK hyperchains for fast finality and customizability, as have privacy-oriented chains (a Hyperchain could include encryption and use ZK proofs to hide data – something an optimistic rollup can’t offer as easily). The comment about a “Swiss bank” suggests maybe UBS or a consortium is testing a private chain using ZK Stack, likely attracted by throughput (~10k TPS) and privacy. If that moves to production, it would be a flagship enterprise case. In summary, zkSync’s Hyperchain adoption in 2025 is in an early pilot stage: the developer infrastructure is ready (as evidenced by documentation and some test deployments), but the first movers have yet to go live. It’s comparable to where Optimism was in early 2021 – proven tech but just starting adoption. By the end of 2025, we could expect a couple of Hyperchains live, possibly one community-driven (maybe a gaming Hyperchain spun out of a popular zkSync game) and one enterprise-driven. Another factor: there’s talk of Layer 3s on zkSync Era as well – essentially permissionless L3s where anyone can deploy an app-chain atop zkSync’s L2. Matter Labs has built the contracts to allow that, so we may see user-driven L3s (like someone launching a mini rollup for their specific app), which would also count as adoption of the ZK Stack.

  • Arbitrum Orbit Adoption: Arbitrum Orbit saw a surge of interest after its formal introduction in mid-2023. By late 2023, around 18 Orbit chains were publicly disclosed, and Offchain Labs indicated over 50 in progress. As of 2025, some of the prominent ones:

    • Xai Chain: A gaming-focused L3, now live (mainnet launched late 2023). It’s used by game developers (like Ex Populus studio) and had a token launch via Binance Launchpad. This indicates decent adoption (Binance Launchpad involvement suggests lots of user interest). Xai uses AnyTrust mode (for high TPS).
    • Rari Chain: An NFT-centric L3 by Rarible. Launched mainnet Jan 2024. It’s focused on NFT marketplaces with features like credit card payments for gas (via Stripe) and gasless listings. This chain is a good showcase of customizing user experience (as noted, Gelato provides gasless transactions, etc. on Rari Chain).
    • Frame: A creator-focused L2 (though called L2, it’s likely an Orbit chain settling on Ethereum or Arbitrum). It launched early 2024 after raising funding.
    • EduChain (by Camelot/GMX communities): The Zeeve article mentions an EDU chain with a large number of projects – possibly an ecosystem for on-chain education and AI, built on Orbit.
    • Ape Chain: Not explicitly mentioned above, but the context from Zeeve suggests an “Ape chain” (maybe Yuga Labs or ApeCoin DAO chain) exists with $9.86M TVL and uses APE for gas. That could be an Orbit chain in the ApeCoin ecosystem (this would be significant given Yuga’s influence in NFTs).
    • Other gaming chains: e.g., Cometh’s “Muster” L3 was announced (a gaming platform partnering with AltLayer). Syndr Chain for an options trading protocol is on testnet as Orbit L3. Meliora (DeFi credit protocol) building an Orbit L3.
    • Many of these are in early stages (testnet or recently launched mainnet), but collectively they indicate Orbit is gaining adoption among specialized dApps that outgrew a shared L2 environment or wanted their own governance.
    • On enterprise: not as much noise here. Arbitrum is known more for DeFi/gaming adoption. However, the technology could appeal to enterprise if they want an Ethereum-secured chain with flexible trust (via AnyTrust). It’s possible some enterprise quietly used Arbitrum technology for a private chain, but not publicized.
    • By the numbers, Arbitrum Orbit’s biggest user so far might be Ape Chain (if confirmed) with ~$10M TVL and 17 protocols on it (according to Zeeve). Another is the EDU chain with ~$1.35M TVL and 30+ projects.
    • Arbitrum One and Nova themselves are part of this narrative – the fact Orbit chains can settle on Nova (ultra-cheap social/gaming chain) or One means adoption of Orbit also drives activity to those networks. Nova has seen usage for Reddit points etc. If Orbit chains plug into Nova’s AnyTrust committee, Nova’s role grows.
    • In sum, Arbitrum Orbit has moved beyond theory: dozens of real projects are building on it, focusing on gaming, social, and custom DeFi. Arbitrum’s approach of showing real use-cases (like Xai, Rari) has paid off, and we can expect by end of 2025 there will be possibly 50+ Orbit chains live, some with significant user bases (especially if one of the gaming chains hits a popular game release).
  • Polygon CDK Adoption: Polygon only announced CDK in H2 2023, but it piggybacks on the success of Polygon’s existing networks. Already, Polygon zkEVM (mainnet beta) itself is essentially a CDK chain run by Polygon Labs. It has seen decent adoption (over $50M TVL, major protocols deployed). But beyond that, numerous independent chains are in motion:

    • Immutable X (a large Web3 gaming platform) declared support for Polygon CDK to let game studios spin up their own zk-rollups that connect to Immutable and Polygon liquidity. This alliance means possibly dozens of games using CDK via Immutable in 2025.
    • OKX (exchange) launched OKB Chain (aka X Layer) using Polygon CDK in late 2024. An exchange chain can drive a lot of transactions (CEX-to-DEX flows, etc.). OKX chose Polygon presumably for scalability and because many of their users already use Polygon.
    • Canto (a DeFi chain) and Astar (a Polkadot parachain) are mentioned as migrating to or integrating with Polygon CDK. Canto moving from its Cosmos-based stack to a Polygon CDK layer indicates the appeal of sharing security with Ethereum via Polygon’s ZK tech.
    • Gnosis Pay: launched its Gnosis Card chain with CDK – a chain that enables fast stablecoin payments connected to a Visa card. This is live and an innovative fintech use case.
    • Palm Network: an NFT-specialized chain originally on Ethereum that is moving to Polygon CDK (Palm was co-founded by ConsenSys for NFTs with DC Comics, etc.).
    • dYdX: This is interesting – dYdX was building its own Cosmos chain, but Zeeve’s info lists dYdX under AggLayer CDK chains. If dYdX were to consider Polygon instead, that would be huge (though as of known info, dYdX V4 is Cosmos-based; perhaps they plan cross-chain or future pivot).
    • Nubank: one of the largest digital banks in Brazil, appears in Zeeve’s list. Nubank had launched a token on Polygon earlier; a CDK chain for their rewards or CBDC-like program could be in testing.
    • Wirex, IDEX, GameSwift, Aavegotchi, Powerloom, Manta… these names in Zeeve’s list show how cross-ecosystem the CDK reach is: e.g., Manta (a Polkadot privacy project) might use CDK for an Ethereum-facing ZK solution; Aavegotchi (an NFT game originally on Polygon POS) might get its own chain for game logic.
    • The Celestia integration in early 2024 will likely attract projects that want the Polygon tech but with Celestia DA – possibly some Cosmos projects (since Celestia is Cosmos-based) will choose Polygon CDK for execution and Celestia for DA.
    • Enterprises: Polygon has a dedicated enterprise team. Apart from those mentioned (Stripe on stablecoins, Franklin Templeton fund on Polygon, country governments minting stamps, etc.), with CDK they can promise enterprises their own chain with custom rules. We might see pilots like “Polygon Siemens Chain” or government chains emerging, though often those start private.
    • Polygon’s approach of being chain-agnostic (they even support an “OP Stack mode” now in CDK per Zeeve!) and not charging rent, has meant a rapid onboarding – they claim 190+ projects using or considering CDK by Q1 2025. If even a quarter of those go live, Polygon will have an expansive network of chains. They envision themselves not just as one chain but as an ecosystem of many chains (Polygon 2.0), possibly the largest such network if successful.
    • By numbers: as of early 2025, 21+ chains are either in mainnet or testnet using CDK according to the AggLayer site. This should accelerate through 2025 as more migrate or launch.
    • We can expect some high-profile launches, e.g. a Reddit chain (Reddit’s avatars on Polygon POS were huge; a dedicated Polygon L2 for Reddit could happen). Also, if any central bank digital currencies (CBDCs) or government projects choose a scaling solution, Polygon is often in those conversations – a CDK chain could be their choice for a permissioned L2 with zk proofs.

In summary, 2025 adoption status: OP Stack and Arbitrum Orbit have multiple live chains with real users and TVL, zkSync’s hyperchains are on the cusp with strong test pilots, and Polygon CDK has many lined up and a few live successes in both crypto and enterprise. The space is evolving rapidly, and projects often cross-consider these frameworks before choosing. It’s not zero-sum either – e.g., an app might use an OP Stack chain and a Polygon CDK chain for different regions or purposes. The modular blockchain future likely involves interoperability among all these frameworks. It’s notable that efforts like LayerZero and bridge aggregators now ensure assets move relatively freely between Optimism, Arbitrum, Polygon, zkSync, etc., so users might not even realize which stack a chain is built on under the hood.

Conclusion

Rollups-as-a-Service in 2025 offers a rich menu of options. OP Stack provides a battle-tested optimistic rollup framework with Ethereum alignment and the backing of a collaborative Superchain community. ZK Stack (Hyperchains) delivers cutting-edge zero-knowledge technology with modular validity and data choices, aiming for massive scalability and new use-cases like private or Layer-3 chains. Arbitrum Orbit extends a highly optimized optimistic rollup architecture to developers, with flexibility in data availability and the exciting addition of Stylus for multi-language smart contracts. Polygon CDK empowers projects to launch zkEVM chains with out-of-the-box interoperability (AggLayer) and the full support of Polygon’s ecosystem and enterprise ties. zkSync Hyperchains (via ZK Stack) promise to unlock Web3 at scale – multiple hyperchains all secured by Ethereum, each optimized for its domain (be it gaming, DeFi, or social), with seamless connectivity through zkSync’s Elastic framework.

In comparing data availability, we saw all frameworks embracing modular DA – Ethereum for security, and newer solutions like Celestia, EigenDA, or committees for throughput. Sequencer designs are initially centralized but moving toward decentralization: Optimism and Arbitrum provide L1 fallback queues and are enabling multi-sequencer or permissionless validator models, while Polygon and zkSync allow custom consensus deployment for chains that desire it. Fee models differ mainly in ecosystem philosophy – Optimism’s revenue share vs others’ self-contained economies – but all allow custom tokens and aim to minimize user costs by leveraging cheaper DA and fast finality (especially ZK chains).

On ecosystem support, Optimism fosters a collective where each chain contributes to shared goals (funding public goods) and benefits from shared upgrades. Arbitrum leverages its thriving community and liquidity, actively helping projects launch Orbit chains and integrating them with its DeFi hub. Polygon goes all-in with resources, courting both crypto projects and corporates, providing perhaps the most hands-on support and boasting an extensive network of partnerships and funds. Matter Labs (zkSync) drives innovation and appeals to those who want the latest ZK tech, and while its incentive programs are less publicly structured (pending a token), it has significant funding to deploy and a strong pull for ZK-minded builders.

From a developer’s perspective, launching a rollup in 2025 is more accessible than ever. Whether one’s priority is EVM-equivalence and ease (OP Stack, Arbitrum) or maximum performance and future-proof tech (ZK Stack, Polygon CDK), the tools and documentation are in place. Even monitoring and dev-tools have grown to support these custom chains – for instance, Alchemy and QuickNode’s RaaS platforms support Optimism, Arbitrum, and zkSync stacks out-of-the-box. This means teams can focus on their application and leave much of the heavy lifting to these frameworks.

Looking at public and enterprise adoption, it’s clear that modular rollups are moving from experimental to mainstream. We have global brands like Coinbase, Binance, and OKX running their own chains, major DeFi protocols like Uniswap expanding to multiple L2s and possibly their own rollups, and even governments and banks exploring these technologies. The competition (and collaboration) between OP Stack, ZK Stack, Orbit, CDK, etc., is driving rapid innovation – ultimately benefiting Ethereum by scaling it to reach millions of new users through tailored rollups.

Each framework has its unique value proposition:

  • OP Stack: Easy on-ramp to L2, shared Superchain network effects, and a philosophy of “impact = profit” via public goods.
  • ZK Stack: Endgame scalability with ZK integrity, flexibility in design (L2 or L3, rollup or validium), and prevention of liquidity fragmentation through the Elastic chain model.
  • Arbitrum Orbit: Proven tech (Arbitrum One never had a major failure), high performance (Nitro + Stylus), and the ability to customize trust assumptions (full rollup security or faster AnyTrust) for different needs.
  • Polygon CDK: Turnkey zk-rollups backed by one of the largest ecosystems, with immediate connectivity to Polygon/Ethereum assets and the promise of future “unified liquidity” via AggLayer – effectively a launchpad not just for a chain, but for a whole economy on that chain.
  • zkSync Hyperchains: A vision of Layer-3 scalability where even small apps can have their own chain secured by Ethereum, with minimal overhead, enabling Web2-level performance in a Web3 environment.

As of mid-2025, we are seeing the multi-chain modular ecosystem materialize: dozens of app-specific or sector-specific chains coexisting, many built with these stacks. L2Beat and similar sites now track not just L2s but L3s and custom chains, many of which use OP Stack, Orbit, CDK, or ZK Stack. Interoperability standards are being developed so that whether a chain uses Optimism or Polygon tech, they can talk to each other (projects like Hyperlane, LayerZero, and even OP and Polygon collaboration on shared sequencing).

In conclusion, Rollups-as-a-Service in 2025 has matured into a competitive landscape with OP Stack, ZK Stack, Arbitrum Orbit, Polygon CDK, and zkSync Hyperchains each offering robust, modular blockchain frameworks. They differ in technical approach (Optimistic vs ZK), but all aim to empower developers to launch scalable, secure chains tailored to their needs. The choice of stack may depend on a project’s specific priorities – EVM compatibility, finality speed, customization, community alignment, etc. – as outlined above. The good news is that there is no shortage of options or support. Ethereum’s rollup-centric roadmap is being realized through these frameworks, heralding an era where launching a new chain is not a monumental feat, but rather a strategic decision akin to choosing a cloud provider or tech stack in Web2. The frameworks will continue to evolve (e.g. we anticipate more convergence, like OP Stack embracing ZK proofs, Polygon’s AggLayer connecting to non-Polygon chains, etc.), but even now they collectively ensure that Ethereum’s scalability and ecosystem growth are limited only by imagination, not infrastructure.

Sources:

  • Optimism OP Stack – Documentation and Mirror posts
  • zkSync ZK Stack – zkSync docs and Matter Labs posts
  • Arbitrum Orbit – Arbitrum docs, Offchain Labs announcements
  • Polygon CDK – Polygon Tech docs, CoinTelegraph report
  • General comparison – QuickNode Guides (Mar 2025), Zeeve and others for ecosystem stats, plus various project blogs as cited above.

Trusted Execution Environments (TEEs) in the Web3 Ecosystem: A Deep Dive

· 68 min read

1. Overview of TEE Technology

Definition and Architecture: A Trusted Execution Environment (TEE) is a secure area of a processor that protects the code and data loaded inside it with respect to confidentiality and integrity. In practical terms, a TEE acts as an isolated “enclave” within the CPU – a kind of black box where sensitive computations can run shielded from the rest of the system. Code running inside a TEE enclave is protected so that even a compromised operating system or hypervisor cannot read or tamper with the enclave’s data or code. Key security properties provided by TEEs include:

  • Isolation: The enclave’s memory is isolated from other processes and even the OS kernel. Even if an attacker gains full admin privileges on the machine, they cannot directly inspect or modify enclave memory.
  • Integrity: The hardware ensures that code executing in the TEE cannot be altered by external attacks. Any tampering of the enclave code or runtime state will be detected, preventing compromised results.
  • Confidentiality: Data inside the enclave remains encrypted in memory and is only decrypted for use within the CPU, so secret data is not exposed in plain text to the outside world.
  • Remote Attestation: The TEE can produce cryptographic proofs (attestations) to convince a remote party that it is genuine and that specific trusted code is running inside it. This means users can verify that an enclave is in a trustworthy state (e.g. running expected code on genuine hardware) before provisioning it with secret data.

Conceptual diagram of a Trusted Execution Environment as a secure enclave “black box” for smart contract execution. Encrypted inputs (data and contract code) are decrypted and processed inside the secure enclave, and only encrypted results leave the enclave. This ensures that sensitive contract data remains confidential to everyone outside the TEE.

Under the hood, TEEs are enabled by hardware-based memory encryption and access control in the CPU. For example, when a TEE enclave is created, the CPU allocates a protected memory region for it and uses dedicated keys (burned into the hardware or managed by a secure co-processor) to encrypt/decrypt data on the fly. Any attempt by external software to read the enclave memory gets only encrypted bytes. This unique CPU-level protection allows even user-level code to define private memory regions (enclaves) that privileged malware or even a malicious system administrator cannot snoop or modify. In essence, a TEE provides a higher level of security for applications than the normal operating environment, while still being more flexible than dedicated secure elements or hardware security modules.
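To tie the remote-attestation property to a concrete workflow: before sending secrets to an enclave, a client typically verifies an attestation report that binds the enclave’s code measurement to genuine hardware, and only then establishes a secure channel. The sketch below is purely conceptual – every helper function is a hypothetical placeholder, not a real SGX/SEV SDK call – and is meant only to show the order of operations.

```typescript
// Conceptual remote-attestation flow before provisioning secrets to an enclave.
// All helper functions here are hypothetical placeholders, not a real attestation SDK.

interface AttestationReport {
  measurement: string;       // hash of the code/data loaded into the enclave
  hardwareQuote: string;     // signature chain rooted in the CPU vendor's keys
  enclavePublicKey: string;  // key generated inside the enclave for secure channels
}

declare function fetchAttestation(enclaveUrl: string): Promise<AttestationReport>;
declare function verifyQuoteSignature(report: AttestationReport): Promise<boolean>;
declare function encryptFor(publicKey: string, plaintext: Uint8Array): Uint8Array;

const EXPECTED_MEASUREMENT = "0xabc123"; // published hash of the audited enclave code (illustrative)

async function provisionSecret(enclaveUrl: string, secret: Uint8Array): Promise<void> {
  const report = await fetchAttestation(enclaveUrl);

  // 1. Check the hardware signature: proves the report came from a genuine TEE.
  if (!(await verifyQuoteSignature(report))) throw new Error("untrusted hardware");

  // 2. Check the code measurement: proves the enclave is running the expected code.
  if (report.measurement !== EXPECTED_MEASUREMENT) throw new Error("unexpected enclave code");

  // 3. Only now encrypt the secret to the enclave's key and send it.
  const ciphertext = encryptFor(report.enclavePublicKey, secret);
  await fetch(`${enclaveUrl}/provision`, { method: "POST", body: ciphertext });
}
```

The important point is the ordering: secrets are released only after both the hardware and the code identity have been verified.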

Key Hardware Implementations: Several hardware TEE technologies exist, each with different architectures but a similar goal of creating a secure enclave within the system:

  • Intel SGX (Software Guard Extensions): Intel SGX is one of the most widely used TEE implementations. It allows applications to create enclaves at the process level, with memory encryption and access controls enforced by the CPU. Developers must partition their code into “trusted” code (inside the enclave) and “untrusted” code (normal world), using special instructions (ECALL/OCALL) to transfer data in and out of the enclave. SGX provides strong isolation for enclaves and supports remote attestation via Intel’s Attestation Service (IAS). Many blockchain projects – notably Secret Network and Oasis Network – built privacy-preserving smart contract functionality on SGX enclaves. However, SGX’s design on complex x86 architectures has led to some vulnerabilities (see §4), and Intel’s attestation introduces a centralized trust dependency.

  • ARM TrustZone: TrustZone takes a different approach by dividing the processor’s entire execution environment into two worlds: a Secure World and a Normal World. Sensitive code runs in the Secure World, which has access to certain protected memory and peripherals, while the Normal World runs the regular OS and applications. Switches between worlds are controlled by the CPU. TrustZone is commonly used in mobile and IoT devices for things like secure UI, payment processing, or digital rights management. In a blockchain context, TrustZone could enable mobile-first Web3 applications by allowing private keys or sensitive logic to run in the phone’s secure enclave. However, TrustZone enclaves are typically larger-grained (at OS or VM level) and not as commonly adopted in current Web3 projects as SGX.

  • AMD SEV (Secure Encrypted Virtualization): AMD’s SEV technology targets virtualized environments. Instead of requiring application-level enclaves, SEV can encrypt the memory of entire virtual machines. It uses an embedded security processor to manage cryptographic keys and to perform memory encryption so that a VM’s memory remains confidential even to the hosting hypervisor. This makes SEV well-suited for cloud or server use cases: for example, a blockchain node or off-chain worker could run inside a fully-encrypted VM, protecting its data from a malicious cloud provider. SEV’s design means less developer effort to partition code (you can run an existing application or even an entire OS in a protected VM). Newer iterations like SEV-SNP add features like tamper detection and allow VM owners to attest their VMs without relying on a centralized service. SEV is highly relevant for TEE use in cloud-based blockchain infrastructure.

Other emerging or niche TEE implementations include Intel TDX (Trust Domain Extensions, for enclave-like protection in VMs on newer Intel chips), open-source TEEs like Keystone (RISC-V), and secure enclave chips in mobile (such as Apple’s Secure Enclave, though not typically open for arbitrary code). Each TEE comes with its own development model and trust assumptions, but all share the core idea of hardware-isolated secure execution.

2. Applications of TEEs in Web3

Trusted Execution Environments have become a powerful tool in addressing some of Web3’s hardest challenges. By providing a secure, private computation layer, TEEs enable new possibilities for blockchain applications in areas of privacy, scalability, oracle security, and integrity. Below we explore major application domains:

Privacy-Preserving Smart Contracts

One of the most prominent uses of TEEs in Web3 is enabling confidential smart contracts – programs that run on a blockchain but can handle private data securely. Blockchains like Ethereum are transparent by default: all transaction data and contract state are public. This transparency is problematic for use cases that require confidentiality (e.g. private financial trades, secret ballots, personal data processing). TEEs provide a solution by acting as a privacy-preserving compute enclave connected to the blockchain.

In a TEE-powered smart contract system, transaction inputs can be sent to a secure enclave on a validator or worker node, processed inside the enclave where they remain encrypted to the outside world, and then the enclave can output an encrypted or hashed result back to the chain. Only authorized parties with the decryption key (or the contract logic itself) can access the plaintext result. For example, Secret Network uses Intel SGX in its consensus nodes to execute CosmWasm smart contracts on encrypted inputs, so things like account balances, transaction amounts, or contract state can be kept hidden from the public while still being usable in computations. This has enabled secret DeFi applications – e.g. private token swaps where the amounts are confidential, or secret auctions where bids are encrypted and only revealed after auction close. Another example is Oasis Network’s Parcel and confidential ParaTime, which allow data to be tokenized and used in smart contracts under confidentiality constraints, enabling use cases like credit scoring or medical data on blockchain with privacy compliance.
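
To make this round trip concrete, here is a minimal Python sketch of the pattern, assuming a simulated enclave class; Fernet from the `cryptography` package stands in for the enclave's provisioned encryption keys, and the key exchange is simplified (a real system would establish keys via remote attestation and asymmetric key exchange rather than handing out a symmetric key object).

```python
# pip install cryptography
from cryptography.fernet import Fernet

class SimulatedEnclave:
    """Stands in for an SGX enclave on a validator node (illustrative only)."""
    def __init__(self):
        self._input_key = Fernet(Fernet.generate_key())   # provisioned inside the enclave
        self._state = {}                                   # contract state, never leaves in plaintext

    def input_key(self) -> Fernet:
        # Simplification: in reality users derive a shared key from the enclave's
        # attested public key, they are never handed the symmetric key itself.
        return self._input_key

    def execute(self, encrypted_tx: bytes, user_key: Fernet) -> bytes:
        """Decrypt inside the 'enclave', update private state, return an encrypted result."""
        amount = int(self._input_key.decrypt(encrypted_tx))
        self._state["balance"] = self._state.get("balance", 0) + amount
        # Only the holder of user_key (e.g. a viewing key) can read the output.
        return user_key.encrypt(str(self._state["balance"]).encode())

enclave = SimulatedEnclave()
viewing_key = Fernet(Fernet.generate_key())              # held by the user

tx = enclave.input_key().encrypt(b"42")                  # user encrypts the input off-chain
ciphertext_on_chain = enclave.execute(tx, viewing_key)   # all that observers ever see
print("on-chain ciphertext:", ciphertext_on_chain[:16], b"...")
print("decrypted by owner:", viewing_key.decrypt(ciphertext_on_chain).decode())
```

The point is the shape of the flow: plaintext exists only inside the enclave object, and everything written to the chain is ciphertext that only the authorized key holder can open.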

Privacy-preserving smart contracts via TEEs are attractive for enterprise and institutional adoption of blockchain. Organizations can leverage smart contracts while keeping sensitive business logic and data confidential. For instance, a bank could use a TEE-enabled contract to handle loan applications or trade settlements without exposing client data on-chain, yet still benefit from the transparency and integrity of blockchain verification. This capability directly addresses regulatory privacy requirements (such as GDPR or HIPAA), allowing compliant use of blockchain in healthcare, finance, and other sensitive industries. Indeed, TEEs facilitate compliance with data protection laws by ensuring that personal data can be processed inside an enclave with only encrypted outputs leaving, which helps demonstrate to regulators that the data is safeguarded.

Beyond confidentiality, TEEs also help enforce fairness in smart contracts. For example, a decentralized exchange could run its matching engine inside a TEE to prevent miners or validators from seeing pending orders and unfairly front-running trades. In summary, TEEs bring a much-needed privacy layer to Web3, unlocking applications like confidential DeFi, private voting/governance, and enterprise contracts that were previously infeasible on public ledgers.

Scalability and Off-Chain Computation

Another critical role for TEEs is improving blockchain scalability by offloading heavy computations off-chain into a secure environment. Blockchains struggle with complex or computationally intensive tasks due to performance limits and costs of on-chain execution. TEE-enabled off-chain computation allows these tasks to be done off the main chain (thus not consuming block gas or slowing down on-chain throughput) while still retaining trust guarantees about the correctness of the results. In effect, a TEE can serve as a verifiable off-chain compute accelerator for Web3.

For example, the iExec platform uses TEEs to create a decentralized cloud computing marketplace where developers can run computations off-chain and get results that are trusted by the blockchain. A dApp can request a computation (say, a complex AI model inference or a big data analysis) to be done by iExec worker nodes. These worker nodes execute the task inside an SGX enclave, producing a result along with an attestation that the correct code ran in a genuine enclave. The result is then returned on-chain, and the smart contract can verify the enclave’s attestation before accepting the output. This architecture allows heavy workloads to be handled off-chain without sacrificing trust, effectively boosting throughput. The iExec Orchestrator integration with Chainlink demonstrates this: a Chainlink oracle fetches external data, then hands off a complex computation to iExec’s TEE workers (e.g. aggregating or scoring the data), and finally the secure result is delivered on-chain. Use cases include things like decentralized insurance calculations (as iExec demonstrated), where a lot of data crunching can be done off-chain and cheaply, with only the final outcome going to the blockchain.
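
The sketch below is a simplified model of this verification step rather than iExec's actual protocol: the verifier accepts a result only if it is signed by a key that attestation ties to a genuine enclave and if the reported code measurement matches the code the requester asked to run. The Ed25519 key and the `measurement` field are illustrative stand-ins for the enclave's attestation key and its MRENCLAVE value.

```python
# pip install cryptography
import hashlib, json
from cryptography.hazmat.primitives.asymmetric import ed25519

# --- inside the (simulated) enclave worker ----------------------------------
enclave_key = ed25519.Ed25519PrivateKey.generate()   # in SGX this would be bound to the hardware
TASK_CODE = b"def score(xs): return sum(xs) / len(xs)"
measurement = hashlib.sha256(TASK_CODE).hexdigest()  # stand-in for the enclave measurement

def run_task(inputs):
    result = sum(inputs) / len(inputs)
    report = json.dumps({"measurement": measurement, "result": result}, sort_keys=True).encode()
    return report, enclave_key.sign(report)

# --- verifier / on-chain side ------------------------------------------------
TRUSTED_ENCLAVE_PUBKEY = enclave_key.public_key()     # learned once via remote attestation
EXPECTED_MEASUREMENT = hashlib.sha256(TASK_CODE).hexdigest()

def accept_result(report: bytes, signature: bytes) -> float:
    TRUSTED_ENCLAVE_PUBKEY.verify(signature, report)  # raises InvalidSignature if tampered
    payload = json.loads(report)
    if payload["measurement"] != EXPECTED_MEASUREMENT:
        raise ValueError("result was produced by unexpected code")
    return payload["result"]

report, sig = run_task([3, 5, 10])
print("verified off-chain result:", accept_result(report, sig))
```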

TEE-based off-chain computation also underpins some Layer-2 scaling solutions. Oasis Labs’ early prototype Ekiden (the precursor to Oasis Network) used SGX enclaves to run transaction execution off-chain in parallel, then commit only state roots to the main chain, effectively similar to rollup ideas but using hardware trust. By doing contract execution in TEEs, they achieved high throughput while relying on enclaves to preserve security. Another example is Sanders Network’s forthcoming Op-Succinct L2, which combines TEEs and zkSNARKs: TEEs execute transactions privately and quickly, and then zk-proofs are generated to prove the correctness of those executions to Ethereum. This hybrid approach leverages TEE speed and ZK verifiability for a scalable, private L2 solution.

In general, TEEs can run near-native performance computations (since they use actual CPU instructions, just with isolation), so they are orders of magnitude faster than pure cryptographic alternatives like homomorphic encryption or zero-knowledge proofs for complex logic. By offloading work to enclaves, blockchains can handle more complex applications (like machine learning, image/audio processing, large analytics) that would be impractical on-chain. The results come back with an attestation, which the on-chain contract or users can verify as originating from a trusted enclave, thus preserving data integrity and correctness. This model is often called “verifiable off-chain computation”, and TEEs are a cornerstone for many such designs (e.g. Hyperledger Avalon’s Trusted Compute Framework, developed by Intel, iExec, and others, uses TEEs to execute EVM bytecode off-chain with proof of correctness posted on-chain).

Secure Oracles and Data Integrity

Oracles bridge blockchains with real-world data, but they introduce trust challenges: how can a smart contract trust that an off-chain data feed is correct and untampered? TEEs provide a solution by serving as a secure sandbox for oracle nodes. A TEE-based oracle node can fetch data from external sources (APIs, web services) and process it inside an enclave that guarantees the data hasn’t been manipulated by the node operator or by malware on the node. The enclave can then sign or attest to the truth of the data it provides. This significantly improves oracle data integrity and trustworthiness. Even if an oracle operator is malicious, they cannot alter the data without breaking the enclave’s attestation (which the blockchain will detect).
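
Structurally this mirrors the attested-computation sketch above; what changes is what the enclave signs. Below is a hedged sketch of an oracle report that binds the data source, a request nonce, and a timestamp so consumers can reject stale or replayed reports. The URL, price value, and key handling are hypothetical placeholders, not any particular oracle network's format.

```python
# pip install cryptography
import json, time
from cryptography.hazmat.primitives.asymmetric import ed25519

oracle_enclave_key = ed25519.Ed25519PrivateKey.generate()   # held inside the oracle's enclave

def enclave_fetch_price(pair: str, nonce: int):
    """Pretend the enclave fetched `pair` over HTTPS; it signs what it saw, when, and for which request."""
    price = 61250.42                                          # placeholder for the fetched value
    report = json.dumps({"source": "https://api.example.com/price",
                         "pair": pair, "price": price,
                         "nonce": nonce, "ts": int(time.time())},
                        sort_keys=True).encode()
    return report, oracle_enclave_key.sign(report)

ORACLE_PUBKEY = oracle_enclave_key.public_key()               # pinned after attestation
MAX_AGE_SECONDS = 60

def consume(report: bytes, sig: bytes, expected_nonce: int) -> float:
    ORACLE_PUBKEY.verify(sig, report)                         # data unchanged since it left the enclave
    data = json.loads(report)
    assert data["nonce"] == expected_nonce, "stale or replayed report"
    assert time.time() - data["ts"] < MAX_AGE_SECONDS, "report too old"
    return data["price"]

report, sig = enclave_fetch_price("BTC/USD", nonce=7)
print("accepted price:", consume(report, sig, expected_nonce=7))
```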

A notable example is Town Crier, an oracle system developed at Cornell that was one of the first to use Intel SGX enclaves to provide authenticated data to Ethereum contracts. Town Crier would retrieve data (e.g. from HTTPS websites) inside an SGX enclave and deliver it to a contract along with evidence (an enclave signature) that the data came straight from the source and wasn’t forged. Chainlink recognized the value of this and acquired Town Crier in 2018 to integrate TEE-based oracles into its decentralized network. Today, Chainlink and other oracle providers have TEE initiatives: for instance, Chainlink’s DECO and Fair Sequencing Services involve TEEs to ensure data confidentiality and fair ordering. As noted in one analysis, “TEE revolutionized oracle security by providing a tamper-proof environment for data processing... even the node operators themselves cannot manipulate the data while it’s being processed”. This is particularly crucial for high-value financial data feeds (like price oracles for DeFi): a TEE can prevent even subtle tampering that could lead to big exploits.

TEEs also enable oracles to handle sensitive or proprietary data that couldn’t be published in plaintext on a blockchain. For example, an oracle network could use enclaves to aggregate private data (like confidential stock order books or personal health data) and feed only derived results or validated proofs to the blockchain, without exposing the raw sensitive inputs. In this way, TEEs broaden the scope of what data can be securely integrated into smart contracts, which is critical for real-world asset (RWA) tokenization, credit scoring, insurance, and other data-intensive on-chain services.

On the topic of cross-chain bridges, TEEs similarly improve integrity. Bridges often rely on a set of validators or a multi-sig to custody assets and validate transfers between chains, which makes them prime targets for attacks. By running bridge validator logic inside TEEs, one can secure the bridge’s private keys and verification processes against tampering. Even if a validator’s OS is compromised, the attacker shouldn’t be able to extract private keys or falsify messages from inside the enclave. TEEs can enforce that bridge transactions are processed exactly according to the protocol rules, reducing the risk of human operators or malware injecting fraudulent transfers. Furthermore, TEEs can enable atomic swaps and cross-chain transactions to be handled in a secure enclave that either completes both sides or aborts cleanly, preventing scenarios where funds get stuck due to interference. Several bridge projects and consortiums have explored TEE-based security to mitigate the plague of bridge hacks that have occurred in recent years.

Data Integrity and Verifiability Off-Chain

In all the above scenarios, a recurring theme is that TEEs help maintain data integrity even outside the blockchain. Because a TEE can prove what code it is running (via attestation) and can ensure the code runs without interference, it provides a form of verifiable computing. Users and smart contracts can trust the results coming from a TEE as if they were computed on-chain, provided the attestation checks out. This integrity guarantee is why TEEs are sometimes referred to as bringing a “trust anchor” to off-chain data and computation.

However, it’s worth noting that this trust model shifts some assumptions to hardware (see §4). The data integrity is only as strong as the TEE’s security. If the enclave is compromised or the attestation is forged, the integrity could fail. Nonetheless, in practice TEEs (when kept up-to-date) make certain attacks significantly harder. For example, a DeFi lending platform could use a TEE to calculate credit scores from a user’s private data off-chain, and the smart contract would accept the score only if accompanied by a valid enclave attestation. This way, the contract knows the score was computed by the approved algorithm on real data, rather than trusting the user or an oracle blindly.

TEEs also play a role in emerging decentralized identity (DID) and authentication systems. They can securely manage private keys, personal data, and authentication processes in a way that the user’s sensitive information is never exposed to the blockchain or to dApp providers. For instance, a TEE on a mobile device could handle biometric authentication and sign a blockchain transaction if the biometric check passes, all without revealing the user’s biometrics. This provides both security and privacy in identity management – an essential component if Web3 is to handle things like passports, certificates, or KYC data in a user-sovereign way.

In summary, TEEs serve as a versatile tool in Web3: they enable confidentiality for on-chain logic, allow scaling via off-chain secure compute, protect integrity of oracles and bridges, and open up new uses (from private identity to compliant data sharing). Next, we’ll look at specific projects leveraging these capabilities.

3. Notable Web3 Projects Leveraging TEEs

A number of leading blockchain projects have built their core offerings around Trusted Execution Environments. Below we dive into a few notable ones, examining how each uses TEE technology and what unique value it adds:

Secret Network

Secret Network is a layer-1 blockchain (built on Cosmos SDK) that pioneered privacy-preserving smart contracts using TEEs. All validator nodes in Secret Network run Intel SGX enclaves, which execute the smart contract code so that contract state and inputs/outputs remain encrypted even to the node operators. This makes Secret one of the first privacy-first smart contract platforms – privacy isn’t an optional add-on, but a default feature of the network at the protocol level.

In Secret Network’s model, users submit encrypted transactions, which validators load into their SGX enclave for execution. The enclave decrypts the inputs, runs the contract (written in a modified CosmWasm runtime), and produces encrypted outputs that are written to the blockchain. Only users with the correct viewing key (or the contract itself with its internal key) can decrypt and view the actual data. This allows applications to use private data on-chain without revealing it publicly.
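
Here is a toy model of the viewing-key pattern, not Secret Network's actual implementation: balances exist only inside the enclave object, and reads are gated by a per-address key the enclave hands back to the user, who may then share it on a need-to-know basis.

```python
import hmac, secrets

class ConfidentialTokenEnclave:
    """Toy model of viewing-key-gated reads of private contract state (illustrative only)."""
    def __init__(self):
        self._balances = {}       # plaintext state exists only inside the enclave
        self._viewing_keys = {}   # address -> viewing key

    def transfer(self, sender: str, recipient: str, amount: int):
        self._balances[sender] = self._balances.get(sender, 0) - amount
        self._balances[recipient] = self._balances.get(recipient, 0) + amount
        # Chain observers learn that a transfer happened, not the amount or the balances.

    def create_viewing_key(self, address: str) -> str:
        key = secrets.token_hex(16)
        self._viewing_keys[address] = key
        return key                # returned to the user over an encrypted channel

    def query_balance(self, address: str, viewing_key: str) -> int:
        if not hmac.compare_digest(self._viewing_keys.get(address, ""), viewing_key):
            raise PermissionError("invalid viewing key")
        return self._balances.get(address, 0)

enclave = ConfidentialTokenEnclave()
enclave.transfer("alice", "bob", 25)
bobs_key = enclave.create_viewing_key("bob")
print(enclave.query_balance("bob", bobs_key))   # 25, readable only with Bob's key
```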

The network has demonstrated several novel use cases:

  • Secret DeFi: e.g., SecretSwap (an AMM) where users’ account balances and transaction amounts are private, mitigating front-running and protecting trading strategies. Liquidity providers and traders can operate without broadcasting their every move to competitors.
  • Secret Auctions: Auction contracts where bids are kept secret until the auction ends, preventing strategic behavior based on others’ bids.
  • Private Voting and Governance: Token holders can vote on proposals without revealing their vote choices, while the tally can still be verified – ensuring fair, intimidation-free governance.
  • Data marketplaces: Sensitive datasets can be transacted and used in computations without exposing the raw data to buyers or nodes.

Secret Network essentially incorporates TEEs at the protocol level to create a unique value proposition: it offers programmable privacy. The challenges they tackle include coordinating enclave attestation across a decentralized validator set and managing key distribution so contracts can decrypt inputs while keeping them secret from validators. By all accounts, Secret has proven the viability of TEE-powered confidentiality on a public blockchain, establishing itself as a leader in the space.

Oasis Network

Oasis Network is another layer-1 aimed at scalability and privacy, which extensively utilizes TEEs (Intel SGX) in its architecture. Oasis introduced an innovative design that separates consensus from computation into different layers called the Consensus Layer and ParaTime Layer. The Consensus Layer handles blockchain ordering and finality, while each ParaTime can be a runtime environment for smart contracts. Notably, Oasis’s Emerald ParaTime is an EVM-compatible environment, and Sapphire is a confidential EVM that uses TEEs to keep smart contract state private.

Oasis’s use of TEEs is focused on confidential computation at scale. By isolating the heavy computation in parallelizable ParaTimes (which can run on many nodes), they achieve high throughput, and by using TEEs within those ParaTime nodes, they ensure the computations can include sensitive data without revealing it. For example, an institution could run a credit scoring algorithm on Oasis by feeding private data into a confidential ParaTime – the data stays encrypted for the node (since it’s processed in the enclave), and only the score comes out. Meanwhile, the Oasis consensus just records the proof that the computation happened correctly.

Technically, Oasis added extra layers of security beyond vanilla SGX. They implemented a “layered root of trust”: using Intel’s SGX Quoting Enclave and a custom lightweight kernel to verify hardware trustworthiness and to sandbox the enclave’s system calls. This reduces the attack surface (by filtering which OS calls enclaves can make) and protects against certain known SGX attacks. Oasis also introduced features like durable enclaves (so enclaves can persist state across restarts) and secure logging to mitigate rollback attacks (where a node might try to replay an old enclave state). These innovations were described in their technical papers and are part of why Oasis is seen as a research-driven project in TEE-based blockchain computing.

From an ecosystem perspective, Oasis has positioned itself for things like private DeFi (allowing banks to participate without leaking customer data) and data tokenization (where individuals or companies can share data to AI models in a confidential manner and get compensated, all via the blockchain). They have also collaborated with enterprises on pilots (for example, working with BMW on data privacy, and others on medical research data sharing). Overall, Oasis Network showcases how combining TEEs with a scalable architecture can address both privacy and performance, making it a significant player in TEE-based Web3 solutions.

Sanders Network

Sanders Network is a decentralized cloud computing network in the Polkadot ecosystem that uses TEEs to provide confidential and high-performance compute services. It is a parachain on Polkadot, meaning it benefits from Polkadot’s security and interoperability, but it introduces its own novel runtime for off-chain computation in secure enclaves.

The core idea of Sanders is to maintain a large network of worker nodes (called Sanders miners) that execute tasks inside TEEs (specifically, Intel SGX so far) and produce verifiable results. These tasks can range from running segments of smart contracts to general-purpose computation requested by users. Because the workers run in SGX, Sanders ensures that the computations are done with confidentiality (input data is hidden from the worker operator) and integrity (the results come with an attestation). This effectively creates a trustless cloud where users can deploy workloads knowing the host cannot peek or tamper with them.

One can think of Sanders as analogous to Amazon EC2 or AWS Lambda, but decentralized: developers can deploy code to Sanders’s network and have it run on many SGX-enabled machines worldwide, paying with Sanders’s token for the service. Some highlighted use cases:

  • Web3 Analytics and AI: A project could analyze user data or run AI algorithms in Sanders enclaves, so that raw user data stays encrypted (protecting privacy) while only aggregated insights leave the enclave.
  • Game backends and Metaverse: Sanders can handle intensive game logic or virtual world simulations off-chain, sending only commitments or hashes to the blockchain, enabling richer gameplay without trust in any single server.
  • On-chain services: Sanders has built an off-chain computation platform called Sanders Cloud. For example, it can serve as a back-end for bots, decentralized web services, or even an off-chain orderbook that publishes trades to a DEX smart contract with TEE attestation.

Sanders emphasizes that it can scale confidential computing horizontally: need more capacity? Add more TEE worker nodes. This is unlike a single blockchain where compute capacity is limited by consensus. Thus Sanders opens possibilities for computationally intensive dApps that still want trustless security. Importantly, Sanders doesn’t rely purely on hardware trust; it is integrating with Polkadot’s consensus (e.g., staking and slashing for bad results) and even exploring a combination of TEE with zero-knowledge proofs (as mentioned, their upcoming L2 uses TEE to speed up execution and ZKP to verify it succinctly on Ethereum). This hybrid approach helps mitigate the risk of any single TEE compromise by adding crypto verification on top.

In summary, Sanders Network leverages TEEs to deliver a decentralized, confidential cloud for Web3, allowing off-chain computation with security guarantees. This unleashes a class of blockchain applications that need both heavy compute and data privacy, bridging the gap between on-chain and off-chain worlds.

iExec

iExec is a decentralized marketplace for cloud computing resources built on Ethereum. Unlike the previous three (which are their own chains or parachains), iExec operates as a layer-2 or off-chain network that coordinates with Ethereum smart contracts. TEEs (specifically Intel SGX) are a cornerstone of iExec’s approach to establish trust in off-chain computation.

The iExec network consists of worker nodes contributed by various providers. These workers can execute tasks requested by users (dApp developers, data providers, etc.). To ensure these off-chain computations are trustworthy, iExec introduced a “Trusted off-chain Computing” framework: tasks can be executed inside SGX enclaves, and the results come with an enclave signature that proves the task was executed correctly on a secure node. iExec partnered with Intel to launch this trusted computing feature and even joined the Confidential Computing Consortium to advance standards. Their consensus protocol, called Proof-of-Contribution (PoCo), aggregates votes/attestations from multiple workers when needed to reach consensus on the correct result. In many cases, a single enclave’s attestation might suffice if the code is deterministic and trust in SGX is high; for higher assurance, iExec can replicate tasks across several TEEs and use a consensus or majority vote.
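
The following is a simplified illustration of that replication idea, not the actual PoCo protocol: results from workers whose attestation failed are ignored, and a result is accepted only once enough attested workers report the same output hash.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class WorkerResult:
    worker_id: str
    attested: bool        # did the enclave attestation verify?
    result_hash: str      # hash of the task output the worker reported

def accept(results: list[WorkerResult], min_agreement: int) -> str:
    """Accept a result hash only if enough attested workers reported the same value."""
    tally = Counter(r.result_hash for r in results if r.attested)
    if not tally:
        raise RuntimeError("no attested results")
    best_hash, votes = tally.most_common(1)[0]
    if votes < min_agreement:
        raise RuntimeError(f"only {votes} attested workers agree; need {min_agreement}")
    return best_hash

results = [
    WorkerResult("w1", attested=True,  result_hash="0xabc"),
    WorkerResult("w2", attested=True,  result_hash="0xabc"),
    WorkerResult("w3", attested=False, result_hash="0xdef"),   # failed attestation: vote ignored
]
print(accept(results, min_agreement=2))   # "0xabc"
```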

iExec’s platform enables several interesting use cases:

  • Decentralized Oracle Computing: As mentioned earlier, iExec can work with Chainlink. A Chainlink node might fetch raw data, then hand it to an iExec SGX worker to perform a computation (e.g., a proprietary algorithm or an AI inference) on that data, and finally return a result on-chain. This expands what oracles can do beyond just relaying data – they can now provide computed services (like call an AI model or aggregate many sources) with TEE ensuring honesty.
  • AI and DePIN (Decentralized Physical Infrastructure Network): iExec is positioning as a trust layer for decentralized AI apps. For example, a dApp that uses a machine learning model can run the model in an enclave to protect both the model (if it’s proprietary) and the user data being fed in. In the context of DePIN (like distributed IoT networks), TEEs can be used on edge devices to trust sensor readings and computations on those readings.
  • Secure Data Monetization: Data providers can make their datasets available in iExec’s marketplace in encrypted form. Buyers can send their algorithms to run on the data inside a TEE (so the data provider’s raw data is never revealed, protecting their IP, and the algorithm’s details can also be hidden). The result of the computation is returned to the buyer, and appropriate payment to the data provider is handled via smart contracts. This scheme, often called secure data exchange, is facilitated by the confidentiality of TEEs.

Overall, iExec provides the glue between Ethereum smart contracts and secure off-chain execution. It demonstrates how TEE “workers” can be networked to form a decentralized cloud, complete with a marketplace (using iExec’s RLC token for payment) and consensus mechanisms. By leading the Enterprise Ethereum Alliance’s Trusted Compute working group and contributing to standards (like Hyperledger Avalon), iExec also drives broader adoption of TEEs in enterprise blockchain scenarios.

Other Projects and Ecosystems

Beyond the four above, there are a few other projects worth noting:

  • Integritee – another Polkadot parachain similar to Sanders (in fact, it spun out of the Energy Web Foundation’s TEE work). Integritee uses TEEs to create “parachain-as-a-service” for enterprises, combining on-chain and off-chain enclave processing.
  • Automata Network – a middleware protocol for Web3 privacy that leverages TEEs for private transactions, anonymous voting, and MEV-resistant transaction processing. Automata runs as an off-chain network providing services like a private RPC relay and was mentioned as using TEEs for things like shielded identity and gasless private transactions.
  • Hyperledger Sawtooth (PoET) – in the enterprise realm, Sawtooth introduced a consensus algorithm called Proof of Elapsed Time which relied on SGX. Each validator runs an enclave that waits for a random time and produces a proof; the one with the shortest wait “wins” the block, a fair lottery enforced by SGX (a toy simulation follows this list). While Sawtooth is not a Web3 project per se (more enterprise blockchain), it’s a creative use of TEEs for consensus.
  • Enterprise/Consortium Chains – Many enterprise blockchain solutions (e.g. ConsenSys Quorum, IBM Blockchain) incorporate TEEs to enable confidential consortium transactions, where only authorized nodes see certain data. For example, the Enterprise Ethereum Alliance’s Trusted Compute Framework (TCF) blueprint uses TEEs to execute private contracts off-chain and deliver merkle proofs on-chain.
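
As a toy simulation of the PoET lottery mentioned above: in Sawtooth the enclave proves that the wait actually elapsed, whereas here `random.expovariate` merely stands in for the enclave-enforced timer.

```python
import random

def poet_round(validators):
    """Each validator's enclave draws a random wait; the shortest wait wins the block (simulation only)."""
    waits = {v: random.expovariate(1.0) for v in validators}   # stand-in for enclave-enforced timers
    winner = min(waits, key=waits.get)
    return winner, waits[winner]

winner, wait = poet_round(["val-1", "val-2", "val-3", "val-4"])
print(f"{winner} produces the next block after waiting {wait:.2f}s")
```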

These projects collectively show the versatility of TEEs: they power entire privacy-focused L1s, serve as off-chain networks, secure pieces of infrastructure like oracles and bridges, and even underpin consensus algorithms. Next, we consider the broader benefits and challenges of using TEEs in decentralized settings.

4. Benefits and Challenges of TEEs in Decentralized Environments

Adopting Trusted Execution Environments in blockchain systems comes with significant technical benefits as well as notable challenges and trade-offs. We will examine both sides: what TEEs offer to decentralized applications and what problems or risks arise from their use.

Benefits and Technical Strengths

  • Strong Security & Privacy: The foremost benefit is the confidentiality and integrity guarantees. TEEs allow sensitive code to run with assurance it won’t be spied on or altered by outside malware. This provides a level of trust in off-chain computation that was previously unavailable. For blockchain, this means private data can be utilized (enhancing functionality of dApps) without sacrificing security. Even in untrusted environments (cloud servers, validator nodes run by third parties), TEEs keep secrets safe. This is especially beneficial for managing private keys, user data, and proprietary algorithms within crypto systems. For example, a hardware wallet or a cloud signing service might use a TEE to sign blockchain transactions internally so the private key is never exposed in plaintext, combining convenience with security.

  • Near-Native Performance: Unlike purely cryptographic approaches to secure computation (like ZK proofs or homomorphic encryption), TEE overhead is relatively small. Code runs directly on the CPU, so a computation inside an enclave is roughly as fast as running outside (with some overhead for enclave transitions and memory encryption, often single-digit percentage slowdowns for compute-bound SGX workloads). This means TEEs can handle compute-intensive tasks efficiently, enabling use cases (like real-time data feeds, complex smart contracts, machine learning) that would be orders of magnitude slower if done with cryptographic protocols. The low latency of enclaves makes them suitable where fast response is needed (e.g. high-frequency trading bots secured by TEEs, or interactive applications and games where user experience would suffer with high delays).

  • Improved Scalability (via Offload): By allowing heavy computations to be done off-chain securely, TEEs help alleviate congestion and gas costs on main chains. They enable Layer-2 designs and side protocols where the blockchain is used only for verification or final settlement, while the bulk of computation happens in parallel enclaves. This modularization (compute-intensive logic in TEEs, consensus on chain) can drastically improve throughput and scalability of decentralized apps. For instance, a DEX could do match-making in a TEE off-chain and only post matched trades on-chain, increasing throughput and reducing on-chain gas (a toy matching sketch follows this list).

  • Better User Experience & Functionality: With TEEs, dApps can offer features like confidentiality or complex analytics that attract more users (including institutions). TEEs also enable gasless or meta-transactions by safely executing them off-chain and then submitting results, as noted in Automata’s use of TEEs to reduce gas for private transactions. Additionally, storing sensitive state off-chain in an enclave can reduce the data published on-chain, which is good for user privacy and network efficiency (less on-chain data to store/verify).

  • Composability with Other Tech: Interestingly, TEEs can complement other technologies (not strictly a benefit inherent to TEEs alone, but in combination). They can serve as the glue that holds together hybrid solutions: e.g., running a program in an enclave and also generating a ZK proof of its execution, where the enclave helps with parts of the proving process to speed it up. Or using TEEs in MPC networks to handle certain tasks with fewer rounds of communication. We’ll discuss comparisons in §5, but many projects highlight that TEEs don’t have to replace cryptography – they can work alongside to bolster security (Sanders’s mantra: “TEE’s strength lies in supporting others, not replacing them”).
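
A minimal sketch of the order-matching example referenced in the list above: the full order book stays inside the enclave, and only the matched trades (or a hash of them) would be posted on-chain for settlement. The matching rule here is a deliberately simple price-time cross, purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    side: str      # "buy" or "sell"
    price: float
    qty: float

def match_in_enclave(orders: list[Order]):
    """Runs inside the enclave: pending orders stay hidden; only matched trades leave for on-chain settlement."""
    buys  = sorted((o for o in orders if o.side == "buy"),  key=lambda o: -o.price)
    sells = sorted((o for o in orders if o.side == "sell"), key=lambda o: o.price)
    trades = []
    while buys and sells and buys[0].price >= sells[0].price:
        b, s = buys[0], sells[0]
        qty = min(b.qty, s.qty)
        trades.append((b.trader, s.trader, s.price, qty))
        b.qty -= qty
        s.qty -= qty
        if b.qty == 0:
            buys.pop(0)
        if s.qty == 0:
            sells.pop(0)
    return trades   # only this output (or its hash) is posted on-chain

book = [Order("alice", "buy", 101.0, 2), Order("bob", "sell", 100.5, 1.5), Order("carol", "sell", 102.0, 3)]
print(match_in_enclave(book))   # [('alice', 'bob', 100.5, 1.5)]
```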

Trust Assumptions and Security Vulnerabilities

Despite their strengths, TEEs introduce specific trust assumptions and are not invulnerable. It’s crucial to understand these challenges:

  • Hardware Trust and Centralization: By using TEEs, one is inherently placing trust in the silicon vendor and the security of their hardware design and supply chain. For example, using Intel SGX means trusting that Intel has no backdoors, that their manufacturing is secure, and that the CPU’s microcode correctly implements enclave isolation. This is a more centralized trust model compared to pure cryptography (which relies on math assumptions distributed among all users). Moreover, attestation for SGX historically relies on contacting Intel’s Attestation Service, meaning if Intel went offline or decided to revoke keys, enclaves globally could be affected. This dependency on a single company’s infrastructure raises concerns: it could be a single point of failure or even a target of government regulation (e.g., U.S. export controls could in theory restrict who can use strong TEEs). AMD SEV mitigates this by allowing more decentralized attestation (VM owners can attest their VMs), but still trust AMD’s chip and firmware. The centralization risk is often cited as somewhat antithetical to blockchain’s decentralization. Projects like Keystone (open-source TEE) and others are researching ways to reduce reliance on proprietary black boxes, but these are not yet mainstream.

  • Side-Channel and Other Vulnerabilities: A TEE is not a magic bullet; it can be attacked through indirect means. Side-channel attacks exploit the fact that even if direct memory access is blocked, an enclave’s operation might subtly influence the system (through timing, cache usage, power consumption, electromagnetic emissions, etc.). Over the past few years, numerous academic attacks on Intel SGX have been demonstrated: from Foreshadow (extracting enclave secrets via L1 cache timing leakage) to Plundervolt (voltage fault injection via privileged instructions) to SGAxe (extracting attestation keys), among others. These sophisticated attacks show that TEEs can be compromised without needing to break cryptographic protections – instead, by exploiting microarchitectural behaviors or flaws in the implementation. As a result, it’s acknowledged that “researchers have identified various potential attack vectors that could exploit hardware vulnerabilities or timing differences in TEE operations”. While these attacks are non-trivial and typically require a privileged local attacker or physical access, they are a real threat. TEEs also generally do not protect against physical attacks if an adversary has the chip in hand (e.g., decapping the chip, probing buses, etc. can defeat most commercial TEEs).

    The vendor responses to side-channel discoveries have been microcode patches and enclave SDK updates to mitigate known leaks (sometimes at cost of performance). But it remains a cat-and-mouse game. For Web3, this means if someone finds a new side-channel on SGX, a “secure” DeFi contract running in SGX could potentially be exploited (e.g., to leak secret data or manipulate execution). So, relying on TEEs means accepting a potential vulnerability surface at the hardware level that is outside the typical blockchain threat model. It’s an active area of research to strengthen TEEs against these (for instance, by designing enclave code with constant-time operations, avoiding secret-dependent memory access patterns, and using techniques like oblivious RAM). Some projects also augment TEEs with secondary checks – e.g. combining with ZK proofs, or having multiple enclaves run on different hardware vendors to reduce single-chip risk.

  • Performance and Resource Constraints: Although TEEs run at near-native speed for CPU-bound tasks, they do come with some overheads and limits. Switching into an enclave (an ECALL) and out (OCALL) has a cost, as does the encryption/decryption of memory pages. This can impact performance for very frequent enclave boundary crossings. Enclaves also often have memory size limitations. For example, early SGX had a limited Enclave Page Cache and when enclaves used more memory, pages had to be swapped (with encryption) which massively slowed performance. Even newer TEEs often don’t allow using all system RAM easily – there’s a secure memory region that might be capped. This means very large-scale computations or data sets could be challenging to handle entirely inside a TEE. In Web3 contexts, this might limit the complexity of smart contracts or ML models that can run in an enclave. Developers have to optimize for memory and possibly split workloads.

  • Complexity of Attestation and Key Management: Using TEEs in a decentralized setting requires robust attestation workflows: each node needs to prove to others that it’s running an authentic enclave with expected code. Setting up this attestation verification on-chain can be complex. It usually involves hard-coding the vendor’s public attestation key or certificate into the protocol and writing verification logic into smart contracts or off-chain clients. This introduces overhead in protocol design, and any changes (like Intel’s migration from EPID-based attestation to DCAP) can cause maintenance burdens. Additionally, managing keys within TEEs (for decrypting data or signing results) adds another layer of complexity. Mistakes in enclave key management could undermine security (e.g., if an enclave inadvertently exposes a decryption key through a bug, all its confidentiality promises collapse). Best practices involve using the TEE’s sealing APIs to securely store keys and rotating keys if needed, but again this requires careful design by developers (a toy sealing sketch follows this list).

  • Denial-of-Service and Availability: A perhaps less-discussed issue: TEEs do not help with availability and can even introduce new DoS avenues. For instance, an attacker might flood a TEE-based service with inputs that are costly to process, knowing that the enclave can’t be easily inspected or interrupted by the operator (since it’s isolated). Also, if a vulnerability is found and a patch requires firmware updates, during that cycle many enclave services might have to pause (for security) until nodes are patched, causing downtime. In blockchain consensus, imagine if a critical SGX bug was found – networks like Secret might have to halt until a fix, since trust in the enclaves would be broken. Coordination of such responses in a decentralized network is challenging.
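
To illustrate the sealing-policy point referenced in the attestation and key management item above, here is a toy Python analogue: the device root secret and the MRENCLAVE/MRSIGNER-style policies are modeled with plain HMAC rather than the real SGX sealing API, and the values are made up for demonstration.

```python
import hashlib, hmac

DEVICE_ROOT_SECRET = b"fused-into-the-cpu-at-manufacture"   # illustrative: never leaves the hardware

def derive_sealing_key(enclave_measurement: bytes, policy: str) -> bytes:
    """Toy analogue of SGX sealing: the key depends on the device secret and, optionally, the enclave identity."""
    info = policy.encode() + b"|" + (enclave_measurement if policy == "MRENCLAVE" else b"")
    return hmac.new(DEVICE_ROOT_SECRET, info, hashlib.sha256).digest()

mrenclave_v1 = hashlib.sha256(b"enclave code v1").digest()
mrenclave_v2 = hashlib.sha256(b"enclave code v2").digest()

# Sealed to MRENCLAVE: only the exact same enclave build can re-derive the key.
print(derive_sealing_key(mrenclave_v1, "MRENCLAVE") == derive_sealing_key(mrenclave_v2, "MRENCLAVE"))  # False

# Sealed to the signer (MRSIGNER-style policy): upgraded enclaves from the same vendor keep access.
print(derive_sealing_key(mrenclave_v1, "MRSIGNER") == derive_sealing_key(mrenclave_v2, "MRSIGNER"))    # True
```

The design choice this illustrates is the trade-off the text describes: seal to the exact build and a legitimate upgrade loses access to old data; seal to the signer and a compromised future build from the same vendor can read it.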

Composability and Ecosystem Limitations

  • Limited Composability with Other Contracts: In a public smart contract platform like Ethereum, contracts can easily call other contracts and all state is in the open, enabling DeFi money legos and rich composition. In a TEE-based contract model, private state cannot be freely shared or composed without breaking confidentiality. For example, if Contract A in an enclave needs to interact with Contract B, and both hold some secret data, how do they collaborate? Either they must do a complex secure multi-party protocol (which negates some simplicity of TEEs) or they combine into one enclave (reducing modularity). This is a challenge that Secret Network and others face: cross-contract calls with privacy are non-trivial. Some solutions involve having a single enclave handle multiple contracts’ execution so it can internally manage shared secrets, but that can make the system more monolithic. Thus, composability of private contracts is more limited than public ones, or requires new design patterns. Similarly, integrating TEE-based modules into existing blockchain dApps requires careful interface design – often only the result of an enclave is posted on-chain, which might be a SNARK proof or a hash, and other contracts can only use that limited information. This is certainly a trade-off; projects like Secret provide viewing keys and permit sharing of secrets on a need-to-know basis, but it’s not as seamless as the normal on-chain composability.

  • Standardization and Interoperability: The TEE ecosystem currently lacks unified standards across vendors. Intel SGX, AMD SEV, ARM TrustZone all have different programming models and attestation methods. This fragmentation means a dApp written for SGX enclaves isn’t trivially portable to TrustZone, etc. In blockchain, this can tie a project to a specific hardware (e.g., Secret and Oasis are tied to x86 servers with SGX right now). If down the line those want to support ARM nodes (say, validators on mobile), it would require additional development and perhaps different attestation verification logic. There are efforts (like the CCC – Confidential Computing Consortium) to standardize attestation and enclave APIs, but we’re not fully there yet. Lack of standards also affects developer tooling – one might find the SGX SDK mature but then need to adapt to another TEE with a different SDK. This interoperability challenge can slow adoption and increase costs.

  • Developer Learning Curve: Building applications that run inside TEEs requires specialized knowledge that many blockchain developers may not have. Low-level C/C++ programming (for SGX/TrustZone) or understanding of memory safety and side-channel-resistant coding is often needed. Debugging enclave code is infamously tricky (you can’t easily see inside an enclave while it’s running for security reasons!). Although frameworks and higher-level languages (like Oasis’s use of Rust for their confidential runtime, or even tools to run WebAssembly in enclaves) exist, the developer experience is still rougher than typical smart contract development or off-chain web2 development. This steep learning curve and immature tooling can deter developers or lead to mistakes if not handled carefully. There’s also the aspect of needing hardware to test on – running SGX code needs an SGX-enabled CPU or an emulator (which is slower), so the barrier to entry is higher. As a result, relatively few devs today are deeply familiar with enclave development, making audits and community support more scarce than in, say, the well-trodden Solidity community.

  • Operational Costs: Running a TEE-based infrastructure can be more costly. The hardware itself might be more expensive or scarce (e.g., certain cloud providers charge premium for SGX-capable VMs). There’s also overhead in operations: keeping firmware up-to-date (for security patches), managing attestation networking, etc., which small projects might find burdensome. If every node must have a certain CPU, it could reduce the potential validator pool (not everyone has the required hardware), thus affecting decentralization and possibly leading to higher cloud hosting usage.

In summary, while TEEs unlock powerful features, they also bring trust trade-offs (hardware trust vs. math trust), potential security weaknesses (especially side-channels), and integration hurdles in a decentralized context. Projects using TEEs must carefully engineer around these issues – employing defense-in-depth (don’t assume the TEE is unbreakable), keeping the trusted computing base minimal, and being transparent about the trust assumptions to users (so it’s clear, for instance, that one is trusting Intel’s hardware in addition to the blockchain consensus).

5. TEEs vs. Other Privacy-Preserving Technologies (ZKP, FHE, MPC)

Trusted Execution Environments are one approach to achieving privacy and security in Web3, but there are other major techniques including Zero-Knowledge Proofs (ZKPs), Fully Homomorphic Encryption (FHE), and Secure Multi-Party Computation (MPC). Each of these technologies has a different trust model and performance profile. In many cases, they are not mutually exclusive – they can complement each other – but it’s useful to compare their trade-offs in performance, trust, and developer usability:

To briefly define the alternatives:

  • ZKPs: Cryptographic proofs (like zk-SNARKs, zk-STARKs) that allow one party to prove to others that a statement is true (e.g. “I know a secret that satisfies this computation”) without revealing why it’s true (hiding the secret input). In blockchain, ZKPs are used for private transactions (e.g. Zcash, Aztec) and for scalability (rollups that post proofs of correct execution). They ensure strong privacy (no secret data is leaked, only proofs) and integrity guaranteed by math, but generating these proofs can be computationally heavy and the circuits must be designed carefully.
  • FHE: Encryption scheme that allows arbitrary computation on encrypted data, so that the result, when decrypted, matches the result of computing on plaintexts. In theory, FHE provides ultimate privacy – data stays encrypted at all times – and you don’t need to trust anyone with the raw data. But FHE is extremely slow for general computations (though it’s improving with research); it's still mostly in experimental or specialized use due to performance.
  • MPC: Protocols where multiple parties jointly compute a function over their private inputs without revealing those inputs to each other. It often involves secret-sharing data among parties and performing cryptographic operations so that the output is correct but individual inputs remain hidden. MPC can distribute trust (no single point sees all data) and can be efficient for certain operations, but typically incurs a communication and coordination overhead and can be complex to implement for large networks.

Below is a comparison table summarizing key differences:

| Technology | Trust Model | Performance | Data Privacy | Developer Usability |
| --- | --- | --- | --- | --- |
| **TEE** (Intel SGX, etc.) | Trust in the hardware manufacturer (plus a centralized attestation service in some cases). Assumes the chip is secure; if the hardware is compromised, security is broken. | Near-native execution speed; minimal overhead. Good for real-time computation and large workloads. Scalability limited by the availability of TEE-enabled nodes. | Data is in plaintext inside the enclave, but encrypted to the outside world. Strong confidentiality if the hardware holds; if the enclave is breached, secrets are exposed (no additional mathematical protection). | Moderate complexity. Can often reuse existing code/languages (C, Rust) and run them in an enclave with minor modifications. Lowest entry barrier among these – no need to learn advanced cryptography – but requires systems programming and TEE-specific SDK knowledge. |
| **ZKP** (zk-SNARK/STARK) | Trust in mathematical assumptions (e.g. hardness of cryptographic problems) and sometimes a trusted setup (for SNARKs). No reliance on any single party at run-time. | Proof generation is computationally heavy (especially for complex programs), often orders of magnitude slower than native. Verification on-chain is fast (a few ms). Not ideal for large-data computations due to proving time. Good for succinct verification (rollups), but the prover is the bottleneck. | Very strong privacy – can prove correctness without revealing any private input. Only minimal information (like proof size) leaks. Ideal for financial privacy, etc. | High complexity. Requires learning specialized languages (circuits, zkDSLs like Circom or Noir) and thinking in terms of arithmetic circuits. Debugging is hard. Fewer experts available. |
| **FHE** | Trust in math (lattice problems). No trusted party; security holds as long as the encryption isn’t broken. | Very slow for general use. Operations on encrypted data are several orders of magnitude slower than on plaintext. Improving with hardware and better algorithms, but currently impractical for real-time use in blockchain contexts. | Ultimate privacy – data remains encrypted the entire time, even during computation. Ideal for sensitive data (e.g. medical, cross-institution analytics) if performance allowed. | Very specialized. Developers need a cryptography background. Some libraries (like Microsoft SEAL, TFHE) exist, but writing arbitrary programs in FHE is difficult and circuitous. Not yet a routine development target for dApps. |
| **MPC** | Trust distributed among multiple parties. Assumes a threshold of parties are honest (no collusion beyond a certain number). No hardware trust needed. Trust fails if too many collude. | Typically slower than native due to communication rounds, but often faster than FHE. Performance varies: simple operations (add, multiply) can be efficient; complex logic may blow up in communication cost. Latency is sensitive to network speed. Scalability can be improved with sharding or partial trust assumptions. | Strong privacy if the assumptions hold – no single node sees the whole input. Some information can leak via the output or if parties drop out (and it lacks ZK’s succinctness – you get the result but no easily shareable proof without re-running the protocol). | High complexity. Requires designing a custom protocol for each use case or using frameworks (like SPDZ, or Partisia’s offering). Developers must reason about cryptographic protocols and often coordinate deployment of multiple nodes. Integration into blockchain apps can be complex (off-chain rounds needed). |

Citations: The above comparison draws on sources such as Sanders Network’s analysis and others, which highlight that TEEs excel in speed and ease-of-use, whereas ZK and FHE focus on maximal trustlessness at the cost of heavy computation, and MPC distributes trust but introduces network overhead.

From the table, a few key trade-offs become clear:

  • Performance: TEEs have a big advantage in raw speed and low latency. MPC can often handle moderate complexity with some slowdown, ZK is slow to produce but fast to verify (asynchronous usage), and FHE is currently the slowest by far for arbitrary tasks (though fine for limited operations like simple additions/multiplications). If your application needs real-time complex processing (like interactive applications, high-frequency decisions), TEEs or perhaps MPC (with few parties on good connections) are the only viable options today. ZK and FHE would be too slow in such scenarios.

  • Trust Model: ZKP and FHE are purely trustless (only trust math). MPC shifts trust to assumptions about participant honesty (which can be bolstered by having many parties or economic incentives). TEE places trust in hardware and the vendor. This is a fundamental difference: TEEs introduce a trusted third party (the chip) into the usually trustless world of blockchain. In contrast, ZK and FHE are often praised for aligning better with the decentralized ethos – no special entities to trust, just computational hardness. MPC sits in between: trust is decentralized but not eliminated (if N out of M nodes collude, privacy breaks). So for maximal trustlessness (e.g., a truly censorship-resistant, decentralized system), one might lean toward cryptographic solutions. On the other hand, many practical systems are comfortable assuming Intel is honest or that a set of major validators won’t collude, trading a bit of trust for huge gains in efficiency.

  • Security/Vulnerabilities: TEEs, as discussed, can be undermined by hardware bugs or side-channels. ZK and FHE security can be undermined if the underlying math (say, elliptic curve or lattice problem) is broken, but those are well-studied problems and attacks would likely be noticed (also, parameter choices can mitigate known risks). MPC’s security can be broken by active adversaries if the protocol isn’t designed for that (some MPC protocols assume “honest but curious” participants and might fail if someone outright cheats). In blockchain context, a TEE breach might be more catastrophic (all enclave-based contracts could be at risk until patched) whereas a ZK cryptographic break (like discovering a flaw in a hash function used by a ZK rollup) could also be catastrophic but is generally considered less likely given the simpler assumption. The surface of attack is very different: TEEs have to worry about things like power analysis, while ZK has to worry about mathematical breakthroughs.

  • Data Privacy: FHE and ZK offer the strongest privacy guarantees – data remains cryptographically protected. MPC ensures data is secret-shared, so no single party sees it (though some info could leak if outputs are public or if protocols are not carefully designed). TEE keeps data private from the outside, but inside the enclave data is decrypted; if someone somehow gains control of the enclave, the data confidentiality is lost. Also, TEEs typically allow the code to do anything with the data (including inadvertently leaking it through side-channels or network if the code is malicious). So TEEs require that you also trust the enclave code not just the hardware. In contrast, ZKPs prove properties of the code without ever revealing secrets, so you don’t even have to trust the code (beyond it actually having the property proven). If an enclave application had a bug that leaked data to a log file, the TEE hardware wouldn’t prevent that – whereas a ZK proof system simply wouldn’t reveal anything except the intended proof. This is a nuance: TEEs protect against external adversaries, but not necessarily logic bugs in the enclave program itself, whereas ZK’s design forces a more declarative approach (you prove exactly what is intended and nothing more).

  • Composability & Integration: TEEs integrate fairly easily into existing systems – you can take an existing program, put it into an enclave, and get some security benefits without changing the programming model too much. ZK and FHE often require rewriting the program into a circuit or restrictive form, which can be a massive effort. For instance, writing a simple AI model verification in ZK involves transforming it to a series of arithmetic ops and constraints, which is a far cry from just running TensorFlow in a TEE and attesting the result. MPC similarly may require custom protocol per use case. So from a developer productivity and cost standpoint, TEEs are attractive. We’ve seen adoption of TEEs quicker in some areas precisely because you can leverage existing software ecosystems (many libraries run in enclaves with minor tweaks). ZK/MPC require specialized engineering talent which is scarce. However, the flip side is that TEEs yield a solution that is often more siloed (you have to trust that enclave or that set of nodes), whereas ZK gives you a proof anyone can check on-chain, making it highly composable (any contract can verify a zk proof). So ZK results are portable – they produce a small proof that any number of other contracts or users can use to gain trust. TEE results usually come in the form of an attestation tied to a particular hardware and possibly not succinct; they may not be as easily shareable or chain-agnostic (though you can post a signature of the result and have contracts programmed to accept that if they know the public key of the enclave).

In practice, we are seeing hybrid approaches: for example, Sanders Network argues that TEE, MPC, and ZK each shine in different areas and can complement each other. A concrete case is decentralized identity: one might use ZK proofs to prove an identity credential without revealing it, but that credential might have been verified and issued by a TEE-based process that checked your documents privately. Or consider scaling: ZK rollups provide succinct proofs for lots of transactions, but generating those proofs could be sped up by using TEEs to do some computations faster (and then only proving a smaller statement). The combination can sometimes reduce the trust requirement on TEEs (e.g., use TEEs for performance, but still verify final correctness via a ZK proof or via an on-chain challenge game so that a compromised TEE can’t cheat without being caught). Meanwhile, MPC can be combined with TEEs by having each party’s compute node be a TEE, adding an extra layer so that even if some parties collude, they still cannot see each other’s data unless they also break hardware security.

In summary, TEEs offer a very practical and immediate path to secure computation with modest assumptions (hardware trust), whereas ZK and FHE offer a more theoretical and trustless path but at high computational cost, and MPC offers a distributed trust path with network costs. The right choice in Web3 depends on the application requirements:

  • If you need fast, complex computation on private data (like AI, large data sets) – TEEs (or MPC with few parties) are currently the only feasible way.
  • If you need maximum decentralization and verifiability – ZK proofs shine (for example, private cryptocurrency transactions favor ZKP as in Zcash, because users don’t want to trust anything but math).
  • If you need collaborative computing among multiple stakeholders – MPC is naturally suited (like multi-party key management or auctions).
  • If you have extremely sensitive data and long-term privacy is a must – FHE could be appealing if performance improves, because even if someone got your ciphertexts years later, without the key they learn nothing; whereas an enclave compromise could leak secrets retroactively if logs were kept.

It’s worth noting that the blockchain space is actively exploring all these technologies in parallel. We’re likely to see combinations: e.g., Layer 2 solutions integrating TEEs for sequencing transactions and then using a ZKP to prove the TEE followed the rules (a concept being explored in some Ethereum research), or MPC networks that use TEEs in each node to reduce the complexity of the MPC protocols (since each node is internally secure and can simulate multiple parties).

Ultimately, TEEs vs ZK vs MPC vs FHE is not a zero-sum choice – they each target different points in the triangle of security, performance, and trustlessness. As one article put it, all four face an "impossible triangle" of performance, cost, and security – no single solution is superior in all aspects. The optimal design often uses the right tool for the right part of the problem.

6. Adoption Across Major Blockchain Ecosystems

Trusted Execution Environments have seen varying levels of adoption in different blockchain ecosystems, often influenced by the priorities of those communities and the ease of integration. Here we evaluate how TEEs are being used (or explored) in some of the major ecosystems: Ethereum, Cosmos, and Polkadot, as well as touch on others.

Ethereum (and General Layer-1s)

On Ethereum mainnet itself, TEEs are not part of the core protocol, but they have been used in applications and Layer-2s. Ethereum’s philosophy leans on cryptographic security (e.g., emerging ZK-rollups), but TEEs have found roles in oracles and off-chain execution for Ethereum:

  • Oracle Services: As discussed, Chainlink has incorporated TEE-based solutions like Town Crier. While not all Chainlink nodes use TEEs by default, the technology is there for data feeds requiring extra trust. Also, API3 (another oracle project) has mentioned using Intel SGX to run APIs and sign data to ensure authenticity. These services feed data to Ethereum contracts with stronger assurances.

  • Layer-2 and Rollups: There’s ongoing research and debate in the Ethereum community about using TEEs in rollup sequencers or validators. For example, ConsenSys’ “ZK-Portal” concept and others have floated using TEEs to enforce correct ordering in optimistic rollups or to protect the sequencer from censorship. Some published analyses even suggest that TEEs could become a default feature in some L2s for things like high-frequency trading protection. Projects like Catalyst (a high-frequency trading DEX) and Flashbots (for MEV relays) have looked at TEEs to enforce fair ordering of transactions before they hit the blockchain.

  • Enterprise Ethereum: In consortium or permissioned Ethereum networks, TEEs are more widely adopted. The Enterprise Ethereum Alliance’s Trusted Compute Framework (TCF) was basically a blueprint for integrating TEEs into Ethereum clients. Hyperledger Avalon (formerly EEA TCF) allows parts of Ethereum smart contracts to be executed off-chain in a TEE and then verified on-chain. Several companies like IBM, Microsoft, and iExec contributed to this. While on public Ethereum this hasn’t become common, in private deployments (e.g., a group of banks using Quorum or Besu), TEEs can be used so that even consortium members don’t see each other’s data, only authorized results. This can satisfy privacy requirements in an enterprise setting.

  • Notable Projects: Aside from iExec, which operates on Ethereum, there were projects like Enigma (which originally started as an MPC project at MIT, then pivoted to using SGX; it later became Secret Network on Cosmos). Another was Decentralized Cloud Services (DCS) in early Ethereum discussions. More recently, the Oasis Ethereum ParaTime allows Solidity contracts to run with confidentiality by using Oasis’s TEE backend while settling on Ethereum. Some Ethereum-based dApps in areas like medical data sharing and gaming have also experimented with TEEs by having an off-chain enclave component interact with their contracts.

So Ethereum’s adoption is somewhat indirect – it hasn’t changed the protocol to require TEEs, but it has a rich set of optional services and extensions leveraging TEEs for those who need them. Importantly, Ethereum researchers remain cautious: proposals to make a “TEE-only shard” or to deeply integrate TEEs have met community skepticism due to trust concerns. Instead, TEEs are seen as “co-processors” to Ethereum rather than core components.

Cosmos Ecosystem

The Cosmos ecosystem is friendly to experimentation via its modular SDK and sovereign chains, and Secret Network (covered above) is a prime example of TEE adoption in Cosmos. Secret Network is actually a Cosmos SDK chain with Tendermint consensus, modified to mandate SGX in its validators. It’s one of the most prominent Cosmos zones after the main Cosmos Hub, indicating significant adoption of TEE tech in that community. The success of Secret in providing interchain privacy (through its IBC connections, Secret can serve as a privacy hub for other Cosmos chains) is a noteworthy case of TEE integration at L1.

Another Cosmos-related project is Oasis Network (though not built on the Cosmos SDK, it was designed by some of the same people who contributed to Tendermint and shares a similar ethos of modular architecture). Oasis is standalone but can connect to Cosmos via bridges, etc. Both Secret and Oasis show that in Cosmos-land, the idea of “privacy as a feature” via TEEs gained enough traction to warrant dedicated networks.

Cosmos even has a concept of “privacy providers” for interchain applications – e.g., an app on one chain can call a contract on Secret Network via IBC to perform a confidential computation, then get the result back. This composability is emerging now.

Additionally, the Anoma project (not strictly Cosmos, but related in the interoperability sense) has talked about using TEEs for intent-centric architectures, though it’s more theoretical.

In short, Cosmos has at least one major chain fully embracing TEEs (Secret) and others interacting with it, illustrating a healthy adoption in that sphere. The modularity of Cosmos could allow more such chains (for example, one could imagine a Cosmos zone specializing in TEE-based oracles or identity).

Polkadot and Substrate

Polkadot’s design allows parachains to specialize, and indeed Polkadot hosts multiple parachains that use TEEs:

  • Sanders Network: Already described; a parachain offering a TEE-based compute cloud. Sanders has been live as a parachain, providing services to other chains through XCMP (cross-chain message passing). For instance, another Polkadot project can offload a confidential task to Sanders’s workers and get a proof or result back. Sanders’s native token economics incentivize running TEE nodes, and it has a sizable community, signaling strong adoption.
  • Integritee: Another parachain focusing on enterprise and data privacy solutions using TEEs. Integritee allows teams to deploy their own private side-chains (called Teewasms) where the execution is done in enclaves. It’s targeting use cases like confidential data processing for corporations that still want to anchor to Polkadot security.
  • Other explorations: There were ideas about using TEEs for decentralized storage or random beacons in some Polkadot-related projects. For example, Crust Network (decentralized storage) originally planned a TEE-based proof-of-storage (though it later moved to another design), and the randomness-focused Entropy project considered TEEs versus VRFs.

Polkadot’s reliance on on-chain governance and upgrades means parachains can incorporate new tech rapidly. Both Sanders and Integritee have gone through upgrades to improve their TEE integration (like supporting new SGX features or refining attestation methods). The Web3 Foundation also funded earlier efforts on Substrate-based TEE projects like SubstraTEE (an early prototype that showed off-chain contract execution in TEEs with on-chain verification).

The Polkadot ecosystem thus shows multiple, independent teams betting on TEE tech, indicating a positive adoption trend. It’s becoming a selling point for Polkadot that “if you need confidential smart contracts or off-chain compute, we have parachains for that”.

Other Ecosystems and General Adoption

  • Enterprise and Consortia: Outside public crypto, Hyperledger and enterprise chains have steadily adopted TEEs for permissioned settings. For instance, the Basel Committee tested a TEE-based trade finance blockchain. The general pattern is: where privacy or data confidentiality is a must, and participants are known (so they might even collectively invest in hardware secure modules), TEEs find a comfortable home. These may not make headlines in crypto news, but in sectors like supply chain, banking consortia, or healthcare data-sharing networks, TEEs are often the go-to (as an alternative to just trusting a third party or using heavy cryptography).

  • Layer-1s outside Ethereum: Some newer L1s have dabbled with TEEs. NEAR Protocol had an early concept of a TEE-based shard for private contracts (not implemented yet). Celo considered TEEs for light client proofs (their Plumo proofs now rely on snarks, but they looked at SGX to compress chain data for mobile at one point). Concordium, a regulated privacy L1, uses ZK for anonymity but also explores TEEs for identity verification. Dfinity/Internet Computer uses secure enclaves in its node machines, but for bootstrapping trust (not for contract execution, as their “Chain Key” cryptography handles that).

  • Bitcoin: While Bitcoin itself does not use TEEs, there have been side projects. For example, TEE-based custody solutions (like vault systems) for Bitcoin keys, or certain proposals in DLCs (Discreet Log Contracts) to use oracles that might be TEE-secured. Generally, the Bitcoin community is more conservative and would not easily trust Intel as part of consensus, but as ancillary tech (hardware wallets with secure elements) TEEs are already accepted.

  • Regulators and Governments: An interesting facet of adoption: some CBDC (central bank digital currency) research has looked at TEEs to enforce privacy while allowing auditability. For instance, the Bank of France ran experiments where they used a TEE to handle certain compliance checks on otherwise private transactions. This shows that even regulators see TEEs as a way to balance privacy with oversight – you could have a CBDC where transactions are encrypted to the public but a regulator enclave can review them under certain conditions (this is hypothetical, but discussed in policy circles).

  • Adoption Metrics: It’s hard to quantify adoption, but we can look at indicators like the number of projects, funds invested, and availability of infrastructure. On that front, today (2025) we have several public networks explicitly using TEEs (Secret, Oasis, Sanders, and Integritee, plus Automata as an off-chain protocol); major oracle networks incorporating the technology; and large tech companies backing confidential computing (Microsoft Azure and Google Cloud offer TEE VMs, and these services are being used by blockchain nodes as options). The Confidential Computing Consortium now includes blockchain-focused members (Ethereum Foundation, Chainlink, Fortanix, etc.), showing cross-industry collaboration. These all point to a growing but niche adoption – TEEs aren’t ubiquitous in Web3 yet, but they have carved out important niches where privacy and secure off-chain compute are required.

7. Business and Regulatory Considerations

The use of TEEs in blockchain applications raises several business and regulatory points that stakeholders must consider:

Privacy Compliance and Institutional Adoption

One of the business drivers for TEE adoption is the need to comply with data privacy regulations (like GDPR in Europe, HIPAA in the US for health data) while leveraging blockchain technology. Public blockchains by default broadcast data globally, which conflicts with regulations that require sensitive personal data to be protected. TEEs offer a way to keep data confidential on-chain and only share it in controlled ways, thus enabling compliance. As noted, “TEEs facilitate compliance with data privacy regulations by isolating sensitive user data and ensuring it is handled securely”. This capability is crucial for bringing enterprises and institutions into Web3, as they can’t risk violating laws. For example, a healthcare dApp that processes patient info could use TEEs to ensure no raw patient data ever leaks on-chain, satisfying HIPAA’s requirements for encryption and access control. Similarly, a European bank could use a TEE-based chain to tokenize and trade assets without exposing clients’ personal details, aligning with GDPR.

This has a positive regulatory angle: some regulators have indicated that solutions like TEEs (and related concepts of confidential computing) are favorable because they provide technical enforcement of privacy. We’ve seen the World Economic Forum and others highlight TEEs as a means to build “privacy by design” into blockchain systems (essentially embedding compliance at the protocol level). Thus, from a business perspective, TEEs can accelerate institutional adoption by removing one of the key blockers (data confidentiality). Companies are more willing to use or build on blockchain if they know there’s a hardware safeguard for their data.

Another compliance aspect is auditability and oversight. Enterprises often need audit logs and the ability to prove to auditors that they are in control of data. TEEs can actually help here by producing attestation reports and secure logs of what was accessed. For instance, Oasis’s “durable logging” in an enclave provides a tamper-resistant log of sensitive operations. An enterprise can show that log to regulators to prove that, say, only authorized code ran and only certain queries were done on customer data. This kind of attested auditing could satisfy regulators more than a traditional system where you trust sysadmin logs.

Trust and Liability

On the flip side, introducing TEEs changes the trust structure and thus the liability model in blockchain solutions. If a DeFi platform uses a TEE and something goes wrong due to a hardware flaw, who is responsible? For example, consider a scenario where an Intel SGX bug leads to a leak of secret swap transaction details, causing users to lose money (e.g., through front-running). The users trusted the platform’s security claims. Is the platform at fault, or is it Intel? Legally, users might go after the platform (which in turn might have to go after Intel). This complicates things because a third-party tech provider (the CPU vendor) sits deep in the security model. Businesses using TEEs have to account for this in contracts and risk assessments. Some might seek warranties or support from hardware vendors when using their TEEs in critical infrastructure.

There’s also the centralization concern: if a blockchain’s security relies on a single company’s hardware (Intel or AMD), regulators might view that with skepticism. For instance, could a government subpoena or coerce that company to compromise certain enclaves? This is not a purely theoretical concern – consider export control laws: high-grade encryption hardware can be subject to regulation. If a large portion of crypto infrastructure relies on TEEs, it’s conceivable that governments could attempt to insert backdoors (though there’s no evidence of that, the perception matters). Some privacy advocates point this out to regulators: that TEEs concentrate trust and if anything, regulators should carefully vet them. Conversely, regulators who want more control might prefer TEEs over math-based privacy like ZK, because with TEEs there’s at least a notion that law enforcement could approach the hardware vendor with a court order if absolutely needed (e.g., to get a master attestation key or some such – not that it’s easy or likely, but it’s an avenue that doesn’t exist with ZK). So regulatory reception can split: privacy regulators (data protection agencies) are pro-TEE for compliance, whereas law enforcement might be cautiously optimistic since TEEs aren’t “going dark” in the way strong encryption is – there’s a theoretical lever (the hardware) they might try to pull.

Businesses need to navigate this by possibly engaging in certifications. There are security certifications like FIPS 140 or Common Criteria for hardware modules. Currently, SGX and comparable technologies carry some certifications (for example, SGX-based platforms have undergone Common Criteria evaluations for certain configurations). If a blockchain platform can point to the enclave tech being certified to a high standard, regulators and partners may be more comfortable. For instance, a CBDC project might require that any TEE used is FIPS-certified so they trust its random number generation, etc. This introduces additional process and may restrict deployments to certain hardware versions.

Ecosystem and Cost Considerations

From a business perspective, using TEEs might affect the cost structure of a blockchain operation. Nodes must have specific CPUs (which might be more expensive or less energy efficient). This could mean higher cloud hosting bills or capital expenses. For example, if a project mandates Intel Xeon with SGX for all validators, that’s a constraint – validators can’t just be anyone with a Raspberry Pi or old laptop; they need that hardware. This can centralize who can participate (possibly favoring those who can afford high-end servers or who use cloud providers offering SGX VMs). In extremes, it might push the network to be more permissioned or rely on cloud providers, which is a decentralization trade-off and a business trade-off (the network might have to subsidize node providers).

On the other hand, some businesses might find this acceptable because they want known validators or have an allowlist (especially in enterprise consortia). But in public crypto networks, this has caused debates – e.g., when SGX was required, people asked “does this mean only large data centers will run nodes?” It’s something that affects community sentiment and thus the market adoption. For instance, some crypto purists might avoid a chain that requires TEEs, labeling it as “less trustless” or too centralized. So projects have to handle PR and community education, making clear what the trust assumptions are and why it’s still secure. We saw Secret Network addressing FUD by explaining the rigorous monitoring of Intel updates and that validators are slashed if not updating enclaves, etc., basically creating a social layer of trust on top of the hardware trust.

Another consideration is partnerships and support. The business ecosystem around TEEs includes big tech companies (Intel, AMD, ARM, Microsoft, Google, and others). Blockchain projects using TEEs often partner with them (e.g., iExec partnering with Intel, Secret Network working with Intel on attestation improvements, Oasis with Microsoft on confidential AI). These partnerships can provide funding, technical assistance, and credibility. It’s a strategic point: aligning with the confidential computing industry can open doors (for funding or enterprise pilots), but it also means a crypto project aligns itself with big corporations, which has ideological implications in the community.

Regulatory Uncertainties

As blockchain applications using TEEs grow, there may be new regulatory questions. For example:

  • Data Jurisdiction: If data is processed inside a TEE in a certain country, is it considered “processed in that country” or nowhere (since it’s encrypted)? Some privacy laws require that data of citizens not leave certain regions. TEEs could blur the lines – you might have an enclave in a cloud region, but only encrypted data goes in/out. Regulators may need to clarify how they view such processing.
  • Export Controls: Advanced encryption technology can be subject to export restrictions. TEEs involve encryption of memory – historically this hasn’t been an issue (as CPUs with these features are sold globally), but if that ever changed, it could affect supply. Also, some countries might ban or discourage use of foreign TEEs due to national security (e.g., China has its own equivalent to SGX, as they don’t trust Intel’s, and might not allow SGX for sensitive uses).
  • Legal Compulsion: A scenario: could a government subpoena a node operator to extract data from an enclave? Normally they can’t because even the operator can’t see inside. But what if they subpoena Intel for a specific attestation key? Intel’s design is such that even they can’t decrypt enclave memory (they issue keys to the CPU which does the work). But if a backdoor existed or a special firmware could be signed by Intel to dump memory, that’s a hypothetical that concerns people. Legally, a company like Intel might refuse if asked to undermine their security (they likely would, to not destroy trust in their product). But the mere possibility might appear in regulatory discussions about lawful access. Businesses using TEEs should stay abreast of any such developments, though currently, no public mechanism exists for Intel/AMD to extract enclave data – that’s kind of the point of TEEs.

Market Differentiation and New Services

On the positive front for business, TEEs enable new products and services that can be monetized. For example:

  • Confidential data marketplaces: As iExec and Ocean Protocol and others have noted, companies hold valuable data they could monetize if they had guarantees it won’t leak. TEEs enable “data renting” where the data never leaves the enclave, only the insights do. This could unlock new revenue streams and business models. We see startups in Web3 offering confidential compute services to enterprises, essentially selling the idea of “get insights from blockchain or cross-company data without exposing anything.”
  • Enterprise DeFi: Financial institutions often cite lack of privacy as a reason not to engage with DeFi or public blockchain. If TEEs can guarantee privacy for their positions or trades, they might participate, bringing more liquidity and business to the ecosystem. Projects that cater to this (like Secret’s secret loans, or Oasis’s private AMM with compliance controls) are positioning to attract institutional users. If successful, that can be a significant market (imagine institutional AMM pools where identities and amounts are shielded but an enclave ensures compliance checks like AML are done internally – that’s a product that could bring big money into DeFi under regulatory comfort).
  • Insurance and Risk Management: With TEEs reducing certain risks (like oracle manipulation), we might see lower insurance premiums or new insurance products for smart contract platforms. Conversely, TEEs introduce new risks (like technical failure of enclaves) which might themselves be insurable events. There’s a budding area of crypto insurance; how they treat TEE-reliant systems will be interesting. A platform might market that it uses TEEs to lower risk of data breach, thus making it easier/cheaper to insure, giving it a competitive edge.

In conclusion, the business and regulatory landscape of TEE-enabled Web3 is about balancing trust and innovation. TEEs offer a route to comply with laws and unlock enterprise use cases (a big plus for mainstream adoption), but they also bring a reliance on hardware providers and complexities that must be transparently managed. Stakeholders need to engage with both tech giants (for support) and regulators (for clarity and assurance) to fully realize the potential of TEEs in blockchain. If done well, TEEs could be a cornerstone that allows blockchain to deeply integrate with industries handling sensitive data, thereby expanding the reach of Web3 into areas previously off-limits due to privacy concerns.

Conclusion

Trusted Execution Environments have emerged as a powerful component in the Web3 toolbox, enabling a new class of decentralized applications that require confidentiality and secure off-chain computation. We’ve seen that TEEs, like Intel SGX, ARM TrustZone, and AMD SEV, provide a hardware-isolated “safe box” for computation, and this property has been harnessed for privacy-preserving smart contracts, verifiable oracles, scalable off-chain processing, and more. Projects across ecosystems – from Secret Network’s private contracts on Cosmos, to Oasis’s confidential ParaTimes, to Sanders’s TEE cloud on Polkadot, and iExec’s off-chain marketplace on Ethereum – demonstrate the diverse ways TEEs are being integrated into blockchain platforms.

Technically, TEEs offer compelling benefits of speed and strong data confidentiality, but they come with their own challenges: a need to trust hardware vendors, potential side-channel vulnerabilities, and hurdles in integration and composability. We compared TEEs with cryptographic alternatives (ZKPs, FHE, MPC) and found that each has its niche: TEEs shine in performance and ease-of-use, whereas ZK and FHE provide maximal trustlessness at high cost, and MPC spreads trust among participants. In fact, many cutting-edge solutions are hybrid, using TEEs alongside cryptographic methods to get the best of both worlds.

Adoption of TEE-based solutions is steadily growing. Ethereum dApps leverage TEEs for oracle security and private computations, Cosmos and Polkadot have native support via specialized chains, and enterprise blockchain efforts are embracing TEEs for compliance. Business-wise, TEEs can be a bridge between decentralized tech and regulation – allowing sensitive data to be handled on-chain under the safeguards of hardware security, which opens the door for institutional usage and new services. At the same time, using TEEs means engaging with new trust paradigms and ensuring that the decentralization ethos of blockchain isn’t undermined by opaque silicon.

In summary, Trusted Execution Environments are playing a crucial role in the evolution of Web3: they address some of the most pressing concerns of privacy and scalability, and while they are not a panacea (and not without controversy), they significantly expand what decentralized applications can do. As the technology matures – with improvements in hardware security and standards for attestation – and as more projects demonstrate their value, we can expect TEEs (along with complementary cryptographic tech) to become a standard component of blockchain architectures aimed at unlocking Web3’s full potential in a secure and trustable manner. The future likely holds layered solutions where hardware and cryptography work hand-in-hand to deliver systems that are both performant and provably secure, meeting the needs of users, developers, and regulators alike.

Sources: The information in this report was gathered from a variety of up-to-date sources, including official project documentation and blogs, industry analyses, and academic research, as cited throughout the text. Notable references include the Metaschool 2025 guide on TEEs in Web3, comparisons by Sanders Network, technical insights from ChainCatcher and others on FHE/TEE/ZKP/MPC, and statements on regulatory compliance from Binance Research, among many others. These sources provide further detail and are recommended for readers who wish to explore specific aspects in greater depth.

EIP-7702 After Pectra: A Practical Playbook for Ethereum App Developers

· 9 min read
Dora Noda
Software Engineer

On May 7, 2025, Ethereum’s Pectra upgrade (Prague + Electra) hit mainnet. Among its most developer-visible changes is EIP-7702, which lets an externally owned account (EOA) "mount" smart-contract logic—without migrating funds or changing addresses. If you build wallets, dapps, or relayers, this unlocks a simpler path to smart-account UX.

Below is a concise, implementation-first guide: what actually shipped, how 7702 works, when to choose it over pure ERC-4337, and a cut-and-paste scaffold you can adapt today.


What Actually Shipped

  • EIP-7702 is in Pectra’s final scope. The meta-EIP for the Pectra hard fork officially lists 7702 among the included changes.
  • Activation details: Pectra activated on mainnet at epoch 364032 on May 7, 2025, following successful activations on all major testnets.
  • Toolchain note: Solidity v0.8.30 updated its default EVM target to prague for Pectra compatibility. You'll need to upgrade your compilers and CI pipelines, especially if you pin specific versions (a minimal config sketch follows this list).
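
To make the toolchain point concrete, here is a minimal sketch of a hardhat.config.ts that pins the compiler and EVM target. It assumes a Hardhat setup with the standard toolbox plugin; adjust the plugin imports and settings to your own stack.

import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: {
    version: "0.8.30", // first release defaulting to the prague EVM target
    settings: {
      evmVersion: "prague", // pin explicitly so CI builds match Pectra semantics
    },
  },
};

export default config;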

EIP-7702—How It Works (Nuts & Bolts)

EIP-7702 introduces a new transaction type and a mechanism for an EOA to delegate its execution logic to a smart contract.

  • New Transaction Type (0x04): A Type-4 transaction includes a new field called an authorization_list. This list contains one or more authorization tuples—(chain_id, address, nonce, y_parity, r, s)—each signed by the EOA's private key. When this transaction is processed, the protocol writes a delegation indicator to the EOA’s code field: 0xef0100 || address. From that point forward, any calls to the EOA are proxied to the specified address (the implementation), but they execute within the EOA’s storage and balance context. This delegation remains active until it's explicitly changed.
  • Chain Scope: An authorization can be chain-specific by providing a chain_id, or it can apply to all chains if chain_id is set to 0. This allows you to deploy the same implementation contract across multiple networks without requiring users to sign a new authorization for each one.
  • Revocation: To revert an EOA back to its original, non-programmable behavior, you simply send another 7702 transaction where the implementation address is set to the zero address. This clears the delegation indicator.
  • Self-Sponsored vs. Relayed: An EOA can submit the Type-4 transaction itself, or a third-party relayer can submit it on the EOA's behalf. The latter is common for creating a gasless user experience. Nonce handling differs slightly depending on the method, so it's important to use libraries that correctly manage this distinction.

Security Model Shift: Because the original EOA private key still exists, it can always override any smart contract rules (like social recovery or spending limits) by submitting a new 7702 transaction to change the delegation. This is a fundamental shift. Contracts that rely on tx.origin to verify that a call is from an EOA must be re-audited, as 7702 can break these assumptions. Audit your flows accordingly.


7702 or ERC-4337? (And When to Combine)

Both EIP-7702 and ERC-4337 enable account abstraction, but they serve different needs.

  • Choose EIP-7702 when…
    • You want to provide instant smart-account UX for existing EOAs without forcing users to migrate funds or change addresses.
    • You need consistent addresses across chains that can be progressively upgraded with new features.
    • You want to stage your transition to account abstraction, starting with simple features and adding complexity over time.
  • Choose pure ERC-4337 when…
    • Your product requires full programmability and complex policy engines (e.g., multi-sig, advanced recovery) from day one.
    • You are building for new users who don't have existing EOAs, making new smart-account addresses and the associated setup acceptable.
  • Combine them: The most powerful pattern is to use both. An EOA can use a 7702 transaction to designate an ERC-4337 wallet implementation as its logic. This makes the EOA behave like a 4337 account, allowing it to be bundled, sponsored by paymasters, and processed by the existing 4337 infrastructure—all without the user needing a new address. This is a forward-compatible path explicitly encouraged by the EIP's authors.

Minimal 7702 Scaffold You Can Adapt

Here’s a practical example of an implementation contract and the client-side code to activate it.

1. A Tiny, Auditable Implementation Contract

This contract code will execute in the EOA’s context once designated. Keep it small, auditable, and consider adding an upgrade mechanism.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

/// @notice Executes calls from the EOA context when designated via EIP-7702.
contract DelegatedAccount {
    // Unique storage slot to avoid collisions with other contracts.
    bytes32 private constant INIT_SLOT =
        0x3fb93b3d3dcd1d1f4b4a1a8db6f4c5d55a1b7f9ac01dfe8e53b1b0f35f0c1a01;

    event Initialized(address indexed account);
    event Executed(address indexed to, uint256 value, bytes data, bytes result);

    modifier onlyEOA() {
        // Optional: add checks to restrict who can call certain functions.
        _;
    }

    function initialize() external payable onlyEOA {
        // Set a simple one-time init flag in the EOA's storage.
        bytes32 slot = INIT_SLOT;
        assembly {
            if iszero(iszero(sload(slot))) { revert(0, 0) } // Revert if already initialized
            sstore(slot, 1)
        }
        emit Initialized(address(this));
    }

    function execute(address to, uint256 value, bytes calldata data)
        external
        payable
        onlyEOA
        returns (bytes memory result)
    {
        (bool ok, bytes memory ret) = to.call{value: value}(data);
        require(ok, "CALL_FAILED");
        emit Executed(to, value, data, ret);
        return ret;
    }

    function executeBatch(address[] calldata to, uint256[] calldata value, bytes[] calldata data)
        external
        payable
        onlyEOA
    {
        uint256 n = to.length;
        require(n == value.length && n == data.length, "LENGTH_MISMATCH");
        for (uint256 i = 0; i < n; i++) {
            (bool ok, ) = to[i].call{value: value[i]}(data[i]);
            require(ok, "CALL_FAILED");
        }
    }
}

2. Designate the Contract on an EOA (Type-4 tx) with viem

Modern clients like viem have built-in helpers to sign authorizations and send Type-4 transactions. In this example, a relayer account pays the gas to upgrade an eoa.

import { createWalletClient, http, encodeFunctionData } from "viem";
import { sepolia } from "viem/chains";
import { privateKeyToAccount } from "viem/accounts";
import { abi, implementationAddress } from "./DelegatedAccountABI";

// 1. Define the relayer (sponsors gas) and the EOA to be upgraded
const relayer = privateKeyToAccount(process.env.RELAYER_PK as `0x${string}`);
const eoa = privateKeyToAccount(process.env.EOA_PK as `0x${string}`);

const client = createWalletClient({
  account: relayer,
  chain: sepolia,
  transport: http(),
});

// 2. The EOA signs the authorization pointing to the implementation contract
const authorization = await client.signAuthorization({
  account: eoa,
  contractAddress: implementationAddress,
  // If the EOA itself were sending this, you would add: executor: 'self'
});

// 3. The relayer sends a Type-4 transaction to set the EOA's code and call initialize()
const hash = await client.sendTransaction({
  to: eoa.address, // The destination is the EOA itself
  authorizationList: [authorization], // The new EIP-7702 field
  data: encodeFunctionData({ abi, functionName: "initialize" }),
});

// 4. Now, the EOA can be controlled via its new logic without further authorizations
// For example, to execute a transaction:
// await client.sendTransaction({
//   to: eoa.address,
//   data: encodeFunctionData({ abi, functionName: 'execute', args: [...] })
// });

3. Revoke Delegation (Back to Plain EOA)

To undo the upgrade, have the EOA sign an authorization that designates the zero address as the implementation and send another Type-4 transaction. Afterward, a call to eth_getCode(eoa.address) should return empty bytes.
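
A minimal revocation sketch, continuing the setup from the previous example (the client and eoa objects, plus a publicClient created with viem's createPublicClient, are assumed to be in scope):

import { zeroAddress } from "viem";

// 1. The EOA authorizes the zero address, which clears the delegation indicator.
const revocation = await client.signAuthorization({
  account: eoa,
  contractAddress: zeroAddress,
});

// 2. Any account can submit the Type-4 transaction carrying that authorization.
await client.sendTransaction({
  to: eoa.address,
  authorizationList: [revocation],
});

// 3. Verify: the EOA's code should now be empty again.
const code = await publicClient.getCode({ address: eoa.address });
console.log(code); // expected: undefined (no code)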


Integration Patterns That Work in Production

  • Upgrade in Place for Existing Users: In your dapp, detect if the user is on a Pectra-compatible network. If so, display an optional "Upgrade Account" button that triggers the one-time authorization signature. Maintain fallback paths (e.g., classic approve + swap) for users with older wallets.
  • Gasless Onboarding: Use a relayer (either your backend or a service) to sponsor the initial Type-4 transaction. For ongoing gasless transactions, route user operations through an ERC-4337 bundler to leverage existing paymasters and public mempools.
  • Cross-Chain Rollouts: Use a chain_id = 0 authorization to designate the same implementation contract across all chains. You can then enable or disable features on a per-chain basis within your application logic.
  • Observability: Your backend should index Type-4 transactions and parse the authorization_list to track which EOAs have been upgraded. After a transaction, verify the change by calling eth_getCode and confirming the EOA's code now matches the delegation indicator (0xef0100 || implementationAddress); a verification sketch follows this list.
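
Here is one way that verification might look with viem. It is a sketch only; eoaAddress and implementationAddress are placeholders for your own values.

import { createPublicClient, http, concat, type Address } from "viem";
import { sepolia } from "viem/chains";

const publicClient = createPublicClient({ chain: sepolia, transport: http() });

async function isDelegatedTo(eoaAddress: Address, implementationAddress: Address) {
  // Expected code after a successful designation: 0xef0100 || implementation address
  const expected = concat(["0xef0100", implementationAddress]).toLowerCase();
  const code = await publicClient.getCode({ address: eoaAddress });
  return code?.toLowerCase() === expected;
}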

Threat Model & Gotchas (Don’t Skip This)

  • Delegation is Persistent: Treat changes to an EOA's implementation contract with the same gravity as a standard smart contract upgrade. This requires audits, clear user communication, and ideally, an opt-in flow. Never push new logic to users silently.
  • tx.origin Landmines: Any logic that used msg.sender == tx.origin to ensure a call came directly from an EOA is now potentially vulnerable. This pattern must be replaced with more robust checks, like EIP-712 signatures or explicit allowlists.
  • Nonce Math: When an EOA sponsors its own 7702 transaction (executor: 'self'), its authorization nonce and transaction nonce interact in a specific way. Always use a library that correctly handles this to avoid replay issues (a self-sponsored sketch follows this list).
  • Wallet UX Responsibility: The EIP-7702 specification warns that dapps should not ask users to sign arbitrary designations. It is the wallet's responsibility to vet proposed implementations and ensure they are safe. Design your UX to align with this principle of wallet-mediated security.
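
For reference, a self-sponsored flow with viem might look like the sketch below. It reuses the abi, implementationAddress, and eoa from the earlier scaffold and is a sketch under those assumptions rather than a drop-in implementation.

import { createWalletClient, http, encodeFunctionData } from "viem";
import { sepolia } from "viem/chains";

// The EOA is both the authorizer and the transaction sender, so it pays its own gas.
const selfClient = createWalletClient({ account: eoa, chain: sepolia, transport: http() });

const auth = await selfClient.signAuthorization({
  contractAddress: implementationAddress,
  executor: "self", // tells the library to offset the authorization nonce correctly
});

await selfClient.sendTransaction({
  to: eoa.address,
  authorizationList: [auth],
  data: encodeFunctionData({ abi, functionName: "initialize" }),
});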

When 7702 is a Clear Win

  • DEX Flows: A multi-step approve and swap can be combined into a single click using the executeBatch function (sketched after this list).
  • Games & Sessions: Grant session-key-like privileges for a limited time or scope without requiring the user to create and fund a new wallet.
  • Enterprise & Fintech: Enable sponsored transactions and apply custom spending policies while keeping the same corporate address on every chain for accounting and identity.
  • L2 Bridges & Intents: Create smoother meta-transaction flows with a consistent EOA identity across different networks.
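
As a rough illustration of the batching pattern, the snippet below encodes an ERC-20 approve plus a second call (a simple transfer stands in for the swap leg) and routes both through the upgraded EOA's executeBatch in one transaction. The token, spender, and recipient addresses are placeholders, and abi, client, and eoa come from the earlier scaffold.

import { encodeFunctionData, erc20Abi, parseUnits, type Address } from "viem";

const token: Address = "0x0000000000000000000000000000000000000001";     // placeholder ERC-20
const spender: Address = "0x0000000000000000000000000000000000000002";   // placeholder router/spender
const recipient: Address = "0x0000000000000000000000000000000000000003"; // placeholder recipient

const approveCall = encodeFunctionData({
  abi: erc20Abi,
  functionName: "approve",
  args: [spender, parseUnits("100", 18)],
});

const transferCall = encodeFunctionData({
  abi: erc20Abi,
  functionName: "transfer",
  args: [recipient, parseUnits("1", 18)],
});

// One ordinary transaction to the EOA itself; its delegated logic fans out both calls.
await client.sendTransaction({
  to: eoa.address,
  data: encodeFunctionData({
    abi, // DelegatedAccount ABI
    functionName: "executeBatch",
    args: [
      [token, token],   // targets
      [0n, 0n],         // ETH values
      [approveCall, transferCall],
    ],
  }),
});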

These use cases represent the same core benefits promised by ERC-4337, but are now available to every existing EOA with just a single authorization.


Ship Checklist

Protocol

  • Ensure nodes, SDKs, and infrastructure providers support Type-4 transactions and Pectra's "prague" EVM.
  • Update indexers and analytics tools to parse the authorization_list field in new transactions.

Contracts

  • Develop a minimal, audited implementation contract with essential features (e.g., batching, revocation).
  • Thoroughly test revoke and re-designate flows on testnets before deploying to mainnet.

Clients

  • Upgrade client-side libraries (viem, ethers, etc.) and test the signAuthorization and sendTransaction functions.
  • Verify that both self-sponsored and relayed transaction paths handle nonces and replays correctly.

Security

  • Remove all assumptions based on tx.origin from your contracts and replace them with safer alternatives.
  • Implement post-deployment monitoring to detect unexpected code changes at user addresses and alert on suspicious activity.

Bottom line: EIP-7702 provides a low-friction on-ramp to smart-account UX for the millions of EOAs already in use. Start with a tiny, audited implementation, use a relayed path for gasless setup, make revocation clear and easy, and you can deliver 90% of the benefits of full account abstraction—without the pain of address churn and asset migration.