
Enso Network: The Unified, Intent-based Execution Engine

· 35 min read

Protocol Architecture

Enso Network is a Web3 development platform built as a unified, intent-based execution engine for on-chain operations. Its architecture abstracts away blockchain complexity by mapping every on-chain interaction to a shared engine that operates across multiple chains. Developers and users specify high-level intents (desired outcomes like a token swap, liquidity provision, yield strategy, etc.), and Enso’s network finds and executes the optimal sequence of actions to fulfill those intents. This is achieved through a modular design of “Actions” and “Shortcuts.”

Actions are granular smart contract abstractions (e.g. a swap on Uniswap, a deposit into Aave) provided by the community. Multiple Actions can be composed into Shortcuts, which are reusable workflows representing common DeFi operations. Enso maintains a library of these Shortcuts in smart contracts, so complex tasks can be executed via a single API call or transaction. This intent-based architecture lets developers focus on desired outcomes rather than writing low-level integration code for each protocol and chain.
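
To make this concrete, the sketch below shows how a developer might request a route for an intent and receive a single executable transaction. It is a hypothetical illustration: the client code, endpoint URL, and field names are assumptions for this example, not Enso's actual API.

```typescript
// Hypothetical sketch of an intent request (not Enso's actual SDK or API).
// The endpoint URL, request shape, and field names are assumptions.
interface Intent {
  chainId: number;  // chain the user starts on
  from: string;     // user's wallet address
  tokenIn: string;  // asset the user provides
  tokenOut: string; // desired outcome asset (e.g. a vault share)
  amountIn: string; // amount in base units
}

interface RouteStep {
  protocol: string; // e.g. "uniswap-v3", "aave-v3"
  action: string;   // e.g. "swap", "deposit"
}

interface ShortcutResponse {
  route: RouteStep[]; // the composed sequence of Actions
  tx: { to: string; data: string; value: string }; // one transaction to sign
}

// The developer states the outcome; the engine returns an executable route.
async function buildShortcut(intent: Intent): Promise<ShortcutResponse> {
  const res = await fetch("https://api.example-intent-engine.xyz/shortcuts/route", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(intent),
  });
  return res.json(); // the caller signs and broadcasts `tx`: one call, many protocols
}
```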

Enso’s infrastructure includes a decentralized network (built on Tendermint consensus) that serves as a unifying layer connecting different blockchains. The network aggregates data (state from various L1s, rollups, and appchains) into a shared network state or ledger, enabling cross-chain composability and accurate multi-chain execution. In practice, this means Enso can read from and write to any integrated blockchain through one interface, acting as a single point of access for developers. Initially focused on EVM-compatible chains, Enso has been expanding into non-EVM ecosystems – its roadmap targeted integrations with Monad (an Ethereum-like L1), Solana, and Movement (a Move-language chain) by Q1 2025.

Network Participants: Enso’s innovation lies in its three-tier participant model, which decentralizes how intents are processed (a schematic sketch follows the list):

  • Action Providers – Developers who contribute modular contract abstractions (“Actions”) encapsulating specific protocol interactions. These building blocks are shared on the network for others to use. Action Providers are rewarded whenever their contributed Action is used in an execution, incentivizing them to publish secure and efficient modules.

  • Graphers – Independent solvers (algorithms) that combine Actions into executable Shortcuts to fulfill user intents. Multiple Graphers compete to find the most optimal solution (cheapest, fastest, or highest-yield path) for each request, similar to how solvers compete in a DEX aggregator. Only the best solution is selected for execution, and the winning Grapher earns a portion of the fees. This competitive mechanism encourages continuous optimization of on-chain routes and strategies.

  • Validators – Node operators who secure the Enso network by verifying and finalizing the Grapher’s solutions. Validators authenticate incoming requests, check the validity and safety of Actions/Shortcuts used, simulate transactions, and ultimately confirm the selected solution’s execution. They form the backbone of network integrity, ensuring results are correct and preventing malicious or inefficient solutions. Validators run a Tendermint-based consensus, meaning a BFT proof-of-stake process is used to reach agreement on each intent’s outcome and to update the network’s state.
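
The division of labor can be summarized in a small conceptual model, shown below. All types, the scoring rule, and the whitelist check are assumptions for illustration; in the live network this competition and verification happens inside the Tendermint-based consensus, not in application code.

```typescript
// Conceptual model of the three-tier flow (all types and rules are assumed).
interface Action {
  id: string;       // e.g. "uniswap-v3.swap"
  provider: string; // the Action Provider rewarded when this Action is used
}

interface Solution {
  grapher: string;         // the solver that composed this route
  actions: Action[];       // Actions chained into a Shortcut
  estimatedOutput: bigint; // what the user would receive
  estimatedGas: bigint;    // execution cost of the route
}

// Graphers compete; only the best net-outcome solution is selected.
function selectWinner(solutions: Solution[]): Solution {
  return solutions.reduce((best, s) =>
    s.estimatedOutput - s.estimatedGas > best.estimatedOutput - best.estimatedGas ? s : best
  );
}

// Validators re-check the winning route before finalization: authenticate the
// request, confirm every Action is a known-safe module, and simulate execution.
function validate(solution: Solution, knownActions: Set<string>): boolean {
  return solution.actions.every((a) => knownActions.has(a.id));
}
```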

Notably, Enso’s approach is chain-agnostic and API-centric. Developers interact with Enso via a unified API/SDK rather than dealing with each chain’s nuances. Enso integrates with over 250 DeFi protocols across multiple blockchains, effectively turning disparate ecosystems into one composable platform. This architecture eliminates the need for dApp teams to write custom smart contracts or handle cross-chain messaging for each new integration – Enso’s shared engine and community-provided Actions handle that heavy lifting. By mid-2025, Enso has proven its scalability: the network successfully facilitated $3.1B of liquidity migration in 3 days for Berachain’s launch (one of the largest DeFi migration events) and has processed over $15B in on-chain transactions to date. These feats demonstrate the robustness of Enso’s infrastructure under real-world conditions.

Overall, Enso’s protocol architecture delivers a “DeFi middleware” or on-chain operating system for Web3. It combines elements of indexing (like The Graph) and transaction execution (like cross-chain bridges or DEX aggregators) into a single decentralized network. This unique stack allows any application, bot, or agent to read and write to any smart contract on any chain via one integration, accelerating development and enabling new composable use cases. Enso positions itself as critical infrastructure for the multi-chain future – an intent engine that could power myriad apps without each needing to reinvent blockchain integrations.

Tokenomics

Enso’s economic model centers on the ENSO token, which is integral to network operation and governance. ENSO is a utility and governance token with a fixed total supply of 100 million tokens. The token’s design aligns incentives for all participants and creates a flywheel effect of usage and rewards:

  • Fee Currency (“Gas”): All requests submitted to the Enso network incur a query fee payable in ENSO. When a user (or dApp) triggers an intent, a small fee is embedded in the generated transaction bytecode. These fees are auctioned for ENSO tokens on the open market and then distributed to the network participants who process the request. In effect, ENSO is the gas that fuels execution of on-chain intents across Enso’s network. As demand for Enso’s shortcuts grows, demand for ENSO tokens may increase to pay for those network fees, creating a supply-demand feedback loop supporting token value.

  • Revenue Sharing & Staking Rewards: The ENSO collected from fees is distributed among Action Providers, Graphers, and Validators as a reward for their contributions. This model directly ties token earnings to network usage: a higher volume of intents means more fees to distribute. Action Providers earn tokens when their abstractions are used, Graphers earn tokens for winning solutions, and Validators earn tokens for validating and securing the network. All three roles must also stake ENSO as collateral to participate (to be slashed for malpractice), aligning their incentives with network health. Token holders can delegate their ENSO to Validators as well, supporting network security via delegated proof of stake. This staking mechanism not only secures the Tendermint consensus but also gives token stakers a share of network fees, similar to how miners/validators earn gas fees in other chains. (A simplified fee-distribution sketch follows this list.)

  • Governance: ENSO token holders will govern the protocol’s evolution. Enso is launching as an open network and plans to transition to community-driven decision making. Token-weighted voting will let holders influence upgrades, parameter changes (like fee levels or reward allocations), and treasury usage. This governance power ensures that core contributors and users are aligned on the network’s direction. The project’s philosophy is to put ownership in the hands of the community of builders and users, which was a driving reason for the community token sale in 2025 (see below).

  • Positive Flywheel: Enso’s tokenomics are designed to create a self-reinforcing loop. As more developers integrate Enso and more users execute intents, network fees (paid in ENSO) grow. Those fees reward contributors (attracting more Actions, better Graphers, and more Validators), which in turn improves the network’s capabilities (faster, cheaper, more reliable execution) and attracts more usage. This network effect is underpinned by the ENSO token’s role as both the fee currency and the incentive for contribution. The intention is for the token economy to scale sustainably with network adoption, rather than relying on unsustainable emissions.
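
To make the flywheel concrete, the sketch below splits a single query fee across the three participant roles. The percentages are invented for illustration; Enso's actual reward parameters are not specified in the materials cited here.

```typescript
// Simplified fee-flow sketch. The split percentages are ASSUMED for
// illustration; Enso's actual reward parameters are not public here.
interface FeeSplit {
  actionProviders: number;
  graphers: number;
  validators: number;
}

const assumedSplit: FeeSplit = { actionProviders: 0.3, graphers: 0.3, validators: 0.4 };

function distributeFee(queryFeeEnso: number, split: FeeSplit) {
  return {
    toActionProviders: queryFeeEnso * split.actionProviders, // paid when their Actions are used
    toGraphers: queryFeeEnso * split.graphers,               // paid for the winning solution
    toValidators: queryFeeEnso * split.validators,           // paid for securing consensus
  };
}

// A 100-ENSO fee: more intents mean more of these distributions, which is the flywheel.
console.log(distributeFee(100, assumedSplit));
// { toActionProviders: 30, toGraphers: 30, toValidators: 40 }
```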

Token Distribution & Supply: The initial token allocation is structured to balance team/investor incentives with community ownership. The table below summarizes the ENSO token distribution at genesis:

| Allocation | Percentage | Tokens (out of 100M) |
| --- | --- | --- |
| Team (Founders & Core) | 25.0% | 25,000,000 |
| Early Investors (VCs) | 31.3% | 31,300,000 |
| Foundation & Growth Fund | 23.2% | 23,200,000 |
| Ecosystem Treasury (Community incentives) | 15.0% | 15,000,000 |
| Public Sale (CoinList 2025) | 4.0% | 4,000,000 |
| Advisors | 1.5% | 1,500,000 |

Source: Enso Tokenomics.

The public sale in June 2025 offered 4% of supply (4 million tokens) to the community, raising $5 million at a price of $1.25 per ENSO (implying a fully diluted valuation of ~$125 million). Notably, the community sale had no lock-up (100% unlocked at TGE), whereas the team and venture investors are subject to a 2-year linear vesting schedule. This means insiders’ tokens unlock gradually block-by-block over 24 months, aligning them to long-term network growth and mitigating immediate sell pressure. The community thus gained immediate liquidity and ownership, reflecting Enso’s goal of broad distribution.
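
As a sanity check on the vesting schedule, the sketch below models a straight linear unlock (no cliff is assumed; the actual unlock is block-by-block, approximated here in months):

```typescript
// Linear vesting sketch for the 2-year insider schedule. Assumes a straight
// block-by-block unlock with no cliff, approximated here in months.
function vestedTokens(totalAllocation: number, monthsElapsed: number, vestingMonths = 24): number {
  const fraction = Math.min(monthsElapsed / vestingMonths, 1);
  return totalAllocation * fraction;
}

// Team allocation of 25,000,000 ENSO, six months after TGE:
console.log(vestedTokens(25_000_000, 6)); // 6,250,000 tokens unlocked (25%)
```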

Enso’s emission schedule beyond the initial allocation appears to be primarily fee-driven rather than inflationary. The total supply is fixed at 100M tokens, and there is no indication of perpetual inflation for block rewards at this time (validators are compensated from fee revenue). This contrasts with many Layer-1 protocols that inflate supply to pay stakers; Enso aims to be sustainable through actual usage fees to reward participants. If network activity is low in early phases, the foundation and treasury allocations can be used to bootstrap incentives for usage and development grants. Conversely, if demand is high, the ENSO token’s utility (for fees and staking) could create organic demand pressure.

In summary, ENSO is the fuel of the Enso Network. It powers transactions (query fees), secures the network (staking and slashing), and governs the platform (voting). The token’s value is directly tied to network adoption: as Enso becomes more widely used as the backbone for DeFi applications, the volume of ENSO fees and staking should reflect that growth. The careful distribution (with only a small portion immediately circulating after TGE) and strong backing by top investors (below) provide confidence in the token’s support, while the community-centric sale signals a commitment to decentralization of ownership.

Team and Investors

Enso Network was founded in 2021 by Connor Howe (CEO) and Gorazd Ocvirk, who previously worked together at Sygnum Bank in Switzerland’s crypto banking sector. Connor Howe leads the project as CEO and is the public face in communications and interviews. Under his leadership, Enso initially launched as a social trading DeFi platform and then pivoted through multiple iterations to arrive at the current intent-based infrastructure vision. This adaptability highlights the team’s entrepreneurial resilience – from executing a high-profile “vampire attack” on index protocols in 2021 to building a DeFi aggregator super-app, and finally generalizing their tooling into Enso’s developer platform. Co-founder Gorazd Ocvirk (PhD) brought deep expertise in quantitative finance and Web3 product strategy, although public sources suggest he may have transitioned to other ventures (he was noted as a co-founder of a different crypto startup in 2022). Enso’s core team today includes engineers and operators with strong DeFi backgrounds. For example, Peter Phillips and Ben Wolf are listed as “blockend” (blockchain backend) engineers, and Valentin Meylan leads research. The team is globally distributed but has roots in Zug/Zurich, Switzerland, a known hub for crypto projects (Enso Finance AG was registered in 2020 in Switzerland).

Beyond the founders, Enso has notable advisors and backers that lend significant credibility. The project is backed by top-tier crypto venture funds and angels: it counts Polychain Capital and Multicoin Capital as lead investors, along with Dialectic and Spartan Group (both prominent crypto funds), and IDEO CoLab. An impressive roster of angel investors also participated across rounds – over 70 individuals from leading Web3 projects have invested in Enso. These include founders or executives from LayerZero, Safe (Gnosis Safe), 1inch, Yearn Finance, Flashbots, Dune Analytics, Pendle, and others. Even tech luminary Naval Ravikant (co-founder of AngelList) is an investor and supporter. Such names signal strong industry confidence in Enso’s vision.

Enso’s funding history: the project raised a $5M seed round in early 2021 to build the social trading platform, and later a $4.2M round (strategic/VC) as it evolved the product (these early rounds likely included Polychain, Multicoin, Dialectic, etc.). By mid-2023, Enso had secured enough capital to build out its network; notably, it operated relatively under the radar until its infrastructure pivot gained traction. In Q2 2025, Enso launched a $5M community token sale on CoinList, which was oversubscribed by tens of thousands of participants. The purpose of this sale was not just to raise funds (the amount was modest given prior VC backing) but to decentralize ownership and give its growing community a stake in the network’s success. According to CEO Connor Howe, “we want our earliest supporters, users, and believers to have real ownership in Enso…turning users into advocates”. This community-focused approach is part of Enso’s strategy to drive grassroots growth and network effects through aligned incentives.

Today, Enso’s team is considered among the thought leaders in the “intent-based DeFi” space. They actively engage in developer education (e.g., Enso’s Shortcut Speedrun attracted 700k participants as a gamified learning event) and collaborate with other protocols on integrations. The combination of a strong core team with proven ability to pivot, blue-chip investors, and an enthusiastic community suggests that Enso has both the talent and the financial backing to execute on its ambitious roadmap.

Adoption Metrics and Use Cases

Despite being a relatively new infrastructure, Enso has demonstrated significant traction in its niche. It has positioned itself as the go-to solution for projects needing complex on-chain integrations or cross-chain capabilities. Some key adoption metrics and milestones as of mid-2025:

  • Ecosystem Integration: Over 100 live applications (dApps, wallets, and services) are using Enso under the hood to power on-chain features. These range from DeFi dashboards to automated yield optimizers. Because Enso abstracts protocols, developers can quickly add new DeFi features to their product by plugging into Enso’s API. The network has integrated with 250+ DeFi protocols (DEXes, lending platforms, yield farms, NFT markets, etc.) across major chains, meaning Enso can execute virtually any on-chain action a user might want, from a Uniswap trade to a Yearn vault deposit. This breadth of integrations significantly reduces development time for Enso’s clients – a new project can support, say, all DEXes on Ethereum, Layer-2s, and even Solana using Enso, rather than coding each integration independently.

  • Developer Adoption: Enso’s community now includes 1,900+ developers actively building with its toolkit. These developers might be directly creating Shortcuts/Actions or incorporating Enso into their applications. The figure highlights that Enso isn’t just a closed system; it’s enabling a growing ecosystem of builders who use its shortcuts or contribute to its library. Enso’s approach of simplifying on-chain development (claiming to cut build times from 6+ months down to under a week) has resonated with Web3 developers. This is also evidenced by hackathons and the Enso Templates library where community members share plug-and-play shortcut examples.

  • Transaction Volume: Over $15 billion in cumulative on-chain transaction volume has been settled through Enso’s infrastructure. This metric, as reported in June 2025, underscores that Enso is not just running in test environments – it’s processing real value at scale. A single high-profile example was Berachain’s liquidity migration: in April 2025, Enso powered the movement of liquidity for Berachain’s testnet campaign (“Boyco”) and facilitated $3.1B in executed transactions over 3 days, one of the largest liquidity events in DeFi history. Enso’s engine successfully handled this load, demonstrating reliability and throughput under stress. Another example is Enso’s partnership with Uniswap: Enso built a Uniswap Position Migrator tool (in collaboration with Uniswap Labs, LayerZero, and Stargate) that helped users seamlessly migrate Uniswap v3 LP positions from Ethereum to another chain. This tool simplified a typically complex cross-chain process (with bridging and re-deployment of NFTs) into a one-click shortcut, and its release showcased Enso’s ability to work alongside top DeFi protocols.

  • Real-World Use Cases: Enso’s value proposition is best understood through the diverse use cases it enables. Projects have used Enso to deliver features that would be very difficult to build alone:

    • Cross-Chain Yield Aggregation: Plume and Sonic used Enso to power incentivized launch campaigns where users could deposit assets on one chain and have them deployed into yields on another chain. Enso handled the cross-chain messaging and multi-step transactions, allowing these new protocols to offer seamless cross-chain experiences to users during their token launch events.
    • Liquidity Migration and Mergers: As mentioned, Berachain leveraged Enso for a “vampire attack”-like migration of liquidity from other ecosystems. Similarly, other protocols could use Enso Shortcuts to automate moving users’ funds from a competitor platform to their own, by bundling approvals, withdrawals, transfers, and deposits across platforms into one intent. This demonstrates Enso’s potential in protocol growth strategies.
    • DeFi “Super App” Functionality: Some wallets and interfaces (for instance, the Eliza OS crypto assistant and the Infinex trading platform) integrate Enso to offer one-stop DeFi actions. A user can, in one click, swap assets at the best rate (Enso will route across DEXes), then lend the output to earn yield, then perhaps stake an LP token – all of which Enso can execute as one Shortcut. This significantly improves user experience and functionality for those apps.
    • Automation and Bots: Agents and even AI-driven bots are beginning to use Enso. Because Enso exposes an API, algorithmic traders or AI agents can input a high-level goal (e.g. “maximize yield on X asset across any chain”) and let Enso find the optimal strategy. This has opened up experimentation in automated DeFi strategies without needing custom bot engineering for each protocol (see the agent sketch after this list).
  • User Growth: While Enso is primarily a B2B/B2Dev infrastructure, it has cultivated a community of end-users and enthusiasts through campaigns. The Shortcut Speedrun – a gamified tutorial series – saw over 700,000 participants, indicating widespread interest in Enso’s capabilities. Enso’s social following has grown nearly 10x in a few months (248k followers on X as of mid-2025), reflecting strong mindshare among crypto users. This community growth is important because it creates grassroots demand: users aware of Enso will encourage their favorite dApps to integrate it or will use products that leverage Enso’s shortcuts.
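
To illustrate the automation use case flagged in the list above, an agent-style integration might look like the following hypothetical loop; the endpoint, response fields, and rebalancing threshold are all assumed:

```typescript
// Hypothetical agent loop over an intent API (endpoint, fields, and the
// rebalancing threshold are all assumed for illustration).
interface YieldIntent {
  goal: "maximize-yield";
  asset: string;    // e.g. a USDC token address
  amount: string;   // base units
  chains: number[]; // chain IDs the agent is willing to deploy on
}

async function maybeRebalance(currentApy: number, intent: YieldIntent): Promise<void> {
  const res = await fetch("https://api.example-intent-engine.xyz/intents/quote", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(intent),
  });
  const { route, expectedApy } = await res.json();
  // The agent reasons only about outcomes; multi-protocol routing is delegated.
  if (expectedApy > currentApy + 0.5) {
    console.log(`Rebalancing via ${route.length}-step route for ${expectedApy}% APY`);
    // ...sign and broadcast the returned transaction here
  }
}
```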

In summary, Enso has moved beyond theory to real adoption. It is trusted by 100+ projects including well-known names like Uniswap, SushiSwap, Stargate/LayerZero, Berachain, zkSync, Safe, Pendle, Yearn and more, either as integration partners or direct users of Enso’s tech. This broad usage across different verticals (DEXs, bridges, layer-1s, dApps) highlights Enso’s role as general-purpose infrastructure. Its key traction metric – $15B+ in transactions – is especially impressive for an infrastructure project at this stage and validates market fit for an intent-based middleware. Investors can take comfort that Enso’s network effects appear to be kicking in: more integrations beget more usage, which begets more integrations. The challenge ahead will be converting this early momentum into sustained growth, which ties into Enso’s positioning against competitors and its roadmap.

Competitor Landscape

Enso Network operates at the intersection of DeFi aggregation, cross-chain interoperability, and developer infrastructure, making its competitive landscape multi-faceted. While no single competitor offers an identical product, Enso faces competition from several categories of Web3 protocols:

  • Decentralized Middleware & Indexing: The most direct analogy is The Graph (GRT). The Graph provides a decentralized network for querying blockchain data via subgraphs. Enso similarly crowd-sources data providers (Action Providers) but goes a step further by enabling transaction execution in addition to data fetching. Whereas The Graph’s ~$924M market cap is built on indexing alone, Enso’s broader scope (data + action) positions it as a more powerful tool in capturing developer mindshare. However, The Graph is a well-established network; Enso will have to prove the reliability and security of its execution layer to achieve similar adoption. One could imagine The Graph or other indexing protocols expanding into execution, which would directly compete with Enso’s niche.

  • Cross-Chain Interoperability Protocols: Projects like LayerZero, Axelar, Wormhole, and Chainlink CCIP provide infrastructure to connect different blockchains. They focus on message passing and bridging assets between chains. Enso actually uses some of these under the hood (e.g., LayerZero/Stargate for bridging in the Uniswap migrator) and is more of a higher-level abstraction on top. In terms of competition, if these interoperability protocols start offering higher-level “intent” APIs or developer-friendly SDKs to compose multi-chain actions, they could overlap with Enso. For example, Axelar offers an SDK for cross-chain calls, and Chainlink’s CCIP could enable cross-chain function execution. Enso’s differentiator is that it doesn’t just send messages between chains; it maintains a unified engine and library of DeFi actions. It targets application developers who want a ready-made solution, rather than forcing them to build on raw cross-chain primitives. Nonetheless, Enso will compete for market share in the broader blockchain middleware segment where these interoperability projects are well funded and rapidly innovating.

  • Transaction Aggregators & Automation: In the DeFi world, there are existing aggregators like 1inch, 0x API, or CoW Protocol that focus on finding optimal trade routes across exchanges. Enso’s Grapher mechanism for intents is conceptually similar to CoW Protocol’s solver competition, but Enso generalizes it beyond swaps to any action. A user intent to “maximize yield” might involve swapping, lending, staking, etc., which is outside the scope of a pure DEX aggregator. That said, Enso will be compared to these services on efficiency for overlapping use cases (e.g., Enso vs. 1inch for a complex token swap route). If Enso consistently finds better routes or lower fees thanks to its network of Graphers, it can outcompete traditional aggregators. Gelato Network is another competitor in automation: Gelato provides a decentralized network of bots to execute tasks like limit orders, auto-compounding, or cross-chain transfers on behalf of dApps. Gelato has a GEL token and an established client base for specific use cases. Enso’s advantage is its breadth and unified interface – rather than offering separate products for each use case (as Gelato does), Enso offers a general platform where any logic can be encoded as a Shortcut. However, Gelato’s head start and focused approach in areas like automation could attract developers who might otherwise use Enso for similar functionalities.

  • Developer Platforms (Web3 SDKs): There are also Web2-style developer platforms like Moralis, Alchemy, Infura, and Tenderly that simplify building on blockchains. These typically offer API access to read data, send transactions, and sometimes higher-level endpoints (e.g., “get token balances” or “send tokens across chain”). While these are mostly centralized services, they compete for the same developer attention. Enso’s selling point is that it’s decentralized and composable – developers are not just getting data or a single function, they’re tapping into an entire network of on-chain capabilities contributed by others. If successful, Enso could become “the GitHub of on-chain actions,” where developers share and reuse Shortcuts, much like open-source code. Competing with well-funded infrastructure-as-a-service companies means Enso will need to offer comparable reliability and ease-of-use, which it is striving for with an extensive API and documentation.

  • Homegrown Solutions: Finally, Enso competes with the status quo – teams building custom integrations in-house. Traditionally, any project wanting multi-protocol functionality had to write and maintain smart contracts or scripts for each integration (e.g., integrating Uniswap, Aave, Compound separately). Many teams might still choose this route for maximum control or due to security considerations. Enso needs to convince developers that outsourcing this work to a shared network is secure, cost-effective, and up-to-date. Given the speed of DeFi innovation, maintaining one’s own integrations is burdensome (Enso often cites that teams spend 6+ months and $500k on audits to integrate dozens of protocols). If Enso can prove its security rigor and keep its action library current with the latest protocols, it can convert more teams away from building in silos. However, any high-profile security incident or downtime in Enso could send developers back to preferring in-house solutions, which is a competitive risk in itself.

Enso’s Differentiators: Enso’s primary edge is being first-to-market with an intent-focused, community-driven execution network. It combines features that would require using multiple other services: data indexing, smart contract SDKs, transaction routing, and cross-chain bridging – all in one. Its incentive model (rewarding third-party developers for contributions) is also unique; it could lead to a vibrant ecosystem where many niche protocols get integrated into Enso faster than any single team could do, similar to how The Graph’s community indexes a long tail of contracts. If Enso succeeds, it could enjoy a strong network effect moat: more Actions and Shortcuts make it more attractive to use Enso versus competitors, which attracts more users and thus more Actions contributed, and so on.

That said, Enso is still in its early days. Its closest analog, The Graph, took years to decentralize and build an ecosystem of indexers. Enso will similarly need to nurture its Graphers and Validators community to ensure reliability. Large players (like a future version of The Graph, or a collaboration of Chainlink and others) could decide to roll out a competing intent execution layer, leveraging their existing networks. Enso will have to move quickly to solidify its position before such competition materializes.

In conclusion, Enso sits at a competitive crossroads of several important Web3 verticals – it’s carving a niche as the “middleware of everything”. Its success will depend on outperforming specialized competitors in each use case (or aggregating them) and continuing to offer a compelling one-stop solution that justifies developers choosing Enso over building from scratch. The presence of high-profile partners and investors suggests Enso has a foot in the door with many ecosystems, which will be advantageous as it expands its integration coverage.

Roadmap and Ecosystem Growth

Enso’s development roadmap (as of mid-2025) outlines a clear path toward full decentralization, multi-chain support, and community-driven growth. Key milestones and planned initiatives include:

  • Mainnet Launch (Q3 2024) – Enso launched its mainnet network in the second half of 2024. This involved deploying the Tendermint-based chain and initializing the Validator ecosystem. Early validators were likely permissioned or selected partners as the network bootstrapped. The mainnet launch allowed real user queries to be processed by Enso’s engine (prior to this, Enso’s services were accessible via a centralized API while in beta). This milestone marked Enso’s transition from an in-house platform to a public decentralized network.

  • Network Participant Expansion (Q4 2024) – Following mainnet, the focus shifted to decentralizing participation. In late 2024, Enso opened up roles for external Action Providers and Graphers. This included releasing tooling and documentation for developers to create their own Actions (smart contract adapters) and for algorithm developers to run Grapher nodes. We can infer that incentive programs or testnet competitions were used to attract these participants. By end of 2024, Enso aimed to have a broader set of third-party actions in its library and multiple Graphers competing on intents, moving beyond the core team’s internal algorithms. This was a crucial step to ensure Enso isn’t a centralized service, but a true open network where anyone can contribute and earn ENSO tokens.

  • Cross-Chain Expansion (Q1 2025) – Enso recognizes that supporting many blockchains is key to its value proposition. In early 2025, the roadmap targeted integration with new blockchain environments beyond the initial EVM set. Specifically, Enso planned support for Monad, Solana, and Movement by Q1 2025. Monad is an upcoming high-performance EVM-compatible chain (backed by Dragonfly Capital) – supporting it early could position Enso as the go-to middleware there. Solana integration is more challenging (different runtime and language), but Enso’s intent engine could work with Solana by using off-chain graphers to formulate Solana transactions and on-chain programs acting as adapters. Movement refers to Move-language chains (perhaps Aptos/Sui or a specific one called Movement). By incorporating Move-based chains, Enso would cover a broad spectrum of ecosystems (Solidity and Move, as well as existing Ethereum rollups). Achieving these integrations means developing new Action modules that understand Solana’s CPI calls or Move’s transaction scripts, and likely collaborating with those ecosystems for oracles/indexing. Community updates suggest these integrations were on track – for example, references to “Eclipse mainnet live” and a Movement grant indicate Enso was actively working with novel L1s like Eclipse and Movement by early 2025.

  • Near-Term (Mid/Late 2025) – Although not explicitly broken out in the one-pager roadmap, by mid-2025 Enso’s focus is on network maturity and decentralization. The completion of the CoinList token sale in June 2025 is a major event: the next steps would be token generation and distribution (expected around July 2025) and launching on exchanges or governance forums. We anticipate Enso will roll out its governance process (Enso Improvement Proposals, on-chain voting) so the community can start participating in decisions using their newly acquired tokens. Additionally, Enso will likely move from “beta” to a fully production-ready service, if it hasn’t already. Part of this will be security hardening – conducting multiple smart contract audits and perhaps running a bug bounty program, considering the large TVLs involved.

  • Ecosystem Growth Strategies: Enso is actively fostering an ecosystem around its network. One strategy has been running educational programs and hackathons (e.g., the Shortcut Speedrun and workshops) to onboard developers to the Enso way of building. Another strategy is partnering with new protocols at launch – we’ve seen this with Berachain, zkSync’s campaign, and others. Enso is likely to continue this, effectively acting as an “on-chain launch partner” for emerging networks or DeFi projects, handling their complex user onboarding flows. This not only drives Enso’s volume (as seen with Berachain) but also integrates Enso deeply into those ecosystems. We expect Enso to announce integrations with more Layer-2 networks (e.g., Arbitrum, Optimism were presumably already supported; perhaps newer ones like Scroll or Starknet next) and other L1s (Polkadot via XCM, Cosmos via IBC or Osmosis, etc.). The long-term vision is that Enso becomes chain-ubiquitous – any developer on any chain can plug in. To that end, Enso may also develop better bridgeless cross-chain execution (using techniques like atomic swaps or optimistic execution of intents across chains), which could be on the R&D roadmap beyond 2025.

  • Future Outlook: Looking further, Enso’s team has hinted at involvement of AI agents as network participants. This suggests a future where not only human developers, but AI bots (perhaps trained to optimize DeFi strategies) plug into Enso to provide services. Enso might build out this vision by creating SDKs or frameworks for AI agents to safely interface with the intent engine – a potentially groundbreaking development merging AI and blockchain automation. Moreover, by late 2025 or 2026, we anticipate Enso will work on performance scaling (maybe sharding its network or using zero-knowledge proofs to validate intent execution correctness at scale) as usage grows.

The roadmap is ambitious but execution so far has been strong – Enso has met key milestones like mainnet launch and delivering real use cases. An important upcoming milestone is the full decentralization of the network. Currently, the network is in a transition: the documentation notes the decentralized network is in testnet and a centralized API was being used for production as of earlier in 2025. By now, with mainnet live and token in circulation, Enso will aim to phase out any centralized components. For investors, tracking this decentralization progress (e.g., number of independent validators, community Graphers joining) will be key to evaluating Enso’s maturity.

In summary, Enso’s roadmap focuses on scaling the network’s reach (more chains, more integrations) and scaling the network’s community (more third-party participants and token holders). The ultimate goal is to cement Enso as critical infrastructure in Web3, much like how Infura became essential for dApp connectivity or how The Graph became integral for data querying. If Enso can hit its milestones, the second half of 2025 should see a blossoming ecosystem around the Enso Network, potentially driving exponential growth in usage.

Risk Assessment

Like any early-stage protocol, Enso Network faces a range of risks and challenges that investors should carefully consider:

  • Technical and Security Risks: Enso’s system is inherently complex – it interacts with myriad smart contracts across many blockchains through a network of off-chain solvers and validators. This expansive surface area introduces technical risk. Each new Action (integration) could carry vulnerabilities; if an Action’s logic is flawed or a malicious provider introduces a backdoored Action, user funds could be at risk. Ensuring every integration is secure required substantial investment (Enso’s team spent over $500k on audits for integrating 15 protocols in its early days). As the library grows to hundreds of protocols, maintaining rigorous security audits is challenging. There’s also the risk of bugs in Enso’s coordination logic – for example, a flaw in how Graphers compose transactions or how Validators verify them could be exploited. Cross-chain execution, in particular, can be risky: if a sequence of actions spans multiple chains and one part fails or is censored, it could leave a user’s funds in limbo. Although Enso likely uses retries or atomic swaps for some cases, the complexity of intents means unknown failure modes might emerge. The intent-based model itself is relatively unproven at scale – there may be edge cases where the engine produces an incorrect solution or an outcome that diverges from the user’s intent. Any high-profile exploit or failure could undermine confidence in the whole network. Mitigation requires continuous security audits, a robust bug bounty program, and perhaps insurance mechanisms for users (none of which have been detailed yet).

  • Decentralization and Operational Risks: At present (mid-2025), the Enso network is still in the process of decentralizing its participants. This means there may be unseen operational centralization – for instance, the team’s infrastructure might still be co-ordinating a lot of the activity, or only a few validators/graphers are genuinely active. This presents two risks: reliability (if the core team’s servers go down, will the network stall?) and trust (if the process isn’t fully trustless yet, users must have faith in Enso Inc. not to front-run or censor transactions). The team has proven reliability in big events (like handling $3B volume in days), but as usage grows, scaling the network via more independent nodes will be crucial. There’s also a risk that network participants don’t show up – if Enso cannot attract enough skilled Action Providers or Graphers, the network might remain dependent on the core team, limiting decentralization. This could slow innovation and also concentrate too much power (and token rewards) within a small group, the opposite of the intended design.

  • Market and Adoption Risks: While Enso has impressive early adoption, it’s still in a nascent market for “intent-based” infrastructure. There is a risk that the broader developer community might be slow to adopt this new paradigm. Developers entrenched in traditional coding practices might be hesitant to rely on an external network for core functionality, or they may prefer alternative solutions. Additionally, Enso’s success depends on continuous growth of DeFi and multi-chain ecosystems. If the multi-chain thesis falters (for example, if most activity consolidates on a single dominant chain), the need for Enso’s cross-chain capabilities might diminish. On the flip side, if a new ecosystem arises that Enso fails to integrate quickly, projects in that ecosystem won’t use Enso. Essentially, staying up-to-date with every new chain and protocol is a never-ending challenge – missing or lagging on a major integration (say a popular new DEX or a Layer-2) could push projects to competitors or custom code. Furthermore, Enso’s usage could be hurt by macro market conditions; in a severe DeFi downturn, fewer users and developers might be experimenting with new dApps, directly reducing intents submitted to Enso and thus the fees/revenue of the network. The token’s value could suffer in such a scenario, potentially making staking less attractive and weakening network security or participation.

  • Competition: As discussed, Enso faces competition on multiple fronts. A major risk is a larger player entering the intent execution space. For instance, if a well-funded project like Chainlink were to introduce a similar intent service leveraging their existing oracle network, they could quickly overshadow Enso due to brand trust and integrations. Similarly, infrastructure companies (Alchemy, Infura) could build simplified multi-chain SDKs that, while not decentralized, capture the developer market with convenience. There’s also the risk of open-source copycats: Enso’s core concepts (Actions, Graphers) could be replicated by others, perhaps even as a fork of Enso if the code is public. If one of those projects forms a strong community or finds a better token incentive, it might divert potential participants. Enso will need to maintain technological leadership (e.g., by having the largest library of Actions and most efficient solvers) to fend off competition. Competitive pressure could also squeeze Enso’s fee model – if a rival offers similar services cheaper (or free, subsidized by VCs), Enso might be forced to lower fees or increase token incentives, which could strain its tokenomics.

  • Regulatory and Compliance Risks: Enso operates in the DeFi infrastructure space, which is a gray area in terms of regulation. While Enso itself doesn’t custody user funds (users execute intents from their own wallets), the network does automate complex financial transactions across protocols. There is a possibility that regulators could view intent-composition engines as facilitating unlicensed financial activity or even aiding money laundering if used to shuttle funds across chains in obscured ways. Specific concerns could arise if Enso enables cross-chain swaps that touch privacy pools or jurisdictions under sanctions. Additionally, the ENSO token and its CoinList sale reflect a distribution to a global community – regulators (like the SEC in the U.S.) might scrutinize it as an offering of securities (notably, Enso excluded US, UK, China, etc., from the sale, indicating caution on this front). If ENSO were deemed a security in major jurisdictions, it could limit exchange listings or usage by regulated entities. Enso’s decentralized network of validators might also face compliance issues: for example, could a validator be forced to censor certain transactions due to legal orders? This is largely hypothetical for now, but as the value flowing through Enso grows, regulatory attention will increase. The team’s base in Switzerland might offer a relatively crypto-friendly regulatory environment, but global operations mean global risks. Mitigating this likely involves ensuring Enso is sufficiently decentralized (so no single entity is accountable) and possibly geofencing certain features if needed (though that would be against the ethos of the project).

  • Economic Sustainability: Enso’s model assumes that fees generated by usage will sufficiently reward all participants. There’s a risk that the fee incentives may not be enough to sustain the network, especially early on. For instance, Graphers and Validators incur costs (infrastructure, development time). If query fees are set too low, these participants might not profit, leading them to drop off. On the other hand, if fees are too high, dApps may hesitate to use Enso and seek cheaper alternatives. Striking a balance is hard in a two-sided market. The Enso token economy also relies on token value to an extent – e.g., staking rewards are more attractive when the token has high value, and Action Providers earn value in ENSO. A sharp decline in ENSO price could reduce network participation or prompt more selling (which further depresses the price). With a large portion of tokens held by investors and team (over 56% combined, vesting over 2 years), there’s an overhang risk: if these stakeholders lose faith or need liquidity, their selling could flood the market post-vesting and undermine the token’s price. Enso tried to mitigate concentration by the community sale, but it’s still a relatively centralized token distribution in the near term. Economic sustainability will depend on growing genuine network usage to a level where fee revenue provides sufficient yield to token stakers and contributors – essentially making Enso a “cash-flow” generating protocol rather than just a speculative token. This is achievable (think of how Ethereum fees reward miners/validators), but only if Enso achieves widespread adoption. Until then, there is a reliance on treasury funds (15% allocated) to incentivize and perhaps to adjust the economic parameters (Enso governance may introduce inflation or other rewards if needed, which could dilute holders).

Summary of Risk: Enso is pioneering new ground, which comes with commensurate risk. The technological complexity of unifying all of DeFi into one network is enormous – each blockchain added or protocol integrated is a potential point of failure that must be managed. The team’s experience navigating earlier setbacks (like the limited success of the initial social trading product) shows they are aware of pitfalls and adapt quickly. They have actively mitigated some risks (e.g., decentralizing ownership via the community round to avoid overly VC-driven governance). Investors should watch how Enso executes on decentralization and whether it continues to attract top-tier technical talent to build and secure the network. In the best case, Enso could become indispensable infrastructure across Web3, yielding strong network effects and token value accrual. In the worst case, technical or adoption setbacks could relegate it to being an ambitious but niche tool.

From an investor’s perspective, Enso offers a high-upside, high-risk profile. Its current status (mid-2025) is that of a promising network with real usage and a clear vision, but it must now harden its technology and outpace a competitive and evolving landscape. Due diligence on Enso should include monitoring its security track record, the growth of query volumes/fees over time, and how effectively the ENSO token model incentivizes a self-sustaining ecosystem. As of now, the momentum is in Enso’s favor, but prudent risk management and continued innovation will be key to turning this early leadership into long-term dominance in the Web3 middleware space.

Sources:

  • Enso Network Official Documentation and Token Sale Materials

    • CoinList Token Sale Page – Key Highlights & Investors
    • Enso Docs – Tokenomics and Network Roles
  • Interviews and Media Coverage

    • CryptoPotato Interview with Enso CEO (June 2025) – Background on Enso’s evolution and intent-based design
    • DL News (May 2025) – Overview of Enso’s shortcuts and shared state approach
  • Community and Investor Analyses

    • Hackernoon (I. Pandey, 2025) – Insights on Enso’s community round and token distribution strategy
    • CryptoTotem / CoinLaunch (2025) – Token supply breakdown and roadmap timeline
  • Enso Official Site Metrics (2025) and Press Releases – Adoption figures and use-case examples (Berachain migration, Uniswap collaboration).

Aptos vs. Sui: A Panoramic Analysis of Two Move-Based Giants

· 7 min read
Dora Noda
Software Engineer

Overview

Aptos and Sui stand as the next generation of Layer-1 blockchains, both originating from the Move language initially conceived by Meta's Libra/Diem project. While they share a common lineage, their team backgrounds, core objectives, ecosystem strategies, and evolutionary paths have diverged significantly.

Aptos emphasizes versatility and enterprise-grade performance, targeting both DeFi and institutional use cases. In contrast, Sui is laser-focused on optimizing its unique object model to power mass-market consumer applications, particularly in gaming, NFTs, and social media. Which chain will ultimately distinguish itself depends on its ability to evolve its technology to meet the demands of its chosen market niche, while establishing a clear advantage in user experience and developer friendliness.


1. Development Journey

Aptos

Born from Aptos Labs—a team formed by former Meta Libra/Diem employees—Aptos began closed testing in late 2021 and launched its mainnet on October 19, 2022. Early mainnet throughput of under 20 TPS drew community skepticism, as noted by WIRED, but subsequent iterations on its consensus and execution layers have steadily pushed throughput to tens of thousands of TPS.

By Q2 2025, Aptos had achieved a peak of 44.7 million transactions in a single week, with weekly active addresses surpassing 4 million. The network has grown to over 83 million cumulative accounts, with daily DeFi trading volume consistently exceeding $200 million (Source: Aptos Forum).

Sui

Initiated by Mysten Labs, whose founders were core members of Meta's Novi wallet team, Sui launched its incentivized testnet in August 2022 and went live with its mainnet on May 3, 2023. From the earliest testnets, the team prioritized refining its "object model," which treats assets as objects with specific ownership and access controls to enhance parallel transaction processing (Source: Ledger).

As of mid-July 2025, Sui's ecosystem Total Value Locked (TVL) reached $2.326 billion. The platform has seen rapid growth in monthly transaction volume and the number of active engineers, proving especially popular within the gaming and NFT sectors (Source: AInvest, Tangem).


2. Technical Architecture Comparison

| Feature | Aptos | Sui |
| --- | --- | --- |
| Language | Inherits the original Move design, emphasizing the security of “resources” and strict access control. The language is relatively streamlined. (Source: aptos.dev) | Extends standard Move with an “object-centric” model, creating a customized version of the language that supports horizontally scalable parallel transactions. (Source: docs.sui.io) |
| Consensus | AptosBFT: an optimized BFT consensus mechanism promising sub-second finality, with a primary focus on security and consistency. (Source: Messari) | Narwhal + Tusk: decouples consensus from transaction ordering, enabling high throughput and low latency by prioritizing parallel execution efficiency. |
| Execution Model | Employs a pipelined execution model where transactions are processed in stages (data fetching, execution, write-back), supporting high-frequency transfers and complex logic. (Source: chorus.one) | Utilizes parallel execution based on object ownership. Transactions involving distinct objects do not require global state locks, fundamentally boosting throughput. |
| Scalability | Focuses on single-instance optimization while researching sharding. The community is actively developing the AptosCore v2.0 sharding proposal. | Features a native parallel engine designed for horizontal scaling, having already achieved peak TPS in the tens of thousands on its testnet. |
| Developer Tools | A mature toolchain including official SDKs, a Devnet, the Aptos CLI, an Explorer, and the Hydra framework for scalability. | A comprehensive suite including the Sui SDK, Sui Studio IDE, an Explorer, GraphQL APIs, and an object-oriented query model. |
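
To illustrate the object-ownership parallelism described in the table, here is a toy scheduler that packs transactions touching disjoint objects into the same parallel batch. This is a conceptual sketch, not Sui's actual scheduling algorithm:

```typescript
// Conceptual sketch of object-ownership parallelism (not Sui's actual scheduler).
// Transactions touching disjoint objects can run in parallel; transactions that
// share an object must be serialized into later batches.
interface Tx {
  id: string;
  objects: string[]; // IDs of the objects this transaction reads or writes
}

// Greedily pack transactions into batches with no overlapping objects.
function scheduleBatches(txs: Tx[]): Tx[][] {
  const batches: { txs: Tx[]; touched: Set<string> }[] = [];
  for (const tx of txs) {
    let batch = batches.find((b) => tx.objects.every((o) => !b.touched.has(o)));
    if (!batch) {
      batch = { txs: [], touched: new Set() };
      batches.push(batch);
    }
    batch.txs.push(tx);
    for (const o of tx.objects) batch.touched.add(o);
  }
  return batches.map((b) => b.txs);
}

// Two transfers of different NFTs share a batch; a second touch of nft1 waits.
console.log(scheduleBatches([
  { id: "a", objects: ["nft1"] },
  { id: "b", objects: ["nft2"] },
  { id: "c", objects: ["nft1"] },
]).map((batch) => batch.map((t) => t.id))); // [["a","b"],["c"]]
```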

3. On-Chain Ecosystem and Use Cases

3.1 Ecosystem Scale and Growth

Aptos

In Q1 2025, Aptos recorded nearly 15 million monthly active users and approached 1 million daily active wallets. Its DeFi trading volume surged by 1000% year-over-year, with the platform establishing itself as a hub for financial-grade stablecoins and derivatives (Source: Coinspeaker). Key strategic moves include integrating USDT via Upbit to drive penetration in Asian markets and attracting numerous leading DEXs, lending protocols, and derivatives platforms (Source: Aptos Forum).

Sui

In June 2025, Sui's ecosystem TVL reached a new high of $2.326 billion, driven primarily by high-interaction social, gaming, and NFT projects (Source: AInvest). The ecosystem is defined by core projects like object marketplaces, Layer-2 bridges, social wallets, and game engine SDKs, which have attracted a large number of Web3 game developers and IP holders.

3.2 Dominant Use Cases

  • DeFi & Enterprise Integration (Aptos): With its mature BFT finality and a rich suite of financial tools, Aptos is better suited for stablecoins, lending, and derivatives—scenarios that demand high levels of consistency and security.
  • Gaming & NFTs (Sui): Sui's parallel execution advantage is clear here. Its low transaction latency and near-zero fees are ideal for high-concurrency, low-value interactions common in gaming, such as opening loot boxes or transferring in-game items.

4. Evolution & Strategy

Aptos

  • Performance Optimization: Continuing to advance sharding research, planning for multi-region cross-chain liquidity, and upgrading the AptosVM to improve state access efficiency.
  • Ecosystem Incentives: A multi-hundred-million-dollar ecosystem fund has been established to support DeFi infrastructure, cross-chain bridges, and compliant enterprise applications.
  • Cross-Chain Interoperability: Strengthening integrations with bridges like Wormhole and building out connections to Cosmos (via IBC) and Ethereum.

Sui

  • Object Model Iteration: Extending the Move syntax to support custom object types and complex permission management while optimizing the parallel scheduling algorithm.
  • Driving Consumer Adoption: Pursuing deep integrations with major game engines like Unreal and Unity to lower the barrier for Web3 game development, and launching social plugins and SDKs.
  • Community Governance: Promoting the SuiDAO to empower core project communities with governance capabilities, enabling rapid iteration on features and fee models.

5. Core Differences & Challenges

  • Security vs. Parallelism: Aptos's strict resource semantics and consistent consensus provide DeFi-grade security but can limit parallelism. Sui's highly parallel transaction model must continuously prove its resilience against large-scale security threats.
  • Ecosystem Depth vs. Breadth: Aptos has cultivated deep roots in the financial sector with strong institutional ties. Sui has rapidly accumulated a broad range of consumer-facing projects but has yet to land a decisive breakthrough in large-scale DeFi.
  • Theoretical Performance vs. Real-World Throughput: While Sui has higher theoretical TPS, its actual throughput is still constrained by ecosystem activity. Aptos has also experienced congestion during peak periods, indicating a need for more effective sharding or Layer-2 solutions.
  • Market Narrative & Positioning: Aptos markets itself on enterprise-grade security and stability, targeting traditional finance and regulated industries. Sui uses the allure of a "Web2-like experience" and "zero-friction onboarding" to attract a wider consumer audience.

6. The Path to Mass Adoption

Ultimately, this is not a zero-sum game.

In the medium to long term, if the consumer market (gaming, social, NFTs) continues its explosive growth, Sui's parallel execution and low entry barrier could position it for rapid adoption among tens of millions of mainstream users.

In the short to medium term, Aptos's mature BFT finality, low fees, and strategic partnerships give it a more compelling offering for institutional finance, compliance-focused DeFi, and cross-border payments.

The future is likely a symbiotic one where the two chains coexist, creating a stratified market: Aptos powering financial and enterprise infrastructure, while Sui dominates high-frequency consumer interactions. The chain that ultimately achieves mass adoption will be the one that relentlessly optimizes performance and user experience within its chosen domain.

Rollups-as-a-Service in 2025: OP, ZK, Arbitrum Orbit, Polygon CDK, and zkSync Hyperchains

· 70 min read
Dora Noda
Software Engineer

Introduction

Rollups-as-a-Service (RaaS) and modular blockchain frameworks have become critical in 2025 for scaling Ethereum and building custom blockchains. Leading frameworks – Optimism’s OP Stack, zkSync’s ZK Stack (Hyperchains), Arbitrum Orbit, Polygon’s Chain Development Kit (CDK), and related solutions – allow developers to launch their own Layer-2 (L2) or Layer-3 (L3) chains with varying approaches (optimistic vs zero-knowledge). These frameworks share a philosophy of modularity: they separate concerns like execution, settlement, data availability, and consensus, enabling customization of each component. This report compares the frameworks across key dimensions – data availability options, sequencer design, fee models, ecosystem support – and examines their architecture, tooling, developer experience, and current adoption in both public and enterprise contexts.
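
The modular philosophy described above can be made concrete as a configuration schema. The sketch below is a generic model of the design space, not any framework's real configuration file; type names and option values are illustrative assumptions:

```typescript
// A generic model of the modular design space these frameworks share.
// Type names and option lists are illustrative, not any framework's real config.
type ProofSystem = "optimistic-fraud-proof" | "zk-validity-proof";
type DataAvailability =
  | "ethereum-calldata"
  | "ethereum-blobs"
  | "celestia"
  | "avail"
  | "eigenda"
  | "data-availability-committee";
type SequencerMode = "single-operator" | "permissioned-set" | "shared-network";

interface RollupConfig {
  name: string;
  settlement: "ethereum" | "l2-parent"; // an L2 settles to L1; an L3 settles to an L2
  proofSystem: ProofSystem;
  dataAvailability: DataAvailability;
  sequencer: SequencerMode;
  gasToken: string; // ETH or a custom token, where the framework allows it
}

// The same schema can describe an OP-Stack-style rollup and a CDK-style validium:
const opStyleChain: RollupConfig = {
  name: "example-op-chain",
  settlement: "ethereum",
  proofSystem: "optimistic-fraud-proof",
  dataAvailability: "ethereum-blobs",
  sequencer: "single-operator",
  gasToken: "ETH",
};

const cdkStyleValidium: RollupConfig = {
  name: "example-cdk-chain",
  settlement: "ethereum",
  proofSystem: "zk-validity-proof",
  dataAvailability: "avail",
  sequencer: "permissioned-set",
  gasToken: "MATIC",
};
```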

Comparison Overview

The table below summarizes several core features of each framework:

| Aspect | OP Stack (Optimism) | ZK Stack (zkSync) | Arbitrum Orbit | Polygon CDK (AggLayer) |
| --- | --- | --- | --- | --- |
| Rollup Type | Optimistic Rollup | Zero-Knowledge (Validity) | Optimistic Rollup | Zero-Knowledge (Validity) |
| Proof System | Fault proofs (fraud proofs) | ZK-SNARK validity proofs | Fault proofs (fraud proofs) | ZK-SNARK validity proofs |
| EVM Compatibility | EVM-equivalent (geth) | High – zkEVM (LLVM-based) | EVM-equivalent (Arbitrum Nitro) + WASM via Stylus | Polygon zkEVM (EVM-equivalent) |
| Data Availability | Ethereum L1 (on-chain); pluggable Alt-DA modules (Celestia, etc.) | Ethereum L1; also Validium options off-chain (Celestia, Avail, EigenDA) | Ethereum L1 (rollup) or AnyTrust committee (off-chain DAC); supports Celestia, Avail | Ethereum L1 (rollup) or off-chain (validium via Avail or Celestia); hybrid possible |
| Sequencer Design | Single sequencer (default); multi-sequencer possible with customization. Shared sequencer vision for Superchain (future). | Configurable: can be centralized or decentralized; priority L1 queue supported. | Configurable: single operator or decentralized validators. | Flexible: single sequencer or multiple validators (e.g. PoS committee). |
| Sequencer Access | Centralized today (each OP chain’s sequencer is run by its operator); not permissionless yet. Plans for a shared, permissionless sequencer network among OP Chains. L1 backup queue allows trustless tx submission if sequencer fails. | zkSync Era uses a centralized sequencer (Matter Labs), but ZK Stack allows custom sequencer logic (even external consensus). Priority L1 sequencing supported for fairness. Decentralized sequencer options under development. | Arbitrum One uses a centralized sequencer (Offchain Labs), with failover via L1 inbox. Arbitrum Orbit chains can run their own sequencer (initially centralized) or institute a validator set. BoLD upgrade (2025) enables permissionless validation to decentralize Orbit chains. | Polygon zkEVM began with a single sequencer (Polygon Labs). CDK allows launching a chain with a permissioned validator set or other consensus for decentralization. Many CDK chains start centralized for simplicity, with roadmap for later community-run sequencers. |
| Fee Token | ETH by default on OP-based L2s (to ease UX). Custom gas token technically supported, but most OP Chains opt for ETH or a standard token for interoperability. (OP Stack’s recent guidance favors common tokens across the Superchain). | Custom base tokens are supported – developers can choose ETH or any ERC-20 as the native gas. (This flexibility enables project-specific economies on zkSync-based chains.) | Custom gas token supported (upgrade in late 2023). Chains may use ETH, Arbitrum’s ARB, or their own token for fees. Example: Ape Chain uses APE as gas. | Custom native token is supported. Many Polygon CDK chains use MATIC or another token as gas. Polygon’s ecosystem encourages MATIC for cross-chain consistency, but it’s not required. |
| Fee Model & Costs | Users pay L2 gas (collected by sequencer) plus L1 data posting costs. The sequencer must post transaction data (calldata or blobs) to Ethereum, so a portion of fees covers L1 gas. Revenue sharing: OP Chains in the Superchain commit ~2.5% of revenue to Optimism Collective (funding public goods). | Users pay fees (often in ETH or chosen token) which cover L1 proof verification and data. No protocol-level “tax” on fees – each chain’s sequencer keeps revenue to incentivize operators. ZK prover costs are a factor: operators might charge slightly higher fees or use efficient provers to manage costs. Finality is fast (no delay), so users don’t need third-party fast exits. | Users pay gas (in ETH or chain’s token) covering L2 execution + L1 batch cost. Sequencers/validators retain the fee revenue; no mandatory revenue-share to Arbitrum DAO or L1 (aside from L1 gas costs). To avoid the optimistic 7-day delay, many Orbit chains integrate liquidity providers or official fast-withdrawal bridges (Arbitrum supports 15-min fast exits on some Orbit chains via liquidity networks). | Users pay gas fees which cover proving and posting costs. Sequencers or validators earn those fees; Polygon does not impose any rent or tax on CDK chain revenue. Using off-chain DA (validium mode) can cut fees by >100× (storing data on Celestia or Avail instead of Ethereum), at the cost of some trust assumptions. |

Table: High-level comparison of key technical features of OP Stack, zkSync’s ZK Stack, Arbitrum Orbit, and Polygon CDK.

Data Availability Layers

Data Availability (DA) is where rollups store their transaction data so that anyone can reconstruct the chain’s state. All these frameworks support using Ethereum L1 as a DA (posting calldata or blob data on Ethereum for maximum security). However, to reduce costs, they also allow alternative DA solutions:

  • OP Stack: By default, OP chains publish data on Ethereum (as calldata or blobs). Thanks to a modular “Alt-DA” interface, OP Stack chains can plug into other DA layers easily. For example, an OP chain could use Celestia (a dedicated DA blockchain) instead of Ethereum. In 2023 OP Labs and Celestia released a beta where an OP Stack rollup settles on Ethereum but stores bulk data on Celestia. This reduces fees while inheriting Celestia’s data availability guarantees. In general, any EVM or non-EVM chain – even Bitcoin or a centralized store – can be configured as the DA layer in OP Stack. (Of course, using a less secure DA trades off some security for cost.) Ethereum remains the predominant choice for production OP chains, but projects like Caldera’s Taro testnet have demonstrated OP Stack with Celestia DA.

  • ZK Stack (zkSync Hyperchains): The ZK Stack offers both rollup and validium modes. In rollup mode, all data is on-chain (Ethereum). In validium mode, data is kept off-chain (with only validity proofs on-chain). Matter Labs is integrating Avail, Celestia, and EigenDA as first-class DA options for ZK Stack chains. This means a zkSync Hyperchain could post transaction data to Celestia or an EigenLayer-powered network instead of L1, massively increasing throughput. They even outline volition, where a chain can decide per-transaction whether to treat it as a rollup (on-chain data) or validium (off-chain). This flexibility allows developers to balance security and cost. For example, a gaming hyperchain might use Celestia to cheaply store data, while relying on Ethereum for periodic proofs. The ZK Stack’s design makes DA pluggable via a DA client/dispatcher component in the node software. Overall, Ethereum remains default, but zkSync’s ecosystem strongly emphasizes modular DA to achieve “hyperscale” throughput.

  • Arbitrum Orbit: Orbit chains can choose between Arbitrum’s two data modes: rollup (data posted on Ethereum) or AnyTrust (data availability committee). In Rollup configuration, an Orbit L3 will post its calldata to the L2 (Arbitrum One or Nova) or L1, inheriting full security at higher cost. In AnyTrust mode, data is kept off-chain by a committee (as used in Arbitrum Nova, which uses a Data Availability Committee). This greatly lowers fees for high-volume apps (gaming, social) at the cost of trusting a committee (if all committee members collude to withhold data, the chain could halt). Beyond these, Arbitrum is also integrating with emerging modular DA networks. Notably, Celestia and Avail are supported for Orbit chains as alternative DA layers. Projects like AltLayer have worked on Orbit rollups that use EigenDA (EigenLayer’s DA service) as well. In summary, Arbitrum Orbit offers flexible data availability: on-chain via Ethereum, off-chain via DACs or specialized DA chains, or hybrids. Many Orbit adopters choose AnyTrust for cost savings, especially if they have a known set of validators or partners ensuring data is available.

  • Polygon CDK: Polygon’s CDK is inherently modular with respect to DA. A Polygon CDK chain can operate as a rollup (all data on Ethereum) or a validium (data on a separate network). Avail – a data availability blockchain originally incubated within Polygon, now an independent project – is a natural fit, and CDK chains can use Avail or any similar service. In late 2023, Polygon announced direct integration of Celestia into CDK – making Celestia an “easily-pluggable” DA option in the toolkit. The integration was slated for early 2024, enabling CDK chains to store compressed data on Celestia seamlessly. Polygon cites that using Celestia could reduce transaction fees by >100× compared to posting all data on Ethereum. Thus, a CDK chain creator can simply toggle the DA module to Celestia (or Avail) instead of Ethereum. Some Polygon chains (e.g. Polygon zkEVM) currently post all data to Ethereum (for maximal security), while others (perhaps certain enterprise chains) run as validiums with external DA. The CDK supports “hybrid” modes as well – for instance, critical transactions could go on Ethereum while others go to Avail. This modular DA approach aligns with Polygon’s broader Polygon 2.0 vision of multiple ZK-powered chains with unified liquidity but varied data backends.

In summary, all frameworks support multiple DA layers to various degrees. Ethereum remains the gold standard DA (especially with blob space from EIP-4844 making on-chain data cheaper), but new specialized DA networks (Celestia, Avail) and schemes (EigenLayer’s EigenDA, data committees) are being embraced across the board. This modularity allows rollup creators in 2025 to make trade-offs between cost and security by simply configuring a different DA module rather than building a new chain from scratch.
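To make the “pluggable DA” pattern concrete, the sketch below models it as a TypeScript interface with interchangeable backends. This is a minimal illustration of the shared design, not the actual API of any of these frameworks; all names are invented.

```typescript
// Illustrative only: the DA-module abstraction these stacks converge on.
// Interface and class names are invented, not any framework's real API.
interface DataAvailabilityLayer {
  // Persist a compressed batch; return a commitment the settlement contracts can check.
  postBatch(batch: Uint8Array): Promise<{ commitment: string; costWei: bigint }>;
  // Retrieve the batch behind a commitment so anyone can reconstruct chain state.
  fetchBatch(commitment: string): Promise<Uint8Array>;
}

class EthereumBlobDA implements DataAvailabilityLayer {
  // EIP-4844 blobs on L1: maximum security, highest cost.
  async postBatch(batch: Uint8Array) {
    // ...submit a blob-carrying transaction to Ethereum...
    return { commitment: "0xblobVersionedHash", costWei: 0n };
  }
  async fetchBatch(commitment: string) {
    // ...read the blob back from the consensus layer...
    return new Uint8Array();
  }
}

class CelestiaDA implements DataAvailabilityLayer {
  // Blobs in a Celestia namespace: far cheaper, different trust assumptions.
  async postBatch(batch: Uint8Array) {
    // ...pay TIA to publish the blob under the chain's namespace...
    return { commitment: "0xnamespaceCommitment", costWei: 0n };
  }
  async fetchBatch(commitment: string) {
    // ...query a Celestia light node for the blob...
    return new Uint8Array();
  }
}

// Swapping DA becomes a configuration choice rather than a redesign:
const da: DataAvailabilityLayer =
  process.env.DA_LAYER === "celestia" ? new CelestiaDA() : new EthereumBlobDA();
```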

Sequencer Design and Decentralization

The sequencer is the node (or set of nodes) that orders transactions and produces blocks for a rollup. How the sequencer is designed – centralized vs decentralized, permissionless vs permissioned – affects the chain’s throughput and trust assumptions:

  • OP Stack (Optimism): Today, most OP Stack chains run a single sequencer operated by the chain’s core team or sponsor. For example, Optimism Mainnet’s sequencer is run by OP Labs, and Base’s sequencer is run by Coinbase. This yields low latency and simplicity at the cost of centralization (users must trust the sequencer to include their transactions fairly). However, Optimism has built in mechanisms for trust-minimization: there is an L1 transaction queue contract where users can submit transactions on Ethereum which the sequencer must include in the L2 chain. If the sequencer goes down or censors transactions, users can rely on L1 to eventually get included (albeit with some delay). This provides a safety valve against a malicious or failed sequencer (a code sketch of this L1 deposit path appears at the end of this section). In terms of decentralization, OP Stack is modular and theoretically allows multiple sequencers – e.g. one could implement a round-robin or proof-of-stake based block proposer set using the OP Stack code. In practice, this requires customization and is not the out-of-the-box configuration. The long-term Superchain roadmap envisions a shared sequencer for all OP Chains, which would be a set of validators sequencing transactions for many chains at once. A shared sequencer could enable cross-chain atomicity and reduce MEV across the Superchain. It’s still in development as of 2025, but the OP Stack’s design does not preclude plugging in such a consensus. For now, sequencer operations remain permissioned (run by whitelisted entities), but Optimism governance plans to decentralize this (possibly via staking or committee rotation) once the technology and economics are ready. In short: OP Stack chains start with centralized sequencing (with L1 as fallback), and a path to gradual decentralization is charted (moving from “Stage 0” to “Stage 2” maturity with no training wheels).

  • ZK Stack (zkSync Hyperchains): zkSync Era (the L2) currently uses a centralized sequencer operated by Matter Labs. However, the ZK Stack is built to allow various sequencing modes for new chains. Options include a centralized sequencer (easy start), a decentralized sequencer set (e.g. multiple nodes reaching consensus on ordering), a priority transaction queue from L1, or even an external sequencer service. In Matter Labs’ Elastic Chains vision, chains remain independent but interoperability is handled by the L1 contracts and a “ZK Router/Gateway” – this implies each chain can choose its own sequencer model as long as it meets the protocols for submitting state roots and proofs. Because ZK-rollups don’t require a consensus on L2 for security (validity proofs ensure correctness), decentralizing the sequencer is more about liveness and censorship-resistance. A Hyperchain could implement a round-robin block producer or even hook into a high-performance BFT consensus for its sequencers if desired. That said, running a single sequencer is far simpler and remains the norm initially. The ZK Stack docs mention that a chain could use an “external protocol” for sequencing – for instance, one could imagine using Tendermint or a similar off-the-shelf consensus engine as the block producer and then generating zk proofs for the blocks. Also, like others, zkSync has an L1 priority queue mechanism: users can send transactions to the zkSync contract with a priority fee to guarantee L1->L2 inclusion in a timely manner (mitigating censorship). Overall, permissionless participation in sequencing is not yet realized on zkSync chains (no public slot auction or staking-based sequencer selection in production), but the architecture leaves room for it. As validity proofs mature, we might see zkSync chains with community-run sequencer nodes that collectively decide ordering (once performance allows).

  • Arbitrum Orbit: On Arbitrum One (the main L2), the sequencer is centralized (run by Offchain Labs), though the chain’s state progression is ultimately governed by the Arbitrum validators and fraud proofs. Arbitrum has similarly provided an L1 queue for users as a backstop against sequencer issues. In Orbit (the L3 framework), each Orbit chain can have its own sequencer or validator set. Arbitrum’s Nitro tech includes the option to run a rollup with a decentralized sequencer: essentially, one could have multiple parties run the Arbitrum node software and use a leader election (possibly via the Arbitrum permissionless proof-of-stake chain in the future, or a custom mechanism). Out of the box, Orbit chains launched to date have been mostly centralized (e.g. the Xai gaming chain is run by a foundation in collaboration with Offchain Labs) – but this is a matter of configuration and governance. A noteworthy development is the introduction of BoLD (Bounded Liquidity Delay) in early 2025, which is a new protocol to make Arbitrum’s validation more permissionless. BoLD allows anyone to become a validator (prover) for the chain, resolving fraud challenges in a fixed time frame without a whitelist. This moves Arbitrum closer to trustless operation, although the sequencer role (ordering transactions day-to-day) might still be assigned or elected. Offchain Labs has expressed focus on advancing decentralization in 2024-2025 for Arbitrum. We also see multi-sequencer efforts: for example, an Orbit chain could use a small committee of known sequencers to get some fault tolerance (one goes down, another continues). Another angle is the idea of a shared sequencer for Orbit chains, though Arbitrum hasn’t emphasized this as much as Optimism. Instead, interoperability is achieved via L3s settling on Arbitrum L2 and using standard bridges. In summary, Arbitrum Orbit gives flexibility in sequencer design (from one entity to many), and the trend is toward opening the validator/sequencer set as the tech and community governance matures. Today, it’s fair to say Orbit chains start centralized but have a roadmap for permissionless validation.

  • Polygon CDK: Polygon CDK chains (sometimes referred to under the umbrella “AggLayer” in late 2024) can similarly choose their sequencer/consensus setup. Polygon’s zkEVM chain (operated by Polygon Labs) began with a single sequencer and centralized prover, with plans to progressively decentralize both. The CDK, being modular, allows a chain to plug in a consensus module – for instance, one could launch a CDK chain with a Proof-of-Stake validator set producing blocks, effectively decentralizing sequencing from day one. In fact, Polygon’s earlier framework (Polygon Edge) was used for permissioned enterprise chains using IBFT consensus; CDK chains could take a hybrid approach (run Polygon’s zkProver but have a committee of nodes propose blocks). By default, many CDK chains might run with a single operator for simplicity and then later adopt a consensus as they scale. Polygon is also exploring a shared sequencer or aggregator concept through the AggLayer hub, which is intended to connect all Polygon chains. While AggLayer primarily handles cross-chain messaging and liquidity, it could evolve into a shared sequencing service in the future (Polygon’s co-founders have discussed sequencer decentralization as part of Polygon 2.0). In general, permissionlessness is not yet present – one cannot spontaneously become a sequencer for someone’s CDK chain unless that project allows it. But projects like dYdX v4 (which launched as a standalone chain with its own decentralized validator set) and others show the appetite for validator-based chains. Polygon CDK makes it technically feasible to have many block producers, but the exact implementation is left to the chain deployer. Expect Polygon to roll out more guidance or even infrastructure for decentralized sequencers as more enterprises and communities launch CDK chains.

To summarize the sequencer comparison: all frameworks currently rely on a relatively centralized sequencer model in their live deployments to ensure efficiency. However, each provides a route to decentralization – whether via shared sequencing networks (OP Stack), pluggable consensus (CDK, ZK Stack), or permissionless validators (Arbitrum’s BoLD). The table below highlights sequencer designs:

| Sequencer Design | OP Stack | ZK Stack (zkSync) | Arbitrum Orbit | Polygon CDK |
| --- | --- | --- | --- | --- |
| Default operator model | Single sequencer (project-run) | Single sequencer (Matter Labs or project-run) | Single sequencer (project-run/Offchain Labs) | Single sequencer (project or Polygon-run) |
| Decentralization options | Yes – can customize consensus, e.g. multiple sequencers or a future shared set | Yes – configurable; can integrate external consensus or priority queues | Yes – configurable; can use multi-validator setups (AnyTrust committee or custom) | Yes – can integrate PoS validators or IBFT consensus (project’s choice) |
| Permissionless participation | Planned: Superchain shared sequencer (not yet live). Fraud provers are permissionless on L1 (anyone can challenge). | Not yet (no public sequencer auction). Validity proofs don’t need challengers. Community can run read-nodes, but not produce blocks unless chosen. | Emerging: BoLD enables anyone to validate fraud proofs. Sequencer still chosen by the chain (could be via DAO in future). | Not yet. Sequencers are appointed by chain owners, or validators are permissioned/staked. Polygon’s roadmap includes community validation eventually. |
| Censorship resistance | L1 queue for users ensures inclusion. Training-wheels governance can veto sequencer misconduct. | L1 priority queue for inclusion. Validium mode needs trust in the DA committee for data availability. | L1 inbox ensures inclusion if the sequencer stalls. DAC mode requires ≥1 honest committee member to supply data. | Depends on the chain’s consensus – e.g. if using a validator set, need ≥2/3 honest. Rollup mode fallback is L1 Ethereum inclusion. |

As seen, Optimism and Arbitrum include on-chain fallback queues, which is a strong censorship-resistance feature. ZK-based chains rely on the fact that a sequencer can’t forge state (thanks to ZK proofs), but if it censors, a new sequencer could be appointed by governance – an area still being refined. The trend in 2025 is that we’ll likely see more decentralized sequencer pools and possibly shared sequencer networks coming online, complementing these RaaS frameworks. Each project is actively researching this: e.g. Astria and others are building general shared sequencing services, and OP Labs, Polygon, and Offchain Labs have all mentioned plans to decentralize the sequencer role.
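As a concrete example of the L1 fallback queues described above, here is a hedged ethers.js (v6) sketch of force-including a transaction on an OP Stack chain through the Bedrock OptimismPortal contract. The portal address is a placeholder you would replace with your target chain’s deployment, and the one-line ABI should be verified against that chain’s contracts before use.

```typescript
import { ethers } from "ethers";

// Placeholder: the OptimismPortal proxy address of your target OP Stack chain.
const PORTAL_ADDRESS = "0xYourOptimismPortalProxy";

// Minimal ABI for the Bedrock portal's deposit entry point (verify against your chain).
const PORTAL_ABI = [
  "function depositTransaction(address _to, uint256 _value, uint64 _gasLimit, bool _isCreation, bytes _data) payable",
];

async function forceIncludeTransfer(l1RpcUrl: string, privateKey: string, recipient: string) {
  const provider = new ethers.JsonRpcProvider(l1RpcUrl);
  const wallet = new ethers.Wallet(privateKey, provider);
  const portal = new ethers.Contract(PORTAL_ADDRESS, PORTAL_ABI, wallet);

  // Deposit 0.1 ETH to `recipient` on L2 via the L1 deposit queue. Because the
  // rollup derives deposits directly from L1, the sequencer must include this
  // even if it is censoring the sender on L2 (inclusion may lag by minutes).
  const amount = ethers.parseEther("0.1");
  const tx = await portal.depositTransaction(recipient, amount, 100_000n, false, "0x", {
    value: amount, // ETH minted on L2 is supplied as msg.value on L1
  });
  console.log("L1 deposit transaction:", tx.hash);
  await tx.wait();
}
```

Arbitrum’s delayed inbox offers an analogous path: a user submits via the L1 inbox contract, and after a delay the message can be force-included even if the sequencer ignores it.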

Fee Models and Economics

Fee models determine who pays what in these rollup frameworks and how the economic incentives align for operators and the ecosystem. Key considerations include: What token are fees paid in? Who collects the fees? What costs (L1 posting, proving) must be covered? Are there revenue-sharing or kickback arrangements? How customizable are fee parameters?

  • Gas Token and Fee Customization: All compared frameworks allow customizing the native gas token, meaning a new chain can decide which currency users pay fees in. By default, rollups on Ethereum often choose ETH as the gas token for user convenience (users don’t need a new token to use the chain). For instance, Base (OP Stack) uses ETH for gas, as does zkSync Era and Polygon zkEVM. OP Stack technically supports replacing ETH with another ERC-20, but in the context of the OP Superchain, there’s a push to keep a standard (to make interoperability smoother). In fact, some OP Stack chains that initially considered a custom token opted for ETH – e.g., Worldcoin’s OP-chain uses ETH for fees even though the project has its own token WLD. On the other hand, Arbitrum Orbit launched without custom token support but quickly added it due to demand. Now Orbit chains can use ARB or any ERC-20 as gas. The Ape Chain L3 chose APE coin as its gas currency, showcasing this flexibility. Polygon CDK likewise lets you define the token; many projects lean towards using MATIC to align with Polygon’s ecosystem (and MATIC will upgrade to POL token under Polygon 2.0), but it’s not enforced. zkSync’s ZK Stack explicitly supports custom base tokens as well (the docs even have a “Custom base token” tutorial). This is useful for enterprise chains that might want, say, a stablecoin or their own coin for fees. It’s also crucial for app-chains that have their own token economy – they can drive demand for their token by making it the gas token. In summary, fee token is fully configurable in all frameworks, although using a widely-held token like ETH can lower user friction.

  • Fee Collection and Distribution: Generally, the sequencer (block producer) collects transaction fees on the L2/L3. This is a primary incentive for running a sequencer. For example, Optimism’s sequencer earns all the gas fees users pay on Optimism, but must then pay for posting batches to Ethereum. Usually, the sequencer will take the user-paid L2 fees, subtract the L1 costs, and keep the remainder as profit. On a well-run chain, L1 costs are a fraction of L2 fees, leaving some profit margin. For ZK-rollups, there’s an extra cost: generating the ZK proof. This can be significant (requiring specialized hardware or cloud compute). Currently, some ZK rollup operators subsidize proving costs (spending VC funds) to keep user fees low during the growth phase. Over time, proving costs are expected to drop with better algorithms and hardware. Framework-wise: zkSync and Polygon both allow the sequencer to charge a bit more to cover proving – and if a chain uses an external prover service, they might have a revenue split with them. Notably, no framework except the OP Superchain enforces revenue sharing at the protocol level. The Optimism Collective’s Standard Rollup Revenue scheme requires OP Chains to remit either 2.5% of gross fees or 15% of net profits (whichever is greater) to a collective treasury. This is a voluntary-but-expected agreement under the Superchain charter, rather than a smart contract enforcement, but all major OP Stack chains (Base, opBNB, Worldcoin, etc.) have agreed to it. Those fees (over 14,000 ETH so far) fund public goods via Optimism’s governance. In contrast, Arbitrum does not charge Orbit chains any fee; Orbit is permissionless to use. The Arbitrum DAO could potentially ask for some revenue sharing in the future (to fund its own ecosystem), but none exists as of 2025. Polygon CDK similarly does not impose a tax; Polygon’s approach is to attract users into its ecosystem (thus raising MATIC value and usage) rather than charge per-chain fees. Polygon co-founder Sandeep Nailwal explicitly said AggLayer “does not seek rent” from chains. zkSync also hasn’t announced any fee sharing – Matter Labs likely focuses on growing usage of zkSync Era and hyperchains, which indirectly benefits them via network effects and possibly future token value.

  • L1 Settlement Costs: A big part of the fee model is who pays for L1 transactions (posting data or proofs). In all cases, ultimately users pay, but the mechanism differs. In Optimistic rollups, the sequencer periodically posts batches of transactions (with calldata) to L1. The gas cost for those L1 transactions is paid by the sequencer using ETH. However, sequencers factor that into the L2 gas pricing. Optimism and Arbitrum have gas pricing formulas that estimate how much a transaction’s calldata will cost on L1 and include that in the L2 gas fee (often called the “amortized L1 cost” per tx). For example, a simple Optimism tx might incur 21,000 L2 gas for execution and maybe an extra few hundred for L1 data – the user’s fee covers both. If the pricing is misestimated, the sequencer might lose money on that batch or gain if usage is high. Sequencers typically adjust fees dynamically to match L1 conditions (raising L2 fees when L1 gas is expensive). In Arbitrum, the mechanism is similar, though Arbitrum has separate “L1 pricing” and “L2 pricing” components. In zkSync/Polygon (ZK), the sequencer must post a validity proof to L1 (costing a fixed gas amount to verify) plus either calldata (if rollup) or a state root (if validium). The proof verification cost is usually constant per batch (on zkSync Era it’s on the order of a few hundred thousand gas), so zkSync’s fee model spreads that cost across transactions. They might charge a slight overhead on each tx for proving. Notably, zkSync introduced features like state diffs and compression to minimize L1 data published. Polygon zkEVM likewise uses recursive proofs to batch many transactions into one proof, amortizing the verification cost. If a chain uses an alternative DA (Celestia/Avail), then instead of paying Ethereum for calldata, they pay that DA provider. Celestia, for instance, has its own gas token (TIA) to pay for data blobs. So a chain might need to convert part of fees to pay Celestia validators. Frameworks are increasingly abstracting these costs: e.g., an OP Stack chain could pay a Celestia DA node via an adapter, and include that cost in user fees. (The sketch after this list walks through this fee arithmetic.)

  • Costs to Users (Finality and Withdrawal): For optimistic rollups (OP Stack, Arbitrum Orbit in rollup mode), users face the infamous challenge period for withdrawals – typically 7 days on Ethereum L1. This is a usability hit, but most ecosystems have mitigations. Fast bridges (liquidity networks) allow users to swap their L2 tokens for L1 tokens instantly for a small fee, while arbitrageurs wait the 7 days. Arbitrum has gone further for Orbit chains, working with teams to enable fast withdrawals in as little as 15 minutes via liquidity providers integrated at the protocol level. This effectively means users don’t wait a week except in worst-case scenarios. ZK-rollups don’t have this delay – once a validity proof is accepted on L1, the state is final. So zkSync and Polygon users get faster finality (often minutes to an hour) depending on how often proofs are submitted. The trade-off is that proving might introduce a bit of delay between when a transaction is accepted on L2 and when it’s included in an L1 proof (could be a few minutes). But generally, ZK rollups are offering 10–30 minute withdrawals in 2025, which is a huge improvement over 7 days. Users may pay a slightly higher fee for immediate finality (to cover prover costs), but many deem it worth it. Fee Customization is also worth noting: frameworks allow custom fee schedules (like free transactions or gas subsidies) if projects want. For example, an enterprise could subsidize all user fees on their chain by running the sequencer at a loss (perhaps for a game or social app). Or they could set up a different gas model (some have toyed with no gas for certain actions, or alternative gas accounting). Since most frameworks aim for Ethereum-equivalence, such deep changes are rare, but possible with code modification. Arbitrum’s Stylus could enable different fee metering for WASM contracts (not charging for certain ops to encourage WASM usage, for instance). The Polygon CDK being open source and modular means if a project wanted to implement a novel fee mechanism (like fee burning or dynamic pricing), they could.
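The fee mechanics in the bullets above reduce to simple arithmetic. Below is a TypeScript sketch of a pre-EIP-4844-style L1 data fee estimate plus the sequencer-margin and Superchain revenue-share calculations; the overhead (188 gas) and scalar (0.684) are OP Mainnet’s historical Bedrock parameters, used here purely as example values.

```typescript
// Illustrative fee arithmetic for an optimistic-rollup sequencer (example values).

// Ethereum calldata gas: 4 gas per zero byte, 16 per nonzero byte.
function l1DataGas(txData: Uint8Array): bigint {
  let gas = 0n;
  for (const byte of txData) gas += byte === 0 ? 4n : 16n;
  return gas;
}

// Pre-Ecotone OP-style L1 data fee: (dataGas + overhead) * l1BaseFee * scalar.
function l1DataFeeWei(txData: Uint8Array, l1BaseFeeWei: bigint): bigint {
  const overheadGas = 188n;                  // historical OP Mainnet overhead
  const scalarNum = 684n, scalarDen = 1000n; // historical 0.684 fee scalar
  return ((l1DataGas(txData) + overheadGas) * l1BaseFeeWei * scalarNum) / scalarDen;
}

// Sequencer economics: gross L2 fee revenue minus L1 posting costs is the margin.
// Superchain members owe max(2.5% of gross revenue, 15% of net profit) to the Collective.
function superchainContribution(grossFeesWei: bigint, l1CostsWei: bigint): bigint {
  const netProfit = grossFeesWei - l1CostsWei;
  const grossShare = (grossFeesWei * 25n) / 1000n; // 2.5% of gross
  const netShare = (netProfit * 15n) / 100n;       // 15% of net
  return grossShare > netShare ? grossShare : netShare;
}
```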

In essence, all rollup frameworks strive to align economic incentives: make it profitable to operate a sequencer (via fee revenue), keep fees reasonable for users by leveraging cheaper DA, and (optionally) funnel some value to their broader ecosystem. Optimism’s model is unique in explicitly sharing revenue for public goods, while others rely on growth and token economics (e.g., more chains -> more MATIC/ETH usage, increasing those token’s value).

Architecture and Modularity

All these frameworks pride themselves on a modular architecture, meaning each layer of the stack (execution, settlement, consensus, DA, proofs) is swappable or upgradable. Let’s briefly note each:

  • OP Stack: Built as a series of modules corresponding to Ethereum’s layers – execution engine (OP EVM, derived from geth), consensus/rollup node (op-node), settlement smart contracts, and a fault-proof system (permissionless fault proofs went live on OP Mainnet in 2024). The OP Stack’s design goal was EVM equivalence (no custom gas schedule or opcode changes) and ease of integration with Ethereum tooling. The Bedrock upgrade in 2023 further modularized Optimism’s stack, making it easier to swap out components (e.g., to implement ZK proofs in the future, or use a different DA). Indeed, OP Stack is not limited to optimistic fraud proofs – the team has said it’s open to integrating validity proofs when they mature, essentially turning OP Stack chains into ZK rollups without changing the developer experience. The Superchain concept extends the architecture to multiple chains: standardizing inter-chain communication, bridging, and maybe shared sequencing. OP Stack comes with a rich set of smart contracts on L1 (for deposits, withdrawals, fraud proof verification, etc.), which chains inherit out-of-the-box. It’s effectively a plug-and-play L2 chain template – projects like Base launched by forking the OP Stack repos and configuring them to point at their own contracts.

  • ZK Stack: The ZK Stack is the framework underlying zkSync Era and future “Hyperchains.” Architecturally, it includes the zkEVM execution environment (an LLVM-based VM that allows running Solidity code with minimal changes), the prover system (the circuits and proof generation for transactions), the sequencer node, and the L1 contracts (the zkSync smart contracts that verify proofs and manage state roots). Modularity is seen in how it separates the ZK proof circuit from the execution – theoretically one could swap in a different proving scheme or even a different VM (though not trivial). The ZK Stack introduces the Elastic Chain Architecture with components like ZK Router and ZK Gateway. These act as an interoperability layer connecting multiple ZK Chains. It’s a bit like an “internet of ZK rollups” concept, where the Router (on Ethereum) holds a registry of chains and facilitates shared bridging/liquidity, and the Gateway handles messages between chains off-chain. This is modular because a new chain can plug into that architecture simply by deploying with the standard contracts. ZK Stack also embraces account abstraction at the protocol level (contracts as accounts, native meta-transactions), which is an architectural choice to improve UX. Another modular aspect: as discussed in DA, it can operate in rollup or validium mode – essentially flipping a switch in config. Also, the stack has a notion of Pluggable consensus for sequencing (as noted prior). Settlement layer can be Ethereum or potentially another chain: zkSync’s roadmap even floated settling hyperchains on L2 (e.g., an L3 that posts proofs to zkSync Era L2 instead of L1) – indeed they launched a prototype called “ZK Portal” for L3 settlement on L2. This gives a hierarchical modularity (L3->L2->L1). Overall, ZK Stack is a bit less turnkey for non-Matter-Labs teams as of 2025 (since running a ZK chain involves coordinating provers, etc.), but it’s highly flexible in capable hands.

  • Arbitrum Orbit: Arbitrum’s architecture is built on the Arbitrum Nitro stack, which includes the ArbOS execution layer (Arbitrum’s interpretation of EVM with some small differences), the Sequencer/Relay, the AnyTrust component for alternative DA, and the fraud proof machinery (interactive fraud proofs). Orbit essentially lets you use that same stack but configure certain parameters (like chain ID, L2 genesis state, choice of rollup vs AnyTrust). Modularity: Arbitrum introduced Stylus, a new WASM-compatible smart contract engine that runs alongside the EVM. Stylus allows writing contracts in Rust, C, C++ which compile to WASM and run with near-native speed on Arbitrum chains. This is an optional module – Orbit chains can enable Stylus or not. It’s a differentiator for Arbitrum’s stack, making it attractive for high-performance dApps (e.g., gaming or trading apps might write some logic in Rust for speed). The data availability module is also pluggable as discussed (Arbitrum chains can choose on-chain or DAC). Another module is the L1 settlement: Orbit chains can post their proofs to either Ethereum (L1) or to Arbitrum One (L2). If the latter, they effectively are L3s anchored in Arbitrum One’s security (with slightly different trust assumptions). Many Orbit chains are launching as L3s (to inherit Arbitrum One’s lower fees and still ultimately Ethereum security). Arbitrum’s codebase is fully open source now, and projects like Caldera, Conduit build on it to provide user-friendly deployment – they might add their own modules (like monitoring, chain management APIs). It’s worth noting Arbitrum’s fraud proofs were historically not permissionless (only whitelisted validators could challenge), but with BoLD, that part of the architecture is changing to allow anyone to step in. So the fraud proof component is becoming more decentralized (which is a modular upgrade in a sense). One might say Arbitrum is less of a “lego kit” than OP Stack or Polygon CDK, in that Offchain Labs hasn’t released a one-click chain launcher (though they did release an Orbit deployment GUI on GitHub). But functionally, it’s modular enough that third parties have automated deployments for it.

  • Polygon CDK (AggLayer): Polygon CDK is explicitly described as a “modular framework” for ZK-powered chains. It leverages Polygon’s ZK proving technology (from Polygon zkEVM, which is based on Plonky2 and recursive SNARKs). The architecture separates the execution layer (which is an EVM – specifically a fork of Geth adjusted for zkEVM) from the prover layer and the bridge/settlement contracts. Because it’s modular, a developer can choose different options for each: e.g. Execution – presumably always EVM for now (to use existing tooling), DA – as discussed (Ethereum or others), Sequencer consensus – single vs multi-node, Prover – one can run the prover Type1 (validity proofs posted to Ethereum) or a Type2 (validium proofs) etc., and AggLayer integration – yes or no (AggLayer for interop). Polygon even provided a slick interface (shown below) to visualize these choices:

Polygon CDK’s configuration interface, illustrating modular choices – e.g. Rollups vs Validium (scaling solution), decentralized vs centralized sequencer, local/Ethereum/3rd-party DA, different prover types, and whether to enable AggLayer interoperability.

Under the hood, Polygon CDK uses zk-Proofs with recursion to allow high throughput and a dynamic validator set. The AggLayer is an emerging part of the architecture that will connect chains for trustless messaging and shared liquidity. The CDK is built in a way that future improvements in Polygon’s ZK tech (like faster proofs, or new VM features) can be adopted by all CDK chains via upgrades. Polygon has a concept of “Type 1 vs Type 2” zkEVM – Type 1 is fully Ethereum-equivalent, Type 2 is almost equivalent with minor changes for efficiency. A CDK chain could choose a slightly modified EVM for more speed (sacrificing some equivalence) – this is an architectural option projects have. Overall, CDK is very lego-like: one can assemble a chain choosing components suitable for their use case (e.g., an enterprise might choose validium + permissioned sequencers + private Tx visibility; a public DeFi chain might choose rollup + decentralized sequencer + AggLayer enabled for liquidity). This versatility has attracted many projects to consider CDK for launching their own networks.

  • Images and diagrams: The frameworks often provide visual diagrams of their modular architecture. For example, zkSync’s UI shows toggles for Rollup/Validium, L2/L3, centralized/decentralized, etc., highlighting the ZK Stack’s flexibility:

An example configuration for a zkSync “Hyperchain.” The ZK Stack interface allows selecting chain mode (Rollup vs Validium vs Volition), layer (L2 or L3), transaction sequencing (decentralized, centralized, or shared), data availability source (Ethereum, third-party network, or custom), data visibility (public or private chain), and gas token (ETH, custom, or gasless). This modular approach is designed to support a variety of use cases, from public DeFi chains to private enterprise chains.
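The toggles shown in both configuration interfaces map naturally onto a single declarative config object. The sketch below is hypothetical – every field name is invented for illustration and matches no framework’s actual schema – but it captures the decisions a chain deployer makes across these stacks.

```typescript
// Hypothetical chain configuration mirroring the modular choices described above.
// All field names are invented; no framework uses this exact schema.
type ChainConfig = {
  mode: "rollup" | "validium" | "volition";            // where transaction data lives
  settlementLayer: "ethereum-l1" | "parent-l2";        // an L2 settles to L1; an L3 settles to an L2
  dataAvailability: "ethereum" | "celestia" | "avail" | "eigenda" | "dac";
  sequencer: { model: "single" | "pos-set" | "shared"; operators: string[] };
  gasToken: { symbol: string; l1Address?: string };    // ETH by default, or a custom ERC-20
  interop: { aggLayer: boolean; superchain: boolean }; // opt into an ecosystem's interop hub
};

// Example: a gaming chain trading some security for cost, as discussed above.
const gamingChain: ChainConfig = {
  mode: "validium",
  settlementLayer: "parent-l2",
  dataAvailability: "celestia",
  sequencer: { model: "single", operators: ["0xGameStudioSequencer"] },
  gasToken: { symbol: "GAME", l1Address: "0xGameTokenOnEthereum" },
  interop: { aggLayer: false, superchain: false },
};
```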

In summary, all these stacks are highly modular and upgradable, which is essential given the pace of blockchain innovation. They are converging in some sense: OP Stack adding validity proofs, Polygon adding shared sequencing (OP Stack ideas), Arbitrum adding interoperable L3s (like others), zkSync pursuing L3s (like Orbit and OP Stack do). This cross-pollination means modular frameworks in 2025 are more alike than different in philosophy – each wants to be the one-stop toolkit to launch scalable chains without reinventing the wheel.

Developer Experience and Tooling

A critical factor for adoption is how easy and developer-friendly these frameworks are. This includes documentation, SDKs/APIs, CLIs for deployment, monitoring tools, and the learning curve for developers:

  • OP Stack – Developer Experience: Optimism’s OP Stack benefits from being EVM-equivalent, so Ethereum developers can use familiar tools (Remix, Hardhat, Truffle, Solidity, Vyper) without modification. Smart contracts deployed to an OP chain behave exactly as on L1. This drastically lowers the learning curve. Optimism provides extensive documentation: the official Optimism docs have sections on the OP Stack, running an L2 node, and even an “OP Stack from scratch” tutorial. There are community-written guides as well (for example, QuickNode’s step-by-step guide on deploying an Optimism L2 rollup). In terms of tooling, OP Labs has released the op-node client (for the rollup node) and op-geth (execution engine). To launch a chain, a developer typically needs to configure these and deploy the L1 contracts (Standard Bridge, etc.). This was non-trivial but is becoming easier with provider services. Deployment-as-a-service: companies like Caldera, Conduit, and Infura/Alchemy offer managed OP Stack rollup deployments, which abstracts away much of the DevOps. For monitoring, because an OP Stack chain is essentially a geth chain plus a rollup coordinator, standard Ethereum monitoring tools (ETH metrics dashboards, block explorers like Etherscan/Blockscout) can be used. In fact, Etherscan supports OP Stack chains such as Optimism and Base, providing familiar block explorer interfaces. Developer tooling specifically for OP Chains includes the Optimism SDK for bridging (facilitating deposits/withdrawals in apps; see the bridging sketch after this list) and Bedrock’s integration with Ethereum JSON-RPC (so tools like MetaMask just work by switching network). The OP Stack code is MIT licensed, inviting developers to fork and experiment. Many did – e.g. BNB Chain’s team used OP Stack to build opBNB with their own modifications to consensus and gas token (they use BNB gas on opBNB). The OP Stack’s adherence to Ethereum standards makes the developer experience arguably the smoothest among these: essentially “Ethereum, but cheaper” from a contract developer’s perspective. The main new skills needed are around running the infrastructure (for those launching a chain) and understanding cross-chain bridging nuances. Optimism’s community and support (Discord, forums) are active to help new chain teams. Additionally, Optimism has funded ecosystem tools like Magi (an alternative Rust rollup client) to diversify the stack and make it more robust for developers.

  • zkSync ZK Stack – Developer Experience: On the contract development side, zkSync’s ZK Stack offers a zkEVM that is intended to be highly compatible but is currently not 100% bytecode-equivalent. It supports Solidity and Vyper contracts, but there are subtle differences (for example, certain precompiles or gas costs). That said, Matter Labs built an LLVM compiler that takes Solidity and produces zkEVM bytecode, so most Solidity code works with little to no change. They also natively support account abstraction, which devs can leverage to create gasless transactions, multi-sig wallets, etc., more easily than on Ethereum (no need for ERC-4337; see the paymaster sketch after this list). The developer docs for zkSync are comprehensive (docs.zksync.io) and cover how to deploy contracts, use the Hyperchain CLI (if any), and configure a chain. However, running a ZK rollup is inherently more complex than an optimistic one – you need a proving setup. The ZK Stack provides the prover software (e.g. the GPU provers for zkSync’s circuits), but a chain operator must have access to serious hardware or cloud services to generate proofs continuously. This is a new DevOps challenge; to mitigate it, some companies are emerging that provide prover services or even Proof-as-a-Service. If a developer doesn’t want to run their own provers, they might be able to outsource it (with trust or crypto-economic assurances). Tooling: zkSync provides a bridge and wallet portal by default (the zkSync Portal) which can be forked for a new chain, giving users a UI to move assets and view accounts. For block exploration, Blockscout has been adapted to zkSync, and Matter Labs built their own block explorer for zkSync Era which could likely be used for new chains. The existence of the ZK Gateway and Router means that if a developer plugs into that, they get some out-of-the-box interoperability with other chains – but they need to follow Matter Labs’ standards. Overall, for a smart contract dev, building on zkSync is not too difficult (just Solidity, with minor differences – e.g. gasleft() may behave slightly differently since actual Ethereum gas costs don’t apply). But for a chain operator, the ZK Stack has a steeper learning curve than OP Stack or Orbit. In 2025, Matter Labs is focusing on improving this – for instance, simplifying the process of launching a Hyperchain, possibly providing scripts or cloud images to spin up the whole stack. There is also an emerging community of devs around ZK Stack; e.g., the ZKSync Community Edition is an initiative where community members run test L3 chains and share tips. We should note that language support for zkSync’s ecosystem might expand – they’ve talked about allowing other languages via the LLVM pipeline (e.g., a Rust-to-zkEVM compiler in the future), but Solidity is the main one now. In summary, zkSync’s dev experience: great for DApp devs (nearly Ethereum-like), moderate for chain launchers (who must handle provers and new concepts like validiums).

  • Arbitrum Orbit – Developer Experience: For Solidity developers, Arbitrum Orbit (and Arbitrum One) is fully EVM-compatible at the bytecode level (Arbitrum Nitro uses geth-derived execution). Thus, deploying and interacting with contracts on an Arbitrum chain is just like Ethereum (with some small differences like slightly different L1 block number access, chainID, etc., but nothing major). Where Arbitrum stands out is Stylus – developers can write smart contracts in languages like Rust, C, C++ (compiled to WebAssembly) and deploy those alongside EVM contracts. This opens blockchain development to a wider pool of programmers and enables high-performance use cases. For example, algorithm-intensive logic could be written in C for speed. Stylus is still in beta on Arbitrum mainnet, but Orbit chains can experiment with it. This is a unique boon for developer experience, albeit those using Stylus will need to learn new tooling (e.g., Rust toolchains, and Arbitrum’s libraries for interfacing WASM with the chain). The Arbitrum docs provide guidance on using Stylus and even writing Rust smart contracts. For launching an Orbit chain, Offchain Labs has provided devnet scripts and an Orbit deployment UI. The process is somewhat technical: one must set up an Arbitrum node with --l3 flags (if launching an L3) and configure the genesis, chain parameters, etc. QuickNode and others have published guides (“How to deploy your own Arbitrum Orbit chain”). Additionally, Orbit partnerships with Caldera, AltLayer, and Conduit mean these third parties handle a lot of the heavy lifting. A developer can essentially fill out a form or run a wizard with those services to get a customized Arbitrum chain, instead of manually modifying the Nitro code. In terms of debugging and monitoring, Arbitrum chains can use Arbiscan (for those that have it) or community explorers. There’s also Grafana/Prometheus integrations for node metrics. One complexity is the fraud proof system – developers launching an Orbit chain should ensure there are validators (maybe themselves or trusted others) who run the off-chain validator software to watch for fraud. Offchain Labs likely provides default scripts for running such validators. But since fraud proofs rarely trigger, it’s more about having the security process in place. Arbitrum’s large developer community (projects building on Arbitrum One) is an asset – resources like tutorials, stackexchange answers, etc., often apply to Orbit as well. Also, Arbitrum is known for its strong developer education efforts (workshops, hackathons), which presumably extend to those interested in Orbit.

  • Polygon CDK – Developer Experience: Polygon CDK is newer (announced mid/late 2023), but it builds on familiar components. For developers writing contracts, Polygon CDK chains use a zkEVM that’s intended to be equivalent to Ethereum’s EVM (Polygon’s Type 2 zkEVM is nearly identical with a few edge cases). So, Solidity and Vyper are the go-to languages, with full support for standard Ethereum dev tools. If you’ve deployed on Polygon zkEVM or Ethereum, you can deploy on a CDK chain similarly. The challenge is more on the chain operations side. Polygon’s CDK is open-source on GitHub and comes with documentation on how to configure a chain. It likely provides a command-line tool to scaffold a new chain (similar to how one might use Cosmos SDK’s starport or Substrate’s node template). Polygon Labs has invested in making the setup as easy as possible – one quote: “launch a high-throughput ZK-powered Ethereum L2 as easily as deploying a smart contract”. While perhaps optimistic, this indicates tools or scripts exist to simplify deployment. Indeed, there have been early adopters like Immutable (for gaming) and OKX (exchange chain) that have worked with Polygon to launch CDK chains, suggesting a fairly smooth process with Polygon’s team support. The CDK includes SDKs and libraries to interact with the bridge (for deposits/withdrawals) and to enable AggLayer if desired. Monitoring a CDK chain can leverage Polygon’s block explorer (Polygonscan) if they integrate it, or Blockscout. Polygon is also known for robust SDKs for gaming and mobile (e.g., Unity SDKs) – those can be used on any Polygon-based chain. Developer support is a big focus: Polygon has academies, grants, hackathons regularly, and their Developer Relations team helps projects one-on-one. An example of enterprise developer experience: Libre, an institutional chain launched with CDK, presumably had custom requirements – Polygon was able to accommodate things like identity modules or compliance features on that chain. This shows the CDK can be extended for specific use cases by developers with help from the framework. As for learning materials, Polygon’s docs site and blog have guides on CDK usage, and because CDK is essentially the evolution of their zkEVM, those familiar with Polygon’s zkEVM design can pick it up quickly. One more tooling aspect: Cross-chain tools – since many Polygon CDK chains will coexist, Polygon provides the AggLayer for messaging, but also encourages use of standard cross-chain messaging like LayerZero (indeed Rarible’s Orbit chain integrated LayerZero for NFT transfers and Polygon chains can too). So, devs have options to integrate interoperability plugins easily. All told, the CDK developer experience is aimed to be turnkey for launching Ethereum-level chains with ZK security, benefiting from Polygon’s years of L2 experience.
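Two of the developer-experience claims above are easiest to see in code. First, the bridging sketch referenced in the OP Stack bullet: a hedged example using the @eth-optimism/sdk CrossChainMessenger (an ethers-v5-era library) to deposit ETH from L1 to an OP chain. Chain IDs and RPC URLs are placeholders.

```typescript
import { ethers } from "ethers";
import { CrossChainMessenger, MessageStatus } from "@eth-optimism/sdk";

async function depositToOpChain(l1Rpc: string, l2Rpc: string, privateKey: string) {
  const l1Provider = new ethers.providers.JsonRpcProvider(l1Rpc);
  const l2Provider = new ethers.providers.JsonRpcProvider(l2Rpc);
  const l1Signer = new ethers.Wallet(privateKey, l1Provider);

  const messenger = new CrossChainMessenger({
    l1ChainId: 1,  // Ethereum mainnet
    l2ChainId: 10, // OP Mainnet; substitute your OP Stack chain's ID
    l1SignerOrProvider: l1Signer,
    l2SignerOrProvider: l2Provider,
  });

  // Deposit 0.05 ETH through the Standard Bridge and wait for the L2 relay.
  const tx = await messenger.depositETH(ethers.utils.parseEther("0.05"));
  await tx.wait();
  await messenger.waitForMessageStatus(tx.hash, MessageStatus.RELAYED);
}
```

Second, the paymaster sketch referenced in the zkSync bullet: native account abstraction lets a contract pay gas on a user’s behalf. This follows the general-paymaster flow from the zksync-ethers docs, but the paymaster address is a placeholder and the exact parameters should be checked against the current library version.

```typescript
import { Provider, Wallet, utils } from "zksync-ethers";
import { ethers } from "ethers";

async function gaslessTransfer(l2Rpc: string, privateKey: string, recipient: string) {
  const provider = new Provider(l2Rpc);
  const wallet = new Wallet(privateKey, provider);

  // Placeholder: a deployed general paymaster willing to sponsor this transaction.
  const paymasterParams = utils.getPaymasterParams("0xYourPaymaster", {
    type: "General",
    innerInput: new Uint8Array(),
  });

  // The paymaster, not the sender, covers the gas for this transfer.
  const tx = await wallet.sendTransaction({
    to: recipient,
    value: ethers.parseEther("0.01"),
    customData: {
      gasPerPubdata: utils.DEFAULT_GAS_PER_PUBDATA_LIMIT,
      paymasterParams,
    },
  });
  await tx.wait();
}
```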

In conclusion, developer experience has dramatically improved for launching custom chains: what once required a whole team of protocol engineers can now be done with guided frameworks and support. Optimism’s and Arbitrum’s offerings leverage familiarity (EVM equivalence), zkSync and Polygon offer cutting-edge tech with increasing ease-of-use, and all have growing ecosystems of third-party tools to simplify development (from block explorers to monitoring dashboards and devops scripts). The documentation quality is generally high – official docs plus community guides (Medium articles, QuickNode/Alchemy guides) cover a lot of ground. There is still a non-trivial learning curve to go from smart contract developer to “rollup operator,” but it’s getting easier as best practices emerge and the community of rollup builders expands.

Ecosystem Support and Go-to-Market Strategies

Building a technology is one thing; building an ecosystem is another. Each of these frameworks is backed by an organization or community investing in growth through grants, funding, marketing, and partnership support. Here we compare their ecosystem support strategies – how they attract developers and projects, and how they help those projects succeed:

  • OP Stack (Optimism) Ecosystem: Optimism has a robust ecosystem strategy centered on its Optimism Collective and ethos of public goods funding. They pioneered Retroactive Public Goods Funding (RPGF) – using the OP token treasury to reward developers and projects that benefit the ecosystem. Through multiple RPGF rounds, Optimism has distributed millions in funding to infrastructure projects, dev tools, and applications on Optimism. Any project building with OP Stack (especially if aligning with the Superchain vision) is eligible to apply for grants from the Collective. Additionally, Optimism’s governance can authorize incentive programs (earlier in 2022, they had an airdrop and governance fund that projects could tap to distribute OP rewards to users). In 2024, Optimism established the Superchain Revenue Sharing model, where each OP Chain contributes a small portion of fees to a shared treasury. This creates a flywheel: as more chains (like Base, opBNB, Worldcoin’s chain, etc.) generate usage, they collectively fund more public goods that improve the OP Stack, which in turn attracts more chains. It’s a positive-sum approach unique to Optimism. On the go-to-market side, Optimism has actively partnered with major entities: getting Coinbase to build Base was a huge validation of OP Stack, and OP Labs provided technical help and support to Coinbase during that process. Similarly, they’ve worked with Worldcoin’s team, and Celo’s migration to an OP Stack L2 was done with consultation from OP Labs. Optimism does a lot of developer outreach – from running hackathons (often combined with ETHGlobal events) to maintaining a Developer Hub with tutorials. They also invest in tooling: e.g., funding teams to build alternative clients, monitoring tools, and providing an official faucet and block explorer integration for new chains. Marketing-wise, Optimism coined the term “Superchain” and actively promotes the vision of many chains uniting under one interoperable umbrella, which has attracted projects that want to be part of a broader narrative rather than an isolated appchain. There’s also the draw of shared liquidity: with upcoming Superchain interoperability, apps on one OP Chain will be able to interact easily with another, making it appealing to launch a chain that’s not an island. In essence, OP Stack’s ecosystem play is about community and collaboration – join the Superchain, get access to a pool of users (via easy bridging), funding, and collective branding. They even created a “Rollup Passport” concept where users can have a unified identity across OP Chains. All these efforts lower the barrier for new chains to find users and devs. Finally, Optimism’s own user base and reputation (being one of the top L2s) means any OP Stack chain can somewhat piggyback on that (Base did, by advertising itself as part of the Optimism ecosystem, for instance).

  • zkSync (ZK Stack/Hyperchains) Ecosystem: Matter Labs (the team behind zkSync) secured large funding rounds (over $200M) to fuel its ecosystem. They have set up funds like the zkSync Ecosystem Fund, often in collaboration with VCs, to invest in projects building on zkSync Era. For the ZK Stack specifically, they have started to promote the concept of Hyperchains to communities that need their own chain. One strategy is targeting specific verticals: for example, gaming. zkSync has highlighted how a game studio could launch its own Hyperchain to get customizability and still be connected to Ethereum. They are likely offering close support to initial partners (in the way Polygon did with some enterprises). The mention in the Zeeve article about a “Swiss bank; world’s largest bank” interested in ZK Stack suggests Matter Labs is courting enterprise use cases that need privacy (ZK proofs can ensure correctness while keeping some data private, a big deal for institutions). If zkSync lands a major enterprise chain, that would boost their credibility. Developer support on zkSync is quite strong: they run accelerators (e.g., a program with Blockchain Founders Fund was announced), hackathons (often zk-themed ones), and have an active community on their Discord providing technical help. zkSync’s ZK governance token went live in mid-2024, and projects may anticipate incentive programs funded from its allocations. Matter Labs has also been working on bridging support: they partnered with major bridges like Across, LayerZero, Wormhole to ensure assets and messages can move easily to and from zkSync and any hyperchains. In fact, Across Protocol integrated zkSync’s ZK Stack, boasting support across “all major L2 frameworks”. This interoperability focus means a project launching a hyperchain can readily connect to Ethereum mainnet and other L2s, crucial for attracting users (nobody wants to be siloed). Marketing-wise, zkSync pushes the slogan “Web3 without compromise” and emphasizes being first to ZK mainnet. They publish roadmaps (their 2025 roadmap blog) to keep excitement high. If we consider ecosystem funds: aside from direct Matter Labs grants, there’s also the Ethereum Foundation and other ZK-focused funds that favor zkSync development due to the general importance of ZK tech. Another strategy: zkSync is open source and neutral (no licensing fees), which appeals to projects that might be wary of aligning with a more centralized ecosystem. The ZK Stack is trying to position itself as the decentralizer’s choice – e.g., highlighting full decentralization and no training wheels, whereas OP Stack and others still have some centralization in practice. Time will tell if that resonates, but certainly within the Ethereum community, zkSync has supporters who want a fully trustless stack. Finally, Matter Labs and BitDAO’s Windranger have a joint initiative called “ZK DAO” which might deploy capital or incentives for ZK Stack adoption. Overall, zkSync’s ecosystem efforts are a mix of technical superiority messaging (ZK is the future) and building practical bridges (both figurative and literal) for projects to come onboard.

  • Arbitrum Orbit Ecosystem: Arbitrum has a huge existing ecosystem on its L2 (Arbitrum One), with the highest DeFi TVL among L2s in 2024. Offchain Labs leverages this by encouraging successful Arbitrum dApps to consider Orbit chains for sub-applications or L3 expansions. They announced that over 50 Orbit chains were in development by late 2023, expecting perhaps 100+ by end of 2024 – indicating substantial interest. To nurture this, Offchain Labs adopted a few strategies. First, partnerships with RaaS providers: they realized not every team can handle the rollup infra, so they enlisted Caldera, Conduit, and AltLayer to streamline it. These partners often have their own grant or incentive programs (sometimes co-sponsored by Arbitrum) to entice projects. For example, there might be an Arbitrum x AltLayer grant for gaming chains. Second, Offchain Labs provides direct technical support and co-development for key projects. The case of Xai Chain is illustrative: it’s a gaming L3 where Offchain Labs co-developed the chain and provides ongoing tech and even marketing support. They basically helped incubate Xai to showcase Orbit’s potential in gaming. Similarly, Rarible’s RARI NFT chain got integrated with many partners (Gelato for gasless, LayerZero for cross-chain NFTs, etc.) with presumably Arbitrum’s guidance. Offchain Labs also sometimes uses its war chest (the Arbitrum DAO has a huge treasury of ARB tokens) to fund initiatives. While the Arbitrum DAO is separate, Offchain Labs can coordinate with it for ecosystem matters. For instance, if an Orbit chain heavily uses the ARB token or benefits Arbitrum, the DAO could vote to award grants. However, a more direct approach: Offchain Labs launched Arbitrum Orbit Challenge hackathons and prizes to encourage developers to try making L3s. On marketing: Arbitrum’s brand is developer-focused, and they promote Orbit’s advantages like Stylus (fast, multi-language contracts) and no 7-day withdrawal (with fast bridging). They also highlight successful examples: e.g., Treasure DAO’s Bridgeworld announced an Orbit chain, etc. One more support angle: liquidity and DeFi integration. Arbitrum is working with protocols so that if you launch an Orbit chain, you can tap into liquidity from Arbitrum One easily (via native bridging or LayerZero). The easier it is to get assets and users moving to your new chain, the more likely you’ll succeed. Arbitrum has a very large, active community (on Reddit, Discord, etc.), and by extending that to Orbit, new chains can market to existing Arbitrum users (for example, an Arbitrum user might get an airdrop on a new Orbit chain to try it out). In summary, Arbitrum’s ecosystem strategy for Orbit is about leveraging their L2 dominance – if you build an L3, you’re effectively an extension of the largest L2, so you get to share in that network effect. Offchain Labs is actively removing hurdles (technical and liquidity hurdles) and even directly helping build some early L3s to set precedents for others to follow.

  • Polygon CDK (AggLayer) Ecosystem: Polygon has been one of the most aggressive in ecosystem and business development. They have a multi-pronged approach:

    • Grants and Funds: Polygon established a $100M Ecosystem Fund a while back and has invested in hundreds of projects. They also ran vertical-specific funds (e.g., a Polygon Gaming Fund and a Polygon DeFi Fund). For CDK chains, Polygon announced incentives such as covering part of the cost of running a chain or providing liquidity support. The CoinLaw stats mention that “More than 190 dApps are leveraging Polygon CDK to build their own chains” – implying Polygon has built a vast pipeline of projects (many likely still in development), presumably supported by grants or resource sharing.
    • Enterprise and Institutional Onboarding: Polygon’s BizDev team has onboarded major companies (Starbucks, Reddit, Nike, and Disney for NFTs on Polygon PoS). Now with CDK, they pitch enterprises on launching dedicated chains – e.g., Immutable (gaming platform) partnering to use CDK for game-specific chains, Franklin Templeton launching a fund on Polygon, and Walmart trialing a supply-chain application on a private Polygon chain. Polygon provides white-glove support to these partners: technical consulting, custom feature development (privacy, compliance), and co-marketing. The introduction of Libre (by JP Morgan/Siemens) built on Polygon CDK shows how they cater to financial institutions with specialized needs.
    • Go-to-Market and Interoperability: Polygon is creating the AggLayer as an interoperability and liquidity hub connecting all Polygon chains. This means if you launch a CDK chain, you’re not on your own – you become part of “Polygon 2.0,” a constellation of chains with unified liquidity. They promise things like one-click token transfer between CDK chains and Ethereum (via AggLayer). They are also not charging any protocol fees (no rent), which they tout as a competitive advantage against, say, Optimism’s fee sharing. Polygon’s marketing highlights that launching a CDK chain can give you “the best of both worlds”: custom sovereignty and performance plus access to the large user base and developer base of Polygon/Ethereum. They often cite that Polygon (POS+zkEVM) combined processed 30%+ of all L2 transactions, to assure potential chain builders that the flow of users on Polygon is huge.
    • Developer Support: Polygon runs perhaps the most hackathons and DevRel events in the blockchain space. They have a dedicated Polygon University, online courses, and they frequently sponsor ETHGlobal and other hackathons with challenges around using CDK, zkEVM, etc. So developers can win prizes building prototypes of CDK chains or cross-chain dapps. They also maintain a strong presence in developer communities and provide quick support (the Polygon Discord has channels for technical questions where core devs answer).
    • Community and Governance: Polygon is transitioning to Polygon 2.0 with a new POL token and community governance that spans all chains. This could mean community treasuries or incentive programs that apply to CDK chains. For example, there may be a Polygon Ecosystem Mining program where liquidity mining rewards are offered to projects that deploy on new CDK chains to bootstrap usage. The idea is to ensure new chains aren’t ghost towns.
    • Success Stories: Already, several CDK chains are live or announced: OKX’s OKB Chain (X Layer), Gnosis Pay’s chain, Astar’s zkEVM, Palm Network migrating, GameSwift (gaming chain), etc. Polygon actively publicizes these and shares the lessons learned with other teams.

Overall, Polygon’s strategy is “we will do whatever it takes to help you succeed if you build on our stack.” That includes financial incentives, technical manpower, marketing exposure (conference speaking slots, press releases on CoinTelegraph like the one we saw), and integration into a larger ecosystem. It’s very much a business-development-driven approach layered on top of grassroots developer community work, reflecting Polygon’s more corporate style relative to the others.

To summarize ecosystem support: All these frameworks understand that attracting developers and projects requires more than tech – it needs funding, hand-holding, and integration into a larger narrative. Optimism pushes a collaborative public-goods-focused narrative with fair revenue sharing. zkSync pushes the cutting-edge tech angle and likely will announce incentives aligned with a future token. Arbitrum leverages its existing dominance and provides partner networks to make launching easy, plus possibly the deepest DeFi liquidity to tap into. Polygon arguably goes the furthest in smoothing the path for both crypto-native and enterprise players, effectively subsidizing and co-marketing chains.

An illustrative comparative snapshot:

| Framework | Notable Ecosystem Programs | Developer/Partner Support | Ecosystem Size (2025) |
| --- | --- | --- | --- |
| OP Stack (Optimism) | RetroPGF grants (OP token); Superchain fee sharing for public goods; multiple grant waves for tooling & dapps. | OP Labs offers direct tech support to new chains (e.g. Base); strong dev community; Superchain branding & interoperability to attract users. Regular hackathons (often Optimism-sponsored tracks). | Optimism mainnet ~160+ dapps; Base gaining traction; 5+ OP Chains live (Base, opBNB, Worldcoin, Zora, others) and more announced (Celo). Shared 14k+ ETH of revenue with the Collective. Large community via Optimism and Coinbase users. |
| zkSync ZK Stack | zkSync Ecosystem Fund (>$200M raised for dev financing); possible future airdrops; targeted vertical programs (e.g. gaming, AI agents on Hyperchains). | Matter Labs provides technical onboarding for early Hyperchain pilots; detailed docs and open-source code. Partnered with bridge protocols for connectivity. Developer incentives mostly via hackathons and VC investments (no token incentives yet). | zkSync Era L2 has 160+ protocols, ~$100M TVL. Early Hyperchains in test (no major live L3 yet). Enterprise interest signals future growth (e.g. pilot with a large bank). Strong ZK developer community and growing recognition. |
| Arbitrum Orbit | Arbitrum DAO $ARB treasury ($3B+) for potential grants; Offchain Labs partnerships with RaaS providers (Caldera, AltLayer) subsidizing chain launches; Orbit accelerator programs. | Offchain Labs co-developed flagship Orbit chains (Xai, etc.); assists with marketing (Binance Launchpad for Xai’s token). Dev support via Arbitrum’s extensive documentation and direct engineering help for integration (Stylus, custom gas). Fast-bridge support for user experience. | Arbitrum One: largest L2 TVL (~$5B); ~50 Orbit chains in development as of late 2023, ~16 launched by early 2025. Notable live chains: Xai, Rari Chain, Frame, etc. DeFi-heavy L2 ecosystem can extend liquidity to L3s. Large, loyal community (Arbitrum airdrop had >250k participants). |
| Polygon CDK (AggLayer) | Polygon Ecosystem Fund & many vertical funds (NFTs, gaming, enterprise); Polygon 2.0 treasury for incentives; offers to cover certain infra costs for new chains. AggLayer liquidity/reward programs expected. | Polygon Labs team works closely with partners (e.g. Immutable, enterprises) on custom needs; extensive devrel (Polygon University, hackathons, tutorials). Integration of CDK chains with Polygon’s zkEVM and PoS infrastructure (shared wallets, bridges). Marketing via big-brand partnerships (public case studies of Nike, Reddit on Polygon) to lend credibility. | Polygon PoS: huge adoption (4B+ txns); Polygon zkEVM growing (100+ dapps). CDK: 20+ chains live (OKX, Gnosis Pay, etc.) or in the pipeline by end of 2024; ~190 projects exploring CDK. Enterprise adoption notable (financial institutions, retail giants). One of the largest developer ecosystems thanks to Polygon PoS history, now funneled into CDK. |

As the table suggests, each ecosystem has its strengths – Optimism with collaborative ethos and Coinbase’s weight, zkSync with ZK leadership and innovation focus, Arbitrum with proven adoption and technical prowess (Stylus), Polygon with corporate connections and comprehensive support. All are pumping significant resources into growing their communities, because ultimately the success of a rollup framework is measured by the apps and users on the chains built with it.

Deployments and Adoption in 2025

Finally, let’s look at where these frameworks stand in terms of real-world adoption as of 2025 – both in the crypto-native context (public networks, DeFi/NFT/gaming projects) and enterprise or institutional use:

  • OP Stack Adoption: The OP Stack powers Optimism Mainnet, itself one of the top Ethereum L2s with a thriving DeFi ecosystem (Uniswap, Aave, etc.) and tens of thousands of daily users. In 2023–2024, the OP Stack was chosen by Coinbase for its Base network – Base launched in August 2023, quickly onboarded popular apps (Coinbase’s own wallet integration, the friend.tech social app), and reached high activity (at times even surpassing Optimism in transactions). Base’s success validated the OP Stack for many; Base processed 800M transactions in 2024, making it the second-highest chain by transaction count that year. Another major OP Stack deployment is opBNB – Binance’s BNB Chain team created an L2 using the OP Stack (but settling to BNB Chain instead of Ethereum). opBNB went live in 2023, demonstrating the OP Stack’s flexibility to use a non-Ethereum settlement layer. Worldcoin’s World ID chain went live on the OP Stack (settling on Ethereum) in 2023 to handle its unique biometric identity transactions. Zora Network, an NFT-centric chain by Zora, launched on the OP Stack as well, tailored for creator-economy use cases. Perhaps the most ambitious is Celo’s migration: Celo voted to transition from an independent L1 to an Ethereum L2 built on the OP Stack – as of 2025, this migration is underway, effectively bringing a whole existing ecosystem (Celo’s DeFi and phone-focused apps) into the OP Stack fold. There are also smaller projects like Mode, and Mantle (BitDAO’s chain), which opted for a modified OP Stack. Many more are rumored or in development, given Optimism’s open-source approach (anyone can fork and launch without permission). On the enterprise side, we haven’t seen many explicit OP Stack enterprise chains (enterprises seem drawn more to Polygon or custom solutions). However, Base has enterprise (Coinbase) backing, and that’s significant. The Superchain vision implies that even enterprise chains might join as OP Chains to benefit from shared governance – for instance, if a fintech wanted to launch a compliant chain, using the OP Stack and plugging into the Superchain could give it ready connectivity. As of 2025, OP Stack chains collectively (Optimism, Base, others) handle a significant portion of L2 activity, and the Superchain’s aggregated throughput is presented as a metric (Optimism often publishes combined stats). With the Bedrock upgrade and further improvements, OP Stack chains are proving highly reliable (Optimism has had negligible downtime). The key measure of adoption: the OP Stack is arguably the most forked rollup framework so far, given high-profile deployments like Base, opBNB, and Celo. In total, roughly 5–10 OP Stack chains are live on mainnet, with many more on testnet; including devnets and upcoming launches, the number grows further.

  • zkSync Hyperchains Adoption: zkSync Era mainnet (L2) itself launched in March 2023, and by 2025 it’s among the top ZK rollups, with ~$100M TVL and dozens of projects. Notable apps like Curve, Uniswap, and Chainlink have deployed or announced deployments on zkSync. As for Hyperchains (L3 or sovereign chains), this is very cutting-edge. In late 2024, Matter Labs launched a program for teams to experiment with L3s on top of zkSync. One example: the Rollup-as-a-Service provider Decentriq was reportedly testing a private Hyperchain for data sharing. Blockchain Capital (a VC) also hinted at experimenting with an L3. One report mentions an ecosystem of 18+ protocols leveraging the ZK Stack for things like AI agents and specialized use cases – possibly on testnets. No major Hyperchain is publicly serving users yet (as far as is known by mid-2025). However, interest is high in specific domains: gaming projects have shown interest in ZK Hyperchains for fast finality and customizability, as have privacy-oriented chains (a Hyperchain could include encryption and use zk-proofs to hide data – something an optimistic rollup can’t offer as easily). The comment about a “Swiss bank” suggests that perhaps UBS or a consortium is testing a private chain using the ZK Stack, likely attracted by throughput (~10k TPS) and privacy. If that moves to production, it would be a flagship enterprise case. In summary, zkSync’s Hyperchain adoption in 2025 is at an early pilot stage: developer infrastructure is ready (as evidenced by documentation and some test deployments), but we’re waiting for the first movers to go live. It’s comparable to where Optimism was in early 2021 – proven tech, but adoption just starting. By end of 2025, we could expect a couple of Hyperchains live, possibly one community-driven (maybe a gaming Hyperchain spun out of a popular zkSync game) and one enterprise-driven. Another factor: there’s talk of Layer-3s on zkSync Era as well – essentially permissionless L3s where anyone can deploy an app-chain atop zkSync’s L2. Matter Labs has built the contracts to allow that, so we may see user-driven L3s (like someone launching a mini rollup for a specific app), which would also count as ZK Stack adoption.

  • Arbitrum Orbit Adoption: Arbitrum Orbit saw a surge of interest after its formal introduction in mid-2023. By late 2023, around 18 Orbit chains were publicly disclosed, and Offchain Labs indicated over 50 in progress. As of 2025, some of the prominent ones:

    • Xai Chain: A gaming-focused L3, now live (mainnet launched late 2023). It’s used by game developers (like Ex Populus studio) and had a token launch via Binance Launchpad. This indicates decent adoption (Binance Launchpad involvement suggests lots of user interest). Xai uses AnyTrust mode (for high TPS).
    • Rari Chain: An NFT-centric L3 by Rarible. Launched mainnet Jan 2024. It’s focused on NFT marketplaces with features like credit card payments for gas (via Stripe) and gasless listings. This chain is a good showcase of customizing user experience (as noted, Gelato provides gasless transactions, etc. on Rari Chain).
    • Frame: A creator-focused L2 (though called L2, it’s likely an Orbit chain settling on Ethereum or Arbitrum). It launched early 2024 after raising funding.
    • EduChain: The Zeeve article mentions an EDU chain with a large number of projects – possibly an ecosystem for on-chain education and AI built on Orbit.
    • Ape Chain: Not explicitly mentioned above, but the context from Zeeve suggests an “Ape chain” (maybe Yuga Labs or ApeCoin DAO chain) exists with $9.86M TVL and uses APE for gas. That could be an Orbit chain in the ApeCoin ecosystem (this would be significant given Yuga’s influence in NFTs).
    • Other gaming chains: e.g., Cometh’s “Muster” L3 was announced (a gaming platform partnering with AltLayer). Syndr Chain, for an options-trading protocol, is on testnet as an Orbit L3. Meliora (a DeFi credit protocol) is building an Orbit L3.
    • Many of these are in early stages (testnet or recently launched mainnet), but collectively they indicate Orbit is gaining adoption among specialized dApps that outgrew a shared L2 environment or wanted their own governance.
    • On enterprise: not as much noise here. Arbitrum is known more for DeFi/gaming adoption. However, the technology could appeal to enterprise if they want an Ethereum-secured chain with flexible trust (via AnyTrust). It’s possible some enterprise quietly used Arbitrum technology for a private chain, but not publicized.
    • By the numbers, Arbitrum Orbit’s biggest user so far might be Ape Chain (if confirmed) with ~$10M TVL and 17 protocols on it (according to Zeeve). Another is the EDU chain with ~$1.35M TVL and 30+ projects.
    • Arbitrum One and Nova themselves are part of this narrative – the fact Orbit chains can settle on Nova (ultra-cheap social/gaming chain) or One means adoption of Orbit also drives activity to those networks. Nova has seen usage for Reddit points etc. If Orbit chains plug into Nova’s AnyTrust committee, Nova’s role grows.
    • In sum, Arbitrum Orbit has moved beyond theory: dozens of real projects are building on it, focusing on gaming, social, and custom DeFi. Arbitrum’s approach of showing real use-cases (like Xai, Rari) has paid off, and we can expect by end of 2025 there will be possibly 50+ Orbit chains live, some with significant user bases (especially if one of the gaming chains hits a popular game release).
  • Polygon CDK Adoption: Polygon only announced CDK in H2 2023, but it piggybacks on the success of Polygon’s existing networks. Already, Polygon zkEVM (mainnet beta) itself is essentially a CDK chain run by Polygon Labs. It has seen decent adoption (over $50M TVL, major protocols deployed). But beyond that, numerous independent chains are in motion:

    • Immutable X (a large Web3 gaming platform) declared support for Polygon CDK to let game studios spin up their own zk-rollups that connect to Immutable and Polygon liquidity. This alliance means possibly dozens of games using CDK via Immutable in 2025.
    • OKX (exchange) launched OKB Chain (aka X Layer) using Polygon CDK in late 2024. An exchange chain can drive a lot of transactions (CEX-to-DEX flows, etc.). OKX presumably chose Polygon for scalability and because many of its users already use Polygon.
    • Canto (a DeFi chain) and Astar (a Polkadot parachain) are mentioned as migrating to or integrating with Polygon CDK. Canto moving from a Cosmos stack to a Polygon-based layer indicates the appeal of sharing security with Ethereum via Polygon’s ZK tech.
    • Gnosis Pay: launched Gnosis Card chain with CDK – it’s a chain to allow fast stablecoin payments connected to a Visa card. This is live and an innovative fintech use.
    • Palm Network: an NFT-specialized chain originally on Ethereum that is moving to Polygon CDK (Palm was co-founded by ConsenSys for NFTs with DC Comics, etc.).
    • dYdX: This is interesting – dYdX was building its own Cosmos chain, but Zeeve’s info lists dYdX under AggLayer CDK chains. If dYdX were to consider Polygon instead, that would be huge (though as of known info, dYdX V4 is Cosmos-based; perhaps they plan cross-chain or future pivot).
    • Nubank: one of the largest digital banks in Brazil, appears in Zeeve’s list. Nubank had launched a token on Polygon earlier; a CDK chain for their rewards or CBDC-like program could be in testing.
    • Wirex, IDEX, GameSwift, Aavegotchi, Powerloom, Manta… these names in Zeeve’s list show how cross-ecosystem the CDK reach is: e.g., Manta (a Polkadot privacy project) might use CDK for an Ethereum-facing ZK solution; Aavegotchi (an NFT game originally on Polygon POS) might get its own chain for game logic.
    • The Celestia integration in early 2024 will likely attract projects that want the Polygon tech but with Celestia DA – possibly some Cosmos projects (since Celestia is Cosmos-based) will choose Polygon CDK for execution and Celestia for DA.
    • Enterprises: Polygon has a dedicated enterprise team. Apart from those mentioned (Stripe on stablecoins, Franklin Templeton fund on Polygon, country governments minting stamps, etc.), with CDK they can promise enterprises their own chain with custom rules. We might see pilots like “Polygon Siemens Chain” or government chains emerging, though often those start private.
    • Polygon’s approach of being chain-agnostic (they even support an “OP Stack mode” now in CDK per Zeeve!) and not charging rent, has meant a rapid onboarding – they claim 190+ projects using or considering CDK by Q1 2025. If even a quarter of those go live, Polygon will have an expansive network of chains. They envision themselves not just as one chain but as an ecosystem of many chains (Polygon 2.0), possibly the largest such network if successful.
    • By numbers: as of early 2025, 21+ chains are either in mainnet or testnet using CDK according to the AggLayer site. This should accelerate through 2025 as more migrate or launch.
    • We can expect some high-profile launches, e.g. a Reddit chain (Reddit’s avatars on Polygon POS were huge; a dedicated Polygon L2 for Reddit could happen). Also, if any central bank digital currencies (CBDCs) or government projects choose a scaling solution, Polygon is often in those conversations – a CDK chain could be their choice for a permissioned L2 with zk proofs.

In summary, 2025 adoption status: OP Stack and Arbitrum Orbit have multiple live chains with real users and TVL, zkSync’s hyperchains are on the cusp with strong test pilots, and Polygon CDK has many lined up and a few live successes in both crypto and enterprise. The space is evolving rapidly, and projects often cross-consider these frameworks before choosing. It’s not zero-sum either – e.g., an app might use an OP Stack chain and a Polygon CDK chain for different regions or purposes. The modular blockchain future likely involves interoperability among all these frameworks. It’s notable that efforts like LayerZero and bridge aggregators now ensure assets move relatively freely between Optimism, Arbitrum, Polygon, zkSync, etc., so users might not even realize which stack a chain is built on under the hood.

Conclusion

Rollups-as-a-Service in 2025 offers a rich menu of options. OP Stack provides a battle-tested optimistic rollup framework with Ethereum alignment and the backing of a collaborative Superchain community. ZK Stack (Hyperchains) delivers cutting-edge zero-knowledge technology with modular validity and data choices, aiming for massive scalability and new use-cases like private or Layer-3 chains. Arbitrum Orbit extends a highly optimized optimistic rollup architecture to developers, with flexibility in data availability and the exciting addition of Stylus for multi-language smart contracts. Polygon CDK empowers projects to launch zkEVM chains with out-of-the-box interoperability (AggLayer) and the full support of Polygon’s ecosystem and enterprise ties. zkSync Hyperchains (via ZK Stack) promise to unlock Web3 at scale – multiple hyperchains all secured by Ethereum, each optimized for its domain (be it gaming, DeFi, or social), with seamless connectivity through zkSync’s Elastic framework.

In comparing data availability, we saw all frameworks embracing modular DA – Ethereum for security, and newer solutions like Celestia, EigenDA, or committees for throughput. Sequencer designs are initially centralized but moving toward decentralization: Optimism and Arbitrum provide L1 fallback queues and are enabling multi-sequencer or permissionless validator models, while Polygon and zkSync allow custom consensus deployment for chains that desire it. Fee models differ mainly in ecosystem philosophy – Optimism’s revenue share vs others’ self-contained economies – but all allow custom tokens and aim to minimize user costs by leveraging cheaper DA and fast finality (especially ZK chains).

On ecosystem support, Optimism fosters a collective where each chain contributes to shared goals (funding public goods) and benefits from shared upgrades. Arbitrum leverages its thriving community and liquidity, actively helping projects launch Orbit chains and integrating them with its DeFi hub. Polygon goes all-in with resources, courting both crypto projects and corporates, providing perhaps the most hands-on support and boasting an extensive network of partnerships and funds. Matter Labs (zkSync) drives innovation and appeals to those who want the latest ZK tech, and while its incentive programs are less publicly structured (pending a token), it has significant funding to deploy and a strong pull for ZK-minded builders.

From a developer’s perspective, launching a rollup in 2025 is more accessible than ever. Whether one’s priority is EVM-equivalence and ease (OP Stack, Arbitrum) or maximum performance and future-proof tech (ZK Stack, Polygon CDK), the tools and documentation are in place. Even monitoring and dev-tools have grown to support these custom chains – for instance, Alchemy and QuickNode’s RaaS platforms support Optimism, Arbitrum, and zkSync stacks out-of-the-box. This means teams can focus on their application and leave much of the heavy lifting to these frameworks.

Looking at public and enterprise adoption, it’s clear that modular rollups are moving from experimental to mainstream. We have global brands like Coinbase, Binance, and OKX running their own chains, major DeFi protocols like Uniswap expanding to multiple L2s and possibly their own rollups, and even governments and banks exploring these technologies. The competition (and collaboration) between OP Stack, ZK Stack, Orbit, CDK, etc., is driving rapid innovation – ultimately benefiting Ethereum by scaling it to reach millions of new users through tailored rollups.

Each framework has its unique value proposition:

  • OP Stack: Easy on-ramp to L2, shared Superchain network effects, and a philosophy of “impact = profit” via public goods.
  • ZK Stack: Endgame scalability with ZK integrity, flexibility in design (L2 or L3, rollup or validium), and prevention of liquidity fragmentation through the Elastic chain model.
  • Arbitrum Orbit: Proven tech (Arbitrum One never had a major failure), high performance (Nitro + Stylus), and the ability to customize trust assumptions (full rollup security or faster AnyTrust) for different needs.
  • Polygon CDK: Turnkey zk-rollups backed by one of the largest ecosystems, with immediate connectivity to Polygon/Ethereum assets and the promise of future “unified liquidity” via AggLayer – effectively a launchpad not just for a chain, but for a whole economy on that chain.
  • zkSync Hyperchains: A vision of Layer-3 scalability where even small apps can have their own chain secured by Ethereum, with minimal overhead, enabling Web2-level performance in a Web3 environment.

As of mid-2025, we are seeing the multi-chain modular ecosystem materialize: dozens of app-specific or sector-specific chains coexisting, many built with these stacks. L2Beat and similar sites now track not just L2s but L3s and custom chains, many of which use OP Stack, Orbit, CDK, or ZK Stack. Interoperability standards are being developed so that whether a chain uses Optimism or Polygon tech, they can talk to each other (projects like Hyperlane, LayerZero, and even OP and Polygon collaboration on shared sequencing).

In conclusion, Rollups-as-a-Service in 2025 has matured into a competitive landscape with OP Stack, ZK Stack, Arbitrum Orbit, Polygon CDK, and zkSync Hyperchains each offering robust, modular blockchain frameworks. They differ in technical approach (Optimistic vs ZK), but all aim to empower developers to launch scalable, secure chains tailored to their needs. The choice of stack may depend on a project’s specific priorities – EVM compatibility, finality speed, customization, community alignment, etc. – as outlined above. The good news is that there is no shortage of options or support. Ethereum’s rollup-centric roadmap is being realized through these frameworks, heralding an era where launching a new chain is not a monumental feat, but rather a strategic decision akin to choosing a cloud provider or tech stack in Web2. The frameworks will continue to evolve (e.g. we anticipate more convergence, like OP Stack embracing ZK proofs, Polygon’s AggLayer connecting to non-Polygon chains, etc.), but even now they collectively ensure that Ethereum’s scalability and ecosystem growth are limited only by imagination, not infrastructure.

Sources:

  • Optimism OP Stack – Documentation and Mirror posts
  • zkSync ZK Stack – zkSync docs and Matter Labs posts
  • Arbitrum Orbit – Arbitrum docs, Offchain Labs announcements
  • Polygon CDK – Polygon Tech docs, CoinTelegraph report
  • General comparison – QuickNode Guides (Mar 2025), Zeeve and others for ecosystem stats, plus various project blogs as cited above.

Trusted Execution Environments (TEEs) in the Web3 Ecosystem: A Deep Dive

· 68 min read

1. Overview of TEE Technology

Definition and Architecture: A Trusted Execution Environment (TEE) is a secure area of a processor that protects the code and data loaded inside it with respect to confidentiality and integrity. In practical terms, a TEE acts as an isolated “enclave” within the CPU – a kind of black box where sensitive computations can run shielded from the rest of the system. Code running inside a TEE enclave is protected so that even a compromised operating system or hypervisor cannot read or tamper with the enclave’s data or code. Key security properties provided by TEEs include:

  • Isolation: The enclave’s memory is isolated from other processes and even the OS kernel. Even if an attacker gains full admin privileges on the machine, they cannot directly inspect or modify enclave memory.
  • Integrity: The hardware ensures that code executing in the TEE cannot be altered by external attacks. Any tampering of the enclave code or runtime state will be detected, preventing compromised results.
  • Confidentiality: Data inside the enclave remains encrypted in memory and is only decrypted for use within the CPU, so secret data is not exposed in plain text to the outside world.
  • Remote Attestation: The TEE can produce cryptographic proofs (attestations) to convince a remote party that it is genuine and that specific trusted code is running inside it. This means users can verify that an enclave is in a trustworthy state (e.g. running expected code on genuine hardware) before provisioning it with secret data. A minimal simulated sketch of this attestation handshake follows the list.

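To ground these properties, here is a minimal, simulated sketch of the remote-attestation flow in Python. It is a toy model, not the real SGX wire protocol: an HMAC under a “hardware” key stands in for the CPU-signed quote, whereas real SGX verification goes through Intel’s Attestation Service or DCAP rather than a key shared with the verifier.

```python
# Simulated remote attestation: the hardware measures the enclave code and signs
# a quote over (measurement, report_data); the verifier checks both before
# provisioning secrets. Stand-ins only - not the real SGX wire protocol.
import hashlib, hmac, secrets

HARDWARE_KEY = secrets.token_bytes(32)  # fused into the CPU at manufacture (simulated)
ENCLAVE_CODE = b"fn handle(tx) { decrypt; execute; encrypt }"

def measure(code: bytes) -> bytes:
    """MRENCLAVE-style measurement: hash of the loaded enclave code."""
    return hashlib.sha256(code).digest()

def produce_quote(code: bytes, report_data: bytes) -> dict:
    """Hardware side: bind the measurement and caller data into a signed quote."""
    m = measure(code)
    sig = hmac.new(HARDWARE_KEY, m + report_data, hashlib.sha256).digest()
    return {"measurement": m, "report_data": report_data, "signature": sig}

def verify_quote(quote: dict, expected_code: bytes) -> bool:
    """Remote verifier: genuine hardware signature AND exactly the code we expect."""
    expected = hmac.new(HARDWARE_KEY, quote["measurement"] + quote["report_data"],
                        hashlib.sha256).digest()
    return (hmac.compare_digest(expected, quote["signature"])
            and quote["measurement"] == measure(expected_code))

quote = produce_quote(ENCLAVE_CODE, report_data=b"enclave-ephemeral-pubkey")
assert verify_quote(quote, ENCLAVE_CODE)  # only now provision secret data
```

Binding an ephemeral public key into `report_data` is the standard trick for then establishing a secure channel with the attested enclave.
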
Conceptual diagram of a Trusted Execution Environment as a secure enclave “black box” for smart contract execution. Encrypted inputs (data and contract code) are decrypted and processed inside the secure enclave, and only encrypted results leave the enclave. This ensures that sensitive contract data remains confidential to everyone outside the TEE.

Under the hood, TEEs are enabled by hardware-based memory encryption and access control in the CPU. For example, when a TEE enclave is created, the CPU allocates a protected memory region for it and uses dedicated keys (burned into the hardware or managed by a secure co-processor) to encrypt/decrypt data on the fly. Any attempt by external software to read the enclave memory gets only encrypted bytes. This unique CPU-level protection allows even user-level code to define private memory regions (enclaves) that privileged malware or even a malicious system administrator cannot snoop or modify. In essence, a TEE provides a higher level of security for applications than the normal operating environment, while still being more flexible than dedicated secure elements or hardware security modules.
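
As a toy model of those mechanics, the sketch below uses AES-GCM (from the third-party `cryptography` package) to stand in for the CPU’s memory-encryption engine; real engines work transparently at cache-line granularity, but the observable effect is the same: a privileged attacker dumping RAM sees only ciphertext.

```python
# Toy model of enclave memory: the "CPU" transparently encrypts on write and
# decrypts on read; any other reader of RAM gets only ciphertext.
# Assumes the third-party `cryptography` package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class EnclavePage:
    def __init__(self):
        self._cpu_key = AESGCM.generate_key(bit_length=128)  # never leaves the CPU
        self._ram = b""                                      # what the OS can see

    def cpu_write(self, plaintext: bytes) -> None:
        nonce = os.urandom(12)
        self._ram = nonce + AESGCM(self._cpu_key).encrypt(nonce, plaintext, None)

    def cpu_read(self) -> bytes:
        nonce, ct = self._ram[:12], self._ram[12:]
        return AESGCM(self._cpu_key).decrypt(nonce, ct, None)

    def os_read(self) -> bytes:
        return self._ram  # privileged software sees only encrypted bytes

page = EnclavePage()
page.cpu_write(b"secret signing key")
print(page.os_read())   # ciphertext: useless to a root-level attacker
print(page.cpu_read())  # b'secret signing key' (only readable inside the CPU)
```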

Key Hardware Implementations: Several hardware TEE technologies exist, each with different architectures but a similar goal of creating a secure enclave within the system:

  • Intel SGX (Software Guard Extensions): Intel SGX is one of the most widely used TEE implementations. It allows applications to create enclaves at the process level, with memory encryption and access controls enforced by the CPU. Developers must partition their code into “trusted” code (inside the enclave) and “untrusted” code (normal world), using special instructions (ECALL/OCALL) to transfer data in and out of the enclave. SGX provides strong isolation for enclaves and supports remote attestation via Intel’s Attestation Service (IAS). Many blockchain projects – notably Secret Network and Oasis Network – built privacy-preserving smart contract functionality on SGX enclaves. However, SGX’s design on complex x86 architectures has led to some vulnerabilities (see §4), and Intel’s attestation introduces a centralized trust dependency.

  • ARM TrustZone: TrustZone takes a different approach by dividing the processor’s entire execution environment into two worlds: a Secure World and a Normal World. Sensitive code runs in the Secure World, which has access to certain protected memory and peripherals, while the Normal World runs the regular OS and applications. Switches between worlds are controlled by the CPU. TrustZone is commonly used in mobile and IoT devices for things like secure UI, payment processing, or digital rights management. In a blockchain context, TrustZone could enable mobile-first Web3 applications by allowing private keys or sensitive logic to run in the phone’s secure enclave. However, TrustZone enclaves are typically larger-grained (at OS or VM level) and not as commonly adopted in current Web3 projects as SGX.

  • AMD SEV (Secure Encrypted Virtualization): AMD’s SEV technology targets virtualized environments. Instead of requiring application-level enclaves, SEV can encrypt the memory of entire virtual machines. It uses an embedded security processor to manage cryptographic keys and to perform memory encryption so that a VM’s memory remains confidential even to the hosting hypervisor. This makes SEV well-suited for cloud or server use cases: for example, a blockchain node or off-chain worker could run inside a fully-encrypted VM, protecting its data from a malicious cloud provider. SEV’s design means less developer effort to partition code (you can run an existing application or even an entire OS in a protected VM). Newer iterations like SEV-SNP add features like tamper detection and allow VM owners to attest their VMs without relying on a centralized service. SEV is highly relevant for TEE use in cloud-based blockchain infrastructure.

Other emerging or niche TEE implementations include Intel TDX (Trust Domain Extensions, for enclave-like protection in VMs on newer Intel chips), open-source TEEs like Keystone (RISC-V), and secure enclave chips in mobile (such as Apple’s Secure Enclave, though not typically open for arbitrary code). Each TEE comes with its own development model and trust assumptions, but all share the core idea of hardware-isolated secure execution.

2. Applications of TEEs in Web3

Trusted Execution Environments have become a powerful tool in addressing some of Web3’s hardest challenges. By providing a secure, private computation layer, TEEs enable new possibilities for blockchain applications in areas of privacy, scalability, oracle security, and integrity. Below we explore major application domains:

Privacy-Preserving Smart Contracts

One of the most prominent uses of TEEs in Web3 is enabling confidential smart contracts – programs that run on a blockchain but can handle private data securely. Blockchains like Ethereum are transparent by default: all transaction data and contract state are public. This transparency is problematic for use cases that require confidentiality (e.g. private financial trades, secret ballots, personal data processing). TEEs provide a solution by acting as a privacy-preserving compute enclave connected to the blockchain.

In a TEE-powered smart contract system, transaction inputs can be sent to a secure enclave on a validator or worker node, processed inside the enclave where they remain encrypted to the outside world, and then the enclave can output an encrypted or hashed result back to the chain. Only authorized parties with the decryption key (or the contract logic itself) can access the plaintext result. For example, Secret Network uses Intel SGX in its consensus nodes to execute CosmWasm smart contracts on encrypted inputs, so things like account balances, transaction amounts, or contract state can be kept hidden from the public while still being usable in computations. This has enabled secret DeFi applications – e.g. private token swaps where the amounts are confidential, or secret auctions where bids are encrypted and only revealed after auction close. Another example is Oasis Network’s Parcel and confidential ParaTime, which allow data to be tokenized and used in smart contracts under confidentiality constraints, enabling use cases like credit scoring or medical data on blockchain with privacy compliance.
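
A minimal sketch of that round trip follows, assuming the user and enclave already share a symmetric key (real systems such as Secret Network derive per-contract keys via key exchange against an attested enclave key) and using the third-party `cryptography` package; the contract logic is a placeholder.

```python
# Sketch of a confidential contract call: ciphertext in, ciphertext out; only
# holders of the shared (viewing) key can read inputs or resulting state.
import json, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=256)  # stand-in for an ECDH-derived key
aead = AESGCM(shared_key)

def user_submit(tx: dict) -> bytes:
    """Client side: only ciphertext ever reaches the chain or node operator."""
    nonce = os.urandom(12)
    return nonce + aead.encrypt(nonce, json.dumps(tx).encode(), b"tx")

def enclave_execute(blob: bytes) -> bytes:
    """Enclave side: decrypt, run the contract, re-encrypt the resulting state."""
    nonce, ct = blob[:12], blob[12:]
    tx = json.loads(aead.decrypt(nonce, ct, b"tx"))
    state = {"balance": tx["deposit"] * 2}  # placeholder contract logic
    out = os.urandom(12)
    return out + aead.encrypt(out, json.dumps(state).encode(), b"state")

sealed = enclave_execute(user_submit({"deposit": 21}))
nonce, ct = sealed[:12], sealed[12:]
print(json.loads(aead.decrypt(nonce, ct, b"state")))  # viewing-key holder only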

Privacy-preserving smart contracts via TEEs are attractive for enterprise and institutional adoption of blockchain. Organizations can leverage smart contracts while keeping sensitive business logic and data confidential. For instance, a bank could use a TEE-enabled contract to handle loan applications or trade settlements without exposing client data on-chain, yet still benefit from the transparency and integrity of blockchain verification. This capability directly addresses regulatory privacy requirements (such as GDPR or HIPAA), allowing compliant use of blockchain in healthcare, finance, and other sensitive industries. Indeed, TEEs facilitate compliance with data protection laws by ensuring that personal data can be processed inside an enclave with only encrypted outputs leaving, satisfying regulators that data is safeguarded.

Beyond confidentiality, TEEs also help enforce fairness in smart contracts. For example, a decentralized exchange could run its matching engine inside a TEE to prevent miners or validators from seeing pending orders and unfairly front-running trades. In summary, TEEs bring a much-needed privacy layer to Web3, unlocking applications like confidential DeFi, private voting/governance, and enterprise contracts that were previously infeasible on public ledgers.

Scalability and Off-Chain Computation

Another critical role for TEEs is improving blockchain scalability by offloading heavy computations off-chain into a secure environment. Blockchains struggle with complex or computationally intensive tasks due to performance limits and costs of on-chain execution. TEE-enabled off-chain computation allows these tasks to be done off the main chain (thus not consuming block gas or slowing down on-chain throughput) while still retaining trust guarantees about the correctness of the results. In effect, a TEE can serve as a verifiable off-chain compute accelerator for Web3.

For example, the iExec platform uses TEEs to create a decentralized cloud computing marketplace where developers can run computations off-chain and get results that are trusted by the blockchain. A dApp can request a computation (say, a complex AI model inference or a big data analysis) to be done by iExec worker nodes. These worker nodes execute the task inside an SGX enclave, producing a result along with an attestation that the correct code ran in a genuine enclave. The result is then returned on-chain, and the smart contract can verify the enclave’s attestation before accepting the output. This architecture allows heavy workloads to be handled off-chain without sacrificing trust, effectively boosting throughput. The iExec Orchestrator integration with Chainlink demonstrates this: a Chainlink oracle fetches external data, then hands off a complex computation to iExec’s TEE workers (e.g. aggregating or scoring the data), and finally the secure result is delivered on-chain. Use cases include things like decentralized insurance calculations (as iExec demonstrated), where a lot of data crunching can be done off-chain and cheaply, with only the final outcome going to the blockchain.

TEE-based off-chain computation also underpins some Layer-2 scaling solutions. Oasis Labs’ early prototype Ekiden (the precursor to Oasis Network) used SGX enclaves to run transaction execution off-chain in parallel, then commit only state roots to the main chain, effectively similar to rollup ideas but using hardware trust. By doing contract execution in TEEs, they achieved high throughput while relying on enclaves to preserve security. Another example is Sanders Network’s forthcoming Op-Succinct L2, which combines TEEs and zkSNARKs: TEEs execute transactions privately and quickly, and then zk-proofs are generated to prove the correctness of those executions to Ethereum. This hybrid approach leverages TEE speed and ZK verifiability for a scalable, private L2 solution.

In general, TEEs can run computations at near-native performance (since they use actual CPU instructions, just with isolation), so they are orders of magnitude faster than pure cryptographic alternatives like homomorphic encryption or zero-knowledge proofs for complex logic. By offloading work to enclaves, blockchains can handle more complex applications (like machine learning, image/audio processing, and large-scale analytics) that would be impractical on-chain. The results come back with an attestation, which the on-chain contract or users can verify as originating from a trusted enclave, thus preserving data integrity and correctness. This model is often called “verifiable off-chain computation”, and TEEs are a cornerstone of many such designs (e.g. Hyperledger Avalon’s Trusted Compute Framework, developed by Intel, iExec, and others, uses TEEs to execute EVM bytecode off-chain with proof of correctness posted on-chain).
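
A minimal sketch of that accept-only-if-attested pattern is below, with hash/HMAC stand-ins for real quote verification; names and flow are illustrative, not Avalon’s or iExec’s actual API. The key idea is the allow-list of vetted task measurements.

```python
# Sketch of the on-chain acceptance gate for verifiable off-chain computation:
# a result is used only if its quote verifies and the enclave measurement is
# on an allow-list of vetted task binaries. Simulated primitives only.
import hashlib, hmac, secrets

HW_KEY = secrets.token_bytes(32)                 # simulated hardware root of trust
TASK_CODE = b"task: aggregate prices, return median"
APPROVED = {hashlib.sha256(TASK_CODE).digest()}  # allow-listed measurements

def worker_run(code: bytes):
    result = b"median=1904.55"                   # computed inside the enclave
    m = hashlib.sha256(code).digest()
    sig = hmac.new(HW_KEY, m + result, hashlib.sha256).digest()
    return result, {"measurement": m, "sig": sig}

def contract_accept(result: bytes, quote: dict) -> bool:
    ok_sig = hmac.compare_digest(
        quote["sig"],
        hmac.new(HW_KEY, quote["measurement"] + result, hashlib.sha256).digest())
    return ok_sig and quote["measurement"] in APPROVED

result, quote = worker_run(TASK_CODE)
assert contract_accept(result, quote)            # only then is the result consumed
```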

Secure Oracles and Data Integrity

Oracles bridge blockchains with real-world data, but they introduce trust challenges: how can a smart contract trust that an off-chain data feed is correct and untampered? TEEs provide a solution by serving as a secure sandbox for oracle nodes. A TEE-based oracle node can fetch data from external sources (APIs, web services) and process it inside an enclave that guarantees the data hasn’t been manipulated by the node operator or a malware on the node. The enclave can then sign or attest to the truth of the data it provides. This significantly improves oracle data integrity and trustworthiness. Even if an oracle operator is malicious, they cannot alter the data without breaking the enclave’s attestation (which the blockchain will detect).
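
Here is a minimal sketch of that oracle pattern, assuming a simulated enclave signing key (real deployments bind the key to an attested enclave via IAS/DCAP rather than sharing it with the verifier); the fetch itself is stubbed out.

```python
# Sketch of a TEE oracle: the report is assembled and signed inside the enclave,
# so the node operator cannot alter the data after the fetch. An HMAC key stands
# in for the enclave's attested signing key.
import hashlib, hmac, json, secrets, time

ENCLAVE_SIGNING_KEY = secrets.token_bytes(32)  # sealed inside the enclave (simulated)

def enclave_fetch_price(source: str) -> dict:
    price = 1904.55  # stand-in for an HTTPS fetch performed inside the enclave
    report = {"source": source, "price": price, "ts": int(time.time())}
    payload = json.dumps(report, sort_keys=True).encode()
    report["sig"] = hmac.new(ENCLAVE_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return report

def consumer_verify(report: dict, max_age_s: int = 60) -> bool:
    body = {k: v for k, v in report.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    fresh = time.time() - body["ts"] <= max_age_s  # reject stale feeds
    genuine = hmac.compare_digest(
        report["sig"],
        hmac.new(ENCLAVE_SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    return fresh and genuine

report = enclave_fetch_price("https://api.example-exchange.com/eth-usd")
assert consumer_verify(report)  # a contract uses the price only if this passes
```

Signing over the source URL and timestamp is what binds the data to its provenance, so a tampered price, source, or replayed stale report all fail verification.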

A notable example is Town Crier, an oracle system developed at Cornell that was one of the first to use Intel SGX enclaves to provide authenticated data to Ethereum contracts. Town Crier would retrieve data (e.g. from HTTPS websites) inside an SGX enclave and deliver it to a contract along with evidence (an enclave signature) that the data came straight from the source and wasn’t forged. Chainlink recognized the value of this and acquired Town Crier in 2018 to integrate TEE-based oracles into its decentralized network. Today, Chainlink and other oracle providers have TEE initiatives: for instance, Chainlink’s DECO and Fair Sequencing Services involve TEEs to ensure data confidentiality and fair ordering. As noted in one analysis, “TEE revolutionized oracle security by providing a tamper-proof environment for data processing... even the node operators themselves cannot manipulate the data while it’s being processed”. This is particularly crucial for high-value financial data feeds (like price oracles for DeFi): a TEE can prevent even subtle tampering that could lead to big exploits.

TEEs also enable oracles to handle sensitive or proprietary data that couldn’t be published in plaintext on a blockchain. For example, an oracle network could use enclaves to aggregate private data (like confidential stock order books or personal health data) and feed only derived results or validated proofs to the blockchain, without exposing the raw sensitive inputs. In this way, TEEs broaden the scope of what data can be securely integrated into smart contracts, which is critical for real-world asset (RWA) tokenization, credit scoring, insurance, and other data-intensive on-chain services.

On the topic of cross-chain bridges, TEEs similarly improve integrity. Bridges often rely on a set of validators or a multi-sig to custody assets and validate transfers between chains, which makes them prime targets for attacks. By running bridge validator logic inside TEEs, one can secure the bridge’s private keys and verification processes against tampering. Even if a validator’s OS is compromised, the attacker shouldn’t be able to extract private keys or falsify messages from inside the enclave. TEEs can enforce that bridge transactions are processed exactly according to the protocol rules, reducing the risk of human operators or malware injecting fraudulent transfers. Furthermore, TEEs can enable atomic swaps and cross-chain transactions to be handled in a secure enclave that either completes both sides or aborts cleanly, preventing scenarios where funds get stuck due to interference. Several bridge projects and consortiums have explored TEE-based security to mitigate the plague of bridge hacks that have occurred in recent years.

Data Integrity and Verifiability Off-Chain

In all the above scenarios, a recurring theme is that TEEs help maintain data integrity even outside the blockchain. Because a TEE can prove what code it is running (via attestation) and can ensure the code runs without interference, it provides a form of verifiable computing. Users and smart contracts can trust the results coming from a TEE as if they were computed on-chain, provided the attestation checks out. This integrity guarantee is why TEEs are sometimes referred to as bringing a “trust anchor” to off-chain data and computation.

However, it’s worth noting that this trust model shifts some assumptions to hardware (see §4). The data integrity is only as strong as the TEE’s security. If the enclave is compromised or the attestation is forged, the integrity could fail. Nonetheless, in practice TEEs (when kept up-to-date) make certain attacks significantly harder. For example, a DeFi lending platform could use a TEE to calculate credit scores from a user’s private data off-chain, and the smart contract would accept the score only if accompanied by a valid enclave attestation. This way, the contract knows the score was computed by the approved algorithm on real data, rather than trusting the user or an oracle blindly.

TEEs also play a role in emerging decentralized identity (DID) and authentication systems. They can securely manage private keys, personal data, and authentication processes in a way that the user’s sensitive information is never exposed to the blockchain or to dApp providers. For instance, a TEE on a mobile device could handle biometric authentication and sign a blockchain transaction if the biometric check passes, all without revealing the user’s biometrics. This provides both security and privacy in identity management – an essential component if Web3 is to handle things like passports, certificates, or KYC data in a user-sovereign way.

In summary, TEEs serve as a versatile tool in Web3: they enable confidentiality for on-chain logic, allow scaling via off-chain secure compute, protect integrity of oracles and bridges, and open up new uses (from private identity to compliant data sharing). Next, we’ll look at specific projects leveraging these capabilities.

3. Notable Web3 Projects Leveraging TEEs

A number of leading blockchain projects have built their core offerings around Trusted Execution Environments. Below we dive into a few notable ones, examining how each uses TEE technology and what unique value it adds:

Secret Network

Secret Network is a layer-1 blockchain (built on Cosmos SDK) that pioneered privacy-preserving smart contracts using TEEs. All validator nodes in Secret Network run Intel SGX enclaves, which execute the smart contract code so that contract state and inputs/outputs remain encrypted even to the node operators. This makes Secret one of the first privacy-first smart contract platforms – privacy isn’t an optional add-on, but a default feature of the network at the protocol level.

In Secret Network’s model, users submit encrypted transactions, which validators load into their SGX enclave for execution. The enclave decrypts the inputs, runs the contract (written in a modified CosmWasm runtime), and produces encrypted outputs that are written to the blockchain. Only users with the correct viewing key (or the contract itself with its internal key) can decrypt and view the actual data. This allows applications to use private data on-chain without revealing it publicly.

The network has demonstrated several novel use cases:

  • Secret DeFi: e.g., SecretSwap (an AMM) where users’ account balances and transaction amounts are private, mitigating front-running and protecting trading strategies. Liquidity providers and traders can operate without broadcasting their every move to competitors.
  • Secret Auctions: Auction contracts where bids are kept secret until the auction ends, preventing strategic behavior based on others’ bids.
  • Private Voting and Governance: Token holders can vote on proposals without revealing their vote choices, while the tally can still be verified – ensuring fair, intimidation-free governance.
  • Data marketplaces: Sensitive datasets can be transacted and used in computations without exposing the raw data to buyers or nodes.

Secret Network essentially incorporates TEEs at the protocol level to create a unique value proposition: it offers programmable privacy. The challenges they tackle include coordinating enclave attestation across a decentralized validator set and managing key distribution so contracts can decrypt inputs while keeping them secret from validators. By all accounts, Secret has proven the viability of TEE-powered confidentiality on a public blockchain, establishing itself as a leader in the space.

Oasis Network

Oasis Network is another layer-1 aimed at scalability and privacy, which extensively utilizes TEEs (Intel SGX) in its architecture. Oasis introduced an innovative design that separates consensus from computation into different layers called the Consensus Layer and ParaTime Layer. The Consensus Layer handles blockchain ordering and finality, while each ParaTime can be a runtime environment for smart contracts. Notably, Oasis’s Emerald ParaTime is an EVM-compatible environment, and Sapphire is a confidential EVM that uses TEEs to keep smart contract state private.

Oasis’s use of TEEs is focused on confidential computation at scale. By isolating the heavy computation in parallelizable ParaTimes (which can run on many nodes), they achieve high throughput, and by using TEEs within those ParaTime nodes, they ensure the computations can include sensitive data without revealing it. For example, an institution could run a credit scoring algorithm on Oasis by feeding private data into a confidential ParaTime – the data stays encrypted for the node (since it’s processed in the enclave), and only the score comes out. Meanwhile, the Oasis consensus just records the proof that the computation happened correctly.

Technically, Oasis added extra layers of security beyond vanilla SGX. They implemented a “layered root of trust”: using Intel’s SGX Quoting Enclave and a custom lightweight kernel to verify hardware trustworthiness and to sandbox the enclave’s system calls. This reduces the attack surface (by filtering which OS calls enclaves can make) and protects against certain known SGX attacks. Oasis also introduced features like durable enclaves (so enclaves can persist state across restarts) and secure logging to mitigate rollback attacks (where a node might try to replay an old enclave state). These innovations were described in their technical papers and are part of why Oasis is seen as a research-driven project in TEE-based blockchain computing.

From an ecosystem perspective, Oasis has positioned itself for things like private DeFi (allowing banks to participate without leaking customer data) and data tokenization (where individuals or companies can share data to AI models in a confidential manner and get compensated, all via the blockchain). They have also collaborated with enterprises on pilots (for example, working with BMW on data privacy, and others on medical research data sharing). Overall, Oasis Network showcases how combining TEEs with a scalable architecture can address both privacy and performance, making it a significant player in TEE-based Web3 solutions.

Sanders Network

Sanders Network is a decentralized cloud computing network in the Polkadot ecosystem that uses TEEs to provide confidential and high-performance compute services. It is a parachain on Polkadot, meaning it benefits from Polkadot’s security and interoperability, but it introduces its own novel runtime for off-chain computation in secure enclaves.

The core idea of Sanders is to maintain a large network of worker nodes (called Sanders miners) that execute tasks inside TEEs (specifically, Intel SGX so far) and produce verifiable results. These tasks can range from running segments of smart contracts to general-purpose computation requested by users. Because the workers run in SGX, Sanders ensures that the computations are done with confidentiality (input data is hidden from the worker operator) and integrity (the results come with an attestation). This effectively creates a trustless cloud where users can deploy workloads knowing the host cannot peek or tamper with them.

One can think of Sanders as analogous to Amazon EC2 or AWS Lambda, but decentralized: developers can deploy code to Sanders’s network and have it run on many SGX-enabled machines worldwide, paying with Sanders’s token for the service. Some highlighted use cases:

  • Web3 Analytics and AI: A project could analyze user data or run AI algorithms in Sanders enclaves, so that raw user data stays encrypted (protecting privacy) while only aggregated insights leave the enclave.
  • Game backends and Metaverse: Sanders can handle intensive game logic or virtual world simulations off-chain, sending only commitments or hashes to the blockchain, enabling richer gameplay without trust in any single server.
  • On-chain services: Sanders has built an off-chain computation platform called Sanders Cloud. For example, it can serve as a back-end for bots, decentralized web services, or even an off-chain orderbook that publishes trades to a DEX smart contract with TEE attestation.

Sanders emphasizes that it can scale confidential computing horizontally: need more capacity? Add more TEE worker nodes. This is unlike a single blockchain where compute capacity is limited by consensus. Thus Sanders opens possibilities for computationally intensive dApps that still want trustless security. Importantly, Sanders doesn’t rely purely on hardware trust; it is integrating with Polkadot’s consensus (e.g., staking and slashing for bad results) and even exploring a combination of TEE with zero-knowledge proofs (as mentioned, their upcoming L2 uses TEE to speed up execution and ZKP to verify it succinctly on Ethereum). This hybrid approach helps mitigate the risk of any single TEE compromise by adding crypto verification on top.

In summary, Sanders Network leverages TEEs to deliver a decentralized, confidential cloud for Web3, allowing off-chain computation with security guarantees. This unleashes a class of blockchain applications that need both heavy compute and data privacy, bridging the gap between on-chain and off-chain worlds.

iExec

iExec is a decentralized marketplace for cloud computing resources built on Ethereum. Unlike the previous three (which are their own chains or parachains), iExec operates as a layer-2 or off-chain network that coordinates with Ethereum smart contracts. TEEs (specifically Intel SGX) are a cornerstone of iExec’s approach to establish trust in off-chain computation.

The iExec network consists of worker nodes contributed by various providers. These workers can execute tasks requested by users (dApp developers, data providers, etc.). To ensure these off-chain computations are trustworthy, iExec introduced a “Trusted off-chain Computing” framework: tasks can be executed inside SGX enclaves, and the results come with an enclave signature that proves the task was executed correctly on a secure node. iExec partnered with Intel to launch this trusted computing feature and even joined the Confidential Computing Consortium to advance standards. Their consensus protocol, called Proof-of-Contribution (PoCo), aggregates votes/attestations from multiple workers when needed to reach consensus on the correct result. In many cases, a single enclave’s attestation might suffice if the code is deterministic and trust in SGX is high; for higher assurance, iExec can replicate tasks across several TEEs and use a consensus or majority vote.
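
As a rough illustration of that replication step, here is a minimal sketch of a PoCo-style result consensus; the function name and threshold are illustrative, not iExec’s actual implementation. Each worker submits the digest of the result it computed inside its enclave, and the scheduler accepts a digest only once it has a sufficient majority.

```python
# Minimal sketch of majority voting over enclave-attested result digests.
# Illustrative only - not iExec's actual PoCo implementation.
from collections import Counter
from typing import List, Optional

def poco_consensus(worker_digests: List[str], threshold: float = 0.5) -> Optional[str]:
    """Return the winning result digest once it exceeds the vote threshold."""
    digest, votes = Counter(worker_digests).most_common(1)[0]
    return digest if votes / len(worker_digests) > threshold else None

print(poco_consensus(["0xabc", "0xabc", "0xdef"]))  # -> "0xabc" (2/3 majority)
print(poco_consensus(["0xabc", "0xdef"]))           # -> None (no majority; re-run)
```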

iExec’s platform enables several interesting use cases:

  • Decentralized Oracle Computing: As mentioned earlier, iExec can work with Chainlink. A Chainlink node might fetch raw data, then hand it to an iExec SGX worker to perform a computation (e.g., a proprietary algorithm or an AI inference) on that data, and finally return a result on-chain. This expands what oracles can do beyond just relaying data – they can now provide computed services (like call an AI model or aggregate many sources) with TEE ensuring honesty.
  • AI and DePIN (Decentralized Physical Infrastructure Network): iExec is positioning as a trust layer for decentralized AI apps. For example, a dApp that uses a machine learning model can run the model in an enclave to protect both the model (if it’s proprietary) and the user data being fed in. In the context of DePIN (like distributed IoT networks), TEEs can be used on edge devices to trust sensor readings and computations on those readings.
  • Secure Data Monetization: Data providers can make their datasets available in iExec’s marketplace in encrypted form. Buyers can send their algorithms to run on the data inside a TEE, so the provider’s raw data is never revealed (protecting their IP), and the algorithm’s details can also be hidden. The result of the computation is returned to the buyer, and payment to the data provider is handled via smart contracts. This scheme, often called secure data exchange, is enabled by the confidentiality of TEEs (a toy sketch of the flow follows below).
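
To make the data-exchange flow concrete, here is a toy Python sketch. It is only an illustration under stated assumptions: a Fernet key from the `cryptography` library stands in for the enclave’s sealed data key, a plain Python function stands in for the buyer’s algorithm, and in a real deployment the key would be released to the worker only after remote attestation.

```python
from cryptography.fernet import Fernet

def enclave_run(sealed_dataset: bytes, data_key: bytes, algorithm) -> bytes:
    """Pseudo-enclave: decrypt the provider's data, run the buyer's algorithm,
    and return only the result. Plaintext never leaves this function; in a
    real TEE, hardware isolation (not Python scoping) enforces that."""
    plaintext = Fernet(data_key).decrypt(sealed_dataset)
    return algorithm(plaintext)

# Provider side: seal the dataset before listing it on the marketplace.
key = Fernet.generate_key()
sealed = Fernet(key).encrypt(b"confidential,rows,of,data")

# Buyer side: submits an algorithm and receives only its output.
result = enclave_run(sealed, key, lambda data: str(data.count(b",")).encode())
print(result)  # b'3' -- the insight is revealed, the raw data is not
```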

Overall, iExec provides the glue between Ethereum smart contracts and secure off-chain execution. It demonstrates how TEE “workers” can be networked to form a decentralized cloud, complete with a marketplace (using iExec’s RLC token for payment) and consensus mechanisms. By leading the Enterprise Ethereum Alliance’s Trusted Compute working group and contributing to standards (like Hyperledger Avalon), iExec also drives broader adoption of TEEs in enterprise blockchain scenarios.

Other Projects and Ecosystems

Beyond the four above, there are a few other projects worth noting:

  • Integritee – another Polkadot parachain similar to Sanders (in fact, it spun out of the Energy Web Foundation’s TEE work). Integritee uses TEEs to create “parachain-as-a-service” for enterprises, combining on-chain and off-chain enclave processing.
  • Automata Network – a middleware protocol for Web3 privacy that leverages TEEs for private transactions, anonymous voting, and MEV-resistant transaction processing. Automata runs as an off-chain network providing services like a private RPC relay and was mentioned as using TEEs for things like shielded identity and gasless private transactions.
  • Hyperledger Sawtooth (PoET) – in the enterprise realm, Sawtooth introduced a consensus algorithm called Proof of Elapsed Time which relied on SGX. Each validator runs an enclave that waits for a random time and produces a proof; the one with the shortest wait “wins” the block – a fair lottery enforced by SGX (a toy simulation appears after this list). While Sawtooth is not a Web3 project per se (it is more of an enterprise blockchain), it’s a creative use of TEEs for consensus.
  • Enterprise/Consortium Chains – Many enterprise blockchain solutions (e.g. ConsenSys Quorum, IBM Blockchain) incorporate TEEs to enable confidential consortium transactions, where only authorized nodes see certain data. For example, the Enterprise Ethereum Alliance’s Trusted Compute Framework (TCF) blueprint uses TEEs to execute private contracts off-chain and deliver Merkle proofs on-chain.
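
For intuition on the PoET lottery mentioned above, here is a toy Python simulation. It only models the statistics of the lottery; the `mean_wait` parameter is an arbitrary assumption, and in Sawtooth it is SGX that enforces that the node actually waited and signs a proof of it.

```python
import random

def poet_round(validators: list, mean_wait: float = 10.0) -> str:
    """Toy Proof-of-Elapsed-Time round: each validator's 'enclave' draws a
    random wait time; the shortest wait wins the block."""
    waits = {v: random.expovariate(1.0 / mean_wait) for v in validators}
    return min(waits, key=waits.get)

# Over many rounds, wins are distributed uniformly -- a fair lottery.
print(poet_round(["node-a", "node-b", "node-c"]))
```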

These projects collectively show the versatility of TEEs: they power entire privacy-focused L1s, serve as off-chain networks, secure pieces of infrastructure like oracles and bridges, and even underpin consensus algorithms. Next, we consider the broader benefits and challenges of using TEEs in decentralized settings.

4. Benefits and Challenges of TEEs in Decentralized Environments

Adopting Trusted Execution Environments in blockchain systems comes with significant technical benefits as well as notable challenges and trade-offs. We will examine both sides: what TEEs offer to decentralized applications and what problems or risks arise from their use.

Benefits and Technical Strengths

  • Strong Security & Privacy: The foremost benefit is the confidentiality and integrity guarantees. TEEs allow sensitive code to run with assurance that it won’t be spied on or altered by outside malware. This provides a level of trust in off-chain computation that was previously unavailable. For blockchain, this means private data can be utilized (enhancing the functionality of dApps) without sacrificing security. Even in untrusted environments (cloud servers, validator nodes run by third parties), TEEs keep secrets safe. This is especially beneficial for managing private keys, user data, and proprietary algorithms within crypto systems. For example, a hardware wallet or a cloud signing service might use a TEE to sign blockchain transactions internally so the private key is never exposed in plaintext, combining convenience with security (a minimal sketch of this pattern follows this list).

  • Near-Native Performance: Unlike purely cryptographic approaches to secure computation (like ZK proofs or homomorphic encryption), TEE overhead is relatively small. Code runs directly on the CPU, so a computation inside an enclave is roughly as fast as running outside (with some overhead for enclave transitions and memory encryption, typically single-digit percentage slowdowns in SGX). This means TEEs can handle compute-intensive tasks efficiently, enabling use cases (like real-time data feeds, complex smart contracts, machine learning) that would be orders of magnitude slower if done with cryptographic protocols. The low latency of enclaves makes them suitable where fast response is needed (e.g. high-frequency trading bots secured by TEEs, or interactive applications and games where user experience would suffer with high delays).

  • Improved Scalability (via Offload): By allowing heavy computations to be done off-chain securely, TEEs help alleviate congestion and gas costs on main chains. They enable Layer-2 designs and side protocols where the blockchain is used only for verification or final settlement, while the bulk of computation happens in parallel enclaves. This modularization (compute-intensive logic in TEEs, consensus on chain) can drastically improve throughput and scalability of decentralized apps. For instance, a DEX could do match-making in a TEE off-chain and only post matched trades on-chain, increasing throughput and reducing on-chain gas.

  • Better User Experience & Functionality: With TEEs, dApps can offer features like confidentiality or complex analytics that attract more users (including institutions). TEEs also enable gasless or meta-transactions by safely executing them off-chain and then submitting results, as noted in Automata’s use of TEEs to reduce gas for private transactions. Additionally, storing sensitive state off-chain in an enclave can reduce the data published on-chain, which is good for user privacy and network efficiency (less on-chain data to store/verify).

  • Composability with Other Tech: Interestingly, TEEs can complement other technologies (not strictly a benefit inherent to TEEs alone, but in combination). They can serve as the glue that holds together hybrid solutions: e.g., running a program in an enclave and also generating a ZK proof of its execution, where the enclave helps with parts of the proving process to speed it up. Or using TEEs in MPC networks to handle certain tasks with fewer rounds of communication. We’ll discuss comparisons in §5, but many projects highlight that TEEs don’t have to replace cryptography – they can work alongside to bolster security (Sanders’s mantra: “TEE’s strength lies in supporting others, not replacing them”).
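
Returning to the key-management example in the first bullet, the sketch below shows the pattern in Python, assuming the `cryptography` library’s ed25519 keys. The `EnclaveSigner` class is hypothetical: Python scoping merely stands in for the hardware boundary that a real enclave would enforce.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

class EnclaveSigner:
    """Pseudo-enclave signing service: the private key is generated inside
    and never exported; callers see only the public key and signatures."""

    def __init__(self):
        # In a real TEE the key is generated and sealed inside the enclave.
        self._sk = ed25519.Ed25519PrivateKey.generate()

    def public_key(self) -> bytes:
        return self._sk.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    def sign_tx(self, tx_bytes: bytes) -> bytes:
        # Only the signature crosses the enclave boundary, never the key.
        return self._sk.sign(tx_bytes)

signer = EnclaveSigner()
sig = signer.sign_tx(b"transfer 1 ETH to 0xabc...")
```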

Trust Assumptions and Security Vulnerabilities

Despite their strengths, TEEs introduce specific trust assumptions and are not invulnerable. It’s crucial to understand these challenges:

  • Hardware Trust and Centralization: By using TEEs, one inherently places trust in the silicon vendor and the security of their hardware design and supply chain. For example, using Intel SGX means trusting that Intel has no backdoors, that its manufacturing is secure, and that the CPU’s microcode correctly implements enclave isolation. This is a more centralized trust model compared to pure cryptography (which relies on mathematical assumptions distributed among all users). Moreover, attestation for SGX has historically relied on contacting Intel’s Attestation Service, meaning that if Intel went offline or decided to revoke keys, enclaves globally could be affected. This dependency on a single company’s infrastructure raises concerns: it could be a single point of failure or even a target of government regulation (e.g., U.S. export controls could in theory restrict who can use strong TEEs). AMD SEV mitigates this by allowing more decentralized attestation (VM owners can attest their VMs), but one must still trust AMD’s chip and firmware. This centralization risk is often cited as somewhat antithetical to blockchain’s decentralization. Projects like Keystone (an open-source TEE) are researching ways to reduce reliance on proprietary black boxes, but these are not yet mainstream.

  • Side-Channel and Other Vulnerabilities: A TEE is not a magic bullet; it can be attacked through indirect means. Side-channel attacks exploit the fact that even if direct memory access is blocked, an enclave’s operation might subtly influence the system (through timing, cache usage, power consumption, electromagnetic emissions, etc.). Over the past few years, numerous academic attacks on Intel SGX have been demonstrated: from Foreshadow (extracting enclave secrets via L1 cache timing leakage) to Plundervolt (voltage fault injection via privileged instructions) to SGAxe (extracting attestation keys), among others. These sophisticated attacks show that TEEs can be compromised without needing to break cryptographic protections – instead, by exploiting microarchitectural behaviors or flaws in the implementation. As a result, it’s acknowledged that “researchers have identified various potential attack vectors that could exploit hardware vulnerabilities or timing differences in TEE operations”. While these attacks are non-trivial and often require either local access or malicious hardware, they are a real threat. TEEs also generally do not protect against physical attacks if an adversary has the chip in hand (e.g., decapping the chip, probing buses, etc. can defeat most commercial TEEs).

    The vendor responses to side-channel discoveries have been microcode patches and enclave SDK updates to mitigate known leaks (sometimes at cost of performance). But it remains a cat-and-mouse game. For Web3, this means if someone finds a new side-channel on SGX, a “secure” DeFi contract running in SGX could potentially be exploited (e.g., to leak secret data or manipulate execution). So, relying on TEEs means accepting a potential vulnerability surface at the hardware level that is outside the typical blockchain threat model. It’s an active area of research to strengthen TEEs against these (for instance, by designing enclave code with constant-time operations, avoiding secret-dependent memory access patterns, and using techniques like oblivious RAM). Some projects also augment TEEs with secondary checks – e.g. combining with ZK proofs, or having multiple enclaves run on different hardware vendors to reduce single-chip risk.

  • Performance and Resource Constraints: Although TEEs run at near-native speed for CPU-bound tasks, they do come with some overheads and limits. Switching into an enclave (an ECALL) and out (an OCALL) has a cost, as does the encryption/decryption of memory pages. This can impact performance for workloads with very frequent enclave boundary crossings. Enclaves also often have memory size limitations. For example, early SGX had a limited Enclave Page Cache (roughly 128 MB), and when enclaves used more memory, pages had to be swapped in and out with encryption, which massively slowed performance. Even newer TEEs often don’t allow using all system RAM easily – there’s a secure memory region that might be capped. This means very large-scale computations or data sets could be challenging to handle entirely inside a TEE. In Web3 contexts, this might limit the complexity of smart contracts or ML models that can run in an enclave. Developers have to optimize for memory and possibly split workloads.

  • Complexity of Attestation and Key Management: Using TEEs in a decentralized setting requires robust attestation workflows: each node needs to prove to others that it’s running an authentic enclave with the expected code. Setting up this attestation verification on-chain can be complex. It usually involves hard-coding the vendor’s public attestation key or certificate into the protocol and writing verification logic into smart contracts or off-chain clients. This introduces overhead in protocol design, and any changes (like Intel’s migration from EPID to DCAP attestation) can cause maintenance burdens. Additionally, managing keys within TEEs (for decrypting data or signing results) adds another layer of complexity. Mistakes in enclave key management could undermine security (e.g., if an enclave inadvertently exposes a decryption key through a bug, all its confidentiality promises collapse). Best practices involve using the TEE’s sealing APIs to securely store keys and rotating keys when needed, but again this requires careful design by developers (a simplified attestation check is sketched after this list).

  • Denial-of-Service and Availability: A perhaps less-discussed issue: TEEs do not help with availability and can even introduce new DoS avenues. For instance, an attacker might flood a TEE-based service with inputs that are costly to process, knowing that the enclave can’t be easily inspected or interrupted by the operator (since it’s isolated). Also, if a vulnerability is found and a patch requires firmware updates, during that cycle many enclave services might have to pause (for security) until nodes are patched, causing downtime. In blockchain consensus, imagine if a critical SGX bug was found – networks like Secret might have to halt until a fix, since trust in the enclaves would be broken. Coordination of such responses in a decentralized network is challenging.
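
To ground the attestation point above, here is a simplified Python verifier. The structure (pinned vendor key, expected enclave measurement) mirrors real attestation flows, but the details are assumptions: `EXPECTED_MRENCLAVE` is a placeholder, and ed25519 stands in for the ECDSA certificate chains that SGX’s EPID/DCAP schemes actually use.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical: the measurement (code hash) of the one enclave build we accept.
EXPECTED_MRENCLAVE = "9f86d081884c7d65..."

def verify_report(report: dict, sig: bytes,
                  vendor_key: ed25519.Ed25519PublicKey) -> bool:
    """Accept a result only if (1) the report is signed by the pinned vendor
    attestation key and (2) the enclave measurement matches the expected code."""
    payload = (report["mrenclave"] + report["enclave_pubkey"]).encode()
    try:
        vendor_key.verify(sig, payload)  # raises if the signature is invalid
    except InvalidSignature:
        return False
    return report["mrenclave"] == EXPECTED_MRENCLAVE
```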

Composability and Ecosystem Limitations

  • Limited Composability with Other Contracts: In a public smart contract platform like Ethereum, contracts can easily call other contracts and all state is in the open, enabling DeFi money legos and rich composition. In a TEE-based contract model, private state cannot be freely shared or composed without breaking confidentiality. For example, if Contract A in an enclave needs to interact with Contract B, and both hold some secret data, how do they collaborate? Either they must run a complex secure multi-party protocol (which negates some of the simplicity of TEEs) or they must combine into one enclave (reducing modularity). This is a challenge that Secret Network and others face: cross-contract calls with privacy are non-trivial. Some solutions involve having a single enclave handle multiple contracts’ execution so it can internally manage shared secrets, but that can make the system more monolithic. Thus, composability of private contracts is more limited than that of public ones, or requires new design patterns. Similarly, integrating TEE-based modules into existing blockchain dApps requires careful interface design – often only the result of an enclave is posted on-chain, which might be a SNARK proof or a hash, and other contracts can only use that limited information. This is certainly a trade-off; projects like Secret provide viewing keys and permits for sharing secrets on a need-to-know basis, but it’s not as seamless as normal on-chain composability.

  • Standardization and Interoperability: The TEE ecosystem currently lacks unified standards across vendors. Intel SGX, AMD SEV, ARM TrustZone all have different programming models and attestation methods. This fragmentation means a dApp written for SGX enclaves isn’t trivially portable to TrustZone, etc. In blockchain, this can tie a project to a specific hardware (e.g., Secret and Oasis are tied to x86 servers with SGX right now). If down the line those want to support ARM nodes (say, validators on mobile), it would require additional development and perhaps different attestation verification logic. There are efforts (like the CCC – Confidential Computing Consortium) to standardize attestation and enclave APIs, but we’re not fully there yet. Lack of standards also affects developer tooling – one might find the SGX SDK mature but then need to adapt to another TEE with a different SDK. This interoperability challenge can slow adoption and increase costs.

  • Developer Learning Curve: Building applications that run inside TEEs requires specialized knowledge that many blockchain developers may not have. Low-level C/C++ programming (for SGX/TrustZone) or an understanding of memory safety and side-channel-resistant coding is often needed. Debugging enclave code is infamously tricky (you can’t easily see inside an enclave while it’s running, for security reasons). Although frameworks and higher-level languages exist (like Oasis’s use of Rust for their confidential runtime, or tools to run WebAssembly in enclaves), the developer experience is still rougher than typical smart contract development or off-chain Web2 development. This steep learning curve and immature tooling can deter developers or lead to mistakes if not handled carefully. There’s also the need for hardware to test on – running SGX code requires an SGX-enabled CPU or an emulator (which is slower), so the barrier to entry is higher. As a result, relatively few devs today are deeply familiar with enclave development, making audits and community support scarcer than in, say, the well-trodden Solidity community.

  • Operational Costs: Running a TEE-based infrastructure can be more costly. The hardware itself might be more expensive or scarce (e.g., certain cloud providers charge premium for SGX-capable VMs). There’s also overhead in operations: keeping firmware up-to-date (for security patches), managing attestation networking, etc., which small projects might find burdensome. If every node must have a certain CPU, it could reduce the potential validator pool (not everyone has the required hardware), thus affecting decentralization and possibly leading to higher cloud hosting usage.

In summary, while TEEs unlock powerful features, they also bring trust trade-offs (hardware trust vs. math trust), potential security weaknesses (especially side-channels), and integration hurdles in a decentralized context. Projects using TEEs must carefully engineer around these issues – employing defense-in-depth (don’t assume the TEE is unbreakable), keeping the trusted computing base minimal, and being transparent about the trust assumptions to users (so it’s clear, for instance, that one is trusting Intel’s hardware in addition to the blockchain consensus).

5. TEEs vs. Other Privacy-Preserving Technologies (ZKP, FHE, MPC)

Trusted Execution Environments are one approach to achieving privacy and security in Web3, but there are other major techniques including Zero-Knowledge Proofs (ZKPs), Fully Homomorphic Encryption (FHE), and Secure Multi-Party Computation (MPC). Each of these technologies has a different trust model and performance profile. In many cases, they are not mutually exclusive – they can complement each other – but it’s useful to compare their trade-offs in performance, trust, and developer usability:

To briefly define the alternatives:

  • ZKPs: Cryptographic proofs (like zk-SNARKs, zk-STARKs) that allow one party to prove to others that a statement is true (e.g. “I know a secret that satisfies this computation”) without revealing why it’s true (hiding the secret input). In blockchain, ZKPs are used for private transactions (e.g. Zcash, Aztec) and for scalability (rollups that post proofs of correct execution). They ensure strong privacy (no secret data is leaked, only proofs) and integrity guaranteed by math, but generating these proofs can be computationally heavy and the circuits must be designed carefully.
  • FHE: Encryption scheme that allows arbitrary computation on encrypted data, so that the result, when decrypted, matches the result of computing on plaintexts. In theory, FHE provides ultimate privacy – data stays encrypted at all times – and you don’t need to trust anyone with the raw data. But FHE is extremely slow for general computations (though it’s improving with research); it's still mostly in experimental or specialized use due to performance.
  • MPC: Protocols where multiple parties jointly compute a function over their private inputs without revealing those inputs to each other. It often involves secret-sharing data among the parties and performing cryptographic operations so that the output is correct but individual inputs remain hidden. MPC can distribute trust (no single point sees all data) and can be efficient for certain operations, but it typically incurs a communication and coordination overhead and can be complex to implement for large networks (a toy secret-sharing example follows this list).
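
As a flavor of how MPC hides inputs, here is a toy additive secret-sharing example in Python. It is a sketch only: three parties compute a sum without any one of them seeing another’s input, with the prime modulus and the salary figures chosen arbitrarily for illustration; real MPC frameworks (SPDZ et al.) add authentication against active cheating.

```python
import secrets

P = 2**61 - 1  # prime modulus; each share alone is uniformly random mod P

def share(value: int, n_parties: int) -> list:
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three parties privately sum their salaries without revealing them.
salaries = [70_000, 85_000, 92_000]
all_shares = [share(s, 3) for s in salaries]

# Party i sums the i-th share of every input; the partial sums then combine.
partials = [sum(col) % P for col in zip(*all_shares)]
print(sum(partials) % P)  # 247000 -- only the total is revealed
```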

Below is a comparison table summarizing key differences:

| Technology | Trust Model | Performance | Data Privacy | Developer Usability |
| --- | --- | --- | --- | --- |
| TEE (Intel SGX, etc.) | Trust in the hardware manufacturer (centralized attestation server in some cases). Assumes the chip is secure; if the hardware is compromised, security is broken. | Near-native execution speed; minimal overhead. Good for real-time computation and large workloads. Scalability limited by availability of TEE-enabled nodes. | Data is in plaintext inside the enclave but encrypted to the outside world. Strong confidentiality if the hardware holds; if the enclave is breached, secrets are exposed (no additional mathematical protection). | Moderate complexity. Can often reuse existing code/languages (C, Rust) in an enclave with minor modifications. Lowest entry barrier among these – no need to learn advanced cryptography – but requires systems programming and TEE-specific SDK knowledge. |
| ZKP (zk-SNARK/STARK) | Trust in math assumptions (e.g. hardness of cryptographic problems) and sometimes a trusted setup (for SNARKs). No reliance on any single party at run-time. | Proof generation is computationally heavy (especially for complex programs), often orders of magnitude slower than native. On-chain verification is fast (a few ms). Not ideal for large-data computations due to proving time. Scalability: good for succinct verification (rollups), but the prover is the bottleneck. | Very strong privacy – can prove correctness without revealing any private input. Only minimal information (like proof size) leaks. Ideal for financial privacy, etc. | High complexity. Requires learning specialized languages (circuits, zkDSLs like Circom or Noir) and thinking in terms of arithmetic circuits. Debugging is hard. Fewer experts available. |
| FHE | Trust in math (lattice problems). No trusted party; security holds as long as the encryption isn’t broken. | Very slow for general use. Operations on encrypted data are several orders of magnitude slower than on plaintext. Improving with research and hardware, but currently impractical for real-time use in blockchain contexts. | Ultimate privacy – data remains encrypted the entire time, even during computation. Ideal for sensitive data (e.g. medical, cross-institution analytics) if performance allowed. | Very specialized. Developers need a cryptography background. Libraries exist (e.g. Microsoft SEAL, TFHE), but writing arbitrary programs in FHE is difficult and circuitous. Not yet a routine development target for dApps. |
| MPC | Trust distributed among multiple parties. Assumes a threshold of parties are honest (no collusion beyond a certain number). No hardware trust needed. Trust fails if too many collude. | Typically slower than native due to communication rounds, but often faster than FHE. Performance varies: simple operations (add, multiply) can be efficient; complex logic may blow up in communication cost. Latency is sensitive to network speed. Scalability can be improved with sharding or partial-trust assumptions. | Strong privacy if assumptions hold – no single node sees the whole input. Some information can leak via outputs or if parties drop out (and it lacks the succinctness of ZK – you get the result, but no easily shareable proof of it without re-running the protocol). | High complexity. Requires designing a custom protocol per use case or using frameworks (like SPDZ, or Partisia’s offering). Developers must reason about cryptographic protocols and coordinate deployment of multiple nodes. Integration into blockchain apps can be complex (off-chain rounds needed). |

Citations: The above comparison draws on sources such as Sanders Network’s analysis and others, which highlight that TEEs excel in speed and ease-of-use, whereas ZK and FHE focus on maximal trustlessness at the cost of heavy computation, and MPC distributes trust but introduces network overhead.

From the table, a few key trade-offs become clear:

  • Performance: TEEs have a big advantage in raw speed and low latency. MPC can often handle moderate complexity with some slowdown, ZK is slow to produce but fast to verify (asynchronous usage), and FHE is currently the slowest by far for arbitrary tasks (though fine for limited operations like simple additions/multiplications). If your application needs real-time complex processing (like interactive applications, high-frequency decisions), TEEs or perhaps MPC (with few parties on good connections) are the only viable options today. ZK and FHE would be too slow in such scenarios.

  • Trust Model: ZKP and FHE are purely trustless (only trust math). MPC shifts trust to assumptions about participant honesty (which can be bolstered by having many parties or economic incentives). TEE places trust in hardware and the vendor. This is a fundamental difference: TEEs introduce a trusted third party (the chip) into the usually trustless world of blockchain. In contrast, ZK and FHE are often praised for aligning better with the decentralized ethos – no special entities to trust, just computational hardness. MPC sits in between: trust is decentralized but not eliminated (if N out of M nodes collude, privacy breaks). So for maximal trustlessness (e.g., a truly censorship-resistant, decentralized system), one might lean toward cryptographic solutions. On the other hand, many practical systems are comfortable assuming Intel is honest or that a set of major validators won’t collude, trading a bit of trust for huge gains in efficiency.

  • Security/Vulnerabilities: TEEs, as discussed, can be undermined by hardware bugs or side-channels. ZK and FHE security can be undermined if the underlying math (say, elliptic curve or lattice problem) is broken, but those are well-studied problems and attacks would likely be noticed (also, parameter choices can mitigate known risks). MPC’s security can be broken by active adversaries if the protocol isn’t designed for that (some MPC protocols assume “honest but curious” participants and might fail if someone outright cheats). In blockchain context, a TEE breach might be more catastrophic (all enclave-based contracts could be at risk until patched) whereas a ZK cryptographic break (like discovering a flaw in a hash function used by a ZK rollup) could also be catastrophic but is generally considered less likely given the simpler assumption. The surface of attack is very different: TEEs have to worry about things like power analysis, while ZK has to worry about mathematical breakthroughs.

  • Data Privacy: FHE and ZK offer the strongest privacy guarantees – data remains cryptographically protected. MPC ensures data is secret-shared, so no single party sees it (though some info could leak if outputs are public or if protocols are not carefully designed). TEE keeps data private from the outside, but inside the enclave data is decrypted; if someone somehow gains control of the enclave, the data confidentiality is lost. Also, TEEs typically allow the code to do anything with the data (including inadvertently leaking it through side-channels or network if the code is malicious). So TEEs require that you also trust the enclave code not just the hardware. In contrast, ZKPs prove properties of the code without ever revealing secrets, so you don’t even have to trust the code (beyond it actually having the property proven). If an enclave application had a bug that leaked data to a log file, the TEE hardware wouldn’t prevent that – whereas a ZK proof system simply wouldn’t reveal anything except the intended proof. This is a nuance: TEEs protect against external adversaries, but not necessarily logic bugs in the enclave program itself, whereas ZK’s design forces a more declarative approach (you prove exactly what is intended and nothing more).

  • Composability & Integration: TEEs integrate fairly easily into existing systems – you can take an existing program, put it into an enclave, and get some security benefits without changing the programming model too much. ZK and FHE often require rewriting the program into a circuit or restrictive form, which can be a massive effort. For instance, writing a simple AI model verification in ZK involves transforming it to a series of arithmetic ops and constraints, which is a far cry from just running TensorFlow in a TEE and attesting the result. MPC similarly may require custom protocol per use case. So from a developer productivity and cost standpoint, TEEs are attractive. We’ve seen adoption of TEEs quicker in some areas precisely because you can leverage existing software ecosystems (many libraries run in enclaves with minor tweaks). ZK/MPC require specialized engineering talent which is scarce. However, the flip side is that TEEs yield a solution that is often more siloed (you have to trust that enclave or that set of nodes), whereas ZK gives you a proof anyone can check on-chain, making it highly composable (any contract can verify a zk proof). So ZK results are portable – they produce a small proof that any number of other contracts or users can use to gain trust. TEE results usually come in the form of an attestation tied to a particular hardware and possibly not succinct; they may not be as easily shareable or chain-agnostic (though you can post a signature of the result and have contracts programmed to accept that if they know the public key of the enclave).

In practice, we are seeing hybrid approaches: for example, Sanders Network argues that TEE, MPC, and ZK each shine in different areas and can complement each other. A concrete case is decentralized identity: one might use ZK proofs to prove an identity credential without revealing it, but that credential might have been verified and issued by a TEE-based process that checked your documents privately. Or consider scaling: ZK rollups provide succinct proofs for lots of transactions, but generating those proofs could be sped up by using TEEs to do some computations faster (and then only proving a smaller statement). The combination can sometimes reduce the trust requirement on TEEs (e.g., use TEEs for performance, but still verify final correctness via a ZK proof or via an on-chain challenge game so that a compromised TEE can’t cheat without being caught). Meanwhile, MPC can be combined with TEEs by having each party’s compute node be a TEE, adding an extra layer so that even if some parties collude, they still cannot see each other’s data unless they also break hardware security.

In summary, TEEs offer a very practical and immediate path to secure computation with modest assumptions (hardware trust), whereas ZK and FHE offer a more theoretical and trustless path but at high computational cost, and MPC offers a distributed trust path with network costs. The right choice in Web3 depends on the application requirements:

  • If you need fast, complex computation on private data (like AI, large data sets) – TEEs (or MPC with few parties) are currently the only feasible way.
  • If you need maximum decentralization and verifiability – ZK proofs shine (for example, private cryptocurrency transactions favor ZKP as in Zcash, because users don’t want to trust anything but math).
  • If you need collaborative computing among multiple stakeholders – MPC is naturally suited (like multi-party key management or auctions).
  • If you have extremely sensitive data and long-term privacy is a must – FHE could be appealing if performance improves, because even if someone got your ciphertexts years later, without the key they learn nothing; whereas an enclave compromise could leak secrets retroactively if logs were kept.

It’s worth noting that the blockchain space is actively exploring all these technologies in parallel. We’re likely to see combinations: e.g., Layer 2 solutions integrating TEEs for sequencing transactions and then using a ZKP to prove the TEE followed the rules (a concept being explored in some Ethereum research), or MPC networks that use TEEs in each node to reduce the complexity of the MPC protocols (since each node is internally secure and can simulate multiple parties).

Ultimately, TEEs vs ZK vs MPC vs FHE is not a zero-sum choice – they each target different points in the triangle of security, performance, and trustlessness. As one article put it, all four face an "impossible triangle" of performance, cost, and security – no single solution is superior in all aspects. The optimal design often uses the right tool for the right part of the problem.

6. Adoption Across Major Blockchain Ecosystems

Trusted Execution Environments have seen varying levels of adoption in different blockchain ecosystems, often influenced by the priorities of those communities and the ease of integration. Here we evaluate how TEEs are being used (or explored) in some of the major ecosystems: Ethereum, Cosmos, and Polkadot, as well as touch on others.

Ethereum (and General Layer-1s)

On Ethereum mainnet itself, TEEs are not part of the core protocol, but they have been used in applications and Layer-2s. Ethereum’s philosophy leans on cryptographic security (e.g., emerging ZK-rollups), but TEEs have found roles in oracles and off-chain execution for Ethereum:

  • Oracle Services: As discussed, Chainlink has incorporated TEE-based solutions like Town Crier. While not all Chainlink nodes use TEEs by default, the technology is there for data feeds requiring extra trust. Also, API3 (another oracle project) has mentioned using Intel SGX to run APIs and sign data to ensure authenticity. These services feed data to Ethereum contracts with stronger assurances.

  • Layer-2 and Rollups: There’s ongoing research and debate in the Ethereum community about using TEEs in rollup sequencers or validators. For example, ConsenSys’ “ZK-Portal” concept and others have floated using TEEs to enforce correct ordering in optimistic rollups or to protect the sequencer from censorship. One Medium analysis even suggests that by 2025, TEEs might become a default feature in some L2s for things like high-frequency trading protection. Projects like Catalyst (a high-frequency trading DEX) and Flashbots (for MEV relays) have looked at TEEs to enforce fair ordering of transactions before they hit the blockchain.

  • Enterprise Ethereum: In consortium or permissioned Ethereum networks, TEEs are more widely adopted. The Enterprise Ethereum Alliance’s Trusted Compute Framework (TCF) was basically a blueprint for integrating TEEs into Ethereum clients. Hyperledger Avalon (formerly EEA TCF) allows parts of Ethereum smart contracts to be executed off-chain in a TEE and then verified on-chain. Several companies like IBM, Microsoft, and iExec contributed to this. While on public Ethereum this hasn’t become common, in private deployments (e.g., a group of banks using Quorum or Besu), TEEs can be used so that even consortium members don’t see each other’s data, only authorized results. This can satisfy privacy requirements in an enterprise setting.

  • Notable Projects: Aside from iExec, which operates on Ethereum, there were projects like Enigma (which originally started as an MPC project at MIT, then pivoted to using SGX; it later became Secret Network on Cosmos). Another was Decentralized Cloud Services (DCS) in early Ethereum discussions. More recently, the Oasis Ethereum ParaTime has allowed Solidity contracts to run with confidentiality by using Oasis’s TEE backend while settling on Ethereum. Some Ethereum-based dApps, such as medical data sharing or gaming applications, have also experimented with TEEs by having an off-chain enclave component interact with their contracts.

So Ethereum’s adoption is somewhat indirect – it hasn’t changed the protocol to require TEEs, but it has a rich set of optional services and extensions leveraging TEEs for those who need them. Importantly, Ethereum researchers remain cautious: proposals to make a “TEE-only shard” or to deeply integrate TEEs have met community skepticism due to trust concerns. Instead, TEEs are seen as “co-processors” to Ethereum rather than core components.

Cosmos Ecosystem

The Cosmos ecosystem is friendly to experimentation via its modular SDK and sovereign chains, and Secret Network (covered above) is a prime example of TEE adoption in Cosmos. Secret Network is actually a Cosmos SDK chain with Tendermint consensus, modified to mandate SGX in its validators. It’s one of the most prominent Cosmos zones after the main Cosmos Hub, indicating significant adoption of TEE tech in that community. The success of Secret in providing interchain privacy (through its IBC connections, Secret can serve as a privacy hub for other Cosmos chains) is a noteworthy case of TEE integration at L1.

Another Cosmos-related project is Oasis Network (though not built on the Cosmos SDK, it was designed by some of the same people who contributed to Tendermint and shares a similar ethos of modular architecture). Oasis is standalone but can connect to Cosmos via bridges, etc. Both Secret and Oasis show that in Cosmos-land, the idea of “privacy as a feature” via TEEs gained enough traction to warrant dedicated networks.

Cosmos even has a concept of “privacy providers” for interchain applications – e.g., an app on one chain can call a contract on Secret Network via IBC to perform a confidential computation, then get the result back. This composability is emerging now.

Additionally, the Anoma project (not strictly Cosmos, but related in the interoperability sense) has talked about using TEEs for intent-centric architectures, though it’s more theoretical.

In short, Cosmos has at least one major chain fully embracing TEEs (Secret) and others interacting with it, illustrating a healthy adoption in that sphere. The modularity of Cosmos could allow more such chains (for example, one could imagine a Cosmos zone specializing in TEE-based oracles or identity).

Polkadot and Substrate

Polkadot’s design allows parachains to specialize, and indeed Polkadot hosts multiple parachains that use TEEs:

  • Sanders Network: Already described; a parachain offering a TEE-based compute cloud. Sanders has been live as a parachain, providing services to other chains through XCMP (cross-chain message passing). For instance, another Polkadot project can offload a confidential task to Sanders’s workers and get a proof or result back. Sanders’s native token economics incentivize running TEE nodes, and it has a sizable community, signaling strong adoption.
  • Integritee: Another parachain focusing on enterprise and data privacy solutions using TEEs. Integritee allows teams to deploy their own private side-chains (called Teewasms) where the execution is done in enclaves. It’s targeting use cases like confidential data processing for corporations that still want to anchor to Polkadot security.
  • Crust and others: There were ideas about using TEEs for decentralized storage or random beacons in some Polkadot-related projects. For example, Crust Network (decentralized storage) originally planned a TEE-based proof-of-storage (though it later moved to another design), and the Entropy project (randomness for Polkadot) considered TEEs vs. VRFs.

Polkadot’s reliance on on-chain governance and upgrades means parachains can incorporate new tech rapidly. Both Sanders and Integritee have gone through upgrades to improve their TEE integration (like supporting new SGX features or refining attestation methods). The Web3 Foundation also funded earlier efforts on Substrate-based TEE projects like SubstraTEE (an early prototype that showed off-chain contract execution in TEEs with on-chain verification).

The Polkadot ecosystem thus shows multiple, independent teams betting on TEE tech, indicating a positive adoption trend. It’s becoming a selling point for Polkadot that “if you need confidential smart contracts or off-chain compute, we have parachains for that”.

Other Ecosystems and General Adoption

  • Enterprise and Consortia: Outside public crypto, Hyperledger and enterprise chains have steadily adopted TEEs for permissioned settings. For instance, the Basel Committee tested a TEE-based trade finance blockchain. The general pattern is: where privacy or data confidentiality is a must, and participants are known (so they might even collectively invest in hardware secure modules), TEEs find a comfortable home. These may not make headlines in crypto news, but in sectors like supply chain, banking consortia, or healthcare data-sharing networks, TEEs are often the go-to (as an alternative to just trusting a third party or using heavy cryptography).

  • Layer-1s outside Ethereum: Some newer L1s have dabbled with TEEs. NEAR Protocol had an early concept of a TEE-based shard for private contracts (not implemented yet). Celo considered TEEs for light client proofs (their Plumo proofs now rely on snarks, but they looked at SGX to compress chain data for mobile at one point). Concordium, a regulated privacy L1, uses ZK for anonymity but also explores TEEs for identity verification. Dfinity/Internet Computer uses secure enclaves in its node machines, but for bootstrapping trust (not for contract execution, as their “Chain Key” cryptography handles that).

  • Bitcoin: While Bitcoin itself does not use TEEs, there have been side projects. For example, TEE-based custody solutions (like vault systems) for Bitcoin keys, or certain proposals in DLCs (Discreet Log Contracts) to use oracles that might be TEE-secured. Generally, the Bitcoin community is more conservative and would not easily trust Intel as part of consensus, but as ancillary tech (hardware wallets with secure elements) TEEs are already accepted.

  • Regulators and Governments: An interesting facet of adoption: some CBDC (central bank digital currency) research has looked at TEEs to enforce privacy while allowing auditability. For instance, the Bank of France ran experiments where they used a TEE to handle certain compliance checks on otherwise private transactions. This shows that even regulators see TEEs as a way to balance privacy with oversight – you could have a CBDC where transactions are encrypted to the public but a regulator enclave can review them under certain conditions (this is hypothetical, but discussed in policy circles).

  • Adoption Metrics: It’s hard to quantify adoption, but we can look at indicators like the number of projects, funds invested, and availability of infrastructure. On that front, as of 2025 we have: several public networks explicitly using TEEs (Secret, Oasis, Sanders, and Integritee as chains, plus Automata as an off-chain network); major oracle networks incorporating them; and large tech companies backing confidential computing (Microsoft Azure and Google Cloud offer TEE VMs – services that blockchain nodes can and do use). The Confidential Computing Consortium now includes blockchain-focused members (Ethereum Foundation, Chainlink, Fortanix, etc.), showing cross-industry collaboration. These all point to a growing but niche adoption – TEEs aren’t ubiquitous in Web3 yet, but they have carved out important niches where privacy and secure off-chain compute are required.

7. Business and Regulatory Considerations

The use of TEEs in blockchain applications raises several business and regulatory points that stakeholders must consider:

Privacy Compliance and Institutional Adoption

One of the business drivers for TEE adoption is the need to comply with data privacy regulations (like GDPR in Europe, HIPAA in the US for health data) while leveraging blockchain technology. Public blockchains by default broadcast data globally, which conflicts with regulations that require sensitive personal data to be protected. TEEs offer a way to keep data confidential on-chain and only share it in controlled ways, thus enabling compliance. As noted, “TEEs facilitate compliance with data privacy regulations by isolating sensitive user data and ensuring it is handled securely”. This capability is crucial for bringing enterprises and institutions into Web3, as they can’t risk violating laws. For example, a healthcare dApp that processes patient info could use TEEs to ensure no raw patient data ever leaks on-chain, satisfying HIPAA’s requirements for encryption and access control. Similarly, a European bank could use a TEE-based chain to tokenize and trade assets without exposing clients’ personal details, aligning with GDPR.

This has a positive regulatory angle: some regulators have indicated that solutions like TEEs (and related concepts of confidential computing) are favorable because they provide technical enforcement of privacy. We’ve seen the World Economic Forum and others highlight TEEs as a means to build “privacy by design” into blockchain systems (essentially embedding compliance at the protocol level). Thus, from a business perspective, TEEs can accelerate institutional adoption by removing one of the key blockers (data confidentiality). Companies are more willing to use or build on blockchain if they know there’s a hardware safeguard for their data.

Another compliance aspect is auditability and oversight. Enterprises often need audit logs and the ability to prove to auditors that they are in control of data. TEEs can actually help here by producing attestation reports and secure logs of what was accessed. For instance, Oasis’s “durable logging” in an enclave provides a tamper-resistant log of sensitive operations. An enterprise can show that log to regulators to prove that, say, only authorized code ran and only certain queries were done on customer data. This kind of attested auditing could satisfy regulators more than a traditional system where you trust sysadmin logs.

Trust and Liability

On the flip side, introducing TEEs changes the trust structure and thus the liability model of blockchain solutions. If a DeFi platform uses a TEE and something goes wrong due to a hardware flaw, who is responsible? For example, consider a scenario where an Intel SGX bug leads to a leak of secret swap transaction details, causing users to lose money (e.g., by being front-run). The users trusted the platform’s security claims. Is the platform at fault, or is it Intel’s? Legally, users might go after the platform (which in turn might have to go after Intel). This complicates things because a third-party technology provider (the CPU vendor) sits deep in the security model. Businesses using TEEs have to account for this in contracts and risk assessments. Some might seek warranties or support commitments from hardware vendors when using their TEEs in critical infrastructure.

There’s also the centralization concern: if a blockchain’s security relies on a single company’s hardware (Intel or AMD), regulators might view that with skepticism. For instance, could a government subpoena or coerce that company to compromise certain enclaves? This is not a purely theoretical concern – consider export control laws: high-grade encryption hardware can be subject to regulation. If a large portion of crypto infrastructure relies on TEEs, it’s conceivable that governments could attempt to insert backdoors (though there’s no evidence of that, the perception matters). Some privacy advocates point this out to regulators: that TEEs concentrate trust and if anything, regulators should carefully vet them. Conversely, regulators who want more control might prefer TEEs over math-based privacy like ZK, because with TEEs there’s at least a notion that law enforcement could approach the hardware vendor with a court order if absolutely needed (e.g., to get a master attestation key or some such – not that it’s easy or likely, but it’s an avenue that doesn’t exist with ZK). So regulatory reception can split: privacy regulators (data protection agencies) are pro-TEE for compliance, whereas law enforcement might be cautiously optimistic since TEEs aren’t “going dark” in the way strong encryption is – there’s a theoretical lever (the hardware) they might try to pull.

Businesses need to navigate this, possibly by engaging in certifications. There are security certifications like FIPS 140 or Common Criteria for hardware modules. Currently, SGX and other TEEs carry some certifications (for example, SGX has undergone Common Criteria evaluations for certain usages). If a blockchain platform can point to the enclave tech being certified to a high standard, regulators and partners may be more comfortable. For instance, a CBDC project might require that any TEE used is FIPS-certified so they trust its random number generation, etc. This introduces additional process and possibly restricts deployments to certain hardware versions.

Ecosystem and Cost Considerations

From a business perspective, using TEEs might affect the cost structure of a blockchain operation. Nodes must have specific CPUs (which might be more expensive or less energy efficient). This could mean higher cloud hosting bills or capital expenses. For example, if a project mandates Intel Xeon with SGX for all validators, that’s a constraint – validators can’t just be anyone with a Raspberry Pi or old laptop; they need that hardware. This can centralize who can participate (possibly favoring those who can afford high-end servers or who use cloud providers offering SGX VMs). In extremes, it might push the network to be more permissioned or rely on cloud providers, which is a decentralization trade-off and a business trade-off (the network might have to subsidize node providers).

On the other hand, some businesses might find this acceptable because they want known validators or have an allowlist (especially in enterprise consortia). But in public crypto networks, this has caused debates – e.g., when SGX was required, people asked “does this mean only large data centers will run nodes?” It’s something that affects community sentiment and thus the market adoption. For instance, some crypto purists might avoid a chain that requires TEEs, labeling it as “less trustless” or too centralized. So projects have to handle PR and community education, making clear what the trust assumptions are and why it’s still secure. We saw Secret Network addressing FUD by explaining the rigorous monitoring of Intel updates and that validators are slashed if not updating enclaves, etc., basically creating a social layer of trust on top of the hardware trust.

Another consideration is partnerships and support. The business ecosystem around TEEs includes big tech companies (Intel, AMD, ARM, Microsoft, Google, etc.). Blockchain projects using TEEs often partner with these (e.g., iExec partnering with Intel, Secret Network working with Intel on attestation improvements, Oasis with Microsoft on confidential AI, etc.). These partnerships can provide funding, technical assistance, and credibility. It’s a strategic point: aligning with the confidential computing industry can open doors (for funding or enterprise pilots), but it also means a crypto project aligns with big corporations, which has ideological implications in the community.

Regulatory Uncertainties

As blockchain applications using TEEs grow, there may be new regulatory questions. For example:

  • Data Jurisdiction: If data is processed inside a TEE in a certain country, is it considered “processed in that country” or nowhere (since it’s encrypted)? Some privacy laws require that data of citizens not leave certain regions. TEEs could blur the lines – you might have an enclave in a cloud region, but only encrypted data goes in/out. Regulators may need to clarify how they view such processing.
  • Export Controls: Advanced encryption technology can be subject to export restrictions. TEEs involve encryption of memory – historically this hasn’t been an issue (as CPUs with these features are sold globally), but if that ever changed, it could affect supply. Also, some countries might ban or discourage use of foreign TEEs due to national security (e.g., China has its own equivalent to SGX, as they don’t trust Intel’s, and might not allow SGX for sensitive uses).
  • Legal Compulsion: A scenario: could a government subpoena a node operator to extract data from an enclave? Normally they can’t because even the operator can’t see inside. But what if they subpoena Intel for a specific attestation key? Intel’s design is such that even they can’t decrypt enclave memory (they issue keys to the CPU which does the work). But if a backdoor existed or a special firmware could be signed by Intel to dump memory, that’s a hypothetical that concerns people. Legally, a company like Intel might refuse if asked to undermine their security (they likely would, to not destroy trust in their product). But the mere possibility might appear in regulatory discussions about lawful access. Businesses using TEEs should stay abreast of any such developments, though currently, no public mechanism exists for Intel/AMD to extract enclave data – that’s kind of the point of TEEs.

Market Differentiation and New Services

On the positive front for business, TEEs enable new products and services that can be monetized. For example:

  • Confidential data marketplaces: As iExec, Ocean Protocol, and others have noted, companies hold valuable data they could monetize if they had guarantees it won’t leak. TEEs enable “data renting,” where the data never leaves the enclave – only the insights do. This could unlock new revenue streams and business models. We already see Web3 startups offering confidential compute services to enterprises, essentially selling the idea of “get insights from blockchain or cross-company data without exposing anything.”
  • Enterprise DeFi: Financial institutions often cite lack of privacy as a reason not to engage with DeFi or public blockchain. If TEEs can guarantee privacy for their positions or trades, they might participate, bringing more liquidity and business to the ecosystem. Projects that cater to this (like Secret’s secret loans, or Oasis’s private AMM with compliance controls) are positioning to attract institutional users. If successful, that can be a significant market (imagine institutional AMM pools where identities and amounts are shielded but an enclave ensures compliance checks like AML are done internally – that’s a product that could bring big money into DeFi under regulatory comfort).
  • Insurance and Risk Management: With TEEs reducing certain risks (like oracle manipulation), we might see lower insurance premiums or new insurance products for smart contract platforms. Conversely, TEEs introduce new risks (like technical failure of enclaves) which might themselves be insurable events. There’s a budding area of crypto insurance; how they treat TEE-reliant systems will be interesting. A platform might market that it uses TEEs to lower risk of data breach, thus making it easier/cheaper to insure, giving it a competitive edge.

In conclusion, the business and regulatory landscape of TEE-enabled Web3 is about balancing trust and innovation. TEEs offer a route to comply with laws and unlock enterprise use cases (a big plus for mainstream adoption), but they also bring a reliance on hardware providers and complexities that must be transparently managed. Stakeholders need to engage with both tech giants (for support) and regulators (for clarity and assurance) to fully realize the potential of TEEs in blockchain. If done well, TEEs could be a cornerstone that allows blockchain to deeply integrate with industries handling sensitive data, thereby expanding the reach of Web3 into areas previously off-limits due to privacy concerns.

Conclusion

Trusted Execution Environments have emerged as a powerful component in the Web3 toolbox, enabling a new class of decentralized applications that require confidentiality and secure off-chain computation. We’ve seen that TEEs, like Intel SGX, ARM TrustZone, and AMD SEV, provide a hardware-isolated “safe box” for computation, and this property has been harnessed for privacy-preserving smart contracts, verifiable oracles, scalable off-chain processing, and more. Projects across ecosystems – from Secret Network’s private contracts on Cosmos, to Oasis’s confidential ParaTimes, to Sanders’s TEE cloud on Polkadot, and iExec’s off-chain marketplace on Ethereum – demonstrate the diverse ways TEEs are being integrated into blockchain platforms.

Technically, TEEs offer compelling benefits of speed and strong data confidentiality, but they come with their own challenges: a need to trust hardware vendors, potential side-channel vulnerabilities, and hurdles in integration and composability. We compared TEEs with cryptographic alternatives (ZKPs, FHE, MPC) and found that each has its niche: TEEs shine in performance and ease-of-use, whereas ZK and FHE provide maximal trustlessness at high cost, and MPC spreads trust among participants. In fact, many cutting-edge solutions are hybrid, using TEEs alongside cryptographic methods to get the best of both worlds.

Adoption of TEE-based solutions is steadily growing. Ethereum dApps leverage TEEs for oracle security and private computations, Cosmos and Polkadot have native support via specialized chains, and enterprise blockchain efforts are embracing TEEs for compliance. Business-wise, TEEs can be a bridge between decentralized tech and regulation – allowing sensitive data to be handled on-chain under the safeguards of hardware security, which opens the door for institutional usage and new services. At the same time, using TEEs means engaging with new trust paradigms and ensuring that the decentralization ethos of blockchain isn’t undermined by opaque silicon.

In summary, Trusted Execution Environments are playing a crucial role in the evolution of Web3: they address some of the most pressing concerns of privacy and scalability, and while they are not a panacea (and not without controversy), they significantly expand what decentralized applications can do. As the technology matures – with improvements in hardware security and standards for attestation – and as more projects demonstrate their value, we can expect TEEs (along with complementary cryptographic tech) to become a standard component of blockchain architectures aimed at unlocking Web3’s full potential in a secure and trustable manner. The future likely holds layered solutions where hardware and cryptography work hand-in-hand to deliver systems that are both performant and provably secure, meeting the needs of users, developers, and regulators alike.

Sources: The information in this report was gathered from a variety of up-to-date sources, including official project documentation and blogs, industry analyses, and academic research, as cited throughout the text. Notable references include the Metaschool 2025 guide on TEEs in Web3, comparisons by Sanders Network, technical insights from ChainCatcher and others on FHE/TEE/ZKP/MPC, and statements on regulatory compliance from Binance Research, among many others. These sources provide further detail and are recommended for readers who wish to explore specific aspects in greater depth.

Meta’s Stablecoin Revival in 2025: Plans, Strategy, and Impact

· 26 min read

Meta’s 2025 Stablecoin Initiative – Announcements and Projects

In May 2025, reports surfaced that Meta (formerly Facebook) is re-entering the stablecoin market with new initiatives focused on digital currencies. While Meta has not formally announced a new coin, a Fortune report revealed the company is in discussions with crypto firms about using stablecoins for payments. These discussions are still preliminary (Meta is in “learn mode”), but they mark Meta’s first significant crypto move since the 2019–2022 Libra/Diem project. Notably, Meta aims to leverage stablecoins to handle payouts for content creators and cross-border transfers on its platforms.

Official stance: Meta has not launched any new cryptocurrency of its own as of May 2025. Andy Stone, Meta’s Communications Director, responded to the rumors by clarifying that “Diem is ‘dead.’ There is no Meta stablecoin.” This indicates that instead of resurrecting an in-house coin like Diem, Meta’s approach is likely to integrate existing stablecoins (possibly issued by partner firms) into its ecosystem. In fact, sources suggest Meta may use multiple stablecoins rather than a single proprietary coin. In short, the project in 2025 is not a relaunch of Libra/Diem, but a new effort to support stablecoins within Meta’s products.

Strategic Goals and Motivations for Meta

Meta’s renewed crypto foray is driven by clear strategic goals. Chief among these is reducing payment friction and cost for global user transactions. By using stablecoins (digital tokens pegged 1:1 to fiat currency), Meta can simplify cross-border payments and creator monetization across its 3+ billion users. Specific motivations include:

  • Lowering Payment Costs: Meta makes countless small payouts to contributors and creators worldwide. Stablecoin payouts would let Meta pay everyone in a single USD-pegged currency, avoiding hefty fees from bank wires or currency conversions. For example, a creator in India or Nigeria could receive a USD stablecoin rather than dealing with costly international bank transfers. This could save Meta money (fewer processing fees) and speed up payments.

  • Micropayments and New Revenue Streams: Stablecoins enable fast, low-cost micro-transactions. Meta could facilitate tipping, in-app purchases, or revenue sharing in tiny increments (cents or dollars) without exorbitant fees. For instance, sending a few dollars in stablecoin costs only fractions of a cent on certain networks. This capability is crucial for business models like tipping content creators, cross-border e-commerce on Facebook Marketplace, or buying digital goods in the metaverse.

  • Global User Engagement: A stablecoin integrated into Facebook, Instagram, WhatsApp, etc., would function as a universal digital currency within Meta’s ecosystem. This can keep users and their money circulating inside Meta’s apps (similar to how WeChat uses WeChat Pay). Meta could become a major fintech platform by handling remittances, shopping, and creator payments internally. Such a move aligns with CEO Mark Zuckerberg’s longstanding interest in expanding Meta’s role in financial services and the metaverse economy (where digital currencies are needed for transactions).

  • Staying Competitive: The broader tech and finance industry is warming up to stablecoins as essential infrastructure. Rivals and financial partners are embracing stablecoins, from PayPal’s PYUSD launch in 2023 to Mastercard, Visa, and Stripe’s stablecoin projects. Meta doesn’t want to be left behind in what some see as the future of payments. Re-entering crypto now allows Meta to capitalize on an evolving market (the stablecoin market may grow to $2 trillion by 2028, according to Standard Chartered) and to diversify its business beyond advertising.

In summary, Meta’s stablecoin push is about cutting costs, unlocking new features (fast global payments), and positioning Meta as a key player in the digital economy. These motivations echo the original Libra vision of financial inclusion, but with a more focused and pragmatic approach in 2025.

Technology and Blockchain Infrastructure Plans

Unlike the Libra project—which involved creating a brand-new blockchain—Meta’s 2025 strategy leans toward using existing blockchain infrastructure and stablecoins. According to reports, Meta is considering Ethereum’s blockchain as one backbone for these stablecoin transactions. Ethereum is attractive due to its maturity and widespread adoption in the crypto ecosystem. In fact, Meta “plans to start using stablecoins on the Ethereum blockchain” to reach its massive user base. This suggests Meta might integrate popular Ethereum-based stablecoins (like USDC or USDT) into its apps.

However, Meta appears open to a multi-chain or multi-coin approach. The company will “likely use more than one type of stablecoin” for different purposes. This could involve:

  • Partnering with Major Stablecoin Issuers: Meta has reportedly been in talks with firms like Circle (issuer of USDC) and others. It may support USD Coin (USDC) and Tether (USDT), the two largest USD stablecoins, to ensure liquidity and familiarity for users. Integrating existing regulated stablecoins would spare Meta the trouble of issuing its own token while providing immediate scale.

  • Utilizing Efficient Networks: Meta also seems interested in high-speed, low-cost blockchain networks. The hiring of Ginger Baker (more on her below) hints at this strategy. Baker sits on the board of the Stellar Development Foundation, and analysts note that Stellar’s network is designed for compliance and cheap transactions. Stellar natively supports regulated stablecoins and features like KYC and on-chain reporting. It’s speculated that Meta Pay’s wallet could leverage Stellar for near-instant micropayments (sending USDC via Stellar costs a fraction of a cent). In essence, Meta might route transactions through whichever blockchain offers the best mix of compliance, speed, and low fees (Ethereum for broad compatibility, Stellar or others for efficiency).

  • Meta Pay Wallet Transformation: On the front end, Meta is likely upgrading its existing Meta Pay infrastructure into a “decentralized-ready” digital wallet. Meta Pay (formerly Facebook Pay) currently handles traditional payments on Meta’s platforms. Under Baker’s leadership, it is envisioned to support cryptocurrencies and stablecoins seamlessly. This means users could hold stablecoin balances, send them to peers, or receive payouts in-app, with the complexity of blockchain managed behind the scenes.

Importantly, Meta is not building a new coin or chain from scratch this time. By using proven public blockchains and partner-issued coins, Meta can roll out stablecoin functionality faster and with (hopefully) less regulatory resistance. The technology plan focuses on integration rather than invention – weaving stablecoins into Meta’s products in a way that feels natural to users (e.g. a WhatsApp user might send a USDC payment as easily as sending a photo).

Reviving Diem/Novi or Starting Anew?

Meta’s current initiative clearly differs from its past Libra/Diem effort. Libra (announced 2019) was an ambitious plan for a Facebook-led global currency, backed by a basket of assets and governed by an association of companies. It was later rebranded to Diem (a USD-pegged stablecoin) but ultimately shut down in early 2022 amid regulatory backlash. Novi, the accompanying crypto wallet, was piloted briefly but also discontinued.

In 2025, Meta is not simply reviving Diem/Novi. Key differences in the new approach include:

  • No In-House “Meta Coin” (For Now): During Libra, Facebook was essentially creating its own currency. Now, Meta’s spokespeople emphasize that “there is no Meta stablecoin” in development. Diem is dead and won’t be resurrected. Instead, the focus is on using existing stablecoins (issued by third parties) as payment tools. This shift from issuer to integrator is a direct lesson from Libra’s failure – Meta is avoiding the appearance of coining its own money.

  • Compliance-First Strategy: Libra’s broad vision spooked regulators who feared a private currency for billions could undermine national currencies. Today Meta is operating more quietly and cooperatively. The company is hiring compliance and fintech experts (for example, Ginger Baker) and choosing technologies known for regulatory compliance (e.g. Stellar). Any new stablecoin features will likely require identity verification and adhere to financial regulations in each jurisdiction, in contrast to Libra’s initially decentralized approach.

  • Scaling Back Ambitions (at Least Initially): Libra aimed to be a universal currency and financial system. Meta’s 2025 effort has a narrower initial scope: payouts and peer-to-peer payments within Meta’s platforms. By targeting creator payments (like “up to $100” micro-payouts on Instagram), Meta is finding a use-case that is less likely to alarm regulators than a full-scale global currency. Over time this could expand, but the rollout is expected to be gradual and use-case driven, rather than a Big Bang launch of a new coin.

  • No Public Association or New Blockchain: Libra was managed by an independent association and required partners running nodes on a brand new blockchain. The new approach doesn’t involve creating a consortium or a custom network. Meta is working directly with established crypto companies and leveraging their infrastructure. This behind-the-scenes collaboration means less publicity and potentially fewer regulatory targets than Libra’s highly public coalition.

In summary, Meta is starting anew, using the lessons from Libra/Diem to chart a more pragmatic course. The company has essentially pivoted from “becoming a crypto issuer” to “being a crypto-friendly platform”. As one crypto analyst observed, whether Meta “builds and issues their own [stablecoin] or partners with someone like Circle is yet to be determined” – but all signs point to partnerships rather than a solo venture like Diem.

Key Personnel, Partnerships, and Collaborations

Meta has made strategic hires and likely partnerships to drive this stablecoin initiative. The standout personnel move is the addition of Ginger Baker as Meta’s Vice President of Product for payments and crypto. Baker joined Meta in January 2025 specifically to “help shepherd [Meta’s] stablecoin explorations”. Her background is a strong indicator of Meta’s strategy:

  • Ginger Baker – Fintech Veteran: Baker is a seasoned payment executive. She previously worked at Plaid (as Chief Network Officer), and has experience at Ripple, Square, and Visa – all major players in payments/crypto. Uniquely, she also served on the board of the Stellar Development Foundation, and was an executive there. By hiring Baker, Meta gains expertise in both traditional fintech and blockchain networks (Ripple and Stellar are focused on cross-border and compliance). Baker is now “spearheading Meta’s renewed stablecoin initiatives”, including the transformation of Meta Pay into a crypto-ready wallet. Her leadership suggests Meta will build a product that bridges conventional payments with crypto (likely ensuring things like bank integrations, smooth UX, KYC, etc., are in place alongside the blockchain elements).

  • Other Team Members: In addition to Baker, Meta is “adding crypto-experienced individuals” to its teams to support the stablecoin plans. Some former members of the Libra/Diem team may be involved behind the scenes, though many departed (for example, former Novi head David Marcus left to start his own crypto firm, and others went on to projects like Aptos). The current effort appears largely under Meta’s existing Meta Financial Technologies unit (which runs Meta Pay). No major acquisitions of crypto companies have been announced in 2025 so far – Meta seems to be relying on internal hires and partnerships rather than buying a stablecoin company outright.

  • Potential Partnerships: While no official partners are named yet, multiple crypto firms have been in talks with Meta. At least two crypto company executives confirmed they’ve had early discussions with Meta about stablecoin payouts. It’s reasonable to speculate that Circle (issuer of USDC) is among them – the Fortune report made mention of Circle’s activities in the same context. Meta could partner with a regulated stablecoin issuer (like Circle or Paxos) to handle the currency issuance and custody. For instance, Meta might integrate USDC by working with Circle, similar to how PayPal partnered with Paxos to launch its own stablecoin. Other partnerships might involve crypto infrastructure providers (for security, custody, or blockchain integration) or fintech companies in different regions for compliance.

  • External Advisors/Influencers: It’s worth noting that Meta’s move comes as others in tech/finance ramp up stablecoin efforts. Companies like Stripe and Visa recently made moves (Stripe bought a crypto startup, Visa partnered with a stablecoin platform). Meta may not formally partner with these companies, but these industry connections (e.g., Baker’s past at Visa, or existing commerce relationships Meta has with Stripe for payments) could smooth the path for stablecoin adoption. Additionally, First Digital (issuer of FDUSD) and Tether might see indirect collaboration if Meta decides to support their coins for certain markets.

In essence, Meta’s stablecoin initiative is being led by experienced fintech insiders and likely involves close collaboration with established crypto players. We see a deliberate effort to bring in people who understand both Silicon Valley and crypto. This bodes well for Meta navigating the technical and regulatory challenges with knowledgeable guidance.

Regulatory Strategy and Positioning

Regulation is the elephant in the room for Meta’s crypto ambitions. After the bruising experience with Libra (where global regulators and lawmakers almost unanimously opposed Facebook’s coin), Meta is taking a very cautious, compliance-forward stance in 2025. Key elements of Meta’s regulatory positioning include:

  • Working Within Regulatory Frameworks: Meta appears intent on working with authorities rather than attempting an end-run around them. By using existing regulated stablecoins (like USDC, which complies with U.S. state regulations and audits) and by building in KYC/AML features, Meta is aligning with current financial rules. For example, Stellar’s compliance features (KYC, sanctions screening) are explicitly noted as aligning with Meta’s need to stay in regulators’ good graces. This suggests Meta will ensure that users who transact in stablecoins through its apps are verified and that transactions can be monitored for illicit activity, similar to any fintech app.

  • Political Timing: The regulatory climate in the U.S. has shifted since the Libra days. As of 2025, the administration of President Donald Trump is seen as more crypto-friendly than the prior Biden administration. This change potentially gives Meta an opening. In fact, Meta’s renewed push comes just as Washington is actively debating stablecoin legislation. A pair of stablecoin bills are working through Congress, and the Senate’s GENIUS Act is aiming to set guardrails for stablecoins. Meta could be hoping that a clearer legal framework will legitimize corporate involvement in digital currency. However, this is not without opposition – Senator Elizabeth Warren and other lawmakers have singled out Meta, urging that big tech firms be barred from issuing stablecoins in any new law. Meta will have to navigate such political hurdles, possibly by emphasizing that it is not issuing a new coin but merely using existing ones (thus technically not “Facebook Coin” that worried Congress).

  • Global and Local Compliance: Beyond the U.S., Meta will consider regulations in each market. For instance, if it introduces stablecoin payments in WhatsApp for remittances, it may pilot this in countries with receptive regulators (similar to how WhatsApp Pay was rolled out in markets like Brazil or India with local approval). Meta may engage central banks and financial regulators in target regions to ensure its stablecoin integration meets requirements (such as being fully fiat-backed, redeemable, and not harming local currency stability). The First Digital USD (FDUSD), one of the stablecoins Meta could support, is Hong Kong-based and operates under that jurisdiction’s trust laws, which hints Meta might leverage regions with crypto-friendly rules (e.g. Hong Kong, Singapore) for initial phases.

  • Avoiding the “Libra Mistake”: With Libra, regulators were concerned Meta would control a global currency outside of government control. Meta’s strategy now is to position itself as a participant, not a controller. By saying “there is no Meta stablecoin”, the company distances itself from the idea of printing money. Instead, Meta can argue it’s improving payment infrastructure for users, analogous to offering support for PayPal or credit cards. This narrative — “we’re just using safe, fully reserved currencies like USDC to help users transact” — is likely how Meta will pitch the project to regulators to allay fears of destabilizing the monetary system.

  • Compliance and Licensing: If Meta does decide to offer a branded stablecoin or custody users’ crypto, it may seek the proper licenses (e.g., becoming a licensed money transmitter, obtaining state or federal charter for stablecoin issuance via a subsidiary or partner bank). There’s precedent: PayPal obtained a New York trust charter (through Paxos) for its stablecoin. Meta could similarly partner or create a regulated entity for any custodial aspects. For now, by partnering with established stablecoin issuers and banks, Meta can rely on their regulatory approvals.

Overall, Meta’s approach can be seen as “regulatory accommodation” – it is trying to design the project to fit into legal boxes that regulators have built or are building. This includes proactive outreach, scaling slowly, and employing experts who know the rules. That said, regulatory uncertainty remains a risk. The company will be closely watching the outcome of stablecoin bills and likely engaging in policy discussions to ensure it can move forward without legal roadblocks.

Market Impact and Stablecoin Landscape Analysis

Meta’s entrance into stablecoins could be a game-changer for the stablecoin market, which as of early 2025 is already booming. The total market capitalization of stablecoins hit an all-time high of around $238–245 billion in April 2025, roughly double the size from a year before. This market is currently dominated by a few key players:

  • Tether (USDT): The largest stablecoin, with nearly 70% of market share and about $148 billion in circulation as of April. USDT is issued by Tether Ltd. and is widely used in crypto trading and cross-exchange liquidity. It’s known for less transparency in reserves but has maintained its peg.

  • USD Coin (USDC): The second-largest, issued by Circle (in partnership with Coinbase) with around $62 billion in supply (≈26% market share). USDC is U.S.-regulated, fully reserved in cash and treasuries, and favored by institutions for its transparency. It’s used both in trading and an increasing number of mainstream fintech apps.

  • First Digital USD (FDUSD): A newer entrant (launched mid-2023) issued by First Digital Trust out of Hong Kong. FDUSD grew as an alternative on platforms like Binance after regulatory issues hit Binance’s own BUSD. By April 2025, FDUSD’s market cap was about $1.25 billion. It had some volatility (losing its $1 peg briefly in April), but is touted for being based in a friendlier regulatory environment in Asia.

The table below compares Meta’s envisioned stablecoin integration with USDT, USDC, and FDUSD:

| Feature | Meta’s Stablecoin Initiative (2025) | Tether (USDT) | USD Coin (USDC) | First Digital USD (FDUSD) |
| --- | --- | --- | --- | --- |
| Issuer / Manager | No proprietary coin: Meta plans to partner with existing issuers; any coin would be issued by a third party (e.g. Circle). Meta will integrate stablecoins into its platforms, not issue its own (per official statements). | Tether Holdings Ltd. (affiliated with iFinex). Privately held; issuer of USDT. | Circle Internet Financial (with Coinbase, via the Centre Consortium). USDC is governed by Circle under U.S. regulations. | First Digital Trust, a Hong Kong-registered trust company, issues FDUSD under HK’s Trust Ordinance. |
| Launch & Status | New initiative, planning stage in 2025. No coin launched yet (Meta exploring integration to start in 2025). Internal testing or pilots expected; not publicly available as of May 2025. | Launched in 2014. Established, with ~$148B in circulation. Widely used across exchanges and chains (Ethereum, Tron, etc.). | Launched in 2018. Established, with ~$62B in circulation. Used in trading, DeFi, and payments; available on multiple chains (Ethereum, Stellar, others). | Launched in mid-2023. Emerging player with ~$1–2B market cap (recently ~$1.25B). Promoted on Asian exchanges (Binance, etc.) as a regulated USD stablecoin alternative. |
| Technology / Blockchain | Likely multi-blockchain support. Emphasis on Ethereum for compatibility; possibly leveraging Stellar or other networks for low-fee transactions. Meta’s wallet will abstract the blockchain layer for users. | Multi-chain: originally on Bitcoin’s Omni layer, now primarily on Tron, Ethereum, etc. USDT exists on 10+ networks. Fast on Tron (low fees); widespread integration across crypto platforms. | Multi-chain: primarily on Ethereum, with versions on Stellar, Algorand, Solana, etc. Focused on Ethereum but expanding to reduce fees (also exploring Layer-2s). | Multi-chain: issued on Ethereum and BNB Chain (Binance Smart Chain) from launch. Aims for cross-chain usage. Relies on Ethereum security and the Binance ecosystem for liquidity. |
| Regulatory Oversight | Meta will adhere to regulations via partners. Stablecoins used will be fully reserved (1:1 USD) with issuers under supervision (e.g. Circle, regulated under U.S. state laws). Meta will implement KYC/AML in its apps. The regulatory strategy is to cooperate and comply (especially after Diem’s failure). | Historically opaque. Limited audits; faced regulatory bans in NY. Increasing transparency lately, but not regulated like a bank. Has settled with regulators over past misrepresentations. Operates in a grey area but is systemically important due to its size. | High compliance. Regulated as stored value under U.S. laws (Circle holds a NY BitLicense and trust charters). Monthly reserve attestations published. Seen as safer by U.S. authorities; could seek a federal stablecoin charter if laws pass. | Moderate compliance. Regulated in Hong Kong as a trust-held asset. Benefits from Hong Kong’s pro-crypto stance. Less scrutiny from U.S. regulators; positioned to serve markets where USDT/USDC face hurdles. |
| Use Cases & Integration | Integration across Meta’s platforms: used for payouts to creators, P2P transfers, and in-app purchases across Facebook, Instagram, WhatsApp, etc. Aimed at mainstream users (social/media context) rather than crypto traders. Could enable global remittances (e.g. sending money via WhatsApp) and metaverse commerce. | Primarily used in crypto trading (as a dollar substitute on exchanges). Also common in DeFi lending and as a dollar hedge in countries with currency instability. Less used in retail payments due to concerns around the issuer. | Used in both crypto markets and some fintech apps. Popular in DeFi and trading pairs, but also integrated by payment processors and fintechs (for commerce and remittances). Coinbase and others allow USDC for transfers. Growing role in business settlements. | Currently mostly used on crypto exchanges (Binance) as a USD liquidity option after BUSD’s decline. Some potential for Asia-based payments or DeFi, but use cases are nascent. Positioned as a compliant alternative for Asian users and institutions. |

Projected Impact: If Meta successfully rolls out stablecoin payments, it could significantly expand the reach and usage of stablecoins. Meta’s apps might onboard hundreds of millions of new stablecoin users who have never used crypto before. This mainstream adoption could increase the overall stablecoin market cap beyond current leaders. For example, should Meta partner with Circle to use USDC at scale, the demand for USDC could surge – potentially challenging USDT’s dominance over time. It’s plausible that Meta could help USDC (or whichever coin it adopts) grow closer to Tether’s size, by providing use cases outside of trading (social commerce, remittances, etc.).

On the other hand, Meta’s involvement might spur competition and innovation among stablecoins. Tether and other incumbents could adjust by improving transparency or forming their own big-tech alliances. New stablecoins might emerge tailored for social networks. Also, Meta supporting multiple stablecoins suggests no single coin will “monopolize” Meta’s ecosystem – users might seamlessly transact with different dollar tokens depending on region or preference. This could lead to a more diversified stablecoin market where dominance is spread.

It’s also important to note the infrastructure boost Meta could provide. A stablecoin integrated with Meta will likely need robust capacity for millions of daily transactions. This could drive improvements on the underlying blockchains (e.g. Ethereum Layer-2 scaling, or increased Stellar network usage). Already, observers suggest Meta’s move could “increase activity on [Ethereum] and demand for ETH” if a lot of transactions flow there. Similarly, if Stellar is used, its native token XLM could see higher demand as gas for transactions.

Finally, Meta’s entrance is somewhat double-edged for the crypto industry: it legitimizes stablecoins as a payment mechanism (potentially positive for adoption and market growth), but it also raises regulatory stakes. Governments may treat stablecoins more as a matter of national importance if billions of social media users start transacting in them. This could accelerate regulatory clarity – or crackdowns – depending on how Meta’s rollout goes. In any case, the stablecoin landscape by the late 2020s will likely be reshaped by Meta’s participation, alongside other big players like PayPal, Visa, and traditional banks venturing into this space.

Integration into Meta’s Platforms (Facebook, Instagram, WhatsApp, etc.)

A critical aspect of Meta’s strategy is seamless integration of stablecoin payments into its family of apps. The goal is to embed digital currency functionality in a user-friendly way across Facebook, Instagram, WhatsApp, Messenger, and even new platforms like Threads. Here’s how integration is expected to play out on each service:

  • Instagram: Instagram is poised to be a testing ground for stablecoin payouts. Creators on Instagram could opt to receive their earnings (for Reels bonuses, affiliate sales, etc.) in a stablecoin rather than local currency. Reports specifically mention Meta may start by paying out up to ~$100 to creators via stablecoins on Instagram. This suggests a focus on small cross-border payments – ideal for influencers in countries where receiving U.S. dollars directly is preferable. Additionally, Instagram could enable tipping of creators in-app using stablecoins, or allow users to purchase digital collectibles and services with a stablecoin balance. Since Instagram already experimented with NFT display features (in 2022) and has a creator marketplace, adding a stablecoin wallet could enhance its creator ecosystem.

  • Facebook (Meta): On Facebook proper, stablecoin integration might manifest in Facebook Pay/Meta Pay features. Users on Facebook could send money to each other in chats using stablecoins, or donate to fundraisers with crypto. Facebook Marketplace (where people buy/sell goods) could support stablecoin transactions, enabling easier cross-border commerce by eliminating currency exchange issues. Another area is gaming and apps on Facebook – developers could be paid out in stablecoins, or in-game purchases could utilize a stablecoin for a universal experience. Given Facebook’s broad user base, integrating a stablecoin wallet in the profile or Messenger could quickly mainstream the concept of sending “digital dollars” to friends and family. Meta’s own posts hint at content monetization: for instance, paying out bonuses to Facebook content creators or Stars (Facebook’s tipping tokens) being potentially backed by stablecoins in the future.

  • WhatsApp: This is perhaps the most transformative integration. WhatsApp has over 2 billion users and is heavily used for messaging in regions where remittances are crucial (India, Latin America, etc.). Meta’s stablecoin could turn WhatsApp into a global remittance platform. Users might send a stablecoin to a contact as easily as sending a text, with WhatsApp handling the currency swap on each end if needed. In fact, WhatsApp briefly piloted the Novi wallet in 2021 for sending a stablecoin (USDP) in the US and Guatemala – so the concept is proven on a small scale. Now Meta could incorporate stablecoin transfers natively into WhatsApp’s UI. For example, an Indian worker in the US could send USDC via WhatsApp to family in India, who could then cash it out or spend it if integrations with local payment providers are in place. This bypasses expensive remittance fees. Aside from P2P, small businesses on WhatsApp (common in emerging markets) could accept stablecoin payments for goods, using it like a low-fee merchant payment system. The Altcoin Buzz analysis even speculates that WhatsApp will be one of the next integration points after creator payouts.

  • Messenger: Similar to WhatsApp, Facebook Messenger could allow sending money in chats using stablecoins. Messenger already has peer-to-peer fiat payments in the U.S. If extended to stablecoins, it could connect users internationally. One could envision Messenger chatbots or customer service using stablecoin transactions (for example, paying a bill or ordering products via a Messenger interaction and settling in stablecoin).

  • Threads and Others: Threads (Meta’s Twitter-like platform launched in 2023) and the broader Meta VR/Metaverse (Reality Labs) might also leverage stablecoins. In Horizon Worlds or other metaverse experiences, a stablecoin could serve as the in-world currency for buying virtual goods, tickets to events, etc., providing a real-money equivalent that travels across experiences. While Meta’s metaverse unit is currently operating at a loss, integrating a currency accepted across games and worlds could create a unified economy that might spur usage (much like Roblox has Robux, but in Meta’s case it would be a USD stablecoin under the hood). This would align with Zuckerberg’s vision of the metaverse economy, without creating a new token just for VR.

Integration Strategy: Meta is likely to roll this out carefully. A plausible sequence is:

  1. Pilot creator payouts on Instagram (limited amount, select regions) – this tests the system with real value going out, but in a controlled way.
  2. Expand to P2P transfers in messaging (WhatsApp/Messenger) once confidence is gained – starting with remittance corridors or within certain countries.
  3. Merchant payments and services – enabling businesses on its platforms to transact in stablecoin (this could involve partnerships with payment processors to allow easy conversion to local fiat).
  4. Full ecosystem integration – eventually, a user’s Meta Pay wallet could show a stablecoin balance that can be used anywhere across Facebook ads, Instagram shopping, WhatsApp pay, etc.

It’s worth noting that user experience will be key. Meta will likely abstract away terms like “USDC” or “Ethereum” from the average user. The wallet might just display a balance in “USD” (powered by stablecoins in the backend) to make it simple. Only more advanced users might interact with on-chain functions (like withdrawing to an external crypto wallet), if allowed. Meta’s advantage is its huge user base; if even a fraction adopt the stablecoin feature, it could outnumber the current crypto user population.

In conclusion, Meta’s plan to integrate stablecoins into its platforms could blur the line between traditional digital payments and cryptocurrency. A Facebook or WhatsApp user may soon be using a stablecoin without even realizing it’s a crypto asset – they’ll just see a faster, cheaper way to send money and transact globally. This deep integration could set Meta’s apps apart in markets where financial infrastructure is costly or slow, and it positions Meta as a formidable competitor to both fintech companies and crypto exchanges in the realm of digital payments.

Sources:

  • Meta’s stablecoin exploratory talks and hiring of a crypto VP
  • Meta’s intent to use stablecoins for cross-border creator payouts (Fortune report)
  • Comment by Meta’s communications director (“Diem is dead, no Meta stablecoin”)
  • Analysis of Meta’s strategic motivations (cost reduction, single currency for payouts)
  • Tech infrastructure choices – Ethereum integration and Stellar’s compliance features
  • Ginger Baker’s role and background (former Plaid, Ripple, Stellar board)
  • Fortune/LinkedIn insights on Meta’s crypto team and partnerships in discussion
  • Regulatory context: Libra’s collapse in 2022 and 2025’s friendlier environment under Trump vs. legislative pushback (Sen. Warren on banning Big Tech stablecoins)
  • Stablecoin market data (Q2 2025): ~$238B market cap, USDT ~$148B vs USDC ~$62B, growth trends
  • Comparison info for USDT, USDC, FDUSD (market share, regulatory stance, issuers)
  • Integration details across Meta’s products (content creator payouts, WhatsApp payments).

Sui-Backed MPC Network Ika – Comprehensive Technical and Investment Evaluation

· 39 min read

Introduction

Ika is a parallel Multi-Party Computation (MPC) network strategically backed by the Sui Foundation. Formerly known as dWallet Network, Ika is designed to enable zero-trust, cross-chain interoperability at high speed and scale. It allows smart contracts (especially on the Sui blockchain) to securely control and coordinate assets on other blockchains without traditional bridges. This report provides a deep dive into Ika’s technical architecture and cryptographic design from a founder’s perspective, as well as a business and investment analysis covering team, funding, tokenomics, adoption, and competition. A summary comparison table of Ika versus other MPC-based networks (Lit Protocol, Threshold Network, and Zama) is also included for context.

Ika Network

Technical Architecture and Features (Founder’s Perspective)

Architecture and Cryptographic Primitives

Ika’s core innovation is a novel “2PC-MPC” cryptographic scheme – a two-party computation within a multi-party computation framework. In simple terms, the signing process always involves two parties: (1) the user and (2) the Ika network. The user retains a private key share, and the network – composed of many independent nodes – holds the other share. A signature can only be produced with participation from both, ensuring the network alone can never forge a signature without the user. The network side isn’t a single entity but a distributed MPC among N validators that collectively act as the second party. A threshold of at least two-thirds of these nodes must agree (akin to Byzantine Fault Tolerance consensus) to generate the network’s share of the signature. This nested MPC structure (user + network) makes Ika non-collusive: even if all Ika nodes collude, they cannot steal user assets because the user’s participation (their key share) is always cryptographically required. In other words, Ika enables “zero-trust” security, upholding decentralization and user ownership principles of Web3 – no single entity or small group can unilaterally compromise assets.
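
To make the “non-collusive” claim concrete, here is a minimal formalization of the access structure in our own notation (the exact threshold value is an assumption based on the two-thirds quorum described above, not a figure from the whitepaper):

```latex
% Authorized signing sets: the user u is always required, together with a
% quorum of at least t of the N network validators V.
\[
\Gamma = \bigl\{\, S \subseteq \{u\} \cup V \;:\; u \in S \ \wedge\ |S \cap V| \ge t \,\bigr\},
\qquad t = \left\lfloor \tfrac{2N}{3} \right\rfloor + 1 .
\]
% Since u appears in every authorized set, the validators alone (even all N
% of them colluding) can never produce a valid signature.
```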

Figure: Schematic of Ika’s 2PC-MPC architecture – the user acts as one party (holding a private key share) and the Ika network of N validators forms the other party via an MPC threshold protocol (t-out-of-N). This guarantees that both the user and a supermajority of decentralized nodes must cooperate to produce a valid signature.

Technically, Ika is implemented as a standalone blockchain network forked from the Sui codebase. It runs its own instance of Sui’s high-performance consensus engine (Mysticeti, a DAG-based BFT protocol) to coordinate the MPC nodes. Notably, Ika’s version of Sui has smart contracts disabled (Ika’s chain exists solely to run the MPC protocol) and includes custom modules for the 2PC-MPC signing algorithm. Mysticeti provides a reliable broadcast channel among the nodes, replacing the complex mesh of peer-to-peer messages that traditional MPC protocols use. By leveraging a DAG-based consensus for communication, Ika avoids the quadratic communication overhead of earlier threshold signing schemes, which required each of n parties to send messages to all others. Instead, Ika’s nodes broadcast messages via the consensus, achieving linear communication complexity O(n), and use batching and aggregation techniques to keep per-node costs almost constant even as N grows large. This represents a significant breakthrough in threshold cryptography: the Ika team replaced point-to-point “unicast” communication with efficient broadcast and aggregation, enabling the protocol to support hundreds or thousands of participants without slowing down.
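
A back-of-the-envelope message count shows why this matters; the batching term below is our own illustration of the amortization idea, not a figure from the protocol specification:

```latex
% All-to-all unicast (earlier threshold schemes), one signing round:
\[
M_{\text{unicast}} = n(n-1) \in O(n^2)
\]
% Broadcast via consensus (Ika): each node posts once per round, and a batch
% of B signatures shares that round, so per-signature cost stays near-constant:
\[
M_{\text{broadcast}} = n \in O(n), \qquad
\text{messages per signature} \approx \frac{n}{B} .
\]
```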

Zero-knowledge integrations: At present, Ika’s security is achieved through threshold cryptography and BFT consensus rather than explicit zero-knowledge proofs. The system does not rely on zk-SNARKs or zk-STARKs in its core signing process. However, Ika uses on-chain state proofs (light client proofs) to verify events from other chains, which is a form of cryptographic verification (e.g. verifying Merkle proofs of block headers or state). The design leaves room for integrating zero-knowledge techniques in the future – for example, to validate cross-chain state or conditions without revealing sensitive data – but as of 2025 no specific zk-SNARK module is part of Ika’s published architecture. The emphasis is instead on the “zero-trust” principle (meaning no trust assumptions) via the 2PC-MPC scheme, rather than zero-knowledge proof systems.

Performance and Scalability

A primary goal of Ika is to overcome the performance bottlenecks of prior MPC networks. Legacy threshold signature protocols (like Lindell’s 2PC ECDSA or GG20) struggled to support more than a handful of participants, often taking many seconds or minutes to produce a single signature. In contrast, Ika’s optimized protocol achieves sub-second latency for signing and can handle a very high throughput of signature requests in parallel. Benchmark claims indicate Ika can scale to around 10,000 signatures per second while maintaining security across a large node cluster. This is possible thanks to the aforementioned linear communication and heavy use of batching: many signatures can be generated concurrently by the network in one round of protocol, dramatically amortizing costs. According to the team, Ika can be “10,000× faster” than existing MPC networks under load. In practical terms, this means real-time, high-frequency transactions (such as trading or cross-chain DeFi operations) can be supported without the usual delays of threshold signing. Latency is on the order of sub-second finality, meaning a signature (and the corresponding cross-chain operation) can be completed almost instantly after a user’s request.

Equally important, Ika does this while scaling out the number of signers to enhance decentralization. Traditional MPC setups often used a fixed committee of maybe 10–20 nodes to avoid performance collapse. Ika’s architecture can expand to hundreds or even thousands of validators participating in the signing process without significant slowdown. This massive decentralization improves security (harder for an attacker to corrupt a majority) and network robustness. The underlying consensus is Byzantine fault tolerant, so the network can tolerate up to one-third of nodes being compromised or offline and still function correctly. In any given signing operation, only a threshold t-of-N of nodes (e.g. 67% of N) need to actively participate; by design, if too many nodes are down, the signature might be delayed, but the system is engineered to handle typical failure scenarios gracefully (similar to a blockchain’s consensus liveness and safety properties). In summary, Ika achieves both high throughput and high validator count, a combination that sets it apart from earlier MPC solutions that had to trade off decentralization for speed.
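
For concreteness, here is the standard BFT arithmetic behind these tolerance figures, as a worked example assuming the usual N ≥ 3f + 1 bound:

```latex
% Maximum Byzantine validators tolerated out of N:
\[
f_{\max} = \left\lfloor \frac{N-1}{3} \right\rfloor
\]
% Quorum needed for a signing operation (a >2/3 supermajority):
\[
t = \left\lfloor \tfrac{2N}{3} \right\rfloor + 1,
\qquad N = 100 \;\Rightarrow\; f_{\max} = 33,\; t = 67 .
\]
```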

Developer Tooling and Integration

The Ika network is built to be developer-friendly, especially for those already building on Sui. Developers do not write smart contracts on Ika itself (since Ika’s chain doesn’t run user-defined contracts), but instead interact with Ika from other chains. For example, a Sui Move contract can invoke Ika’s functionality to sign transactions on external chains. To facilitate this, Ika provides robust tooling and SDKs:

  • TypeScript SDK: Ika offers a TypeScript SDK (Node.js library) that mirrors the style of the Sui SDK. This SDK allows builders to create and manage dWallets (decentralized wallets) and issue signing requests to Ika from their applications. Using the TS SDK, developers can generate keypairs, register user shares, and call Ika’s RPC to coordinate threshold signatures – all with familiar patterns from Sui’s API. The SDK abstracts away the complexity of the MPC protocol, making it as simple as calling a function to request (for example) a Bitcoin transaction signature, given the appropriate context and user approval; a hedged sketch of this flow appears after this list.

  • CLI and Local Network: For more direct interaction, a command-line interface (CLI) called dWallet CLI is available. Developers can run a local Ika node or even a local test network by forking the open-source repository. This is valuable for testing and integration in a development environment. The documentation walks through setting up a local devnet, obtaining DWLT testnet tokens, and creating a first dWallet address.

  • Documentation and Examples: Ika’s docs include step-by-step tutorials for common scenarios, such as “Your First dWallet”. These show how to establish a dWallet that corresponds to an address on another chain (e.g., a Bitcoin address controlled by Ika’s keys), how to encrypt the user’s key share for safekeeping, and how to initiate cross-chain transactions. Example code covers use cases like transferring BTC via a Sui smart contract call, or scheduling future transactions (a feature Ika supports whereby a transaction can be pre-signed under certain conditions).

  • Sui Integration (Light Clients): Out-of-the-box, Ika is tightly integrated with the Sui blockchain. The Ika network runs a Sui light client internally to trustlessly read Sui on-chain data. This means a Sui smart contract can emit an event or call that Ika will recognize (via a state proof) as a trigger to perform an action. For instance, a Sui contract might instruct Ika: “when event X occurs, sign and broadcast a transaction on Ethereum”. Ika nodes will verify the Sui event using the light client proof and then collectively produce the signature for the Ethereum transaction. The signed payload can then be delivered to the target chain (possibly by an off-chain relayer or by the user) to execute the desired action. Currently, Sui is the first fully supported controller chain (given Ika’s origins on Sui), but the architecture is multi-chain by design. Support for other chains’ state proofs and integrations are on the roadmap – for example, the team has mentioned extending Ika to work with rollups in the Polygon Avail ecosystem (providing dWallet capabilities on rollups with Avail as a data layer) and other Layer-1s in the future.

  • Supported Crypto Algorithms: Ika’s network can generate keys/signatures for virtually any blockchain’s signature scheme. Initially it supports ECDSA (the elliptic curve algorithm used by Bitcoin, Ethereum’s ECDSA accounts, BNB Chain, etc.). In the near term, it’s planned to support EdDSA (Ed25519, used by chains like Solana and some Cosmos chains) and Schnorr signatures (e.g. Bitcoin Taproot’s Schnorr keys). This broad support means an Ika dWallet can have an address on Bitcoin, an address on Ethereum, on Solana, and so on – all controlled by the same underlying distributed key. Developers on Sui or other platforms can thus integrate any of these chains into their dApps through one unified framework (Ika), instead of dealing with chain-specific bridges or custodians.
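
The sketch below illustrates the dWallet flow from the TypeScript SDK bullet above. Every identifier here – the package name, class names, method signatures, and RPC URL – is an illustrative assumption rather than the documented API; treat it as pseudocode in TypeScript clothing and consult the official Ika/dWallet docs for the real surface:

```typescript
// Hypothetical dWallet flow: create a dWallet, derive a Bitcoin address,
// and request a threshold signature. All names below are assumed.
import { IkaClient, DWallet } from "@ika/dwallet-sdk"; // assumed package name

async function main(): Promise<void> {
  // Connect to an Ika RPC endpoint (placeholder URL).
  const client = new IkaClient({ url: "https://rpc.testnet.ika.example" });

  // 1. Distributed key generation: the SDK keeps the user's secret share
  //    locally while the network's validators jointly hold the other share.
  const dwallet: DWallet = await client.createDWallet({ curve: "secp256k1" });

  // 2. The same distributed key controls addresses on any supported chain.
  const btcAddress = dwallet.deriveAddress("bitcoin");
  console.log("dWallet BTC address:", btcAddress);

  // 3. Threshold signing: the user's share signs locally, and the network
  //    contributes its share only once a 2/3 validator quorum approves, so
  //    neither the user nor the network can produce a signature alone.
  const unsignedTx = new Uint8Array([/* serialized BTC transaction bytes */]);
  const signature = await dwallet.sign({ message: unsignedTx });

  // 4. The completed signature can be broadcast to Bitcoin by the user or
  //    by an off-chain relayer.
  console.log("threshold signature bytes:", signature);
}

main().catch(console.error);
```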

In summary, Ika offers a developer experience similar to interacting with a blockchain node or wallet, abstracting away the heavy cryptography. Whether via the TypeScript SDK or directly through Move contracts and light clients, it strives to make cross-chain logic “plug-and-play” for builders.

Security, Decentralization, and Fault Tolerance

Security is paramount in Ika’s design. The zero-trust model means that no user has to trust the Ika network with unilateral control of assets at any point. If a user creates a dWallet (say a BTC address managed by Ika), that address’s private key is never held by any single party – not even the user alone. Instead, the user holds a secret share and the network collectively holds the other share. Both are required to sign any transaction. Thus, even if the worst-case scenario occurred (e.g. many Ika nodes were compromised by an attacker), they still could not move funds without the user’s secret key share. This property addresses a major risk in conventional bridges, where a quorum of validators could collude to steal locked assets. Ika eliminates that risk by fundamentally changing the access structure (the threshold is set such that the network alone is never enough – the threshold effectively includes the user). In the literature, this is a new paradigm: a non-collusive MPC network where the asset owner remains part of the signing quorum by design.

On the network side, Ika uses a delegated Proof-of-Stake model (inherited from Sui’s design) for selecting and incentivizing validators. IKA token holders can delegate stake to validator nodes; the top validators (weighted by stake) become the authorities for an epoch, and consensus remains Byzantine fault tolerant as long as more than 2/3 of stake is honest in each epoch. In other words, the system assumes less than one-third of stake is malicious in order to maintain safety. If a validator misbehaves (e.g. tries to produce an incorrect signature share or censor transactions), the consensus and MPC protocol will detect it – incorrect signature shares can be identified (they won’t combine into a valid signature), and a malicious node can be logged and potentially slashed or removed in future epochs. Meanwhile, liveness is maintained as long as enough nodes (>67%) participate; the consensus can continue to finalize operations even if many nodes crash or go offline unexpectedly. This fault tolerance ensures the service is robust – no single point of failure exists, since hundreds of independent operators in different jurisdictions are participating. Decentralization is further reinforced by the sheer number of participants: Ika does not limit itself to a fixed small committee, so it can onboard more validators to increase security without sacrificing much performance. In fact, Ika’s protocol was explicitly designed to “transcend the node limit of MPC networks” and allow massive decentralization.

Finally, the Ika team has subjected their cryptography to external review. They published a comprehensive whitepaper in 2024 detailing the 2PC-MPC protocol, and they have undergone at least one third-party security audit so far. For example, in June 2024, an audit by Symbolic Software examined Ika’s Rust implementation of the 2PC-MPC protocol and related crypto libraries. Such an audit typically focuses on validating the correctness of the cryptographic protocols (ensuring there are no flaws in the threshold ECDSA scheme, key generation, or share aggregation) and checking for potential vulnerabilities. The codebase is open-source (under the dWallet Labs GitHub), allowing the community to inspect and contribute to its security. As of the alpha testnet stage, the team also cautioned that the software was still experimental and not yet production-audited, but ongoing audits and security improvements were a top priority prior to mainnet launch. In summary, Ika’s security model is a combination of provable cryptographic guarantees (from threshold schemes) and blockchain-grade decentralization (from the PoS consensus and large validator set), reviewed by experts, to provide strong assurances against both external attackers and insider collusion.

Compatibility and Ecosystem Interoperability

Ika is purpose-built to be an interoperability layer, initially for Sui but extensible to many ecosystems. On day one, its closest integration is with the Sui blockchain: it effectively acts as an add-on module to Sui, empowering Sui dApps with multi-chain capabilities. This tight alignment is by design – Sui’s Move contracts and object-centric model make it a good “controller” for Ika’s dWallets. For instance, a Sui DeFi application can use Ika to pull liquidity from Ethereum or Bitcoin on the fly, making Sui a hub for multi-chain liquidity. Sui Foundation’s support for Ika indicates a strategy to position Sui as “the base chain for every chain”, leveraging Ika to connect to external assets. In practice, when Ika mainnet is live, a Sui builder might create a Move contract that, say, accepts BTC deposits: behind the scenes, that contract would create a Bitcoin dWallet (an address) via Ika and issue instructions to move BTC when needed. The end user experiences this as if Bitcoin is just another asset managed within the Sui app, even though the BTC stays native on Bitcoin until a valid threshold-signed transaction moves it.

Beyond Sui, Ika’s architecture supports other Layer-1 blockchains, Layer-2s, and even off-chain systems. The network can host multiple light clients concurrently, so it can validate state from Ethereum, Solana, Avalanche, or others – enabling smart contracts on those chains (or their users) to also leverage Ika’s MPC network. While such capabilities might roll out gradually, the design goal is chain-agnostic. In the interim, even without deep on-chain integration, Ika can be used in a more manual way: for example, an application on Ethereum could call an Ika API (via an Oracle or off-chain service) to request a signature for an Ethereum tx or a message. Because Ika supports ECDSA, it could even be used to manage an Ethereum account’s key in a decentralized way, similarly to how Lit Protocol’s PKPs work (we discuss Lit later). Ika has also showcased use cases like controlling Bitcoin on rollups – an example being integrating with the Polygon Avail framework to allow rollup users to manage BTC without trusting a centralized custodian. This suggests Ika may collaborate with various ecosystems (Polygon/Avail, Celestia rollups, etc.) as a provider of decentralized key infrastructure.

In summary, from a technical standpoint Ika is compatible with any system that relies on digital signatures – which is essentially all blockchains. Its initial deployment on Sui is just the beginning; the long-term vision is a universal MPC layer that any chain or dApp can plug into for secure cross-chain operations. By supporting common cryptographic standards (ECDSA, Ed25519, Schnorr) and providing the needed light client verifications, Ika could become a kind of “MPC-as-a-service” network for all of Web3, bridging assets and actions in a trust-minimized way.

Business and Investment Perspective

Founding Team and Background

Ika was founded by a team of seasoned cryptography and blockchain specialists, primarily based in Israel. The project’s creator and CEO is Omer Sadika, an entrepreneur with a strong pedigree in the crypto security space. Omer previously co-founded the Odsy Network, another project centered on decentralized wallet infrastructure, and he is the Founder/CEO of dWallet Labs, the company behind Ika. His background includes training at Y Combinator (YC alum) and a focus on cybersecurity and distributed systems. Omer’s experience with Odsy and dWallet Labs directly informed Ika’s vision – in essence, Ika can be seen as an evolution of the “dynamic decentralized wallet” concept Odsy worked on, now implemented as an MPC network on Sui.

Ika’s CTO and co-founder is Yehonatan Cohen Scaly, a cryptography expert who co-authored the 2PC-MPC protocol. Yehonatan leads the R&D for Ika’s novel cryptographic algorithms and had previously worked in cybersecurity (possibly with academic research in cryptography). He has been quoted discussing the limitations of existing threshold schemes and how Ika’s approach overcomes them, reflecting deep expertise in MPC and distributed cryptographic protocols. Another co-founder is David Lachmish, who oversees product development. David’s role is to translate the core technology into developer-friendly products and real-world use cases. The trio of Omer, Yehonatan, and David – along with other researchers like Dr. Dolev Mutzari (VP of Research at dWallet Labs) – anchors Ika’s leadership. Collectively, the team’s credentials include prior startups, academic research contributions, and experience at the intersection of crypto, security, and blockchain. This depth is why Ika is described as being created by “some of the world’s leading cryptography experts”.

In addition to the founders, Ika’s broader team and advisors likely feature individuals with strong cryptography backgrounds. For instance, Dolev Mutzari (mentioned above) is a co-author of the technical paper and instrumental in the protocol design. The presence of such talent gives investors confidence that Ika’s complex technology is in capable hands. Moreover, having a founder (Omer) who already successfully raised funds and built a community around Odsy/dWallet concepts means Ika benefits from lessons learned in previous iterations of the idea. The team’s base in Israel – a country known for its cryptography and cybersecurity sector – also situates them in a rich talent pool for hiring developers and researchers.

Funding Rounds and Key Backers

Ika (and its parent, dWallet Labs) has attracted significant venture funding and strategic investment since its inception. To date it has raised over $21 million across multiple rounds. The project’s initial seed round in August 2022 was $5M, which was remarkable given the bear market conditions at that time. That seed round included a wide array of well-known crypto investors and angels. Notable participants included Node Capital (lead), Lemniscap, Collider Ventures, Dispersion Capital, Lightshift Capital, Tykhe Block Ventures, Liquid2 Ventures, Zero Knowledge Ventures, and others. Prominent individual investors also joined, such as Naval Ravikant (AngelList co-founder and prominent tech investor), Marc Bhargava (co-founder of Tagomi), Rene Reinsberg (co-founder of Celo), and several other industry figures. Such a roster of backers underscored strong confidence in Ika’s approach to decentralized custody even at the idea stage.

In May 2023, Ika raised an additional ~$7.5M in what appears to be a Series A or strategic round, reportedly at a valuation around $250M. This round was led by Blockchange Ventures and Node Capital (again), with participation from Insignius Capital, Rubik Ventures, and others. By this point, the thesis of scalable MPC networks had gained traction, and Ika’s progress likely attracted these investors to double down. The $250M valuation for a relatively early-stage network reflected the market’s expectation that Ika could become foundational infrastructure in web3 (on par with L1 blockchains or major DeFi protocols in terms of value).

The most high-profile investment came in April 2025, when the Sui Foundation announced a strategic investment in Ika. This partnership with Sui’s ecosystem fund pushed Ika’s total funding above $21M and cemented a close alignment with the Sui blockchain. While the exact amount Sui Foundation invested wasn’t publicly disclosed, it’s clear this was a significant endorsement – likely on the order of several million USD. The Sui Foundation’s support is not just financial; it also means Ika gets strong go-to-market assistance within the Sui ecosystem (developer outreach, integration support, marketing, etc.). According to press releases, “Ika…announced a strategic investment from the Sui Foundation, pushing its total funding to over $21 million.” This strategic round, rather than a traditional VC equity round, highlights that Sui sees Ika as critical infrastructure for its blockchain’s future (similar to how Ethereum Foundation might directly back a Layer-2 or interoperability project that benefits Ethereum).

Aside from Sui, other backers worth noting are Node Capital (a China-based crypto fund known for early investments in infrastructure), Lemniscap (a crypto VC focusing on early protocol innovation), and Collider Ventures (Israel-based VC, likely providing local support). Blockchange Ventures leading the 2023 round is notable; Blockchange is a VC that has backed several crypto infrastructure plays and their lead suggests they saw Ika’s tech as potentially category-defining. Additionally, Digital Currency Group (DCG) and Node Capital led a $5M fundraise for dWallet Labs prior to Ika’s rebranding (according to a LinkedIn post by Omer) – DCG’s involvement (via an earlier round for the company) indicates even more support in the background.

In summary, Ika’s funding journey shows a mix of traditional VCs and strategic partners. The Sui Foundation’s involvement particularly stands out, as it not only provides capital but also an integrated ecosystem to deploy Ika’s technology. Investors are essentially betting that Ika will become the go-to solution for decentralized key management and bridging across many networks, and they have valued the project accordingly.

Tokenomics and Economic Model

Ika will have a native utility token called $IKA, which is central to the network’s economics and security model. Uniquely, the IKA token is being launched on the Sui blockchain (as a Sui-native asset), even though the Ika network itself is a separate chain. This means IKA will exist as a coin that can be held and transferred on Sui like any other Sui asset, and it will be used in a dual manner: within the Ika network for staking and fees, and on Sui for governance or access in dApps. The tokenomics can be outlined as follows:

  • Gas Fees: Just as ETH is gas in Ethereum or SUI is gas in Sui, IKA serves as the gas/payment for MPC operations on the Ika network. When a user or a dApp requests a signature or dWallet operation, a fee in IKA is paid to the network. These fees compensate validators for the computation and communication work of running the threshold signing protocol. The whitepaper analogizes IKA’s role to Sui’s gas, confirming that all cross-chain transactions facilitated by Ika will incur a small IKA fee. The fee schedule is likely proportional to the complexity of the operation (e.g., a single signature might cost a baseline fee, while more complex multi-step workflows could cost more).

  • Staking and Security: IKA is also a staking token. Validator nodes in the Ika network must be delegated a stake of IKA to participate in consensus and signing. The consensus follows a delegated proof-of-stake similar to Sui’s: token holders delegate IKA to validators, and the weight of each validator in the consensus (and thus in the threshold signature processes) is determined by stake. In each epoch, validators are chosen and their voting power is a function of stake, with the overall set being Byzantine fault tolerant (meaning if a validator set has total stake $X$, up to ~$X/3$ stake could be malicious without breaking the network’s guarantees; this bound is spelled out right after this list). Stakers (delegators) are incentivized by staking rewards: Ika’s model likely includes distribution of the collected fees (and possibly inflationary rewards) to validators and their delegators at epoch ends. Indeed, documentation notes that all transaction fees collected are distributed to authorities, who may share a portion with their delegators as rewards. This mirrors the Sui model of rewarding service providers for throughput.

  • Supply and Distribution: As of now (Q2 2025), details on IKA’s total supply, initial distribution, and inflation are not fully public. However, given the funding rounds, we can infer some structure. Likely, a portion of IKA is allocated to early investors (seed and series rounds) and the team, with a large part reserved for community and future incentives. There may be a community sale or airdrop planned, especially since Ika ran a record-setting NFT art campaign on Sui that raised 1.4M SUI; participants in that campaign may receive IKA rewards or early access. The NFT campaign suggests a strategy to involve the community and bootstrap token distribution to users, not just VCs.

  • Token Launch Timing: The Sui Foundation’s October 2024 announcement indicated “The IKA token will launch natively on Sui, unlocking new functionality and utility in decentralized security”. Mainnet was slated for December 2024, so presumably the token generation event (TGE) would coincide or shortly follow. If mainnet launched on schedule, IKA tokens might have begun distribution in late 2024 or early 2025. The token would then start being used for gas on the Ika network and staking. Before that, a temporary token (DWLT) with no real value was used for gas on testnet.

  • Use Cases and Value Accrual: The value of IKA as an investment hinges on Ika network usage. As more cross-chain transactions flow through Ika, more fees are paid in IKA, creating demand. Additionally, if many want to run validators or secure the network, they must acquire and stake IKA, which locks up supply (reducing float). Thus IKA has a utility plus governance nature – utility in paying for services and staking, and likely governance in directing the future of the protocol (though governance isn’t explicitly mentioned yet, it’s common for such networks to eventually decentralize control via token voting). One can imagine IKA token holders voting on adding support for new chains, adjusting fee parameters, or other protocol upgrades in the future.
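
Spelling out the security bound from the staking bullet above, the stake-weighted BFT arithmetic looks as follows (a generic delegated-PoS sketch, not Ika-specific parameters):

```latex
% Stake-weighted voting power and the BFT safety bound (generic dPoS sketch)
w_i = \frac{s_i}{X}, \qquad X = \sum_j s_j \quad \text{(total delegated stake)}

\text{Safety holds while Byzantine stake } f = \sum_{i \in \mathrm{Byz}} s_i < \frac{X}{3},
\quad \text{i.e. any honest quorum must carry weight} \ge \frac{2X}{3}.
```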

Overall, IKA’s tokenomics aim to balance network security with usability. By launching on Sui, they make it easy for Sui ecosystem users to obtain and use IKA (no separate chain onboarding needed for the token itself), which can jumpstart adoption. Investors will watch metrics like the portion of supply staked (indicating security), the fee revenue (indicating usage), and partnerships that drive transactions (indicating demand for the token).

Business Model and Go-to-Market Strategy

Ika’s business model is that of an infrastructure provider in the blockchain ecosystem. It doesn’t offer a consumer-facing product; instead it offers a protocol service (decentralized key management and transaction execution) that other projects integrate. As such, the primary revenue (or value capture) mechanism is the fee for service – i.e., the gas fees in IKA for using the network. One can liken Ika to a decentralized AWS for key signing: any developer can plug in and use it, paying per use. In the long run, as the network decentralizes, dWallet Labs (the founding company) might capture value by holding a stake in the network and via token appreciation rather than charging SaaS-style fees off-chain.

Go-to-Market (GTM) Strategy: Early on, Ika is targeting blockchain developers and projects that need cross-chain functionality or custody solutions. The alignment with Sui gives a ready pool of such developers. Sui itself, being a newer L1, needs unique features to attract users – and Ika offers cross-chain DeFi, Bitcoin access, and more on Sui, which are compelling features. Thus, Ika’s GTM piggybacks on Sui’s growing ecosystem. Notably, even before mainnet, several Sui projects announced they are integrating Ika:

  • Projects like Full Sail, Rhei, Aeon, Human Tech, Covault, Lucky Kat, Native, Nativerse, Atoma, and Ekko (all builders on Sui) have “announced their upcoming launches utilizing Ika”, covering use cases from DeFi to gaming. For example, Full Sail might be building an exchange that can trade BTC via Ika; Lucky Kat (a gaming studio) could use Ika to enable in-game assets that reside on multiple chains; Covault likely involves custody solutions, etc. By securing these partnerships early, Ika ensures that upon launch there will be immediate transaction volume and real applications showcasing its capabilities.

  • Ika is also emphasizing institutional use-cases, such as decentralized custody for institutions. In press releases, they highlight “unmatched security for institutional and individual users” in custody via Ika. This suggests Ika could be marketed to crypto custodians, exchanges, or even TradFi players that want a more secure way to manage private keys (perhaps as an alternative or complement to Fireblocks or Copper, which use MPC but in a centralized enterprise setting). In fact, by being a decentralized network, Ika could allow competitors in custody to all rely on the same robust signing network rather than each building their own. This cooperative model could attract institutions that prefer a neutral, decentralized custodian for certain assets.

  • Another angle is AI integrations: Ika mentions “AI Agent guardrails” as a use case. This is forward-looking, playing on the trend of AI autonomy (e.g., AI agents executing on blockchain). Ika can ensure an AI agent (say an autonomous economic agent given control of some funds) cannot run off with the funds because the agent itself isn’t the sole holder of the key – it would still need the user’s share or abide by conditions in Ika. Marketing Ika as providing safety rails for AI in Web3 is a novel angle to capture interest from that sector.

Geographically, the presence of Node Capital and others hints at an Asia focus as well, in addition to the Western market. Sui has a strong Asia community (especially in China). Ika’s NFT campaign on Sui (the art campaign raising 1.4M SUI) indicates a community-building effort – possibly engaging Chinese users who are avid in Sui NFT space. By doing NFT sales or community airdrops, Ika can cultivate a grassroots user base who hold IKA tokens and are incentivized to promote its adoption.

Over time, the business model could extend to offering premium features or enterprise integrations. For instance, while the public Ika network is permissionless, dWallet Labs could spin up private instances or consortium versions for certain clients, or provide consulting services to projects integrating Ika. They could also earn via running some of the validators early on (bootstrap phase) and thus collecting part of the fees.

In summary, Ika’s GTM is strongly tied to ecosystem partnerships. By embedding deeply into Sui’s roadmap (where Sui’s 2025 goals include cross-chain liquidity and unique use cases), Ika ensures it will ride the growth of that L1. Simultaneously, it positions itself as a generalized solution for multi-chain coordination, which can then be pitched to projects on other chains once a success on Sui is demonstrated. The backing from Sui Foundation and the early integration announcements give Ika a significant head start in credibility and adoption compared to if it launched in isolation.

Ecosystem Adoption, Partnerships, and Roadmap

Even at its early stage, Ika has built an impressive roster of ecosystem engagements:

  • Sui Ecosystem Adoption: As mentioned, multiple Sui-based projects are integrating Ika. This means upon Ika’s mainnet launch, we expect to see Sui dApps enabling features like “Powered by Ika” – for example, a Sui lending protocol that lets users deposit BTC, or a DAO on Sui that uses Ika to hold its treasury on multiple chains. The fact that names like Rhei, Atoma, Nativerse (likely DeFi projects) and Lucky Kat (gaming/NFT) are on board shows that Ika’s applicability spans various verticals.

  • Strategic Partnerships: Ika’s most important partnership is with the Sui Foundation itself, which is both an investor and a promoter. Sui’s official channels (blog, etc.) have featured Ika prominently, effectively endorsing it as the interoperability solution for Sui. Additionally, Ika has likely been working with other infrastructure providers. For instance, given the mention of zkLogin (Sui’s Web2 login feature) alongside Ika, there could be a combined use-case where zkLogin handles user authentication and Ika handles cross-chain transactions, together providing a seamless UX. Also, Ika’s mention of Avail (the data-availability network spun out of Polygon) in its blogs suggests a partnership or pilot in that ecosystem: perhaps with Polygon Labs or teams building rollups on Avail to use Ika for bridging Bitcoin to those rollups. Another potential partnership domain is with custodians – for example, integrating Ika with MPC wallet providers like ZenGo (notable given its founders’ ties to Omer’s earlier ventures) or with institutional custody tech like Fireblocks. While not confirmed, these would be logical targets (indeed Fireblocks has partnered with Sui elsewhere; one could imagine Fireblocks leveraging Ika for MPC on Sui).

  • Community and Developer Engagement: Ika runs a Discord and likely hackathons to get developers building with dWallets. The technology is novel, so evangelizing it through education is key. The presence of “Use cases” and “Builders” sections on their site, plus blog posts explaining core concepts, indicates a push to get developers comfortable with the concept of dWallets. The more developers understand that they can build cross-chain logic without bridges (and without compromising security), the more organic adoption will grow.

  • Roadmap: As of 2025, Ika’s roadmap included:

    • Alpha and Testnet (2023–2024): The alpha testnet launched in 2024 on Sui, allowing developers to experiment with dWallets and providing feedback. This stage was used to refine the protocol, fix bugs, and run internal audits.
    • Mainnet Launch (Dec 2024): Ika planned to go live on mainnet by end of 2024. If achieved, by now (mid-2025) Ika’s mainnet should be operational. Launch likely included initial support for a set of chains: at least Bitcoin and Ethereum (ECDSA chains) out of the gate, given those were heavily mentioned in marketing.
    • Post-Launch 2025 Goals: In 2025, we expect the focus to be on scaling usage (through Sui apps and possibly expanding to other chains). The team will work on adding Ed25519 and Schnorr support shortly after launch, enabling integration with Solana, Polkadot, and other ecosystems. They will also implement more light clients (perhaps Ethereum light client for Ika, Solana light client, etc.) to broaden the trustless control. Another roadmap item is likely permissionless validator expansion – encouraging more independent validators to join and decentralizing the network further. Since the code is a Sui fork, running an Ika validator is similar to running a Sui node, which many operators can do.
    • Feature Enhancements: Two interesting features hinted in blogs are Encrypted User Shares and Future Transaction signing. Encrypted User Shares let users optionally encrypt their private share and store it on-chain (perhaps on Ika or elsewhere) in a way only they can decrypt, simplifying recovery. Future Transaction signing means Ika can pre-sign a transaction that executes later when conditions are met. These features increase usability (users won’t have to be online for every action if they pre-approve certain logic, all while maintaining non-custodial security). Delivering these in 2025 would further differentiate Ika’s offering.
    • Ecosystem Growth: By end of 2025, Ika likely aims to have multiple chain ecosystems actively using it. We might see, for example, an Ethereum project using Ika via an oracle (if direct on-chain integration is not yet there) or collaborations with interchain projects like Wormhole or LayerZero, where Ika could serve as the signing mechanism for secure messaging.

The competitive landscape will also shape Ika’s strategy. It’s not alone in offering decentralized key management, so part of its roadmap will involve highlighting its performance edge and unique two-party security in contrast to others. In the next section, we compare Ika to its notable competitors Lit Protocol, Threshold Network, and Zama.

Competitive Analysis: Ika vs. Other MPC/Threshold Networks

Ika operates in a cutting-edge arena of cryptographic networks, where a few projects are pursuing similar goals with varying approaches. Below is a summary comparison of Ika with Lit Protocol, Threshold Network, and Zama (each a representative competitor in decentralized key infrastructure or privacy computing):

| Aspect | Ika (Parallel MPC Network) | Lit Protocol (PKI & Compute) | Threshold Network (tBTC & TSS) | Zama (FHE Network) |
| --- | --- | --- | --- | --- |
| Launch & Status | Founded 2022; testnet in 2024; mainnet launched on Sui in Dec 2024 (early 2025). Token $IKA live on Sui. | Launched 2021; Lit nodes network live. Token $LIT (launched 2021). Building “Chronicle” rollup for scaling. | Network went live 2022 after Keep/NuCypher merger. Token $T governs DAO. tBTC v2 launched for Bitcoin bridging. | In development (no public network yet as of 2025). Raised large VC rounds for R&D. No token yet (FHE tools in alpha stage). |
| Core Focus / Use-Case | Cross-chain interoperability and custody: threshold signing to control native assets across chains (e.g. BTC, ETH) via dWallets. Enables DeFi, multi-chain dApps, etc. | Decentralized key management & access control: threshold encryption/decryption and conditional signing via PKPs (Programmable Key Pairs). Popular for gating content, cross-chain automation with JavaScript “Lit Actions”. | Threshold cryptography services: e.g. tBTC decentralized Bitcoin-to-Ethereum bridge; threshold ECDSA for digital asset custody; threshold proxy re-encryption (PRE) for data privacy. | Privacy-preserving computation: Fully Homomorphic Encryption (FHE) to enable encrypted data processing and private smart contracts. Focus on confidentiality (e.g. private DeFi, on-chain ML) rather than cross-chain control. |
| Architecture | Fork of Sui blockchain (DAG consensus Mysticeti) modified for MPC. No user smart contracts on Ika; uses off-chain 2PC-MPC protocol among ~N validators + user share. High-throughput (10k TPS) design. | Decentralized network + L2: Lit nodes run MPC and also a TEE-based JS runtime. “Chronicle” Arbitrum rollup used to anchor state and coordinate nodes. Uses 2/3 threshold for consensus on key operations. | Decentralized network on Ethereum: node operators are staked with $T and randomly selected into signing groups (e.g. 100 nodes for tBTC). Uses off-chain protocols (GG18, etc.) with on-chain Ethereum contracts for coordination and deposit handling. | FHE toolkits atop existing chains: Zama’s tech (e.g. Concrete, TFHE libraries) enables FHE on Ethereum (fhEVM). Plans for a threshold key management system (TKMS) for FHE keys. Likely will integrate with L1s or run as Layer-2 for private computations. |
| Security Model | 2PC-MPC, non-collusive: user’s key share + threshold of N validators (2/3 BFT) required for any signature. No single entity ever has the full key. BFT consensus tolerates <33% malicious. Audited by Symbolic (2024). | Threshold + TEE: requires 2/3 of Lit nodes to sign/decrypt. Uses Trusted Execution Environments on each node to run user-provided code (Lit Actions) securely. Security depends on node honesty and hardware security. | Threshold multi-party: e.g. for tBTC, a randomly selected group of ~100 nodes must reach a threshold (e.g. 51) to sign BTC transactions. Economic incentives ($T staking, slashing) keep an honest majority. DAO-governed; security incidents would be handled via governance. | FHE-based: security relies on cryptographic hardness of FHE (learning with errors, etc.) – data remains encrypted at all times. Zama’s TKMS indicates use of threshold cryptography to manage FHE keys as well. Not a live network yet; security under review by academics. |
| Performance | Sub-second latency, ~10,000 signatures/sec in theory. Scales to hundreds or thousands of nodes without major performance loss (broadcast & batching approach). Suitable for real-time dApp use (trading, gaming). | Moderate latency (heavier due to TEE and consensus overhead). Lit has ~50 nodes; uses “shadow splicing” to scale, but a large node count can degrade performance. Good for moderate-frequency tasks (opening access, occasional tx signing). Chronicle L2 helps batching. | Lower throughput, higher latency: tBTC minting can take minutes (waiting for Bitcoin confirmations + threshold signing) and uses small groups to sign. Threshold’s focus is quality (security) over quantity – fine for bridging transactions and access control, not designed for thousands of TPS. | Heavy computation latency: FHE is currently much slower than plaintext computation (orders of magnitude). Zama is optimizing, but running private contracts will be slower and costlier than normal ones. Not aimed at high-frequency tasks; targeted at complex computations where privacy is paramount. |
| Decentralization | High – permissionless validator set, hundreds of validators possible. Delegated PoS (Sui-style) ensures open participation and decentralized governance over time. User always in the loop (can’t be bypassed). | Medium – currently ~30-50 core nodes run by the Lit team and partners. Plans to decentralize further. Nodes do heavy tasks (MPC + TEE), so scaling out is non-trivial. Governance not fully decentralized yet (Lit DAO exists but early). | High – large pool of stakers; however, actual signing is done by selected groups (not the entire network at once). The network is as decentralized as its stake distribution. Governed by Threshold DAO (token-holder votes) – mature decentralization in governance. | N/A (for network) – Zama is more a company-driven project now. If fhEVM networks launch, initially likely centralized or a limited set of nodes (given complexity). Over time could decentralize execution of FHE transactions, but that’s uncharted territory in 2025. |
| Token and Incentives | $IKA (Sui-based) for gas fees, staking, and potentially governance. Incentive: earn fees for running validators; token appreciates with network usage. Sui Foundation backing gives it ecosystem value. | $LIT token – used for governance and maybe fees for advanced services. Lit Actions currently free to developers (no gas); long-term may introduce a fee model. $LIT incentivizes node operation (stakers), but exact token economics are evolving. | $T token – staked by nodes, governs the DAO treasury and protocol upgrades. Nodes earn in $T and fees (in ETH or tBTC fees). $T secures the network (slashing for misbehavior). Also used in liquidity programs for tBTC adoption. | No token (yet) – Zama is VC-funded; might introduce a token if they launch a network service (could be used for paying for private computation or staking to secure networks running FHE contracts). Currently developers use Zama’s tools without a token. |
| Key Backers | Sui Foundation (strategic investor); VCs: Node Capital, Blockchange, Lemniscap, Collider; angels like Naval Ravikant. Strong support from the Sui ecosystem. | Backed by 1kx, Pantera, Coinbase Ventures, Framework, etc. (raised $13M in 2022). Growing developer community via Lit DAO. Partnerships with Ceramic, NFT projects for access control. | Emerged from the Keep & NuCypher communities (backed by a16z, Polychain in the past). Threshold is run by a DAO; no new VC funding post-merger (grants from the Ethereum Community Fund, etc.). Partnerships: works with Curve, Aave (tBTC integrations). | Backed by a16z, SoftBank, Multicoin Capital (raised $73M Series A). Close ties with Ethereum Foundation research (Rand Hindi, CEO, is an outspoken FHE advocate in Ethereum). Collaborating with projects like Optalysys for hardware acceleration. |

Ika’s Competitive Edge: Ika’s differentiators lie in its performance at scale and unique security model. Compared to Lit Protocol, Ika can support far more signers and much higher throughput, making it suitable for use cases (like high-volume trading or gaming) that Lit’s network would struggle with. Ika also does not rely on Trusted Execution Environments, which some developers are wary of (due to potential exploits in SGX); instead, Ika achieves trustlessness purely with cryptography and consensus. Against Threshold Network, Ika offers a more general-purpose platform. Threshold is largely focused on Bitcoin↔Ethereum bridging (tBTC) and a couple of cryptographic services like proxy re-encryption, whereas Ika is a flexible interoperability layer that can work with any chain and asset out-of-the-box. Also, Ika’s user-in-the-loop model means it doesn’t require over-collateralization or insurance for deposits (tBTC v2 uses a robust but complex economic model to secure BTC deposits, whereas in Ika the user never gives up control in the first place). Compared to Zama, Ika addresses a different problem – Zama targets privacy, while Ika targets interoperability. However, it’s conceivable that in the future the two could complement each other (e.g., using FHE on Ika-stored assets). For now, Ika has the advantage of being operational sooner in a niche with immediate demand (bridges and MPC networks are needed today, whereas FHE is still maturing).

One potential challenge for Ika is market education and trust. It’s introducing a novel way of doing cross-chain interactions (dWallets instead of traditional lock-and-mint bridges). It will need to demonstrate its security in practice over time to win the same level of trust that, say, the Threshold Network has gradually earned (Threshold had to prove out tBTC after an earlier version was paused due to risks). If Ika’s technology works as advertised, it effectively leapfrogs the competition by solving the trilemma of decentralization, security, and speed in the MPC space. The strong backing from Sui and the extensive audits/papers lend credibility.

In conclusion, Ika stands out among MPC networks for its ambitious scalability and user-centric security model. Investors see it as a bet on the future of cross-chain coordination – one where users can seamlessly move value and logic across many blockchains without ever giving up control of their keys. If Ika achieves broad adoption, it could become as integral to Web3 infrastructure as cross-chain messaging protocols or major Layer-1 blockchains themselves. The coming year (2025) will be critical as Ika’s mainnet and first use cases go live, proving whether this cutting-edge cryptography can deliver on its promises in real market conditions. The early signs – strong technical fundamentals, an active pipeline of integrations, and substantial investor support – suggest that Ika has a real shot at redefining blockchain interoperability with MPC.

Sources: Primary information was gathered from Ika’s official documentation and whitepaper, Sui Foundation announcements, press releases and funding news, as well as competitor technical docs and analyses for context (Lit Protocol’s Messari report, Threshold Network documentation, and Zama’s FHE descriptions). All information is up-to-date as of 2025.

Programmable Privacy in Blockchain: Off‑Chain Compute with On‑Chain Verification

· 47 min read
Dora Noda
Software Engineer

Public blockchains provide transparency and integrity at the cost of privacy – every transaction and contract state is exposed to all participants. This openness creates problems like MEV (Maximal Extractable Value) attacks, copy-trading, and leakage of sensitive business logic. Programmable privacy aims to solve these issues by allowing computations on private data without revealing the data itself. Two emerging cryptographic paradigms are making this possible: Fully Homomorphic Encryption Virtual Machines (FHE-VM) and Zero-Knowledge (ZK) Coprocessors. These approaches enable off-chain or encrypted computation with on-chain verification, preserving confidentiality while retaining trustless correctness. In this report, we dive deep into FHE-VM and ZK-coprocessor architectures, compare their trade-offs, and explore use cases across finance, identity, healthcare, data markets, and decentralized machine learning.

Fully Homomorphic Encryption Virtual Machine (FHE-VM)

Fully Homomorphic Encryption (FHE) allows arbitrary computations on encrypted data without ever decrypting it. An FHE Virtual Machine integrates this capability into blockchain smart contracts, enabling encrypted contract state and logic. In an FHE-enabled blockchain (often called an fhEVM for EVM-compatible designs), all inputs, contract storage, and outputs remain encrypted throughout execution. This means validators can process transactions and update state without learning any sensitive values, achieving on-chain execution with data confidentiality.

Architecture and Design of FHE-VM

A typical FHE-VM extends a standard smart contract runtime (like the Ethereum Virtual Machine) with native support for encrypted data types and operations. For example, Zama’s FHEVM introduces encrypted integers (euint8, euint32, etc.), encrypted booleans (ebool), and even encrypted arrays as first-class types. Smart contract languages like Solidity are augmented via libraries or new opcodes so developers can perform arithmetic (add, mul, etc.), logical operations, and comparisons directly on ciphertexts. Under the hood, these operations invoke FHE primitives (e.g. using the TFHE library) to manipulate encrypted bits and produce encrypted results.
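
To make this concrete, here is a minimal Solidity sketch in the general style of Zama’s fhevm-solidity library; the import path, type names, and function signatures vary across versions and implementations, so treat it as illustrative rather than canonical:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Illustrative only: follows the general shape of Zama's fhevm-solidity API.
import "fhevm/lib/TFHE.sol";

contract EncryptedTally {
    euint32 private total;  // encrypted 32-bit counter held in contract state
    ebool   private capped; // encrypted flag: has the tally reached the cap?

    function record(euint32 amount) internal {
        // Arithmetic and comparisons operate directly on ciphertexts;
        // validators execute these ops without learning any plaintext.
        total  = TFHE.add(total, amount);
        capped = TFHE.ge(total, TFHE.asEuint32(1000));
    }
}
```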

Encrypted state storage is supported so that contract variables remain encrypted in the blockchain state. The execution flow is typically:

  1. Client-Side Encryption: Users encrypt their inputs locally using the public FHE key before sending transactions. The encryption key is public (for encryption and evaluation), while the decryption key remains secret. In some designs, each user manages their own key; in others, a single global FHE key is used (discussed below).
  2. On-Chain Homomorphic Computation: Miners/validators execute the contract with encrypted opcodes. They perform the same deterministic homomorphic operations on the ciphertexts, so consensus can be reached on the encrypted new state. Crucially, validators never see plaintext data – they just see “gibberish” ciphertext but can still process it consistently.
  3. Decryption (Optional): If a result needs to be revealed or used off-chain, an authorized party with the private key can decrypt the output ciphertext. Otherwise, results remain encrypted and can be used as inputs to further transactions (allowing consecutive computations on persistent encrypted state).

A major design consideration is key management. One approach is per-user keys, where each user holds their secret key and only they can decrypt outputs relevant to them. This maximizes privacy (no one else can ever decrypt your data), but homomorphic operations cannot mix data encrypted under different keys without complex multi-key protocols. Another approach, used by Zama’s FHEVM, is a global FHE key: a single public key encrypts all contract data and a distributed set of validators holds shares of the threshold decryption key. The public encryption and evaluation keys are published on-chain, so anyone can encrypt data to the network; the private key is split among validators who can collectively decrypt if needed under a threshold scheme. To prevent validator collusion from compromising privacy, Zama employs a threshold FHE protocol (based on their Noah’s Ark research) with “noise flooding” to make partial decryptions secure. Only if a sufficient quorum of validators cooperates can a plaintext be recovered, for example to serve a read request. In normal operation, however, no single node ever sees plaintext – data remains encrypted on-chain at all times.

Access control is another crucial component. FHE-VM implementations include fine-grained controls to manage who (if anyone) can trigger decryptions or access certain encrypted fields. For instance, Cypher’s fhEVM supports Access Control Lists on ciphertexts, allowing developers to specify which addresses or contracts can interact with or re-encrypt certain data. Some frameworks support re-encryption: the ability to transfer an encrypted value from one user’s key to another’s without exposing plaintext. This is useful for things like data marketplaces, where a data owner can encrypt a dataset with their key, and upon purchase, re-encrypt it to the buyer’s key – all on-chain, without ever decrypting publicly.
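
A minimal sketch of such an ACL-gated data sale, again assuming an fhEVM-style TFHE.allow API (the contract and function names here are hypothetical):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol";

// Hypothetical data-sale contract: the ciphertext never leaves encrypted form;
// a purchase simply extends the ACL so the buyer can re-encrypt it to their key.
contract EncryptedDataSale {
    address public immutable dataOwner;
    euint64 private encryptedDataset; // stand-in for a pointer to encrypted data

    constructor(einput data, bytes memory inputProof) {
        dataOwner = msg.sender;
        encryptedDataset = TFHE.asEuint64(data, inputProof);
        TFHE.allow(encryptedDataset, address(this)); // contract keeps access
        TFHE.allow(encryptedDataset, msg.sender);    // so does the owner
    }

    function purchase() external payable {
        require(msg.value >= 1 ether, "price not met");
        // Buyer can now request re-encryption of the value under their own key.
        TFHE.allow(encryptedDataset, msg.sender);
    }
}
```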

Ensuring Correctness and Privacy

One might ask: if all data is encrypted, how do we enforce correctness of contract logic? How can the chain prevent invalid operations if it can’t “see” the values? FHE by itself doesn’t provide a proof of correctness – validators can perform the homomorphic steps, but they can’t inherently tell if a user’s encrypted input was valid or if a conditional branch should be taken, etc., without decrypting. Zero-knowledge proofs (ZKPs) can complement FHE to solve this gap. In an FHE-VM, typically users must provide a ZK proof attesting to certain plaintext conditions whenever needed. Zama’s design, for example, uses a ZK Proof of Plaintext Knowledge (ZKPoK) to accompany each encrypted input. This proves that the user knows the plaintext corresponding to their ciphertext and that it meets expected criteria, without revealing the plaintext itself. Such “certified ciphertexts” prevent a malicious user from submitting a malformed encryption or an out-of-range value. Similarly, for operations that require a decision (e.g. ensure account balance ≥ withdrawal amount), the user can supply a ZK proof that this condition holds true on the plaintexts before the encrypted operation is executed. In this way, the chain doesn’t decrypt or see the values, but it gains confidence that the encrypted transactions follow the rules.
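
A sketch of the certified-input pattern, assuming an fhEVM-style API in which every encrypted input arrives with its ZKPoK and conditions are evaluated homomorphically (via select) rather than through plaintext branches:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol";

// Sketch of a confidential withdrawal (fhEVM-style API; names illustrative).
// `inputProof` is the ZK proof of plaintext knowledge for the encrypted input.
contract ConfidentialVault {
    mapping(address => euint64) private balances;

    function withdraw(einput encAmount, bytes calldata inputProof) external {
        // Conversion fails if the ciphertext is malformed or out of range.
        euint64 amount = TFHE.asEuint64(encAmount, inputProof);

        // No plaintext branch: evaluate the condition homomorphically, then
        // select either `amount` or 0, keeping execution data-oblivious.
        ebool canPay = TFHE.le(amount, balances[msg.sender]);
        euint64 paid = TFHE.select(canPay, amount, TFHE.asEuint64(0));

        balances[msg.sender] = TFHE.sub(balances[msg.sender], paid);
    }
}
```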

Another approach in FHE rollups is to perform off-chain validation with ZKPs. Fhenix (an L2 rollup using FHE) opts for an optimistic model where a separate network component called a Threshold Service Network can decrypt or verify encrypted results, and any incorrect computation can be challenged with a fraud-proof. In general, combining FHE + ZK or fraud proofs ensures that encrypted execution remains trustless. Validators either collectively decrypt only when authorized, or they verify proofs that each encrypted state transition was valid without needing to see plaintext.

Performance considerations: FHE operations are computationally heavy – many orders of magnitude slower than normal arithmetic. For example, a simple 64-bit addition on Ethereum costs ~3 gas, whereas an addition on an encrypted 64-bit integer (euint64) under Zama’s FHEVM costs roughly 188,000 gas. Even an 8-bit add can cost ~94k gas. This enormous overhead means a straightforward implementation on existing nodes would be impractically slow and costly. FHE-VM projects tackle this with optimized cryptographic libraries (like Zama’s TFHE-rs library for binary gate bootstrapping) and custom EVM modifications for performance. For instance, Cypher’s modified Geth client adds new opcodes and optimizes homomorphic instruction execution in C++/assembly to minimize overhead. Nevertheless, achieving usable throughput requires acceleration. Ongoing work includes using GPUs, FPGAs, and even specialized photonic chips to speed up FHE computations. Zama reports their FHE performance improved 100× since 2024 and is targeting thousands of TPS with GPU/FPGA acceleration. Dedicated FHE co-processor servers (such as Optalysys’s LightLocker Node) can plug into validator nodes to offload encrypted operations to hardware, supporting >100 encrypted ERC-20 transfers per second per node. As hardware and algorithms improve, the gap between FHE and plain computation will narrow, enabling private contracts to approach more practical speeds.
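
Taking the quoted gas figures at face value, the implied per-operation overhead works out to roughly:

```latex
% Encrypted 64-bit addition vs. plain ADD, using the gas costs cited above
\frac{188{,}000 \ \text{gas (euint64 add)}}{3 \ \text{gas (plain ADD)}} \approx 62{,}667\times
```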

Compatibility: A key goal of FHE-VM designs is to remain compatible with existing development workflows. Cypher’s and Zama’s fhEVM implementations allow developers to write contracts in Solidity with minimal changes – using a library to declare encrypted types and operations. The rest of the Ethereum toolchain (Remix, Hardhat, etc.) can still be used, as the underlying modifications are mostly at the client/node level. This lowers the barrier to entry: developers don’t need to be cryptography experts to write a confidential smart contract. For example, a simple addition of two numbers can be written as `euint32 c = a + b;` and the FHEVM will handle the encryption-specific details behind the scenes. The contracts can even interoperate with normal contracts – e.g. an encrypted contract could output a decrypted result to a standard contract if desired, allowing a mix of private and public parts in one ecosystem.

Current FHE-VM Projects: Several projects are pioneering this space. Zama (a Paris-based FHE startup) developed the core FHEVM concept and libraries (TFHE-rs and an fhevm-solidity library). They do not intend to launch their own chain, but rather provide infrastructure to others. Inco is an L1 blockchain (built on Cosmos SDK with Evmos) that integrated Zama’s FHEVM to create a modular confidential chain. Their testnets (named Gentry and Paillier) showcase encrypted ERC-20 transfers and other private DeFi primitives. Fhenix is an Ethereum Layer-2 optimistic rollup using FHE for privacy. It decided on an optimistic (fraud-proof) approach rather than a ZK-rollup due to the heavy cost of doing FHE and ZK together for every block. Fhenix uses the same TFHE-rs library (with some modifications) and introduces a Threshold Service Network for handling decryptions in a decentralized way. Other independent teams and startups are exploring MPC + FHE hybrids. Additionally, Cypher (by Z1 Labs) is building a Layer-3 network focused on AI and privacy, using an fhEVM with features like secret stores and federated learning support. The ecosystem is nascent but growing rapidly, fueled by significant funding – e.g. Zama became a “unicorn” with >$130M raised by 2025 to advance FHE tech.

In summary, an FHE-VM enables privacy-preserving smart contracts by executing all logic on encrypted data on-chain. This paradigm ensures maximum confidentiality – nothing sensitive is ever exposed in transactions or state – while leveraging the existing blockchain consensus for integrity. The cost is increased computational burden on validators and complexity in key management and proof integration. Next, we explore an alternative paradigm that offloads compute entirely off-chain and only uses the chain for verification: the zero-knowledge coprocessor.

Zero-Knowledge Coprocessors (ZK-Coprocessors)

A ZK-coprocessor is a new blockchain architecture pattern where expensive computations are performed off-chain, and a succinct zero-knowledge proof of their correctness is verified on-chain. This allows smart contracts to harness far greater computational power and data than on-chain execution would allow, without sacrificing trustlessness. The term coprocessor is used by analogy to hardware coprocessors (like a math co-processor or GPU) that handle specialized tasks for a CPU. Here, the blockchain’s “CPU” (the native VM like EVM) delegates certain tasks to a zero-knowledge proof system which acts as a cryptographic coprocessor. The ZK-coprocessor returns a result and a proof that the result was computed correctly, which the on-chain contract can verify and then use.

Architecture and Workflow

In a typical setup, a dApp developer identifies parts of their application logic that are too expensive or complex for on-chain execution (e.g. large computations over historical data, heavy algorithms, ML model inference, etc.). They implement those parts as an off-chain program (in a high-level language or circuit DSL) that can produce a zero-knowledge proof of its execution. The on-chain component is a verifier smart contract that checks proofs and makes the results available to the rest of the system. The flow can be summarized as:

  1. Request – The on-chain contract triggers a request for a certain computation to be done off-chain. This could be initiated by a user transaction or by one contract calling into the ZK-coprocessor’s interface. For example, a DeFi contract might call “proveInterestRate(currentState)” or a user calls “queryHistoricalData(query)”.
  2. Off-Chain Execution & Proving – An off-chain service (which could be a decentralized network of provers or a trusted service, depending on the design) picks up the request. It gathers any required data (on-chain state, off-chain inputs, etc.) and executes the computation in a special ZK Virtual Machine (ZKVM) or circuit. During execution, a proof trace is generated. At the end, the service produces a succinct proof (e.g. a SNARK or STARK) attesting that “Computing function F on input X yields output Y” and optionally attesting to data integrity (more on this below).
  3. On-Chain Verification – The proof and result are returned to the blockchain (often via a callback function). The verifier contract checks the proof’s validity using efficient cryptographic verification (pairing checks, etc.). If valid, the contract can now trust the output Y as correct. The result can be stored in state, emitted as an event, or fed into further contract logic. If the proof is invalid or not provided within some time, the request can be considered failed (and potentially some fallback or timeout logic triggers).

Figure 1: Architecture of a ZK Coprocessor (RISC Zero Bonsai example). Off-chain, a program runs on a ZKVM with inputs from the smart contract call. A proof of execution is returned on-chain via a relay contract, which invokes a callback with the verified results.
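
The Solidity sketch below mirrors this request/prove/callback flow with a hypothetical coprocessor interface (IZkCoprocessor, onResult, and PROGRAM_ID are illustrative names, not any real SDK’s API):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Hypothetical interfaces sketching the request/prove/verify flow described
// above; real coprocessor SDKs (Axiom, the Bonsai relay, etc.) differ in detail.
interface IZkCoprocessor {
    function request(bytes32 programId, bytes calldata input) external returns (uint256 id);
}

contract RateConsumer {
    IZkCoprocessor public immutable coprocessor;
    bytes32 public constant PROGRAM_ID = keccak256("interest-rate-model-v1");
    uint256 public latestRate;

    constructor(IZkCoprocessor c) { coprocessor = c; }

    // Step 1: ask the coprocessor to run the heavy computation off-chain.
    function requestRate(bytes calldata marketState) external {
        coprocessor.request(PROGRAM_ID, marketState);
    }

    // Step 3: the relay calls back with the result once the proof has been
    // verified on-chain (proof verification itself lives in the relay/verifier).
    function onResult(uint256 /*id*/, bytes calldata output) external {
        require(msg.sender == address(coprocessor), "untrusted caller");
        latestRate = abi.decode(output, (uint256));
    }
}
```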

Critically, the on-chain gas cost for verification is constant (or grows very slowly) regardless of how complex the off-chain computation was. Verifying a succinct proof might cost on the order of a few hundred thousand gas (a fraction of an Ethereum block), but that proof could represent millions of computational steps done off-chain. As one developer quipped, “Want to prove one digital signature? ~$15. Want to prove one million signatures? Also ~$15.” This scalability is a huge win: dApps can offer complex functionalities (big data analytics, elaborate financial models, etc.) without clogging the blockchain.

The main components of a ZK-coprocessor system are:

  • Proof Generation Environment: This can be a general-purpose ZKVM (able to run arbitrary programs) or custom circuits tailored to specific computations. Approaches vary:

    • Some projects use handcrafted circuits for each supported query or function (maximizing efficiency for that function).
    • Others provide a Domain-Specific Language (DSL) or an Embedded DSL that developers use to write their off-chain logic, which is then compiled into circuits (balancing ease-of-use and performance).
    • The most flexible approach is a zkVM: a virtual machine (often based on RISC architectures) where programs can be written in standard languages (Rust, C, etc.) and automatically proven. This sacrifices performance (simulating a CPU in a circuit adds overhead) for maximum developer experience.
  • Data Access and Integrity: A unique challenge is feeding the off-chain computation with the correct data, especially if that data resides on the blockchain (past blocks, contract states, etc.). A naive solution is to have the prover read from an archive node and trust it – but that introduces trust assumptions. ZK-coprocessors instead typically prove that any on-chain data used was indeed authentic by linking to Merkle proofs or state commitments. For example, the query program might take a block number and a Merkle proof of a storage slot or transaction, and the circuit will verify that proof against a known block header hash. Three patterns exist:

    1. Inline Data: Put the needed data on-chain (as input to the verifier) so it can be directly checked. This is very costly for large data and undermines the whole point.
    2. Trust an Oracle: Have an oracle service feed the data to the proof and vouch for it. This is simpler but reintroduces trust in a third party.
    3. Prove Data Inclusion via ZK: Incorporate proofs of data inclusion in the chain’s history within the zero-knowledge circuit itself. This leverages the fact that each Ethereum block header commits to the entire prior state (via state root) and transaction history. By verifying Merkle Patricia proofs of the data within the circuit, the output proof assures the contract that “this computation used genuine blockchain data from block N” with no additional trust needed.

    The third approach is the most trustless and is used by advanced ZK-coprocessors like Axiom and Xpansion (it does increase proving cost, but is preferable for security). For instance, Axiom’s system models Ethereum’s block structure, state trie, and transaction trie inside its circuits, so it can prove statements like “the account X had balance Y at block N” or “a transaction with certain properties occurred in block N”. It leverages the fact that given a recent trusted block hash, one can recursively prove inclusion of historical data without trusting any external party (a simplified Merkle-inclusion sketch follows this list).

  • Verifier Contract: This on-chain contract contains the verifying key and logic to accept or reject proofs. For SNARKs like Groth16 or PLONK, the verifier might do a few elliptic curve pairings; for STARKs, it might do some hash computations. Performance optimizations like aggregation and recursion can minimize on-chain load. For example, RISC Zero’s Bonsai uses a STARK-to-SNARK wrapper: it runs a STARK-based VM off-chain for speed, but then generates a small SNARK proof attesting to the STARK’s validity. This shrinks proof size from hundreds of kilobytes to a few hundred bytes, making on-chain verification feasible and cheap. The Solidity verifier then just checks the SNARK (which is a constant-time operation).
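
Returning to the data-inclusion patterns above: for intuition, here is a simplified binary-Merkle inclusion check written in Solidity. Real coprocessors verify Ethereum’s hexary Merkle Patricia proofs inside the ZK circuit itself rather than on-chain, which is precisely what keeps on-chain verification cost constant:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Simplified binary-Merkle inclusion check, for intuition only.
library MerkleInclusion {
    function verify(
        bytes32 leaf,
        bytes32 root,
        bytes32[] calldata path,
        uint256 index
    ) internal pure returns (bool) {
        bytes32 node = leaf;
        for (uint256 i = 0; i < path.length; i++) {
            // Hash order depends on whether the node is a left or right child.
            node = (index & 1) == 0
                ? keccak256(abi.encodePacked(node, path[i]))
                : keccak256(abi.encodePacked(path[i], node));
            index >>= 1;
        }
        return node == root;
    }
}
```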

In terms of deployment, ZK-coprocessors can function as layer-2 like networks or as pure off-chain services. Some, like Axiom, started as a specialized service for Ethereum (with Paradigm’s backing) where developers submit queries to Axiom’s prover network and get proofs on-chain. Axiom’s tagline was providing Ethereum contracts “trustless access to all on-chain data and arbitrary expressive compute over it.” It effectively acts as a query oracle where the answers are verified by ZKPs instead of trust. Others, like RISC Zero’s Bonsai, offer a more open platform: any developer can upload a program (compiled to a RISC-V compatible ZKVM) and use Bonsai’s proving service via a relay contract. The relay pattern, as illustrated in Figure 1, involves a contract that mediates requests and responses: the dApp contract calls the relay to ask for a proof, the off-chain service listens for this (e.g. via event or direct call), computes the proof, and then the relay invokes a callback function on the dApp contract with the result and proof. This asynchronous model is necessary because proving may take from seconds to minutes depending on complexity. It introduces a latency (and a liveness assumption that the prover will respond), whereas FHE-VM computations happen synchronously within a block. Designing the application to handle this async workflow (possibly akin to Oracle responses) is part of using a ZK-coprocessor.

Notable ZK-Coprocessor Projects

  • Axiom: Axiom is a ZK coprocessor tailored for Ethereum, focused originally on proving historical on-chain data queries. It uses the Halo2 proving framework (a Plonk-ish SNARK) to create proofs that incorporate Ethereum’s cryptographic structures. In Axiom’s system, a developer can query things like “what was the state of contract X at block N?” or perform a computation over all transactions in a range. Under the hood, Axiom’s circuits had to implement Ethereum’s state/trie logic, even performing elliptic curve operations and SNARK verification inside the circuit to support recursion. Trail of Bits, in an audit, noted the complexity of Axiom’s Halo2 circuits modeling entire blocks and states. After auditing, Axiom generalized their tech into an OpenVM, allowing arbitrary Rust code to be proved with the same Halo2-based infrastructure. (This mirrors the trend of moving from domain-specific circuits to a more general ZKVM approach.) The Axiom team demonstrated ZK queries that Ethereum natively cannot do, enabling stateless access to any historical data with cryptographic integrity. They have also emphasized security, catching and fixing under-constrained circuit bugs and ensuring soundness. While Axiom’s initial product was shut down during their pivot, their approach remains a landmark in ZK coprocessors.

  • RISC Zero Bonsai: RISC Zero is a ZKVM based on the RISC-V architecture. Their zkVM can execute arbitrary programs (written in Rust, C++ and other languages compiled to RISC-V) and produce a STARK proof of execution. Bonsai is RISC Zero’s cloud service that provides this proving on demand, acting as a coprocessor for smart contracts. To use it, a developer writes a program (say a function that performs complex math or verifies an off-chain API response), uploads it to the Bonsai service, and deploys a corresponding verifier contract. When the contract needs that computation, it calls the Bonsai relay which triggers the proof generation and returns the result via callback. One example application demonstrated was off-chain governance computation: RISC Zero showed a DAO using Bonsai to tally votes and compute complex voting metrics off-chain, then post a proof so that the on-chain Governor contract could trust the outcome with minimal gas cost. RISC Zero’s technology emphasizes that developers can use familiar programming paradigms – for instance, writing a Rust function to compute something – and the heavy lifting of circuit creation is handled by the zkVM. However, proofs can be large, so as noted earlier they implemented a SNARK compression for on-chain verification. In August 2023 they successfully verified RISC Zero proofs on Ethereum’s Sepolia testnet, costing on the order of 300k gas per proof. This opens the door for Ethereum dApps to use Bonsai today as a scaling and privacy solution. (Bonsai is still in alpha, not production-ready, and uses a temporary SNARK setup without a ceremony.)

  • Others: There are numerous other players and research initiatives. Expansion/Xpansion (as mentioned in a blog) uses an embedded DSL approach, where developers can write queries over on-chain data with a specialized language, and it handles proof generation internally. StarkWare’s Cairo and Polygon’s zkEVM are more general ZK-rollup VMs, but their tech could be repurposed for coprocessor-like use by verifying proofs within L1 contracts. We also see projects in the ZKML (ZK Machine Learning) domain, which effectively act as coprocessors to verify ML model inference or training results on-chain. For example, a zkML setup can prove that “a neural network inference on private inputs produced classification X” without revealing the inputs or doing the computation on-chain. These are special cases of the coprocessor concept applied to AI.

Trust assumptions: ZK-coprocessors rely on the soundness of the cryptographic proofs. If the proof system is secure (and any trusted setup is done honestly), then an accepted proof guarantees the computation was correct. No additional trust in the prover is needed – even a malicious prover cannot convince the verifier of a false statement. However, there is a liveness assumption: someone must actually perform the off-chain computation and produce the proof. In practice this might be a decentralized network (with incentives or fees to do the work) or a single service operator. If no one provides the proof, the on-chain request might remain unresolved. Another subtle trust aspect is data availability for off-chain inputs that aren’t on the blockchain. If the computation depends on some private or external data, the verifier can’t know if that data was honestly provided unless additional measures (like data commitments or oracle signatures) are used. But for purely on-chain data computations, the mechanisms described ensure trustlessness equivalent to the chain itself (Axiom argued their proofs offer “security cryptographically equivalent to Ethereum” for historical queries).

Privacy: Zero-knowledge proofs also inherently support privacy – the prover can keep inputs hidden while proving statements about them. In a coprocessor context, this means a proof can allow a contract to use a result that was derived from private data. For example, a proof might show “user’s credit score > 700, so approve loan” without revealing the actual credit score or raw data. Axiom’s use-case was more about publicly known data (blockchain history), so privacy wasn’t the focus there. But RISC Zero’s zkVM could be used to prove assertions about secret data provided by a user: the data stays off-chain and only the needed outcome goes on-chain. It’s worth noting that unlike FHE, a ZK proof doesn’t usually provide ongoing confidentiality of state – it’s a one-time proof. If a workflow needs maintaining a secret state across transactions, one might build it by having the contract store a commitment to the state and each proof showing a valid state transition from old commitment to new, with secrets hidden. This is essentially how zk-rollups for private transactions (like Aztec or Zcash) work. So ZK coprocessors can facilitate fully private state machines, but the implementation is nontrivial; often they are used for one-off computations where either the input or the output (or both) can be private as needed.
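
A sketch of that commitment-based pattern, using a hypothetical SNARK verifier interface (IVerifier here is illustrative; real systems differ in proof formats and public-input encoding):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24;

// Sketch of a private state machine via commitments: the chain stores only a
// commitment, and each update carries a proof that a valid (hidden) transition
// maps the old commitment to the new one. `IVerifier` is a hypothetical
// interface; real systems (Aztec, zkVM verifiers) differ in detail.
interface IVerifier {
    function verify(bytes calldata proof, bytes32[] calldata publicInputs)
        external view returns (bool);
}

contract PrivateStateMachine {
    IVerifier public immutable verifier;
    bytes32 public stateCommitment; // commitment to the hidden state

    constructor(IVerifier v, bytes32 genesis) {
        verifier = v;
        stateCommitment = genesis;
    }

    function transition(bytes32 newCommitment, bytes calldata proof) external {
        bytes32[] memory inputs = new bytes32[](2);
        inputs[0] = stateCommitment; // old state (public input to the proof)
        inputs[1] = newCommitment;   // new state
        require(verifier.verify(proof, inputs), "invalid transition proof");
        stateCommitment = newCommitment; // secrets never touch the chain
    }
}
```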

Developer experience: Using a ZK-coprocessor typically requires learning new tools. Writing custom circuits (option (1) above) is quite complex and usually only done for narrow purposes. Higher-level options like DSLs or zkVMs make life easier but still add overhead: the dev must write and deploy off-chain code and manage the interaction. In contrast to FHE-VM where the encryption is mostly handled behind the scenes and the developer writes normal smart contract code, here the developer needs to partition their logic and possibly write in a different language (Rust, etc.) for the off-chain part. However, initiatives like Noir, Leo, Circom DSLs or RISC Zero’s approach are rapidly improving accessibility. For instance, RISC Zero provides templates and Foundry integration such that a developer can simulate their off-chain code locally (for correctness) and then seamlessly hook it into solidity tests via the Bonsai callback. Over time, we can expect development frameworks that abstract away whether a piece of logic is executed via ZK proof or on-chain – the compiler or tooling might decide based on cost.

FHE-VM vs ZK-Coprocessor: Comparison

Both FHE-VMs and ZK-coprocessors enable a form of “compute on private data with on-chain assurance”, but they differ fundamentally in architecture. The table below summarizes key differences:

| Aspect | FHE-VM (Encrypted On-Chain Execution) | ZK-Coprocessor (Off-Chain Proving) |
|---|---|---|
| Where computation happens | Directly on-chain: all nodes execute homomorphic operations on ciphertexts. | Off-chain: a prover or network executes the program; only a proof is verified on-chain. |
| Data confidentiality | Full encryption: data remains encrypted at all times on-chain; validators never see plaintext. Only holders of decryption keys can decrypt outputs. | Zero-knowledge: the prover’s private inputs are never revealed on-chain, and the proof reveals no secrets beyond what appears in the public outputs. However, any data that must affect on-chain state has to be encoded in the output or a commitment; secrets remain off-chain by default. |
| Trust model | Trust in consensus execution and cryptography: if a majority of validators follow the protocol, encrypted execution is deterministic and correct; no external trust is needed for computation correctness (all nodes recompute it). Privacy rests on the security of the FHE scheme (typically lattice hardness). Some designs also assume no sufficient coalition of validators colludes to misuse threshold keys. | Trust in the soundness of the proof system (SNARK/STARK): if the proof verifies, the result is correct with cryptographic certainty – off-chain provers cannot cheat the math. There is a liveness assumption that provers actually do the work. If a trusted setup is used (e.g. a SNARK SRS), it must have been generated honestly, or a transparent/no-setup system must be used. |
| On-chain cost and scalability | High per-transaction cost: homomorphic ops are computationally very expensive, and every node must perform them. Gas costs are high (e.g. 100k+ gas for a single 8-bit addition), and complex contracts are limited by what every validator can compute in a block. Throughput is much lower than for normal smart contracts unless specialized hardware is employed; faster cryptography and hardware acceleration help, but each operation fundamentally adds to the chain’s workload. | Low verification cost: verifying a succinct proof is efficient and constant-size, so on-chain gas is modest (hundreds of thousands of gas regardless of computation size). This decouples computational complexity from on-chain resource limits – large computations incur no extra on-chain cost, so on-chain load scales well. Off-chain, proving time can be significant (minutes or more for huge tasks) and may require powerful machines, but this doesn’t directly slow the blockchain; overall throughput can stay high as long as proofs are generated in time (potentially via parallel prover networks). |
| Latency | Results are available immediately in the same transaction/block, since computation happens during execution – synchronous, with no extra round-trips. However, slow FHE ops can lengthen block processing and increase chain latency. | Inherently asynchronous: typically one transaction requests the computation and a later transaction (or callback) delivers the proof/result. This introduces delay (seconds to hours, depending on proof complexity and proving hardware). Not suited to instant finality within a single transaction – more of an async job model. |
| Privacy guarantees | Strong: everything (inputs, outputs, intermediate state) can remain encrypted on-chain, and long-lived encrypted state can be updated across many transactions without ever being revealed. Only authorized decryption actions (if any) reveal outputs, controlled via keys/ACLs. Side channels such as gas usage or event logs must be managed so they don’t leak patterns (fhEVM designs strive for data-oblivious execution with constant gas per operation). | Selective: the proof reveals whatever is in the public outputs or needed for verification (e.g. a commitment to the initial state). Designers can ensure only the intended result is revealed, with all other inputs hidden in zero knowledge. But unlike FHE, the blockchain typically doesn’t store the hidden state – privacy comes from keeping data off-chain entirely. If persistent private state is needed, the contract stores a cryptographic commitment to it (so each update still reveals a fresh commitment). Privacy is limited only by what you choose to prove, e.g. proving a threshold was met without revealing exact values. |
| Integrity enforcement | All validators recompute the next state homomorphically, so a wrong ciphertext result from a malicious actor produces a mismatch – consensus fails unless everyone obtains the same result. Integrity is thus enforced by redundant execution (like a normal blockchain, just on encrypted data). Additional ZK proofs are often layered on to enforce business rules (e.g. that a user didn’t violate a constraint), because validators can’t directly check plaintext conditions. | Integrity is enforced by the verifier contract checking the ZK proof: as long as it verifies, the result is guaranteed consistent with some valid execution of the off-chain program. No honest-majority assumption is needed for correctness – the contract code itself suffices as the verifier, rejecting any false or missing proof (much as it would reject an invalid signature). If the prover aborts or stalls, the contract needs fallback logic (or users must retry), but it will never accept an incorrect result. |
| Developer experience | Pros: largely familiar smart contract languages (Solidity, etc.) with extensions; confidentiality is handled by the platform, so developers mainly decide what to encrypt and who holds keys; encrypted and normal contracts compose, preserving DeFi composability (just with encrypted variables). Cons: FHE limitations must be understood – no direct conditional branching on secret data without special handling, limited circuit depth (though TFHE bootstrapping allows arbitrarily long computation at a time cost); debugging encrypted logic is tricky because runtime values can’t be inspected without the key; key management and permissioning add design complexity. | Pros: potentially any programming language for the off-chain part (especially with a zkVM); existing code and libraries can be reused off-chain (with caveats for ZK-compatibility); with a general zkVM no custom cryptography is needed – developers write normal code and get a proof; heavy computation can use libraries (e.g. machine learning code) that would never run on-chain. Cons: developers must operate off-chain infrastructure or use a proving service; asynchronous workflows need extra design work (e.g. storing pending state, awaiting callbacks); efficient circuits or zkVM code impose new constraints (no floating point – use fixed-point or special primitives; avoid heavy branching that blows up proving time; optimize constraint counts); and proof failures, timeouts, etc. must be handled – concerns that don’t exist in regular Solidity. |
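The latency and integrity rows above describe an asynchronous request/callback pattern, sketched below in Python. The “proof” here is simulated by a hash binding (program, inputs, output) together – a stand-in for a real SNARK/STARK verifier – and all names (`CoprocessorConsumer`, `toy_prove`) are illustrative, not any project’s actual API.

```python
import hashlib

def toy_prove(program_id: str, inputs: bytes, output: bytes) -> bytes:
    """Stand-in for a real SNARK/STARK: binds (program, inputs, output)."""
    return hashlib.sha256(program_id.encode() + inputs + output).digest()

class CoprocessorConsumer:
    """Simulates a contract that requests off-chain computation and only
    accepts a result accompanied by a verifying proof."""
    def __init__(self, program_id: str):
        self.program_id = program_id
        self.pending = {}    # request_id -> inputs
        self.results = {}    # request_id -> accepted output

    def request(self, request_id: int, inputs: bytes):
        self.pending[request_id] = inputs          # tx 1: record the job

    def fulfill(self, request_id: int, output: bytes, proof: bytes):
        inputs = self.pending.get(request_id)
        assert inputs is not None, "unknown request"
        expected = toy_prove(self.program_id, inputs, output)
        assert proof == expected, "invalid proof: result rejected"
        self.results[request_id] = output          # tx 2 (callback): apply result
        del self.pending[request_id]

consumer = CoprocessorConsumer("sum_of_squares_v1")
consumer.request(1, b"\x03\x04")                   # user asks for 3^2 + 4^2
out = (3**2 + 4**2).to_bytes(32, "big")            # prover computes off-chain
consumer.fulfill(1, out, toy_prove("sum_of_squares_v1", b"\x03\x04", out))
print(int.from_bytes(consumer.results[1], "big"))  # -> 25
```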

Both approaches are actively being improved, and we even see convergence: as noted, ZKPs are used inside FHE-VMs for certain checks, and conversely some researchers propose using FHE to keep prover inputs private in ZK settings (so a cloud prover never sees your secret data). It’s conceivable that future systems will combine them – e.g. performing FHE off-chain and then proving on-chain that it was done correctly, or running FHE on-chain while ZK-proving to light clients that the encrypted operations were executed faithfully. Each technique has its strengths: FHE-VMs offer continuous privacy and real-time interaction at the cost of heavy computation, whereas ZK-coprocessors offer scalability and flexibility at the cost of latency and complexity.

Use Cases and Implications

The advent of programmable privacy unlocks a wealth of new blockchain applications across industries. Below we explore how FHE-VMs and ZK-coprocessors (or hybrids) can empower various domains by enabling privacy-preserving smart contracts and a secure data economy.

Confidential DeFi and Finance

In decentralized finance, privacy can mitigate front-running, protect trading strategies, and satisfy compliance without sacrificing transparency where needed. Confidential DeFi could allow users to interact with protocols without revealing their positions to the world.

  • Private Transactions and Hidden Balances: Using FHE, one can implement confidential token transfers (encrypted ERC-20 balances and transactions) or shielded pools on a blockchain L1. No observer can see how much of a token you hold or transferred, eliminating the risk of targeted attacks based on holdings. ZK proofs can ensure balances stay in sync and no double-spending occurs (similar to Zcash but on smart contract platforms). An example is a confidential AMM (Automated Market Maker) where pool reserves and trades are encrypted on-chain. Arbitrageurs or front-runners cannot exploit the pool because they can’t observe the price slippage until after the trade is settled, reducing MEV. Only after some delay or via an access-controlled mechanism might some data be revealed for audit.
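To make the encrypted-balance idea concrete, here is a toy sketch using the Paillier cryptosystem, whose additive homomorphism lets balances stored as ciphertexts be updated without decryption. The parameters are deliberately tiny and insecure, and a real fhEVM-style system would additionally require a ZK proof that the sender’s balance covers the amount (this toy enforces no such range check).

```python
import math
import secrets

# Toy Paillier keypair. The primes are laughably small -- real deployments
# use ~2048-bit moduli -- but the additive homomorphism works identically.
p, q = 101, 113
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                         # valid because we fix g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 2) + 2     # random r coprime to n
        if math.gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def he_add(c1: int, c2: int) -> int:
    """Multiplying ciphertexts adds the underlying plaintexts (mod n)."""
    return (c1 * c2) % n2

# Encrypted ERC-20-style transfer: balances live on-chain only as ciphertexts.
alice, bob = encrypt(900), encrypt(100)
amount = 250
alice = he_add(alice, encrypt(n - amount))   # subtract = add (-amount mod n)
bob = he_add(bob, encrypt(amount))
assert decrypt(alice) == 650 and decrypt(bob) == 350
```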

  • MEV-Resistant Auctions and Trading: Miners and bots exploit transaction transparency to front-run trades. With encryption, you could have an encrypted mempool or batch auctions where orders are submitted in ciphertext. Only after the auction clears do trades decrypt. This concept, sometimes called Fair Order Flow, can be achieved with threshold decryption (multiple validators collectively decrypt the batch) or by proving auction outcomes via ZK without revealing individual bids. For instance, a ZK-coprocessor could take a batch of sealed bids off-chain, compute the auction clearing price, and output just that price and winners with proofs. This preserves fairness and privacy of losing bids.
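A simplified commit-reveal batch auction in Python illustrates the flow: bids land on-chain as commitments (so there is nothing to front-run), and only after the batch closes are they opened and cleared at a uniform price. In the ZK variant described above, the reveal step would be replaced by a prover that computes the clearing price and proves consistency with the commitments, keeping losing bids sealed.

```python
import hashlib
import secrets

def commit_bid(price: int, qty: int, salt: bytes) -> bytes:
    return hashlib.sha256(f"{price}:{qty}".encode() + salt).digest()

# Phase 1: bids arrive on-chain as commitments only -- nothing to front-run.
bids = []            # commitments stored by the "contract"
secrets_held = []    # (price, qty, salt) kept off-chain by each bidder
for price, qty in [(101, 5), (99, 3), (105, 2), (97, 8)]:
    salt = secrets.token_bytes(16)
    bids.append(commit_bid(price, qty, salt))
    secrets_held.append((price, qty, salt))

# Phase 2: after bidding closes, bids are opened (in a ZK variant, a prover
# would compute the clearing price and prove it matches the commitments).
opened = []
for c, (price, qty, salt) in zip(bids, secrets_held):
    assert commit_bid(price, qty, salt) == c, "bid does not match commitment"
    opened.append((price, qty))

# Uniform clearing price for 10 units of supply: walk bids from highest price.
supply = 10
opened.sort(reverse=True)
filled, clearing_price = 0, None
for price, qty in opened:
    if filled >= supply:
        break
    filled += min(qty, supply - filled)
    clearing_price = price   # price of the last bid needed to fill supply

print(f"clearing price = {clearing_price}, filled = {filled}")   # 99, 10
```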

  • Confidential Lending and Derivatives: In DeFi lending, users might not want to reveal the size of their loans or collateral (it can affect market sentiment or invite exploitation). An FHE-VM can maintain an encrypted loan book where each loan’s details are encrypted. Smart contract logic can still enforce rules like liquidation conditions by operating on encrypted health factors. If a loan’s collateral ratio falls below threshold, the contract (with help of ZK proofs) can flag it for liquidation without ever exposing exact values – it might just produce a yes/no flag in plaintext. Similarly, secret derivatives or options positions could be managed on-chain, with only aggregated risk metrics revealed. This could prevent copy trading and protect proprietary strategies, encouraging more institutional participation.

  • Compliant Privacy: Not all financial contexts want total anonymity; sometimes selective disclosure is needed for regulation. With these tools, we can achieve regulated privacy: for example, trades are private to the public, but a regulated exchange can decrypt or receive proofs about certain properties. One could prove via ZK that “this trade did not involve a blacklisted address and both parties are KYC-verified” without revealing identities to the chain. This balance could satisfy Anti-Money Laundering (AML) rules while still keeping user identities and positions confidential to everyone else. FHE could allow an on-chain compliance officer contract to scan encrypted transactions for risk signals (with a decryption key accessible only under court order, for instance).
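The core of such a compliance statement is often a Merkle membership check: the chain stores only the root of a KYC whitelist, and the ZK circuit proves that a hidden leaf is included under that root. The sketch below shows the membership check itself in plain Python; in a real deployment this exact check runs inside the circuit, so the leaf (the identity) never becomes public.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))   # (sibling, are-we-right?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(leaf)
    for sibling, we_are_right in proof:
        node = h(sibling + node) if we_are_right else h(node + sibling)
    return node == root

kyc_list = [b"alice", b"bob", b"carol", b"dave"]
root = merkle_root(kyc_list)          # the only thing published on-chain
proof = merkle_proof(kyc_list, 1)     # bob proves membership privately
assert verify(b"bob", proof, root)
```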

Digital Identity and Personal Data

Identity systems stand to gain significantly from on-chain privacy tech. Currently, putting personal credentials or attributes on a public ledger is impractical due to privacy laws and user reluctance. With FHE and ZK, self-sovereign identity can be realized in a privacy-preserving way:

  • Zero-Knowledge Credentials: Using ZK proofs (already common in some identity projects), a user can prove statements like “I am over 18”, “I have a valid driver’s license”, or “I earn above $50k (for credit scoring)” without revealing any other personal info. ZK-coprocessors can enhance this by handling more complex checks off-chain, e.g. proving a user’s credit score is above a threshold by querying a private credit database in an Axiom-like fashion, outputting only a yes/no to the blockchain.

  • Confidential KYC on DeFi: Imagine a DeFi protocol that by law must ensure users are KYC’d. With FHE-VM, a user’s credentials can be stored encrypted on-chain (or referenced via DID), and a smart contract can perform an FHE computation to verify the KYC info meets requirements. For instance, a contract could homomorphically check that name and SSN in an encrypted user profile match a sanctioned users list (also encrypted), or that the user’s country is not restricted. The contract would only get an encrypted “pass/fail” which can be threshold-decrypted by network validators to a boolean flag. Only the fact that the user is allowed or not is revealed, preserving PII confidentiality and aligning with GDPR principles. This selective disclosure ensures compliance and privacy.

  • Attribute-Based Access and Selective Disclosure: Users could hold a bunch of verifiable credentials (age, citizenship, skills, etc.) as encrypted attributes. They can authorize certain dApps to run computations on them without disclosing everything. For example, a decentralized recruitment DApp could filter candidates by performing searches on encrypted resumes (using FHE) – e.g. count years of experience, check for a certification – and only if a match is found, contact the candidate off-chain. The candidate’s private details remain encrypted unless they choose to reveal. ZK proofs can also let users selectively prove they possess a combination of attributes (e.g. over 21 and within a certain ZIP code) without revealing the actual values.

  • Multi-Party Identity Verification: Sometimes a user’s identity needs to be vetted by multiple parties (say, background check by company A, credit check by company B). With homomorphic and ZK tools, each verifier could contribute an encrypted score or approval, and a smart contract can aggregate these to a final decision without exposing individual contributions. For instance, three agencies provide encrypted “pass/fail” bits, and the contract outputs an approval if all three are passes – the user or relying party only sees the final outcome, not which specific agency might have failed them, preserving privacy of the user’s record at each agency. This can reduce bias and stigma associated with, say, one failed check revealing a specific issue.

Healthcare and Sensitive Data Sharing

Healthcare data is highly sensitive and regulated, yet combining data from multiple sources can unlock huge value (for research, insurance, personalized medicine). Blockchain could provide a trust layer for data exchange if privacy is solved. Confidential smart contracts could enable new health data ecosystems:

  • Secure Medical Data Exchange: Patients could store references to their medical records on-chain in encrypted form. An FHE-enabled contract could allow a research institution to run analytics on a cohort of patient data without decrypting it. For example, a contract could compute the average efficacy of a drug across encrypted patient outcomes. Only aggregate statistical results come out decrypted (and perhaps only if a minimum number of patients is included, to prevent re-identification). Patients could receive micropayments for contributing their encrypted data to research, knowing that their privacy is preserved because even the blockchain and researchers only see ciphertext or aggregate proofs. This fosters a data marketplace for healthcare that respects privacy.

  • Privacy-Preserving Insurance Claims: Health insurance claims processing could be automated via smart contracts that verify conditions on medical data without exposing the data to the insurer. A claim could include an encrypted diagnosis code and encrypted treatment cost; the contract, using FHE, checks policy rules (e.g. coverage, deductible) on that encrypted data. It could output an approval and payment amount without ever revealing the actual diagnosis to the insurer’s blockchain (only the patient and doctor had the key). ZK proofs might be used to show that the patient’s data came from a certified hospital’s records (using something like Axiom to verify a hospital’s signature or record inclusion) without revealing the record itself. This ensures patient privacy while preventing fraud.

  • Genomic and Personal Data Computation: Genomic data is extremely sensitive (it’s literally one’s DNA blueprint). However, analyzing genomes can provide valuable health insights. Companies could use FHE-VM to perform computations on encrypted genomes uploaded by users. For instance, a smart contract could run a gene-environment risk model on encrypted genomic data and encrypted environmental data (from wearables perhaps), outputting a risk score that only the user can decrypt. The logic (maybe a polygenic risk score algorithm) is coded in the contract and runs homomorphically, so the genomic data never appears in plain. This way, users get insights without giving companies raw DNA data – mitigating both privacy and data ownership concerns.

  • Epidemiology and Public Health: During situations like pandemics, sharing data is vital for modeling disease spread, but privacy laws can hinder data sharing. ZK coprocessors could allow public health authorities to submit queries like “How many people in region X tested positive in last 24h?” to a network of hospitals’ data via proofs. Each hospital keeps patient test records off-chain but can prove to the authority’s contract the count of positives without revealing who. Similarly, contact tracing could be done by matching encrypted location trails: contracts can compute intersections of encrypted location histories of patients to identify hotspots, outputting only the hotspot locations (and perhaps an encrypted list of affected IDs that only health dept can decrypt). The raw location trails of individuals remain private.

Data Marketplaces and Collaboration

The ability to compute on data without revealing it opens new business models around data sharing. Entities can collaborate on computations knowing their proprietary data will not be exposed:

  • Secure Data Marketplaces: Sellers can make data available in encrypted form on a blockchain marketplace. Buyers can pay to run specific analytics or machine learning models on the encrypted dataset via a smart contract, getting either the trained model or aggregated results. The seller’s raw data is never revealed to the buyer or the public – the buyer might only receive a model (which still might leak some info in weights, but techniques like differential privacy or controlling output granularity can mitigate this). ZK proofs can ensure the buyer that the computation was done correctly over the promised dataset (e.g. the seller can’t cheat by running the model on dummy data because the proof ties it to the committed encrypted dataset). This scenario encourages data sharing: for instance, a company could monetize user behavior data by allowing approved algorithms to run on it under encryption, without giving away the data itself.

  • Federated Learning & Decentralized AI: In decentralized machine learning, multiple parties (e.g. different companies or devices) want to jointly train a model on their combined data without sharing data with each other. FHE-VMs excel here: they can enable federated learning where each party’s model updates are homomorphically aggregated by a contract. Because the updates are encrypted, no participant learns others’ contributions. The contract could even perform parts of the training loop (like gradient descent steps) on-chain under encryption, producing an updated model that only authorized parties can decrypt. ZK can complement this by proving that each party’s update was computed following the training algorithm (preventing a malicious participant from poisoning the model). This means a global model can be trained with full auditability on-chain, yet the training data of each contributor remains private. Use cases include jointly training fraud detection models across banks or improving AI assistants using data from many users without centralizing the raw data.
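The aggregation step at the heart of this is federated averaging. The sketch below shows it on plaintext vectors for brevity; under FHE, each element would be a ciphertext, the sums would be computed ciphertext-by-ciphertext, and only the aggregate would ever be decrypted.

```python
# Three hospitals hold local model updates (gradients) they will not share in
# the clear. Here the aggregation runs on plaintext lists for brevity; under
# FHE each element would be a ciphertext and only the mean would be decrypted.
local_updates = [
    [0.10, -0.20, 0.05],   # hospital A
    [0.12, -0.18, 0.07],   # hospital B
    [0.08, -0.22, 0.03],   # hospital C
]

def fedavg(updates):
    """Federated averaging: element-wise mean of the participants' updates."""
    k = len(updates)
    return [sum(col) / k for col in zip(*updates)]

global_update = fedavg(local_updates)
print(global_update)   # ~[0.10, -0.20, 0.05]
```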

  • Cross-Organizational Analytics: Consider two companies that want to find the overlap between their customer bases for a partnership campaign, without exposing their full customer lists to each other. They could each encrypt their customer ID lists and upload a commitment. An FHE-enabled contract can compute the intersection on the encrypted sets (using techniques like private set intersection via FHE). The result could be an encrypted list of common customer IDs that only a mutually trusted third party (or the customers themselves, via some mechanism) can decrypt. Alternatively, with a ZK approach, one party proves to the other in zero knowledge that “we have N customers in common and here is an encryption of those IDs,” with a proof that the encryption indeed corresponds to common entries. Either way, they can proceed with a campaign to those N customers without ever exchanging their full lists in plaintext (see the sketch below). Similar scenarios: computing supply-chain metrics across competitors without revealing individual supplier details, or banks collating credit information without sharing full client data.
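A much-simplified version of this hashed-identifier intersection is sketched below: both parties derive keyed hashes (HMACs) of their customer IDs under a shared secret and compare only those. Note that this naive construction is vulnerable to dictionary attacks when identifiers are guessable; production systems use OPRF-based private set intersection or the FHE approach described above.

```python
import hashlib
import hmac

# Both companies agree on a shared secret key (e.g., via a key exchange),
# then exchange only keyed hashes of their customer IDs -- never the IDs.
shared_key = b"session-key-agreed-offchain"

def blind(customer_id: str) -> bytes:
    return hmac.new(shared_key, customer_id.encode(), hashlib.sha256).digest()

company_a = {"ann@x.com", "bob@y.com", "carol@z.com"}
company_b = {"bob@y.com", "dave@w.com", "carol@z.com"}

blinded_a = {blind(c): c for c in company_a}   # A keeps the reverse map
blinded_b = {blind(c) for c in company_b}      # B publishes only hashes

common = [blinded_a[t] for t in blinded_a.keys() & blinded_b]
print(sorted(common))   # ['bob@y.com', 'carol@z.com']
```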

  • Secure Multi-Party Computation (MPC) on Blockchain: FHE and ZK essentially bring MPC concepts on-chain. Complex business logic spanning multiple organizations can be encoded in a smart contract such that each org’s inputs are secret-shared or encrypted. The contract (as an MPC facilitator) produces outputs like profit splits, cost calculations, or joint risk assessments that everyone can trust. For example, suppose several energy companies want to settle a power-trading marketplace. They could feed their encrypted bids and offers into a smart contract auction; the contract computes the clearing prices and allocations on encrypted bids, and outputs each company’s allocation and cost only to that company (via encryption to its public key). No company sees others’ bids, protecting competitive information, yet the auction result is fair and verifiable. This combination of blockchain transparency and MPC privacy could transform enterprise consortia that currently rely on trusted third parties.

Decentralized Machine Learning (ZKML and FHE-ML)

Bringing machine learning to blockchains in a verifiable and private way is an emerging frontier:

  • Verifiable ML Inference: Using ZK proofs, one can prove that “a machine learning model f, when given input x, produces output y” without revealing either x (if it’s private data) or the inner workings of f (if the model weights are proprietary). This is crucial for AI services on blockchain – e.g., a decentralized AI oracle that provides predictions or classifications. A ZK-coprocessor can run the model off-chain (since models can be large and expensive to evaluate) and post a proof of the result. For instance, an oracle could prove the statement “The satellite image provided shows at least 50% tree cover” to support a carbon credit contract, without revealing the satellite image or possibly even the model. This is known as ZKML and projects are working on optimizing circuit-friendly neural nets. It ensures the integrity of AI outputs used in smart contracts (no cheating or arbitrary outputs) and can preserve confidentiality of the input data and model parameters.
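The binding property ZKML relies on can be sketched without real proofs: commit to the model weights, then have the “proof” tie together the model commitment, the input, and the output so a prover cannot silently swap models. In the toy Python below the tag is just a hash, so it binds but does not hide; a real ZKML proof would additionally keep the input and/or weights secret.

```python
import hashlib

def commit(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# A tiny linear classifier standing in for "model f". Its weights are
# committed on-chain so provers can't silently swap models.
weights = [0.8, -0.5, 0.3]
model_commitment = commit(repr(weights).encode())

def infer(w, x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if score > 0 else 0

def prove_inference(w, x):
    """Stand-in for a ZKML proof: binds (model commitment, input, output).
    A real proof would also hide x and/or w; this toy only binds them."""
    y = infer(w, x)
    tag = commit(commit(repr(w).encode()) + repr(x).encode() + bytes([y]))
    return y, tag

def verify_inference(model_c, x, y, tag):
    # On-chain verifier: checks the result references the committed model.
    return tag == commit(model_c + repr(x).encode() + bytes([y]))

x = [1.0, 0.2, 0.4]
y, tag = prove_inference(weights, x)
assert verify_inference(model_commitment, x, y, tag) and y == 1
```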

  • Training with Privacy and Auditability: Training an ML model is even more computation-intensive, but if achievable, it would allow blockchain-based model marketplaces. Multiple data providers could contribute to training a model under FHE so that the training algorithm runs on encrypted data. The result might be an encrypted model that only the buyer can decrypt. Throughout training, ZK proofs could be supplied periodically to prove that the training was following the protocol (preventing a malicious trainer from inserting a backdoor, for example). While fully on-chain ML training is far off given costs, a hybrid approach could use off-chain compute with ZK proofs for critical parts. One could imagine a decentralized Kaggle-like competition where participants train models on private datasets and submit ZK proofs of the model’s accuracy on encrypted test data to determine a winner – all without revealing the datasets or the test data.

  • Personalized AI and Data Ownership: With these technologies, users could retain ownership of their personal data and still benefit from AI. For example, a user’s mobile device could use FHE to encrypt their usage data and send it to an analytics contract which computes a personalized AI model (like a recommendation model) just for them. The model is encrypted and only the user’s device can decrypt and use it locally. The platform (maybe a social network) never sees the raw data or model, but the user gets the AI benefit. If the platform wants aggregated insights, it could request ZK proofs of certain aggregate patterns from the contract without accessing individual data.

Additional Areas

  • Gaming: On-chain games often struggle with hiding secret information (e.g. hidden card hands, fog-of-war in strategy games). FHE can enable hidden state games where the game logic runs on encrypted state. For example, a poker game contract could shuffle and deal encrypted cards; players get decryptions of their own cards, but the contract and others only see ciphertext. Betting logic can use ZK proofs to ensure a player isn’t bluffing about an action (or to reveal the winning hand at the end in a verifiably fair way). Similarly, random seeds for NFT minting or game outcomes can be generated and proven fair without exposing the seed (preventing manipulation). This can greatly enhance blockchain gaming, allowing it to support the same dynamics as traditional games.

  • Voting and Governance: DAOs could use privacy tech for secret ballots on-chain, eliminating vote buying and pressure. FHE-VM could tally votes that are cast in encrypted form, and only final totals are decrypted. ZK proofs can assure each vote was valid (came from an eligible voter, who hasn’t voted twice) without revealing who voted for what. This provides verifiability (everyone can verify the proofs and tally) while keeping individual votes secret – crucial for unbiased governance.
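One way to see the privacy mechanics is with additive secret sharing, an MPC-flavored cousin of the FHE tally described above: each ballot is split into random shares handed to independent talliers, no single tallier learns any vote, and the shares recombine only into the total. (Real systems add ZK proofs that each ballot is a valid 0/1 vote; that step is omitted here.)

```python
import secrets

P = 2**61 - 1   # prime modulus for the additive shares

def share_vote(vote: int, n_tallyers: int):
    """Split a 0/1 vote into n additive shares that sum to the vote mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_tallyers - 1)]
    shares.append((vote - sum(shares)) % P)
    return shares

votes = [1, 0, 1, 1, 0, 1]   # individual ballots -- never sent in the clear
n_tallyers = 3
tally_inputs = [[] for _ in range(n_tallyers)]
for v in votes:
    for i, s in enumerate(share_vote(v, n_tallyers)):
        tally_inputs[i].append(s)

# Each tallier sums its own shares; no single tallier learns any ballot.
partial_sums = [sum(col) % P for col in tally_inputs]
total_yes = sum(partial_sums) % P
print(total_yes)   # 4
```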

  • Secure Supply Chain and IoT: In supply chains, partners might want to share proof of certain properties (origin, quality metrics) without exposing full details to competitors. For instance, an IoT sensor on a food shipment could continuously send encrypted temperature data to a blockchain. A contract could use FHE to check if the temperature stayed in a safe range throughout transit. If a threshold was exceeded, it can trigger an alert or penalty, but it doesn’t have to reveal the entire temperature log publicly – maybe only a proof or an aggregate like “90th percentile temp”. This builds trust in supply chain automation while respecting confidentiality of process data.

Each of these use cases leverages the core ability: compute on or verify data without revealing the data. This capability can fundamentally change how we handle sensitive information in decentralized systems. It reduces the trade-off between transparency and privacy that has limited blockchain adoption in areas dealing with private data.

Conclusion

Blockchain technology is entering a new era of programmable privacy, where data confidentiality and smart contract functionality go hand in hand. The paradigms of FHE-VM and ZK-coprocessors, while technically distinct, both strive to expand the scope of blockchain applications by decoupling what we can compute from what we must reveal.

Fully Homomorphic Encryption Virtual Machines keep computations on-chain and encrypted, preserving decentralization and composability but demanding advances in efficiency. Zero-Knowledge coprocessors shift heavy lifting off-chain, enabling virtually unbounded computation under cryptographic guarantees, and are already proving their worth in scaling and enhancing Ethereum. The choice between them (and hybrids thereof) will depend on the use case: if real-time interaction with private state is needed, an FHE approach might be more suitable; if extremely complex computation or integration with existing code is required, a ZK-coprocessor might be the way to go. In many cases, they are complementary – indeed, we see ZK proofs bolstering FHE integrity, and FHE potentially helping ZK by handling private data for provers.

For developers, these technologies will introduce new design patterns. We will think in terms of encrypted variables and proof verification as first-class elements of dApp architecture. Tooling is rapidly evolving: high-level languages and SDKs are abstracting away cryptographic details (e.g. Zama’s libraries making FHE types as easy as native types, or RISC Zero’s templates for proof requests). In a few years, writing a confidential smart contract could feel almost as straightforward as writing a regular one, just with privacy “built-in” by default.

The implications for the data economy are profound. Individuals and enterprises will be more willing to put data or logic on-chain when they can control its visibility. This can unlock cross-organization collaborations, new financial products, and AI models that were previously untenable due to privacy concerns. Regulators, too, may come to embrace these techniques as they allow compliance checks and audits via cryptographic means (e.g. proving taxes are paid correctly on-chain without exposing all transactions).

We are still in the early days – current FHE-VM prototypes have performance limits, and ZK proofs, while much faster than before, can still be a bottleneck for extremely complex tasks. But continuous research and engineering efforts (including specialized hardware, as evidenced by companies like Optalysys pushing optical FHE acceleration) are quickly eroding these barriers. The funding pouring into this space (e.g. Zama’s unicorn status, Paradigm’s investment in Axiom) underscores a strong belief that privacy features will be as fundamental to Web3 as transparency was to Web1/2.

In conclusion, programmable privacy via FHE-VMs and ZK-coprocessors heralds a new class of dApps that are trustless, decentralized, and confidential. From DeFi trades that reveal no details, to health research that protects patient data, to machine learning models trained across the world without exposing raw data – the possibilities are vast. As these technologies mature, blockchain platforms will no longer force the trade-off between utility and privacy, enabling broader adoption in industries that require confidentiality. The future of Web3 is one where *users and organizations can confidently transact and compute with sensitive data on-chain, knowing the blockchain will verify integrity while keeping their secrets safe*.

Sources: The information in this report is drawn from technical documentation and recent research blogs of leading projects in this space, including Cypher’s and Zama’s FHEVM documentation, detailed analyses from Trail of Bits on Axiom’s circuits, RISC Zero’s developer guides and blog posts, as well as industry articles highlighting use cases of confidential blockchain tech. These sources and more have been cited throughout to provide further reading and evidence for the described architectures and applications.

Plume Network and Real-World Assets (RWA) in Web3

· 77 min read

Plume Network: Overview and Value Proposition

Plume Network is a blockchain platform purpose-built for Real-World Assets (RWA). It is a public, Ethereum-compatible chain designed to tokenize a wide range of real-world financial assets – from private credit and real estate to carbon credits and even collectibles – and make them as usable as native crypto assets. In other words, Plume doesn’t just put assets on-chain; it allows users to hold and utilize tokenized real assets in decentralized finance (DeFi) – enabling familiar crypto activities like staking, lending, borrowing, swapping, and speculative trading on assets that originate in traditional finance.

The core value proposition of Plume is to bridge TradFi and DeFi by turning traditionally illiquid or inaccessible assets into programmable, liquid tokens. By integrating institutional-grade assets (e.g. private credit funds, ETFs, commodities) with DeFi infrastructure, Plume aims to make high-quality investments – which were once limited to large institutions or specific markets – permissionless, composable, and a click away for crypto users. This opens the door for crypto participants to earn “real yield” backed by stable real-world cash flows (such as loan interest, rental income, bond yields, etc.) rather than relying on inflationary token rewards. Plume’s mission is to drive “RWA Finance (RWAfi)”, creating a transparent and open financial system where anyone can access assets like private credit, real estate debt, or commodities on-chain, and use them freely in novel ways.

In summary, Plume Network serves as an “on-chain home for real-world assets”, offering a full-stack ecosystem that transforms off-chain assets into globally accessible financial tools with true crypto-native utility. Users can stake stablecoins to earn yields from top fund managers (Apollo, BlackRock, Blackstone, etc.), loop and leverage RWA-backed tokens as collateral, and trade RWAs as easily as ERC-20 tokens. By doing so, Plume stands out as a platform striving to make alternative assets more liquid and programmable, bringing fresh capital and investment opportunities into Web3 without sacrificing transparency or user experience.

Technology and Architecture

Plume Network is implemented as an EVM-compatible blockchain with a modular Layer-2 architecture. Under the hood, Plume operates similarly to an Ethereum rollup (comparable to Arbitrum’s technology), utilizing Ethereum for data availability and security. Every transaction on Plume is eventually batch-posted to Ethereum, which means users pay a small extra fee to cover the cost of publishing calldata on Ethereum. This design leverages Ethereum’s robust security while allowing Plume to have its own high-throughput execution environment. Plume runs a sequencer that aggregates transactions and commits them to Ethereum periodically, giving the chain faster execution and lower fees for RWA use-cases, but anchored to Ethereum for trust and finality.
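A back-of-the-envelope fee model makes the two-part cost concrete. The sketch below is illustrative only: the 16-gas-per-calldata-byte constant matches Ethereum’s pricing for nonzero bytes, but actual Arbitrum-style accounting (compression, batch amortization, dynamic pricing) is considerably more involved, and the example numbers are hypothetical.

```python
# Rough fee split for a rollup transaction: cheap L2 execution plus a
# surcharge covering the cost of posting calldata to Ethereum.
def rollup_tx_fee(l2_gas_used: int, l2_gas_price_gwei: float,
                  calldata_bytes: int, l1_gas_price_gwei: float) -> float:
    execution_fee = l2_gas_used * l2_gas_price_gwei      # paid for L2 compute
    data_fee = calldata_bytes * 16 * l1_gas_price_gwei   # covers L1 posting
    return (execution_fee + data_fee) / 1e9              # gwei -> ETH

# Example: 200k L2 gas at 0.01 gwei, 400 calldata bytes at 20 gwei on L1.
fee_eth = rollup_tx_fee(200_000, 0.01, 400, 20.0)
print(f"{fee_eth:.6f} ETH")   # 0.000130 ETH, dominated by the L1 data fee
```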

Because Plume is EVM-compatible, developers can deploy Solidity smart contracts on Plume just as they would on Ethereum, with almost no changes. The chain supports the standard Ethereum RPC methods and Solidity operations, with only minor differences (e.g. Plume’s block number and timestamp semantics mirror Arbitrum’s conventions due to the Layer-2 design). In practice, this means Plume can easily integrate existing DeFi protocols and developer tooling. The Plume docs note that cross-chain messaging is supported between Ethereum (the “parent” chain) and Plume (the L2), enabling assets and data to move between the chains as needed.

Notably, Plume describes itself as a “modular blockchain” optimized for RWA finance. The modular approach is evident in its architecture: it has dedicated components for bridging assets (called Arc for bringing anything on-chain), for omnichain yield routing (SkyLink) across multiple blockchains, and for on-chain data feeds (Nexus, an “onchain data highway”). This suggests Plume is building an interconnected system where real-world asset tokens on Plume can interact with liquidity on other chains and where off-chain data (like asset valuations, interest rates, etc.) is reliably fed on-chain. Plume’s infrastructure also includes a custom wallet called Plume Passport (the “RWAfi Wallet”) which likely handles identity/AML checks necessary for RWA compliance, and a native stablecoin (pUSD) for transacting in the ecosystem.

Importantly, Plume’s current iteration is often called a Layer-2 or rollup chain – it is built atop Ethereum for security. However, the team has hinted at ambitious plans to evolve the tech further. Plume’s CTO noted that they started as a modular L2 rollup but are now pushing “down the stack” toward a fully sovereign Layer-1 architecture, optimizing a new chain from scratch with high performance, privacy features “comparable to Swiss banks,” and a novel crypto-economic security model to secure the next trillion dollars on-chain. While specifics are scant, this suggests that over time Plume may transition to a more independent chain or incorporate advanced features like FHE (Fully Homomorphic Encryption) or zk-proofs (the mention of zkTLS and privacy) to meet institutional requirements. For now, though, Plume’s mainnet leverages Ethereum’s security and EVM environment to rapidly onboard assets and users, providing a familiar but enhanced DeFi experience for RWAs.

Tokenomics and Incentives

PLUME ($PLUME) is the native utility token of the Plume Network. The $PLUME token is used to power transactions, governance, and network security on Plume. As the gas token, $PLUME is required to pay transaction fees on the Plume chain (similar to how ETH is gas on Ethereum). This means all operations – trading, staking, deploying contracts – consume $PLUME for fees. Beyond gas, $PLUME has several utility and incentive roles:

  • Governance: $PLUME holders can participate in governance decisions, presumably voting on protocol parameters, upgrades, or asset onboarding decisions.
  • Staking/Security: The token can be staked, which likely supports the network’s validator or sequencer operations. Stakers help secure the chain and in return earn staking rewards in $PLUME. (Even as a rollup, Plume may use a proof-of-stake mechanism for its sequencer or for eventual decentralization of block production).
  • Real Yield and DeFi utility: Plume’s docs mention that users can use $PLUME across dApps to “unlock real yield”. This suggests that holding or staking $PLUME might confer higher yields in certain RWA yield farms or access to exclusive opportunities in the ecosystem.
  • Ecosystem Incentives: $PLUME is also used to reward community engagement – for example, users might earn tokens via community quests, referral programs, testnet participation (such as the “Take Flight” developer program or the testnet “Goons” NFTs). This incentive design is meant to bootstrap network effects by distributing tokens to those who actively use and grow the platform.

Token Supply & Distribution: Plume has a fixed total supply of 10 billion $PLUME tokens. At the Token Generation Event (mainnet launch), the initial circulating supply is 20% of the total (i.e. 2 billion tokens). The allocation is heavily weighted toward community and ecosystem development:

  • 59% to Community, Ecosystem & Foundation – this large share is reserved for grants, liquidity incentives, community rewards, and a foundation pool to support the ecosystem’s long-term growth. This ensures a majority of tokens are available to bootstrap usage (and potentially signals commitment to decentralization over time).
  • 21% to Early Backers – these tokens are allocated to strategic investors and partners who funded Plume’s development. (As we’ll see, Plume raised capital from prominent crypto funds; this allocation likely vests over time as per investor agreements.)
  • 20% to Core Contributors (Team) – allocated to the founding team and core developers driving Plume. This portion incentivizes the team and aligns them with the network’s success, typically vesting over a multi-year period.

Besides $PLUME, Plume’s ecosystem includes a stablecoin called Plume USD (pUSD). pUSD is designed as the RWAfi ecosystem stablecoin for Plume. It serves as the unit of account and primary trading/collateral currency within Plume’s DeFi apps. Uniquely, pUSD is fully backed 1:1 by USDC – effectively a wrapped USDC for the Plume network. This design choice (wrapping USDC) was made to reduce friction for traditional institutions: if an organization is already comfortable holding and minting USDC, they can seamlessly mint and use pUSD on Plume under the same frameworks. pUSD is minted and redeemed natively on both Ethereum and Plume, meaning users or institutions can deposit USDC on Ethereum and receive pUSD on Plume, or vice versa. By tying pUSD 1:1 to USDC (and ultimately to USD reserves), Plume ensures its stablecoin remains fully collateralized and liquid, which is critical for RWA transactions (where predictability and stability of the medium of exchange are required). In practice, pUSD provides a common stable liquidity layer for all RWA apps on Plume – whether it’s buying tokenized bonds, investing in RWA yield vaults, or trading assets on a DEX, pUSD is the stablecoin that underpins value exchange.
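The wrapper’s accounting is simple to express: minting locks USDC and issues an equal amount of pUSD, redeeming burns pUSD and releases USDC, and the invariant supply == reserve holds throughout. The Python sketch below is a generic 1:1 wrapper model, not Plume’s actual contract logic.

```python
class WrappedStable:
    """Minimal 1:1 wrapper accounting: pUSD supply can never exceed the USDC
    held in reserve, mirroring the fully collateralized design described
    above. All names are illustrative."""
    def __init__(self):
        self.usdc_reserve = 0
        self.pusd_supply = 0
        self.balances = {}

    def mint(self, user: str, usdc_in: int):
        self.usdc_reserve += usdc_in   # USDC locked on Ethereum
        self.pusd_supply += usdc_in    # equal pUSD issued on Plume
        self.balances[user] = self.balances.get(user, 0) + usdc_in

    def redeem(self, user: str, pusd_in: int):
        assert self.balances.get(user, 0) >= pusd_in, "insufficient pUSD"
        self.balances[user] -= pusd_in
        self.pusd_supply -= pusd_in
        self.usdc_reserve -= pusd_in   # USDC released back to the user

    def invariant(self) -> bool:
        return self.pusd_supply == self.usdc_reserve

bridge = WrappedStable()
bridge.mint("institution", 1_000_000)
bridge.redeem("institution", 250_000)
assert bridge.invariant() and bridge.balances["institution"] == 750_000
```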

Overall, Plume’s tokenomics aim to balance network utility with growth incentives. $PLUME ensures the network is self-sustaining (through fees and staking security) and community-governed, while large allocations to ecosystem funds and airdrops help drive early adoption. Meanwhile, pUSD anchors the financial ecosystem in a trustworthy stable asset, making it easier for traditional capital to enter Plume and for DeFi users to measure returns on real-world investments.

Founding Team and Backers

Plume Network was founded in 2022 by a trio of entrepreneurs with backgrounds in crypto and finance: Chris Yin (CEO), Eugene Shen (CTO), and Teddy Pornprinya (CBO). Chris Yin is described as the visionary product leader of the team, driving the platform’s strategy and thought leadership in the RWA space. Eugene Shen leads technical development as CTO; his remarks about “customizing geth” and building from the ground up point to prior experience with low-level, modular blockchain engineering. Teddy Pornprinya, as Chief Business Officer, spearheads partnerships, business development, and marketing – he was instrumental in onboarding dozens of projects into the Plume ecosystem early on. Together, the founders identified a gap in the market for an RWA-optimized chain and left their prior roles to build Plume, officially launching the project roughly a year after conception.

Plume has attracted significant backing from both crypto-native VCs and traditional finance giants, signaling strong confidence in its vision:

  • In May 2023, Plume raised a $10 million seed round led by Haun Ventures (the fund of former a16z partner Katie Haun). Other participants in the seed included Galaxy Digital, Superscrypt (Temasek’s crypto arm), A Capital, SV Angel, Portal Ventures, and Reciprocal Ventures. This diverse investor base gave Plume a strong start, combining crypto expertise and institutional connections.

  • By late 2024, Plume secured a $20 million Series A funding to accelerate its development. This round was backed by top-tier investors such as Brevan Howard Digital, Haun Ventures (returning), Galaxy, and Faction VC. The inclusion of Brevan Howard, one of the world’s largest hedge funds with a dedicated crypto arm, is especially notable and underscored the growing Wall Street interest in RWAs on blockchain.

  • In April 2025, Apollo Global Management – one of the world’s largest alternative asset managers – made a strategic investment in Plume. Apollo’s investment was a seven-figure (USD) amount intended to help Plume scale its infrastructure and bring more traditional financial products on-chain. Apollo’s involvement is a strong validation of Plume’s approach: Christine Moy, Apollo’s Head of Digital Assets, said their investment “underscores Apollo’s focus on technologies that broaden access to institutional-quality products… Plume represents a new kind of infrastructure focused on digital asset utility, investor engagement, and next-generation financial solutions”. In other words, Apollo sees Plume as key infrastructure to make private markets more liquid and accessible via blockchain.

  • Another strategic backer is YZi Labs, formerly Binance Labs. In early 2025, YZi (Binance’s venture arm rebranded) announced a strategic investment in Plume Network as well. YZi Labs highlighted Plume as a “cutting-edge Layer-2 blockchain designed for scaling real world assets”, and their support signals confidence that Plume can bridge TradFi and DeFi at a large scale. (It’s worth noting Binance Labs’ rebranding to YZi Labs indicates continuity of their investments in core infrastructure projects like Plume.)

  • Plume’s backers also include traditional fintech and crypto institutions through partnerships (detailed below) – for example, Mercado Bitcoin (Latin America’s largest digital asset platform) and Anchorage Digital (a regulated crypto custodian) are ecosystem partners, effectively aligning themselves with Plume’s success. Additionally, Grayscale Investments – the world’s largest digital asset manager – has taken notice: in April 2025, Grayscale officially added $PLUME to its list of assets “Under Consideration” for future investment products. Being on Grayscale’s radar means Plume could potentially be included in institutional crypto trusts or ETFs, a major nod of legitimacy for a relatively new project.

In summary, Plume’s funding and support come from a who’s-who of top investors: premier crypto VCs (Haun Ventures, Galaxy Digital, and – indirectly, through the Goldfinch partnership – a16z), hedge funds and TradFi players (Brevan Howard, Apollo), and corporate venture arms (Binance’s YZi Labs). This mix of backers brings not just capital but also strategic guidance, regulatory expertise, and connections to real-world asset originators. It has also provided Plume with war-chest funding (at least $30M+ across seed and Series A) to build out its specialized blockchain and onboard assets. The strong backing serves as a vote of confidence that Plume is positioned as a leading platform in the fast-growing RWA sector.

Ecosystem Partners and Integrations

Plume has been very active in forging ecosystem partnerships across both crypto and traditional finance, assembling a broad network of integrations even before (and immediately upon) mainnet launch. These partners provide the assets, infrastructure, and distribution that make Plume’s RWA ecosystem functional:

  • Nest Protocol (Nest Credit): An RWA yield platform that operates on Plume, allowing users to deposit stablecoins into vaults and receive yield-bearing tokens backed by real-world assets. Nest is essentially a DeFi frontend for RWA yields, offering products like tokenized U.S. Treasury Bills, private credit, mineral rights, etc., but abstracting away the complexity so they “feel like crypto.” Users swap USDC (or pUSD) for Nest-issued tokens that are fully backed by regulated, audited assets held by custodians. Nest works closely with Plume – a testimonial from Anil Sood of Anemoy (a partner) highlights that “partnering with Plume accelerates our mission to bring institutional-grade RWAs to every investor… This collaboration is a blueprint for the future of RWA innovation.”. In practice, Nest is Plume’s native yield marketplace (sometimes called “Nest Yield” or RWA staking platform), and many of Plume’s big partnerships funnel into Nest vaults.

  • Mercado Bitcoin (MB): The largest digital asset exchange in Latin America (based in Brazil) has partnered with Plume to tokenize ~$40 million of Brazilian real-world assets. This initiative, announced in Feb 2025, involves MB using Plume’s blockchain to issue tokens representing Brazilian asset-backed securities, consumer credit portfolios, corporate debt, and accounts receivable. The goal is to connect global investors with yield-bearing opportunities in Brazil’s economy – effectively opening up Brazilian credit markets to on-chain investors worldwide through Plume. These Brazilian RWA tokens will be available from day one of Plume’s mainnet on the Nest platform, providing stable on-chain returns backed by Brazilian small-business loans and credit receivables. This partnership is notable because it gives Plume a geographic reach (LATAM) and a pipeline of emerging-market assets, showcasing how Plume can serve as a hub connecting regional asset originators to global liquidity.

  • Superstate: Superstate is a fintech startup founded by Robert Leshner (former founder of Compound), focused on bringing regulated U.S. Treasury fund products on-chain. In 2024, Superstate launched a tokenized U.S. Treasury fund (approved as a 1940 Act mutual fund) targeted at crypto users. Plume was chosen by Superstate to power its multi-chain expansion. In practice, this means Superstate’s tokenized T-bill fund (which offers stable yield from U.S. government bonds) is being made available on Plume, where it can be integrated into Plume’s DeFi ecosystem. Leshner himself said: “by expanding to Plume – the unique RWAfi chain – we can demonstrate how purpose-built infrastructure can enable great new use-cases for tokenized assets. We’re excited to build on Plume.”. This indicates Superstate will deploy its fund tokens (e.g., maybe an on-chain share of a Treasuries fund) on Plume, allowing Plume users to hold or use them in DeFi (perhaps as collateral for borrowing, or in Nest vaults for auto-yield). It is a strong validation that Plume’s chain is seen as a preferred home for regulated asset tokens like Treasuries.

  • Ondo Finance: Ondo is a well-known DeFi project that pivoted into the RWA space by offering tokenized bonds and yield products (notably, Ondo’s OUSG token, which represents shares in a short-term U.S. Treasury fund, and USDY, representing an interest-bearing USD deposit product). Ondo is listed among Plume’s ecosystem partners, implying a collaboration where Ondo’s yield-bearing tokens (like OUSG, USDY) can be used on Plume. In fact, Ondo’s products align closely with Plume’s goals: Ondo established legal vehicles (SPVs) to ensure compliance, and its OUSG token is backed by BlackRock’s tokenized money market fund (BUIDL), providing ~4.5% APY from Treasuries. By integrating Ondo, Plume gains blue-chip RWA assets like U.S. Treasuries on-chain. Indeed, as of late 2024, Ondo’s RWA products had a market value around $600+ million, so bridging them to Plume adds significant TVL. This synergy likely allows Plume users to swap into Ondo’s tokens or include them in Nest vaults for composite strategies.

  • Centrifuge: Centrifuge is a pioneer in RWA tokenization (operating its own Polkadot parachain for RWA pools). Plume’s site lists Centrifuge as a partner, suggesting collaboration or integration. This could mean that Centrifuge’s pools of assets (trade finance, real estate bridge loans, etc.) might be accessible from Plume, or that Centrifuge will use Plume’s infrastructure for distribution. For example, Plume’s SkyLink omnichain yield might route liquidity from Plume into Centrifuge pools on Polkadot, or Centrifuge could tokenize certain assets directly onto Plume for deeper DeFi composability. Given Centrifuge leads the private credit RWA category with ~$409M TVL in its pools, its participation in Plume’s ecosystem is significant. It indicates an industry-wide move toward interoperability among RWA platforms, with Plume acting as a unifying layer for RWA liquidity across chains.

  • Credbull: Credbull is a private credit fund platform that partnered with Plume to launch a large tokenized credit fund. According to CoinDesk, Credbull is rolling out up to a $500M private credit fund on Plume, offering a fixed high yield to on-chain investors. This likely involves packaging private credit (loans to mid-sized companies or other credit assets) into a vehicle where on-chain stablecoin holders can invest for a fixed return. The significance is twofold: (1) It adds a huge pipeline of yield assets (~half a billion dollars) to Plume’s network, and (2) it exemplifies how Plume is attracting real asset managers to originate products on its chain. Combined with other pipeline assets, Plume said it planned to tokenize about $1.25 billion worth of RWAs by late 2024, including Credbull’s fund, plus $300M of renewable energy assets (solar farms via Plural Energy), ~$120M of healthcare receivables (Medicaid-backed invoices), and even oil & gas mineral rights. This large pipeline shows that at launch, Plume isn’t empty – it comes with tangible assets ready to go.

  • Goldfinch: Goldfinch is a decentralized credit protocol that provided undercollateralized loans to fintech lenders globally. In 2023, Goldfinch pivoted to “Goldfinch Prime”, targeting accredited and institutional investors by offering on-chain access to top private credit funds. Plume and Goldfinch announced a strategic partnership to bring Goldfinch Prime’s offerings to Plume’s Nest platform, effectively marrying Goldfinch’s institutional credit deals with Plume’s user base. Through this partnership, institutional investors on Plume can stake stablecoins into funds managed by Apollo, Golub Capital, Ares, Stellus, and other leading private credit managers via Goldfinch’s integration. The ambition is massive: collectively these managers represent over $1 trillion in assets, and the partnership aims to eventually make portions of that available on-chain. In practical terms, a user on Plume could invest in a diversified pool that earns yield from hundreds of real-world loans made by these credit funds, all tokenized through Goldfinch Prime. This not only enhances Plume’s asset diversity but also underscores Plume’s credibility to partner with top-tier RWA platforms.

  • Infrastructure Partners (Custody and Connectivity): Plume has also integrated key infrastructure players. Anchorage Digital, a regulated crypto custodian bank, is a partner – Anchorage’s involvement likely means institutional users can custody their tokenized assets or $PLUME securely in a bank-level custody solution (a must for big money). Paxos is another listed partner, which could relate to stablecoin infrastructure (Paxos issues USDP stablecoin and also provides custody and brokerage services – possibly Paxos could be safeguarding the reserves for pUSD or facilitating asset tokenization pipelines). LayerZero is mentioned as well, indicating Plume uses LayerZero’s interoperability protocol for cross-chain messaging. This would allow assets on Plume to move to other chains (and vice versa) in a trust-minimized way, complementing Plume’s rollup bridge.

  • Other DeFi Integrations: Plume’s ecosystem page cites 180+ protocols, including RWA specialists and mainstream DeFi projects. For instance, names like Nucleus Yield (a platform for tokenized yields), and possibly on-chain KYC providers or identity solutions, are part of the mix. By the time of mainnet, Plume had over 200 integrated protocols in its testnet environment – meaning many existing dApps (DEXs, money markets, etc.) have deployed or are ready to deploy on Plume. This ensures that once real-world assets are tokenized, they have immediate utility: e.g., a tokenized solar farm revenue stream could be traded on an order-book exchange, or used as collateral for a loan, or included in an index – because the DeFi “money lego” pieces (DEXs, lending platforms, asset management protocols) are available on the chain from the start.

In summary, Plume’s ecosystem strategy has been aggressive and comprehensive: secure anchor partnerships for assets (e.g. funds from Apollo, BlackRock via Superstate/Ondo, private credit via Goldfinch and Credbull, emerging market assets via Mercado Bitcoin), ensure infrastructure and compliance in place (Anchorage custody, Paxos, identity/AML tooling), and port over the DeFi primitives to allow a flourishing of secondary markets and leverage. The result is that Plume enters 2025 as potentially the most interconnected RWA network in Web3 – a hub where various RWA protocols and real-world institutions plug in. This “network-of-networks” effect could drive significant total value locked and user activity, as indicated by early metrics (Plume’s testnet saw 18+ million unique wallets and 280+ million transactions in a short span, largely due to incentive campaigns and the breadth of projects testing the waters).

Roadmap and Development Milestones

Plume’s development has moved at a rapid clip, with a phased approach to scaling up real-world assets on-chain:

  • Testnet and Community Growth (2023): Plume launched its incentivized testnet (code-named “Miles”) in mid-late 2023. The testnet campaign was extremely successful in attracting users – over 18 million testnet wallet addresses were created, executing 280 million+ transactions. This was likely driven by testnet “missions” and an airdrop campaign (Season 1 of Plume’s airdrop was claimed by early users). The testnet also onboarded over 200 protocols and saw 1 million NFTs (“Goons”) minted, indicating a vibrant trial ecosystem. This massive testnet was a milestone proving out Plume’s tech scalability and generating buzz (and a large community: Plume now counts ~1M Twitter followers and hundreds of thousands in Discord/Telegram).

  • Mainnet Launch (Q1 2025): Plume targeted the end of 2024 or early 2025 for mainnet launch. Indeed, by February 2025, partners like Mercado Bitcoin announced their tokenized assets would go live “from the first day of Plume’s mainnet launch.”. This implies Plume mainnet went live or was scheduled to go live around Feb 2025. Mainnet launch is a crucial milestone, bringing the testnet’s lessons to production along with the initial slate of real assets (~$1B+ worth) ready to be tokenized. The launch likely included the release of Plume’s core products: the Plume Chain (mainnet), Arc for asset onboarding, pUSD stablecoin, and Plume Passport wallet, as well as initial DeFi dApps (DEXs, money markets) deployed by partners.

  • Phased Asset Onboarding: Plume has indicated a “phased onboarding” strategy for assets to ensure a secure, liquid environment. In early phases, simpler or lower-risk assets (like fully backed stablecoins, tokenized bonds) come first, alongside controlled participation (perhaps whitelisted institutions) to build trust and liquidity. Each phase then unlocks more use cases and asset classes as the ecosystem proves itself. For example, Phase 1 might focus on on-chain Treasuries and private credit fund tokens (relatively stable, yield-generating assets). Subsequent phases could bring more esoteric or higher-yield assets like renewable energy revenue streams, real estate equity tokens, or even exotic assets (the docs amusingly mention “GPUs, uranium, mineral rights, durian farms” as eventual on-chain asset possibilities). Plume’s roadmap thus expands the asset menu over time, in parallel with developing the needed market depth and risk management on-chain.

  • Scaling and Decentralization: Following mainnet, a key development goal is to decentralize the Plume chain’s operations. Currently, Plume has a sequencer model (likely run by the team or a few nodes). Over time, they plan to introduce a robust validator/sequencer set where $PLUME stakers help secure the network, and possibly even transition to a fully independent consensus. The founder’s note about building an optimized L1 with a new crypto-economic model hints that Plume might implement a novel Proof-of-Stake or hybrid security model to protect high-value RWAs on-chain. Milestones in this category would include open-sourcing more of the stack, running an incentivized testnet for node operators, and implementing fraud proofs or zk-proofs (if moving beyond an optimistic rollup).

  • Feature Upgrades: Plume’s roadmap also includes adding advanced features demanded by institutions. This could involve:

    • Privacy enhancements: e.g., integrating zero-knowledge proofs for confidential transactions or identity, so that sensitive financial details of RWAs (like borrower info or cashflow data) can be kept private on a public ledger. The mention of FHE and zkTLS suggests research into enabling private yet verifiable asset handling.
    • Compliance and Identity: Plume already has AML screening and compliance modules, but future work will refine on-chain identity (perhaps DID integration in Plume Passport) so that RWA tokens can enforce transfer restrictions or only be held by eligible investors when required.
    • Interoperability: Further integrations with cross-chain protocols (expanding on LayerZero) and bridges so that Plume’s RWA liquidity can seamlessly flow into major ecosystems like Ethereum mainnet, Layer-2s, and even other app-chains. The SkyLink omnichain yield product is likely part of this, enabling users on other chains to tap yields from Plume’s RWA pools.
  • Growth Targets: Plume’s leadership has publicly stated goals like “tokenize $3 billion+ in assets by Q4 2024” and eventually far more. While $1.25B was the short-term pipeline at launch, the journey to $3B in tokenized RWAs is an explicit milestone. Longer term, given the trillions in institutional assets potentially tokenizable, Plume will measure success in how much real-world value it brings on-chain. Other metrics are TVL and user adoption: by April 2025 the RWA tokenization market crossed $20B in TVL overall, and Plume aspires to capture a significant share of that. If its partnerships mature (e.g., if even 5% of that $1 trillion Goldfinch pipeline comes on-chain), Plume’s TVL could grow exponentially.

  • Recent Highlights: By spring 2025, Plume had several noteworthy milestones:

    • The Apollo investment (Apr 2025) – which not only brought funding but also the opportunity to work with Apollo’s portfolio (Apollo manages $600B+ including credit, real estate, and private equity assets that could eventually be tokenized).
    • Grayscale consideration (Apr 2025) – being added to Grayscale’s watchlist is a milestone in recognition, potentially paving the way for a Plume investment product for institutions.
    • RWA Market Leadership: Plume’s team frequently publishes the “Plumeberg” newsletter tracking RWA market trends. In one issue, they celebrated RWA protocols surpassing $10B TVL and noted Plume’s key role in the narrative. Positioning Plume as core infrastructure for the sector’s growth marks a milestone of its own: becoming a reference platform in the RWA conversation.

In essence, Plume’s roadmap is about scaling up and out: scale up in terms of assets (from hundreds of millions to billions tokenized), and scale out in terms of features (privacy, compliance, decentralization) and integrations (connecting to more assets and users globally). Each successful asset onboarding (be it a Brazilian credit deal or an Apollo fund tranche) is a development milestone in proving the model. If Plume can maintain momentum, upcoming milestones might include major financial institutions launching products directly on Plume (e.g., a bank issuing a bond on Plume), or government entities using Plume for public asset auctions – all part of the longer-term vision of Plume as a global on-chain marketplace for real-world finance.

Metrics and Traction

While still early, Plume Network’s traction can be gauged by a combination of testnet metrics, partnership pipeline, and the overall growth of RWA on-chain:

  • Testnet Adoption: Plume’s incentivized testnet (2023) saw extraordinary participation. 18 million+ unique addresses and 280 million transactions were recorded – numbers rivaling or exceeding many mainnets. This was driven by an enthusiastic community drawn by Plume’s airdrop incentives and the allure of RWAs. It demonstrates a strong retail interest in the platform (though many may have been speculators aiming for rewards, it nonetheless seeded a large user base). Additionally, over 200 DeFi protocols deployed contracts on the testnet, signaling broad developer interest. This effectively primed Plume with a large user and developer community even before launch.

  • Community Size: Plume quickly built a social following in the millions (e.g., 1M followers on X/Twitter, 450k in Discord, etc.). They brand their community members as “Goons” – over 1 million “Goon” NFTs were minted as a part of testnet achievements. Such gamified growth reflects one of the fastest community buildups in recent Web3 memory, indicating that the narrative of real-world assets resonates with a wide audience in crypto.

  • Ecosystem and TVL Pipeline: At mainnet launch, Plume projected having over $1 billion in real-world assets tokenized or available on day one. In a statement, co-founder Chris Yin highlighted proprietary access to high-yield, privately held assets that are “exclusively” coming to Plume. Indeed, specific assets lined up included:

    • $500M from a Credbull private credit fund,
    • $300M in solar energy farms (Plural Energy),
    • $120M in healthcare (Medicaid receivables),
    • plus mineral rights and other esoteric assets. These sum to ~$1B, and Yin stated the aim to reach $3B tokenized by end of 2024. Such figures, if realized, would place Plume among the top chains for RWA TVL. By comparison, the entire RWA sector’s on-chain TVL was about $20B as of April 2025, so $3B on one platform would be a very significant share.
  • Current TVL / Usage: Since mainnet launch is recent, concrete TVL figures on Plume aren’t yet publicly reported on aggregators like DeFiLlama. However, we know several integrated projects bring their own TVL:

    • Ondo’s products (OUSG, etc.) had $623M in market value around early 2024 – some of that may now reside or be mirrored on Plume.
    • The tokenized assets via Mercado Bitcoin (Brazil) add a $40M pipeline.
    • Goldfinch Prime’s pool could attract large deposits (Goldfinch’s legacy pools originated ~$100M+ of loans; Prime could scale higher with institutions).
    • If Nest vaults aggregate multiple yields, that could quickly accumulate nine-figure TVL on Plume as stablecoin holders seek 5-10% yields from RWAs. As a qualitative metric, demand for RWA yields has been high even in bear markets – for instance, tokenized Treasury funds like Ondo’s saw hundreds of millions in a few months. Plume, concentrating many such offerings, could see a rapid uptick in TVL as DeFi users rotate into more “real” yields.
  • Transactions and Activity: We might anticipate relatively lower on-chain transaction counts on Plume compared to, say, a gaming chain, because RWA transactions are higher-value but less frequent (e.g., moving millions in a bond token vs. many micro-transactions). That said, if secondary trading picks up (on an order book exchange or AMM on Plume), we could see steady activity. The presence of 280M test txns suggests Plume can handle high throughput if needed. Plume’s low fees (designed to be cheaper than Ethereum’s) and composability encourage more complex strategies (like looping collateral or automated yield strategies run by smart contracts), which could drive interactions.

  • Real-World Impact: Another “metric” is traditional participation. Plume’s partnership with Apollo and others means institutional AUM (assets under management) connected to Plume is in the tens of billions (just counting Apollo’s involved funds, BlackRock’s BUIDL fund, etc.). While not all that value is on-chain, even a small allocation from each could quickly swell Plume’s on-chain assets. For example, BlackRock’s BUIDL fund (tokenized money market) hit $1B AUM within a year. Franklin Templeton’s on-chain government money fund reached $368M. If similar funds launch on Plume or existing ones connect, those figures reflect potential scale.

  • Security/Compliance Metrics: It’s worth noting Plume touts being fully onchain 24/7, permissionless yet compliant. One measure of success will be zero security incidents or defaults in the initial cohorts of RWA tokens. Metrics like payment yields delivered to users (e.g., X amount of interest paid out via Plume smart contracts from real assets) will build credibility. Plume’s design includes real-time auditing and on-chain verification of asset collateral (some partners provide daily transparency reports, as Ondo does for USDY). Over time, consistent, verified yield payouts and perhaps credit ratings on-chain could become key metrics to watch.

In summary, early indicators show strong interest and a robust pipeline for Plume. The testnet numbers demonstrate crypto community traction, and the partnerships outline a path to significant on-chain TVL and usage. As Plume transitions to steady state, we will track metrics like how many asset types are live, how much yield is distributed, and how many active users (especially institutional) engage on the platform. Given that the entire RWA category is growing fast (over $22.4B TVL as of May 2025, with a 9.3% monthly growth rate), Plume’s metrics should be viewed in context of this expanding pie. There is a real possibility that Plume could emerge as a leading RWA hub capturing a multi-billion-dollar share of the market if it continues executing.


Real-World Assets (RWA) in Web3: Overview and Significance

Real-World Assets (RWAs) refer to tangible or financial assets from the traditional economy that are tokenized on blockchain – in other words, digital tokens that represent ownership or rights to real assets or cash flows. These can include assets like real estate properties, corporate bonds, trade invoices, commodities (gold, oil), stocks, or even intangible assets like carbon credits and intellectual property. RWA tokenization is arguably one of the most impactful trends in crypto, because it serves as a bridge between traditional finance (TradFi) and decentralized finance (DeFi). By bringing real-world assets on-chain, blockchain technology can inject transparency, efficiency, and broader access into historically opaque and illiquid markets.

The significance of RWAs in Web3 has grown dramatically in recent years:

  • They unlock new sources of collateral and yield for the crypto ecosystem. Instead of relying on speculative token trading or purely crypto-native yield farming, DeFi users can invest in tokens that derive value from real economic activity (e.g., revenue from a real estate portfolio or interest from loans). This introduces “real yield” and diversification, making DeFi more sustainable.
  • For traditional finance, tokenization promises to increase liquidity and accessibility. Assets like commercial real estate or loan portfolios, which typically have limited buyers and cumbersome settlement processes, can be fractionalized and traded 24/7 on global markets. This can reduce financing costs and democratize access to investments that were once restricted to banks or large funds.
  • RWAs also leverage blockchain’s strengths: transparency, programmability, and efficiency. Settlement of tokenized securities can be near-instant and peer-to-peer, eliminating layers of intermediaries and reducing settlement times from days to seconds. Smart contracts can automate interest payments or enforce covenants (see the coupon sketch after this list). Additionally, the immutable audit trail of blockchains enhances transparency – investors can see exactly how an asset is performing (especially when coupled with oracle data) and trust that the token supply matches real assets (with on-chain proofs of reserve, etc.).
  • Importantly, RWA tokenization is seen as a key driver of the next wave of institutional adoption of blockchain. Unlike the largely speculative DeFi summer of 2020 or the NFT boom, RWAs appeal directly to the finance industry’s core by making familiar assets more efficient. A recent report by Ripple and BCG projected that the market for tokenized assets could reach $18.9 trillion by 2033, underscoring the vast addressable market. Even nearer term, growth is rapid – as of May 2025, RWA projects’ TVL was $22.45B (up ~9.3% in one month) and projected to hit ~$50B by end of 2025. Some estimates foresee $1–$3 trillion tokenized by 2030, with upper scenarios as high as $30T if adoption accelerates.
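
As a concrete illustration of that programmability, below is a minimal Python sketch of a pro-rata coupon distribution to token holders. On-chain versions run inside a smart contract (often pushing stablecoins to holders or accruing claims); the holder names and amounts here are purely illustrative.

```python
def distribute_coupon(balances: dict, coupon_amount: float) -> dict:
    """Split one coupon payment pro-rata across token holders.

    Illustrative only: a production system would run this logic in a
    smart contract and settle in a stablecoin, not off-chain floats.
    """
    total_supply = sum(balances.values())
    return {holder: coupon_amount * balance / total_supply
            for holder, balance in balances.items()}

# A tokenized bond pays a $5,000 quarterly coupon to three (hypothetical) holders:
holders = {"0xA": 600_000, "0xB": 300_000, "0xC": 100_000}
print(distribute_coupon(holders, 5_000))
# {'0xA': 3000.0, '0xB': 1500.0, '0xC': 500.0}
```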

In short, RWA tokenization is transforming capital markets by making traditional assets more liquid, borderless, and programmable. It represents a maturation of the crypto industry – moving beyond purely self-referential assets toward financing the real economy. As one analysis put it, RWAs are “rapidly shaping up to be the bridge between traditional finance and the blockchain world”, turning the long-hyped promise of blockchain disrupting finance into a reality. This is why 2024–2025 has seen RWAs touted as the growth narrative in Web3, attracting serious attention from big asset managers, governments, and Web3 entrepreneurs alike.

Key Protocols and Projects in the RWA Space

The RWA landscape in Web3 is broad, comprising various projects each focusing on different asset classes or niches. Here we highlight some key protocols and platforms leading the RWA movement, along with their focus areas and recent progress:

| Project / Protocol | Focus & Asset Types | Blockchain | Notable Metrics / Highlights |
| --- | --- | --- | --- |
| Centrifuge | Decentralized securitization of private credit – tokenizing real-world payment assets like invoices, trade receivables, real estate bridge loans, royalties, etc. via asset pools (Tinlake). Investors earn yield from financing these assets. | Polkadot parachain (Centrifuge Chain) with Ethereum dApp (Tinlake) integration | TVL ≈ $409M in pools; pioneered RWA DeFi with MakerDAO (Centrifuge pools back certain DAI loans). Partners with institutions like New Silver and FortunaFi for asset origination. Launching Centrifuge V3 for easier cross-chain RWA liquidity. |
| Maple Finance | Institutional lending platform – initially undercollateralized crypto loans (to trading firms), now pivoted to RWA-based lending. Offers pools where accredited lenders provide USDC to borrowers (now often backed by real-world collateral or revenue). Launched a Cash Management Pool for on-chain U.S. Treasury investments and Maple Direct for overcollateralized BTC/ETH loans. | Ethereum (V2 & Maple 2.0), previously Solana (deprecated) | $2.46B in total loans originated to date; shifted to fully collateralized lending after defaults in unsecured lending. Maple’s new Treasury pool allows non-US investors to earn ~5% on T-Bills via USDC. Its native token MPL (soon converting to SYRUP) captures protocol fees; Maple ranks #2 in private credit RWA TVL and is one of few with a liquid token. |
| Goldfinch | Decentralized private credit – originally provided undercollateralized loans to fintech lenders in emerging markets (Latin America, Africa, etc.) by pooling stablecoin from DeFi investors. Now launched Goldfinch Prime, targeting institutional investors to provide on-chain access to multi-billion-dollar private credit funds (managed by Apollo, Ares, Golub, etc.) in one diversified pool. Essentially brings established private debt funds on-chain for qualified investors. | Ethereum | Funded ~$100M in loans across 30+ borrowers since inception. Goldfinch Prime (2023) is offering exposure to top private credit funds (Apollo, Blackstone, T. Rowe Price, etc.) with thousands of underlying loans. Backed by a16z, Coinbase Ventures, etc. Aims to merge DeFi capital with proven TradFi credit strategies, with yields often 8-10%. GFI token governs the protocol. |
| Ondo Finance | Tokenized funds and structured products – pivoted from DeFi services to focusing on on-chain investment funds. Issuer of tokens like OUSG (Ondo Short-Term Government Bond Fund token – effectively tokenized shares of a U.S. Treasury fund) and OSTB/OMMF (money market fund tokens). Also offers USDY (tokenized deposit yielding ~5% from T-bills + bank deposits). Ondo also built Flux, a lending protocol to allow borrowing against its fund tokens. | Ethereum (tokens also deployed on Polygon, Solana, etc. for accessibility) | $620M+ in tokenized fund AUM (e.g. OUSG, USDY, etc.). OUSG is one of the largest on-chain Treasury products, at ~$580M AUM providing ~4.4% APY. Ondo’s funds are offered under SEC Reg D/S exemptions via a broker-dealer, ensuring compliance. Ondo’s approach of using regulated SPVs and partnering with BlackRock’s BUIDL fund has set a model for tokenized securities in the US. ONDO token (governance) has a ~$2.8B FDV with 15% in circulation (indicative of high investor expectations). |
| MakerDAO (RWA Program) | Decentralized stablecoin issuer (DAI) that has increasingly allocated its collateral to RWA investments. Maker’s RWA effort involves vaults that accept real-world collateral (e.g. loans via Huntingdon Valley Bank, tokens from Centrifuge pools (DROP), and investments into short-term bonds through off-chain structures with partners like BlockTower and Monetalis). Maker essentially invests DAI into RWA to earn yield, which shores up DAI’s stability. | Ethereum | As of late 2023, Maker had over $1.6B in RWA exposure, including >$1B in U.S. Treasury and corporate bonds and hundreds of millions in loans to real estate and banks (Maker’s Centrifuge vaults, bank loans, and Société Générale bond vault). This now comprises a significant portion of DAI’s collateral, contributing real yield (~4-5% on those assets) to Maker. Maker’s pivot to RWA (part of the “Endgame” plan) has been a major validation for RWA in DeFi. However, Maker does not tokenize these assets for broader use; it holds them in trust via legal entities to back DAI. |
| TrueFi & Credix | (Grouping two similar credit protocols) TrueFi – a protocol for uncollateralized lending to crypto and TradFi borrowers, with a portion of its book in real-world loans (e.g. lending to fintechs). Credix – a Solana-based private credit marketplace connecting USDC lenders to Latin American credit deals (often receivables and SME loans, tokenized as bonds). Both enable underwriters to create loan pools that DeFi users can fund, thus bridging to real-economy lending. | Ethereum (TrueFi), Solana (Credix) | TrueFi facilitated ~$500M in loans (crypto + some RWA) since launch, though faced defaults; its focus is shifting to credit fund tokenization. Credix has funded tens of millions in receivables in Brazil/Colombia, and in 2023 partnered with Circle and VISA on a pilot to convert receivables to USDC for faster financing. These are notable but smaller players relative to Maple/Goldfinch. Credix’s model influenced Goldfinch’s design. |
| Securitize & Provenance (Figure) | These are more CeFi-oriented RWA platforms: Securitize provides tokenization technology for enterprises (it has tokenized private equity funds, stocks, and bonds for clients, operating under full compliance; recently partnered with Hamilton Lane to tokenize parts of its $800M funds). Provenance Blockchain, built by Figure Technologies, is a fintech platform mainly for loan securitization and trading (they’ve done HELOC loans, mortgage-backed securities, etc. on their private chain). | Private or permissioned chains (Provenance is a Cosmos-based chain; Securitize issues tokens on Ethereum, Polygon, etc.) | Figure’s Provenance has facilitated over $12B in loan originations on-chain (mostly between institutions) and is arguably one of the largest by volume (it is the “Figure” noted as top in the private credit sector). Securitize has tokenized multiple funds and even enabled retail to buy tokenized equity in companies like Coinbase pre-IPO. They aren’t “DeFi” platforms but are key bridges for RWAs – often working with regulated entities and focusing on compliance (Securitize is a registered broker-dealer/transfer agent). Their presence underscores that RWA tokenization spans both decentralized and enterprise realms. |

(Table sources: Centrifuge TVL, Maple transition and loan volume, Goldfinch Prime description, Ondo stats, Ondo–BlackRock partnership, Maker & market projection, Maple rank.)

Centrifuge: Often cited as the first RWA DeFi protocol (launched 2019), Centrifuge allows asset originators (like financing companies) to pool real-world assets and issue ERC-20 tokens called DROP (senior tranche) and TIN (junior tranche) representing claims on the asset pool. These tokens can be used as collateral in MakerDAO or held for yield. Centrifuge operates its own chain for efficiency but connects to Ethereum for liquidity. It currently leads the pack in on-chain private credit TVL (~$409M), demonstrating product-market fit in areas like invoice financing. A recent development is Centrifuge partnering with Clearpool’s upcoming RWA chain (Ozean) to expand its reach, and working on Centrifuge V3 which will enable assets to be composable across any EVM chain (so Centrifuge pools could be tapped by protocols on chains like Ethereum, Avalanche, or Plume).
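
To make the tranche mechanics concrete, here is a minimal Python sketch of a two-tranche waterfall under simplified assumptions (a single payout period and a fixed senior coupon). It illustrates the DROP/TIN seniority concept described above, not Centrifuge’s actual on-chain logic.

```python
# Illustrative two-tranche waterfall: the senior (DROP-like) tranche is paid
# first; the junior (TIN-like) tranche absorbs shortfalls and keeps any upside.

def waterfall(cash_collected: float, senior_principal: float,
              senior_rate: float, junior_principal: float) -> dict:
    """Distribute one period's pool collections across two tranches."""
    senior_due = senior_principal * (1 + senior_rate)   # principal + fixed coupon
    senior_paid = min(cash_collected, senior_due)       # senior has first claim
    junior_paid = max(cash_collected - senior_paid, 0)  # junior takes the residual
    return {"senior_paid": senior_paid,
            "junior_paid": junior_paid,
            "junior_return": junior_paid / junior_principal - 1}

# Healthy pool: a $1M pool ($800k senior at 5%, $200k junior) collects $1.06M.
print(waterfall(1_060_000, 800_000, 0.05, 200_000))  # junior earns +10%

# Stressed pool: defaults mean only $1.02M comes back.
print(waterfall(1_020_000, 800_000, 0.05, 200_000))  # senior still whole; junior at -10%
```

The asymmetry is the point: the senior tranche trades upside for protection, which is why DROP-style tokens can back conservative collateral like DAI while TIN-style tokens carry the first-loss risk.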

Maple Finance: Maple showed the promise and perils of undercollateralized DeFi lending. It provided a platform for delegate managers to run credit pools lending to market makers and crypto firms on an unsecured basis. After high-profile defaults in 2022 (e.g. Orthogonal Trading’s collapse related to FTX) which hit Maple’s liquidity, Maple chose to reinvent itself with a safer model. Now Maple’s focus is twofold: (1) RWA “cash management” – giving stablecoin lenders access to Treasury yields, and (2) overcollateralized crypto lending – requiring borrowers to post liquid collateral (BTC/ETH). The Treasury pool (in partnership with Icebreaker Finance) was launched on Solana in 2023, then on Ethereum, enabling accredited lenders to earn ~5% on USDC by purchasing short-duration U.S. Treasury notes. Maple also introduced Maple Direct pools that lend to institutions against crypto collateral, effectively becoming a facilitator for more traditional secured lending. The Maple 2.0 architecture (launched Q1 2023) improved transparency and control for lenders. Despite setbacks, Maple has facilitated nearly $2.5B in loans cumulatively and remains a key player, now straddling both crypto and RWA lending. Its journey underscores the importance of proper risk management and has validated the pivot to real-world collateral for stability.

Goldfinch: Goldfinch’s innovation was to allow “borrower pools” where real-world lending businesses (like microfinance institutions or fintech lenders) could draw stablecoin liquidity from DeFi without posting collateral, instead relying on the “trust-through-consensus” model (where backers stake junior capital to vouch for the borrower). It enabled loans in places like Kenya, Nigeria, Mexico, etc., delivering yields often above 10%. However, to comply with regulations and attract larger capital, Goldfinch introduced KYC gating and Prime. Now with Goldfinch Prime, the protocol is basically onboarding well-known private credit fund managers and letting non-US accredited users provide capital to them on-chain. For example, rather than lending to a single fintech lender, a Goldfinch Prime user can invest in a pool that aggregates many senior secured loans managed by Ares or Apollo – essentially investing in slices of those funds (which off-chain are massive, e.g. Blackstone’s private credit fund is $50B+). This moves Goldfinch upmarket: it’s less about frontier market fintech loans and more about giving crypto investors an entry to institutional-grade yield (with lower risk). Goldfinch’s GFI token and governance remain, but the user base and pool structures have shifted to a more regulated stance. This reflects a broader trend: RWA protocols increasingly working directly with large TradFi asset managers to scale.

Ondo Finance: Ondo’s transformation is a case study in adapting to demand. When DeFi degen yields dried up in the bear market, the thirst for safe yield led Ondo to tokenize T-bills and money market funds. Ondo set up a subsidiary (Ondo Investments) and registered offerings so that accredited and even retail (in some regions) could buy regulated fund tokens. Ondo’s flagship OUSG token is effectively tokenized shares of a short-term US Treasuries ETF; it grew quickly to over half a billion in circulation, confirming huge demand for on-chain Treasuries. Ondo also created USDY, which takes a step further by mixing T-bills and bank deposits to approximate a high-yield savings account on-chain. At ~4.6% APY and a low $500 entry, USDY aims for mass market within crypto. To complement these, Ondo’s Flux protocol lets holders of OUSG or USDY borrow stablecoins against them (solving liquidity since these tokens might otherwise be lockups). Ondo’s success has made it a top-3 RWA issuer by TVL. It’s a prime example of working within regulatory frameworks (SPVs, broker-dealers) to bring traditional securities on-chain. It also collaborates (e.g., using BlackRock’s fund) rather than competing with incumbents, which is a theme in RWA: partnership over disruption.

MakerDAO: While not a standalone RWA platform, Maker deserves mention because it effectively became one of the largest RWA investors in crypto. Maker realized that diversifying DAI’s collateral beyond volatile crypto could both stabilize DAI and generate revenue (through real-world yields). Starting with small experiments (like a loan to a U.S. bank, and vaults for Centrifuge pool tokens), Maker ramped up in 2022-2023 by allocating hundreds of millions of DAI to buy short-term bonds and invest in money market funds via custody accounts. By mid-2023 Maker had allocated $500M to a BlackRock-managed bond fund and a similar amount to a startup (Monetalis) to invest in Treasuries – these are analogous to Ondo’s approach but done under Maker governance. Maker also onboarded loans like the Societe Generale $30M on-chain bond, and vaults for Harbor Trade’s Trade Finance pool, etc. The revenue from these RWA investments has been substantial – by some reports, Maker’s RWA portfolio generates tens of millions in annualized fees, which has made DAI’s system surplus grow (and MKR token started buybacks using those profits). This RWA strategy is central to Maker’s “Endgame” plan, where eventually Maker might spin out specialized subDAOs to handle RWA. The takeaway is that even a decentralized stablecoin protocol sees RWA as key to sustainability, and Maker’s scale (with DAI ~$5B supply) means it can materially impact real-world markets by deploying liquidity there.

Others: There are numerous other projects in the RWA space, each carving out a niche:

  • Tokenized Commodities: Projects like Paxos Gold (PAXG) and Tether Gold (XAUT) have made gold tradable on-chain (combined market cap of ~$1.4B). These tokens give the convenience of crypto with the stability of gold and are fully backed by physical gold in vaults.
  • Tokenized Stocks: Firms like Backed Finance (and, earlier, synthetic-asset projects such as Mirror) have issued tokens mirroring equities like Apple (bAAPL) or Tesla. Backed’s tokens (e.g., bNVDA for Nvidia) are 100% collateralized by shares held by a custodian and available under EU regulatory sandbox exemptions, enabling 24/7 trading of stocks on DEXs. The total for tokenized stocks is still small (~$0.46B), but growing as interest in around-the-clock trading and fractional ownership picks up.
  • Real Estate Platforms: Lofty AI (Algorand-based) allows fractional ownership of rental properties with tokens as low as $50 per fraction. RealT (Ethereum) offers tokens for shares in rental homes in Detroit and elsewhere (paying rental income as USDC dividends). Real estate is a huge market ($300T+ globally), so even a fraction coming on-chain could dwarf other categories; projections see $3–4 trillion in tokenized real estate by 2030-2035 if adoption accelerates. While current on-chain real estate is small, pilots are underway (e.g., Hong Kong’s government sold tokenized green bonds; Dubai is running a tokenized real estate sandbox).
  • Institutional Funds: Beyond Ondo, traditional asset managers are launching tokenized versions of their funds. We saw BlackRock’s BUIDL (a tokenized money market fund that grew from $100M to $1B AUM in one year). WisdomTree issued 13 tokenized ETFs by 2025. Franklin Templeton’s government money fund (BENJI token on Polygon) approached $370M AUM. These efforts indicate that large asset managers view tokenization as a new distribution channel. It also means competition for crypto-native issuers, but overall it validates the space. Many of these tokens target institutional or accredited investors initially (to comply with securities laws), but over time could open to retail as regulations evolve.

Why multiple approaches? The RWA sector has a diverse cast because “real-world assets” is an extremely broad space. Different asset types have different risk, return, and regulatory profiles, necessitating specialized platforms:

  • Private credit (Maple, Goldfinch, Centrifuge) focuses on lending and debt instruments, requiring credit assessment and active management.
  • Tokenized securities/funds (Ondo, Backed, Franklin) deal with regulatory compliance to represent traditional securities on-chain one-to-one.
  • Real estate involves property law, titles, and often local regulations – some platforms work on REIT-like structures or NFTs that confer ownership of an LLC that owns a property.
  • Commodities like gold have simpler one-to-one backing models but require trust in custody and audits.

Despite this fragmentation, we see a trend of convergence and collaboration: e.g., Centrifuge partnering with Clearpool, Goldfinch partnering with Plume (and indirectly Apollo), Ondo’s assets being used by Maker and others, etc. Over time, we may get interoperability standards (perhaps via projects like RWA.xyz, which is building a data aggregator for all RWA tokens).

Common Asset Types Being Tokenized

Almost any asset with an income stream or market value can, in theory, be tokenized. In practice, the RWA tokens we see today largely fall into a few categories:

  • Government Debt (Treasuries & Bonds): This has become the largest category of on-chain RWA by value. Tokenized U.S. Treasury bills and bonds are highly popular as they carry low risk and ~4-5% yield – very attractive to crypto holders in a low DeFi yield environment. Multiple projects offer this: Ondo’s OUSG, Matrixdock’s treasury token (MTNT), Backed’s TBILL token, etc. As of May 2025, government securities dominate tokenized assets with ~$6.79B TVL on-chain, making it the single biggest slice of the RWA pie. This includes not just U.S. Treasuries, but also some European government bonds. The appeal is global 24/7 access to a safe asset; e.g., a user in Asia can buy a token at 3 AM that effectively puts money in U.S. T-Bills (a sketch of how such a token’s yield accrues appears after this list). We also see central banks and public entities experimenting: e.g., the Monetary Authority of Singapore (MAS) ran Project Guardian to explore tokenized bonds and forex; Hong Kong’s HSBC and CSOP launched a tokenized money market fund. Government bonds are likely the “killer app” of RWA to date.

  • Private Credit & Corporate Debt: These include loans to businesses, invoices, supply chain finance, consumer loans, etc., as well as corporate bonds and private credit funds. On-chain private credit (via Centrifuge, Maple, Goldfinch, Credix, etc.) is a fast-growing area and forms over 50% of the RWA market by count of projects (though not by value, since Treasuries dominate). Tokenized private credit often offers higher yields (8-15% APY) because of higher risk and less liquidity. Examples: Centrifuge tokens (DROP/TIN) backed by loan portfolios; Goldfinch’s pools of fintech loans; Maple’s pools to market makers; JPMorgan’s private credit blockchain pilot (they did intraday repo on-chain); and startups like Flowcarbon (tokenizing carbon credit-backed loans). Even trade receivables from governments (Medicaid claims) are being tokenized (as Plume highlighted). Additionally, corporate bonds are being tokenized: e.g., the European Investment Bank issued digital bonds on Ethereum; companies like Siemens did a €60M on-chain bond. There’s about $23B of tokenized “global bonds” on-chain as of early 2025 – a figure that’s still small relative to the $100+ trillion bond market, but the trajectory is upward.

  • Real Estate: Tokenized real estate can mean either debt (e.g., tokenized mortgages, real estate loans) or equity/ownership (fractional ownership of properties). Thus far, more activity has been in tokenized debt (because it fits into DeFi lending models easily). For instance, parts of a real estate bridge loan might be turned into DROP tokens on Centrifuge and used to generate DAI. On the equity side, projects like Lofty have tokenized residential rental properties (issuing tokens that entitle holders to rental income and a share of sale proceeds). We’ve also seen a few REIT-like tokens (RealT’s properties, etc.). Real estate is highly illiquid traditionally, so tokenization’s promise is huge – one could trade fractions of a building on Uniswap, or use a property token as collateral for a loan. That said, legal infrastructure is tricky (you often need each property in an LLC and the token represents LLC shares). Still, given projections of $3-4 trillion tokenized real estate by 2030-35, many are bullish that this sector will take off as legal frameworks catch up. A notable example: RedSwan tokenized portions of commercial real estate (like student housing complexes) and raised millions via token sales to accredited investors.

  • Commodities: Gold is the poster child here. Paxos Gold (PAXG) and Tether Gold (XAUT) together have over $1.4B market cap, offering investors on-chain exposure to physical gold (each token = 1 fine troy ounce stored in vault). These have become popular as a way to hedge in crypto markets. Other commodities tokenized include silver, platinum (e.g., Tether has XAGT, XAUT, etc.), and even oil to some extent (there were experiments with tokens for oil barrels or hash-rate futures). Commodity-backed stablecoins like Ditto’s eggs or soybean tokens have popped up, but gold remains dominant due to its stable demand. We can also include carbon credits and other environmental assets: tokens like MCO2 (Moss Carbon Credit) or Toucan’s nature-based carbon tokens had a wave of interest in 2021 as corporates looked at on-chain carbon offsets. In general, commodities on-chain are straightforward as they’re fully collateralized, but they require trust in custodians and auditors.

  • Equities (Stocks): Tokenized stocks allow 24/7 trading and fractional ownership of equities. Platforms like Backed (out of Switzerland) and DX.Exchange / FTX (earlier) issued tokens mirroring popular stocks (Tesla, Apple, Google, etc.). Backed’s tokens are fully collateralized (they hold the actual shares via a custodian and issue ERC-20 tokens representing them). These tokens can be traded on DEXs or held in DeFi wallets, which is novel since conventional stock trading is weekdays only. As of 2025, about $460M of tokenized equities are circulating – still a tiny sliver of the multi-trillion stock market, but it’s growing. Notably, in 2023, MSCI launched indices tracking tokenized assets including tokenized stocks, signaling mainstream monitoring. Another angle is synthetic equities (mirroring a stock’s price via derivatives without holding the stock, as projects like Synthetix did), but regulatory pushback (they can be seen as swaps) has made the fully backed approach the favored route now.

  • Stablecoins (fiat-backed): It’s worth mentioning that fiat-backed stablecoins like USDC, USDT are essentially tokenized real-world assets (each USDC is backed by $1 in bank accounts or T-bills). In fact, stablecoins are the largest RWA by far – over $200B in stablecoins outstanding (USDT, USDC, BUSD, etc.), mostly backed by cash, Treasury bills, or short-term corporate debt. This has often been cited as the first successful RWA use-case in crypto: tokenized dollars became the lifeblood of crypto trading and DeFi. However, in the RWA context, stablecoins are usually considered separately, because they are currency tokens, not investment products. Still, the existence of stablecoins has paved the way for other RWA tokens (and indeed, projects like Maker and Ondo effectively channel stablecoin capital into real assets).

  • Miscellaneous: We are starting to see even more exotic assets:

    • Fine Art and Collectibles: Platforms like Maecenas and Masterworks explored tokenizing high-end artworks (each token representing a share of a painting). NFTs have proven digital ownership, so it’s conceivable real art or luxury collectibles can be fractionalized similarly (though legal custody and insurance are considerations).
    • Revenue-Sharing Tokens: e.g., CityDAO and other DAOs experimented with tokens that give rights to a revenue stream (like a cut of city revenue or business revenue). These blur the line between securities and utility tokens.
    • Intellectual Property and Royalties: There are efforts to tokenize music royalties (so fans can invest in an artist’s future streaming income) or patents. Royalty Exchange and others have looked into this, allowing tokens that pay out when, say, a song is played (using smart contracts to distribute royalties).
    • Infrastructure and Physical assets: Companies have considered tokenizing things like data center capacity, mining hashpower, shipping cargo space, or even infrastructure projects (some energy companies looked at tokenizing ownership in solar farms or oil wells – Plume itself mentioned “uranium, GPUs, durian farms” as possibilities). These remain experimental but show the broad range of what could be brought on-chain.
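
To make the treasury-token yield mechanics concrete (see the Government Debt bullet above), here is a minimal Python sketch of how a yield-accruing token’s redemption price might compound daily. Products like USDY publish an official daily price; the APY, day count, and compounding convention here are illustrative assumptions.

```python
from datetime import date

def accrued_price(initial_price: float, apy: float, start: date, today: date) -> float:
    """Redemption price of a yield-accruing token, compounding daily.

    Illustrative only: real issuers publish an authoritative daily NAV/price
    and use their own day-count and compounding conventions.
    """
    days = (today - start).days
    daily_rate = (1 + apy) ** (1 / 365) - 1   # convert APY to a daily rate
    return initial_price * (1 + daily_rate) ** days

# A token issued at $1.00 with a ~5% APY, held for 90 days:
print(f"{accrued_price(1.00, 0.05, date(2025, 1, 1), date(2025, 4, 1)):.4f}")
# ≈ 1.0121 – the holder's yield shows up as price appreciation, tradable 24/7
```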

In summary, virtually any asset that can be legally and economically ring-fenced can be tokenized. The current focus has been on financial assets with clear cash flows or store-of-value properties (debt, commodities, funds) because they fit well with investor demand and existing law (e.g., an SPV can hold bonds and issue tokens relatively straightforwardly). More complex assets (like direct property ownership or IP rights) will likely take longer due to legal intricacies. But the tide is moving in that direction, as the technology proves itself with simpler assets first and then broadens.

It’s also important to note that each asset type’s tokenization must grapple with how to enforce rights off-chain: e.g., if you hold a token for a property, how do you ensure legal claim on that property? Solutions involve legal wrappers (LLCs, trust agreements) that recognize token holders as beneficiaries. Standardization efforts (like the ERC-1400 standard for security tokens or initiatives by the Interwork Alliance for tokenized assets) are underway to make different RWA tokens more interoperable and legally sound.

Trends & Innovations:

  • Institutional Influx: Perhaps the biggest trend is the entrance of major financial institutions and asset managers into the RWA blockchain space. In the past two years, giants like BlackRock, JPMorgan, Goldman Sachs, Fidelity, Franklin Templeton, WisdomTree, and Apollo have either invested in RWA projects or launched tokenization initiatives. For example, BlackRock’s CEO Larry Fink publicly praised “the tokenization of securities” as the next evolution. BlackRock’s own tokenized money market fund (BUIDL) reaching $1B AUM in one year is a proof-point. WisdomTree creating 13 tokenized index funds by 2025 shows traditional ETFs coming on-chain. Apollo not only invested in Plume but also partnered on tokenized credit (Apollo and Hamilton Lane worked with Figure’s Provenance to tokenize parts of their funds). The involvement of such institutions has a flywheel effect: it legitimizes RWA in the eyes of regulators and investors and accelerates development of compliant platforms. It’s telling that surveys show 67% of institutional investors plan to allocate an average 5.6% of their portfolio to tokenized assets by 2026. High-net-worth individuals similarly show ~80% interest in gaining exposure via tokenization. This is a dramatic shift from the 2017-2018 ICO era, as now the movement is institution-led rather than purely grassroots crypto-led.

  • Regulated On-Chain Funds: A notable innovation is bringing regulated investment funds directly on-chain. Instead of creating new instruments from scratch, some projects register traditional funds with regulators and then issue tokens that represent shares. Franklin Templeton’s OnChain U.S. Government Money Fund is an SEC-registered mutual fund whose share ownership is tracked on Stellar (and now Polygon) – investors buy a BENJI token which is effectively a share in a regulated fund, subject to all the usual oversight. Similarly, ARB ETF (Europe) launched a fully regulated digital bond fund on a public chain. This trend of tokenized regulated funds is crucial because it marries compliance with blockchain’s efficiency. It basically means the traditional financial products we know (funds, bonds, etc.) can gain new utility by existing as tokens that trade anytime and integrate with smart contracts. Grayscale’s consideration of $PLUME and similar moves by other asset managers to list crypto or RWA tokens in their offerings also indicates convergence of TradFi and DeFi product menus.

  • Yield Aggregation and Composability: As more RWA yield opportunities emerge, DeFi protocols are innovating to aggregate and leverage them. Plume’s Nest is one example of aggregating multiple yields into one interface. Another example is Yearn Finance beginning to deploy vaults into RWA products (Yearn considered investing in Treasuries through protocols like Notional or Maple). Index Coop created a yield index token that included RWA yield sources. We are also seeing structured products like tranching on-chain: e.g., protocols that issue a junior-senior split of yield streams (Maple explored tranching pools to offer safer vs. riskier slices). Composability means you could one day do things like use a tokenized bond as collateral in Aave to borrow a stablecoin, then use that stablecoin to farm elsewhere – complex strategies bridging TradFi yield and DeFi yield. This is starting to happen; for instance, Flux Finance (by Ondo) lets you borrow against OUSG and then you could deploy that into a stablecoin farm. Leveraged RWA yield farming may become a theme (though careful risk management is needed; see the sketch after this list).

  • Real-Time Transparency & Analytics: Another innovation is the rise of data platforms and standards for RWA. Projects like RWA.xyz aggregate on-chain data to track the market cap, yields, and composition of all tokenized RWAs across networks. This provides much-needed transparency – one can see how big each sector is, track performance, and flag anomalies. Some issuers provide real-time asset tracking: e.g., a token might be updated daily with NAV (net asset value) data from the TradFi custodian, and that can be shown on-chain. The use of oracles is also key – e.g., Chainlink oracles can report interest rates or default events to trigger smart contract functions (like paying out insurance if a debtor defaults). The move towards on-chain credit ratings or reputations is also starting: Goldfinch experimented with off-chain credit scoring for borrowers, Centrifuge has models to estimate pool risk. All of this is to make on-chain RWAs as transparent (or more so) than their off-chain counterparts.

  • Integration with CeFi and Traditional Systems: We see more blending of CeFi and DeFi in RWA. For instance, Coinbase introduced “Institutional DeFi” where they funnel client funds into protocols like Maple or Compound Treasury – giving institutions a familiar interface but yield sourced from DeFi. Bank of America and others have discussed using private blockchain networks to trade tokenized collateral with each other (for faster repo markets, etc.). On the retail front, fintech apps may start offering yields that under the hood come from tokenized assets. This is an innovation in distribution: users might not even know they’re interacting with a blockchain, they just see better yields or liquidity. Such integration will broaden the reach of RWA beyond crypto natives.
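
To illustrate the looping idea above, here is a minimal Python sketch of the gross economics of repeatedly borrowing against a yield-bearing RWA token and buying more of it. The asset yield, borrow rate, and LTV are hypothetical parameters – not Flux’s or Aave’s actual terms – and the sketch deliberately ignores fees, rate drift, and liquidation risk.

```python
def looped_apy(asset_apy: float, borrow_apr: float, ltv: float, loops: int) -> float:
    """Gross net APY of a looped RWA carry trade.

    Deposit 1 unit, borrow `ltv` against it, buy more of the asset,
    and repeat `loops` times. All parameters are hypothetical.
    """
    exposure = sum(ltv ** i for i in range(loops + 1))  # 1 + ltv + ltv^2 + ...
    debt = exposure - 1   # everything beyond the initial deposit is borrowed
    return exposure * asset_apy - debt * borrow_apr

# 5% asset yield, 3% stablecoin borrow rate, 80% LTV, 3 loops:
print(f"{looped_apy(0.05, 0.03, 0.80, 3):.2%}")  # ≈ 8.90% gross, before fees and risk
```

The takeaway is that leverage multiplies the spread between the asset yield and the borrow rate; if borrow rates rise above the asset yield, the loop turns negative, which is exactly the risk-management caveat flagged above.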

Challenges:

Despite the excitement, RWA tokenization faces several challenges and hurdles:

  • Regulatory Compliance and Legal Structure: Perhaps the number one challenge. By turning assets into digital tokens, you often turn them into securities in the eyes of regulators (if they weren’t already). This means projects must navigate securities laws, investment regulations, money transmitter rules, etc. Most RWA tokens (especially in the US) are offered under Reg D (private placement to accredited investors) or Reg S (offshore) exemptions. This limits participation: e.g., retail US investors usually cannot buy these tokens legally. Additionally, each jurisdiction has its own rules – what’s allowed in Switzerland (like Backed’s stock tokens) might not fly in the US without registration. There’s also the legal enforceability angle: a token is a claim on a real asset; ensuring that claim is recognized by courts is crucial. This requires robust legal structuring (LLCs, trusts, SPVs) behind the scenes. It’s complex and costly to set up these structures, which is why many RWA projects partner with legal firms or get acquired by existing players with licenses (for example, Securitize handles a lot of heavy lifting for others). Compliance also means KYC/AML: unlike DeFi’s permissionless nature, RWA platforms often require investors to undergo KYC and accreditation checks, either at token purchase or continuously via whitelists. This friction can deter some DeFi purists and also means these platforms can’t be fully open to “anyone with a wallet” in many cases.

  • Liquidity and Market Adoption: Tokenizing an asset doesn’t automatically make it liquid. Many RWA tokens currently suffer from low liquidity/low trading volumes. For instance, if you buy a tokenized loan, there may be few buyers when you want to sell. Market makers are starting to provide liquidity for certain assets (like stablecoins or Ondo’s fund tokens on DEXes), but order book depth is a work in progress. In times of market stress, there’s concern that RWA tokens could become hard to redeem or trade, especially if underlying assets themselves aren’t liquid (e.g., a real estate token might effectively only be redeemable when the property is sold, which could take months/years). Solutions include creating redemption mechanisms (like Ondo’s funds allow periodic redemptions through the Flux protocol or directly with the issuer), and attracting a diverse investor base to trade these tokens. Over time, as more traditional investors (who are used to holding these assets) come on-chain, liquidity should improve. But currently, fragmentation across different chains and platforms also hinders liquidity – efforts to standardize and maybe aggregate exchanges for RWA tokens (perhaps a specialized RWA exchange or more cross-listings on major CEXes) are needed.

  • Trust and Transparency: Ironically for blockchain-based assets, RWAs often require a lot of off-chain trust. Token holders must trust that the issuer actually holds the real asset and won’t misuse funds. They must trust the custodian holding collateral (in case of stablecoins or gold). They also must trust that if something goes wrong, they have legal recourse. There have been past failures (e.g., some earlier “tokenized real estate” projects that fizzled, leaving token holders in limbo). So, building trust is key. This is done through audits, on-chain proof-of-reserve, reputable custodians (e.g., Coinbase Custody, etc.), and insurance. For example, Paxos publishes monthly audited reports of PAXG reserves, and USDC publishes attestations of its reserves. MakerDAO requires overcollateralization and legal covenants when engaging in RWA loans to mitigate risk of default. Nonetheless, a major default or fraud in a RWA project could set the sector back significantly. This is why, currently, many RWA protocols focus on high-credit quality assets (government bonds, senior secured loans) to build a track record before venturing into riskier territory.

  • Technological Integration: Some challenges are technical. Integrating real-world data on-chain requires robust oracles. For example, pricing a loan portfolio or updating NAV of a fund requires data feeds from traditional systems. Any lag or manipulation in oracles can lead to incorrect valuations on-chain (a minimal staleness-check sketch follows this list). Additionally, scalability and transaction costs on mainnets like Ethereum can be an issue – moving potentially thousands of real-world payments (think of a pool of hundreds of loans, each with monthly payments) on-chain can be costly or slow. This is partly why specialized chains or Layer-2 solutions (like Plume, or Polygon for some projects, or even permissioned chains) are being used – to have more control and lower cost for these transactions. Interoperability is another technical hurdle: a lot of RWA action is on Ethereum, but some on Solana, Polygon, Polkadot, etc. Bridging assets between chains securely is still non-trivial (though projects like LayerZero, as used by Plume, are making progress). Ideally, an investor shouldn’t have to chase five different chains to manage a portfolio of RWAs – smoother cross-chain operability or a unified interface will be important.

  • Market Education and Perception: Many crypto natives originally were skeptical of RWAs (seeing them as bringing “off-chain risk” into DeFi’s pure ecosystem). Meanwhile, many TradFi people are skeptical of crypto. There is an ongoing need to educate both sides about the benefits and risks. For crypto users, understanding that a token is not just another meme coin but a claim on a legal asset with maybe lock-up periods, etc., is crucial. We’ve seen cases where DeFi users got frustrated that they couldn’t instantly withdraw from a RWA pool because off-chain loan settlements take time – managing expectations is key. Similarly, institutional players often worry about issues like custody of tokens (how to hold them securely), compliance (avoiding wallets that interact with sanctioned addresses, etc.), and volatility (ensuring the token technology is stable). Recent positive developments, like Binance Research showing RWA tokens have lower volatility and were even considered “safer than Bitcoin” during certain macro events, help shift perception. But broad acceptance will require time, success stories, and likely regulatory clarity that holding or issuing RWA tokens is legally safe.

  • Regulatory Uncertainty: While we covered compliance, a broader uncertainty is regulatory regimes evolving. The U.S. SEC has not yet given explicit guidance on many tokenized securities beyond enforcing existing laws (which is why most issuers use exemptions or avoid U.S. retail). Europe introduced MiCA (Markets in Crypto Assets) regulation which mostly carves out how crypto (including asset-referenced tokens) should be handled, and launched a DLT Pilot Regime to let institutions trade securities on blockchain with some regulatory sandboxes. That’s promising but not permanent law yet. Countries like Singapore, UAE (Abu Dhabi, Dubai), Switzerland are being proactive with sandboxes and digital asset regulations to attract tokenization business. A challenge is if regulations become too onerous or fragmented: e.g., if every jurisdiction demands a slightly different compliance approach, it adds cost and complexity. On the flip side, regulatory acceptance (like Hong Kong’s recent encouragement of tokenization or Japan exploring on-chain securities) could be a boon. In the U.S., a positive development is that certain tokenized funds (like Franklin’s) got SEC approval, showing that it’s possible within existing frameworks. But the looming question: will regulators eventually allow wider retail access to RWA tokens (perhaps through qualified platforms or raising the caps on crowdfunding exemptions)? If not, RWAfi might remain predominantly an institutional play behind walled gardens, which limits the “open finance” dream.

  • Scaling Trustlessly: Another challenge is how to scale RWA platforms without introducing central points of failure. Many current implementations rely on a degree of centralization (an issuer that can pause token transfers to enforce KYC, a central party that handles asset custody, etc.). While this is acceptable to institutions, it’s philosophically at odds with DeFi’s decentralization. Over time, projects will need to find the right balance: e.g., using decentralized identity solutions for KYC (so it’s not one party controlling the whitelist but a network of verifiers), or using multi-sig/community governance to control issuance and custody operations. We’re seeing early moves like Maker’s Centrifuge vaults where MakerDAO governance approves and oversees RWA vaults, or Maple decentralizing pool delegate roles. But full “DeFi” RWA (where even legal enforcement is trustless) is a hard problem. Eventually, maybe smart contracts and real-world legal systems will interface directly (for example, a loan token smart contract that can automatically trigger legal action via a connected legal API if default occurs – this is futuristic but conceivable).
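
As a concrete version of the oracle concern above, here is a minimal Python sketch of the kind of freshness and plausibility guard an RWA protocol might run before accepting an off-chain NAV report. The thresholds, field names, and logic are illustrative assumptions, not any specific oracle network’s API.

```python
import time

MAX_AGE_SECONDS = 26 * 60 * 60   # a NAV report older than ~1 day is stale
MAX_DEVIATION = 0.02             # a >2% jump is implausible for a T-bill fund

def validate_nav_update(reported_nav: float, reported_at: float,
                        last_accepted_nav: float) -> bool:
    """Accept an off-chain NAV report only if it is fresh and plausible.

    Illustrative guard logic: production systems would aggregate several
    oracle sources and escalate anomalies to governance, not just reject.
    """
    if time.time() - reported_at > MAX_AGE_SECONDS:
        return False  # stale: the fund may have repriced since this report
    if abs(reported_nav / last_accepted_nav - 1) > MAX_DEVIATION:
        return False  # implausible move: flag for review instead of ingesting
    return True

# A NAV of 1.0432 reported two hours ago, against a last accepted NAV of 1.0430:
print(validate_nav_update(1.0432, time.time() - 7_200, 1.0430))  # True
```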

In summary, the RWA space is rapidly innovating to tackle these challenges. It’s a multi-disciplinary effort: requiring savvy in law, finance, and blockchain tech. Each success (like a fully repaid tokenized loan pool, or a smoothly redeemed tokenized bond) builds confidence. Each challenge (like a regulatory action or an asset default) provides lessons to strengthen the systems. The trajectory suggests that many of these hurdles will be overcome: the momentum of institutional involvement and the clear benefits (efficiency, liquidity) mean tokenization is likely here to stay. As one RWA-focused newsletter put it, “tokenized real-world assets are emerging as the new institutional standard… the infrastructure is finally catching up to the vision of on-chain capital markets.”

Regulatory Landscape and Compliance Considerations

The regulatory landscape for RWAs in crypto is complex and still evolving, as it involves the intersection of traditional securities/commodities laws with novel blockchain technology. Key points and considerations include:

  • Securities Laws: In most jurisdictions, if an RWA token represents an investment in an asset with an expectation of profit (which is often the case), it is deemed a security. For example, in the U.S., tokens representing fractions of income-generating real estate or loan portfolios squarely fall under the definition of investment contracts (Howey Test) or notes, and thus must be registered or offered under an exemption. This is why nearly all RWA offerings to date in the U.S. use private offering exemptions (Reg D 506(c) for accredited investors, Reg S for offshore, Reg A+ for limited public raises, etc.). Compliance with these means restricting token sales to verified investors, implementing transfer restrictions (tokens can only move between whitelisted addresses – a sketch of this check appears at the end of this list), and providing necessary disclosures. For instance, Ondo’s OUSG and Maple’s Treasury pool required investors to clear KYC/AML and accreditation checks, and tokens are not freely transferable to unapproved wallets. This creates a semi-permissioned environment, quite different from open DeFi. Europe under MiFID II/MiCA similarly treats tokenized stocks or bonds as digital representations of traditional financial instruments, requiring prospectuses or using the DLT Pilot regime for trading venues. Bottom line: RWA projects must integrate legal compliance from day one – many have in-house counsel or work with legal-tech firms like Securitize, because any misstep (like selling a security token to the public without exemption) could invite enforcement.

  • Consumer Protection and Licensing: Some RWA platforms may need additional licenses. For example, if a platform holds customer fiat to convert into tokens, it might need a money transmitter license or equivalent. If it provides advice or brokerage (matching borrowers and lenders), it might need broker-dealer or ATS (Alternative Trading System) licensing (this is why some partner with broker-dealers – Securitize, INX, Oasis Pro etc., which have ATS licenses to run token marketplaces). Custody of assets (like real estate deeds or cash reserves) might require trust or custody licenses. Anchorage being a partner to Plume is significant because Anchorage is a qualified custodian – institutions feel more at ease if a licensed bank is holding the underlying asset or even the private keys of tokens. In Asia and the Middle East, regulators have been granting specific licenses for tokenization platforms (e.g., the Abu Dhabi Global Market’s FSRA issues permissions for crypto assets including RWA tokens, MAS in Singapore gives project-specific approvals under its sandbox).

  • Regulatory Sandboxes and Government Initiatives: A positive trend is regulators launching sandboxes or pilot programs for tokenization. The EU’s DLT Pilot Regime (2023) allows approved market infrastructures to test trading tokenized securities up to certain sizes without full compliance with every rule – this has led to several European exchanges piloting blockchain bond trading. Dubai announced a tokenization sandbox to boost its digital finance hub. Hong Kong in 2023-24 made tokenization a pillar of its Web3 strategy, with its SFC exploring tokenized green bonds and art. The UK in 2024 consulted on recognizing digital securities under English law (which already recognizes crypto as property). Japan updated its laws to allow security tokens (there called “electronically recorded transferable rights”), and several tokenized securities have been issued under that framework. These official programs indicate a willingness by regulators to modernize laws to accommodate tokenization – which could eventually simplify compliance (e.g., creating special categories for tokenized bonds that streamline approval).

  • Travel Rule / AML: Crypto’s global nature triggers AML laws. FATF’s “travel rule” requires that when crypto (including tokens) above a certain threshold is transferred between VASPs (exchanges, custodians), identifying info travels with it. If RWA tokens are mainly transacted on KYC’ed platforms, this is manageable, but if they enter the wider crypto ecosystem, compliance gets tricky. Most RWA platforms currently keep a tight grip: transfers are often restricted to whitelisted addresses whose owners have done KYC. This mitigates AML concerns (as every holder is known). Still, regulators will expect robust AML programs – e.g., screening wallet addresses against sanctions (OFAC lists, etc.). There was a case of a tokenized bond platform in the UK that had to unwind some trades because a token holder became a sanctioned entity – such scenarios will test protocols’ ability to comply. Many platforms build in pause or freeze functions to comply with law enforcement requests (this is controversial in DeFi, but for RWA it’s often non-negotiable to have the ability to lock tokens tied to wrongdoing).

  • Taxation and Reporting: Another compliance consideration: how are these tokens taxed? If you earn yield from a tokenized loan, is it interest income? If you trade a tokenized stock, do wash sale rules apply? Tax authorities have yet to issue comprehensive guidance. In the interim, platforms often provide tax reports to investors (e.g., a Form 1099 in the US for interest or dividends earned via tokens). The transparency of blockchain can help here, as every payment can be recorded and categorized. But cross-border taxation (if someone in Europe holds a token paying US-source interest) can be complex – requiring things like digital W-8BEN forms, etc. This is more of an operational challenge than a roadblock, but it adds friction that automated compliance tech will need to solve.

  • Enforcement and Precedents: We’ve not yet seen many high-profile enforcement actions specifically for RWA tokens – likely because most are trying to comply. However, we have seen enforcement in adjacent areas: e.g., the SEC’s actions against crypto lending products (BlockFi, etc.) underscore that offering yields without registering can be a violation. If an RWA platform slipped up and, say, allowed retail to buy security tokens freely, it could face similar action. There’s also the question of secondary trading venues: If a decentralized exchange allows trading of a security token between non-accredited investors, is that unlawful? Likely yes in the US. This is why a lot of RWA tokens are not listed on Uniswap or are wrapped in a way that restricts addresses. It’s a fine line to walk between DeFi liquidity and compliance – many are erring on the side of compliance, even if it reduces liquidity.

  • Jurisdiction and Conflict of Laws: RWAs by nature connect to specific jurisdictions (e.g., a tokenized real estate in Germany falls under German property law). If tokens trade globally, there can be conflicts of law. Smart contracts might need to encode which law governs. Some platforms choose friendly jurisdictions for incorporation (e.g., the issuer entity in the Cayman Islands and the assets in the U.S., etc.). It’s complex but solvable with careful legal structuring.

  • Investor Protection and Insurance: Regulators will also care about investor protection: ensuring that token holders have clear rights. For example, if a token is supposed to be redeemable for a share of asset proceeds, the mechanism for that must be legally enforceable. Some tokens represent debt securities that can default – what disclosures were given about that risk? Platforms often publish offering memorandums or prospectuses (Ondo did for its tokens). Over time, regulators might require standardized risk disclosures for RWA tokens, much like mutual funds provide. Also, insurance might be mandated or at least expected – for instance, insuring a building in a real estate token, or having crime insurance for a custodian holding collateral.

  • Decentralization vs Regulation: There’s an inherent tension: the more decentralized and permissionless you make an RWA platform, the more it rubs against current regulations, which assume identifiable intermediaries. One evolving strategy is to use Decentralized Identities (DID) and verifiable credentials to square this circle. For example, a wallet could hold a credential that proves its owner is accredited without revealing their identity on-chain, and smart contracts could check for that credential before allowing a transfer – making compliance automated while preserving some privacy. Projects like Xref (on the XDC network) and Astra Protocol are exploring this. If successful, regulators might accept these novel approaches, which could allow permissionless trading among vetted participants. But these approaches are still nascent.
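To make the gating mechanics described above concrete, here is a minimal Python sketch of a credential-gated transfer check. All names are hypothetical and the logic is deliberately simplified; production systems implement this in smart-contract code (permissioned token standards in the spirit of ERC-3643), but the control flow is the same:

```python
from dataclasses import dataclass, field

# Minimal sketch of a permissioned RWA token: transfers succeed only
# between addresses holding a valid compliance credential (KYC'd,
# accredited, not sanctioned). Names are illustrative, not a real API.

@dataclass
class Credential:
    holder: str
    accredited: bool
    sanctioned: bool

class CredentialRegistry:
    """Stands in for a decentralized identity / verifier network."""
    def __init__(self):
        self._creds: dict[str, Credential] = {}

    def attest(self, cred: Credential):
        self._creds[cred.holder] = cred

    def is_eligible(self, addr: str) -> bool:
        cred = self._creds.get(addr)
        return cred is not None and cred.accredited and not cred.sanctioned

@dataclass
class PermissionedToken:
    registry: CredentialRegistry
    balances: dict[str, int] = field(default_factory=dict)
    frozen: bool = False          # issuer "pause" switch for legal holds

    def transfer(self, sender: str, recipient: str, amount: int) -> bool:
        if self.frozen:
            return False          # e.g., court order or sanctions freeze
        if not (self.registry.is_eligible(sender)
                and self.registry.is_eligible(recipient)):
            return False          # both sides must hold valid credentials
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

registry = CredentialRegistry()
registry.attest(Credential("alice", accredited=True, sanctioned=False))
registry.attest(Credential("bob", accredited=True, sanctioned=False))
token = PermissionedToken(registry, balances={"alice": 100})
assert token.transfer("alice", "bob", 40)        # both whitelisted: ok
assert not token.transfer("bob", "carol", 10)    # carol has no credential
```

The issuer-controlled `frozen` flag mirrors the pause/freeze functions discussed under the AML bullet above, while the registry stands in for whatever verifies credentials – one issuer today, potentially a network of verifiers tomorrow.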

In essence, regulation is the make-or-break factor for RWA adoption. The current landscape shows regulators are interested and cautiously supportive, but also vigilant. The RWA projects that thrive will be those that proactively embrace compliance yet innovate to make it as seamless as possible. Jurisdictions that provide clear, accommodative rules will attract more of this business (we’ve seen significant tokenization activity gravitate to places like Switzerland, Singapore, and the UAE due to clarity there). Meanwhile, the industry is engaging with regulators – for instance, by forming trade groups or responding to consultations – to help shape sensible policies. A likely outcome is that regulated DeFi will emerge as a category: platforms like those under Plume’s umbrella could become Alternative Trading Systems (ATS) or registered digital asset securities exchanges for tokenized assets, operating under licenses but with blockchain infrastructure. This hybrid approach may satisfy regulators’ objectives while still delivering the efficiency gains of crypto rails.

Investment and Market Size Data

The market for tokenized real-world assets has grown impressively and is projected to explode in the coming years, reaching into the trillions of dollars if forecasts hold true. Here we’ll summarize some key data points on market size, growth, and investment trends:

  • Current On-Chain RWA Market Size: As of mid-2025, the total on-chain Real-World Asset market (excluding traditional stablecoins) is in the tens of billions. Different sources peg slightly different totals depending on inclusion criteria, but a May 2025 analysis put it at $22.45 billion in Total Value Locked. This figure was up ~9.3% from the previous month, showcasing rapid growth. The composition of that ~$22B (as previously discussed) includes around $6.8B in government bonds, $1.5B in commodity tokens, $0.46B in equities, $0.23B in other bonds, and a few billion in private credit and funds. For perspective, this is still small relative to the broader crypto market (~$1.2T in market cap as of 2025, largely driven by BTC and ETH), but it is the fastest-growing segment of crypto. Stablecoins (~$226B) would dwarf these numbers if counted, but they are usually tracked separately.

  • Growth Trajectory: The RWA market grew at roughly a 32% annual rate in 2024. Extrapolating that rate, or assuming accelerating adoption, some estimate $50B by end of 2025 as plausible. Beyond that, industry projections become very large:

    • BCG and others (2030+): The often-cited BCG/Ripple report projected $16 trillion by 2030 (and ~$19T by 2033) in tokenized assets. This includes broad tokenization of financial markets (not just DeFi-centric usage). This figure would represent about 10% of all assets tokenized, which is aggressive but not unthinkable given tokenization of cash (stablecoins) is already mainstream.
    • Citi GPS Report (2022) talked about $4–5 trillion tokenized by 2030 as a base case, with higher scenarios if institutional adoption is faster.
    • The LinkedIn analysis we saw noted projections ranging from $1.3 trillion to $30 trillion by 2030 – indicating a lot of uncertainty but consensus that trillions are on the table.
    • Even the conservative end (say $1-2T by 2030) would mean a >50x increase from today’s ~$20B level, which gives a sense of the strong growth expectations.
  • Investment into RWA Projects: Venture capital and institutional investment are flowing into RWA startups:

    • Plume’s own funding ($20M Series A, etc.) is one example of VC conviction.
    • Goldfinch raised ~$25M (led by a16z in 2021). Centrifuge raised ~$4M in 2021 and more via token sales; it’s also backed by Coinbase and others.
    • Maple raised $10M Series A in 2021, then additional in 2022.
    • Ondo raised $20M in 2022 (from Founders Fund and Pantera) and more recently did a token sale.
    • There’s also new dedicated funds: e.g., a16z’s crypto fund and others earmarked portions for RWA; Franklin Templeton in 2022 joined a $20M round for a tokenization platform; Matrixport launched a $100M fund for tokenized Treasuries.
    • Traditional finance is investing: Nasdaq Ventures invested in a tokenization startup (XYO Network), London Stock Exchange Group acquired TORA (with tokenization capabilities), etc.
    • There have been acquisitions too: Securitize acquired Distributed Technology Markets to obtain a broker-dealer license; INX (a token exchange) has been raising money to expand its offerings.

    Overall, tens of millions have been invested into the leading RWA protocols, and larger financial institutions are acquiring stakes or forming joint ventures in this arena. Apollo’s direct investment in Plume and Hamilton Lane partnering with Securitize to tokenize funds (with Hamilton Lane’s funds being multi-billion themselves) show that this is not just VC bets but real money engagement.

  • Notable On-Chain Assets and Performance: Some data on specific tokens can illustrate traction:

    • Ondo’s OUSG: launched early 2023, by early 2025 it had >$580M outstanding, delivering ~4-5% yield. It rarely deviates in price because it’s fully collateralized and redeemable.
    • Franklin’s BENJI: by mid-2023 reached $270M, and by 2024 ~$368M. It’s one of the first instances of a major US mutual fund being reflected on-chain.
    • MakerDAO’s RWA earnings: Maker, through its ~$1.6B RWA investments, was earning on the order of $80M+ annualized in yield by late 2023 (mostly from bonds). This turned Maker’s finances around after crypto yields dried up.
    • Maple’s Treasury pool: in its pilot, raised ~$22M for T-bill investments from fewer than ten institutional participants. Maple’s total lending after restructuring is smaller now (~$50-100M in active loans), but it’s starting to tick up as trust returns.
    • Goldfinch: funded ~$120M in loans, of which ~$90M has been repaid, with under $1M in defaults (one notable default by a lending business in Kenya was partially recovered). The GFI token peaked at a $600M market cap in late 2021 and now sits much lower (~$50M), indicating a market re-rating of risk but continued interest.
    • Centrifuge: about 15 active pools. Some key ones (like ConsolFreight’s invoice pool, New Silver’s real estate rehab loan pool) each in the $5-20M range. Centrifuge’s token (CFG) has a market cap around $200M in 2025.
    • Overall RWA Returns: Many RWA tokens offer yields in the 4-10% range. For example, Aave’s yield on stablecoins might be ~2%, whereas putting USDC into Goldfinch’s senior pool yields ~8%. This spread draws DeFi capital gradually into RWA. During crypto market downturns, RWA yields looked especially attractive as they were stable, leading analysts to call RWAs a “safe haven” or “hedge” in Web3.
  • Geographical/Market Segments: A breakdown by region: most tokenized Treasuries are US-based assets offered by US or global firms (Ondo, Franklin, Backed). Europe’s contributions are in tokenized ETFs and bonds (several German and Swiss startups, and big banks like Santander and SocGen doing on-chain bond issues). Asia: Singapore’s Marketnode platform is tokenizing bonds; Japan’s SMBC tokenized some credit products. The Middle East: Dubai’s DFSA approved a tokenized fund. Latin America: a number of experiments, e.g., Brazil’s central bank is tokenizing a portion of bank deposits as part of its CBDC project and is considering tokenizing other assets. Africa: projects like Kotani Pay have looked at tokenized micro-asset financing. These indicate tokenization is a global trend, but the US remains the biggest source of underlying assets (due to Treasuries and large credit funds) while Europe is leading on regulatory clarity for trading.

  • Market Sentiment: The narrative around RWAs has shifted very positively in 2024-2025. Crypto media, which used to focus mostly on pure DeFi, now regularly reports on RWA milestones (e.g., “RWA market surpasses $20B despite crypto downturn”). Ratings agencies like Moody’s are studying on-chain assets; major consulting firms (BCG, Deloitte) publish tokenization whitepapers. The sentiment is that RWAfi could drive the next bull phase of crypto by bringing in trillions of value. Even Grayscale considering a Plume product suggests investor appetite for RWA exposure packaged in crypto vehicles. There’s also recognition that RWA is partly counter-cyclical to crypto – when crypto yields are low, people seek RWAs; when crypto booms, RWA provides stable diversification. This leads many investors to view RWA tokens as a way to hedge crypto volatility (e.g., Binance Research found RWA tokens remained stable, and even considered them “safer than Bitcoin”, during certain macro volatility).

To conclude this section with hard numbers: $20-22B on-chain now, heading to $50B+ in a year or two, and potentially $1T+ within this decade. Investment is pouring in, with dozens of projects collectively backed by well over $200M in venture funding. Traditional finance is actively experimenting, with over $2-3B in real assets already issued on public or permissioned chains by big institutions (including multiple $100M+ bond issues). If even 1% of the global bond market (~$120T) and 1% of global real estate (~$300T) gets tokenized by 2030, that’d be several trillion dollars – which aligns with those bullish projections. There are of course uncertainties (regulation, interest rate environments, etc. can affect adoption), but the data so far supports the idea that tokenization is accelerating. As Plume’s team noted, “the RWA sector is now leading Web3 into its next phase” – a phase where blockchain moves from speculative assets to the backbone of real financial infrastructure. The deep research and alignment of heavyweights behind RWAs underscore that this is not a fleeting trend but a structural evolution of both crypto and traditional finance.
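As a quick sanity check on these projections, the implied compound annual growth rates are easy to derive from the figures above (a back-of-the-envelope sketch; the CAGRs are computed here, not taken from any source):

```python
# Implied compound annual growth rates for the projections discussed above.
def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

now = 22e9  # ~$22B on-chain RWA market, mid-2025

print(f"$50B within ~2 years: {cagr(now, 50e9, 2.0):.0%}/yr")   # ~51%/yr
print(f"$1T by 2030:          {cagr(now, 1e12, 5.5):.0%}/yr")   # ~100%/yr
print(f"$16T by 2030 (BCG):   {cagr(now, 16e12, 5.5):.0%}/yr")  # ~231%/yr
```

Even the conservative $1T scenario implies the market roughly doubling every year through 2030 – well above 2024’s ~32% growth, which is why those forecasts hinge on accelerating institutional adoption.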


Sources:

  • Plume Network documentation and blog
  • News and press: CoinDesk, The Block, Fortune (via LinkedIn)
  • RWA market analysis: RWA.xyz; LinkedIn RWA report
  • Odaily/ChainCatcher analysis
  • Project documentation: Goldfinch and Prime, Ondo, Centrifuge, Maple; Apollo statement; Binance Research

Verifiable On-Chain AI with zkML and Cryptographic Proofs

· 36 min read
Dora Noda
Software Engineer

Introduction: The Need for Verifiable AI on Blockchain

As AI systems grow in influence, ensuring their outputs are trustworthy becomes critical. Traditional methods rely on institutional assurances (essentially “just trust us”), which offer no cryptographic guarantees. This is especially problematic in decentralized contexts like blockchains, where a smart contract or user must trust an AI-derived result without being able to re-run a heavy model on-chain. Zero-knowledge Machine Learning (zkML) addresses this by allowing cryptographic verification of ML computations. In essence, zkML enables a prover to generate a succinct proof that “the output $Y$ came from running model $M$ on input $X$” without revealing $X$ or the internal details of $M$. These zero-knowledge proofs (ZKPs) can be verified by anyone (or any contract) efficiently, shifting AI trust from “policy to proof”.

On-chain verifiability of AI means a blockchain can incorporate advanced computations (like neural network inferences) by verifying a proof of correct execution instead of performing the compute itself. This has broad implications: smart contracts can make decisions based on AI predictions, decentralized autonomous agents can prove they followed their algorithms, and cross-chain or off-chain compute services can provide verifiable outputs rather than unverifiable oracles. Ultimately, zkML offers a path to trustless and privacy-preserving AI – for example, proving an AI model’s decisions are correct and authorized without exposing private data or proprietary model weights. This is key for applications ranging from secure healthcare analytics to blockchain gaming and DeFi oracles.

How zkML Works: Compressing ML Inference into Succinct Proofs

At a high level, zkML combines cryptographic proof systems with ML inference so that a complex model evaluation can be “compressed” into a small proof. Internally, the ML model (e.g. a neural network) is represented as a circuit or program consisting of many arithmetic operations (matrix multiplications, activation functions, etc.). Rather than revealing all intermediate values, a prover performs the full computation off-chain and then uses a zero-knowledge proof protocol to attest that every step was done correctly. The verifier, given only the proof and some public data (like the final output and an identifier for the model), can be cryptographically convinced of the correctness without re-executing the model.

To achieve this, zkML frameworks typically transform the model computation into a format amenable to ZKPs:

  • Circuit Compilation: In SNARK-based approaches, the computation graph of the model is compiled into an arithmetic circuit or set of polynomial constraints. Each layer of the neural network (convolutions, matrix multiplies, nonlinear activations) becomes a sub-circuit with constraints ensuring the outputs are correct given the inputs. Because neural nets involve non-linear operations (ReLUs, Sigmoids, etc.) not naturally suited to polynomials, techniques like lookup tables are used to handle these efficiently. For example, a ReLU (output = max(0, input)) can be enforced by a custom constraint or lookup that verifies the output equals the input if the input is ≥ 0, and zero otherwise. The end result is a set of cryptographic constraints that the prover must satisfy, which implicitly proves the model ran correctly.
  • Execution Trace & Virtual Machines: An alternative is to treat the model inference as a program trace, as done in zkVM approaches. For instance, the JOLT zkVM targets the RISC-V instruction set; one can compile the ML model (or the code that computes it) to RISC-V and then prove each CPU instruction executed properly. JOLT introduces a “lookup singularity” technique, replacing expensive arithmetic constraints with fast table lookups for each valid CPU operation. Every operation (add, multiply, bitwise op, etc.) is checked via a lookup in a giant table of pre-computed valid outcomes, using a specialized argument (Lasso/SHOUT) to keep this efficient. This drastically reduces the prover workload: even complex 64-bit operations become a single table lookup in the proof instead of many arithmetic constraints.
  • Interactive Protocols (GKR Sum-Check): A third approach uses interactive proofs like GKR (Goldwasser–Kalai–Rotblum) to verify a layered computation. Here the model’s computation is viewed as a layered arithmetic circuit (each neural network layer is one layer of the circuit graph). The prover runs the model normally but then engages in a sum-check protocol to prove that each layer’s outputs are correct given its inputs. In Lagrange’s approach (DeepProve, detailed next), the prover and verifier perform an interactive polynomial protocol (made non-interactive via Fiat-Shamir) that checks consistency of each layer’s computations without re-doing them. This sum-check method avoids generating a monolithic static circuit; instead it verifies the consistency of computations in a step-by-step manner with minimal cryptographic operations (mostly hashing or polynomial evaluations).
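To make the sum-check idea concrete, here is a toy, self-contained Python version of the protocol for a three-variable multilinear polynomial. It is a didactic sketch, not DeepProve’s implementation: the final step evaluates $g$ directly, whereas GKR replaces that evaluation with a claim about the previous layer, and Fiat-Shamir would replace the random challenges with hashes of the transcript.

```python
import random

# Toy sum-check protocol over a prime field: the prover convinces the
# verifier that H = sum of g(x) over all x in {0,1}^n, sending one
# univariate polynomial per round instead of 2^n values.

P = 2**61 - 1  # field modulus

def g(x):  # example multilinear polynomial: g = 2*x1*x3 + x2 + x1*x2
    x1, x2, x3 = x
    return (2 * x1 * x3 + x2 + x1 * x2) % P

n = 3

def hypercube_sum(prefix):
    """Sum of g over all boolean completions of `prefix` (prover's work)."""
    m = n - len(prefix)
    total = 0
    for bits in range(2 ** m):
        suffix = [(bits >> i) & 1 for i in range(m)]
        total += g(prefix + suffix)
    return total % P

# Round i: prover sends s_i(t) = sum over boolean suffixes of
# g(r1..r_{i-1}, t, ...). g is multilinear in each variable, so s_i has
# degree <= 1 and two evaluations define it completely.
claim = hypercube_sum([])          # the claimed total H
rs = []
for i in range(n):
    s0 = hypercube_sum(rs + [0])   # s_i(0)
    s1 = hypercube_sum(rs + [1])   # s_i(1)
    assert (s0 + s1) % P == claim  # verifier's round check
    r = random.randrange(P)        # verifier's random challenge
    claim = (s0 + (s1 - s0) * r) % P   # s_i(r) becomes next round's claim
    rs.append(r)

# Final check: one evaluation of g at the random point (in GKR this becomes
# a claim about the previous layer instead of a direct evaluation).
assert g(rs) % P == claim
print("sum-check passed")
```

The point to notice is the asymmetry: the prover touches all $2^n$ hypercube points, but the verifier does only $O(n)$ field operations plus a single evaluation of $g$.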

Regardless of approach, the outcome is a succinct proof (typically a few kilobytes to a few tens of kilobytes) that attests to the correctness of the entire inference. The proof is zero-knowledge, meaning any secret inputs (private data or model parameters) can be kept hidden – they influence the proof but are not revealed to verifiers. Only the intended public outputs or assertions are revealed. This allows scenarios like “prove that model $M$ when applied to patient data $X$ yields diagnosis $Y$, without revealing $X$ or the model’s weights.”

Enabling on-chain verification: Once a proof is generated, it can be posted to a blockchain. Smart contracts can include verification logic to check the proof, often using precompiled cryptographic primitives. For example, Ethereum has precompiles for the BN254 pairing operations used in many zk-SNARK verifiers, making on-chain verification of SNARK proofs efficient. STARKs (hash-based proofs) are larger, but can still be verified on-chain with careful optimization or possibly with some trust assumptions (StarkWare’s L2, for instance, verifies STARK proofs on Ethereum by an on-chain verifier contract, albeit with higher gas cost than SNARKs). The key is that the chain does not need to execute the ML model – it only runs a verification which is much cheaper than the original compute. In summary, zkML compresses expensive AI inference into a small proof that blockchains (or any verifier) can check in milliseconds to seconds.

Lagrange DeepProve: Architecture and Performance of a zkML Breakthrough

DeepProve by Lagrange Labs is a state-of-the-art zkML inference framework focusing on speed and scalability. Launched in 2025, DeepProve introduced a new proving system that is dramatically faster than prior solutions like Ezkl. Its design centers on the GKR interactive proof protocol with sum-check and specialized optimizations for neural network circuits. Here’s how DeepProve works and achieves its performance:

  • One-Time Preprocessing: Developers start with a trained neural network (currently supported types include multilayer perceptrons and popular CNN architectures). The model is exported to ONNX format, a standard graph representation. DeepProve’s tool then parses the ONNX model and quantizes it (converts weights to fixed-point/integer form) for efficient field arithmetic. In this phase, it also generates the proving and verification keys for the cryptographic protocol. This setup is done once per model and does not need to be repeated per inference. DeepProve emphasizes ease of integration: “Export your model to ONNX → one-time setup → generate proofs → verify anywhere”.

  • Proving (Inference + Proof Generation): After setup, a prover (which could be run by a user, a service, or Lagrange’s decentralized prover network) takes a new input $X$ and runs the model $M$ on it, obtaining output $Y$. During this execution, DeepProve records an execution trace of each layer’s computations. Instead of translating every multiplication into a static circuit upfront (as SNARK approaches do), DeepProve uses the linear-time GKR protocol to verify each layer on the fly. For each network layer, the prover commits to the layer’s inputs and outputs (e.g., via cryptographic hashes or polynomial commitments) and then engages in a sum-check argument to prove that the outputs indeed result from the inputs as per the layer’s function. The sum-check protocol iteratively convinces the verifier of the correctness of a sum of evaluations of a polynomial that encodes the layer’s computation, without revealing the actual values. Non-linear operations (like ReLU, softmax) are handled efficiently through lookup arguments in DeepProve – if an activation’s output was computed, DeepProve can prove that each output corresponds to a valid input-output pair from a precomputed table for that function. Layer by layer, proofs are generated and then aggregated into one succinct proof covering the whole model’s forward pass. The heavy lifting of cryptography is minimized – DeepProve’s prover mostly performs normal numeric computations (the actual inference) plus some light cryptographic commitments, rather than solving a giant system of constraints.

  • Verification: The verifier uses the final succinct proof along with a few public values – typically the model’s committed identifier (a cryptographic commitment to $M$’s weights), the input $X$ (if not private), and the claimed output $Y$ – to check correctness. Verification in DeepProve’s system involves verifying the sum-check protocol’s transcript and the final polynomial or hash commitments. This is more involved than verifying a classic SNARK (which might be a few pairings), but it’s vastly cheaper than re-running the model. In Lagrange’s benchmarks, verifying a DeepProve proof for a medium CNN takes on the order of 0.5 seconds in software. That is ~0.5s to confirm, for example, that a convolutional network with hundreds of thousands of parameters ran correctly – over 500× faster than naively re-computing that CNN on a GPU for verification. (In fact, DeepProve measured up to 521× faster verification for CNNs and 671× for MLPs compared to re-execution.) The proof size is small enough to transmit on-chain (tens of KB), and verification could be performed in a smart contract if needed, although 0.5s of computation might require careful gas optimization or layer-2 execution.

Architecture and Tooling: DeepProve is implemented in Rust and provides a toolkit (the zkml library) for developers. It natively supports ONNX model graphs, making it compatible with models from PyTorch or TensorFlow (after exporting). The proving process currently targets models up to a few million parameters (tests include a 4M-parameter dense network). DeepProve leverages a combination of cryptographic components: a multilinear polynomial commitment (to commit to layer outputs), the sum-check protocol for verifying computations, and lookup arguments for non-linear ops. Notably, Lagrange’s open-source repository acknowledges it builds on prior work (the sum-check and GKR implementation from Scroll’s Ceno project), indicating an intersection of zkML with zero-knowledge rollup research.
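A minimal sketch of the export-and-quantize step described above, assuming PyTorch is available. Only `torch.onnx.export` is a real API call here; the fixed-point quantization is a simplified stand-in for what DeepProve’s tooling does internally, and the scale factor is an arbitrary choice for illustration.

```python
import torch
import torch.nn as nn

# Sketch of zkML preprocessing: (1) export a trained model to ONNX,
# (2) quantize weights to fixed-point integers for field arithmetic.

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)
torch.onnx.export(model, dummy_input, "mlp.onnx")   # one-time export

SCALE_BITS = 12  # fixed-point scale: w -> round(w * 2^12)

def quantize(t: torch.Tensor) -> torch.Tensor:
    return torch.round(t * (1 << SCALE_BITS)).to(torch.int64)

# Quantized weights become the witness data committed to during setup.
quantized = {name: quantize(p.detach()) for name, p in model.named_parameters()}
for name, q in quantized.items():
    print(name, tuple(q.shape), q.dtype)
```

From here, the ONNX graph and quantized weights feed the one-time setup that produces the proving and verification keys.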

To achieve real-time scalability, Lagrange pairs DeepProve with its Prover Network – a decentralized network of specialized ZK provers. Heavy proof generation can be offloaded to this network: when an application needs an inference proved, it sends the job to Lagrange’s network, where many operators (staked on EigenLayer for security) compute proofs and return the result. This network economically incentivizes reliable proof generation (malicious or failed jobs get the operator slashed). By distributing work across provers (and potentially leveraging GPUs or ASICs), the Lagrange Prover Network hides the complexity and cost from end-users. The result is a fast, scalable, and decentralized zkML service: “verifiable AI inferences fast and affordable”.

Performance Milestones: DeepProve’s claims are backed by benchmarks against the prior state-of-the-art, Ezkl. For a CNN with ~264k parameters (CIFAR-10 scale model), DeepProve’s proving time was ~1.24 seconds versus ~196 seconds for Ezkl – about 158× faster. For a larger dense network with 4 million parameters, DeepProve proved an inference in ~2.3 seconds vs ~126.8 seconds for Ezkl (~54× faster). Verification times also dropped: DeepProve verified the 264k CNN proof in ~0.6s, whereas verifying the Ezkl proof (Halo2-based) took over 5 minutes on CPU in that test. The speedups come from DeepProve’s near-linear complexity: its prover scales roughly O(n) with the number of operations, whereas circuit-based SNARK provers often have superlinear overhead from FFTs and polynomial commitment computations. In fact, DeepProve’s prover throughput can be within an order of magnitude of plain inference runtime – recent GKR systems can be <10× slower than raw execution for large matrix multiplications, an impressive achievement in ZK. This makes real-time or on-demand proofs more feasible, paving the way for verifiable AI in interactive applications.

Use Cases: Lagrange is already collaborating with Web3 and AI projects to apply zkML. Example use cases include: verifiable NFT traits (proving an AI-generated evolution of a game character or collectible is computed by the authorized model), provenance of AI content (proving an image or text was generated by a specific model, to combat deepfakes), DeFi risk models (proving a model’s output that assesses financial risk without revealing proprietary data), and private AI inference in healthcare or finance (where a hospital can get AI predictions with a proof, ensuring correctness without exposing patient data). By making AI outputs verifiable and privacy-preserving, DeepProve opens the door to “AI you can trust” in decentralized systems – moving from an era of “blind trust in black-box models” to one of “objective guarantees”.

SNARK-Based zkML: Ezkl and the Halo2 Approach

The traditional approach to zkML uses zk-SNARKs (Succinct Non-interactive Arguments of Knowledge) to prove neural network inference. Ezkl (by ZKonduit/Modulus Labs) is a leading example of this approach. It builds on the Halo2 proving system (a PLONK-style SNARK with polynomial commitments over the BN254 curve). Ezkl provides a tooling chain where a developer can take a PyTorch or TensorFlow model, export it to ONNX, and have Ezkl compile it into a custom arithmetic circuit automatically.

How it works: Each layer of the neural network is converted into constraints:

  • Linear layers (dense or convolution) become collections of multiplication-add constraints that enforce the dot-products between inputs, weights, and outputs.
  • Non-linear layers (like ReLU, sigmoid, etc.) are handled via lookups or piecewise constraints because such functions are not polynomial. For instance, a ReLU can be implemented by a boolean selector $b$ with constraints ensuring $y = x \cdot b$ and $0 \le b \le 1$ and $b=1$ if $x>0$ (one way to do it), or more efficiently by a lookup table mapping $x \mapsto \max(0,x)$ for a range of $x$ values. Halo2’s lookup arguments allow mapping 16-bit (or smaller) chunks of values, so large domains (like all 32-bit values) are usually “chunked” into several smaller lookups. This chunking increases the number of constraints. (A toy check of both ReLU strategies is sketched just after this list.)
  • Big integer ops or divisions (if any) are similarly broken into small pieces. The result is a large set of R1CS/PLONK constraints tailored to the specific model architecture.
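Here is a toy Python rendering of both ReLU strategies from the list above – the selector-bit constraints and the lookup table. It checks constraints on a claimed witness rather than generating a proof, and it elides the range decomposition a real circuit would use to tie the selector to the sign of $x$:

```python
# Toy "constraint view" of a ReLU layer, mirroring the two strategies above:
# (1) selector-bit constraints, (2) a precomputed lookup table.

def relu_constraints_hold(x: int, y: int, b: int) -> bool:
    c1 = b * (b - 1) == 0            # b is boolean (0 <= b <= 1)
    c2 = y == x * b                  # output is the input gated by b
    c3 = (b == 1) == (x > 0)         # selector agrees with the sign of x
    return c1 and c2 and c3

# Lookup strategy: enumerate valid (input, output) pairs once; each
# activation is then proved by a single table-membership check.
RANGE = range(-8, 8)                 # tiny domain; Halo2 chunks 16-bit ranges
RELU_TABLE = {(x, max(0, x)) for x in RANGE}

def relu_lookup_holds(x: int, y: int) -> bool:
    return (x, y) in RELU_TABLE

assert relu_constraints_hold(5, 5, 1)
assert relu_constraints_hold(-3, 0, 0)
assert not relu_constraints_hold(-3, -3, 1)   # cheating witness rejected
assert relu_lookup_holds(-3, 0) and not relu_lookup_holds(5, 0)
```

A real Halo2 table would cover 16-bit chunked ranges rather than the tiny domain here – which is exactly the chunking overhead mentioned above.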

Ezkl then uses Halo2 to generate a proof that these constraints hold given the secret inputs (model weights, private inputs) and public outputs.

Tooling and integration: One advantage of the SNARK approach is that it leverages well-known primitives. Halo2 is already battle-tested in production systems (Zcash, Ethereum zkEVM rollups) and has on-chain verifiers readily available. Ezkl’s proofs use the BN254 curve, which Ethereum can verify via precompiles, making it straightforward to verify an Ezkl proof in a smart contract. The team has also provided user-friendly APIs; for example, data scientists can work with their models in Python and use Ezkl’s CLI to produce proofs, without deep knowledge of circuits.

Strengths: Ezkl’s approach benefits from the generality and ecosystem of SNARKs. It supports reasonably complex models and has already seen “practical integrations (from DeFi risk models to gaming AI)”, proving real-world ML tasks. Because it operates at the level of the model’s computation graph, it can apply ML-specific optimizations: e.g. pruning insignificant weights or quantizing parameters to reduce circuit size. It also means model confidentiality is natural – the weights can be treated as private witness data, so the verifier only sees that some valid model produced the output, or at best a commitment to the model. The verification of SNARK proofs is extremely fast (typically a few milliseconds or less on-chain), and proof sizes are small (a few kilobytes), which is ideal for blockchain usage.

Weaknesses: Performance is the Achilles’ heel. Circuit-based proving imposes large overheads, especially as models grow. Historically, SNARK circuits could require on the order of a million times more prover work than simply running the model. Halo2 and Ezkl optimize this, but operations like large matrix multiplications still generate enormous numbers of constraints. If a model has millions of parameters, the prover must handle correspondingly millions of constraints, performing heavy FFTs and multi-exponentiations in the process. This leads to high proving times (often minutes or hours for non-trivial models) and high memory usage. For example, proving even a relatively small CNN (e.g. a few hundred thousand parameters) can take tens of minutes with Ezkl on a single machine. The team behind DeepProve cited that Ezkl took hours for certain model proofs that DeepProve can do in minutes. Large models might not even fit in memory or require splitting into multiple proofs (which then need recursive aggregation). While Halo2 is “moderately optimized”, any need to “chunk” lookups or handle wide-bit operations translates to extra overhead. In summary, scalability is limited – Ezkl works well for small-to-medium models (and indeed outperformed some earlier alternatives like naive STARK-based VMs in benchmarks), but struggles as model size grows beyond a point.

Despite these challenges, Ezkl and similar SNARK-based zkML libraries are important stepping stones. They proved that verified ML inference is possible on-chain and have active usage. Notably, projects like Modulus Labs demonstrated verifying an 18-million-parameter model on-chain using SNARKs (with heavy optimization). The cost was non-trivial, but it shows the trajectory. Moreover, the Mina Protocol has its own zkML toolkit that uses SNARKs to allow smart contracts on Mina (which are themselves SNARK-based) to verify ML model execution. This indicates growing multi-platform support for SNARK-based zkML.

STARK-Based Approaches: Transparent and Programmable ZK for ML

zk-STARKs (Scalable Transparent ARguments of Knowledge) offer another route to zkML. STARKs use hash-based cryptography (like FRI for polynomial commitments) and avoid any trusted setup. They often operate by simulating a CPU or VM and proving the execution trace is correct. In context of ML, one can either build a custom STARK for the neural network or use a general-purpose STARK VM to run the model code.
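The commit-and-open pattern underlying STARKs can be illustrated with a bare-bones Merkle tree: the prover commits to a long execution trace with a single hash root, then opens individual positions with logarithmic-size authentication paths. This sketch shows only that pattern – FRI, low-degree testing, and the rest of the STARK machinery are layered on top:

```python
import hashlib

# Minimal Merkle commitment: the hash-based STARK pattern of committing to
# an execution trace, then opening single positions with short paths.

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """All tree levels, leaves first; len(leaves) must be a power of 2."""
    levels = [[H(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([H(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def open_leaf(levels, index):
    """Authentication path: one sibling hash per level."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])   # sibling of the current node
        index //= 2
    return path

def verify_leaf(root, leaf, index, path):
    node = H(leaf)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

trace = [f"step-{i}".encode() for i in range(8)]   # stand-in execution trace
levels = build_tree(trace)
root = levels[-1][0]                               # the succinct commitment
proof = open_leaf(levels, 5)
assert verify_leaf(root, trace[5], 5, proof)
assert not verify_leaf(root, b"tampered", 5, proof)
```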

General STARK VMs (RISC Zero, Cairo): A straightforward approach is to write inference code and run it in a STARK VM. For example, Risc0 provides a RISC-V environment where any code (e.g., a C++ or Rust implementation of a neural network) can be executed and proven via a STARK. Similarly, StarkWare’s Cairo language can express arbitrary computations (like an LSTM or CNN inference), which are then proved by the StarkNet STARK prover. The advantage is flexibility – you don’t need to design custom circuits for each model. However, early benchmarks showed that naive STARK VMs were slower than optimized SNARK circuits for ML. In one 2024 test, a Halo2-based proof (Ezkl) was about 3× faster than a STARK-based approach on Cairo, and 66× faster than a RISC-V STARK VM on a certain benchmark. This gap is due to the overhead of simulating every low-level instruction in a STARK and the larger constants in STARK proofs (hashing is fast, but you need a lot of it; STARK proof sizes are bigger, etc.). However, STARK VMs are improving, and they have the benefit of transparent setup (no trusted setup) and post-quantum security. As STARK-friendly hardware and protocols advance, proving speeds will improve.

DeepProve’s approach vs STARK: Interestingly, DeepProve’s use of GKR and sum-check yields a proof more akin to a STARK in spirit – it’s an interactive, hash-based proof with no need for a structured reference string. The trade-off is that its proofs are larger and verification is heavier than a SNARK’s. Though Lagrange describes DeepProve as a zkSNARK for succinctness, it lacks a traditional SNARK’s constant-size, millisecond verification: a ~0.5s verify is far heavier than a typical SNARK verify. In that sense DeepProve can be considered a bespoke STARK-style zkML prover. Traditional STARK proofs (like StarkNet’s) often involve tens of thousands of field operations to verify, whereas SNARKs verify in maybe a few dozen. Thus, one trade-off is evident: SNARKs yield smaller proofs and faster verifiers, while STARKs (or GKR) offer easier scaling and no trusted setup at the cost of proof size and verification speed.

Emerging improvements: The JOLT zkVM (discussed earlier under JOLTx) actually outputs SNARKs (using PLONKish commitments), but it embodies ideas that could be applied in a STARK context too (Lasso lookups could theoretically be used with FRI commitments). StarkWare and others are researching ways to speed up proving of common operations (like using custom gates or hints in Cairo for big-integer ops). There’s also Circomlib-ML by Privacy & Scaling Explorations (PSE), which provides Circom templates for CNN layers – that’s SNARK-oriented, but conceptually similar templates could be built for STARK languages.

In practice, non-Ethereum ecosystems leveraging STARKs include StarkNet (which could allow on-chain verification of ML if someone writes a verifier, though cost is high) and Risc0’s Bonsai service (which is an off-chain proving service that emits STARK proofs which can be verified on various chains). As of 2025, most zkML demos on blockchain have favored SNARKs (due to verifier efficiency), but STARK approaches remain attractive for their transparency and potential in high-security or quantum-resistant settings. For example, a decentralized compute network might use STARKs to let anyone verify work without a trusted setup, useful for longevity. Also, some specialized ML tasks might exploit STARK-friendly structures: e.g. computations heavily using XOR/bit operations could be faster in STARKs (since those are cheap in boolean algebra and hashing) than in SNARK field arithmetic.

Summary of SNARK vs STARK for ML:

  • Performance: SNARKs (like Halo2) have huge proving overhead per gate but benefit from powerful optimizations and small constants for verify; STARKs (generic) have larger constant overhead but scale more linearly and avoid expensive crypto like pairings. DeepProve shows that customizing the approach (sum-check) yields near-linear proving time (fast) but with a STARK-like proof. JOLT shows that even a general VM can be made faster with heavy use of lookups. Empirically, for models up to millions of operations: a well-optimized SNARK (Ezkl) can handle it but might take tens of minutes, whereas DeepProve (GKR) can do it in seconds. STARK VMs in 2024 were likely in between or worse than SNARKs unless specialized (Risc0 was slower in tests, Cairo was slower without custom hints).
  • Verification: SNARK proofs verify quickest (milliseconds, and minimal data on-chain ~ a few hundred bytes to a few KB). STARK proofs are larger (dozens of KB) and take longer (tens of ms to seconds) to verify due to many hashing steps. In blockchain terms, a SNARK verify might cost e.g. ~200k gas, whereas a STARK verify could cost millions of gas – often too high for L1, acceptable on L2 or with succinct verification schemes.
  • Setup and Security: SNARKs like Groth16 require a trusted setup per circuit (unfriendly for arbitrary models), but universal SNARKs (PLONK, Halo2) have a one-time setup that can be reused for any circuit up to a certain size. STARKs need no setup and rely only on hash assumptions (plus classical complexity assumptions), and are post-quantum secure. This makes STARKs appealing for longevity – proofs remain secure even if quantum computers emerge, whereas current pairing-based SNARKs would be broken by quantum attacks.

We will consolidate these differences in a comparison table shortly.

FHE for ML (FHE-of-ML): Private Computation vs. Verifiable Computation

Fully Homomorphic Encryption (FHE) is a cryptographic technique that allows computations to be performed directly on encrypted data. In the context of ML, FHE can enable a form of privacy-preserving inference: for example, a client can send encrypted input to a model host, the host runs the neural network on the ciphertext without decrypting it, and sends back an encrypted result which the client can decrypt. This ensures data confidentiality – the model owner learns nothing about the input (and potentially the client learns only the output, not the model’s internals if they only get output). However, FHE by itself does not produce a proof of correctness in the same way ZKPs do. The client must trust that the model owner actually performed the computation honestly (the ciphertext could have been manipulated). Usually, if the client has the model or expects a certain distribution of outputs, blatant cheating can be detected, but subtle errors or use of a wrong model version would not be evident just from the encrypted output.
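To show what “computing on ciphertexts” means mechanically, here is a toy sketch using Paillier encryption. Paillier is only additively homomorphic – enough for a linear model, nowhere near the deep circuits that true FHE schemes (BFV, CKKS, TFHE) target – and the parameters below are tiny and utterly insecure; it exists purely to illustrate the pattern of a server evaluating a model on data it cannot read:

```python
import math, random

# Toy Paillier cryptosystem (additively homomorphic). A server can compute
# Enc(w.x + b) from Enc(x) without ever seeing x; only the client decrypts.

def keygen():
    p, q = 293, 433                    # tiny demo primes -- wholly insecure
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1                          # standard generator choice
    mu = pow(lam, -1, n)               # valid since L(g^lam mod n^2) = lam
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return (((x - 1) // n) * mu) % n

def e_add(pk, c1, c2):                 # Enc(m1) * Enc(m2) = Enc(m1 + m2)
    return (c1 * c2) % (pk[0] ** 2)

def e_scalar_mul(pk, c, k):            # Enc(m) ^ k = Enc(k * m)
    return pow(c, k, pk[0] ** 2)

# Linear "model": score = w.x + b, evaluated entirely on ciphertexts.
pk, sk = keygen()
w, b = [3, 1, 2], 5
x = [7, 0, 4]                          # private client features
enc_x = [encrypt(pk, xi) for xi in x]
acc = encrypt(pk, b)
for wi, ci in zip(w, enc_x):
    acc = e_add(pk, acc, e_scalar_mul(pk, ci, wi))
assert decrypt(pk, sk, acc) == sum(wi * xi for wi, xi in zip(w, x)) + b
```

Note what is missing: nothing in this exchange proves the server used the right weights or even the right function – which is exactly the gap zkML fills.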

Trade-offs in performance: FHE is notoriously heavy in computation. Running deep learning inference under FHE incurs orders-of-magnitude slowdown. Early experiments (e.g., CryptoNets in 2016) took tens of seconds to evaluate a tiny CNN on encrypted data. By 2024, improvements like CKKS (for approximate arithmetic) and better libraries (Microsoft SEAL, Zama’s Concrete) have reduced this overhead, but it remains large. For example, a user reported that using Zama’s Concrete-ML to run a CIFAR-10 classifier took 25–30 minutes per inference on their hardware. After optimizations, Zama’s team achieved ~40 seconds for that inference on a 192-core server. Even 40s is extremely slow compared to a plaintext inference (which might be 0.01s), showing a ~$10^3$–$10^4\times$ overhead. Larger models or higher precision increase the cost further. Additionally, FHE operations consume a lot of memory and require occasional bootstrapping (a noise-reduction step) which is computationally expensive. In summary, scalability is a major issue – state-of-the-art FHE might handle a small CNN or simple logistic regression, but scaling to large CNNs or Transformers is beyond current practical limits.

Privacy advantages: FHE’s big appeal is data privacy. The input can remain completely encrypted throughout the process. This means an untrusted server can compute on a client’s private data without learning anything about it. Conversely, if the model is sensitive (proprietary), one could envisage encrypting the model parameters and having the client perform FHE inference on their side – but this is less common because if the client has to do the heavy FHE compute, it negates the idea of offloading to a powerful server. Typically, the model is public or held by server in the clear, and the data is encrypted by the client’s key. Model privacy in that scenario is not provided by default (the server knows the model; the client learns outputs but not weights). There are more exotic setups (like secure two-party computation or multi-key FHE) where both model and data can be kept private from each other, but those incur even more complexity. In contrast, zkML via ZKPs can ensure model privacy and data privacy at once – the prover can have both the model and data as secret witness, only revealing what’s needed to the verifier.

No on-chain verification needed (and none possible): With FHE, the result comes out encrypted to the client. The client then decrypts it to obtain the actual prediction. If we want to use that result on-chain, the client (or whoever holds the decryption key) would have to publish the plaintext result and convince others it’s correct. But at that point, trust is back in the loop – unless combined with a ZKP. In principle, one could combine FHE and ZKP: e.g., use FHE to keep data private during compute, and then generate a ZK-proof that the plaintext result corresponds to a correct computation. However, combining them means you pay the performance penalty of FHE and ZKP – extremely impractical with today’s tech. So, in practice FHE-of-ML and zkML serve different use cases:

  • FHE-of-ML: Ideal when the goal is confidentiality between two parties (client and server). For instance, a cloud service can host an ML model and users can query it with their sensitive data without revealing the data to the cloud (and if the model is sensitive, perhaps deploy it via FHE-friendly encodings). This is great for privacy-preserving ML services (medical predictions, etc.). The user still has to trust the service to faithfully run the model (since no proof), but at least any data leakage is prevented. Some projects like Zama are even exploring an “FHE-enabled EVM (fhEVM)” where smart contracts could operate on encrypted inputs, but verifying those computations on-chain would require the contract to somehow enforce correct computation – an open challenge likely requiring ZK proofs or specialized secure hardware.
  • zkML (ZKPs): Ideal when the goal is verifiability and public auditability. If you want anyone (or any contract) to be sure that “Model $M$ was evaluated correctly on $X$ and produced $Y$”, ZKPs are the solution. They also provide privacy as a bonus (you can hide $X$ or $Y$ or $M$ if needed by treating them as private inputs to the proof), but their primary feature is the proof of correct execution.

A complementary relationship: It’s worth noting that ZKPs protect the verifier (they learn nothing about secrets, only that the computation was correctly done), whereas FHE protects the prover’s data from the computing party. In some scenarios, these could be combined – for example, a network of untrusted nodes could use FHE to compute on users’ private data and then provide ZK proofs to the users (or blockchain) that the computations were done according to the protocol. This would cover both privacy and correctness, but the performance cost is enormous with today’s algorithms. More feasible in the near term are hybrids like Trusted Execution Environments (TEE) plus ZKP or Functional Encryption plus ZKP – these are beyond our scope, but they aim to provide something similar (TEEs keep data/model secret during compute, then a ZKP can attest the TEE did the right thing).

In summary, FHE-of-ML prioritizes confidentiality of inputs/outputs, while zkML prioritizes verifiable correctness (with possible privacy). Table 1 below contrasts the key properties:

| Approach | Prover Performance (Inference & Proof) | Proof Size & Verification | Privacy Features | Trusted Setup? | Post-Quantum? |
|---|---|---|---|---|---|
| zk-SNARK (Halo2, Groth16, PLONK, etc.) | Heavy prover overhead (up to 10^6× native runtime without optimizations; in practice 10^3–10^5×). Proving time runs to minutes for medium models, hours for large ones. Recent zkML systems (DeepProve with GKR) vastly improve this, with near-linear overhead (seconds instead of minutes for million-parameter models). | Very small proofs (often < 100 KB, sometimes ~a few KB). Verification is fast: a few pairings or polynomial evaluations (typically < 50 ms on-chain). DeepProve’s GKR-based proofs are larger (tens to hundreds of KB) and verify in ~0.5 s – still much faster than re-running the model. | Data confidentiality: yes – inputs can be kept private (not revealed in the proof). Model privacy: yes – the prover can commit to model weights without revealing them. Output hiding: optional – a proof can attest to a statement without revealing the output (e.g. “the output has property P”), though outputs needed on-chain become public. SNARKs offer full zero-knowledge flexibility (hide whichever parts you want). | Depends on scheme. Groth16 requires a trusted setup per circuit; PLONK/Halo2 (as used by Ezkl) use a universal one-time setup. DeepProve’s sum-check GKR is transparent (no setup) – a bonus of that design. | Classical pairing-based SNARKs are not PQ-safe (vulnerable to quantum attacks on elliptic-curve discrete log). Some newer SNARKs use PQ-safe commitments, but Halo2/PLONK as used in Ezkl are not. GKR (DeepProve) uses hash commitments (e.g. Poseidon/Merkle), which are conjectured PQ-safe (relying on hash preimage resistance). |
| zk-STARK (FRI, hash-based proof) | Prover overhead is high but scales more linearly. Typically 10^2–10^4× slower than native for large tasks, with room to parallelize. General STARK VMs (Risc0, Cairo) were slower than SNARK circuits for ML in 2024 (e.g. 3×–66× behind Halo2 in some cases). Specialized STARKs (or GKR) can approach linear overhead and outperform SNARKs on large circuits. | Proofs are larger: often tens of KB (growing with circuit size / log(n)). The verifier performs many hash and FFT checks – roughly 50–500 ms depending on proof size. On-chain this is costlier (StarkWare’s L1 verifier can consume millions of gas per proof). Recursive proofs can compress size, at the cost of prover time. | Data & model privacy: a STARK can be made zero-knowledge by randomizing (blinding) trace polynomials, hiding private inputs much like a SNARK. Many STARK implementations focus on integrity, but zk-STARK variants do support privacy. Output hiding: possible in theory (the prover need not declare the output public), but rarely used since the output is usually what’s being verified. | No trusted setup. Transparency is a hallmark of STARKs – only a common random string is needed (derivable via Fiat-Shamir). This suits open-ended use (any model, any time, no per-model ceremony). | Yes. STARKs rely on hashes and information-theoretic assumptions (random oracles, difficulty of FRI codeword decoding), believed secure against quantum adversaries – an advantage for future-proofing verifiable AI. |
| FHE for ML (Fully Homomorphic Encryption applied to inference) | The “prover” is the party computing on encrypted data. Computation is extremely slow: 10^3–10^5× slower than plaintext inference is common. High-end hardware (many-core servers, FPGAs) and optimizations (low-precision inference, leveled FHE parameters) mitigate but do not remove the fundamental hit. Currently practical for small models or simple linear models; deep networks remain out of reach beyond toy sizes. | No proof is generated – the result is an encrypted output. FHE alone provides no correctness verification; one trusts the computing party not to cheat. (A malicious server could return an incorrect ciphertext that decrypts to a wrong output with no way to tell, unless combined with attestation or ZKPs.) | Data confidentiality: yes – the input stays encrypted, so the computing party learns nothing about it. Model privacy: if the model owner computes on encrypted inputs, the model sits in plaintext on their side (not protected); reversed setups keep the model encrypted but are less common. Secure two-party ML (FHE/MPC hybrids) can protect both, beyond plain FHE. Output hiding: by default the output is encrypted (decryptable only by the key holder, usually the input owner); the client can decrypt and reveal it if a public result is wanted. | No setup needed. Each user generates their own key pair; trust rests on keys remaining secret. | Yes. FHE schemes (BFV, CKKS, TFHE) rest on lattice problems (Learning With Errors), believed resistant to quantum attacks, so FHE is generally considered post-quantum secure. |

Table 1: Comparison of zk-SNARK, zk-STARK, and FHE approaches for machine learning inference (performance and privacy trade-offs).

Use Cases and Implications for Web3 Applications

The convergence of AI and blockchain via zkML unlocks powerful new application patterns in Web3:

  • Decentralized Autonomous Agents & On-Chain Decision-Making: Smart contracts or DAOs can incorporate AI-driven decisions with guarantees of correctness. For example, imagine a DAO that uses a neural network to analyze market conditions before executing trades. With zkML, the DAO’s smart contract can require a zkSNARK proof that the authorized ML model (with a known hash commitment) was run on the latest data and produced the recommended action, before the action is accepted. This prevents malicious actors from injecting a fake prediction – the chain verifies the AI’s computation. Over time, one could even have fully on-chain autonomous agents (contracts that query off-chain AI or contain simplified models) making decisions in DeFi or games, with all their moves proven correct and policy-compliant via zk proofs. This raises the trust in autonomous agents, since their “thinking” is transparent and verifiable rather than a black box. (A minimal sketch of this proof-gated execution pattern appears after this list.)

  • Verifiable Compute Markets: Projects like Lagrange are effectively creating verifiable computation marketplaces – developers can outsource heavy ML inference to a network of provers and get back a proof with the result. This is analogous to decentralized cloud computing, but with built-in trust: you don’t need to trust the server, only the proof. It’s a paradigm shift for oracles and off-chain computation. Protocols like Ethereum’s upcoming DSC (decentralized sequencing layer) or oracle networks could use this to provide data feeds or analytic feeds with cryptographic guarantees. For instance, an oracle could supply “the result of model X on input Y” and anyone can verify the attached proof on-chain, rather than trusting the oracle’s word. This could enable verifiable AI-as-a-service on blockchain: any contract can request a computation (like “score these credit risks with my private model”) and accept the answer only with a valid proof. Projects such as Gensyn are exploring decentralized training and inference marketplaces using these verification techniques.

  • NFTs and Gaming – Provenance and Evolution: In blockchain games or NFT collectibles, zkML can prove traits or game moves were generated by legitimate AI models. For example, a game might allow an AI to evolve an NFT pet’s attributes. Without ZK, a clever user might modify the AI or the outcome to get a superior pet. With zkML, the game can require a proof that “pet’s new stats were computed by the official evolution model on the pet’s old stats”, preventing cheating. Similarly for generative art NFTs: an artist could release a generative model as a commitment; later, when minting NFTs, prove each image was produced by that model given some seed, guaranteeing authenticity (and even doing so without revealing the exact model to the public, preserving the artist’s IP). This provenance verification ensures authenticity in a manner akin to verifiable randomness – except here it’s verifiable creativity.

  • Privacy-Preserving AI in Sensitive Domains: zkML allows confirmation of outcomes without exposing inputs. In healthcare, a patient’s data could be run through an AI diagnostic model by a cloud provider; the hospital receives a diagnosis and a proof that the model (which could be privately held by a pharmaceutical company) was run correctly on the patient data. The patient data remains private (only an encrypted or committed form was used in the proof), and the model weights remain proprietary – yet the result is trusted. Regulators or insurance could also verify that only approved models were used. In finance, a company could prove to an auditor or regulator that its risk model was applied to its internal data and produced certain metrics without revealing the underlying sensitive financial data. This enables compliance and oversight with cryptographic assurances rather than manual trust.

  • Cross-Chain and Off-Chain Interoperability: Because zero-knowledge proofs are fundamentally portable, zkML can facilitate cross-chain AI results. An application anchored on one chain can run its AI-intensive workload off-chain and post a proof of the result to a different blockchain, which accepts it trustlessly. For instance, consider a multi-chain DAO using an AI to aggregate sentiment across social media (off-chain data). The AI analysis (complex NLP on large data) is done off-chain by a service that then posts a succinct proof to one blockchain (or to multiple chains) that “the analysis was done correctly and the output sentiment score = 0.85”. All chains can verify and use that result in their governance logic, without each needing to rerun the analysis. This kind of interoperable verifiable compute is what Lagrange’s network aims to support, by serving multiple rollups or L1s simultaneously. It removes the need for trusted bridges or oracle assumptions when moving results between chains.

  • AI Alignment and Governance: On a more forward-looking note, zkML has been highlighted as a tool for AI governance and safety. Lagrange’s vision statements, for example, argue that as AI systems become more powerful (even superintelligent), cryptographic verification will be essential to ensure they follow agreed rules. By requiring AI models to produce proofs of their reasoning or constraints, humans retain a degree of control – “you cannot trust what you cannot verify”. While this is speculative and involves social as much as technical aspects, the technology could enforce that an AI agent running autonomously still proves it is using an approved model and hasn’t been tampered with. Decentralized AI networks might use on-chain proofs to verify contributions (e.g., a network of nodes collaboratively training a model can prove each update was computed faithfully). Thus zkML could play a role in ensuring AI systems remain accountable to human-defined protocols even in decentralized or uncontrolled environments.
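
Several of the use cases above (the DAO’s trade gate, the oracle feed, the NFT evolution check, the private diagnosis) share one invariant: an action or result is accepted only if a proof verifies against a public commitment to the approved model. The Python sketch below is a minimal illustration of that gate under assumed names – commitment, Proof, verify_proof, and execute_action are hypothetical stand-ins, and the cryptographic check itself is stubbed out; in production it would be an on-chain verifier contract emitted by a zkML toolkit such as Ezkl.

```python
# Minimal sketch of the "verify before acting" gate shared by the use cases
# above. All names are hypothetical stand-ins; the real verifier would be an
# on-chain contract, and verify_proof below is a placeholder, not cryptography.
import hashlib
from dataclasses import dataclass

def commitment(data: bytes) -> str:
    """Public commitment (hash) to a model's weights or to a private input."""
    return hashlib.sha256(data).hexdigest()

# Published once, e.g. in contract storage at deployment.
MODEL_COMMITMENT = commitment(b"serialized weights of the approved model")

@dataclass
class Proof:
    """Stand-in for a zk-SNARK proof plus its public inputs."""
    model_commitment: str   # public input: which model was run
    input_commitment: str   # public input: hash of the (possibly private) data
    output: int             # public output, e.g. a trade signal or new stats
    blob: bytes             # opaque proof bytes

def verify_proof(proof: Proof) -> bool:
    """Placeholder for the pairing/FRI check a real verifier would perform."""
    return len(proof.blob) > 0

def execute_action(proof: Proof) -> None:
    # 1. The proof must attest to the *approved* model, not just any model.
    if proof.model_commitment != MODEL_COMMITMENT:
        raise PermissionError("proof is not for the authorized model")
    # 2. The proof itself must verify (correct inference on the committed input).
    if not verify_proof(proof):
        raise ValueError("invalid inference proof")
    # 3. Only now is the AI-recommended action acted upon.
    print(f"accepted model output {proof.output}; executing action")

# Example: a proof bound to the right model commitment is accepted.
p = Proof(MODEL_COMMITMENT, commitment(b"market data snapshot"), output=1,
          blob=b"\x01")
execute_action(p)
```

Because checking a proof requires only the proof bytes, the public inputs, and a verifying key, the same gate can be deployed unchanged on several chains – which is what makes the cross-chain interoperability described above work without trusted bridges.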

In conclusion, zkML and verifiable on-chain AI represent a convergence of advanced cryptography and machine learning that stands to enhance trust, transparency, and privacy in AI applications. By comparing the major approaches – zk-SNARKs, zk-STARKs, and FHE – we see a spectrum of trade-offs between performance and privacy, each suitable for different scenarios. SNARK-based frameworks like Ezkl and innovations like Lagrange’s DeepProve have made it feasible to prove substantial neural network inferences with practical effort, opening the door to real-world deployments of verifiable AI. STARK-based and VM-based approaches promise greater flexibility and post-quantum security, which will become important as the field matures. FHE, while not a solution for verifiability, addresses the complementary need of confidential ML computation, and in combination with ZKPs or in specific private contexts it can empower users to leverage AI without sacrificing data privacy.

The implications for Web3 are significant: we can foresee smart contracts reacting to AI predictions, knowing they are correct; markets for compute where results are trustlessly sold; digital identities (like Worldcoin’s proof-of-personhood via iris AI) protected by zkML to confirm someone is human without leaking their biometric image; and generally a new class of “provable intelligence” that enriches blockchain applications. Many challenges remain – performance for very large models, developer ergonomics, and the need for specialized hardware – but the trajectory is clear. As one report noted, “today’s ZKPs can support small models, but moderate to large models break the paradigm”; however, rapid advances (50×–150× speedups with DeepProve over prior art) are pushing that boundary outward. With ongoing research (e.g., on hardware acceleration and distributed proving), we can expect progressively larger and more complex AI models to become provable. zkML might soon evolve from niche demos to an essential component of trusted AI infrastructure, ensuring that as AI becomes ubiquitous, it does so in a way that is auditable, decentralized, and aligned with user privacy and security.