
Sui-Backed MPC Network Ika – Comprehensive Technical and Investment Evaluation


Introduction

Ika is a parallel Multi-Party Computation (MPC) network strategically backed by the Sui Foundation. Formerly known as dWallet Network, Ika is designed to enable zero-trust, cross-chain interoperability at high speed and scale. It allows smart contracts (especially on the Sui blockchain) to securely control and coordinate assets on other blockchains without traditional bridges. This report provides a deep dive into Ika’s technical architecture and cryptographic design from a founder’s perspective, as well as a business and investment analysis covering team, funding, tokenomics, adoption, and competition. A summary comparison table of Ika versus other MPC-based networks (Lit Protocol, Threshold Network, and Zama) is also included for context.

Ika Network

Technical Architecture and Features (Founder’s Perspective)

Architecture and Cryptographic Primitives

Ika’s core innovation is a novel “2PC-MPC” cryptographic scheme – a two-party computation within a multi-party computation framework. In simple terms, the signing process always involves two parties: (1) the user and (2) the Ika network. The user retains a private key share, and the network – composed of many independent nodes – holds the other share. A signature can only be produced with participation from both, ensuring the network alone can never forge a signature without the user. The network side isn’t a single entity but a distributed MPC among N validators that collectively act as the second party. A threshold of at least two-thirds of these nodes must agree (akin to Byzantine Fault Tolerance consensus) to generate the network’s share of the signature. This nested MPC structure (user + network) makes Ika non-collusive: even if all Ika nodes collude, they cannot steal user assets because the user’s participation (their key share) is always cryptographically required. In other words, Ika enables “zero-trust” security, upholding decentralization and user ownership principles of Web3 – no single entity or small group can unilaterally compromise assets.
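The nested access structure described above can be modeled in a few lines. The sketch below captures only the signing quorum logic, with no real cryptography; the function names and the exact supermajority rounding are illustrative assumptions, not Ika's implementation.

```typescript
// Toy model of Ika's nested 2PC-MPC signing quorum (access structure only).
// A valid signature requires (1) the user's key share and (2) a two-thirds
// supermajority of the N network validators. All names here are illustrative.

function networkThreshold(totalNodes: number): number {
  // Smallest integer strictly greater than 2/3 of N (BFT-style supermajority).
  return Math.floor((2 * totalNodes) / 3) + 1;
}

function canProduceSignature(
  userParticipates: boolean,
  participatingNodes: number,
  totalNodes: number,
): boolean {
  // Key property: even all N nodes colluding cannot sign without the user.
  return userParticipates && participatingNodes >= networkThreshold(totalNodes);
}
```

Note that the user's share sits outside the t-of-N threshold entirely, which is what makes the scheme non-collusive: the network quorum is necessary but never sufficient.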

Figure: Schematic of Ika’s 2PC-MPC architecture – the user acts as one party (holding a private key share) and the Ika network of N validators forms the other party via an MPC threshold protocol (t-out-of-N). This guarantees that both the user and a supermajority of decentralized nodes must cooperate to produce a valid signature.

Technically, Ika is implemented as a standalone blockchain network forked from the Sui codebase. It runs its own instance of Sui’s high-performance consensus engine (Mysticeti, a DAG-based BFT protocol) to coordinate the MPC nodes. Notably, Ika’s version of Sui has smart contracts disabled (Ika’s chain exists solely to run the MPC protocol) and includes custom modules for the 2PC-MPC signing algorithm. Mysticeti provides a reliable broadcast channel among the nodes, replacing the complex mesh of peer-to-peer messages that traditional MPC protocols use. By leveraging a DAG-based consensus for communication, Ika avoids the quadratic communication overhead of earlier threshold signing schemes, in which each of n parties had to send messages to all others. Instead, Ika’s nodes broadcast messages via the consensus, achieving linear communication complexity O(n) and using batching and aggregation techniques to keep per-node costs nearly constant even as N grows large. This represents a significant breakthrough in threshold cryptography: the Ika team replaced point-to-point “unicast” communication with efficient broadcast and aggregation, enabling the protocol to support hundreds or thousands of participants without slowing down.
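The communication argument can be made concrete with back-of-envelope message counts per protocol round. These formulas are simple illustrations of the scaling difference, not measurements of Ika itself.

```typescript
// Messages per round in a classic point-to-point MPC protocol, where every
// party unicasts to every other party: n * (n - 1), i.e. O(n^2).
function unicastMessages(n: number): number {
  return n * (n - 1);
}

// Messages per round when each party instead posts once to a shared broadcast
// channel (here, the DAG-based consensus): n, i.e. O(n).
function broadcastMessages(n: number): number {
  return n;
}
```

At n = 100 validators the unicast mesh already needs 9,900 messages per round versus 100 broadcasts, and the gap widens linearly with every additional node.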

Zero-knowledge integrations: At present, Ika’s security is achieved through threshold cryptography and BFT consensus rather than explicit zero-knowledge proofs. The system does not rely on zk-SNARKs or zk-STARKs in its core signing process. However, Ika uses on-chain state proofs (light client proofs) to verify events from other chains, which is a form of cryptographic verification (e.g. verifying Merkle proofs of block headers or state). The design leaves room for integrating zero-knowledge techniques in the future – for example, to validate cross-chain state or conditions without revealing sensitive data – but as of 2025 no specific zk-SNARK module is part of Ika’s published architecture. The emphasis is instead on the “zero-trust” principle (meaning no trust assumptions) via the 2PC-MPC scheme, rather than zero-knowledge proof systems.

Performance and Scalability

A primary goal of Ika is to overcome the performance bottlenecks of prior MPC networks. Legacy threshold signature protocols (like Lindell’s 2PC ECDSA or GG20) struggled to support more than a handful of participants, often taking many seconds or minutes to produce a single signature. In contrast, Ika’s optimized protocol achieves sub-second latency for signing and can handle a very high throughput of signature requests in parallel. Benchmark claims indicate Ika can scale to around 10,000 signatures per second while maintaining security across a large node cluster. This is possible thanks to the aforementioned linear communication and heavy use of batching: many signatures can be generated concurrently by the network in one round of protocol, dramatically amortizing costs. According to the team, Ika can be “10,000× faster” than existing MPC networks under load. In practical terms, this means real-time, high-frequency transactions (such as trading or cross-chain DeFi operations) can be supported without the usual delays of threshold signing. Latency is on the order of sub-second finality, meaning a signature (and the corresponding cross-chain operation) can be completed almost instantly after a user’s request.
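The amortization effect of batching can be sketched as a two-term cost model: a fixed per-round coordination cost shared by every signature in the batch, plus a small marginal cost per signature. The cost units below are invented purely to illustrate the scaling argument, not benchmarked figures.

```typescript
// Toy amortization model for batched threshold signing. A protocol round has a
// fixed coordination cost (consensus round trips, broadcast overhead) that is
// shared across the batch, plus a per-signature marginal cost.
function costPerSignature(
  fixedRoundCost: number, // paid once per round, regardless of batch size
  perSigCost: number,     // marginal work for each signature in the batch
  batchSize: number,
): number {
  return fixedRoundCost / batchSize + perSigCost;
}
```

With a fixed round cost of 1000 units and a marginal cost of 1 unit, a batch of one signature costs 1001 units, while each signature in a batch of 1000 costs just 2 units — which is the sense in which batching makes per-signature cost nearly independent of the fixed protocol overhead.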

Equally important, Ika does this while scaling out the number of signers to enhance decentralization. Traditional MPC setups often used a fixed committee of maybe 10–20 nodes to avoid performance collapse. Ika’s architecture can expand to hundreds or even thousands of validators participating in the signing process without significant slowdown. This massive decentralization improves security (harder for an attacker to corrupt a majority) and network robustness. The underlying consensus is Byzantine fault tolerant, so the network can tolerate up to one-third of nodes being compromised or offline and still function correctly. In any given signing operation, only a threshold t-of-N of nodes (e.g. 67% of N) need to actively participate; by design, if too many nodes are down, the signature might be delayed, but the system is engineered to handle typical failure scenarios gracefully (similar to a blockchain’s consensus liveness and safety properties). In summary, Ika achieves both high throughput and high validator count, a combination that sets it apart from earlier MPC solutions that had to trade off decentralization for speed.

Developer Tooling and Integration

The Ika network is built to be developer-friendly, especially for those already building on Sui. Developers do not write smart contracts on Ika itself (since Ika’s chain doesn’t run user-defined contracts), but instead interact with Ika from other chains. For example, a Sui Move contract can invoke Ika’s functionality to sign transactions on external chains. To facilitate this, Ika provides robust tooling and SDKs:

  • TypeScript SDK: Ika offers a TypeScript SDK (Node.js library) that mirrors the style of the Sui SDK. This SDK allows builders to create and manage dWallets (decentralized wallets) and issue signing requests to Ika from their applications. Using the TS SDK, developers can generate keypairs, register user shares, and call Ika’s RPC to coordinate threshold signatures – all with familiar patterns from Sui’s API. The SDK abstracts away the complexity of the MPC protocol, making it as simple as calling a function to request (for example) a Bitcoin transaction signature, given the appropriate context and user approval.

  • CLI and Local Network: For more direct interaction, a command-line interface (CLI) called dWallet CLI is available. Developers can run a local Ika node or even a local test network by forking the open-source repository. This is valuable for testing and integration in a development environment. The documentation walks developers through setting up a local devnet, getting testnet tokens (DWLT – the testnet token), and creating a first dWallet address.

  • Documentation and Examples: Ika’s docs include step-by-step tutorials for common scenarios, such as “Your First dWallet”. These show how to establish a dWallet that corresponds to an address on another chain (e.g., a Bitcoin address controlled by Ika’s keys), how to encrypt the user’s key share for safekeeping, and how to initiate cross-chain transactions. Example code covers use cases like transferring BTC via a Sui smart contract call, or scheduling future transactions (a feature Ika supports whereby a transaction can be pre-signed under certain conditions).

  • Sui Integration (Light Clients): Out-of-the-box, Ika is tightly integrated with the Sui blockchain. The Ika network runs a Sui light client internally to trustlessly read Sui on-chain data. This means a Sui smart contract can emit an event or call that Ika will recognize (via a state proof) as a trigger to perform an action. For instance, a Sui contract might instruct Ika: “when event X occurs, sign and broadcast a transaction on Ethereum”. Ika nodes will verify the Sui event using the light client proof and then collectively produce the signature for the Ethereum transaction. The signed payload can then be delivered to the target chain (possibly by an off-chain relayer or by the user) to execute the desired action. Currently, Sui is the first fully supported controller chain (given Ika’s origins on Sui), but the architecture is multi-chain by design. Support for other chains’ state proofs and integrations is on the roadmap – for example, the team has mentioned extending Ika to work with rollups in the Polygon Avail ecosystem (providing dWallet capabilities on rollups with Avail as a data layer) and other Layer-1s in the future.

  • Supported Crypto Algorithms: Ika’s network can generate keys/signatures for virtually any blockchain’s signature scheme. Initially it supports ECDSA (the elliptic curve algorithm used by Bitcoin, Ethereum’s ECDSA accounts, BNB Chain, etc.). In the near term, it’s planned to support EdDSA (Ed25519, used by chains like Solana and some Cosmos chains) and Schnorr signatures (e.g. Bitcoin Taproot’s Schnorr keys). This broad support means an Ika dWallet can have an address on Bitcoin, an address on Ethereum, on Solana, and so on – all controlled by the same underlying distributed key. Developers on Sui or other platforms can thus integrate any of these chains into their dApps through one unified framework (Ika), instead of dealing with chain-specific bridges or custodians.
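To give a feel for the developer experience the SDK bullets describe, here is a hypothetical signing flow. The real Ika/dWallet TypeScript SDK's module names and method signatures are not public in this document, so everything below — the `IkaClient` interface, the in-memory mock, the method names — is an invented illustration of the shape such a flow might take.

```typescript
// Hypothetical sketch of a dWallet signing flow. Interfaces and names are
// assumptions for illustration; the real SDK will differ.
interface DWallet {
  id: string;
  chain: "bitcoin" | "ethereum";
  publicKey: string;
}

interface IkaClient {
  createDWallet(chain: DWallet["chain"]): Promise<DWallet>;
  requestSignature(walletId: string, payload: string, userShareApproval: string): Promise<string>;
}

// Minimal in-memory mock so the flow can be exercised end to end locally.
function mockClient(): IkaClient {
  const wallets = new Map<string, DWallet>();
  let counter = 0;
  return {
    async createDWallet(chain) {
      const w: DWallet = { id: `dw-${++counter}`, chain, publicKey: `pk-${counter}` };
      wallets.set(w.id, w);
      return w;
    },
    async requestSignature(walletId, payload, userShareApproval) {
      if (!wallets.has(walletId)) throw new Error("unknown dWallet");
      // Mirrors the 2PC-MPC rule: no signature without user participation.
      if (!userShareApproval) throw new Error("user share participation required");
      return `sig(${walletId},${payload})`; // stand-in for a threshold signature
    },
  };
}

async function demo(): Promise<string> {
  const client = mockClient();
  const btcWallet = await client.createDWallet("bitcoin");
  return client.requestSignature(btcWallet.id, "tx-bytes", "user-approval-token");
}
```

The point of the sketch is the ergonomics the docs promise: a developer creates a dWallet and requests a signature with two calls, while the MPC rounds, consensus broadcast, and share aggregation stay hidden behind the client.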

In summary, Ika offers a developer experience similar to interacting with a blockchain node or wallet, abstracting away the heavy cryptography. Whether via the TypeScript SDK or directly through Move contracts and light clients, it strives to make cross-chain logic “plug-and-play” for builders.

Security, Decentralization, and Fault Tolerance

Security is paramount in Ika’s design. The zero-trust model means that no user has to trust the Ika network with unilateral control of assets at any point. If a user creates a dWallet (say a BTC address managed by Ika), that address’s private key is never held by any single party – not even the user alone. Instead, the user holds a secret share and the network collectively holds the other share. Both are required to sign any transaction. Thus, even if the worst-case scenario occurred (e.g. many Ika nodes were compromised by an attacker), they still could not move funds without the user’s secret key share. This property addresses a major risk in conventional bridges, where a quorum of validators could collude to steal locked assets. Ika eliminates that risk by fundamentally changing the access structure (the threshold is set such that the network alone is never enough – the threshold effectively includes the user). In the literature, this is a new paradigm: a non-collusive MPC network where the asset owner remains part of the signing quorum by design.

On the network side, Ika uses a delegated Proof-of-Stake model (inherited from Sui’s design) for selecting and incentivizing validators. IKA token holders can delegate stake to validator nodes; the top validators (weighted by stake) become the authorities for an epoch, and are Byzantine-fault-tolerant (2/3 honest) in each epoch. This means the system assumes <33% of stake is malicious to maintain safety. If a validator misbehaves (e.g. tries to produce an incorrect signature share or censor transactions), the consensus and MPC protocol will detect it – incorrect signature shares can be identified (they won’t combine to a valid signature), and a malicious node can be logged and potentially slashed or removed in future epochs. Meanwhile, liveness is maintained as long as enough nodes (>67%) participate; the consensus can continue to finalize operations even if many nodes crash or go offline unexpectedly. This fault tolerance ensures the service is robust – no single point of failure exists since hundreds of independent operators in different jurisdictions are participating. Decentralization is further reinforced by the sheer number of participants: Ika does not limit itself to a fixed small committee, so it can onboard more validators to increase security without sacrificing much performance. In fact, Ika’s protocol was explicitly designed to “transcend the node limit of MPC networks” and allow massive decentralization.
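The stake-weighted safety condition described above can be written as a small quorum check: a signing quorum is valid only when participating validators hold strictly more than two-thirds of total stake. The data shapes are illustrative; Ika's actual epoch accounting is not specified here.

```typescript
// Toy stake-weighted quorum check for a delegated-PoS validator set.
interface Validator {
  name: string;
  stake: number; // delegated IKA stake (illustrative units)
}

function totalStake(validators: Validator[]): number {
  return validators.reduce((acc, v) => acc + v.stake, 0);
}

// BFT supermajority by stake: 3 * quorumStake > 2 * totalStake avoids
// floating-point division while expressing "strictly more than 2/3".
function quorumReached(all: Validator[], signers: Validator[]): boolean {
  return 3 * totalStake(signers) > 2 * totalStake(all);
}
```

Weighting by stake rather than node count is what makes "<33% of stake is malicious" the right safety assumption: a few heavily staked validators can matter more than many small ones.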

Finally, the Ika team has subjected their cryptography to external review. They published a comprehensive whitepaper in 2024 detailing the 2PC-MPC protocol, and they have undergone at least one third-party security audit so far. For example, in June 2024, an audit by Symbolic Software examined Ika’s Rust implementation of the 2PC-MPC protocol and related crypto libraries. The audit reportedly focused on validating the correctness of the cryptographic protocols (ensuring no flaws in the threshold ECDSA scheme, key generation, or share aggregation) and checking for potential vulnerabilities. The codebase is open-source (under the dWallet Labs GitHub), allowing the community to inspect and contribute to its security. As of the alpha testnet stage, the team also cautioned that the software was still experimental and not yet production-audited, but ongoing audits and security improvements were a top priority prior to mainnet launch. In summary, Ika’s security model is a combination of provable cryptographic guarantees (from threshold schemes) and blockchain-grade decentralization (from the PoS consensus and large validator set), reviewed by experts, to provide strong assurances against both external attackers and insider collusion.

Compatibility and Ecosystem Interoperability

Ika is purpose-built to be an interoperability layer, initially for Sui but extensible to many ecosystems. On day one, its closest integration is with the Sui blockchain: it effectively acts as an add-on module to Sui, empowering Sui dApps with multi-chain capabilities. This tight alignment is by design – Sui’s Move contracts and object-centric model make it a good “controller” for Ika’s dWallets. For instance, a Sui DeFi application can use Ika to pull liquidity from Ethereum or Bitcoin on the fly, making Sui a hub for multi-chain liquidity. Sui Foundation’s support for Ika indicates a strategy to position Sui as “the base chain for every chain”, leveraging Ika to connect to external assets. In practice, when Ika mainnet is live, a Sui builder might create a Move contract that, say, accepts BTC deposits: behind the scenes, that contract would create a Bitcoin dWallet (an address) via Ika and issue instructions to move BTC when needed. The end user experiences this as if Bitcoin is just another asset managed within the Sui app, even though the BTC stays native on Bitcoin until a valid threshold-signed transaction moves it.
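The event-triggered flow in the BTC example above — Sui contract emits an instruction, Ika nodes check a light-client state proof, then threshold-sign the external-chain transaction — can be sketched as follows. Proof verification is mocked with a flag; the real system verifies Merkle proofs against Sui checkpoint data, and all type names here are invented.

```typescript
// Conceptual flow of a Sui-event-triggered cross-chain action.
interface SuiEvent {
  id: string;
  instruction: string; // e.g. "move-btc" emitted by the Move contract
}

// Stand-in for a light-client state proof (really a Merkle/header proof).
interface StateProof {
  eventId: string;
  valid: boolean;
}

function verifyProof(event: SuiEvent, proof: StateProof): boolean {
  // Mocked check: the proof must be valid and bound to this exact event.
  return proof.valid && proof.eventId === event.id;
}

function handleEvent(event: SuiEvent, proof: StateProof): string {
  if (!verifyProof(event, proof)) throw new Error("state proof rejected");
  // Stand-in for the collective 2PC-MPC signature over the target-chain tx.
  return `signed-btc-tx:${event.instruction}`;
}
```

The key design point is that nodes act only on proven Sui state — no trusted oracle relays the trigger, so the "zero-trust" property extends to the cross-chain control path.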

Beyond Sui, Ika’s architecture supports other Layer-1 blockchains, Layer-2s, and even off-chain systems. The network can host multiple light clients concurrently, so it can validate state from Ethereum, Solana, Avalanche, or others – enabling smart contracts on those chains (or their users) to also leverage Ika’s MPC network. While such capabilities might roll out gradually, the design goal is chain-agnostic. In the interim, even without deep on-chain integration, Ika can be used in a more manual way: for example, an application on Ethereum could call an Ika API (via an Oracle or off-chain service) to request a signature for an Ethereum tx or a message. Because Ika supports ECDSA, it could even be used to manage an Ethereum account’s key in a decentralized way, similarly to how Lit Protocol’s PKPs work (we discuss Lit later). Ika has also showcased use cases like controlling Bitcoin on rollups – an example being integrating with the Polygon Avail framework to allow rollup users to manage BTC without trusting a centralized custodian. This suggests Ika may collaborate with various ecosystems (Polygon/Avail, Celestia rollups, etc.) as a provider of decentralized key infrastructure.

In summary, from a technical standpoint Ika is compatible with any system that relies on digital signatures – which is essentially all blockchains. Its initial deployment on Sui is just the beginning; the long-term vision is a universal MPC layer that any chain or dApp can plug into for secure cross-chain operations. By supporting common cryptographic standards (ECDSA, Ed25519, Schnorr) and providing the needed light client verifications, Ika could become a kind of “MPC-as-a-service” network for all of Web3, bridging assets and actions in a trust-minimized way.

Business and Investment Perspective

Founding Team and Background

Ika was founded by a team of seasoned cryptography and blockchain specialists, primarily based in Israel. The project’s creator and CEO is Omer Sadika, an entrepreneur with a strong pedigree in the crypto security space. Omer previously co-founded the Odsy Network, another project centered on decentralized wallet infrastructure, and he is the Founder/CEO of dWallet Labs, the company behind Ika. His background includes training at Y Combinator (YC alum) and a focus on cybersecurity and distributed systems. Omer’s experience with Odsy and dWallet Labs directly informed Ika’s vision – in essence, Ika can be seen as an evolution of the “dynamic decentralized wallet” concept Odsy worked on, now implemented as an MPC network on Sui.

Ika’s CTO and co-founder is Yehonatan Cohen Scaly, a cryptography expert who co-authored the 2PC-MPC protocol. Yehonatan leads the R&D for Ika’s novel cryptographic algorithms and had previously worked in cybersecurity (possibly with academic research in cryptography). He has been quoted discussing the limitations of existing threshold schemes and how Ika’s approach overcomes them, reflecting deep expertise in MPC and distributed cryptographic protocols. Another co-founder is David Lachmish, who oversees product development. David’s role is to translate the core technology into developer-friendly products and real-world use cases. The trio of Omer, Yehonatan, and David – along with other researchers like Dr. Dolev Mutzari (VP of Research at dWallet Labs) – anchors Ika’s leadership. Collectively, the team’s credentials include prior startups, academic research contributions, and experience at the intersection of crypto, security, and blockchain. This depth is why Ika is described as being created by “some of the world’s leading cryptography experts”.

In addition to the founders, Ika’s broader team and advisors likely feature individuals with strong cryptography backgrounds. For instance, Dolev Mutzari (mentioned above) is a co-author of the technical paper and instrumental in the protocol design. The presence of such talent gives investors confidence that Ika’s complex technology is in capable hands. Moreover, having a founder (Omer) who already successfully raised funds and built a community around Odsy/dWallet concepts means Ika benefits from lessons learned in previous iterations of the idea. The team’s base in Israel – a country known for its cryptography and cybersecurity sector – also situates them in a rich talent pool for hiring developers and researchers.

Funding Rounds and Key Backers

Ika (and its parent, dWallet Labs) has attracted significant venture funding and strategic investment since its inception. To date it has raised over $21 million across multiple rounds. The project’s initial seed round in August 2022 was $5M, which was remarkable given the bear market conditions at that time. That seed round included a wide array of well-known crypto investors and angels. Notable participants included Node Capital (lead), Lemniscap, Collider Ventures, Dispersion Capital, Lightshift Capital, Tykhe Block Ventures, Liquid2 Ventures, Zero Knowledge Ventures, and others. Prominent individual investors also joined, such as Naval Ravikant (AngelList co-founder and prominent tech investor), Marc Bhargava (co-founder of Tagomi), Rene Reinsberg (co-founder of Celo), and several other industry figures. Such a roster of backers underscored strong confidence in Ika’s approach to decentralized custody even at the idea stage.

In May 2023, Ika raised an additional ~$7.5M in what appears to be a Series A or strategic round, reportedly at a valuation around $250M. This round was led by Blockchange Ventures and Node Capital (again), with participation from Insignius Capital, Rubik Ventures, and others. By this point, the thesis of scalable MPC networks had gained traction, and Ika’s progress likely attracted these investors to double down. The $250M valuation for a relatively early-stage network reflected the market’s expectation that Ika could become foundational infrastructure in web3 (on par with L1 blockchains or major DeFi protocols in terms of value).

The most high-profile investment came in April 2025, when the Sui Foundation announced a strategic investment in Ika. This partnership with Sui’s ecosystem fund pushed Ika’s total funding above $21M and cemented a close alignment with the Sui blockchain. While the exact amount Sui Foundation invested wasn’t publicly disclosed, it’s clear this was a significant endorsement – likely on the order of several million USD. The Sui Foundation’s support is not just financial; it also means Ika gets strong go-to-market assistance within the Sui ecosystem (developer outreach, integration support, marketing, etc.). According to press releases, “Ika…announced a strategic investment from the Sui Foundation, pushing its total funding to over $21 million.” This strategic round, rather than a traditional VC equity round, highlights that Sui sees Ika as critical infrastructure for its blockchain’s future (similar to how Ethereum Foundation might directly back a Layer-2 or interoperability project that benefits Ethereum).

Aside from Sui, other backers worth noting are Node Capital (a China-based crypto fund known for early investments in infrastructure), Lemniscap (a crypto VC focusing on early protocol innovation), and Collider Ventures (Israel-based VC, likely providing local support). Blockchange Ventures leading the 2023 round is notable; Blockchange is a VC that has backed several crypto infrastructure plays and their lead suggests they saw Ika’s tech as potentially category-defining. Additionally, Digital Currency Group (DCG) and Node Capital led a $5M fundraise for dWallet Labs prior to Ika’s rebranding (according to a LinkedIn post by Omer) – DCG’s involvement (via an earlier round for the company) indicates even more support in the background.

In summary, Ika’s funding journey shows a mix of traditional VCs and strategic partners. The Sui Foundation’s involvement particularly stands out, as it not only provides capital but also an integrated ecosystem to deploy Ika’s technology. Investors are essentially betting that Ika will become the go-to solution for decentralized key management and bridging across many networks, and they have valued the project accordingly.

Tokenomics and Economic Model

Ika will have a native utility token called $IKA, which is central to the network’s economics and security model. Uniquely, the IKA token is being launched on the Sui blockchain (as a Sui-native asset), even though the Ika network itself is a separate chain. This means IKA will exist as a coin that can be held and transferred on Sui like any other Sui asset, and it will be used in a dual manner: within the Ika network for staking and fees, and on Sui for governance or access in dApps. The tokenomics can be outlined as follows:

  • Gas Fees: Just as ETH is gas in Ethereum or SUI is gas in Sui, IKA serves as the gas/payment for MPC operations on the Ika network. When a user or a dApp requests a signature or dWallet operation, a fee in IKA is paid to the network. These fees compensate validators for the computation and communication work of running the threshold signing protocol. The whitepaper analogizes IKA’s role to Sui’s gas, confirming that all cross-chain transactions facilitated by Ika will incur a small IKA fee. The fee schedule is likely proportional to the complexity of the operation (e.g., a single signature might cost a baseline fee, while more complex multi-step workflows could cost more).

  • Staking and Security: IKA is also a staking token. Validator nodes in the Ika network must be delegated a stake of IKA to participate in consensus and signing. The consensus follows a delegated proof-of-stake similar to Sui’s: token holders delegate IKA to validators, and the weight of each validator in the consensus (and thus in the threshold signature processes) is determined by stake. In each epoch, validators are chosen and their voting power is a function of stake, with the overall set being Byzantine fault tolerant (meaning if a validator set has total stake X, up to ~X/3 stake could be malicious without breaking the network’s guarantees). Stakers (delegators) are incentivized by staking rewards: Ika’s model likely includes distribution of the collected fees (and possibly inflationary rewards) to validators and their delegators at epoch ends. Indeed, documentation notes that all transaction fees collected are distributed to authorities, who may share a portion with their delegators as rewards. This mirrors the Sui model of rewarding service providers for throughput.

  • Supply and Distribution: As of now (Q2 2025), details on IKA’s total supply, initial distribution, and inflation are not fully public. However, given the funding rounds, we can infer some structure. Likely, a portion of IKA is allocated to early investors (seed and series rounds) and the team, with a large part reserved for community and future incentives. There may be a community sale or airdrop planned, especially since Ika ran a record-setting NFT art campaign on Sui that raised 1.4M SUI; it’s possible participants in that campaign will receive IKA rewards or early access. The NFT campaign suggests a strategy to involve the community and bootstrap token distribution to users, not just VCs.

  • Token Launch Timing: The Sui Foundation’s October 2024 announcement indicated “The IKA token will launch natively on Sui, unlocking new functionality and utility in decentralized security”. Mainnet was slated for December 2024, so presumably the token generation event (TGE) would coincide or shortly follow. If mainnet launched on schedule, IKA tokens might have begun distribution in late 2024 or early 2025. The token would then start being used for gas on the Ika network and staking. Before that, on testnet, a temporary token (DWLT on testnet) was used for gas, which had no real value.

  • Use Cases and Value Accrual: The value of IKA as an investment hinges on Ika network usage. As more cross-chain transactions flow through Ika, more fees are paid in IKA, creating demand. Additionally, if many want to run validators or secure the network, they must acquire and stake IKA, which locks up supply (reducing float). Thus IKA has a utility plus governance nature – utility in paying for services and staking, and likely governance in directing the future of the protocol (though governance isn’t explicitly mentioned yet, it’s common for such networks to eventually decentralize control via token voting). One can imagine IKA token holders voting on adding support for new chains, adjusting fee parameters, or other protocol upgrades in the future.
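The fee-distribution mechanics described in the staking bullet can be sketched as a pro-rata split: epoch fees flow to validator pools in proportion to stake, and each validator keeps a commission before passing the remainder to its delegators. The commission mechanism and rates below are assumptions for illustration; the document only states that authorities "may share a portion" with delegators.

```typescript
// Sketch of an epoch fee distribution (pool shapes and commission are assumed).
interface Pool {
  validatorStake: number; // validator's own IKA stake
  delegatedStake: number; // IKA delegated by token holders
  commission: number;     // fraction of delegator rewards kept by the validator
}

function distributeEpochFees(
  fees: number,
  pools: Pool[],
): { validator: number; delegators: number }[] {
  const total = pools.reduce((acc, p) => acc + p.validatorStake + p.delegatedStake, 0);
  return pools.map((p) => {
    const poolStake = p.validatorStake + p.delegatedStake;
    const poolFees = fees * (poolStake / total); // pro rata by total pool stake
    // Delegators earn on their stake fraction, minus the validator's commission.
    const delegators = poolFees * (p.delegatedStake / poolStake) * (1 - p.commission);
    return { validator: poolFees - delegators, delegators };
  });
}
```

For example, a single pool with equal validator and delegated stake and a 50% commission would split a 100 IKA epoch: 25 to delegators and 75 to the validator, with fees always conserved across the split.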

Overall, IKA’s tokenomics aim to balance network security with usability. By launching on Sui, they make it easy for Sui ecosystem users to obtain and use IKA (no separate chain onboarding needed for the token itself), which can jumpstart adoption. Investors will watch metrics like the portion of supply staked (indicating security), the fee revenue (indicating usage), and partnerships that drive transactions (indicating demand for the token).

Business Model and Go-to-Market Strategy

Ika’s business model is that of an infrastructure provider in the blockchain ecosystem. It doesn’t offer a consumer-facing product; instead it offers a protocol service (decentralized key management and transaction execution) that other projects integrate. As such, the primary revenue (or value capture) mechanism is the fee for service – i.e., the gas fees in IKA for using the network. One can liken Ika to a decentralized AWS for key signing: any developer can plug in and use it, paying per use. In the long run, as the network decentralizes, dWallet Labs (the founding company) might capture value by holding a stake in the network and via token appreciation rather than charging SaaS-style fees off-chain.

Go-to-Market (GTM) Strategy: Early on, Ika is targeting blockchain developers and projects that need cross-chain functionality or custody solutions. The alignment with Sui gives a ready pool of such developers. Sui itself, being a newer L1, needs unique features to attract users – and Ika offers cross-chain DeFi, Bitcoin access, and more on Sui, which are compelling features. Thus, Ika’s GTM piggybacks on Sui’s growing ecosystem. Notably, even before mainnet, several Sui projects announced they are integrating Ika:

  • Projects like Full Sail, Rhei, Aeon, Human Tech, Covault, Lucky Kat, Native, Nativerse, Atoma, and Ekko (all builders on Sui) have “announced their upcoming launches utilizing Ika”, covering use cases from DeFi to gaming. For example, Full Sail might be building an exchange that can trade BTC via Ika; Lucky Kat (a gaming studio) could use Ika to enable in-game assets that reside on multiple chains; Covault likely involves custody solutions, etc. By securing these partnerships early, Ika ensures that upon launch there will be immediate transaction volume and real applications showcasing its capabilities.

  • Ika is also emphasizing institutional use-cases, such as decentralized custody for institutions. In press releases, they highlight “unmatched security for institutional and individual users” in custody via Ika. This suggests Ika could be marketed to crypto custodians, exchanges, or even TradFi players that want a more secure way to manage private keys (perhaps as an alternative or complement to Fireblocks or Copper, which use MPC but in a centralized enterprise setting). In fact, by being a decentralized network, Ika could allow competitors in custody to all rely on the same robust signing network rather than each building their own. This cooperative model could attract institutions that prefer a neutral, decentralized custodian for certain assets.

  • Another angle is AI integrations: Ika mentions “AI Agent guardrails” as a use case. This is forward-looking, playing on the trend of AI autonomy (e.g., AI agents executing on blockchain). Ika can ensure an AI agent (say an autonomous economic agent given control of some funds) cannot run off with the funds because the agent itself isn’t the sole holder of the key – it would still need the user’s share or abide by conditions in Ika. Marketing Ika as providing safety rails for AI in Web3 is a novel angle to capture interest from that sector.

Geographically, the presence of Node Capital and others hints at an Asia focus in addition to the Western market. Sui has a strong Asia community (especially in China). Ika’s NFT campaign on Sui (the art campaign raising 1.4M SUI) indicates a community-building effort – possibly engaging Chinese users who are active in the Sui NFT space. By doing NFT sales or community airdrops, Ika can cultivate a grassroots user base who hold IKA tokens and are incentivized to promote its adoption.

Over time, the business model could extend to offering premium features or enterprise integrations. For instance, while the public Ika network is permissionless, dWallet Labs could spin up private instances or consortium versions for certain clients, or provide consulting services to projects integrating Ika. They could also earn via running some of the validators early on (bootstrap phase) and thus collecting part of the fees.

In summary, Ika’s GTM is strongly tied to ecosystem partnerships. By embedding deeply into Sui’s roadmap (where Sui’s 2025 goals include cross-chain liquidity and unique use cases), Ika ensures it will ride the growth of that L1. Simultaneously, it positions itself as a generalized solution for multi-chain coordination, which can then be pitched to projects on other chains once a success on Sui is demonstrated. The backing from Sui Foundation and the early integration announcements give Ika a significant head start in credibility and adoption compared to if it launched in isolation.

Ecosystem Adoption, Partnerships, and Roadmap

Even at its early stage, Ika has built an impressive roster of ecosystem engagements:

  • Sui Ecosystem Adoption: As mentioned, multiple Sui-based projects are integrating Ika. This means upon Ika’s mainnet launch, we expect to see Sui dApps enabling features like “Powered by Ika” – for example, a Sui lending protocol that lets users deposit BTC, or a DAO on Sui that uses Ika to hold its treasury on multiple chains. The fact that names like Rhei, Atoma, Nativerse (likely DeFi projects) and Lucky Kat (gaming/NFT) are on board shows that Ika’s applicability spans various verticals.

  • Strategic Partnerships: Ika’s most important partnership is with the Sui Foundation itself, which is both an investor and a promoter. Sui’s official channels (blog, etc.) have featured Ika prominently, effectively endorsing it as the interoperability solution for Sui. Additionally, Ika has likely been working with other infrastructure providers. For instance, given the mention of zkLogin (Sui’s Web2 login feature) alongside Ika, there could be a combined use-case where zkLogin handles user authentication and Ika handles cross-chain transactions, together providing a seamless UX. Also, Ika’s mention of Avail (Polygon) in its blogs suggests a partnership or pilot in that ecosystem: perhaps with Polygon Labs or teams building rollups on Avail to use Ika for bridging Bitcoin to those rollups. Another potential partnership domain is with custodians – for example, integrating Ika with wallet providers like ZenGo (a natural fit, since ZenGo was a prior project of Ika’s founder Omer) or with institutional custody tech like Fireblocks. While not confirmed, these would be logical targets (indeed Fireblocks has partnered with Sui elsewhere; one could imagine Fireblocks leveraging Ika for MPC on Sui).

  • Community and Developer Engagement: Ika runs a Discord and likely hackathons to get developers building with dWallets. The technology is novel, so evangelizing it through education is key. The presence of “Use cases” and “Builders” sections on their site, plus blog posts explaining core concepts, indicates a push to get developers comfortable with the concept of dWallets. The more developers understand that they can build cross-chain logic without bridges (and without compromising security), the more organic adoption will grow.

  • Roadmap: As of 2025, Ika’s roadmap included:

    • Alpha and Testnet (2023–2024): The alpha testnet launched in 2024 on Sui, allowing developers to experiment with dWallets and providing feedback. This stage was used to refine the protocol, fix bugs, and run internal audits.
    • Mainnet Launch (Dec 2024): Ika planned to go live on mainnet by end of 2024. If achieved, by now (mid-2025) Ika’s mainnet should be operational. Launch likely included initial support for a set of chains: at least Bitcoin and Ethereum (ECDSA chains) out of the gate, given those were heavily mentioned in marketing.
    • Post-Launch 2025 Goals: In 2025, we expect the focus to be on scaling usage (through Sui apps and possibly expanding to other chains). The team will work on adding Ed25519 and Schnorr support shortly after launch, enabling integration with Solana, Polkadot, and other ecosystems. They will also implement more light clients (perhaps Ethereum light client for Ika, Solana light client, etc.) to broaden the trustless control. Another roadmap item is likely permissionless validator expansion – encouraging more independent validators to join and decentralizing the network further. Since the code is a Sui fork, running an Ika validator is similar to running a Sui node, which many operators can do.
    • Feature Enhancements: Two interesting features hinted in blogs are Encrypted User Shares and Future Transaction signing. Encrypted user share means users can optionally encrypt their private share and store it on-chain (perhaps on Ika or elsewhere) in a way that only they can decrypt, simplifying recovery. Future transaction signing implies the ability to have Ika pre-sign a transaction that executes later when conditions are met. These features increase usability (users won’t have to be online for every action if they pre-approve certain logic, all while maintaining non-custodial security). Delivering these in 2025 would further differentiate Ika’s offering.
    • Ecosystem Growth: By end of 2025, Ika likely aims to have multiple chain ecosystems actively using it. We might see, for example, an Ethereum project using Ika via an oracle (if direct on-chain integration is not yet there) or collaborations with interchain projects like Wormhole or LayerZero, where Ika could serve as the signing mechanism for secure messaging.

The competitive landscape will also shape Ika’s strategy. It’s not alone in offering decentralized key management, so part of its roadmap will involve highlighting its performance edge and unique two-party security in contrast to others. In the next section, we compare Ika to its notable competitors Lit Protocol, Threshold Network, and Zama.

Competitive Analysis: Ika vs. Other MPC/Threshold Networks

Ika operates in a cutting-edge arena of cryptographic networks, where a few projects are pursuing similar goals with varying approaches. Below is a summary comparison of Ika with Lit Protocol, Threshold Network, and Zama (each a representative competitor in decentralized key infrastructure or privacy computing):

  • Launch & Status
    • Ika (Parallel MPC Network): Founded 2022; testnet in 2024; mainnet launched on Sui in Dec 2024 (early 2025). Token $IKA live on Sui.
    • Lit Protocol (PKI & Compute): Launched 2021; Lit nodes network live. Token $LIT (launched 2021). Building “Chronicle” rollup for scaling.
    • Threshold Network (tBTC & TSS): Network went live 2022 after the Keep/NuCypher merger. Token $T governs the DAO. tBTC v2 launched for Bitcoin bridging.
    • Zama (FHE Network): In development (no public network yet as of 2025). Raised large VC rounds for R&D. No token yet (FHE tools in alpha stage).
  • Core Focus / Use-Case
    • Ika: Cross-chain interoperability and custody – threshold signing to control native assets across chains (e.g. BTC, ETH) via dWallets. Enables DeFi, multi-chain dApps, etc.
    • Lit Protocol: Decentralized key management & access control – threshold encryption/decryption and conditional signing via PKPs (Programmable Key Pairs). Popular for gating content and cross-chain automation with JavaScript “Lit Actions”.
    • Threshold Network: Threshold cryptography services – e.g. the tBTC decentralized Bitcoin-to-Ethereum bridge; threshold ECDSA for digital asset custody; threshold proxy re-encryption (PRE) for data privacy.
    • Zama: Privacy-preserving computation – Fully Homomorphic Encryption (FHE) to enable encrypted data processing and private smart contracts. Focus on confidentiality (e.g. private DeFi, on-chain ML) rather than cross-chain control.
  • Architecture
    • Ika: Fork of the Sui blockchain (Mysticeti DAG consensus) modified for MPC. No user smart contracts on Ika; uses an off-chain 2PC-MPC protocol among ~N validators plus the user share. High-throughput (10k TPS) design.
    • Lit Protocol: Decentralized network + L2 – Lit nodes run MPC and a TEE-based JS runtime. The “Chronicle” Arbitrum rollup anchors state and coordinates nodes. Uses a 2/3 threshold for consensus on key operations.
    • Threshold Network: Decentralized network on Ethereum – node operators stake $T and are randomly selected into signing groups (e.g. 100 nodes for tBTC). Uses off-chain protocols (GG18, etc.) with on-chain Ethereum contracts for coordination and deposit handling.
    • Zama: FHE toolkits atop existing chains – Zama’s tech (e.g. Concrete, TFHE libraries) enables FHE on Ethereum (fhEVM). Plans a threshold key management system (TKMS) for FHE keys. Likely to integrate with L1s or run as a Layer-2 for private computations.
  • Security Model
    • Ika: 2PC-MPC, non-collusive – the user’s key share plus a threshold of N validators (2/3 BFT) is required for any signature. No single entity ever has the full key. BFT consensus tolerates <33% malicious. Audited by Symbolic (2024).
    • Lit Protocol: Threshold + TEE – requires 2/3 of Lit nodes to sign/decrypt. Uses Trusted Execution Environments on each node to run user-provided code (Lit Actions) securely. Security depends on node honesty and hardware security.
    • Threshold Network: Threshold multi-party – e.g. for tBTC, a randomly selected group of ~100 nodes must reach a threshold (e.g. 51) to sign BTC transactions. Economic incentives ($T staking, slashing) keep an honest majority. DAO-governed; security incidents would be handled via governance.
    • Zama: FHE-based – security relies on the cryptographic hardness of FHE (learning with errors, etc.); data remains encrypted at all times. Zama’s TKMS indicates use of threshold cryptography to manage FHE keys as well. Not a live network yet; security under review by academics.
  • Performance
    • Ika: Sub-second latency, ~10,000 signatures/sec in theory. Scales to hundreds or thousands of nodes without major performance loss (broadcast & batching approach). Suitable for real-time dApp use (trading, gaming).
    • Lit Protocol: Moderate latency (heavier due to TEE and consensus overhead). Lit has ~50 nodes; uses “shadow splicing” to scale, but a large node count can degrade performance. Good for moderate-frequency tasks (opening access, occasional tx signing). The Chronicle L2 helps with batching.
    • Threshold Network: Lower throughput, higher latency – tBTC minting can take minutes (waiting for Bitcoin confirmations plus threshold signing) and uses small groups to sign. Threshold’s focus is quality (security) over quantity – fine for bridging transactions and access control, not designed for thousands of TPS.
    • Zama: Heavy computation latency – FHE is currently much slower than plaintext computation (orders of magnitude). Zama is optimizing, but running private contracts will be slower and costlier than normal ones. Not aimed at high-frequency tasks; targeted at complex computations where privacy is paramount.
  • Decentralization
    • Ika: High – permissionless validator set; hundreds of validators possible. Delegated PoS (Sui-style) ensures open participation and decentralized governance over time. The user is always in the loop (can’t be bypassed).
    • Lit Protocol: Medium – currently ~30–50 core nodes run by the Lit team and partners, with plans to decentralize further. Nodes do heavy tasks (MPC + TEE), so scaling out is non-trivial. Governance not fully decentralized yet (a Lit DAO exists but is early).
    • Threshold Network: High – large pool of stakers; however, actual signing is done by selected groups (not the entire network at once). The network is as decentralized as its stake distribution. Governed by the Threshold DAO (token-holder votes) – mature decentralization in governance.
    • Zama: N/A (for a network) – Zama is currently a company-driven project. If fhEVM networks launch, they will initially likely run a centralized or limited set of nodes (given the complexity). Execution of FHE transactions could decentralize over time, but that is uncharted territory in 2025.
  • Token and Incentives
    • Ika: $IKA (Sui-based) for gas fees, staking, and potentially governance. Incentive: earn fees for running validators; the token appreciates with network usage. Sui Foundation backing gives it ecosystem value.
    • Lit Protocol: $LIT token – used for governance and possibly fees for advanced services. Lit Actions are currently free to developers (no gas); a fee model may be introduced long-term. $LIT incentivizes node operation (stakers), but exact token economics are evolving.
    • Threshold Network: $T token – staked by nodes; governs the DAO treasury and protocol upgrades. Nodes earn $T and fees (in ETH or tBTC fees). $T secures the network (slashing for misbehavior) and is also used in liquidity programs for tBTC adoption.
    • Zama: No token (yet) – Zama is VC-funded; it might introduce a token if it launches a network service (e.g. for paying for private computation or staking to secure networks running FHE contracts). Currently developers use Zama’s tools without a token.
  • Key Backers
    • Ika: Sui Foundation (strategic investor); VCs: Node Capital, Blockchange, Lemniscap, Collider; angels like Naval Ravikant. Strong support from the Sui ecosystem.
    • Lit Protocol: Backed by 1kx, Pantera, Coinbase Ventures, Framework, etc. (raised $13M in 2022). Growing developer community via the Lit DAO. Partnerships with Ceramic and NFT projects for access control.
    • Threshold Network: Emerged from the Keep & NuCypher communities (backed by a16z and Polychain in the past). Run by a DAO; no new VC funding post-merger (grants from the Ethereum Community Fund, etc.). Partnerships: works with Curve and Aave (tBTC integrations).
    • Zama: Backed by a16z, SoftBank, Multicoin Capital (raised $73M Series A). Close ties to Ethereum Foundation research (CEO Rand Hindi is an outspoken FHE advocate in Ethereum). Collaborating with projects like Optalysys for hardware acceleration.

Ika’s Competitive Edge: Ika’s differentiators lie in its performance at scale and unique security model. Compared to Lit Protocol, Ika can support far more signers and much higher throughput, making it suitable for use cases (like high-volume trading or gaming) that Lit’s network would struggle with. Ika also does not rely on Trusted Execution Environments, which some developers are wary of (due to potential exploits in SGX); instead, Ika achieves trustlessness purely with cryptography and consensus. Against Threshold Network, Ika offers a more general-purpose platform. Threshold is largely focused on Bitcoin↔Ethereum bridging (tBTC) and a couple of cryptographic services like proxy re-encryption, whereas Ika is a flexible interoperability layer that can work with any chain and asset out-of-the-box. Also, Ika’s user-in-the-loop model means it doesn’t require over-collateralization or insurance for deposits (tBTC v2 uses a robust but complex economic model to secure BTC deposits, whereas in Ika the user never gives up control in the first place). Compared to Zama, Ika addresses a different problem – Zama targets privacy, while Ika targets interoperability. However, it’s conceivable that in the future the two could complement each other (e.g., using FHE on Ika-stored assets). For now, Ika has the advantage of being operational sooner in a niche with immediate demand (bridges and MPC networks are needed today, whereas FHE is still maturing).

One potential challenge for Ika is market education and trust. It’s introducing a novel way of doing cross-chain interactions (dWallets instead of traditional lock-and-mint bridges). It will need to demonstrate its security in practice over time to win the same level of trust that, say, the Threshold Network has gradually earned (Threshold had to prove out tBTC after an earlier version was paused due to risks). If Ika’s technology works as advertised, it effectively leapfrogs the competition by solving the trilemma of decentralization, security, and speed in the MPC space. The strong backing from Sui and the extensive audits/papers lend credibility.

In conclusion, Ika stands out among MPC networks for its ambitious scalability and user-centric security model. Investors see it as a bet on the future of cross-chain coordination – one where users can seamlessly move value and logic across many blockchains without ever giving up control of their keys. If Ika achieves broad adoption, it could become as integral to Web3 infrastructure as cross-chain messaging protocols or major Layer-1 blockchains themselves. The coming year (2025) will be critical as Ika’s mainnet and first use cases go live, proving whether this cutting-edge cryptography can deliver on its promises in real market conditions. The early signs – strong technical fundamentals, an active pipeline of integrations, and substantial investor support – suggest that Ika has a real shot at redefining blockchain interoperability with MPC.

Sources: Primary information was gathered from Ika’s official documentation and whitepaper, Sui Foundation announcements, press releases and funding news, as well as competitor technical docs and analyses for context (Lit Protocol’s Messari report, Threshold Network documentation, and Zama’s FHE descriptions). All information is up-to-date as of 2025.

Programmable Privacy in Blockchain: Off‑Chain Compute with On‑Chain Verification

· 47 min read
Dora Noda
Software Engineer

Public blockchains provide transparency and integrity at the cost of privacy – every transaction and contract state is exposed to all participants. This openness creates problems like MEV (Miner Extractable Value) attacks, copy-trading, and leakage of sensitive business logic. Programmable privacy aims to solve these issues by allowing computations on private data without revealing the data itself. Two emerging cryptographic paradigms are making this possible: Fully Homomorphic Encryption Virtual Machines (FHE-VM) and Zero-Knowledge (ZK) Coprocessors. These approaches enable off-chain or encrypted computation with on-chain verification, preserving confidentiality while retaining trustless correctness. In this report, we dive deep into FHE-VM and ZK-coprocessor architectures, compare their trade-offs, and explore use cases across finance, identity, healthcare, data markets, and decentralized machine learning.

Fully Homomorphic Encryption Virtual Machine (FHE-VM)

Fully Homomorphic Encryption (FHE) allows arbitrary computations on encrypted data without ever decrypting it. An FHE Virtual Machine integrates this capability into blockchain smart contracts, enabling encrypted contract state and logic. In an FHE-enabled blockchain (often called an fhEVM for EVM-compatible designs), all inputs, contract storage, and outputs remain encrypted throughout execution. This means validators can process transactions and update state without learning any sensitive values, achieving on-chain execution with data confidentiality.

Architecture and Design of FHE-VM

A typical FHE-VM extends a standard smart contract runtime (like the Ethereum Virtual Machine) with native support for encrypted data types and operations. For example, Zama’s FHEVM introduces encrypted integers (euint8, euint32, etc.), encrypted booleans (ebool), and even encrypted arrays as first-class types. Smart contract languages like Solidity are augmented via libraries or new opcodes so developers can perform arithmetic (add, mul, etc.), logical operations, and comparisons directly on ciphertexts. Under the hood, these operations invoke FHE primitives (e.g. using the TFHE library) to manipulate encrypted bits and produce encrypted results.

Encrypted state storage is supported so that contract variables remain encrypted in the blockchain state. The execution flow is typically:

  1. Client-Side Encryption: Users encrypt their inputs locally using the public FHE key before sending transactions. The encryption key is public (for encryption and evaluation), while the decryption key remains secret. In some designs, each user manages their own key; in others, a single global FHE key is used (discussed below).
  2. On-Chain Homomorphic Computation: Miners/validators execute the contract with encrypted opcodes. They perform the same deterministic homomorphic operations on the ciphertexts, so consensus can be reached on the encrypted new state. Crucially, validators never see plaintext data – they just see “gibberish” ciphertext but can still process it consistently.
  3. Decryption (Optional): If a result needs to be revealed or used off-chain, an authorized party with the private key can decrypt the output ciphertext. Otherwise, results remain encrypted and can be used as inputs to further transactions (allowing consecutive computations on persistent encrypted state).

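The three-step flow above can be sketched end-to-end with a toy additively homomorphic scheme. This is a minimal illustration, not an FHE-VM: real fhEVMs use lattice-based FHE (e.g. TFHE) that supports arbitrary circuits, whereas the Paillier cryptosystem below only supports addition, and the key sizes here are deliberately tiny and insecure.

```python
# Toy illustration of the encrypt -> homomorphic-compute -> decrypt flow:
# client encrypts, "validators" compute on ciphertexts, key holder decrypts.
# Paillier is only additively homomorphic and these primes are demo-sized.
import math
import random

def paillier_keygen(p=10007, q=10009):
    # p, q are tiny demo primes; real deployments use 2048-bit primes.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard choice of generator
    mu = pow(lam, -1, n)           # valid simplification for g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    # L(x) = (x - 1) / n, then m = L(c^lam mod n^2) * mu mod n
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

# 1. Client-side encryption: the user encrypts inputs locally.
pk, sk = paillier_keygen()
c1, c2 = encrypt(pk, 25), encrypt(pk, 17)

# 2. "On-chain" homomorphic computation: multiplying ciphertexts adds the
#    plaintexts -- a validator does this without ever seeing 25 or 17.
c_sum = (c1 * c2) % (pk[0] ** 2)

# 3. Decryption by the key holder (the user, or a threshold quorum of
#    validators in Zama's global-key model).
assert decrypt(pk, sk, c_sum) == 42
```

The same shape carries over to an fhEVM: step 2 is what every validator performs deterministically to reach consensus on the encrypted new state.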
A major design consideration is key management. One approach is per-user keys, where each user holds their secret key and only they can decrypt outputs relevant to them. This maximizes privacy (no one else can ever decrypt your data), but homomorphic operations cannot mix data encrypted under different keys without complex multi-key protocols. Another approach, used by Zama’s FHEVM, is a global FHE key: a single public key encrypts all contract data and a distributed set of validators holds shares of the threshold decryption key. The public encryption and evaluation keys are published on-chain, so anyone can encrypt data to the network; the private key is split among validators who can collectively decrypt if needed under a threshold scheme. To prevent validator collusion from compromising privacy, Zama employs a threshold FHE protocol (based on their Noah’s Ark research) with “noise flooding” to make partial decryptions secure. Only if a sufficient quorum of validators cooperates can a plaintext be recovered, for example to serve a read request. In normal operation, however, no single node ever sees plaintext – data remains encrypted on-chain at all times.

Access control is another crucial component. FHE-VM implementations include fine-grained controls to manage who (if anyone) can trigger decryptions or access certain encrypted fields. For instance, Cypher’s fhEVM supports Access Control Lists on ciphertexts, allowing developers to specify which addresses or contracts can interact with or re-encrypt certain data. Some frameworks support re-encryption: the ability to transfer an encrypted value from one user’s key to another’s without exposing plaintext. This is useful for things like data marketplaces, where a data owner can encrypt a dataset with their key, and upon purchase, re-encrypt it to the buyer’s key – all on-chain, without ever decrypting publicly.

Ensuring Correctness and Privacy

One might ask: if all data is encrypted, how do we enforce correctness of contract logic? How can the chain prevent invalid operations if it can’t “see” the values? FHE by itself doesn’t provide a proof of correctness – validators can perform the homomorphic steps, but they can’t inherently tell if a user’s encrypted input was valid or if a conditional branch should be taken, etc., without decrypting. Zero-knowledge proofs (ZKPs) can complement FHE to solve this gap. In an FHE-VM, typically users must provide a ZK proof attesting to certain plaintext conditions whenever needed. Zama’s design, for example, uses a ZK Proof of Plaintext Knowledge (ZKPoK) to accompany each encrypted input. This proves that the user knows the plaintext corresponding to their ciphertext and that it meets expected criteria, without revealing the plaintext itself. Such “certified ciphertexts” prevent a malicious user from submitting a malformed encryption or an out-of-range value. Similarly, for operations that require a decision (e.g. ensure account balance ≥ withdrawal amount), the user can supply a ZK proof that this condition holds true on the plaintexts before the encrypted operation is executed. In this way, the chain doesn’t decrypt or see the values, but it gains confidence that the encrypted transactions follow the rules.
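The "prove a property of a hidden value without revealing it" idea behind a ZKPoK can be illustrated with the classic Schnorr protocol, made non-interactive via the Fiat–Shamir transform. This sketches the general commit/challenge/response shape of a proof of knowledge; Zama's actual ZKPoK operates over lattice ciphertexts, and the parameters below are toy-sized, not secure.

```python
# Non-interactive Schnorr proof of knowledge of a discrete log:
# the prover shows it knows `secret` with public = G^secret mod P,
# without ever sending `secret`.
import hashlib
import random

P = 2**127 - 1                     # toy prime modulus (a Mersenne prime)
G = 3                              # assumed generator; fine for a demo

def prove(secret, public):
    r = random.randrange(1, P - 1)
    commitment = pow(G, r, P)
    # Fiat-Shamir: the challenge is a hash of the transcript
    e = int.from_bytes(hashlib.sha256(
        f"{G}:{public}:{commitment}".encode()).digest(), "big") % (P - 1)
    s = (r + e * secret) % (P - 1)
    return commitment, s

def verify(public, proof):
    commitment, s = proof
    e = int.from_bytes(hashlib.sha256(
        f"{G}:{public}:{commitment}".encode()).digest(), "big") % (P - 1)
    # G^s == commitment * public^e holds iff the prover knew the secret
    return pow(G, s, P) == (commitment * pow(public, e, P)) % P

secret = 123456789
public = pow(G, secret, P)
assert verify(public, prove(secret, public))       # honest proof passes
assert not verify(public, prove(secret + 1, public))  # wrong secret fails
```

In an FHE-VM the statement being proven is richer (e.g. "this ciphertext encrypts a value in range"), but the verifier-side check is similarly cheap and reveals nothing about the plaintext.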

Another approach in FHE rollups is to perform off-chain validation with ZKPs. Fhenix (an L2 rollup using FHE) opts for an optimistic model where a separate network component called a Threshold Service Network can decrypt or verify encrypted results, and any incorrect computation can be challenged with a fraud-proof. In general, combining FHE + ZK or fraud proofs ensures that encrypted execution remains trustless. Validators either collectively decrypt only when authorized, or they verify proofs that each encrypted state transition was valid without needing to see plaintext.

Performance considerations: FHE operations are computationally heavy – many orders of magnitude slower than normal arithmetic. For example, a simple 64-bit addition on Ethereum costs ~3 gas, whereas an addition on an encrypted 64-bit integer (euint64) under Zama’s FHEVM costs roughly 188,000 gas. Even an 8-bit add can cost ~94k gas. This enormous overhead means a straightforward implementation on existing nodes would be impractically slow and costly. FHE-VM projects tackle this with optimized cryptographic libraries (like Zama’s TFHE-rs library for binary gate bootstrapping) and custom EVM modifications for performance. For instance, Cypher’s modified Geth client adds new opcodes and optimizes homomorphic instruction execution in C++/assembly to minimize overhead. Nevertheless, achieving usable throughput requires acceleration. Ongoing work includes using GPUs, FPGAs, and even specialized photonic chips to speed up FHE computations. Zama reports their FHE performance improved 100× since 2024 and is targeting thousands of TPS with GPU/FPGA acceleration. Dedicated FHE co-processor servers (such as Optalysys’s LightLocker Node) can plug into validator nodes to offload encrypted operations to hardware, supporting >100 encrypted ERC-20 transfers per second per node. As hardware and algorithms improve, the gap between FHE and plain computation will narrow, enabling private contracts to approach more practical speeds.
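To put the gas figures quoted above in perspective, a quick back-of-envelope comparison (numbers taken directly from the paragraph above):

```python
# Overhead of a homomorphic add relative to a plain EVM ADD (3 gas),
# using the gas costs cited for Zama's FHEVM encrypted integer types.
PLAIN_ADD_GAS = 3
FHE_ADD_GAS = {"euint8": 94_000, "euint64": 188_000}

for ty, gas in FHE_ADD_GAS.items():
    print(f"{ty}: ~{gas / PLAIN_ADD_GAS:,.0f}x the gas of a plain ADD")
# i.e. roughly 31,000x (8-bit) to 63,000x (64-bit) -- 4 to 5 orders
# of magnitude, which is why hardware acceleration is a focus area.
```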

Compatibility: A key goal of FHE-VM designs is to remain compatible with existing development workflows. Cypher’s and Zama’s fhEVM implementations allow developers to write contracts in Solidity with minimal changes – using a library to declare encrypted types and operations. The rest of the Ethereum toolchain (Remix, Hardhat, etc.) can still be used, as the underlying modifications are mostly at the client/node level. This lowers the barrier to entry: developers don’t need to be cryptography experts to write a confidential smart contract. For example, a simple addition of two numbers can be written as euint32 c = a + b; and the FHEVM will handle the encryption-specific details behind the scenes. The contracts can even interoperate with normal contracts – e.g. an encrypted contract could output a decrypted result to a standard contract if desired, allowing a mix of private and public parts in one ecosystem.

Current FHE-VM Projects: Several projects are pioneering this space. Zama (a Paris-based FHE startup) developed the core FHEVM concept and libraries (TFHE-rs and an fhevm-solidity library). They do not intend to launch their own chain, but rather provide infrastructure to others. Inco is an L1 blockchain (built on Cosmos SDK with Evmos) that integrated Zama’s FHEVM to create a modular confidential chain. Their testnets (named Gentry and Paillier) showcase encrypted ERC-20 transfers and other private DeFi primitives. Fhenix is an Ethereum Layer-2 optimistic rollup using FHE for privacy. It decided on an optimistic (fraud-proof) approach rather than a ZK-rollup due to the heavy cost of doing FHE and ZK together for every block. Fhenix uses the same TFHE-rs library (with some modifications) and introduces a Threshold Service Network for handling decryptions in a decentralized way. Other independent teams and startups are exploring MPC + FHE hybrids. Additionally, Cypher (by Z1 Labs) is building a Layer-3 network focused on AI and privacy, using an fhEVM with features like secret stores and federated learning support. The ecosystem is nascent but growing rapidly, fueled by significant funding – e.g. Zama became a “unicorn” with >$130M raised by 2025 to advance FHE tech.

In summary, an FHE-VM enables privacy-preserving smart contracts by executing all logic on encrypted data on-chain. This paradigm ensures maximum confidentiality – nothing sensitive is ever exposed in transactions or state – while leveraging the existing blockchain consensus for integrity. The cost is increased computational burden on validators and complexity in key management and proof integration. Next, we explore an alternative paradigm that offloads compute entirely off-chain and only uses the chain for verification: the zero-knowledge coprocessor.

Zero-Knowledge Coprocessors (ZK-Coprocessors)

A ZK-coprocessor is a new blockchain architecture pattern where expensive computations are performed off-chain, and a succinct zero-knowledge proof of their correctness is verified on-chain. This allows smart contracts to harness far greater computational power and data than on-chain execution would allow, without sacrificing trustlessness. The term coprocessor is used by analogy to hardware coprocessors (like a math co-processor or GPU) that handle specialized tasks for a CPU. Here, the blockchain’s “CPU” (the native VM like EVM) delegates certain tasks to a zero-knowledge proof system which acts as a cryptographic coprocessor. The ZK-coprocessor returns a result and a proof that the result was computed correctly, which the on-chain contract can verify and then use.

Architecture and Workflow

In a typical setup, a dApp developer identifies parts of their application logic that are too expensive or complex for on-chain execution (e.g. large computations over historical data, heavy algorithms, ML model inference, etc.). They implement those parts as an off-chain program (in a high-level language or circuit DSL) that can produce a zero-knowledge proof of its execution. The on-chain component is a verifier smart contract that checks proofs and makes the results available to the rest of the system. The flow can be summarized as:

  1. Request – The on-chain contract triggers a request for a certain computation to be done off-chain. This could be initiated by a user transaction or by one contract calling into the ZK-coprocessor’s interface. For example, a DeFi contract might call “proveInterestRate(currentState)” or a user calls “queryHistoricalData(query)”.
  2. Off-Chain Execution & Proving – An off-chain service (which could be a decentralized network of provers or a trusted service, depending on the design) picks up the request. It gathers any required data (on-chain state, off-chain inputs, etc.) and executes the computation in a special ZK Virtual Machine (ZKVM) or circuit. During execution, a proof trace is generated. At the end, the service produces a succinct proof (e.g. a SNARK or STARK) attesting that “Computing function F on input X yields output Y” and optionally attesting to data integrity (more on this below).
  3. On-Chain Verification – The proof and result are returned to the blockchain (often via a callback function). The verifier contract checks the proof’s validity using efficient cryptographic verification (pairing checks, etc.). If valid, the contract can now trust the output Y as correct. The result can be stored in state, emitted as an event, or fed into further contract logic. If the proof is invalid or not provided within some time, the request can be considered failed (and potentially some fallback or timeout logic triggers).

Figure 1: Architecture of a ZK Coprocessor (RISC Zero Bonsai example). Off-chain, a program runs on a ZKVM with inputs from the smart contract call. A proof of execution is returned on-chain via a relay contract, which invokes a callback with the verified results.

Critically, the on-chain gas cost for verification is constant (or grows very slowly) regardless of how complex the off-chain computation was. Verifying a succinct proof might cost on the order of a few hundred thousand gas (a fraction of an Ethereum block), but that proof could represent millions of computational steps done off-chain. As one developer quipped, “Want to prove one digital signature? ~$15. Want to prove one million signatures? Also ~$15.” This scalability is a huge win: dApps can offer complex functionalities (big data analytics, elaborate financial models, etc.) without clogging the blockchain.
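The amortization behind that quip is simple arithmetic: a flat verification cost divided by the amount of work batched behind one proof. The gas figure below is an assumed illustrative number, not a benchmark.

```python
# Proof verification gas is roughly flat, so the per-item cost falls
# linearly with how much off-chain work one proof covers.
VERIFY_GAS = 300_000  # assumed flat verification cost, in gas (illustrative)

def per_item_gas(items_proven):
    """Amortized on-chain cost per proven item."""
    return VERIFY_GAS / items_proven

print(per_item_gas(1))          # one signature: pay the full verification cost
print(per_item_gas(1_000_000))  # a million signatures: same proof, ~0.3 gas each
```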

The main components of a ZK-coprocessor system are:

  • Proof Generation Environment: This can be a general-purpose ZKVM (able to run arbitrary programs) or custom circuits tailored to specific computations. Approaches vary:

    • Some projects use handcrafted circuits for each supported query or function (maximizing efficiency for that function).
    • Others provide a Domain-Specific Language (DSL) or an Embedded DSL that developers use to write their off-chain logic, which is then compiled into circuits (balancing ease-of-use and performance).
    • The most flexible approach is a zkVM: a virtual machine (often based on RISC architectures) where programs can be written in standard languages (Rust, C, etc.) and automatically proven. This sacrifices performance (simulating a CPU in a circuit adds overhead) for maximum developer experience.
  • Data Access and Integrity: A unique challenge is feeding the off-chain computation with the correct data, especially if that data resides on the blockchain (past blocks, contract states, etc.). A naive solution is to have the prover read from an archive node and trust it – but that introduces trust assumptions. ZK-coprocessors instead typically prove that any on-chain data used was indeed authentic by linking to Merkle proofs or state commitments. For example, the query program might take a block number and a Merkle proof of a storage slot or transaction, and the circuit will verify that proof against a known block header hash. Three patterns exist:

    1. Inline Data: Put the needed data on-chain (as input to the verifier) so it can be directly checked. This is very costly for large data and undermines the whole point.
    2. Trust an Oracle: Have an oracle service feed the data to the proof and vouch for it. This is simpler but reintroduces trust in a third party.
    3. Prove Data Inclusion via ZK: Incorporate proofs of data inclusion in the chain’s history within the zero-knowledge circuit itself. This leverages the fact that each Ethereum block header commits to the entire prior state (via state root) and transaction history. By verifying Merkle Patricia proofs of the data within the circuit, the output proof assures the contract that “this computation used genuine blockchain data from block N” with no additional trust needed.

    The third approach is the most trustless and is used by advanced ZK-coprocessors like Axiom and Xpansion (it does increase proving cost, but is preferable for security). For instance, Axiom’s system models Ethereum’s block structure, state trie, and transaction trie inside its circuits, so it can prove statements like “the account X had balance Y at block N” or “a transaction with certain properties occurred in block N”. It leverages the fact that given a recent trusted block hash, one can recursively prove inclusion of historical data without trusting any external party.

  • Verifier Contract: This on-chain contract contains the verifying key and logic to accept or reject proofs. For SNARKs like Groth16 or PLONK, the verifier might do a few elliptic curve pairings; for STARKs, it might do some hash computations. Performance optimizations like aggregation and recursion can minimize on-chain load. For example, RISC Zero’s Bonsai uses a STARK-to-SNARK wrapper: it runs a STARK-based VM off-chain for speed, but then generates a small SNARK proof attesting to the STARK’s validity. This shrinks proof size from hundreds of kilobytes to a few hundred bytes, making on-chain verification feasible and cheap. The Solidity verifier then just checks the SNARK (which is a constant-time operation).
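The data-inclusion check from pattern 3 above can be sketched with a plain binary Merkle tree. This is a simplification: Ethereum actually uses keccak-based Merkle Patricia tries, and in a real coprocessor the `verify_inclusion` step runs inside the ZK circuit rather than in Python, so the proof attests that the check passed.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a simple binary Merkle tree (odd levels duplicate the last node)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling path for `leaves[index]`: a list of (sibling_hash, leaf_is_left)."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return path

def verify_inclusion(root, leaf, path):
    """Recompute the root from the leaf and sibling path; compare to the commitment."""
    node = h(leaf)
    for sibling, leaf_is_left in path:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

leaves = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(leaves)  # plays the role of a trusted block commitment
assert verify_inclusion(root, b"tx-c", merkle_proof(leaves, 2))
```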

In terms of deployment, ZK-coprocessors can function as layer-2-like networks or as pure off-chain services. Some, like Axiom, started as a specialized service for Ethereum (with Paradigm’s backing) where developers submit queries to Axiom’s prover network and get proofs on-chain. Axiom’s tagline was providing Ethereum contracts “trustless access to all on-chain data and arbitrary expressive compute over it.” It effectively acts as a query oracle where the answers are verified by ZKPs instead of trust. Others, like RISC Zero’s Bonsai, offer a more open platform: any developer can upload a program (compiled for a RISC-V-compatible zkVM) and use Bonsai’s proving service via a relay contract. The relay pattern, as illustrated in Figure 1, involves a contract that mediates requests and responses: the dApp contract calls the relay to ask for a proof, the off-chain service listens for this (e.g. via event or direct call), computes the proof, and then the relay invokes a callback function on the dApp contract with the result and proof. This asynchronous model is necessary because proving may take from seconds to minutes depending on complexity. This introduces latency (and a liveness assumption that the prover will respond), whereas FHE-VM computations happen synchronously within a block. Designing the application to handle this async workflow (possibly akin to oracle responses) is part of using a ZK-coprocessor.
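The request/callback shape of the relay pattern can be sketched in a few lines. The class and method names are illustrative (not Bonsai's actual API), proving is mocked with a placeholder string, and `fulfill` stands in for the off-chain prover responding at some later time.

```python
class Relay:
    """Mediates proof requests between a dApp contract and an off-chain prover.
    Illustrative sketch of the pattern in Figure 1, not a real relay contract."""
    def __init__(self):
        self.pending = {}   # request_id -> (program, inputs, callback)
        self.next_id = 0

    def request(self, program, inputs, callback):
        """dApp side: register a computation request; the result arrives later."""
        rid = self.next_id
        self.next_id += 1
        self.pending[rid] = (program, inputs, callback)
        return rid

    def fulfill(self, rid):
        """Prover side: execute off-chain, 'prove', and invoke the callback."""
        program, inputs, callback = self.pending.pop(rid)
        output = program(inputs)            # off-chain execution
        proof = f"proof-of-{output}"        # placeholder for a real SNARK/STARK
        callback(output, proof)

results = []
relay = Relay()
rid = relay.request(lambda x: x * x, 12, lambda out, prf: results.append(out))
assert results == []      # asynchronous: nothing is available at request time
relay.fulfill(rid)        # later: prover responds, callback fires
assert results == [144]
```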

Notable ZK-Coprocessor Projects

  • Axiom: Axiom is a ZK coprocessor tailored for Ethereum, focused originally on proving historical on-chain data queries. It uses the Halo2 proving framework (a Plonk-ish SNARK) to create proofs that incorporate Ethereum’s cryptographic structures. In Axiom’s system, a developer can query things like “what was the state of contract X at block N?” or perform a computation over all transactions in a range. Under the hood, Axiom’s circuits had to implement Ethereum’s state/trie logic, even performing elliptic curve operations and SNARK verification inside the circuit to support recursion. Trail of Bits, in an audit, noted the complexity of Axiom’s Halo2 circuits modeling entire blocks and states. After auditing, Axiom generalized their tech into an OpenVM, allowing arbitrary Rust code to be proved with the same Halo2-based infrastructure. (This mirrors the trend of moving from domain-specific circuits to a more general ZKVM approach.) The Axiom team demonstrated ZK queries that Ethereum natively cannot do, enabling stateless access to any historical data with cryptographic integrity. They have also emphasized security, catching and fixing under-constrained circuit bugs and ensuring soundness. While Axiom’s initial product was shut down during their pivot, their approach remains a landmark in ZK coprocessors.

  • RISC Zero Bonsai: RISC Zero develops a zkVM based on the RISC-V architecture. Their zkVM can execute arbitrary programs (written in Rust, C++ and other languages compiled to RISC-V) and produce a STARK proof of execution. Bonsai is RISC Zero’s cloud service that provides this proving on demand, acting as a coprocessor for smart contracts. To use it, a developer writes a program (say a function that performs complex math or verifies an off-chain API response), uploads it to the Bonsai service, and deploys a corresponding verifier contract. When the contract needs that computation, it calls the Bonsai relay which triggers the proof generation and returns the result via callback. One example application demonstrated was off-chain governance computation: RISC Zero showed a DAO using Bonsai to tally votes and compute complex voting metrics off-chain, then post a proof so that the on-chain Governor contract could trust the outcome with minimal gas cost. RISC Zero’s technology emphasizes that developers can use familiar programming paradigms – for instance, writing a Rust function to compute something – and the heavy lifting of circuit creation is handled by the zkVM. However, proofs can be large, so as noted earlier they implemented a SNARK compression for on-chain verification. In August 2023 they successfully verified RISC Zero proofs on Ethereum’s Sepolia testnet, costing on the order of 300k gas per proof. This opens the door for Ethereum dApps to use Bonsai today as a scaling and privacy solution. (Bonsai is still in alpha, not production-ready, and uses a temporary SNARK setup without a ceremony.)

  • Others: There are numerous other players and research initiatives. Xpansion (mentioned above) uses an embedded DSL approach, where developers can write queries over on-chain data with a specialized language, and it handles proof generation internally. StarkWare’s Cairo and Polygon’s zkEVM are more general ZK-rollup VMs, but their tech could be repurposed for coprocessor-like use by verifying proofs within L1 contracts. We also see projects in the ZKML (ZK Machine Learning) domain, which effectively act as coprocessors to verify ML model inference or training results on-chain. For example, a zkML setup can prove that “a neural network inference on private inputs produced classification X” without revealing the inputs or doing the computation on-chain. These are special cases of the coprocessor concept applied to AI.

Trust assumptions: ZK-coprocessors rely on the soundness of the cryptographic proofs. If the proof system is secure (and any trusted setup is done honestly), then an accepted proof guarantees the computation was correct. No additional trust in the prover is needed – even a malicious prover cannot convince the verifier of a false statement. However, there is a liveness assumption: someone must actually perform the off-chain computation and produce the proof. In practice this might be a decentralized network (with incentives or fees to do the work) or a single service operator. If no one provides the proof, the on-chain request might remain unresolved. Another subtle trust aspect is data availability for off-chain inputs that aren’t on the blockchain. If the computation depends on some private or external data, the verifier can’t know if that data was honestly provided unless additional measures (like data commitments or oracle signatures) are used. But for purely on-chain data computations, the mechanisms described ensure trustlessness equivalent to the chain itself (Axiom argued their proofs offer “security cryptographically equivalent to Ethereum” for historical queries).

Privacy: Zero-knowledge proofs also inherently support privacy – the prover can keep inputs hidden while proving statements about them. In a coprocessor context, this means a proof can allow a contract to use a result that was derived from private data. For example, a proof might show “user’s credit score > 700, so approve loan” without revealing the actual credit score or raw data. Axiom’s use-case was more about publicly known data (blockchain history), so privacy wasn’t the focus there. But RISC Zero’s zkVM could be used to prove assertions about secret data provided by a user: the data stays off-chain and only the needed outcome goes on-chain. It’s worth noting that unlike FHE, a ZK proof doesn’t usually provide ongoing confidentiality of state – it’s a one-time proof. If a workflow needs maintaining a secret state across transactions, one might build it by having the contract store a commitment to the state and each proof showing a valid state transition from old commitment to new, with secrets hidden. This is essentially how zk-rollups for private transactions (like Aztec or Zcash) work. So ZK coprocessors can facilitate fully private state machines, but the implementation is nontrivial; often they are used for one-off computations where either the input or the output (or both) can be private as needed.
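The commitment-chained private state machine described above can be sketched as follows. This is an assumed, simplified shape: here the contract checks the transition by seeing the openings directly, whereas a real system would replace `apply_transition`'s opening arguments with a ZK proof so the states and salts never leave the user.

```python
import hashlib

def commit(state: int, salt: bytes) -> str:
    """Hash commitment to a private state value."""
    return hashlib.sha256(salt + str(state).encode()).hexdigest()

class PrivateCounterContract:
    """Stores only a commitment to a hidden counter. Each update shows a
    valid transition from the old commitment to a new one. In a real design
    the openings below are replaced by a ZK proof of the same statement."""
    def __init__(self, initial_commitment):
        self.commitment = initial_commitment

    def apply_transition(self, old_open, new_open, delta):
        old_state, old_salt = old_open
        new_state, new_salt = new_open
        if commit(old_state, old_salt) != self.commitment:
            return False          # opening doesn't match the stored commitment
        if new_state != old_state + delta:
            return False          # transition rule violated
        self.commitment = commit(new_state, new_salt)
        return True

c0 = commit(0, b"salt0")
contract = PrivateCounterContract(c0)
assert contract.apply_transition((0, b"salt0"), (5, b"salt1"), 5)
assert contract.commitment == commit(5, b"salt1")  # chain only sees commitments
```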

Developer experience: Using a ZK-coprocessor typically requires learning new tools. Writing custom circuits (the handcrafted-circuit approach above) is quite complex and usually only done for narrow purposes. Higher-level options like DSLs or zkVMs make life easier but still add overhead: the dev must write and deploy off-chain code and manage the interaction. In contrast to an FHE-VM, where the encryption is mostly handled behind the scenes and the developer writes normal smart contract code, here the developer needs to partition their logic and possibly write in a different language (Rust, etc.) for the off-chain part. However, initiatives like the Noir, Leo, and Circom DSLs or RISC Zero’s approach are rapidly improving accessibility. For instance, RISC Zero provides templates and Foundry integration such that a developer can simulate their off-chain code locally (for correctness) and then seamlessly hook it into Solidity tests via the Bonsai callback. Over time, we can expect development frameworks that abstract away whether a piece of logic is executed via ZK proof or on-chain – the compiler or tooling might decide based on cost.

FHE-VM vs ZK-Coprocessor: Comparison

Both FHE-VMs and ZK-coprocessors enable a form of “compute on private data with on-chain assurance”, but they differ fundamentally in architecture. The table below summarizes key differences:

| Aspect | FHE-VM (Encrypted On-Chain Execution) | ZK-Coprocessor (Off-Chain Proving) |
| --- | --- | --- |
| Where computation happens | Directly on-chain (all nodes execute homomorphic operations on ciphertexts). | Off-chain (a prover or network executes the program; only a proof is verified on-chain). |
| Data confidentiality | Full encryption: data remains encrypted at all times on-chain; validators never see plaintext. Only holders of decryption keys can decrypt outputs. | Zero-knowledge: prover’s private inputs are never revealed on-chain; the proof reveals no secrets beyond what’s in public outputs. However, any data used in computation that must affect on-chain state must be encoded in the output or commitment. Secrets remain off-chain by default. |
| Trust model | Trust in consensus execution and cryptography: if a majority of validators follow the protocol, encrypted execution is deterministic and correct. No external trust needed for computation correctness (all nodes recompute it). Must trust FHE scheme security (typically based on lattice hardness) for privacy. In some designs, also trust that no collusion of enough validators can occur to misuse threshold keys. | Trust in the proof system security (soundness of SNARK/STARK). If the proof verifies, the result is correct with cryptographic certainty. Off-chain provers cannot cheat the math. There is a liveness assumption on provers to actually do the work. If using a trusted setup (e.g. SNARK SRS), must trust that it was generated honestly or use transparent/no-setup systems. |
| On-chain cost and scalability | High per-transaction cost: homomorphic ops are extremely expensive computationally, and every node must perform them. Gas costs are high (e.g. 100k+ gas for a single 8-bit addition). Complex contracts are limited by what every validator can compute in a block. Throughput is much lower than normal smart contracts unless specialized hardware is employed. Scalability is improved by faster cryptography and hardware acceleration, but fundamentally each operation grows chain workload. | Low verification cost: verifying a succinct proof is efficient and constant-size, so on-chain gas is modest (hundreds of thousands of gas for any size computation). This decouples complexity from on-chain resource limits – large computations have no extra on-chain cost. Thus, it scales in terms of on-chain load. Off-chain, proving time can be significant (minutes or more for huge tasks) and might require powerful machines, but this doesn’t directly slow the blockchain. Overall throughput can be high as long as proofs can be generated in time (potential parallel prover networks). |
| Latency | Results are available immediately in the same transaction/block, since computation occurs during execution. No additional round-trips – synchronous operation. However, longer block processing time might increase blockchain latency if FHE ops are slow. | Inherently asynchronous. Typically requires one transaction to request and a later transaction (or callback) to provide the proof/result. This introduces delay (possibly seconds to hours depending on proof complexity and proving hardware). Not suitable for instant finality of a single transaction – more like an async job model. |
| Privacy guarantees | Strong: everything (inputs, outputs, intermediate state) can remain encrypted on-chain. You can have long-lived encrypted state that multiple transactions update without ever revealing it. Only authorized decryption actions (if any) reveal outputs, and those can be controlled via keys/ACLs. However, side-channel considerations like gas usage or event logs must be managed so they don’t leak patterns (fhEVM designs strive for data-oblivious execution with constant gas for operations to avoid leaks). | Selective: the proof reveals whatever is in the public outputs or is necessary to verify (e.g. a commitment to initial state). Designers can ensure that only the intended result is revealed, and all other inputs remain zero-knowledge hidden. But unlike FHE, the blockchain typically doesn’t store the hidden state – privacy is achieved by keeping data off-chain entirely. If a persistent private state is needed, the contract may store a cryptographic commitment to it (so state updates still reveal a new commitment each time). Privacy is limited by what you choose to prove; you have flexibility to prove e.g. that a threshold was met without revealing exact values. |
| Integrity enforcement | By design, all validators recompute the next state homomorphically, so if a malicious actor provides a wrong ciphertext result, others will detect a mismatch – consensus fails unless everyone gets the same result. Thus, integrity is enforced by redundant execution (like a normal blockchain, just on encrypted data). Additional ZK proofs are often used to enforce business rules (e.g. a user couldn’t violate a constraint) because validators can’t directly check plaintext conditions. | Integrity is enforced by the verifier contract checking the ZK proof. As long as the proof verifies, the result is guaranteed to be consistent with some valid execution of the off-chain program. No honest-majority assumption is needed for correctness – even a single honest verifier (the contract code itself) suffices. The on-chain contract will simply reject any false or missing proof (similar to how it would reject an invalid signature). One consideration: if the prover aborts or delays, the contract may need fallback logic (or users may need to try again later), but it won’t accept incorrect results. |
| Developer experience | Pros: can largely use familiar smart contract languages (Solidity, etc.) with extensions. The confidentiality is handled by the platform – devs worry mainly about what to encrypt and who holds keys. Composition of encrypted and normal contracts is possible, maintaining the composability of DeFi (just with encrypted variables). Cons: must understand FHE limitations – e.g. no direct conditional jumps on secret data without special handling, limited circuit depth (though bootstrapping in TFHE allows arbitrary length of computation at the expense of time). Debugging encrypted logic can be tricky since you can’t easily introspect runtime values without the key. Also, key management and permissioning add complexity to contract design. | Pros: potentially use any programming language for the off-chain part (especially with a zkVM). Leverage existing code/libraries in the off-chain program (with caveats for ZK-compatibility). No custom cryptography needed by the developer if using a general zkVM – they write normal code and get a proof. Also, the heavy computation can use libraries (e.g. machine learning code) that would never run on-chain. Cons: developers must orchestrate off-chain infrastructure or use a proving service. Handling asynchronous workflows and integrating them with on-chain logic requires more design work (e.g. storing a pending state, waiting for a callback). Writing efficient circuits or zkVM code might require learning new constraints (e.g. no floating point, use fixed-point or special primitives; avoid heavy branching that blows up proving time; optimize for constraint count). There is also the burden of dealing with proof failures, timeouts, etc., which are not concerns in regular Solidity. The ecosystem of tools is growing, but it’s a new paradigm for many. |

Both approaches are actively being improved, and we even see convergence: as noted, ZKPs are used inside FHE-VMs for certain checks, and conversely some researchers propose using FHE to keep prover inputs private in ZK (so a cloud prover doesn’t see your secret data). It’s conceivable future systems will combine them – e.g. performing FHE off-chain and then proving the correctness of that to chain, or using FHE on-chain but ZK-proving to light clients that the encrypted ops were done right. Each technique has strengths: FHE-VM offers continuous privacy and real-time interaction at the cost of heavy computation, whereas ZK-coprocessors offer scalability and flexibility at the cost of latency and complexity.

Use Cases and Implications

The advent of programmable privacy unlocks a wealth of new blockchain applications across industries. Below we explore how FHE-VMs and ZK-coprocessors (or hybrids) can empower various domains by enabling privacy-preserving smart contracts and a secure data economy.

Confidential DeFi and Finance

In decentralized finance, privacy can mitigate front-running, protect trading strategies, and satisfy compliance without sacrificing transparency where needed. Confidential DeFi could allow users to interact with protocols without revealing their positions to the world.

  • Private Transactions and Hidden Balances: Using FHE, one can implement confidential token transfers (encrypted ERC-20 balances and transactions) or shielded pools on a blockchain L1. No observer can see how much of a token you hold or transferred, eliminating the risk of targeted attacks based on holdings. ZK proofs can ensure balances stay in sync and no double-spending occurs (similar to Zcash but on smart contract platforms). An example is a confidential AMM (Automated Market Maker) where pool reserves and trades are encrypted on-chain. Arbitrageurs or front-runners cannot exploit the pool because they can’t observe the price slippage until after the trade is settled, reducing MEV. Only after some delay or via an access-controlled mechanism might some data be revealed for audit.

  • MEV-Resistant Auctions and Trading: Miners and bots exploit transaction transparency to front-run trades. With encryption, you could have an encrypted mempool or batch auctions where orders are submitted in ciphertext. Only after the auction clears do trades decrypt. This concept, sometimes called Fair Order Flow, can be achieved with threshold decryption (multiple validators collectively decrypt the batch) or by proving auction outcomes via ZK without revealing individual bids. For instance, a ZK-coprocessor could take a batch of sealed bids off-chain, compute the auction clearing price, and output just that price and winners with proofs. This preserves fairness and privacy of losing bids.

  • Confidential Lending and Derivatives: In DeFi lending, users might not want to reveal the size of their loans or collateral (it can affect market sentiment or invite exploitation). An FHE-VM can maintain an encrypted loan book where each loan’s details are encrypted. Smart contract logic can still enforce rules like liquidation conditions by operating on encrypted health factors. If a loan’s collateral ratio falls below threshold, the contract (with help of ZK proofs) can flag it for liquidation without ever exposing exact values – it might just produce a yes/no flag in plaintext. Similarly, secret derivatives or options positions could be managed on-chain, with only aggregated risk metrics revealed. This could prevent copy trading and protect proprietary strategies, encouraging more institutional participation.

  • Compliant Privacy: Not all financial contexts want total anonymity; sometimes selective disclosure is needed for regulation. With these tools, we can achieve regulated privacy: for example, trades are private to the public, but a regulated exchange can decrypt or receive proofs about certain properties. One could prove via ZK that “this trade did not involve a blacklisted address and both parties are KYC-verified” without revealing identities to the chain. This balance could satisfy Anti-Money Laundering (AML) rules while still keeping user identities and positions confidential to everyone else. FHE could allow an on-chain compliance officer contract to scan encrypted transactions for risk signals (with a decryption key accessible only under court order, for instance).
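The sealed-bid batch auction from the MEV-resistance bullet above can be sketched with commitments. This is an assumed shape: bidders first post hash commitments on-chain, then the off-chain service opens the bids and publishes only the winner and clearing price. Here the service sees the openings directly; a real deployment would add a ZK proof that the result is consistent with the commitments so losing bid values never surface.

```python
import hashlib

def commit_bid(bidder: str, amount: int, salt: str) -> str:
    """On-chain phase 1: each bidder posts only this hash."""
    return hashlib.sha256(f"{bidder}:{amount}:{salt}".encode()).hexdigest()

def clear_auction(commitments, openings):
    """Off-chain phase 2: open bids, run a second-price auction, and reveal
    only (winner, clearing_price). Losing bid amounts stay off-chain."""
    for bidder, (amount, salt) in openings.items():
        assert commit_bid(bidder, amount, salt) == commitments[bidder]
    ranked = sorted(openings.items(), key=lambda kv: kv[1][0], reverse=True)
    winner = ranked[0][0]
    # Second-price rule: winner pays the highest losing bid.
    clearing_price = ranked[1][1][0] if len(ranked) > 1 else ranked[0][1][0]
    return winner, clearing_price

commits = {
    "alice": commit_bid("alice", 100, "s1"),
    "bob":   commit_bid("bob",   80,  "s2"),
    "carol": commit_bid("carol", 95,  "s3"),
}
opens = {"alice": (100, "s1"), "bob": (80, "s2"), "carol": (95, "s3")}
assert clear_auction(commits, opens) == ("alice", 95)
```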

Digital Identity and Personal Data

Identity systems stand to gain significantly from on-chain privacy tech. Currently, putting personal credentials or attributes on a public ledger is impractical due to privacy laws and user reluctance. With FHE and ZK, self-sovereign identity can be realized in a privacy-preserving way:

  • Zero-Knowledge Credentials: Using ZK proofs (already common in some identity projects), a user can prove statements like “I am over 18”, “I have a valid driver’s license”, or “I earn above $50k (for credit scoring)” without revealing any other personal info. ZK-coprocessors can enhance this by handling more complex checks off-chain, e.g. proving a user’s credit score is above a threshold by querying a private credit database in an Axiom-like fashion, outputting only a yes/no to the blockchain.

  • Confidential KYC on DeFi: Imagine a DeFi protocol that by law must ensure users are KYC’d. With FHE-VM, a user’s credentials can be stored encrypted on-chain (or referenced via DID), and a smart contract can perform an FHE computation to verify the KYC info meets requirements. For instance, a contract could homomorphically check that name and SSN in an encrypted user profile match a sanctioned users list (also encrypted), or that the user’s country is not restricted. The contract would only get an encrypted “pass/fail” which can be threshold-decrypted by network validators to a boolean flag. Only the fact that the user is allowed or not is revealed, preserving PII confidentiality and aligning with GDPR principles. This selective disclosure ensures compliance and privacy.

  • Attribute-Based Access and Selective Disclosure: Users could hold a bunch of verifiable credentials (age, citizenship, skills, etc.) as encrypted attributes. They can authorize certain dApps to run computations on them without disclosing everything. For example, a decentralized recruitment DApp could filter candidates by performing searches on encrypted resumes (using FHE) – e.g. count years of experience, check for a certification – and only if a match is found, contact the candidate off-chain. The candidate’s private details remain encrypted unless they choose to reveal. ZK proofs can also let users selectively prove they possess a combination of attributes (e.g. over 21 and within a certain ZIP code) without revealing the actual values.

  • Multi-Party Identity Verification: Sometimes a user’s identity needs to be vetted by multiple parties (say, background check by company A, credit check by company B). With homomorphic and ZK tools, each verifier could contribute an encrypted score or approval, and a smart contract can aggregate these to a final decision without exposing individual contributions. For instance, three agencies provide encrypted “pass/fail” bits, and the contract outputs an approval if all three are passes – the user or relying party only sees the final outcome, not which specific agency might have failed them, preserving privacy of the user’s record at each agency. This can reduce bias and stigma associated with, say, one failed check revealing a specific issue.
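The "three encrypted pass/fail bits, only the final outcome revealed" aggregation above can be illustrated with an additively homomorphic scheme. The sketch below is a toy Paillier implementation with tiny, insecure primes, for illustration only: each agency encrypts a 0/1 approval, the ciphertexts are multiplied (which adds the plaintexts), and the key holder decrypts only the total.

```python
import math
import random

# Toy Paillier keypair -- tiny primes, insecure, illustration only.
p, q = 1117, 1123
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u: int) -> int:
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption helper: lam^-1 mod n

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

def add(c1: int, c2: int) -> int:
    """Homomorphic addition: Enc(a) * Enc(b) mod n^2 = Enc(a + b)."""
    return (c1 * c2) % n2

# Three agencies each submit an encrypted pass(1)/fail(0) bit.
votes = [enc(1), enc(1), enc(1)]
total = votes[0]
for v in votes[1:]:
    total = add(total, v)
approved = dec(total) == 3   # all passed -- without learning who said what
assert approved
```

The key point is that the aggregating contract only ever multiplies ciphertexts; decryption of the sum (e.g. via a threshold key) reveals the count of passes, not the individual verdicts.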

Healthcare and Sensitive Data Sharing

Healthcare data is highly sensitive and regulated, yet combining data from multiple sources can unlock huge value (for research, insurance, personalized medicine). Blockchain could provide a trust layer for data exchange if privacy is solved. Confidential smart contracts could enable new health data ecosystems:

  • Secure Medical Data Exchange: Patients could store references to their medical records on-chain in encrypted form. An FHE-enabled contract could allow a research institution to run analytics on a cohort of patient data without decrypting it. For example, a contract could compute the average efficacy of a drug across encrypted patient outcomes. Only aggregate statistical results come out decrypted (and perhaps only if a minimum number of patients is included, to prevent re-identification). Patients could receive micropayments for contributing their encrypted data to research, knowing that their privacy is preserved because even the blockchain and researchers only see ciphertext or aggregate proofs. This fosters a data marketplace for healthcare that respects privacy.

  • Privacy-Preserving Insurance Claims: Health insurance claims processing could be automated via smart contracts that verify conditions on medical data without exposing the data to the insurer. A claim could include an encrypted diagnosis code and encrypted treatment cost; the contract, using FHE, checks policy rules (e.g. coverage, deductible) on that encrypted data. It could output an approval and payment amount without ever revealing the actual diagnosis to the insurer’s blockchain (only the patient and doctor had the key). ZK proofs might be used to show that the patient’s data came from a certified hospital’s records (using something like Axiom to verify a hospital’s signature or record inclusion) without revealing the record itself. This ensures patient privacy while preventing fraud.

  • Genomic and Personal Data Computation: Genomic data is extremely sensitive (it’s literally one’s DNA blueprint). However, analyzing genomes can provide valuable health insights. Companies could use FHE-VM to perform computations on encrypted genomes uploaded by users. For instance, a smart contract could run a gene-environment risk model on encrypted genomic data and encrypted environmental data (from wearables perhaps), outputting a risk score that only the user can decrypt. The logic (maybe a polygenic risk score algorithm) is coded in the contract and runs homomorphically, so the genomic data never appears in plain. This way, users get insights without giving companies raw DNA data – mitigating both privacy and data ownership concerns.

  • Epidemiology and Public Health: During situations like pandemics, sharing data is vital for modeling disease spread, but privacy laws can hinder data sharing. ZK coprocessors could allow public health authorities to submit queries like “How many people in region X tested positive in last 24h?” to a network of hospitals’ data via proofs. Each hospital keeps patient test records off-chain but can prove to the authority’s contract the count of positives without revealing who. Similarly, contact tracing could be done by matching encrypted location trails: contracts can compute intersections of encrypted location histories of patients to identify hotspots, outputting only the hotspot locations (and perhaps an encrypted list of affected IDs that only health dept can decrypt). The raw location trails of individuals remain private.

Data Marketplaces and Collaboration

The ability to compute on data without revealing it opens new business models around data sharing. Entities can collaborate on computations knowing their proprietary data will not be exposed:

  • Secure Data Marketplaces: Sellers can make data available in encrypted form on a blockchain marketplace. Buyers can pay to run specific analytics or machine learning models on the encrypted dataset via a smart contract, getting either the trained model or aggregated results. The seller’s raw data is never revealed to the buyer or the public – the buyer might only receive a model (which still might leak some info in weights, but techniques like differential privacy or controlling output granularity can mitigate this). ZK proofs can ensure the buyer that the computation was done correctly over the promised dataset (e.g. the seller can’t cheat by running the model on dummy data because the proof ties it to the committed encrypted dataset). This scenario encourages data sharing: for instance, a company could monetize user behavior data by allowing approved algorithms to run on it under encryption, without giving away the data itself.

  • Federated Learning & Decentralized AI: In decentralized machine learning, multiple parties (e.g. different companies or devices) want to jointly train a model on their combined data without sharing data with each other. FHE-VMs excel here: they can enable federated learning where each party’s model updates are homomorphically aggregated by a contract. Because the updates are encrypted, no participant learns others’ contributions. The contract could even perform parts of the training loop (like gradient descent steps) on-chain under encryption, producing an updated model that only authorized parties can decrypt. ZK can complement this by proving that each party’s update was computed following the training algorithm (preventing a malicious participant from poisoning the model). This means a global model can be trained with full auditability on-chain, yet the training data of each contributor remains private. Use cases include jointly training fraud detection models across banks or improving AI assistants using data from many users without centralizing the raw data.

  • Cross-Organizational Analytics: Consider two companies that want to find the overlap between their customer bases for a partnership campaign, without exposing their entire customer lists to each other. They could each encrypt their customer ID lists and upload a commitment. An FHE-enabled contract can compute the intersection on the encrypted sets (using techniques like private set intersection via FHE). The result could be an encrypted list of common customer IDs that only a mutually trusted third-party (or the customers themselves, via some mechanism) can decrypt. Alternatively, a ZK approach: one party proves to the other in zero-knowledge that “we have N customers in common and here is an encryption of those IDs” with a proof that the encryption indeed corresponds to common entries. This way, they can proceed with a campaign to those N customers without ever exchanging their full lists in plaintext. Similar scenarios: computing supply chain metrics across competitors without revealing individual supplier details, or banks collating credit info without sharing full client data.

  • Secure Multi-Party Computation (MPC) on Blockchain: FHE and ZK essentially bring MPC concepts on-chain. Complex business logic spanning multiple organizations can be encoded in a smart contract such that each org’s inputs are secret-shared or encrypted. The contract (as an MPC facilitator) produces outputs like profit splits, cost calculations, or joint risk assessments that everyone can trust. For example, suppose several energy companies want to settle a marketplace of power trading. They could feed their encrypted bids and offers into a smart contract auction; the contract computes the clearing prices and allocations on encrypted bids, and outputs each company’s allocation and cost just to that company (via encryption to their public key). No company sees others’ bids, protecting competitive info, but the auction result is fair and verifiable. This combination of blockchain transparency and MPC privacy could transform industry consortia that currently rely on trusted third parties.
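The energy-auction example can be made concrete. The clearing logic below is shown in the clear for readability; in the FHE setting the very same sorts and comparisons would be evaluated over encrypted bids and offers. The uniform-price rule used here (midpoint of the marginal matched pair) is one common choice among several, assumed for illustration:

```python
def clear_auction(bids, offers):
    """Uniform-price double auction: match highest bids with lowest offers.

    Each entry is a (participant_id, price) pair, with unit quantities for
    simplicity. Under FHE, these comparisons run over ciphertexts instead.
    """
    bids = sorted(bids, key=lambda b: -b[1])     # highest willingness-to-pay first
    offers = sorted(offers, key=lambda o: o[1])  # cheapest supply first
    matched = []
    for (bid_id, bid_px), (ask_id, ask_px) in zip(bids, offers):
        if bid_px < ask_px:
            break                                # no more profitable matches
        matched.append((bid_id, ask_id))
    if not matched:
        return None, []
    # Clearing price: midpoint of the last (marginal) matched bid/offer pair.
    k = len(matched) - 1
    price = (bids[k][1] + offers[k][1]) / 2
    return price, matched

bids = [("B1", 52), ("B2", 48), ("B3", 40)]
offers = [("S1", 35), ("S2", 45), ("S3", 50)]
price, matched = clear_auction(bids, offers)
print(price, matched)  # 46.5 [('B1', 'S1'), ('B2', 'S2')]
```

In the on-chain version, each company would receive its own allocation encrypted to its public key, so only the aggregate outcome is public.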

Decentralized Machine Learning (ZKML and FHE-ML)

Bringing machine learning to blockchains in a verifiable and private way is an emerging frontier:

  • Verifiable ML Inference: Using ZK proofs, one can prove that “a machine learning model f, when given input x, produces output y” without revealing either x (if it’s private data) or the inner workings of f (if the model weights are proprietary). This is crucial for AI services on blockchain – e.g., a decentralized AI oracle that provides predictions or classifications. A ZK-coprocessor can run the model off-chain (since models can be large and expensive to evaluate) and post a proof of the result. For instance, an oracle could prove the statement “The satellite image provided shows at least 50% tree cover” to support a carbon credit contract, without revealing the satellite image or possibly even the model. This is known as ZKML and projects are working on optimizing circuit-friendly neural nets. It ensures the integrity of AI outputs used in smart contracts (no cheating or arbitrary outputs) and can preserve confidentiality of the input data and model parameters.

  • Training with Privacy and Auditability: Training an ML model is even more computation-intensive, but if achievable, it would allow blockchain-based model marketplaces. Multiple data providers could contribute to training a model under FHE so that the training algorithm runs on encrypted data. The result might be an encrypted model that only the buyer can decrypt. Throughout training, ZK proofs could be supplied periodically to prove that the training was following the protocol (preventing a malicious trainer from inserting a backdoor, for example). While fully on-chain ML training is far off given costs, a hybrid approach could use off-chain compute with ZK proofs for critical parts. One could imagine a decentralized Kaggle-like competition where participants train models on private datasets and submit ZK proofs of the model’s accuracy on encrypted test data to determine a winner – all without revealing the datasets or the test data.

  • Personalized AI and Data Ownership: With these technologies, users could retain ownership of their personal data and still benefit from AI. For example, a user’s mobile device could use FHE to encrypt their usage data and send it to an analytics contract which computes a personalized AI model (like a recommendation model) just for them. The model is encrypted and only the user’s device can decrypt and use it locally. The platform (maybe a social network) never sees the raw data or model, but the user gets the AI benefit. If the platform wants aggregated insights, it could request ZK proofs of certain aggregate patterns from the contract without accessing individual data.
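On “circuit-friendly” models: ZK proof systems work over integer or finite-field arithmetic, so a floating-point model is typically quantized to fixed-point before being expressed as a circuit. A toy single-neuron example of that transformation (illustrative only — real ZKML frameworks handle scaling, rescaling, and nonlinearities far more carefully):

```python
# ZK circuits cannot natively express floats, so weights and inputs are
# scaled to integers (fixed-point) and the whole computation runs in
# integer arithmetic — exactly what a circuit can encode.

SCALE = 1 << 8  # 8 fractional bits of fixed-point precision

def quantize(xs):
    return [round(x * SCALE) for x in xs]

def relu(x):
    return x if x > 0 else 0

# A single dense layer y = relu(w . x + b), float reference:
w, b = [0.5, -1.25, 2.0], 0.75
x = [1.0, 0.5, 0.25]
y_float = relu(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Same layer in pure integer arithmetic (the circuit-friendly form).
# Products of two scaled values carry scale SCALE^2, so the bias is
# scaled accordingly and the result rescaled at the end.
wq, xq = quantize(w), quantize(x)
bq = round(b * SCALE * SCALE)
acc = sum(wi * xi for wi, xi in zip(wq, xq)) + bq
y_fixed = relu(acc) / (SCALE * SCALE)

print(y_float, y_fixed)  # both 1.125 for these dyadic example values
```

A ZKML prover would then prove that the integer computation was executed faithfully against committed (possibly hidden) weights and inputs.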

Additional Areas

  • Gaming: On-chain games often struggle with hiding secret information (e.g. hidden card hands, fog-of-war in strategy games). FHE can enable hidden state games where the game logic runs on encrypted state. For example, a poker game contract could shuffle and deal encrypted cards; players get decryptions of their own cards, but the contract and others only see ciphertext. Betting logic can use ZK proofs to ensure a player isn’t bluffing about an action (or to reveal the winning hand at the end in a verifiably fair way). Similarly, random seeds for NFT minting or game outcomes can be generated and proven fair without exposing the seed (preventing manipulation). This can greatly enhance blockchain gaming, allowing it to support the same dynamics as traditional games.

  • Voting and Governance: DAOs could use privacy tech for secret ballots on-chain, eliminating vote buying and pressure. FHE-VM could tally votes that are cast in encrypted form, and only final totals are decrypted. ZK proofs can assure each vote was valid (came from an eligible voter, who hasn’t voted twice) without revealing who voted for what. This provides verifiability (everyone can verify the proofs and tally) while keeping individual votes secret – crucial for unbiased governance.

  • Secure Supply Chain and IoT: In supply chains, partners might want to share proof of certain properties (origin, quality metrics) without exposing full details to competitors. For instance, an IoT sensor on a food shipment could continuously send encrypted temperature data to a blockchain. A contract could use FHE to check if the temperature stayed in a safe range throughout transit. If a threshold was exceeded, it can trigger an alert or penalty, but it doesn’t have to reveal the entire temperature log publicly – maybe only a proof or an aggregate like “90th percentile temp”. This builds trust in supply chain automation while respecting confidentiality of process data.
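The encrypted-tally idea from the voting bullet can be demonstrated with textbook Paillier encryption, whose additive homomorphism (Enc(a)·Enc(b) mod n² decrypts to a+b) lets anyone combine ballots without decrypting them. The primes below are toy-sized for readability, not secure; production FHE-VMs use different schemes (e.g. TFHE), but the tallying principle is the same:

```python
from math import gcd, lcm
import random

# Textbook Paillier keypair with tiny demo primes (NOT secure sizes).
p, q = 61, 53
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Five voters each cast an encrypted 0/1 ballot; the contract multiplies
# the ciphertexts, and only the final tally is ever decrypted.
ballots = [1, 0, 1, 1, 0]
tally_ct = 1
for v in ballots:
    tally_ct = (tally_ct * encrypt(v)) % n2
print(decrypt(tally_ct))  # 3 — individual ballots stay encrypted
```

ZK proofs would be attached to each ballot to show it encrypts 0 or 1 (and comes from an eligible voter), preventing a malicious voter from encrypting, say, 1000.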

Each of these use cases leverages the core ability: compute on or verify data without revealing the data. This capability can fundamentally change how we handle sensitive information in decentralized systems. It reduces the trade-off between transparency and privacy that has limited blockchain adoption in areas dealing with private data.

Conclusion

Blockchain technology is entering a new era of programmable privacy, where data confidentiality and smart contract functionality go hand in hand. The paradigms of FHE-VM and ZK-coprocessors, while technically distinct, both strive to expand the scope of blockchain applications by decoupling what we can compute from what we must reveal.

Fully Homomorphic Encryption Virtual Machines keep computations on-chain and encrypted, preserving decentralization and composability but demanding advances in efficiency. Zero-Knowledge coprocessors shift heavy lifting off-chain, enabling virtually unbounded computation under cryptographic guarantees, and are already proving their worth in scaling and enhancing Ethereum. The choice between them (and hybrids thereof) will depend on the use case: if real-time interaction with private state is needed, an FHE approach might be more suitable; if extremely complex computation or integration with existing code is required, a ZK-coprocessor might be the way to go. In many cases, they are complementary – indeed, we see ZK proofs bolstering FHE integrity, and FHE potentially helping ZK by handling private data for provers.

For developers, these technologies will introduce new design patterns. We will think in terms of encrypted variables and proof verification as first-class elements of dApp architecture. Tooling is rapidly evolving: high-level languages and SDKs are abstracting away cryptographic details (e.g. Zama’s libraries making FHE types as easy as native types, or RISC Zero’s templates for proof requests). In a few years, writing a confidential smart contract could feel almost as straightforward as writing a regular one, just with privacy “built-in” by default.

The implications for the data economy are profound. Individuals and enterprises will be more willing to put data or logic on-chain when they can control its visibility. This can unlock cross-organization collaborations, new financial products, and AI models that were previously untenable due to privacy concerns. Regulators, too, may come to embrace these techniques as they allow compliance checks and audits via cryptographic means (e.g. proving taxes are paid correctly on-chain without exposing all transactions).

We are still in the early days – current FHE-VM prototypes have performance limits, and ZK proofs, while much faster than before, can still be a bottleneck for extremely complex tasks. But continuous research and engineering efforts (including specialized hardware, as evidenced by companies like Optalysys pushing optical FHE acceleration) are quickly eroding these barriers. The funding pouring into this space (e.g. Zama’s unicorn status, Paradigm’s investment in Axiom) underscores a strong belief that privacy features will be as fundamental to Web3 as transparency was to Web1/2.

In conclusion, programmable privacy via FHE-VMs and ZK-coprocessors heralds a new class of dApps that are trustless, decentralized, and confidential. From DeFi trades that reveal no details, to health research that protects patient data, to machine learning models trained across the world without exposing raw data – the possibilities are vast. As these technologies mature, blockchain platforms will no longer force the trade-off between utility and privacy, enabling broader adoption in industries that require confidentiality. The future of Web3 is one where *users and organizations can confidently transact and compute with sensitive data on-chain, knowing the blockchain will verify integrity while keeping their secrets safe*.

Sources: The information in this report is drawn from technical documentation and recent research blogs of leading projects in this space, including Cypher’s and Zama’s FHEVM documentation, detailed analyses from Trail of Bits on Axiom’s circuits, RISC Zero’s developer guides and blog posts, as well as industry articles highlighting use cases of confidential blockchain tech. These sources and more have been cited throughout to provide further reading and evidence for the described architectures and applications.

Plume Network and Real-World Assets (RWA) in Web3

· 77 min read

Plume Network: Overview and Value Proposition

Plume Network is a blockchain platform purpose-built for Real-World Assets (RWA). It is a public, Ethereum-compatible chain designed to tokenize a wide range of real-world financial assets – from private credit and real estate to carbon credits and even collectibles – and make them as usable as native crypto assets. In other words, Plume doesn’t just put assets on-chain; it allows users to hold and utilize tokenized real assets in decentralized finance (DeFi) – enabling familiar crypto activities like staking, lending, borrowing, swapping, and speculative trading on assets that originate in traditional finance.

The core value proposition of Plume is to bridge TradFi and DeFi by turning traditionally illiquid or inaccessible assets into programmable, liquid tokens. By integrating institutional-grade assets (e.g. private credit funds, ETFs, commodities) with DeFi infrastructure, Plume aims to make high-quality investments – which were once limited to large institutions or specific markets – permissionless, composable, and a click away for crypto users. This opens the door for crypto participants to earn “real yield” backed by stable real-world cash flows (such as loan interest, rental income, bond yields, etc.) rather than relying on inflationary token rewards. Plume’s mission is to drive “RWA Finance (RWAfi)”, creating a transparent and open financial system where anyone can access assets like private credit, real estate debt, or commodities on-chain, and use them freely in novel ways.

In summary, Plume Network serves as an “on-chain home for real-world assets”, offering a full-stack ecosystem that transforms off-chain assets into globally accessible financial tools with true crypto-native utility. Users can stake stablecoins to earn yields from top fund managers (Apollo, BlackRock, Blackstone, etc.), loop and leverage RWA-backed tokens as collateral, and trade RWAs as easily as ERC-20 tokens. By doing so, Plume stands out as a platform striving to make alternative assets more liquid and programmable, bringing fresh capital and investment opportunities into Web3 without sacrificing transparency or user experience.

Technology and Architecture

Plume Network is implemented as an EVM-compatible blockchain with a modular Layer-2 architecture. Under the hood, Plume operates similarly to an Ethereum rollup (comparable to Arbitrum’s technology), utilizing Ethereum for data availability and security. Every transaction on Plume is eventually batch-posted to Ethereum, which means users pay a small extra fee to cover the cost of publishing calldata on Ethereum. This design leverages Ethereum’s robust security while allowing Plume to have its own high-throughput execution environment. Plume runs a sequencer that aggregates transactions and commits them to Ethereum periodically, giving the chain faster execution and lower fees for RWA use cases while remaining anchored to Ethereum for trust and finality.
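As a rough illustration of this fee structure, a rollup transaction fee decomposes into an L2 execution component and an L1 calldata-posting component. All numbers below are placeholder assumptions for illustration, not Plume’s actual fee schedule:

```python
# Illustrative only: how an L2 fee splits into execution cost plus the cost
# of publishing the transaction's calldata on Ethereum. All parameters are
# placeholder assumptions.

def l2_total_fee(l2_gas_used, l2_gas_price_gwei,
                 calldata_bytes, l1_gas_per_byte, l1_gas_price_gwei):
    execution_fee = l2_gas_used * l2_gas_price_gwei                   # paid to the L2
    data_fee = calldata_bytes * l1_gas_per_byte * l1_gas_price_gwei   # covers L1 posting
    return execution_fee + data_fee  # in gwei

# A simple transfer: cheap L2 execution, but its calldata must still be
# published on Ethereum, so the L1 data component dominates when L1 is busy.
fee = l2_total_fee(l2_gas_used=21_000, l2_gas_price_gwei=0.01,
                   calldata_bytes=120, l1_gas_per_byte=16, l1_gas_price_gwei=30)
print(fee)  # ~57,810 gwei, of which ~57,600 is the L1 data fee
```

This is why rollups (including, reportedly, Arbitrum-style designs like Plume’s) charge a small L1 surcharge on every transaction even when L2 execution itself is nearly free.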

Because Plume is EVM-compatible, developers can deploy Solidity smart contracts on Plume just as they would on Ethereum, with almost no changes. The chain supports the standard Ethereum RPC methods and Solidity operations, with only minor differences (e.g. Plume’s block number and timestamp semantics mirror Arbitrum’s conventions due to the Layer-2 design). In practice, this means Plume can easily integrate existing DeFi protocols and developer tooling. The Plume docs note that cross-chain messaging is supported between Ethereum (the “parent” chain) and Plume (the L2), enabling assets and data to move between the chains as needed.

Notably, Plume describes itself as a “modular blockchain” optimized for RWA finance. The modular approach is evident in its architecture: it has dedicated components for bridging assets (called Arc for bringing anything on-chain), for omnichain yield routing (SkyLink) across multiple blockchains, and for on-chain data feeds (Nexus, an “onchain data highway”). This suggests Plume is building an interconnected system where real-world asset tokens on Plume can interact with liquidity on other chains and where off-chain data (like asset valuations, interest rates, etc.) is reliably fed on-chain. Plume’s infrastructure also includes a custom wallet called Plume Passport (the “RWAfi Wallet”) which likely handles identity/AML checks necessary for RWA compliance, and a native stablecoin (pUSD) for transacting in the ecosystem.

Importantly, Plume’s current iteration is often called a Layer-2 or rollup chain – it is built atop Ethereum for security. However, the team has hinted at ambitious plans to evolve the tech further. Plume’s CTO noted that they started as a modular L2 rollup but are now pushing “down the stack” toward a fully sovereign Layer-1 architecture, optimizing a new chain from scratch with high performance, privacy features “comparable to Swiss banks,” and a novel crypto-economic security model to secure the next trillion dollars on-chain. While specifics are scant, this suggests that over time Plume may transition to a more independent chain or incorporate advanced features like FHE (Fully Homomorphic Encryption) or zk-proofs (the mention of zkTLS and privacy) to meet institutional requirements. For now, though, Plume’s mainnet leverages Ethereum’s security and EVM environment to rapidly onboard assets and users, providing a familiar but enhanced DeFi experience for RWAs.

Tokenomics and Incentives

PLUME ($PLUME) is the native utility token of the Plume Network. The $PLUME token is used to power transactions, governance, and network security on Plume. As the gas token, $PLUME is required to pay transaction fees on the Plume chain (similar to how ETH is gas on Ethereum). This means all operations – trading, staking, deploying contracts – consume $PLUME for fees. Beyond gas, $PLUME has several utility and incentive roles:

  • Governance: $PLUME holders can participate in governance decisions, presumably voting on protocol parameters, upgrades, or asset onboarding decisions.
  • Staking/Security: The token can be staked, which likely supports the network’s validator or sequencer operations. Stakers help secure the chain and in return earn staking rewards in $PLUME. (Even as a rollup, Plume may use a proof-of-stake mechanism for its sequencer or for eventual decentralization of block production).
  • Real Yield and DeFi utility: Plume’s docs mention that users can use $PLUME across dApps to “unlock real yield”. This suggests that holding or staking $PLUME might confer higher yields in certain RWA yield farms or access to exclusive opportunities in the ecosystem.
  • Ecosystem Incentives: $PLUME is also used to reward community engagement – for example, users might earn tokens via community quests, referral programs, testnet participation (such as the “Take Flight” developer program or the testnet “Goons” NFTs). This incentive design is meant to bootstrap network effects by distributing tokens to those who actively use and grow the platform.

Token Supply & Distribution: Plume has a fixed total supply of 10 billion $PLUME tokens. At the Token Generation Event (mainnet launch), the initial circulating supply is 20% of the total (i.e. 2 billion tokens). The allocation is heavily weighted toward community and ecosystem development:

  • 59% to Community, Ecosystem & Foundation – this large share is reserved for grants, liquidity incentives, community rewards, and a foundation pool to support the ecosystem’s long-term growth. This ensures a majority of tokens are available to bootstrap usage (and potentially signals commitment to decentralization over time).
  • 21% to Early Backers – these tokens are allocated to strategic investors and partners who funded Plume’s development. (As we’ll see, Plume raised capital from prominent crypto funds; this allocation likely vests over time as per investor agreements.)
  • 20% to Core Contributors (Team) – allocated to the founding team and core developers driving Plume. This portion incentivizes the team and aligns them with the network’s success, typically vesting over a multi-year period.
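A quick arithmetic check of the figures above (59% + 21% + 20% covers the full 10 billion supply, with 20% circulating at TGE):

```python
# Sanity-checking the stated supply and allocation figures.
TOTAL_SUPPLY = 10_000_000_000  # 10B PLUME

allocations = {
    "Community, Ecosystem & Foundation": 0.59,
    "Early Backers": 0.21,
    "Core Contributors (Team)": 0.20,
}
assert abs(sum(allocations.values()) - 1.0) < 1e-9  # shares cover 100% of supply

for name, pct in allocations.items():
    print(f"{name}: {pct * TOTAL_SUPPLY:,.0f} PLUME")

circulating_at_tge = 0.20 * TOTAL_SUPPLY
print(f"Circulating at TGE: {circulating_at_tge:,.0f} PLUME")  # 2,000,000,000
```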

Besides $PLUME, Plume’s ecosystem includes a stablecoin called Plume USD (pUSD). pUSD is designed as the RWAfi ecosystem stablecoin for Plume. It serves as the unit of account and primary trading/collateral currency within Plume’s DeFi apps. Uniquely, pUSD is fully backed 1:1 by USDC – effectively a wrapped USDC for the Plume network. This design choice (wrapping USDC) was made to reduce friction for traditional institutions: if an organization is already comfortable holding and minting USDC, they can seamlessly mint and use pUSD on Plume under the same frameworks. pUSD is minted and redeemed natively on both Ethereum and Plume, meaning users or institutions can deposit USDC on Ethereum and receive pUSD on Plume, or vice versa. By tying pUSD 1:1 to USDC (and ultimately to USD reserves), Plume ensures its stablecoin remains fully collateralized and liquid, which is critical for RWA transactions (where predictability and stability of the medium of exchange are required). In practice, pUSD provides a common stable liquidity layer for all RWA apps on Plume – whether it’s buying tokenized bonds, investing in RWA yield vaults, or trading assets on a DEX, pUSD is the stablecoin that underpins value exchange.
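A minimal model of the 1:1 wrap described above (a hypothetical sketch — the class name and mechanics are illustrative, not Plume’s actual contracts). The key invariant is that pUSD supply always equals the USDC locked in reserve:

```python
class WrappedStable:
    """Toy model of a 1:1 wrapped stablecoin like pUSD-over-USDC.

    Hypothetical sketch for illustration. Invariant: token supply
    always equals the amount of the backing asset held in reserve.
    """
    def __init__(self):
        self.usdc_locked = 0
        self.pusd_supply = 0

    def mint(self, usdc_amount):
        assert usdc_amount > 0
        self.usdc_locked += usdc_amount   # deposit USDC into the reserve
        self.pusd_supply += usdc_amount   # issue pUSD 1:1
        return usdc_amount

    def redeem(self, pusd_amount):
        assert 0 < pusd_amount <= self.pusd_supply
        self.pusd_supply -= pusd_amount   # burn pUSD
        self.usdc_locked -= pusd_amount   # release USDC 1:1
        return pusd_amount

    def fully_backed(self):
        return self.pusd_supply == self.usdc_locked

w = WrappedStable()
w.mint(1_000_000)     # institution deposits 1M USDC, receives 1M pUSD
w.redeem(250_000)     # later redeems 250k pUSD back to USDC
print(w.pusd_supply, w.usdc_locked, w.fully_backed())  # 750000 750000 True
```

Because mint and redeem move the reserve and the supply in lockstep, the wrap stays fully collateralized by construction, which is the property that makes a USDC wrap low-friction for institutions.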

Overall, Plume’s tokenomics aim to balance network utility with growth incentives. $PLUME ensures the network is self-sustaining (through fees and staking security) and community-governed, while large allocations to ecosystem funds and airdrops help drive early adoption. Meanwhile, pUSD anchors the financial ecosystem in a trustworthy stable asset, making it easier for traditional capital to enter Plume and for DeFi users to measure returns on real-world investments.

Founding Team and Backers

Plume Network was founded in 2022 by a trio of entrepreneurs with backgrounds in crypto and finance: Chris Yin (CEO), Eugene Shen (CTO), and Teddy Pornprinya (CBO). Chris Yin is described as the visionary product leader of the team, driving the platform’s strategy and thought leadership in the RWA space. Eugene Shen leads the technical development as CTO (previously having worked on modular blockchain architectures, given his note about “customizing geth” and building from the ground up). Teddy Pornprinya, as Chief Business Officer, spearheads partnerships, business development, and marketing – he was instrumental in onboarding dozens of projects into the Plume ecosystem early on. Together, the founders identified the gap in the market for an RWA-optimized chain and quit their prior roles to build Plume, officially launching the project roughly a year after conception.

Plume has attracted significant backing from both crypto-native VCs and traditional finance giants, signaling strong confidence in its vision:

  • In May 2023, Plume raised a $10 million seed round led by Haun Ventures (the fund of former a16z partner Katie Haun). Other participants in the seed included Galaxy Digital, Superscrypt (Temasek’s crypto arm), A Capital, SV Angel, Portal Ventures, and Reciprocal Ventures. This diverse investor base gave Plume a strong start, combining crypto expertise and institutional connections.

  • By late 2024, Plume secured a $20 million Series A funding to accelerate its development. This round was backed by top-tier investors such as Brevan Howard Digital, Haun Ventures (returning), Galaxy, and Faction VC. The inclusion of Brevan Howard, one of the world’s largest hedge funds with a dedicated crypto arm, is especially notable and underscored the growing Wall Street interest in RWAs on blockchain.

  • In April 2025, Apollo Global Management – one of the world’s largest alternative asset managers – made a strategic investment in Plume. Apollo’s investment was a seven-figure (USD) amount intended to help Plume scale its infrastructure and bring more traditional financial products on-chain. Apollo’s involvement is a strong validation of Plume’s approach: Christine Moy, Apollo’s Head of Digital Assets, said their investment “underscores Apollo’s focus on technologies that broaden access to institutional-quality products… Plume represents a new kind of infrastructure focused on digital asset utility, investor engagement, and next-generation financial solutions”. In other words, Apollo sees Plume as key infrastructure to make private markets more liquid and accessible via blockchain.

  • Another strategic backer is YZi Labs, formerly Binance Labs. In early 2025, YZi (Binance’s venture arm rebranded) announced a strategic investment in Plume Network as well. YZi Labs highlighted Plume as a “cutting-edge Layer-2 blockchain designed for scaling real world assets”, and their support signals confidence that Plume can bridge TradFi and DeFi at a large scale. (It’s worth noting Binance Labs’ rebranding to YZi Labs indicates continuity of their investments in core infrastructure projects like Plume.)

  • Plume’s backers also include traditional fintech and crypto institutions through partnerships (detailed below) – for example, Mercado Bitcoin (Latin America’s largest digital asset platform) and Anchorage Digital (a regulated crypto custodian) are ecosystem partners, effectively aligning themselves with Plume’s success. Additionally, Grayscale Investments – the world’s largest digital asset manager – has taken notice: in April 2025, Grayscale officially added $PLUME to its list of assets “Under Consideration” for future investment products. Being on Grayscale’s radar means Plume could potentially be included in institutional crypto trusts or ETFs, a major nod of legitimacy for a relatively new project.

In summary, Plume’s funding and support come from a who’s-who of top investors: premier crypto VCs (Haun, Galaxy, a16z via GFI’s backing of Goldfinch, etc.), hedge funds and TradFi players (Brevan Howard, Apollo), and corporate venture arms (Binance/YZi). This mix of backers brings not just capital but also strategic guidance, regulatory expertise, and connections to real-world asset originators. It has also provided Plume with war-chest funding (over $30M across the seed and Series A rounds) to build out its specialized blockchain and onboard assets. The strong backing serves as a vote of confidence that Plume is positioned as a leading platform in the fast-growing RWA sector.

Ecosystem Partners and Integrations

Plume has been very active in forging ecosystem partnerships across both crypto and traditional finance, assembling a broad network of integrations even before (and immediately upon) mainnet launch. These partners provide the assets, infrastructure, and distribution that make Plume’s RWA ecosystem functional:

  • Nest Protocol (Nest Credit): An RWA yield platform that operates on Plume, allowing users to deposit stablecoins into vaults and receive yield-bearing tokens backed by real-world assets. Nest is essentially a DeFi frontend for RWA yields, offering products like tokenized U.S. Treasury Bills, private credit, mineral rights, etc., but abstracting away the complexity so they “feel like crypto.” Users swap USDC (or pUSD) for Nest-issued tokens that are fully backed by regulated, audited assets held by custodians. Nest works closely with Plume – a testimonial from Anil Sood of Anemoy (a partner) highlights that “partnering with Plume accelerates our mission to bring institutional-grade RWAs to every investor… This collaboration is a blueprint for the future of RWA innovation.”. In practice, Nest is Plume’s native yield marketplace (sometimes called “Nest Yield” or RWA staking platform), and many of Plume’s big partnerships funnel into Nest vaults.

  • Mercado Bitcoin (MB): The largest digital asset exchange in Latin America (based in Brazil) has partnered with Plume to tokenize ~$40 million of Brazilian real-world assets. This initiative, announced in Feb 2025, involves MB using Plume’s blockchain to issue tokens representing Brazilian asset-backed securities, consumer credit portfolios, corporate debt, and accounts receivable. The goal is to connect global investors with yield-bearing opportunities in Brazil’s economy – effectively opening up Brazilian credit markets to on-chain investors worldwide through Plume. These Brazilian RWA tokens will be available from day one of Plume’s mainnet on the Nest platform, providing stable on-chain returns backed by Brazilian small-business loans and credit receivables. This partnership is notable because it gives Plume a geographic reach (LATAM) and a pipeline of emerging-market assets, showcasing how Plume can serve as a hub connecting regional asset originators to global liquidity.

  • Superstate: Superstate is a fintech startup founded by Robert Leshner (former founder of Compound), focused on bringing regulated U.S. Treasury fund products on-chain. In 2024, Superstate launched a tokenized U.S. Treasury fund (approved as a 1940 Act mutual fund) targeted at crypto users. Plume was chosen by Superstate to power its multi-chain expansion. In practice, this means Superstate’s tokenized T-bill fund (which offers stable yield from U.S. government bonds) is being made available on Plume, where it can be integrated into Plume’s DeFi ecosystem. Leshner himself said: “by expanding to Plume – the unique RWAfi chain – we can demonstrate how purpose-built infrastructure can enable great new use-cases for tokenized assets. We’re excited to build on Plume.”. This indicates Superstate will deploy its fund tokens (e.g., maybe an on-chain share of a Treasuries fund) on Plume, allowing Plume users to hold or use them in DeFi (perhaps as collateral for borrowing, or in Nest vaults for auto-yield). It is a strong validation that Plume’s chain is seen as a preferred home for regulated asset tokens like Treasuries.

  • Ondo Finance: Ondo is a well-known DeFi project that pivoted into the RWA space by offering tokenized bonds and yield products (notably, Ondo’s OUSG token, which represents shares in a short-term U.S. Treasury fund, and USDY, representing an interest-bearing USD deposit product). Ondo is listed among Plume’s ecosystem partners, implying a collaboration where Ondo’s yield-bearing tokens (like OUSG, USDY) can be used on Plume. In fact, Ondo’s products align closely with Plume’s goals: Ondo established legal vehicles (SPVs) to ensure compliance, and its OUSG token is backed by BlackRock’s tokenized money market fund (BUIDL), providing ~4.5% APY from Treasuries. By integrating Ondo, Plume gains blue-chip RWA assets like U.S. Treasuries on-chain. Indeed, as of late 2024, Ondo’s RWA products had a market value around $600+ million, so bridging them to Plume adds significant TVL. This synergy likely allows Plume users to swap into Ondo’s tokens or include them in Nest vaults for composite strategies.

  • Centrifuge: Centrifuge is a pioneer in RWA tokenization (operating its own Polkadot parachain for RWA pools). Plume’s site lists Centrifuge as a partner, suggesting collaboration or integration. This could mean that Centrifuge’s pools of assets (trade finance, real estate bridge loans, etc.) might be accessible from Plume, or that Centrifuge will use Plume’s infrastructure for distribution. For example, Plume’s SkyLink omnichain yield might route liquidity from Plume into Centrifuge pools on Polkadot, or Centrifuge could tokenize certain assets directly onto Plume for deeper DeFi composability. Given Centrifuge leads the private credit RWA category with ~$409M TVL in its pools, its participation in Plume’s ecosystem is significant. It indicates an industry-wide move toward interoperability among RWA platforms, with Plume acting as a unifying layer for RWA liquidity across chains.

  • Credbull: Credbull is a private credit fund platform that partnered with Plume to launch a large tokenized credit fund. According to CoinDesk, Credbull is rolling out up to a $500M private credit fund on Plume, offering a fixed high yield to on-chain investors. This likely involves packaging private credit (loans to mid-sized companies or other credit assets) into a vehicle where on-chain stablecoin holders can invest for a fixed return. The significance is twofold: (1) It adds a huge pipeline of yield assets (~half a billion dollars) to Plume’s network, and (2) it exemplifies how Plume is attracting real asset managers to originate products on its chain. Combined with other pipeline assets, Plume said it planned to tokenize about $1.25 billion worth of RWAs by late 2024, including Credbull’s fund, plus $300M of renewable energy assets (solar farms via Plural Energy), ~$120M of healthcare receivables (Medicaid-backed invoices), and even oil & gas mineral rights. This large pipeline shows that at launch, Plume isn’t empty – it comes with tangible assets ready to go.

  • Goldfinch: Goldfinch is a decentralized credit protocol that provided undercollateralized loans to fintech lenders globally. In 2023, Goldfinch pivoted to “Goldfinch Prime”, targeting accredited and institutional investors by offering on-chain access to top private credit funds. Plume and Goldfinch announced a strategic partnership to bring Goldfinch Prime’s offerings to Plume’s Nest platform, effectively marrying Goldfinch’s institutional credit deals with Plume’s user base. Through this partnership, institutional investors on Plume can stake stablecoins into funds managed by Apollo, Golub Capital, Ares, Stellus, and other leading private credit managers via Goldfinch’s integration. The ambition is massive: collectively these managers represent over $1 trillion in assets, and the partnership aims to eventually make portions of that available on-chain. In practical terms, a user on Plume could invest in a diversified pool that earns yield from hundreds of real-world loans made by these credit funds, all tokenized through Goldfinch Prime. This not only enhances Plume’s asset diversity but also underscores Plume’s credibility to partner with top-tier RWA platforms.

  • Infrastructure Partners (Custody and Connectivity): Plume has also integrated key infrastructure players. Anchorage Digital, a regulated crypto custodian bank, is a partner – Anchorage’s involvement likely means institutional users can custody their tokenized assets or $PLUME securely in a bank-level custody solution (a must for big money). Paxos is another listed partner, which could relate to stablecoin infrastructure (Paxos issues USDP stablecoin and also provides custody and brokerage services – possibly Paxos could be safeguarding the reserves for pUSD or facilitating asset tokenization pipelines). LayerZero is mentioned as well, indicating Plume uses LayerZero’s interoperability protocol for cross-chain messaging. This would allow assets on Plume to move to other chains (and vice versa) in a trust-minimized way, complementing Plume’s rollup bridge.

  • Other DeFi Integrations: Plume’s ecosystem page cites 180+ protocols, including RWA specialists and mainstream DeFi projects. For instance, names like Nucleus Yield (a platform for tokenized yields), and possibly on-chain KYC providers or identity solutions, are part of the mix. By the time of mainnet, Plume had over 200 integrated protocols in its testnet environment – meaning many existing dApps (DEXs, money markets, etc.) have deployed or are ready to deploy on Plume. This ensures that once real-world assets are tokenized, they have immediate utility: e.g., a tokenized solar farm revenue stream could be traded on an order-book exchange, or used as collateral for a loan, or included in an index – because the DeFi “money lego” pieces (DEXs, lending platforms, asset management protocols) are available on the chain from the start.

In summary, Plume’s ecosystem strategy has been aggressive and comprehensive: securing anchor partnerships for assets (e.g. funds from Apollo, BlackRock via Superstate/Ondo, private credit via Goldfinch and Credbull, emerging market assets via Mercado Bitcoin), putting infrastructure and compliance in place (Anchorage custody, Paxos, identity/AML tooling), and porting over the DeFi primitives that allow secondary markets and leverage to flourish. The result is that Plume enters 2025 as potentially the most interconnected RWA network in Web3 – a hub where various RWA protocols and real-world institutions plug in. This “network-of-networks” effect could drive significant total value locked and user activity, as indicated by early metrics (Plume’s testnet saw 18+ million unique wallets and 280+ million transactions in a short span, largely due to incentive campaigns and the breadth of projects testing the waters).

Roadmap and Development Milestones

Plume’s development has moved at a rapid clip, with a phased approach to scaling up real-world assets on-chain:

  • Testnet and Community Growth (2023): Plume launched its incentivized testnet (code-named “Miles”) in mid-late 2023. The testnet campaign was extremely successful in attracting users – over 18 million testnet wallet addresses were created, executing 280 million+ transactions. This was likely driven by testnet “missions” and an airdrop campaign (Season 1 of Plume’s airdrop was claimed by early users). The testnet also onboarded over 200 protocols and saw 1 million NFTs (“Goons”) minted, indicating a vibrant trial ecosystem. This massive testnet was a milestone proving out Plume’s tech scalability and generating buzz (and a large community: Plume now counts ~1M Twitter followers and hundreds of thousands in Discord/Telegram).

  • Mainnet Launch (Q1 2025): Plume targeted the end of 2024 or early 2025 for mainnet launch. Indeed, by February 2025, partners like Mercado Bitcoin announced their tokenized assets would go live “from the first day of Plume’s mainnet launch.” This implies Plume mainnet went live or was scheduled to go live around Feb 2025. Mainnet launch is a crucial milestone, bringing the testnet’s lessons to production along with the initial slate of real assets (~$1B+ worth) ready to be tokenized. The launch likely included the release of Plume’s core products: the Plume Chain (mainnet), Arc for asset onboarding, pUSD stablecoin, and Plume Passport wallet, as well as initial DeFi dApps (DEXs, money markets) deployed by partners.

  • Phased Asset Onboarding: Plume has indicated a “phased onboarding” strategy for assets to ensure a secure, liquid environment. In early phases, simpler or lower-risk assets (like fully backed stablecoins, tokenized bonds) come first, alongside controlled participation (perhaps whitelisted institutions) to build trust and liquidity. Each phase then unlocks more use cases and asset classes as the ecosystem proves itself. For example, Phase 1 might focus on on-chain Treasuries and private credit fund tokens (relatively stable, yield-generating assets). Subsequent phases could bring more esoteric or higher-yield assets like renewable energy revenue streams, real estate equity tokens, or even exotic assets (the docs amusingly mention “GPUs, uranium, mineral rights, durian farms” as eventual on-chain asset possibilities). Plume’s roadmap thus expands the asset menu over time, parallel with developing the needed market depth and risk management on-chain.

  • Scaling and Decentralization: Following mainnet, a key development goal is to decentralize the Plume chain’s operations. Currently, Plume has a sequencer model (likely run by the team or a few nodes). Over time, they plan to introduce a robust validator/sequencer set where $PLUME stakers help secure the network, and possibly even transition to a fully independent consensus. The founder’s note about building an optimized L1 with a new crypto-economic model hints that Plume might implement a novel Proof-of-Stake or hybrid security model to protect high-value RWAs on-chain. Milestones in this category would include open-sourcing more of the stack, running incentivized testnet for node operators, and implementing fraud proofs or zk-proofs (if moving beyond an optimistic rollup).

  • Feature Upgrades: Plume’s roadmap also includes adding advanced features demanded by institutions. This could involve:

    • Privacy enhancements: e.g., integrating zero-knowledge proofs for confidential transactions or identity, so that sensitive financial details of RWAs (like borrower info or cashflow data) can be kept private on a public ledger. The mention of FHE and zkTLS suggests research into enabling private yet verifiable asset handling.
    • Compliance and Identity: Plume already has AML screening and compliance modules, but future work will refine on-chain identity (perhaps DID integration in Plume Passport) so that RWA tokens can enforce transfer restrictions or only be held by eligible investors when required.
    • Interoperability: Further integrations with cross-chain protocols (expanding on LayerZero) and bridges so that Plume’s RWA liquidity can seamlessly flow into major ecosystems like Ethereum mainnet, Layer-2s, and even other app-chains. The SkyLink omnichain yield product is likely part of this, enabling users on other chains to tap yields from Plume’s RWA pools.
  • Growth Targets: Plume’s leadership has publicly stated goals like “tokenize $3 billion+ in assets by Q4 2024” and eventually far more. While $1.25B was the short-term pipeline at launch, the journey to $3B in tokenized RWAs is an explicit milestone. Longer term, given the trillions in institutional assets potentially tokenizable, Plume will measure success in how much real-world value it brings on-chain. Another metric is TVL and user adoption: by April 2025 the RWA tokenization market crossed $20B in TVL overall, and Plume aspires to capture a significant share of that. If its partnerships mature (e.g., if even 5% of that $1 trillion Goldfinch pipeline comes on-chain), Plume’s TVL could grow exponentially.

  • Recent Highlights: By spring 2025, Plume had several noteworthy milestones:

    • The Apollo investment (Apr 2025) – which not only brought funding but also the opportunity to work with Apollo’s portfolio (Apollo manages $600B+ including credit, real estate, and private equity assets that could eventually be tokenized).
    • Grayscale consideration (Apr 2025) – being added to Grayscale’s watchlist is a milestone in recognition, potentially paving the way for a Plume investment product for institutions.
    • RWA Market Leadership: Plume’s team frequently publishes its “Plumeberg” newsletter tracking RWA market trends. In one issue, they celebrated RWA protocols surpassing $10B TVL and noted Plume’s key role in the narrative. They have positioned Plume as core infrastructure for the sector as it grows, aiming to make it a reference platform in the RWA conversation.

In essence, Plume’s roadmap is about scaling up and out: scale up in terms of assets (from hundreds of millions to billions tokenized), and scale out in terms of features (privacy, compliance, decentralization) and integrations (connecting to more assets and users globally). Each successful asset onboarding (be it a Brazilian credit deal or an Apollo fund tranche) is a development milestone in proving the model. If Plume can maintain momentum, upcoming milestones might include major financial institutions launching products directly on Plume (e.g., a bank issuing a bond on Plume), or government entities using Plume for public asset auctions – all part of the longer-term vision of Plume as a global on-chain marketplace for real-world finance.

Metrics and Traction

While still early, Plume Network’s traction can be gauged by a combination of testnet metrics, partnership pipeline, and the overall growth of RWA on-chain:

  • Testnet Adoption: Plume’s incentivized testnet (2023) saw extraordinary participation. 18 million+ unique addresses and 280 million transactions were recorded – numbers rivaling or exceeding many mainnets. This was driven by an enthusiastic community drawn by Plume’s airdrop incentives and the allure of RWAs. It demonstrates a strong retail interest in the platform (though many may have been speculators aiming for rewards, it nonetheless seeded a large user base). Additionally, over 200 DeFi protocols deployed contracts on the testnet, signaling broad developer interest. This effectively primed Plume with a large user and developer community even before launch.

  • Community Size: Plume quickly built a social following in the millions (e.g., 1M followers on X/Twitter, 450k in Discord, etc.). They brand their community members as “Goons” – over 1 million “Goon” NFTs were minted as a part of testnet achievements. Such gamified growth reflects one of the fastest community buildups in recent Web3 memory, indicating that the narrative of real-world assets resonates with a wide audience in crypto.

  • Ecosystem and TVL Pipeline: At mainnet launch, Plume projected having over $1 billion in real-world assets tokenized or available on day one. In a statement, co-founder Chris Yin highlighted proprietary access to high-yield, privately held assets that are “exclusively” coming to Plume. Indeed, specific assets lined up included:

    • $500M from a Credbull private credit fund,
    • $300M in solar energy farms (Plural Energy),
    • $120M in healthcare (Medicaid receivables),
    • plus mineral rights and other esoteric assets. These sum to ~$1B, and Yin stated the aim to reach $3B tokenized by end of 2024. Such figures, if realized, would place Plume among the top chains for RWA TVL. By comparison, the entire RWA sector’s on-chain TVL was about $20B as of April 2025, so $3B on one platform would be a very significant share.
  • Current TVL / Usage: Since mainnet launch is recent, concrete TVL figures on Plume aren’t yet publicly reported like on DeFiLlama. However, we know several integrated projects bring their own TVL:

    • Ondo’s products (OUSG, etc.) had $623M in market value around early 2024 – some of that may now reside or be mirrored on Plume.
    • The tokenized assets via Mercado Bitcoin (Brazil) add $40M pipeline.
    • Goldfinch Prime’s pool could attract large deposits (Goldfinch’s legacy pools originated ~$100M+ of loans; Prime could scale higher with institutions).
    • If Nest vaults aggregate multiple yields, that could quickly accumulate nine-figure TVL on Plume as stablecoin holders seek 5-10% yields from RWAs. As a qualitative metric, demand for RWA yields has been high even in bear markets – for instance, tokenized Treasury funds like Ondo’s saw hundreds of millions in a few months. Plume, concentrating many such offerings, could see a rapid uptick in TVL as DeFi users rotate into more “real” yields.
  • Transactions and Activity: We might anticipate relatively lower on-chain transaction counts on Plume compared to, say, a gaming chain, because RWA transactions are higher-value but less frequent (e.g., moving millions in a bond token vs. many micro-transactions). That said, if secondary trading picks up (on an order book exchange or AMM on Plume), we could see steady activity. The presence of 280M test txns suggests Plume can handle high throughput if needed. With Plume’s low fees (designed to be cheaper than Ethereum) and composability, it encourages more complex strategies (like looping collateral, automated yield strategies by smart contracts) which could drive interactions.

  • Real-World Impact: Another “metric” is traditional participation. Plume’s partnership with Apollo and others means institutional AUM (assets under management) connected to Plume is in the tens of billions (just counting Apollo’s involved funds, BlackRock’s BUIDL fund, etc.). While not all that value is on-chain, even a small allocation from each could quickly swell Plume’s on-chain assets. For example, BlackRock’s BUIDL fund (tokenized money market) hit $1B AUM within a year. Franklin Templeton’s on-chain government money fund reached $368M. If similar funds launch on Plume or existing ones connect, those figures reflect potential scale.

  • Security/Compliance Metrics: It’s worth noting Plume touts being fully onchain 24/7, permissionless yet compliant. One measure of success will be zero security incidents or defaults in the initial cohorts of RWA tokens. Metrics like yield payments delivered to users (e.g., X amount of interest paid out via Plume smart contracts from real assets) will build credibility. Plume’s design includes real-time auditing and on-chain verification of asset collateral (some partners provide daily transparency reports, as Ondo does for USDY). Over time, consistent, verified yield payouts and perhaps credit ratings on-chain could become key metrics to watch.
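To make the reserve-verification idea concrete, here is a minimal sketch of such a check; the function name, tolerance, and figures are hypothetical illustrations, not Plume’s actual mechanism:

```python
def verify_reserves(onchain_supply: float, attested_reserves: float,
                    tolerance: float = 0.001) -> bool:
    """Return True when attested off-chain reserves fully back the on-chain
    token supply, within a small tolerance for attestation rounding."""
    if onchain_supply <= 0:
        return True  # nothing issued, nothing to back
    coverage = attested_reserves / onchain_supply
    return coverage >= 1.0 - tolerance

# 100M tokens outstanding vs. $100.2M attested in custody -> fully backed
assert verify_reserves(100_000_000, 100_200_000)
# $95M attested against 100M tokens -> under-collateralized, flag it
assert not verify_reserves(100_000_000, 95_000_000)
```

A real proof-of-reserve pipeline would pull the supply from the chain and the reserve figure from a signed attestation or oracle feed; the comparison itself is this simple.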

In summary, early indicators show strong interest and a robust pipeline for Plume. The testnet numbers demonstrate crypto community traction, and the partnerships outline a path to significant on-chain TVL and usage. As Plume transitions to steady state, we will track metrics like how many asset types are live, how much yield is distributed, and how many active users (especially institutional) engage on the platform. Given that the entire RWA category is growing fast (over $22.4B TVL as of May 2025, with a 9.3% monthly growth rate), Plume’s metrics should be viewed in context of this expanding pie. There is a real possibility that Plume could emerge as a leading RWA hub capturing a multi-billion-dollar share of the market if it continues executing.
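As a back-of-the-envelope check on these growth figures, one can compound them forward; this is a naive sketch that assumes the cited 9.3% monthly rate holds constant, which real markets rarely do:

```python
def project_tvl(current_tvl_bn: float, monthly_growth: float, months: int) -> float:
    """Project sector TVL forward under a constant compound monthly growth rate."""
    return current_tvl_bn * (1 + monthly_growth) ** months

# $22.45B in May 2025, compounding 9.3%/month for the 7 months to December:
year_end = project_tvl(22.45, 0.093, 7)  # ≈ $41.8B
```

Even under this crude constant-rate assumption the sector lands in the $40–50B range by year end, which is why the ~$50B projection is plausible yet sensitive to the growth rate holding up.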


Real-World Assets (RWA) in Web3: Overview and Significance

Real-World Assets (RWAs) refer to tangible or financial assets from the traditional economy that are tokenized on blockchain – in other words, digital tokens that represent ownership or rights to real assets or cash flows. These can include assets like real estate properties, corporate bonds, trade invoices, commodities (gold, oil), stocks, or even intangible assets like carbon credits and intellectual property. RWA tokenization is arguably one of the most impactful trends in crypto, because it serves as a bridge between traditional finance (TradFi) and decentralized finance (DeFi). By bringing real-world assets on-chain, blockchain technology can inject transparency, efficiency, and broader access into historically opaque and illiquid markets.

The significance of RWAs in Web3 has grown dramatically in recent years:

  • They unlock new sources of collateral and yield for the crypto ecosystem. Instead of relying on speculative token trading or purely crypto-native yield farming, DeFi users can invest in tokens that derive value from real economic activity (e.g., revenue from a real estate portfolio or interest from loans). This introduces “real yield” and diversification, making DeFi more sustainable.
  • For traditional finance, tokenization promises to increase liquidity and accessibility. Assets like commercial real estate or loan portfolios, which typically have limited buyers and cumbersome settlement processes, can be fractionalized and traded 24/7 on global markets. This can reduce financing costs and democratize access to investments that were once restricted to banks or large funds.
  • RWAs also leverage blockchain’s strengths: transparency, programmability, and efficiency. Settlement of tokenized securities can be near-instant and peer-to-peer, eliminating layers of intermediaries and reducing settlement times from days to seconds. Smart contracts can automate interest payments or enforce covenants. Additionally, the immutable audit trail of blockchains enhances transparency – investors can see exactly how an asset is performing (especially when coupled with oracle data) and trust that the token supply matches real assets (with on-chain proofs of reserve, etc.).
  • Importantly, RWA tokenization is seen as a key driver of the next wave of institutional adoption of blockchain. Unlike the largely speculative DeFi summer of 2020 or the NFT boom, RWAs appeal directly to the finance industry’s core, by making familiar assets more efficient. A recent report by Ripple and BCG projected that the market for tokenized assets could reach $18.9 trillion by 2033, underscoring the vast addressable market. Even nearer term, growth is rapid – as of May 2025, RWA projects’ TVL was $22.45B (up ~9.3% in one month) and projected to hit ~$50B by end of 2025. Some estimates foresee $1–$3 trillion tokenized by 2030, with upper scenarios as high as $30T if adoption accelerates.
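The programmability point above – smart contracts automating interest payments – can be sketched in a few lines. The fixed-rate quarterly coupon below is purely illustrative, not modeled on any specific product:

```python
from datetime import date, timedelta

def coupon_schedule(principal: float, annual_rate: float,
                    start: date, payments: int, interval_days: int = 90):
    """List the (pay_date, amount) coupons a token contract could pay out
    automatically: simple interest, fixed rate, evenly spaced periods."""
    coupon = round(principal * annual_rate * interval_days / 365, 2)
    return [(start + timedelta(days=interval_days * (i + 1)), coupon)
            for i in range(payments)]

# A $1M tokenized bond at 5%: four quarterly coupons of $12,328.77 each
schedule = coupon_schedule(1_000_000, 0.05, date(2025, 1, 1), 4)
```

On-chain, the same schedule would be enforced by the token contract itself, paying holders pro rata with no transfer agent or clearing delay in between.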

In short, RWA tokenization is transforming capital markets by making traditional assets more liquid, borderless, and programmable. It represents a maturation of the crypto industry – moving beyond purely self-referential assets toward financing the real economy. As one analysis put it, RWAs are “rapidly shaping up to be the bridge between traditional finance and the blockchain world”, turning the long-hyped promise of blockchain disrupting finance into a reality. This is why 2024–2025 has seen RWAs touted as the growth narrative in Web3, attracting serious attention from big asset managers, governments, and Web3 entrepreneurs alike.

Key Protocols and Projects in the RWA Space

The RWA landscape in Web3 is broad, comprising various projects each focusing on different asset classes or niches. Here we highlight some key protocols and platforms leading the RWA movement, along with their focus areas and recent progress:

| Project / Protocol | Focus & Asset Types | Blockchain | Notable Metrics / Highlights |
| --- | --- | --- | --- |
| Centrifuge | Decentralized securitization of private credit – tokenizing real-world payment assets like invoices, trade receivables, real estate bridge loans, royalties, etc. via asset pools (Tinlake). Investors earn yield from financing these assets. | Polkadot parachain (Centrifuge Chain) with Ethereum dApp (Tinlake) integration | TVL ≈ $409M in pools; pioneered RWA DeFi with MakerDAO (Centrifuge pools back certain DAI loans). Partners with institutions like New Silver and FortunaFi for asset origination. Launching Centrifuge V3 for easier cross-chain RWA liquidity. |
| Maple Finance | Institutional lending platform – initially undercollateralized crypto loans (to trading firms), now pivoted to RWA-based lending. Offers pools where accredited lenders provide USDC to borrowers (now often backed by real-world collateral or revenue). Launched a Cash Management Pool for on-chain U.S. Treasury investments and Maple Direct for overcollateralized BTC/ETH loans. | Ethereum (V2 & Maple 2.0), previously Solana (deprecated) | $2.46B in total loans originated to date; shifted to fully collateralized lending after defaults in unsecured lending. Maple’s new Treasury pool allows non-US investors to earn ~5% on T-Bills via USDC. Its native token MPL (soon converting to SYRUP) captures protocol fees; Maple ranks #2 in private credit RWA TVL and is one of few with a liquid token. |
| Goldfinch | Decentralized private credit – originally provided undercollateralized loans to fintech lenders in emerging markets (Latin America, Africa, etc.) by pooling stablecoin from DeFi investors. Now launched Goldfinch Prime, targeting institutional investors to provide on-chain access to multi-billion-dollar private credit funds (managed by Apollo, Ares, Golub, etc.) in one diversified pool. Essentially brings established private debt funds on-chain for qualified investors. | Ethereum | Funded ~$100M in loans across 30+ borrowers since inception. Goldfinch Prime (2023) is offering exposure to top private credit funds (Apollo, Blackstone, T. Rowe Price, etc.) with thousands of underlying loans. Backed by a16z, Coinbase Ventures, etc. Aims to merge DeFi capital with proven TradFi credit strategies, with yields often 8-10%. GFI token governs the protocol. |
| Ondo Finance | Tokenized funds and structured products – pivoted from DeFi services to focusing on on-chain investment funds. Issuer of tokens like OUSG (Ondo Short-Term Government Bond Fund token – effectively tokenized shares of a U.S. Treasury fund) and OSTB/OMMF (money market fund tokens). Also offers USDY (tokenized deposit yielding ~5% from T-bills + bank deposits). Ondo also built Flux, a lending protocol to allow borrowing against its fund tokens. | Ethereum (tokens also deployed on Polygon, Solana, etc. for accessibility) | $620M+ in tokenized fund AUM (e.g. OUSG, USDY, etc.). OUSG is one of the largest on-chain Treasury products, at ~$580M AUM providing ~4.4% APY. Ondo’s funds are offered under SEC Reg D/S exemptions via a broker-dealer, ensuring compliance. Ondo’s approach of using regulated SPVs and partnering with BlackRock’s BUIDL fund has set a model for tokenized securities in the US. ONDO token (governance) has a ~$2.8B FDV with 15% in circulation (indicative of high investor expectations). |
| MakerDAO (RWA Program) | Decentralized stablecoin issuer (DAI) that has increasingly allocated its collateral to RWA investments. Maker’s RWA effort involves vaults that accept real-world collateral (e.g. loans via Huntingdon Valley Bank, or tokens like CFG (Centrifuge) pools, DROP tokens, and investments into short-term bonds through off-chain structures with partners like BlockTower and Monetalis). Maker essentially invests DAI into RWA to earn yield, which shores up DAI’s stability. | Ethereum | As of late 2023, Maker had over $1.6B in RWA exposure, including >$1B in U.S. Treasury and corporate bonds and hundreds of millions in loans to real estate and banks (Maker’s Centrifuge vaults, bank loans, and Société Générale bond vault). This now comprises a significant portion of DAI’s collateral, contributing real yield (~4-5% on those assets) to Maker. Maker’s pivot to RWA (part of the “Endgame” plan) has been a major validation for RWA in DeFi. However, Maker does not tokenize these assets for broader use; it holds them in trust via legal entities to back DAI. |
| TrueFi & Credix | (Grouping two similar credit protocols) TrueFi – a protocol for uncollateralized lending to crypto and TradFi borrowers, with a portion of its book in real-world loans (e.g. lending to fintechs). Credix – a Solana-based private credit marketplace connecting USDC lenders to Latin American credit deals (often receivables and SME loans, tokenized as bonds). Both enable underwriters to create loan pools that DeFi users can fund, thus bridging to real economy lending. | Ethereum (TrueFi), Solana (Credix) | TrueFi facilitated ~$500M in loans (crypto + some RWA) since launch, though faced defaults; its focus is shifting to credit fund tokenization. Credix has funded tens of millions in receivables in Brazil/Colombia, and in 2023 partnered with Circle and VISA on a pilot to convert receivables to USDC for faster financing. These are notable but smaller players relative to Maple/Goldfinch. Credix’s model influenced Goldfinch’s design. |
| Securitize & Provenance (Figure) | These are more CeFi-oriented RWA platforms: Securitize provides tokenization technology for enterprises (it tokenized private equity funds, stocks, and bonds for clients, operating under full compliance; recently partnered with Hamilton Lane to tokenize parts of its $800M funds). Provenance Blockchain (Figure), built by Figure Technologies, is a fintech platform mainly for loan securitization and trading (they’ve done HELOC loans, mortgage-backed securities, etc. on their private chain). | Private or permissioned chains (Provenance is a Cosmos-based chain; Securitize issues tokens on Ethereum, Polygon, etc.) | Figure’s Provenance has facilitated over $12B in loan originations on-chain (mostly between institutions) and is arguably one of the largest by volume (it is the “Figure” noted as top in the private credit sector). Securitize has tokenized multiple funds and even enabled retail to buy tokenized equity in companies like Coinbase pre-IPO. They aren’t “DeFi” platforms but are key bridges for RWAs – often working with regulated entities and focusing on compliance (Securitize is a registered broker-dealer/transfer agent). Their presence underscores that RWA tokenization spans both decentralized and enterprise realms. |

(Table sources: Centrifuge TVL, Maple transition and loan volume, Goldfinch Prime description, Ondo stats, Ondo–BlackRock partnership, Maker & market projection, Maple rank.)

Centrifuge: Often cited as the first RWA DeFi protocol (launched 2019), Centrifuge allows asset originators (like financing companies) to pool real-world assets and issue ERC-20 tokens called DROP (senior tranche) and TIN (junior tranche) representing claims on the asset pool. These tokens can be used as collateral in MakerDAO or held for yield. Centrifuge operates its own chain for efficiency but connects to Ethereum for liquidity. It currently leads the pack in on-chain private credit TVL (~$409M), demonstrating product-market fit in areas like invoice financing. A recent development is Centrifuge partnering with Clearpool’s upcoming RWA chain (Ozea) to expand its reach, and working on Centrifuge V3 which will enable assets to be composable across any EVM chain (so Centrifuge pools could be tapped by protocols on chains like Ethereum, Avalanche, or Plume).
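The DROP/TIN structure is a classic senior/junior waterfall. A minimal sketch of the payout logic (illustrative numbers and function names, not Centrifuge’s actual contract code):

```python
def tranche_waterfall(pool_recovered: float, drop_principal: float,
                      drop_rate: float) -> tuple[float, float]:
    """Split a pool's recovered cash between tranches: DROP (senior) is owed
    principal plus its fixed rate first; TIN (junior) absorbs losses and
    keeps any excess spread."""
    drop_due = drop_principal * (1 + drop_rate)
    drop_paid = min(pool_recovered, drop_due)
    tin_paid = pool_recovered - drop_paid
    return drop_paid, tin_paid

# Hypothetical pool: $10M recovered, $8M of DROP owed a 5% fixed return.
drop, tin = tranche_waterfall(10_000_000, 8_000_000, 0.05)
# DROP collects its full ~$8.4M; TIN keeps the ~$1.6M residual upside.
# In a shortfall (say only $7M recovered), DROP takes everything, TIN gets zero.
```

The junior TIN tranche takes first losses but keeps the excess spread, which is what compensates the asset originator for vouching for the pool.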

Maple Finance: Maple showed the promise and perils of undercollateralized DeFi lending. It provided a platform for delegate managers to run credit pools lending to market makers and crypto firms on an unsecured basis. After high-profile defaults in 2022 (e.g. Orthogonal Trading’s collapse related to FTX) which hit Maple’s liquidity, Maple chose to reinvent itself with a safer model. Now Maple’s focus is twofold: (1) RWA “cash management” – giving stablecoin lenders access to Treasury yields, and (2) overcollateralized crypto lending – requiring borrowers to post liquid collateral (BTC/ETH). The Treasury pool (in partnership with Icebreaker Finance) was launched on Solana in 2023, then on Ethereum, enabling accredited lenders to earn ~5% on USDC by purchasing short-duration U.S. Treasury notes. Maple also introduced Maple Direct pools that lend to institutions against crypto collateral, effectively becoming a facilitator for more traditional secured lending. The Maple 2.0 architecture (launched Q1 2023) improved transparency and control for lenders. Despite setbacks, Maple has facilitated nearly $2.5B in loans cumulatively and remains a key player, now straddling both crypto and RWA lending. Its journey underscores the importance of proper risk management and has validated the pivot to real-world collateral for stability.
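Maple’s shift to overcollateralized lending comes down to simple loan-to-value arithmetic; the sketch below uses hypothetical parameters, not Maple’s actual pool terms:

```python
def max_borrow(collateral_units: float, price: float, ltv_limit: float) -> float:
    """Largest loan a pool will extend against crypto collateral at an LTV cap."""
    return collateral_units * price * ltv_limit

def is_healthy(loan: float, collateral_units: float, price: float,
               liquidation_ltv: float) -> bool:
    """A position stays healthy while loan / collateral value remains at or
    below the liquidation LTV threshold."""
    return loan <= collateral_units * price * liquidation_ltv

# Hypothetical terms: 100 ETH at $2,000 with a 50% LTV cap -> $100k max loan.
loan = max_borrow(100, 2_000, 0.50)
# If ETH drops to $1,400 against an 80% liquidation LTV, the loan survives;
# at $1,200 it breaches the threshold and would be liquidated.
```

The gap between the borrow cap (50% here) and the liquidation threshold (80%) is the buffer that lets the lender liquidate before the collateral value falls below the loan, which is precisely the protection Maple’s unsecured pools lacked in 2022.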

Goldfinch: Goldfinch’s innovation was to allow “borrower pools” where real-world lending businesses (like microfinance institutions or fintech lenders) could draw stablecoin liquidity from DeFi without posting collateral, instead relying on the “trust-through-consensus” model (where backers stake junior capital to vouch for the borrower). It enabled loans in places like Kenya, Nigeria, Mexico, etc., delivering yields often above 10%. However, to comply with regulations and attract larger capital, Goldfinch introduced KYC gating and Prime. Now with Goldfinch Prime, the protocol is basically onboarding well-known private credit fund managers and letting non-US accredited users provide capital to them on-chain. For example, rather than lending to a single fintech lender, a Goldfinch Prime user can invest in a pool that aggregates many senior secured loans managed by Ares or Apollo – essentially investing in slices of those funds (which off-chain are massive, e.g. Blackstone’s private credit fund is $50B+). This moves Goldfinch upmarket: it’s less about frontier market fintech loans and more about giving crypto investors an entry to institutional-grade yield (with lower risk). Goldfinch’s GFI token and governance remain, but the user base and pool structures have shifted to a more regulated stance. This reflects a broader trend: RWA protocols increasingly working directly with large TradFi asset managers to scale.

Ondo Finance: Ondo’s transformation is a case study in adapting to demand. When DeFi degen yields dried up in the bear market, the thirst for safe yield led Ondo to tokenize T-bills and money market funds. Ondo set up a subsidiary (Ondo Investments) and registered offerings so that accredited and even retail (in some regions) could buy regulated fund tokens. Ondo’s flagship OUSG token is effectively tokenized shares of a short-term US Treasuries ETF; it grew quickly to over half a billion in circulation, confirming huge demand for on-chain Treasuries. Ondo also created USDY, which takes a step further by mixing T-bills and bank deposits to approximate a high-yield savings account on-chain. At ~4.6% APY and a low $500 entry, USDY aims for mass market within crypto. To complement these, Ondo’s Flux protocol lets holders of OUSG or USDY borrow stablecoins against them (solving liquidity since these tokens might otherwise be lockups). Ondo’s success has made it a top-3 RWA issuer by TVL. It’s a prime example of working within regulatory frameworks (SPVs, broker-dealers) to bring traditional securities on-chain. It also collaborates (e.g., using BlackRock’s fund) rather than competing with incumbents, which is a theme in RWA: partnership over disruption.

MakerDAO: While not a standalone RWA platform, Maker deserves mention because it effectively became one of the largest RWA investors in crypto. Maker realized that diversifying DAI’s collateral beyond volatile crypto could both stabilize DAI and generate revenue (through real-world yields). Starting with small experiments (like a loan to a U.S. bank, and vaults for Centrifuge pool tokens), Maker ramped up in 2022-2023 by allocating hundreds of millions of DAI to buy short-term bonds and invest in money market funds via custody accounts. By mid-2023 Maker had allocated $500M to a BlackRock-managed bond fund and a similar amount to a startup (Monetalis) to invest in Treasuries – these are analogous to Ondo’s approach but done under Maker governance. Maker also onboarded loans like the Societe Generale $30M on-chain bond, and vaults for Harbor Trade’s Trade Finance pool, etc. The revenue from these RWA investments has been substantial – by some reports, Maker’s RWA portfolio generates tens of millions in annualized fees, which has made DAI’s system surplus grow (and MKR token started buybacks using those profits). This RWA strategy is central to Maker’s “Endgame” plan, where eventually Maker might spin out specialized subDAOs to handle RWA. The takeaway is that even a decentralized stablecoin protocol sees RWA as key to sustainability, and Maker’s scale (with DAI ~$5B supply) means it can materially impact real-world markets by deploying liquidity there.

Others: There are numerous other projects in the RWA space, each carving out a niche:

  • Tokenized Commodities: Projects like Paxos Gold (PAXG) and Tether Gold (XAUT) have made gold tradable on-chain (combined market cap of ~$1.4B). These tokens give the convenience of crypto with the stability of gold and are fully backed by physical gold in vaults.
  • Tokenized Stocks: Firms like Backed Finance (and earlier synthetic-asset projects such as Mirror) have issued tokens mirroring equities like Apple (bAAPL) or Tesla. Backed’s tokens (e.g., bNVDA for Nvidia) are 100% collateralized by shares held by a custodian and available under EU regulatory sandbox exemptions, enabling 24/7 trading of stocks on DEXs. The total for tokenized stocks is still small (~$0.46B), but growing as interest in around-the-clock trading and fractional ownership picks up.
  • Real Estate Platforms: Lofty AI (Algorand-based) allows fractional ownership of rental properties with tokens as low as $50 per fraction. RealT (Ethereum) offers tokens for shares in rental homes in Detroit and elsewhere (paying rental income as USDC dividends). Real estate is a huge market ($300T+ globally), so even a fraction coming on-chain could dwarf other categories; projections see $3–4 Trillion in tokenized real estate by 2030-2035 if adoption accelerates. While current on-chain real estate is small, pilots are underway (e.g., Hong Kong’s government sold tokenized green bonds; Dubai is running a tokenized real estate sandbox).
  • Institutional Funds: Beyond Ondo, traditional asset managers are launching tokenized versions of their funds. We saw BlackRock’s BUIDL (a tokenized money market fund that grew from $100M to $1B AUM in one year). WisdomTree issued 13 tokenized ETFs by 2025. Franklin Templeton’s government money fund (BENJI token on Polygon) approached $370M AUM. These efforts indicate that large asset managers view tokenization as a new distribution channel. It also means competition for crypto-native issuers, but overall it validates the space. Many of these tokens target institutional or accredited investors initially (to comply with securities laws), but over time could open to retail as regulations evolve.
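The fractional-ownership arithmetic behind platforms like Lofty can be illustrated with a short sketch. All figures and the function below are hypothetical, not any platform's actual terms:

```python
# Hypothetical sketch of fractional real-estate token economics.
# All numbers are illustrative, not any platform's actual offering.

def per_token_metrics(property_value: float, token_price: float,
                      annual_net_rent: float) -> dict:
    """Split a property into fixed-price fractions and compute the
    rent and gross yield each fraction earns."""
    num_tokens = property_value / token_price
    return {
        "num_tokens": num_tokens,
        "annual_rent_per_token": annual_net_rent / num_tokens,
        "gross_yield_pct": 100 * annual_net_rent / property_value,
    }

# A $150,000 rental home sold in $50 fractions, netting $9,000/year in rent:
metrics = per_token_metrics(150_000, 50, 9_000)
print(metrics)  # 3,000 tokens, $3.00 of rent per token, 6.0% gross yield
```

In practice the rent distribution (e.g., monthly USDC dividends) and any sale proceeds are handled by the platform's legal wrapper, typically an LLC whose shares the tokens represent.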

Why multiple approaches? The RWA sector has a diverse cast because “real-world assets” is an extremely broad category. Different asset types have different risk, return, and regulatory profiles, necessitating specialized platforms:

  • Private credit (Maple, Goldfinch, Centrifuge) focuses on lending and debt instruments, requiring credit assessment and active management.
  • Tokenized securities/funds (Ondo, Backed, Franklin) deal with regulatory compliance to represent traditional securities on-chain one-to-one.
  • Real estate involves property law, titles, and often local regulations – some platforms work on REIT-like structures or NFTs that confer ownership of an LLC that owns a property.
  • Commodities like gold have simpler one-to-one backing models but require trust in custody and audits.

Despite this fragmentation, we see a trend of convergence and collaboration: e.g., Centrifuge partnering with Clearpool, Goldfinch partnering with Plume (and indirectly Apollo), Ondo’s assets being used by Maker and others, etc. Over time, we may get interoperability standards (perhaps via projects like RWA.xyz, which is building a data aggregator for all RWA tokens).

Common Asset Types Being Tokenized

Almost any asset with an income stream or market value can, in theory, be tokenized. In practice, the RWA tokens we see today largely fall into a few categories:

  • Government Debt (Treasuries & Bonds): This has become the largest category of on-chain RWA by value. Tokenized U.S. Treasury bills and bonds are highly popular as they carry low risk and ~4-5% yield – very attractive to crypto holders in a low DeFi yield environment. Multiple projects offer this: Ondo’s OUSG, Matrixdock’s treasury token (MTNT), Backed’s TBILL token, etc. As of May 2025, government securities dominate tokenized assets with ~$6.79B TVL on-chain, making it the single biggest slice of the RWA pie. This includes not just U.S. Treasuries, but also some European government bonds. The appeal is global 24/7 access to a safe asset; e.g., a user in Asia can buy a token at 3 AM that effectively puts money in U.S. T-Bills. We also see central banks and public entities experimenting: e.g., the Monetary Authority of Singapore (MAS) ran Project Guardian to explore tokenized bonds and forex; Hong Kong’s HSBC and CSOP launched a tokenized money market fund. Government bonds are likely the “killer app” of RWA to date.

  • Private Credit & Corporate Debt: These include loans to businesses, invoices, supply chain finance, consumer loans, etc., as well as corporate bonds and private credit funds. On-chain private credit (via Centrifuge, Maple, Goldfinch, Credix, etc.) is a fast-growing area and accounts for over 50% of RWA projects by count (though not by value, since Treasuries dominate). Tokenized private credit often offers higher yields (8-15% APY) because of higher risk and less liquidity. Examples: Centrifuge tokens (DROP/TIN) backed by loan portfolios; Goldfinch’s pools of fintech loans; Maple’s pools to market makers; JPMorgan’s private credit blockchain pilot (which ran intraday repo on-chain); and startups like Flowcarbon (tokenizing carbon credit-backed loans). Even trade receivables from governments (Medicaid claims) are being tokenized (as Plume highlighted). Additionally, corporate bonds are being tokenized: e.g., the European Investment Bank issued digital bonds on Ethereum; companies like Siemens did a €60M on-chain bond. There’s about $23B of tokenized “global bonds” on-chain as of early 2025 – a figure that’s still small relative to the $100+ trillion bond market, but the trajectory is upward.

  • Real Estate: Tokenized real estate can mean either debt (e.g., tokenized mortgages, real estate loans) or equity/ownership (fractional ownership of properties). Thus far, more activity has been in tokenized debt (because it fits into DeFi lending models easily). For instance, parts of a real estate bridge loan might be turned into DROP tokens on Centrifuge and used to generate DAI. On the equity side, projects like Lofty have tokenized residential rental properties (issuing tokens that entitle holders to rental income and a share of sale proceeds). We’ve also seen a few REIT-like tokens (RealT’s properties, etc.). Real estate is highly illiquid traditionally, so tokenization’s promise is huge – one could trade fractions of a building on Uniswap, or use a property token as collateral for a loan. That said, legal infrastructure is tricky (you often need each property in an LLC and the token represents LLC shares). Still, given projections of $3-4 Trillion tokenized real estate by 2030-35, many are bullish that this sector will take off as legal frameworks catch up. A notable example: RedSwan tokenized portions of commercial real estate (like student housing complexes) and raised millions via token sales to accredited investors.

  • Commodities: Gold is the poster child here. Paxos Gold (PAXG) and Tether Gold (XAUT) together have over $1.4B market cap, offering investors on-chain exposure to physical gold (each token = 1 fine troy ounce stored in vault). These have become popular as a way to hedge in crypto markets. Other commodities tokenized include silver, platinum (e.g., Tether has XAGT, XAUT, etc.), and even oil to some extent (there were experiments with tokens for oil barrels or hash-rate futures). Commodity-backed stablecoins like Ditto’s eggs or soybean tokens have popped up, but gold remains dominant due to its stable demand. We can also include carbon credits and other environmental assets: tokens like MCO2 (Moss Carbon Credit) or Toucan’s nature-based carbon tokens had a wave of interest in 2021 as corporates looked at on-chain carbon offsets. In general, commodities on-chain are straightforward as they’re fully collateralized, but they require trust in custodians and auditors.

  • Equities (Stocks): Tokenized stocks allow 24/7 trading and fractional ownership of equities. Platforms like Backed (out of Switzerland) and DX.Exchange / FTX (earlier) issued tokens mirroring popular stocks (Tesla, Apple, Google, etc.). Backed’s tokens are fully collateralized (they hold the actual shares via a custodian and issue ERC-20 tokens representing them). These tokens can be traded on DEXs or held in DeFi wallets, which is novel since conventional stock trading is weekdays only. As of 2025, about $460M of tokenized equities are circulating – still a tiny sliver of the multi-trillion stock market, but it’s growing. Notably, in 2023, MSCI launched indices tracking tokenized assets including tokenized stocks, signaling mainstream monitoring. Another angle is synthetic equities (mirroring a stock’s price via derivatives without holding the stock, as projects like Synthetix did), but regulatory pushback (they can be seen as swaps) made the fully backed approach more favored now.

  • Stablecoins (fiat-backed): It’s worth mentioning that fiat-backed stablecoins like USDC, USDT are essentially tokenized real-world assets (each USDC is backed by $1 in bank accounts or T-bills). In fact, stablecoins are the largest RWA by far – over $200B in stablecoins outstanding (USDT, USDC, BUSD, etc.), mostly backed by cash, Treasury bills, or short-term corporate debt. This has often been cited as the first successful RWA use-case in crypto: tokenized dollars became the lifeblood of crypto trading and DeFi. However, in the RWA context, stablecoins are usually considered separately, because they are currency tokens, not investment products. Still, the existence of stablecoins has paved the way for other RWA tokens (and indeed, projects like Maker and Ondo effectively channel stablecoin capital into real assets).

  • Miscellaneous: We are starting to see even more exotic assets:

    • Fine Art and Collectibles: Platforms like Maecenas and Masterworks explored tokenizing high-end artworks (each token representing a share of a painting). NFTs have proven digital ownership, so it’s conceivable real art or luxury collectibles can be fractionalized similarly (though legal custody and insurance are considerations).
    • Revenue-Sharing Tokens: e.g., CityDAO and other DAOs experimented with tokens that give rights to a revenue stream (like a cut of city revenue or business revenue). These blur the line between securities and utility tokens.
    • Intellectual Property and Royalties: There are efforts to tokenize music royalties (so fans can invest in an artist’s future streaming income) or patents. Royalty Exchange and others have looked into this, allowing tokens that pay out when, say, a song is played (using smart contracts to distribute royalties).
    • Infrastructure and Physical assets: Companies have considered tokenizing things like data center capacity, mining hashpower, shipping cargo space, or even infrastructure projects (some energy companies looked at tokenizing ownership in solar farms or oil wells – Plume itself mentioned “uranium, GPUs, durian farms” as possibilities). These remain experimental but show the broad range of what could be brought on-chain.

In summary, virtually any asset that can be legally and economically ring-fenced can be tokenized. The current focus has been on financial assets with clear cash flows or store-of-value properties (debt, commodities, funds) because they fit well with investor demand and existing law (e.g., an SPV can hold bonds and issue tokens relatively straightforwardly). More complex assets (like direct property ownership or IP rights) will likely take longer due to legal intricacies. But the tide is moving in that direction, as the technology proves itself with simpler assets first and then broadens.
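For the tokenized Treasury and money-market products that dominate today, the core mechanic is a token whose net asset value (NAV) accrues over time. A minimal sketch, assuming daily compounding (the 4.6% APY matches the figure quoted for USDY above, but the compounding convention here is an assumption, not any issuer's actual method):

```python
# Sketch of how an accumulating fund token's NAV might accrue.
# Daily compounding is assumed for illustration; real issuers may
# use a different convention (e.g., actual fund NAV feeds).

def accrued_nav(initial_nav: float, apy: float, days: int) -> float:
    """NAV after `days` of daily compounding at the stated APY."""
    daily_rate = (1 + apy) ** (1 / 365) - 1
    return initial_nav * (1 + daily_rate) ** days

print(round(accrued_nav(1.00, 0.046, 30), 6))   # NAV after one month
print(round(accrued_nav(1.00, 0.046, 365), 6))  # ~1.046 after a full year
```

Some designs instead keep the token pegged at $1 and rebase the supply; either way, the yield shows up on-chain as a deterministic function of the off-chain fund's performance.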

It’s also important to note that each asset type’s tokenization must grapple with how to enforce rights off-chain: e.g., if you hold a token for a property, how do you ensure legal claim on that property? Solutions involve legal wrappers (LLCs, trust agreements) that recognize token holders as beneficiaries. Standardization efforts (like the ERC-1400 standard for security tokens or initiatives by the Interwork Alliance for tokenized assets) are underway to make different RWA tokens more interoperable and legally sound.
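The transfer restrictions these standards formalize can be sketched in a few lines. The class below is conceptually similar to the whitelist checks ERC-1400-style security tokens perform, but the names and structure are illustrative, not the standard's actual interface:

```python
# Minimal sketch of a permissioned (security-token-style) ledger:
# transfers succeed only between whitelisted (KYC'd) addresses.
# Illustrative only; not the ERC-1400 interface.

class PermissionedToken:
    def __init__(self):
        self.whitelist = set()   # KYC'd / accredited addresses
        self.balances = {}       # address -> token balance

    def approve_investor(self, addr):
        self.whitelist.add(addr)

    def mint(self, to, amount):
        assert to in self.whitelist, "mint only to approved investors"
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender, to, amount):
        # Both sides must be whitelisted, mirroring the on-chain check
        # a security token contract performs before moving balances.
        if sender not in self.whitelist or to not in self.whitelist:
            return False
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

token = PermissionedToken()
token.approve_investor("alice")
token.approve_investor("bob")
token.mint("alice", 100)
print(token.transfer("alice", "bob", 40))    # True: both whitelisted
print(token.transfer("bob", "mallory", 10))  # False: recipient not KYC'd
```

In production the whitelist is maintained by the issuer's transfer agent (or a decentralized identity registry), which is exactly the semi-permissioned environment described above.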

Trends & Innovations:

  • Institutional Influx: Perhaps the biggest trend is the entrance of major financial institutions and asset managers into the RWA blockchain space. In the past two years, giants like BlackRock, JPMorgan, Goldman Sachs, Fidelity, Franklin Templeton, WisdomTree, and Apollo have either invested in RWA projects or launched tokenization initiatives. For example, BlackRock’s CEO Larry Fink publicly praised “the tokenization of securities” as the next evolution. BlackRock’s own tokenized money market fund (BUIDL) reaching $1B AUM in one year is a proof-point. WisdomTree creating 13 tokenized index funds by 2025 shows traditional ETFs coming on-chain. Apollo not only invested in Plume but also partnered on tokenized credit (Apollo and Hamilton Lane worked with Figure’s Provenance to tokenize parts of their funds). The involvement of such institutions has a flywheel effect: it legitimizes RWA in the eyes of regulators and investors and accelerates development of compliant platforms. It’s telling that surveys show 67% of institutional investors plan to allocate an average 5.6% of their portfolio to tokenized assets by 2026. High-net-worth individuals similarly are showing ~80% interest in exposure via tokenization. This is a dramatic shift from the 2017-2018 ICO era, as now the movement is institution-led rather than purely grassroots crypto-led.

  • Regulated On-Chain Funds: A notable innovation is bringing regulated investment funds directly on-chain. Instead of creating new instruments from scratch, some projects register traditional funds with regulators and then issue tokens that represent shares. Franklin Templeton’s OnChain U.S. Government Money Fund is a SEC-registered mutual fund whose share ownership is tracked on Stellar (and now Polygon) – investors buy a BENJI token which is effectively a share in a regulated fund, subject to all the usual oversight. Similarly, ARB ETF (Europe) launched a fully regulated digital bond fund on a public chain. This trend of tokenized regulated funds is crucial because it marries compliance with blockchain’s efficiency. It basically means the traditional financial products we know (funds, bonds, etc.) can gain new utility by existing as tokens that trade anytime and integrate with smart contracts. Grayscale’s consideration of $PLUME and similar moves by other asset managers to list crypto or RWA tokens in their offerings also indicates convergence of TradFi and DeFi product menus.

  • Yield Aggregation and Composability: As more RWA yield opportunities emerge, DeFi protocols are innovating to aggregate and leverage them. Plume’s Nest is one example of aggregating multiple yields into one interface. Another example is Yearn Finance beginning to deploy vaults into RWA products (Yearn considered investing in Treasuries through protocols like Notional or Maple). Index Coop created a yield index token that included RWA yield sources. We are also seeing structured products like tranching on-chain: e.g., protocols that issue a junior-senior split of yield streams (Maple explored tranching pools to offer safer vs. riskier slices). Composability means you could one day do things like use a tokenized bond as collateral in Aave to borrow a stablecoin, then use that stablecoin to farm elsewhere – complex strategies bridging TradFi yield and DeFi yield. This is starting to happen; for instance, Flux Finance (by Ondo) lets you borrow against OUSG and then you could deploy that into a stablecoin farm. Leveraged RWA yield farming may become a theme (though careful risk management is needed).

  • Real-Time Transparency & Analytics: Another innovation is the rise of data platforms and standards for RWA. Projects like RWA.xyz aggregate on-chain data to track the market cap, yields, and composition of all tokenized RWAs across networks. This provides much-needed transparency – one can see how big each sector is, track performance, and flag anomalies. Some issuers provide real-time asset tracking: e.g., a token might be updated daily with NAV (net asset value) data from the TradFi custodian, and that can be shown on-chain. The use of oracles is also key – e.g., Chainlink oracles can report interest rates or default events to trigger smart contract functions (like paying out insurance if a debtor defaults). The move towards on-chain credit ratings or reputations is also starting: Goldfinch experimented with off-chain credit scoring for borrowers, Centrifuge has models to estimate pool risk. All of this is to make on-chain RWAs as transparent (or more so) than their off-chain counterparts.

  • Integration with CeFi and Traditional Systems: We see more blending of CeFi and DeFi in RWA. For instance, Coinbase introduced “Institutional DeFi” where they funnel client funds into protocols like Maple or Compound Treasury – giving institutions a familiar interface but yield sourced from DeFi. Bank of America and others have discussed using private blockchain networks to trade tokenized collateral with each other (for faster repo markets, etc.). On the retail front, fintech apps may start offering yields that under the hood come from tokenized assets. This is an innovation in distribution: users might not even know they’re interacting with a blockchain, they just see better yields or liquidity. Such integration will broaden the reach of RWA beyond crypto natives.
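The leveraged RWA yield farming mentioned above reduces to simple loop arithmetic: post a yield-bearing token (an OUSG-like fund token, say) as collateral, borrow stablecoins at some loan-to-value ratio, and buy more of the token. A hedged sketch, with all rates and the LTV illustrative rather than any protocol's actual parameters:

```python
# Illustrative arithmetic for a leveraged RWA "loop". All parameters
# (asset APY, borrow APY, LTV) are hypothetical.

def looped_yield(asset_apy, borrow_apy, ltv, loops):
    """Net APY per $1 of starting capital after `loops` borrow-and-rebuy
    rounds (loops=0 means just holding the token unlevered)."""
    exposure, debt, capital = 0.0, 0.0, 1.0
    for _ in range(loops):
        exposure += capital          # buy the yield-bearing token
        borrowed = capital * ltv     # borrow stablecoins against it
        debt += borrowed
        capital = borrowed           # redeploy the borrowed stablecoins
    exposure += capital              # final purchase, no further borrow
    return exposure * asset_apy - debt * borrow_apy

print(looped_yield(0.05, 0.04, 0.8, 0))  # unlevered: 5% on the token
print(looped_yield(0.05, 0.04, 0.8, 2))  # two loops: ~6.4% net APY
```

The sketch also shows why careful risk management matters: each loop adds debt alongside yield, so a rise in borrow rates or a haircut to the collateral's value erodes the spread quickly.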

Challenges:

Despite the excitement, RWA tokenization faces several challenges and hurdles:

  • Regulatory Compliance and Legal Structure: Perhaps the number one challenge. By turning assets into digital tokens, you often turn them into securities in the eyes of regulators (if they weren’t already). This means projects must navigate securities laws, investment regulations, money transmitter rules, etc. Most RWA tokens (especially in the US) are offered under Reg D (private placement to accredited investors) or Reg S (offshore) exemptions. This limits participation: e.g., retail US investors usually cannot buy these tokens legally. Additionally, each jurisdiction has its own rules – what’s allowed in Switzerland (like Backed’s stock tokens) might not fly in the US without registration. There’s also the legal enforceability angle: a token is a claim on a real asset; ensuring that claim is recognized by courts is crucial. This requires robust legal structuring (LLCs, trusts, SPVs) behind the scenes. It’s complex and costly to set up these structures, which is why many RWA projects partner with legal firms or get acquired by existing players with licenses (for example, Securitize handles a lot of heavy lifting for others). Compliance also means KYC/AML: unlike DeFi’s permissionless nature, RWA platforms often require investors to undergo KYC and accreditation checks, either at token purchase or continuously via whitelists. This friction can deter some DeFi purists and also means these platforms can’t be fully open to “anyone with a wallet” in many cases.

  • Liquidity and Market Adoption: Tokenizing an asset doesn’t automatically make it liquid. Many RWA tokens currently suffer from low liquidity/low trading volumes. For instance, if you buy a tokenized loan, there may be few buyers when you want to sell. Market makers are starting to provide liquidity for certain assets (like stablecoins or Ondo’s fund tokens on DEXes), but order book depth is a work in progress. In times of market stress, there’s concern that RWA tokens could become hard to redeem or trade, especially if underlying assets themselves aren’t liquid (e.g., a real estate token might effectively only be redeemable when the property is sold, which could take months/years). Solutions include creating redemption mechanisms (like Ondo’s funds allow periodic redemptions through the Flux protocol or directly with the issuer), and attracting a diverse investor base to trade these tokens. Over time, as more traditional investors (who are used to holding these assets) come on-chain, liquidity should improve. But currently, fragmentation across different chains and platforms also hinders liquidity – efforts to standardize and maybe aggregate exchanges for RWA tokens (perhaps a specialized RWA exchange or more cross-listings on major CEXes) are needed.

  • Trust and Transparency: Ironically for blockchain-based assets, RWAs often require a lot of off-chain trust. Token holders must trust that the issuer actually holds the real asset and won’t misuse funds. They must trust the custodian holding collateral (in case of stablecoins or gold). They also must trust that if something goes wrong, they have legal recourse. There have been past failures (e.g., some earlier “tokenized real estate” projects that fizzled, leaving token holders in limbo). So, building trust is key. This is done through audits, on-chain proof-of-reserve, reputable custodians (e.g., Coinbase Custody, etc.), and insurance. For example, Paxos publishes monthly audited reports of PAXG reserves, and USDC publishes attestations of its reserves. MakerDAO requires overcollateralization and legal covenants when engaging in RWA loans to mitigate risk of default. Nonetheless, a major default or fraud in a RWA project could set the sector back significantly. This is why, currently, many RWA protocols focus on high-credit quality assets (government bonds, senior secured loans) to build a track record before venturing into riskier territory.

  • Technological Integration: Some challenges are technical. Integrating real-world data on-chain requires robust oracles. For example, pricing a loan portfolio or updating NAV of a fund requires data feeds from traditional systems. Any lag or manipulation in oracles can lead to incorrect valuations on-chain. Additionally, scalability and transaction costs on mainnets like Ethereum can be an issue – moving potentially thousands of real-world payments (think of a pool of hundreds of loans, each with monthly payments) on-chain can be costly or slow. This is partly why specialized chains or Layer-2 solutions (like Plume, or Polygon for some projects, or even permissioned chains) are being used – to have more control and lower cost for these transactions. Interoperability is another technical hurdle: a lot of RWA action is on Ethereum, but some on Solana, Polygon, Polkadot, etc. Bridging assets between chains securely is still non-trivial (though projects like LayerZero, as used by Plume, are making progress). Ideally, an investor shouldn’t have to chase five different chains to manage a portfolio of RWAs – smoother cross-chain operability or a unified interface will be important.

  • Market Education and Perception: Many crypto natives were originally skeptical of RWAs (seeing them as bringing “off-chain risk” into DeFi’s pure ecosystem). Meanwhile, many TradFi people are skeptical of crypto. There is an ongoing need to educate both sides about the benefits and risks. For crypto users, understanding that a token is not just another meme coin but a claim on a legal asset, perhaps with lock-up periods, is crucial. We’ve seen cases where DeFi users got frustrated that they couldn’t instantly withdraw from an RWA pool because off-chain loan settlements take time – managing expectations is key. Similarly, institutional players often worry about issues like custody of tokens (how to hold them securely), compliance (avoiding wallets that interact with sanctioned addresses, etc.), and volatility (ensuring the token technology is stable). Recent positive developments, like Binance Research showing RWA tokens have lower volatility and were even considered “safer than Bitcoin” during certain macro events, help shift perception. But broad acceptance will require time, success stories, and likely regulatory clarity that holding or issuing RWA tokens is legally safe.

  • Regulatory Uncertainty: While we covered compliance, a broader uncertainty is that regulatory regimes are still evolving. The U.S. SEC has not yet given explicit guidance on many tokenized securities beyond enforcing existing laws (which is why most issuers use exemptions or avoid U.S. retail). Europe introduced the MiCA (Markets in Crypto-Assets) regulation, which sets out how crypto-assets (including asset-referenced tokens) are to be handled, and launched a DLT Pilot Regime letting institutions trade securities on blockchain within regulatory sandboxes. That’s promising but not permanent law yet. Countries like Singapore, the UAE (Abu Dhabi, Dubai), and Switzerland are being proactive with sandboxes and digital asset regulations to attract tokenization business. A challenge is if regulations become too onerous or fragmented: e.g., if every jurisdiction demands a slightly different compliance approach, it adds cost and complexity. On the flip side, regulatory acceptance (like Hong Kong’s recent encouragement of tokenization or Japan exploring on-chain securities) could be a boon. In the U.S., a positive development is that certain tokenized funds (like Franklin’s) got SEC approval, showing that it’s possible within existing frameworks. But the looming question: will regulators eventually allow wider retail access to RWA tokens (perhaps through qualified platforms or raising the caps on crowdfunding exemptions)? If not, RWAfi might remain predominantly an institutional play behind walled gardens, which limits the “open finance” dream.

  • Scaling Trustlessly: Another challenge is how to scale RWA platforms without introducing central points of failure. Many current implementations rely on a degree of centralization (an issuer that can pause token transfers to enforce KYC, a central party that handles asset custody, etc.). While this is acceptable to institutions, it’s philosophically at odds with DeFi’s decentralization. Over time, projects will need to find the right balance: e.g., using decentralized identity solutions for KYC (so it’s not one party controlling the whitelist but a network of verifiers), or using multi-sig/community governance to control issuance and custody operations. We’re seeing early moves like Maker’s Centrifuge vaults where MakerDAO governance approves and oversees RWA vaults, or Maple decentralizing pool delegate roles. But full “DeFi” RWA (where even legal enforcement is trustless) is a hard problem. Eventually, maybe smart contracts and real-world legal systems will interface directly (for example, a loan token smart contract that can automatically trigger legal action via a connected legal API if default occurs – this is futuristic but conceivable).
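The proof-of-reserve attestations mentioned above (e.g., Paxos's monthly audited PAXG reports) boil down to one check: audited reserves must cover the circulating token supply. A minimal sketch, with custodian names and figures entirely illustrative:

```python
# Sketch of a proof-of-reserve check for a fully backed token:
# attested reserves across custodians must cover circulating supply.
# Custodian names and quantities are made up for illustration.

def reserves_cover_supply(onchain_supply, attested_reserves):
    """True if audited reserves are at least the circulating supply."""
    return attested_reserves >= onchain_supply

custodian_reports = {"vault_a": 620_000, "vault_b": 410_000}  # e.g., oz of gold
circulating = 1_000_000                                       # tokens; 1 token = 1 oz
print(reserves_cover_supply(circulating, sum(custodian_reports.values())))  # True
```

On-chain variants replace the attestation input with oracle-fed reserve data, which trades auditor trust for oracle trust; most issuers today still rely on periodic third-party audits.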

In summary, the RWA space is rapidly innovating to tackle these challenges. It’s a multi-disciplinary effort: requiring savvy in law, finance, and blockchain tech. Each success (like a fully repaid tokenized loan pool, or a smoothly redeemed tokenized bond) builds confidence. Each challenge (like a regulatory action or an asset default) provides lessons to strengthen the systems. The trajectory suggests that many of these hurdles will be overcome: the momentum of institutional involvement and the clear benefits (efficiency, liquidity) mean tokenization is likely here to stay. As one RWA-focused newsletter put it, “tokenized real-world assets are emerging as the new institutional standard… the infrastructure is finally catching up to the vision of on-chain capital markets.”

Regulatory Landscape and Compliance Considerations

The regulatory landscape for RWAs in crypto is complex and still evolving, as it involves the intersection of traditional securities/commodities laws with novel blockchain technology. Key points and considerations include:

  • Securities Laws: In most jurisdictions, if an RWA token represents an investment in an asset with an expectation of profit (which is often the case), it is deemed a security. For example, in the U.S., tokens representing fractions of income-generating real estate or loan portfolios squarely fall under the definition of investment contracts (Howey Test) or notes, and thus must be registered or offered under an exemption. This is why nearly all RWA offerings to date in the U.S. use private offering exemptions (Reg D 506(c) for accredited investors, Reg S for offshore, Reg A+ for limited public raises, etc.). Compliance with these means restricting token sales to verified investors, implementing transfer restrictions (tokens can only move between whitelisted addresses), and providing necessary disclosures. For instance, Ondo’s OUSG and Maple’s Treasury pool required investors to clear KYC/AML and accreditation checks, and tokens are not freely transferable to unapproved wallets. This creates a semi-permissioned environment, quite different from open DeFi. Europe under MiFID II/MiCA similarly treats tokenized stocks or bonds as digital representations of traditional financial instruments, requiring prospectuses or using the DLT Pilot regime for trading venues. Bottom line: RWA projects must integrate legal compliance from day one – many have in-house counsel or work with legal-tech firms like Securitize, because any misstep (like selling a security token to the public without exemption) could invite enforcement.

  • Consumer Protection and Licensing: Some RWA platforms may need additional licenses. For example, if a platform holds customer fiat to convert into tokens, it might need a money transmitter license or equivalent. If it provides advice or brokerage (matching borrowers and lenders), it might need broker-dealer or ATS (Alternative Trading System) licensing (this is why some partner with broker-dealers – Securitize, INX, Oasis Pro etc., which have ATS licenses to run token marketplaces). Custody of assets (like real estate deeds or cash reserves) might require trust or custody licenses. Anchorage being a partner to Plume is significant because Anchorage is a qualified custodian – institutions feel more at ease if a licensed bank is holding the underlying asset or even the private keys of tokens. In Asia and the Middle East, regulators have been granting specific licenses for tokenization platforms (e.g., the Abu Dhabi Global Market’s FSRA issues permissions for crypto assets including RWA tokens, MAS in Singapore gives project-specific approvals under its sandbox).

  • Regulatory Sandboxes and Government Initiatives: A positive trend is regulators launching sandboxes or pilot programs for tokenization. The EU’s DLT Pilot Regime (2023) allows approved market infrastructures to test trading tokenized securities up to certain sizes without full compliance with every rule – this has led to several European exchanges piloting blockchain bond trading. Dubai announced a tokenization sandbox to boost its digital finance hub. Hong Kong in 2023-24 made tokenization a pillar of its Web3 strategy, with Hong Kong’s SFC exploring tokenized green bonds and art. The UK in 2024 consulted on recognizing digital securities under English law (they already recognize crypto as property). Japan updated its laws to allow security tokens (they call them “electronically recorded transferable rights”) and several tokenized securities have been issued there under that framework. These official programs indicate a willingness by regulators to modernize laws to accommodate tokenization – which could eventually simplify compliance (e.g., creating special categories for tokenized bonds that streamline approval).

  • Travel Rule / AML: Crypto’s global nature triggers AML laws. FATF’s “travel rule” requires that when crypto (including tokens) above a certain threshold is transferred between VASPs (exchanges, custodians), identifying info travels with it. If RWA tokens are mainly transacted on KYC’ed platforms, this is manageable, but if they enter the wider crypto ecosystem, compliance gets tricky. Most RWA platforms currently keep a tight grip: transfers are often restricted to whitelisted addresses whose owners have done KYC. This mitigates AML concerns (as every holder is known). Still, regulators will expect robust AML programs – e.g., screening wallet addresses against sanctions (OFAC lists, etc.). There was a case of a tokenized bond platform in the UK that had to unwind some trades because a token holder became a sanctioned entity – such scenarios will test protocols’ ability to comply. Many platforms build in pause or freeze functions to comply with law enforcement requests (this is controversial in DeFi, but for RWA it’s often non-negotiable to have the ability to lock tokens tied to wrongdoing).

  • Taxation and Reporting: Another compliance consideration: how are these tokens taxed? If you earn yield from a tokenized loan, is it interest income? If you trade a tokenized stock, do wash sale rules apply? Tax authorities have yet to issue comprehensive guidance. In the interim, platforms often provide tax reports to investors (e.g., a Form 1099 in the US for interest or dividends earned via tokens). The transparency of blockchain can help here, as every payment can be recorded and categorized. But cross-border taxation (if someone in Europe holds a token paying US-source interest) can be complex – requiring things like digital W-8BEN forms, etc. This is more of an operational challenge than a roadblock, but it adds friction that automated compliance tech will need to solve.

  • Enforcement and Precedents: We’ve not yet seen many high-profile enforcement actions specifically for RWA tokens – likely because most are trying to comply. However, we have seen enforcement in adjacent areas: e.g., the SEC’s actions against crypto lending products (BlockFi, etc.) underscore that offering yields without registering can be a violation. If an RWA platform slipped up and, say, allowed retail to buy security tokens freely, it could face similar action. There’s also the question of secondary trading venues: If a decentralized exchange allows trading of a security token between non-accredited investors, is that unlawful? Likely yes in the US. This is why a lot of RWA tokens are not listed on Uniswap or are wrapped in a way that restricts addresses. It’s a fine line to walk between DeFi liquidity and compliance – many are erring on the side of compliance, even if it reduces liquidity.

  • Jurisdiction and Conflict of Laws: RWAs by nature connect to specific jurisdictions (e.g., a tokenized real estate in Germany falls under German property law). If tokens trade globally, there can be conflicts of law. Smart contracts might need to encode which law governs. Some platforms choose friendly jurisdictions for incorporation (e.g., the issuer entity in the Cayman Islands and the assets in the U.S., etc.). It’s complex but solvable with careful legal structuring.

  • Investor Protection and Insurance: Regulators will also care about investor protection: ensuring that token holders have clear rights. For example, if a token is supposed to be redeemable for a share of asset proceeds, the mechanism for that must be legally enforceable. Some tokens represent debt securities that can default – what disclosures were given about that risk? Platforms often publish offering memorandums or prospectuses (Ondo did for its tokens). Over time, regulators might require standardized risk disclosures for RWA tokens, much like mutual funds provide. Also, insurance might be mandated or at least expected – for instance, insuring a building in a real estate token, or having crime insurance for a custodian holding collateral.

  • Decentralization vs Regulation: There’s an inherent tension: the more decentralized and permissionless you make an RWA platform, the more it rubs against current regulations which assume identifiable intermediaries. One evolving strategy is to use Decentralized Identities (DID) and verifiable credentials to square this circle. E.g., a wallet could hold a credential that proves the owner is accredited without revealing their identity on-chain, and smart contracts could check for that credential before allowing transfer – making compliance automated and preserving some privacy. Projects like Xref (on XDC network) and Astra Protocol are exploring this. If successful, regulators might accept these novel approaches, which could allow permissionless trading among vetted participants. But that’s still in nascent stages.
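The gated-transfer pattern that recurs throughout this section (whitelisted transfers under securities laws, freeze functions for AML, and the credential check just described) can be sketched in a few lines. This is a hypothetical illustration, not any platform's actual contract code: HMAC stands in for a real digital-signature or zero-knowledge credential scheme, and all names are invented.

```python
# Hypothetical sketch of a compliance-gated token transfer: before any
# movement, the logic verifies an issuer-signed credential asserting the
# recipient passed KYC/accreditation, without publishing identity on-chain.
# HMAC stands in for a real signature scheme; all names are illustrative.

import hmac
import hashlib

ISSUER_KEY = b"trusted-kyc-provider-secret"   # held by the credential issuer

def issue_credential(wallet: str) -> bytes:
    """KYC provider signs a claim binding a wallet to 'accredited' status."""
    claim = f"accredited:{wallet}".encode()
    return hmac.new(ISSUER_KEY, claim, hashlib.sha256).digest()

def credential_valid(wallet: str, credential: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, f"accredited:{wallet}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)

def transfer(balances: dict, frozen: set, sender: str, recipient: str,
             amount: int, recipient_credential: bytes) -> None:
    """Gated transfer: credential check plus a law-enforcement freeze list."""
    if sender in frozen or recipient in frozen:
        raise PermissionError("address frozen pending compliance review")
    if not credential_valid(recipient, recipient_credential):
        raise PermissionError("recipient lacks a valid accreditation credential")
    if balances.get(sender, 0) < amount:
        raise ValueError("insufficient balance")
    balances[sender] -= amount
    balances[recipient] = balances.get(recipient, 0) + amount
```

Standards such as ERC-1404 and ERC-3643 formalize this pattern on-chain; the point of the sketch is that every compliance mechanism above reduces to a check that gates token movement.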

In essence, regulation is the make-or-break factor for RWA adoption. The current landscape shows regulators are interested and cautiously supportive, but also vigilant. The RWA projects that thrive will be those that proactively embrace compliance yet innovate to make it as seamless as possible. Jurisdictions that provide clear, accommodative rules will attract more of this business (we’ve seen significant tokenization activity gravitate to places like Switzerland, Singapore, and the UAE due to clarity there). Meanwhile, the industry is engaging with regulators – for instance, by forming trade groups or responding to consultations – to help shape sensible policies. A likely outcome is that regulated DeFi will emerge as a category: platforms like those under Plume’s umbrella could become Alternative Trading Systems (ATS) or registered digital asset securities exchanges for tokenized assets, operating under licenses but with blockchain infrastructure. This hybrid approach may satisfy regulators’ objectives while still delivering the efficiency gains of crypto rails.

Investment and Market Size Data

The market for tokenized real-world assets has grown impressively and is projected to explode in the coming years, reaching into the trillions of dollars if forecasts hold true. Here we’ll summarize some key data points on market size, growth, and investment trends:

  • Current On-Chain RWA Market Size: As of mid-2025, the total on-chain Real-World Asset market (excluding traditional stablecoins) is in the tens of billions. Different sources peg slightly different totals depending on inclusion criteria, but a May 2025 analysis put it at $22.45 billion in Total Value Locked. This figure was up ~9.3% from the previous month, showcasing rapid growth. The composition of that ~$22B (as previously discussed) includes around $6.8B in government bonds, $1.5B in commodity tokens, $0.46B in equities, $0.23B in other bonds, and a few billion in private credit and funds. For perspective, this is still small relative to the broader crypto market (which is ~$1.2T in market cap as of 2025, largely driven by BTC and ETH), but it’s the fastest-growing segment of crypto. It’s also worth noting stablecoins (~$226B) if counted would dwarf these numbers, but usually they’re kept separate.

  • Growth Trajectory: The RWA market grew roughly 32% year-over-year in 2024. Extrapolating that pace, and allowing for accelerating adoption, some estimates put the market at $50B by end of 2025. Beyond that, industry projections become very large:

    • BCG and others (2030+): The often-cited BCG/Ripple report projected $16 trillion by 2030 (and ~$19T by 2033) in tokenized assets. This includes broad tokenization of financial markets (not just DeFi-centric usage). This figure would represent about 10% of all assets tokenized, which is aggressive but not unthinkable given tokenization of cash (stablecoins) is already mainstream.
    • Citi GPS Report (2022) talked about $4–5 trillion tokenized by 2030 as a base case, with higher scenarios if institutional adoption is faster.
    • The LinkedIn analysis we saw noted projections ranging from $1.3 trillion to $30 trillion by 2030 – indicating a lot of uncertainty but consensus that trillions are on the table.
    • Even the conservative end (say $1-2T by 2030) would mean a >50x increase from today’s ~$20B level, which gives a sense of the strong growth expectations.
  • Investment into RWA Projects: Venture capital and institutional investment are flowing into RWA startups:

    • Plume’s own funding ($20M Series A, etc.) is one example of VC conviction.
    • Goldfinch raised ~$25M (led by a16z in 2021). Centrifuge raised ~$4M in 2021 and more via token sales; it’s also backed by Coinbase and others.
    • Maple raised $10M Series A in 2021, then additional in 2022.
    • Ondo raised $20M in 2022 (from Founders Fund and Pantera) and more recently did a token sale.
    • There’s also new dedicated funds: e.g., a16z’s crypto fund and others earmarked portions for RWA; Franklin Templeton in 2022 joined a $20M round for a tokenization platform; Matrixport launched a $100M fund for tokenized Treasuries.
    • Traditional finance is investing: Nasdaq Ventures invested in a tokenization startup (XYO Network), London Stock Exchange Group acquired TORA (with tokenization capabilities), etc.
    • We see mergers too: Securitize acquired Distributed Technology Markets to get a broker-dealer; INX (token exchange) raising money to expand offerings.

    Overall, tens of millions have been invested into the leading RWA protocols, and larger financial institutions are acquiring stakes or forming joint ventures in this arena. Apollo’s direct investment in Plume and Hamilton Lane partnering with Securitize to tokenize funds (with Hamilton Lane’s funds being multi-billion themselves) show that this is not just VC bets but real money engagement.

  • Notable On-Chain Assets and Performance: Some data on specific tokens can illustrate traction:

    • Ondo’s OUSG: launched early 2023, by early 2025 it had >$580M outstanding, delivering ~4-5% yield. It rarely deviates in price because it’s fully collateralized and redeemable.
    • Franklin’s BENJI: by mid-2023 reached $270M, and by 2024 ~$368M. It’s one of the first instances of a major US mutual fund being reflected on-chain.
    • MakerDAO’s RWA earnings: Maker, through its ~$1.6B RWA investments, was earning on the order of $80M+ annualized in yield by late 2023 (mostly from bonds). This turned Maker’s finances around after crypto yields dried up.
    • Maple’s Treasury pool: in its pilot, raised ~$22M for T-bill investments from <10 participants (institutions). Maple’s total lending after restructuring is smaller now (~$50-100M active loans), but it’s starting to tick up as trust returns.
    • Goldfinch: funded ~$120M in loans, of which ~$90M has been repaid with under $1M in defaults (one notable default from a borrower in Kenya was partially recovered). GFI token once peaked at a $600M market cap in late 2021 and now sits much lower (~$50M), indicating the market has re-rated the risk but retains interest.
    • Centrifuge: about 15 active pools. Some key ones (like ConsolFreight’s invoice pool, New Silver’s real estate rehab loan pool) each in the $5-20M range. Centrifuge’s token (CFG) has a market cap around $200M in 2025.
    • Overall RWA Returns: Many RWA tokens offer yields in the 4-10% range. For example, Aave’s yield on stablecoins might be ~2%, whereas putting USDC into Goldfinch’s senior pool yields ~8%. This spread draws DeFi capital gradually into RWA. During crypto market downturns, RWA yields looked especially attractive as they were stable, leading analysts to call RWAs a “safe haven” or “hedge” in Web3.
  • Geographical/Market Segments: Broken down by region, a lot of tokenized Treasuries are US-based assets offered by US or global firms (Ondo, Franklin, Backed). Europe’s contributions are in tokenized ETFs and bonds (several German and Swiss startups, and big banks like Santander and SocGen doing on-chain bond issues). Asia: Singapore’s Marketnode platform is tokenizing bonds; Japan’s SMBC tokenized some credit products. The Middle East: Dubai’s DFSA approved a tokenized fund. Latin America: a number of experiments, e.g., Brazil’s central bank is tokenizing a portion of bank deposits (as part of their CBDC project, they consider tokenizing assets). Africa: projects like Kotani Pay looked at tokenized micro-asset financing. These indicate tokenization is a global trend, but the US remains the biggest source of underlying assets (due to Treasuries and large credit funds) while Europe is leading on regulatory clarity for trading.

  • Market Sentiment: The narrative around RWAs has shifted very positively in 2024-2025. Crypto media, which used to focus mostly on pure DeFi, now regularly reports on RWA milestones (e.g., “RWA market surpasses $20B despite crypto downturn”). Ratings agencies like Moody’s are studying on-chain assets; major consulting firms (BCG, Deloitte) publish tokenization whitepapers. The sentiment is that RWAfi could drive the next bull phase of crypto by bringing in trillions of value. Even Grayscale considering a Plume product suggests investor appetite for RWA exposure packaged in crypto vehicles. There’s also recognition that RWA is partly counter-cyclical to crypto – when crypto yields are low, people seek RWAs; when crypto booms, RWA provides stable diversification. This makes many investors view RWA tokens as a way to hedge crypto volatility (e.g., Binance research found RWA tokens remained stable and even considered “safer than Bitcoin” during certain macro volatility).

To conclude this section with hard numbers: $20-22B on-chain now, heading to $50B+ in a year or two, and potentially $1T+ within this decade. Investment is pouring in, with dozens of projects collectively backed by well over $200M in venture funding. Traditional finance is actively experimenting, with over $2-3B in real assets already issued on public or permissioned chains by big institutions (including multiple $100M+ bond issues). If even 1% of the global bond market (~$120T) and 1% of global real estate (~$300T) gets tokenized by 2030, that’d be several trillion dollars – which aligns with those bullish projections. There are of course uncertainties (regulation, interest rate environments, etc. can affect adoption), but the data so far supports the idea that tokenization is accelerating. As Plume’s team noted, “the RWA sector is now leading Web3 into its next phase” – a phase where blockchain moves from speculative assets to the backbone of real financial infrastructure. The deep research and alignment of heavyweights behind RWAs underscore that this is not a fleeting trend but a structural evolution of both crypto and traditional finance.
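As a quick sanity check on those hard numbers, the compound annual growth rate (CAGR) implied by each 2030 target can be computed directly. The dollar figures below are the article's own estimates; the 5.5-year horizon (mid-2025 through end-2030) is an assumption for the sketch.

```python
# Sanity check on the projections above: what compound annual growth rate
# takes the on-chain RWA market from ~$22B in mid-2025 to the 2030 targets
# quoted earlier? All dollar figures come from this article.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by growing start -> end over years."""
    return (end / start) ** (1.0 / years) - 1.0

START = 22e9                                  # ~$22B on-chain (mid-2025)
TARGETS = {"$1T (conservative)": 1e12,
           "$4T (Citi base case)": 4e12,
           "$16T (BCG projection)": 16e12}

for label, target in TARGETS.items():
    rate = cagr(START, target, 5.5)           # mid-2025 through end-2030
    print(f"{label}: requires ~{rate:.0%} growth per year")
```

Even the conservative $1T target implies roughly doubling every year for five years running, which frames just how aggressive the headline projections are.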


Sources:

  • Plume Network Documentation and Blog
  • News and Press: CoinDesk, The Block, Fortune (via LinkedIn)
  • RWA Market Analysis: RWA.xyz, LinkedIn RWA Report
  • Odaily/ChainCatcher Analysis
  • Goldfinch and Prime info, Ondo info, Centrifuge info, Maple info, Apollo quote, Binance research mention, etc.

Verifiable On-Chain AI with zkML and Cryptographic Proofs

· 36 min read
Dora Noda
Software Engineer

Introduction: The Need for Verifiable AI on Blockchain

As AI systems grow in influence, ensuring their outputs are trustworthy becomes critical. Traditional methods rely on institutional assurances (essentially “just trust us”), which offer no cryptographic guarantees. This is especially problematic in decentralized contexts like blockchains, where a smart contract or user must trust an AI-derived result without being able to re-run a heavy model on-chain. Zero-knowledge Machine Learning (zkML) addresses this by allowing cryptographic verification of ML computations. In essence, zkML enables a prover to generate a succinct proof that “the output $Y$ came from running model $M$ on input $X$” without revealing $X$ or the internal details of $M$. These zero-knowledge proofs (ZKPs) can be verified by anyone (or any contract) efficiently, shifting AI trust from “policy to proof”.

On-chain verifiability of AI means a blockchain can incorporate advanced computations (like neural network inferences) by verifying a proof of correct execution instead of performing the compute itself. This has broad implications: smart contracts can make decisions based on AI predictions, decentralized autonomous agents can prove they followed their algorithms, and cross-chain or off-chain compute services can provide verifiable outputs rather than unverifiable oracles. Ultimately, zkML offers a path to trustless and privacy-preserving AI – for example, proving an AI model’s decisions are correct and authorized without exposing private data or proprietary model weights. This is key for applications ranging from secure healthcare analytics to blockchain gaming and DeFi oracles.

How zkML Works: Compressing ML Inference into Succinct Proofs

At a high level, zkML combines cryptographic proof systems with ML inference so that a complex model evaluation can be “compressed” into a small proof. Internally, the ML model (e.g. a neural network) is represented as a circuit or program consisting of many arithmetic operations (matrix multiplications, activation functions, etc.). Rather than revealing all intermediate values, a prover performs the full computation off-chain and then uses a zero-knowledge proof protocol to attest that every step was done correctly. The verifier, given only the proof and some public data (like the final output and an identifier for the model), can be cryptographically convinced of the correctness without re-executing the model.

To achieve this, zkML frameworks typically transform the model computation into a format amenable to ZKPs:

  • Circuit Compilation: In SNARK-based approaches, the computation graph of the model is compiled into an arithmetic circuit or set of polynomial constraints. Each layer of the neural network (convolutions, matrix multiplies, nonlinear activations) becomes a sub-circuit with constraints ensuring the outputs are correct given the inputs. Because neural nets involve non-linear operations (ReLUs, Sigmoids, etc.) not naturally suited to polynomials, techniques like lookup tables are used to handle these efficiently. For example, a ReLU (output = max(0, input)) can be enforced by a custom constraint or lookup that verifies output equals input if input≥0 else zero. The end result is a set of cryptographic constraints that the prover must satisfy, which implicitly proves the model ran correctly.
  • Execution Trace & Virtual Machines: An alternative is to treat the model inference as a program trace, as done in zkVM approaches. For instance, the JOLT zkVM targets the RISC-V instruction set; one can compile the ML model (or the code that computes it) to RISC-V and then prove each CPU instruction executed properly. JOLT introduces a “lookup singularity” technique, replacing expensive arithmetic constraints with fast table lookups for each valid CPU operation. Every operation (add, multiply, bitwise op, etc.) is checked via a lookup in a giant table of pre-computed valid outcomes, using a specialized argument (Lasso/SHOUT) to keep this efficient. This drastically reduces the prover workload: even complex 64-bit operations become a single table lookup in the proof instead of many arithmetic constraints.
  • Interactive Protocols (GKR Sum-Check): A third approach uses interactive proofs like GKR (Goldwasser–Kalai–Rothblum) to verify a layered computation. Here the model’s computation is viewed as a layered arithmetic circuit (each neural network layer is one layer of the circuit graph). The prover runs the model normally but then engages in a sum-check protocol to prove that each layer’s outputs are correct given its inputs. In Lagrange’s approach (DeepProve, detailed next), the prover and verifier perform an interactive polynomial protocol (made non-interactive via Fiat-Shamir) that checks consistency of each layer’s computations without re-doing them. This sum-check method avoids generating a monolithic static circuit; instead it verifies the consistency of computations in a step-by-step manner with minimal cryptographic operations (mostly hashing or polynomial evaluations).
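The sum-check protocol at the heart of the GKR approach can be made concrete. The toy implementation below is a deliberate simplification (real provers run in linear time and replace live randomness with Fiat-Shamir), but it shows the round structure: the prover sends a univariate restriction of the polynomial, and the verifier checks it against the running claim before issuing a random challenge.

```python
# Toy sum-check protocol for a multilinear polynomial g given by its
# evaluations on the boolean hypercube. The prover convinces the verifier
# that sum(table) really is the hypercube sum of g, with the verifier
# touching g at only one random point at the end. For clarity this sketch
# recomputes exponential sums each round; production provers are linear-time.

import random

P = 2**61 - 1  # prime field modulus chosen for this sketch

def bool_points(k):
    """All k-bit boolean vectors, most-significant bit first."""
    return [[(m >> (k - 1 - j)) & 1 for j in range(k)] for m in range(2 ** k)]

def mle(table, point):
    """Evaluate the multilinear extension of `table` (length 2^n) at `point`."""
    n = len(point)
    acc = 0
    for idx, val in enumerate(table):
        term = val
        for j in range(n):
            bit = (idx >> (n - 1 - j)) & 1
            term = term * (point[j] if bit else (1 - point[j])) % P
        acc = (acc + term) % P
    return acc

def sumcheck(table):
    n = (len(table) - 1).bit_length()
    claimed = sum(table) % P
    rs = []
    for i in range(n):
        rest = bool_points(n - i - 1)
        # Prover: the univariate restriction g_i(X) is linear, so its two
        # evaluations g_i(0) and g_i(1) determine it completely.
        g0 = sum(mle(table, rs + [0] + p) for p in rest) % P
        g1 = sum(mle(table, rs + [1] + p) for p in rest) % P
        # Verifier: consistency with the running claim, then a challenge.
        assert (g0 + g1) % P == claimed, "prover caught cheating"
        r = random.randrange(P)
        claimed = (g0 * (1 - r) + g1 * r) % P  # g_i evaluated at r
        rs.append(r)
    assert mle(table, rs) == claimed           # one oracle query to g
    return True
```

Running `sumcheck([3, 1, 4, 1, 5, 9, 2, 6])` completes all three rounds with honest prover messages; a prover that misstates any partial sum trips the consistency assertion with overwhelming probability over the verifier's random challenges.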

Regardless of approach, the outcome is a succinct proof (typically a few kilobytes to a few tens of kilobytes) that attests to the correctness of the entire inference. The proof is zero-knowledge, meaning any secret inputs (private data or model parameters) can be kept hidden – they influence the proof but are not revealed to verifiers. Only the intended public outputs or assertions are revealed. This allows scenarios like “prove that model $M$ when applied to patient data $X$ yields diagnosis $Y$, without revealing $X$ or the model’s weights.”

Enabling on-chain verification: Once a proof is generated, it can be posted to a blockchain. Smart contracts can include verification logic to check the proof, often using precompiled cryptographic primitives. For example, Ethereum has a precompile for pairing operations on the BN254 (alt_bn128) curve, which many zk-SNARK verifiers use, making on-chain verification of SNARK proofs efficient. STARKs (hash-based proofs) are larger, but can still be verified on-chain with careful optimization or possibly with some trust assumptions (StarkWare’s L2, for instance, verifies STARK proofs on Ethereum by an on-chain verifier contract, albeit with higher gas cost than SNARKs). The key is that the chain does not need to execute the ML model – it only runs a verification which is much cheaper than the original compute. In summary, zkML compresses expensive AI inference into a small proof that blockchains (or any verifier) can check in milliseconds to seconds.

Lagrange DeepProve: Architecture and Performance of a zkML Breakthrough

DeepProve by Lagrange Labs is a state-of-the-art zkML inference framework focusing on speed and scalability. Launched in 2025, DeepProve introduced a new proving system that is dramatically faster than prior solutions like Ezkl. Its design centers on the GKR interactive proof protocol with sum-check and specialized optimizations for neural network circuits. Here’s how DeepProve works and achieves its performance:

  • One-Time Preprocessing: Developers start with a trained neural network (currently supported types include multilayer perceptrons and popular CNN architectures). The model is exported to ONNX format, a standard graph representation. DeepProve’s tool then parses the ONNX model and quantizes it (converts weights to fixed-point/integer form) for efficient field arithmetic. In this phase, it also generates the proving and verification keys for the cryptographic protocol. This setup is done once per model and does not need to be repeated per inference. DeepProve emphasizes ease of integration: “Export your model to ONNX → one-time setup → generate proofs → verify anywhere”.

  • Proving (Inference + Proof Generation): After setup, a prover (which could be run by a user, a service, or Lagrange’s decentralized prover network) takes a new input $X$ and runs the model $M$ on it, obtaining output $Y$. During this execution, DeepProve records an execution trace of each layer’s computations. Instead of translating every multiplication into a static circuit upfront (as SNARK approaches do), DeepProve uses the linear-time GKR protocol to verify each layer on the fly. For each network layer, the prover commits to the layer’s inputs and outputs (e.g., via cryptographic hashes or polynomial commitments) and then engages in a sum-check argument to prove that the outputs indeed result from the inputs as per the layer’s function. The sum-check protocol iteratively convinces the verifier of the correctness of a sum of evaluations of a polynomial that encodes the layer’s computation, without revealing the actual values. Non-linear operations (like ReLU, softmax) are handled efficiently through lookup arguments in DeepProve – if an activation’s output was computed, DeepProve can prove that each output corresponds to a valid input-output pair from a precomputed table for that function. Layer by layer, proofs are generated and then aggregated into one succinct proof covering the whole model’s forward pass. The heavy lifting of cryptography is minimized – DeepProve’s prover mostly performs normal numeric computations (the actual inference) plus some light cryptographic commitments, rather than solving a giant system of constraints.

  • Verification: The verifier uses the final succinct proof along with a few public values – typically the model’s committed identifier (a cryptographic commitment to $M$’s weights), the input $X$ (if not private), and the claimed output $Y$ – to check correctness. Verification in DeepProve’s system involves verifying the sum-check protocol’s transcript and the final polynomial or hash commitments. This is more involved than verifying a classic SNARK (which might be a few pairings), but it’s vastly cheaper than re-running the model. In Lagrange’s benchmarks, verifying a DeepProve proof for a medium CNN takes on the order of 0.5 seconds in software. That is ~0.5s to confirm, for example, that a convolutional network with hundreds of thousands of parameters ran correctly – over 500× faster than naively re-computing that CNN on a GPU for verification. (In fact, DeepProve measured up to 521× faster verification for CNNs and 671× for MLPs compared to re-execution.) The proof size is small enough to transmit on-chain (tens of KB), and verification could be performed in a smart contract if needed, although 0.5s of computation might require careful gas optimization or layer-2 execution.
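The quantization performed in the one-time preprocessing step can be illustrated with a minimal fixed-point scheme. The 12-bit scale factor and symmetric rounding below are assumptions made for this sketch, not DeepProve's actual parameters.

```python
# Minimal fixed-point quantization sketch, illustrating how floating-point
# weights become integers suitable for field arithmetic inside a proof
# system. The 12-bit scale is an assumption for illustration; DeepProve's
# actual quantizer may differ.

SCALE_BITS = 12
SCALE = 1 << SCALE_BITS          # 4096

def quantize(weights):
    """Map each float w to the integer round(w * 2^12)."""
    return [round(w * SCALE) for w in weights]

def dequantize(values):
    return [v / SCALE for v in values]

def quantized_dot(qx, qw):
    """Dot product in the integer domain. The raw sum carries scale 2^24,
    so shift right by 12 bits once to return to scale 2^12."""
    return sum(a * b for a, b in zip(qx, qw)) >> SCALE_BITS
```

For example, `quantized_dot(quantize([1.0, 0.5]), quantize([0.5, -0.25]))` yields 1536, which dequantizes to 0.375 and matches the floating-point dot product exactly at this precision; in general the rescaling introduces a small, bounded rounding error that the circuit's constraints must account for.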

Architecture and Tooling: DeepProve is implemented in Rust and provides a toolkit (the zkml library) for developers. It natively supports ONNX model graphs, making it compatible with models from PyTorch or TensorFlow (after exporting). The proving process currently targets models up to a few million parameters (tests include a 4M-parameter dense network). DeepProve leverages a combination of cryptographic components: a multilinear polynomial commitment (to commit to layer outputs), the sum-check protocol for verifying computations, and lookup arguments for non-linear ops. Notably, Lagrange’s open-source repository acknowledges it builds on prior work (the sum-check and GKR implementation from Scroll’s Ceno project), indicating an intersection of zkML with zero-knowledge rollup research.

To achieve real-time scalability, Lagrange pairs DeepProve with its Prover Network – a decentralized network of specialized ZK provers. Heavy proof generation can be offloaded to this network: when an application needs an inference proved, it sends the job to Lagrange’s network, where many operators (staked on EigenLayer for security) compute proofs and return the result. This network economically incentivizes reliable proof generation (malicious or failed jobs get the operator slashed). By distributing work across provers (and potentially leveraging GPUs or ASICs), the Lagrange Prover Network hides the complexity and cost from end-users. The result is a fast, scalable, and decentralized zkML service: “verifiable AI inferences fast and affordable”.

Performance Milestones: DeepProve’s claims are backed by benchmarks against the prior state-of-the-art, Ezkl. For a CNN with ~264k parameters (CIFAR-10 scale model), DeepProve’s proving time was ~1.24 seconds versus ~196 seconds for Ezkl – about 158× faster. For a larger dense network with 4 million parameters, DeepProve proved an inference in ~2.3 seconds vs ~126.8 seconds for Ezkl (~54× faster). Verification times also dropped: DeepProve verified the 264k CNN proof in ~0.6s, whereas verifying the Ezkl proof (Halo2-based) took over 5 minutes on CPU in that test. The speedups come from DeepProve’s near-linear complexity: its prover scales roughly O(n) with the number of operations, whereas circuit-based SNARK provers often incur superlinear overhead from FFTs and polynomial commitment costs. In fact, DeepProve’s prover throughput can be within an order of magnitude of plain inference runtime – recent GKR systems can be <10× slower than raw execution for large matrix multiplications, an impressive achievement in ZK. This makes real-time or on-demand proofs more feasible, paving the way for verifiable AI in interactive applications.

Use Cases: Lagrange is already collaborating with Web3 and AI projects to apply zkML. Example use cases include: verifiable NFT traits (proving an AI-generated evolution of a game character or collectible is computed by the authorized model), provenance of AI content (proving an image or text was generated by a specific model, to combat deepfakes), DeFi risk models (proving a model’s output that assesses financial risk without revealing proprietary data), and private AI inference in healthcare or finance (where a hospital can get AI predictions with a proof, ensuring correctness without exposing patient data). By making AI outputs verifiable and privacy-preserving, DeepProve opens the door to “AI you can trust” in decentralized systems – moving from an era of “blind trust in black-box models” to one of “objective guarantees”.

SNARK-Based zkML: Ezkl and the Halo2 Approach

The traditional approach to zkML uses zk-SNARKs (Succinct Non-interactive Arguments of Knowledge) to prove neural network inference. Ezkl (by ZKonduit/Modulus Labs) is a leading example of this approach. It builds on the Halo2 proving system (a PLONK-style SNARK with polynomial commitments over the BN254 curve). Ezkl provides a tooling chain where a developer can take a PyTorch or TensorFlow model, export it to ONNX, and have Ezkl compile it into a custom arithmetic circuit automatically.

How it works: Each layer of the neural network is converted into constraints:

  • Linear layers (dense or convolution) become collections of multiplication-add constraints that enforce the dot-products between inputs, weights, and outputs.
  • Non-linear layers (like ReLU, sigmoid, etc.) are handled via lookups or piecewise constraints, because such functions are not polynomial. For instance, a ReLU can be implemented with a boolean selector $b$ and constraints enforcing $y = x \cdot b$, $b \in \{0,1\}$, and $b=1$ iff $x>0$ (one way to do it), or more efficiently by a lookup table mapping $x \mapsto \max(0,x)$ over a range of $x$ values. Halo2’s lookup arguments can handle 16-bit (or smaller) chunks of values, so larger domains (like all 32-bit values) are usually “chunked” into several smaller lookups. This chunking increases the number of constraints.
  • Big integer ops or divisions (if any) are similarly broken into small pieces. The result is a large set of R1CS/PLONK constraints tailored to the specific model architecture.
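To make the constraint encoding concrete, here is a toy Python check (field size, shapes, and function names are illustrative, not Ezkl’s actual API) that a witness satisfies dense-layer and selector-based ReLU constraints:

```python
# Toy constraint checks for a dense layer followed by ReLU, mirroring how
# a circuit would enforce them. Field modulus is illustrative.

P = 2**61 - 1  # a prime field modulus (real systems use curve-specific fields)

def dense_constraints(x, W, b, y):
    """Each output must satisfy y[i] = sum_j W[i][j]*x[j] + b[i] (mod P)."""
    ok = True
    for i in range(len(y)):
        acc = sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
        ok &= (acc % P) == (y[i] % P)
    return ok

def relu_constraints(x, b_sel, y):
    """ReLU via a boolean selector b: enforce b*(b-1)=0 and y = x*b.
    The sign-consistency of b is checked directly here; a real circuit
    would use range checks or a lookup table for that part."""
    ok = True
    for xi, bi, yi in zip(x, b_sel, y):
        ok &= bi * (bi - 1) == 0          # b is 0 or 1
        ok &= yi == xi * bi               # y = x*b
        ok &= (bi == 1) == (xi > 0)       # selector matches sign of x
    return ok

x = [3, -2, 5]
W = [[1, 0, 2], [0, 1, 1]]
b = [1, 0]
y = [14, 3]                               # 3*1 + 5*2 + 1 = 14; -2 + 5 = 3
assert dense_constraints(x, W, b, y)

pre, sel, out = [4, -7], [1, 0], [4, 0]
assert relu_constraints(pre, sel, out)
```

A prover’s job is to supply a witness (here `y` and `sel`) satisfying all such constraints; the SNARK then attests to that satisfaction without revealing the private values.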

Ezkl then uses Halo2 to generate a proof that these constraints hold, given the secret inputs (model weights, private inputs) and public outputs. Tooling and integration: One advantage of the SNARK approach is that it leverages well-known primitives. Halo2 is battle-tested in production systems (e.g. Zcash and several zkEVM rollups), and on-chain verifiers for it are readily available. Ezkl’s proofs use the BN254 curve, which Ethereum can verify via precompiles, making it straightforward to verify an Ezkl proof in a smart contract. The team has also provided user-friendly APIs; for example, data scientists can work with their models in Python and use Ezkl’s CLI to produce proofs, without deep knowledge of circuits.

Strengths: Ezkl’s approach benefits from the generality and ecosystem of SNARKs. It supports reasonably complex models and has already seen “practical integrations (from DeFi risk models to gaming AI)”, proving real-world ML tasks. Because it operates at the level of the model’s computation graph, it can apply ML-specific optimizations: e.g. pruning insignificant weights or quantizing parameters to reduce circuit size. It also means model confidentiality is natural – the weights can be treated as private witness data, so the verifier only sees that some valid model produced the output, or at best a commitment to the model. The verification of SNARK proofs is extremely fast (typically a few milliseconds or less on-chain), and proof sizes are small (a few kilobytes), which is ideal for blockchain usage.
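The quantization optimization mentioned above can be sketched in a few lines: fixing a fixed-point scale turns a floating-point dot product into pure integer arithmetic, which is what field constraints can encode. The scale factor here is an assumption; real pipelines calibrate it per tensor.

```python
# Sketch: quantizing a layer so inference becomes integer-only arithmetic.

def quantize(values, scale):
    """Map floats to fixed-point integers at the given scale."""
    return [round(v * scale) for v in values]

def int_dot(qw, qx):
    """Integer dot product, the operation a circuit constraint enforces."""
    return sum(a * b for a, b in zip(qw, qx))

w = [0.25, -1.5, 0.75]
x = [2.0, 1.0, -4.0]
float_out = sum(a * b for a, b in zip(w, x))   # plain float inference: -4.0

S = 128                                        # fixed-point scale (assumption)
qw, qx = quantize(w, S), quantize(x, S)
int_out = int_dot(qw, qx)                      # result carries scale S*S
dequant = int_out / (S * S)

assert abs(dequant - float_out) < 1e-2         # integer path matches float path
```

The prover works entirely on the integer side; only a commitment to the quantized weights (not the weights themselves) needs to be revealed to the verifier.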

Weaknesses: Performance is the Achilles’ heel. Circuit-based proving imposes large overheads, especially as models grow. Historically, SNARK circuits could require a million times more work for the prover than simply running the model. Halo2 and Ezkl optimize this, but operations like large matrix multiplications still generate enormous numbers of constraints. If a model has millions of parameters, the prover must handle correspondingly millions of constraints, performing heavy FFTs and multi-exponentiations in the process. This leads to high proving times (often minutes or hours for non-trivial models) and high memory usage. For example, proving even a relatively small CNN (a few hundred thousand parameters) can take tens of minutes with Ezkl on a single machine. The team behind DeepProve cited that Ezkl took hours for certain model proofs that DeepProve can do in minutes. Large models might not even fit in memory or may require splitting into multiple proofs (which then need recursive aggregation). While Halo2 is “moderately optimized”, any need to “chunk” lookups or handle wide-bit operations translates to extra overhead. In summary, scalability is limited – Ezkl works well for small-to-medium models (and indeed outperformed some earlier alternatives like naive STARK-based VMs in benchmarks), but struggles as model size grows beyond a point.

Despite these challenges, Ezkl and similar SNARK-based zkML libraries are important stepping stones. They proved that verified ML inference is possible on-chain and have active usage. Notably, Modulus Labs demonstrated verifying an 18-million-parameter model on-chain using SNARKs (with heavy optimization). The cost was non-trivial, but it shows the trajectory. Moreover, the Mina Protocol has its own zkML toolkit that uses SNARKs to let Mina’s smart contracts (which are themselves SNARK-based) verify ML model execution. This indicates growing multi-platform support for SNARK-based zkML.

STARK-Based Approaches: Transparent and Programmable ZK for ML

zk-STARKs (Scalable Transparent ARguments of Knowledge) offer another route to zkML. STARKs use hash-based cryptography (like FRI for polynomial commitments) and avoid any trusted setup. They often operate by simulating a CPU or VM and proving the execution trace is correct. In context of ML, one can either build a custom STARK for the neural network or use a general-purpose STARK VM to run the model code.
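The hash-based commitment at the heart of a STARK can be illustrated with a toy Merkle commitment to an execution trace. Real systems commit to polynomial evaluations via FRI, but the verification pattern – commit to a root, open a position with an authentication path – is the same.

```python
# Toy Merkle commitment to an execution trace (the transparent, hash-only
# primitive STARKs build on). Trace values are illustrative.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    layer = [H(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])       # duplicate last node on odd layers
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def merkle_path(leaves, idx):
    """Authentication path (sibling hashes) for the leaf at idx."""
    layer, path = [H(leaf) for leaf in leaves], []
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        path.append(layer[idx ^ 1])
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return path

def verify_path(root, leaf, idx, path):
    node = H(leaf)
    for sib in path:
        node = H(node + sib) if idx % 2 == 0 else H(sib + node)
        idx //= 2
    return node == root

trace = [str(v).encode() for v in [7, 49, 2401, 5764801]]  # x -> x^2 trace
root = merkle_root(trace)
proof = merkle_path(trace, 2)
assert verify_path(root, trace[2], 2, proof)       # honest opening verifies
assert not verify_path(root, b"999", 2, proof)     # tampered value fails
```

A STARK verifier performs many such hash checks (plus low-degree tests), which is why its verification is heavier than a pairing-based SNARK’s but requires no trusted setup.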

General STARK VMs (RISC Zero, Cairo): A straightforward approach is to write the inference code and run it in a STARK VM. For example, RISC Zero provides a RISC-V environment where any code (e.g., a C++ or Rust implementation of a neural network) can be executed and proven via a STARK. Similarly, StarkWare’s Cairo language can express arbitrary computations (like an LSTM or CNN inference), which are then proved by the StarkNet STARK prover. The advantage is flexibility – you don’t need to design custom circuits for each model. However, early benchmarks showed that naive STARK VMs were slower than optimized SNARK circuits for ML: in one 2024 test, a Halo2-based proof (Ezkl) was about 3× faster than a STARK-based approach on Cairo, and as much as 66× faster than a RISC-V STARK VM. This gap stems from the overhead of simulating every low-level instruction in a STARK and from the larger constants in STARK proofs (hashing is fast, but a great deal of it is needed; STARK proof sizes are bigger, etc.). However, STARK VMs are improving, and they have the benefit of a transparent setup (no trusted setup) and post-quantum security. As STARK-friendly hardware and protocols advance, proving speeds will improve.

DeepProve’s approach vs STARKs: Interestingly, DeepProve’s use of GKR and sum-check yields a proof more akin to a STARK in spirit – it is an interactive, hash-based proof (made non-interactive via Fiat-Shamir) with no need for a structured reference string. The trade-off is that its proofs are larger and verification is heavier than a SNARK’s. Yet DeepProve shows that careful protocol design (specialized to ML’s layered structure) can vastly outperform both generic STARK VMs and SNARK circuits in proving time. One can view DeepProve as a bespoke STARK-style zkML prover (the team uses the term zkSNARK for succinctness, but its ~0.5 s verification is far heavier than a traditional SNARK’s small constant-size verify). Traditional STARK proofs (like StarkNet’s) often involve tens of thousands of field operations to verify, whereas a SNARK verifies in perhaps a few dozen. Thus one trade-off is evident: SNARKs yield smaller proofs and faster verifiers, while STARKs (or GKR) offer easier scaling and no trusted setup at the cost of proof size and verification speed.
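The sum-check protocol that GKR-style provers build on can be run end-to-end on a tiny example. This is a minimal sketch – the field size and the three-variable multilinear polynomial are illustrative, not DeepProve’s actual parameters – showing the round structure: the prover sends a univariate restriction each round, the verifier checks consistency and sends a random challenge, and the final check is a single evaluation of f.

```python
# Toy sum-check protocol: prove the sum of a multilinear polynomial f
# over the boolean hypercube {0,1}^3, with one evaluation at the end.
import random

P = 101  # small prime field for illustration

def f(x0, x1, x2):
    # an arbitrary multilinear polynomial over the field
    return (3*x0 + 5*x1*x2 + 2*x0*x2 + 7) % P

def partial_sum(fixed, var_value):
    """Sum f over the remaining boolean variables, with the current
    variable set to var_value and earlier variables fixed to challenges."""
    n_left = 3 - len(fixed) - 1
    total = 0
    for mask in range(2 ** n_left):
        rest = [(mask >> i) & 1 for i in range(n_left)]
        total += f(*(fixed + [var_value] + rest))
    return total % P

# Prover's claimed sum over the whole hypercube
claim = sum(f(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)) % P

fixed = []
for _ in range(3):
    # Prover sends g(t) via its values at t=0,1 (degree 1: f is multilinear)
    g0, g1 = partial_sum(fixed, 0), partial_sum(fixed, 1)
    assert (g0 + g1) % P == claim          # verifier's round check
    r = random.randrange(P)                # verifier's random challenge
    claim = (g0 + (g1 - g0) * r) % P       # claim becomes g(r)
    fixed.append(r)

# Final check: one evaluation of f at the random point
assert f(*fixed) % P == claim
```

The verifier’s work is tiny – a constant-size check per round plus one oracle evaluation – which is the source of GKR’s near-linear prover and lightweight (if not constant-size) verification.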

Emerging improvements: The JOLT zkVM (discussed earlier under JOLTx) is actually outputting SNARKs (using PLONKish commitments) but it embodies ideas that could be applied in STARK context too (Lasso lookups could theoretically be used with FRI commitments). StarkWare and others are researching ways to speed up proving of common operations (like using custom gates or hints in Cairo for big int ops, etc.). There’s also Circomlib-ML by Privacy&Scaling Explorations (PSE), which provides Circom templates for CNN layers, etc. – that’s SNARK-oriented, but conceptually similar templates could be made for STARK languages.

In practice, non-Ethereum ecosystems leveraging STARKs include StarkNet (which could allow on-chain verification of ML if someone writes a verifier, though cost is high) and Risc0’s Bonsai service (which is an off-chain proving service that emits STARK proofs which can be verified on various chains). As of 2025, most zkML demos on blockchain have favored SNARKs (due to verifier efficiency), but STARK approaches remain attractive for their transparency and potential in high-security or quantum-resistant settings. For example, a decentralized compute network might use STARKs to let anyone verify work without a trusted setup, useful for longevity. Also, some specialized ML tasks might exploit STARK-friendly structures: e.g. computations heavily using XOR/bit operations could be faster in STARKs (since those are cheap in boolean algebra and hashing) than in SNARK field arithmetic.

Summary of SNARK vs STARK for ML:

  • Performance: SNARKs (like Halo2) have huge proving overhead per gate but benefit from powerful optimizations and small constants for verify; STARKs (generic) have larger constant overhead but scale more linearly and avoid expensive crypto like pairings. DeepProve shows that customizing the approach (sum-check) yields near-linear proving time (fast) but with a STARK-like proof. JOLT shows that even a general VM can be made faster with heavy use of lookups. Empirically, for models up to millions of operations: a well-optimized SNARK (Ezkl) can handle it but might take tens of minutes, whereas DeepProve (GKR) can do it in seconds. STARK VMs in 2024 were likely in between or worse than SNARKs unless specialized (Risc0 was slower in tests, Cairo was slower without custom hints).
  • Verification: SNARK proofs verify quickest (milliseconds, and minimal data on-chain ~ a few hundred bytes to a few KB). STARK proofs are larger (dozens of KB) and take longer (tens of ms to seconds) to verify due to many hashing steps. In blockchain terms, a SNARK verify might cost e.g. ~200k gas, whereas a STARK verify could cost millions of gas – often too high for L1, acceptable on L2 or with succinct verification schemes.
  • Setup and Security: SNARKs like Groth16 require a trusted setup per circuit (unfriendly for arbitrary models), but universal SNARKs (PLONK, Halo2) have a one-time setup that can be reused for any circuit up to certain size. STARKs need no setup and use only hash assumptions (plus classical polynomial complexity assumptions), and are post-quantum secure. This makes STARKs appealing for longevity – proofs remain secure even if quantum computers emerge, whereas current SNARKs (BLS12-381 based) would be broken by quantum attacks.

We will consolidate these differences in a comparison table shortly.

FHE for ML (FHE-o-ML): Private Computation vs. Verifiable Computation

Fully Homomorphic Encryption (FHE) is a cryptographic technique that allows computations to be performed directly on encrypted data. In the context of ML, FHE can enable a form of privacy-preserving inference: for example, a client can send encrypted input to a model host, the host runs the neural network on the ciphertext without decrypting it, and sends back an encrypted result which the client can decrypt. This ensures data confidentiality – the model owner learns nothing about the input (and potentially the client learns only the output, not the model’s internals if they only get output). However, FHE by itself does not produce a proof of correctness in the same way ZKPs do. The client must trust that the model owner actually performed the computation honestly (the ciphertext could have been manipulated). Usually, if the client has the model or expects a certain distribution of outputs, blatant cheating can be detected, but subtle errors or use of a wrong model version would not be evident just from the encrypted output.
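The encrypted-inference flow can be made concrete with a toy sketch using the Paillier cryptosystem. Paillier is only additively homomorphic, not fully homomorphic, and the parameters below are insecure toy values – but ciphertext addition plus plaintext scalar multiplication is enough to score a linear model on an encrypted input, which mirrors the client/server flow described above.

```python
# Toy additively homomorphic inference: server computes w.x + b on
# ciphertexts without ever seeing x. Tiny primes, purely illustrative.
import random, math

p, q = 293, 433                      # toy primes (insecure)
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Client encrypts its private input vector
x = [3, 1, 4]
cx = [enc(v) for v in x]

# Server holds the model in the clear and computes on ciphertexts:
# multiplying ciphertexts adds plaintexts; exponentiation scales them.
w, b = [2, 5, 1], 7
acc = enc(b)
for ci, wi in zip(cx, w):
    acc = (acc * pow(ci, wi, n2)) % n2   # homomorphically adds wi * xi

assert dec(acc) == 2*3 + 5*1 + 1*4 + 7   # client decrypts: 22
```

Note what is missing: nothing here proves the server used the right `w` – the client simply trusts the computation, which is exactly the gap zkML proofs fill.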

Trade-offs in performance: FHE is notoriously heavy in computation. Running deep learning inference under FHE incurs orders-of-magnitude slowdown. Early experiments (e.g., CryptoNets in 2016) took tens of seconds to evaluate a tiny CNN on encrypted data. By 2024, improvements like CKKS (for approximate arithmetic) and better libraries (Microsoft SEAL, Zama’s Concrete) have reduced this overhead, but it remains large. For example, a user reported that using Zama’s Concrete-ML to run a CIFAR-10 classifier took 25–30 minutes per inference on their hardware. After optimizations, Zama’s team achieved ~40 seconds for that inference on a 192-core server. Even 40s is extremely slow compared to a plaintext inference (which might be 0.01s), showing a ~$10^3$–$10^4\times$ overhead. Larger models or higher precision increase the cost further. Additionally, FHE operations consume a lot of memory and require occasional bootstrapping (a noise-reduction step) which is computationally expensive. In summary, scalability is a major issue – state-of-the-art FHE might handle a small CNN or simple logistic regression, but scaling to large CNNs or Transformers is beyond current practical limits.

Privacy advantages: FHE’s big appeal is data privacy. The input can remain completely encrypted throughout the process. This means an untrusted server can compute on a client’s private data without learning anything about it. Conversely, if the model is sensitive (proprietary), one could envisage encrypting the model parameters and having the client perform FHE inference on their side – but this is less common because if the client has to do the heavy FHE compute, it negates the idea of offloading to a powerful server. Typically, the model is public or held by server in the clear, and the data is encrypted by the client’s key. Model privacy in that scenario is not provided by default (the server knows the model; the client learns outputs but not weights). There are more exotic setups (like secure two-party computation or multi-key FHE) where both model and data can be kept private from each other, but those incur even more complexity. In contrast, zkML via ZKPs can ensure model privacy and data privacy at once – the prover can have both the model and data as secret witness, only revealing what’s needed to the verifier.

No on-chain verification needed (and none possible): With FHE, the result comes out encrypted to the client. The client then decrypts it to obtain the actual prediction. If we want to use that result on-chain, the client (or whoever holds the decryption key) would have to publish the plaintext result and convince others it’s correct. But at that point, trust is back in the loop – unless combined with a ZKP. In principle, one could combine FHE and ZKP: e.g., use FHE to keep data private during compute, and then generate a ZK-proof that the plaintext result corresponds to a correct computation. However, combining them means you pay the performance penalty of FHE and ZKP – extremely impractical with today’s tech. So, in practice FHE-of-ML and zkML serve different use cases:

  • FHE-of-ML: Ideal when the goal is confidentiality between two parties (client and server). For instance, a cloud service can host an ML model and users can query it with their sensitive data without revealing the data to the cloud (and if the model is sensitive, perhaps deploy it via FHE-friendly encodings). This is great for privacy-preserving ML services (medical predictions, etc.). The user still has to trust the service to faithfully run the model (since no proof), but at least any data leakage is prevented. Some projects like Zama are even exploring an “FHE-enabled EVM (fhEVM)” where smart contracts could operate on encrypted inputs, but verifying those computations on-chain would require the contract to somehow enforce correct computation – an open challenge likely requiring ZK proofs or specialized secure hardware.
  • zkML (ZKPs): Ideal when the goal is verifiability and public auditability. If you want anyone (or any contract) to be sure that “Model $M$ was evaluated correctly on $X$ and produced $Y$”, ZKPs are the solution. They also provide privacy as a bonus (you can hide $X$ or $Y$ or $M$ if needed by treating them as private inputs to the proof), but their primary feature is the proof of correct execution.

A complementary relationship: It’s worth noting that ZKPs protect the verifier (they learn nothing about secrets, only that the computation was correctly done), whereas FHE protects the prover’s data from the computing party. In some scenarios, these could be combined – for example, a network of untrusted nodes could use FHE to compute on users’ private data and then provide ZK proofs to the users (or blockchain) that the computations were done according to the protocol. This would cover both privacy and correctness, but the performance cost is enormous with today’s algorithms. More feasible in the near term are hybrids like Trusted Execution Environments (TEE) plus ZKP or Functional Encryption plus ZKP – these are beyond our scope, but they aim to provide something similar (TEEs keep data/model secret during compute, then a ZKP can attest the TEE did the right thing).

In summary, FHE-of-ML prioritizes confidentiality of inputs/outputs, while zkML prioritizes verifiable correctness (with possible privacy). Table 1 below contrasts the key properties:

| Approach | Prover Performance (Inference & Proof) | Proof Size & Verification | Privacy Features | Trusted Setup? | Post-Quantum? |
|---|---|---|---|---|---|
| zk-SNARK (Halo2, Groth16, PLONK, etc.) | Heavy prover overhead (up to 10^6× normal runtime without optimizations; in practice 10^3–10^5×). Optimized for a specific model/circuit; proving time in minutes for medium models, hours for large ones. Recent zkML systems (DeepProve with GKR) vastly improve this (near-linear overhead, e.g. seconds instead of minutes for million-parameter models). | Very small proofs (often < 100 KB, sometimes ~a few KB). Verification is fast: a few pairings or polynomial evaluations (typically < 50 ms on-chain). DeepProve’s GKR-based proofs are larger (tens–hundreds of KB) and verify in ~0.5 s (still much faster than re-running the model). | Data confidentiality: yes – inputs can be kept private in the proof. Model privacy: yes – the prover can commit to model weights without revealing them. Output hiding: optional – the proof can cover a statement without revealing the output (e.g. “output has property P”), though an output needed on-chain typically becomes public. Overall, SNARKs offer full zero-knowledge flexibility (hide whichever parts you want). | Depends on the scheme. Groth16 requires a trusted setup per circuit; PLONK/Halo2 (as used by Ezkl) use a universal one-time setup. DeepProve’s sum-check GKR is transparent (no setup) – a bonus of that design. | Classical pairing-based SNARKs (BN254/BLS12-381 curves) are not PQ-safe (vulnerable to quantum attacks on elliptic-curve discrete log). Some newer SNARKs use PQ-safe commitments, but Halo2/PLONK as used in Ezkl are not. GKR (DeepProve) uses hash commitments (e.g. Poseidon/Merkle), conjectured PQ-safe (relying on hash preimage resistance). |
| zk-STARK (FRI, hash-based proof) | Prover overhead is high but scales more linearly. Typically 10^2–10^4× slower than native execution for large tasks, with room to parallelize. General STARK VMs (RISC Zero, Cairo) were slower than SNARKs for ML in 2024 (e.g. 3×–66× slower than Halo2 in some cases). Specialized STARKs (or GKR) can approach linear overhead and outperform SNARKs on large circuits. | Proofs are larger: often tens of KB (growing with circuit size / log(n)). The verifier performs many hash and FFT checks – verification time ~O(n^ε) for small ε (e.g. ~50–500 ms depending on proof size). On-chain this is costlier (StarkWare’s L1 verifier can take millions of gas per proof). Some STARKs support recursive proofs to compress size, at the cost of prover time. | Data & model privacy: a STARK can be made zero-knowledge by randomizing trace data (adding blinding to polynomial evaluations), hiding private inputs and models much like a SNARK. Many STARK implementations focus on integrity, but zk-STARK variants do provide privacy. Output hiding: likewise possible in theory (the prover need not declare the output as public), but rarely used, since the output is usually what we want to reveal and verify. | No trusted setup. Transparency is a hallmark of STARKs – they require only a common random string (derivable via Fiat-Shamir). This makes them attractive for open-ended use (any model, any time, no per-model ceremony). | Yes. STARKs rely on hashes and information-theoretic assumptions (the random-oracle model and the hardness of certain codeword decoding in FRI), believed secure against quantum adversaries. STARK proofs are thus PQ-resistant, an advantage for future-proofing verifiable AI. |
| FHE for ML (Fully Homomorphic Encryption applied to inference) | The “prover” is the party computing on encrypted data. Computation time is extremely high: 10^3–10^5× slower than plaintext inference is common. High-end hardware (many-core servers, FPGAs, etc.) can mitigate this, and optimizations (low-precision inference, leveled FHE parameters) reduce overhead, but a fundamental performance hit remains. FHE is currently practical for small models or simple linear models; deep networks remain challenging beyond toy sizes. | No proof is generated. The result is an encrypted output. Correctness verification is not provided by FHE alone – one trusts the computing party not to cheat. (Combined with secure hardware one might get an attestation; otherwise a malicious server could return an incorrect encrypted result that the client would decrypt to a wrong output without knowing the difference.) | Data confidentiality: yes – the input is encrypted, so the computing party learns nothing about it. Model privacy: if the model owner computes on the encrypted input, the model is in plaintext on their side (not protected); if the roles are reversed (the client holds the model encrypted and the server computes), the model could be kept encrypted, but this is less common. Techniques like secure two-party ML combine FHE/MPC to protect both, but go beyond plain FHE. Output hiding: by default the output is encrypted (decryptable only by the holder of the secret key, usually the input owner), so it is hidden from the computing server; the client can decrypt and reveal it if a public output is desired. | No setup needed. Each user generates their own key pair for encryption. Trust relies on keys remaining secret. | The security of FHE schemes (e.g. BFV, CKKS, TFHE) is based on lattice problems (Learning With Errors), believed resistant to quantum attacks (no efficient quantum algorithm is known), so FHE is generally considered post-quantum secure. |

Table 1: Comparison of zk-SNARK, zk-STARK, and FHE approaches for machine learning inference (performance and privacy trade-offs).

Use Cases and Implications for Web3 Applications

The convergence of AI and blockchain via zkML unlocks powerful new application patterns in Web3:

  • Decentralized Autonomous Agents & On-Chain Decision-Making: Smart contracts or DAOs can incorporate AI-driven decisions with guarantees of correctness. For example, imagine a DAO that uses a neural network to analyze market conditions before executing trades. With zkML, the DAO’s smart contract can require a zkSNARK proof that the authorized ML model (with a known hash commitment) was run on the latest data and produced the recommended action, before the action is accepted. This prevents malicious actors from injecting a fake prediction – the chain verifies the AI’s computation. Over time, one could even have fully on-chain autonomous agents (contracts that query off-chain AI or contain simplified models) making decisions in DeFi or games, with all their moves proven correct and policy-compliant via zk proofs. This raises the trust in autonomous agents, since their “thinking” is transparent and verifiable rather than a black-box.

  • Verifiable Compute Markets: Projects like Lagrange are effectively creating verifiable computation marketplaces – developers can outsource heavy ML inference to a network of provers and get back a proof with the result. This is analogous to decentralized cloud computing, but with built-in trust: you don’t need to trust the server, only the proof. It’s a paradigm shift for oracles and off-chain computation. Protocols like Ethereum’s upcoming DSC (decentralized sequencing layer) or oracle networks could use this to provide data feeds or analytic feeds with cryptographic guarantees. For instance, an oracle could supply “the result of model X on input Y” and anyone can verify the attached proof on-chain, rather than trusting the oracle’s word. This could enable verifiable AI-as-a-service on blockchain: any contract can request a computation (like “score these credit risks with my private model”) and accept the answer only with a valid proof. Projects such as Gensyn are exploring decentralized training and inference marketplaces using these verification techniques.

  • NFTs and Gaming – Provenance and Evolution: In blockchain games or NFT collectibles, zkML can prove traits or game moves were generated by legitimate AI models. For example, a game might allow an AI to evolve an NFT pet’s attributes. Without ZK, a clever user might modify the AI or the outcome to get a superior pet. With zkML, the game can require a proof that “pet’s new stats were computed by the official evolution model on the pet’s old stats”, preventing cheating. Similarly for generative art NFTs: an artist could release a generative model as a commitment; later, when minting NFTs, prove each image was produced by that model given some seed, guaranteeing authenticity (and even doing so without revealing the exact model to the public, preserving the artist’s IP). This provenance verification ensures authenticity in a manner akin to verifiable randomness – except here it’s verifiable creativity.

  • Privacy-Preserving AI in Sensitive Domains: zkML allows confirmation of outcomes without exposing inputs. In healthcare, a patient’s data could be run through an AI diagnostic model by a cloud provider; the hospital receives a diagnosis and a proof that the model (which could be privately held by a pharmaceutical company) was run correctly on the patient data. The patient data remains private (only an encrypted or committed form was used in the proof), and the model weights remain proprietary – yet the result is trusted. Regulators or insurance could also verify that only approved models were used. In finance, a company could prove to an auditor or regulator that its risk model was applied to its internal data and produced certain metrics without revealing the underlying sensitive financial data. This enables compliance and oversight with cryptographic assurances rather than manual trust.

  • Cross-Chain and Off-Chain Interoperability: Because zero-knowledge proofs are fundamentally portable, zkML can facilitate cross-chain AI results. One chain might have an AI-intensive application running off-chain; it can post a proof of the result to a different blockchain, which will trustlessly accept it. For instance, consider a multi-chain DAO using an AI to aggregate sentiment across social media (off-chain data). The AI analysis (complex NLP on large data) is done off-chain by a service that then posts a proof to a small blockchain (or multiple chains) that “analysis was done correctly and output sentiment score = 0.85”. All chains can verify and use that result in their governance logic, without each needing to rerun the analysis. This kind of interoperable verifiable compute is what Lagrange’s network aims to support, by serving multiple rollups or L1s simultaneously. It removes the need for trusted bridges or oracle assumptions when moving results between chains.

  • AI Alignment and Governance: On a more forward-looking note, zkML has been highlighted as a tool for AI governance and safety. Lagrange’s vision statements, for example, argue that as AI systems become more powerful (even superintelligent), cryptographic verification will be essential to ensure they follow agreed rules. By requiring AI models to produce proofs of their reasoning or constraints, humans retain a degree of control – “you cannot trust what you cannot verify”. While this is speculative and involves social as much as technical aspects, the technology could enforce that an AI agent running autonomously still proves it is using an approved model and hasn’t been tampered with. Decentralized AI networks might use on-chain proofs to verify contributions (e.g., a network of nodes collaboratively training a model can prove each update was computed faithfully). Thus zkML could play a role in ensuring AI systems remain accountable to human-defined protocols even in decentralized or uncontrolled environments.
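Several of the patterns above reduce to the same on-chain gate: accept an AI result only if it comes with a commitment identifying an approved model and a valid proof over that commitment. Here is a minimal Python sketch of that gate – `verify_proof` is a hypothetical stand-in for a real SNARK verifier, and the model hash is illustrative:

```python
# Sketch of the "approved model + proof" acceptance pattern.
import hashlib, json

# Commitment to the authorized model (hash of its weights, published on-chain)
APPROVED_MODEL_HASH = hashlib.sha256(b"model-v1-weights").hexdigest()

def verify_proof(proof, public_inputs):
    """Placeholder verifier: a real one runs pairing/FRI checks against
    the public inputs. Here a proof is just a tagged string (mock)."""
    return proof == "valid-proof-for:" + json.dumps(public_inputs, sort_keys=True)

def accept_result(model_hash, output, proof):
    if model_hash != APPROVED_MODEL_HASH:
        return False                       # wrong or tampered model
    public = {"model": model_hash, "output": output}
    return verify_proof(proof, public)     # proof binds output to model

public = {"model": APPROVED_MODEL_HASH, "output": 0.85}
good_proof = "valid-proof-for:" + json.dumps(public, sort_keys=True)
assert accept_result(APPROVED_MODEL_HASH, 0.85, good_proof)
assert not accept_result("deadbeef", 0.85, good_proof)   # unapproved model
```

The key design point is that the proof’s public inputs include the model commitment, so a valid proof for a different model cannot be replayed against the approved one.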

In conclusion, zkML and verifiable on-chain AI represent a convergence of advanced cryptography and machine learning that stands to enhance trust, transparency, and privacy in AI applications. By comparing the major approaches – zk-SNARKs, zk-STARKs, and FHE – we see a spectrum of trade-offs between performance and privacy, each suitable for different scenarios. SNARK-based frameworks like Ezkl and innovations like Lagrange’s DeepProve have made it feasible to prove substantial neural network inferences with practical effort, opening the door to real-world deployments of verifiable AI. STARK-based and VM-based approaches promise greater flexibility and post-quantum security, which will become important as the field matures. FHE, while not a solution for verifiability, addresses the complementary need of confidential ML computation, and in combination with ZKPs or in specific private contexts it can empower users to leverage AI without sacrificing data privacy.

The implications for Web3 are significant: we can foresee smart contracts reacting to AI predictions, knowing they are correct; markets for compute where results are trustlessly sold; digital identities (like Worldcoin’s proof-of-personhood via iris AI) protected by zkML to confirm someone is human without leaking their biometric image; and generally a new class of “provable intelligence” that enriches blockchain applications. Many challenges remain – performance for very large models, developer ergonomics, and the need for specialized hardware – but the trajectory is clear. As one report noted, “today’s ZKPs can support small models, but moderate to large models break the paradigm”; however, rapid advances (50×–150× speedups with DeepProve over prior art) are pushing that boundary outward. With ongoing research (e.g., on hardware acceleration and distributed proving), we can expect progressively larger and more complex AI models to become provable. zkML might soon evolve from niche demos to an essential component of trusted AI infrastructure, ensuring that as AI becomes ubiquitous, it does so in a way that is auditable, decentralized, and aligned with user privacy and security.

Ethereum's Anonymity Myth: How Researchers Unmasked 15% of Validators

· 6 min read
Dora Noda
Software Engineer

One of the core promises of blockchain technology like Ethereum is a degree of anonymity. Participants, known as validators, are supposed to operate behind a veil of cryptographic pseudonyms, protecting their real-world identity and, by extension, their security.

However, a recent research paper titled "Deanonymizing Ethereum Validators: The P2P Network Has a Privacy Issue" from researchers at ETH Zurich and other institutions reveals a critical flaw in this assumption. They demonstrate a simple, low-cost method to link a validator's public identifier directly to the IP address of the machine it's running on.

In short, Ethereum validators are not nearly as anonymous as many believe. The findings were significant enough to earn the researchers a bug bounty from the Ethereum Foundation, acknowledging the severity of the privacy leak.

How the Vulnerability Works: A Flaw in the Gossip

To understand the vulnerability, we first need a basic picture of how Ethereum validators communicate. The network consists of over a million validators who constantly "vote" on the state of the chain. These votes are called attestations, and they are broadcast across a peer-to-peer (P2P) network to all other nodes.

With so many validators, having everyone broadcast every vote to everyone else would instantly overwhelm the network. To solve this, Ethereum’s designers implemented a clever scaling solution: the network is divided into 64 distinct communication channels, known as subnets.

  • By default, each node (the computer running the validator software) subscribes to only two of these 64 subnets. Its primary job is to diligently relay all messages it sees on those two channels.
  • When a validator needs to cast a vote, its attestation is randomly assigned to one of the 64 subnets for broadcast.

This is where the vulnerability lies. Imagine a node whose job is to manage traffic for channels 12 and 13. All day, it faithfully forwards messages from just those two channels. But then, it suddenly sends you a message that belongs to channel 45.

This is a powerful clue. Why would a node handle a message from a channel it's not responsible for? The most logical conclusion is that the node itself generated that message. This implies that the validator who created the attestation for channel 45 is running on that very machine.

The researchers exploited this exact principle. By setting up their own listening nodes, they monitored the subnets from which their peers sent attestations. When a peer sent a message from a subnet it wasn't officially subscribed to, they could infer with high confidence that the peer hosted the originating validator.
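The inference rule behind this technique can be sketched in a few lines (an illustrative reconstruction, not the researchers' actual tooling; all peer IDs, subnet numbers, and validator indices below are hypothetical):

```python
# Sketch of the deanonymization heuristic: if a peer forwards an
# attestation from a subnet it is not subscribed to, it most likely
# originated that attestation (i.e., it hosts the validator).
# All peer data here is hypothetical, for illustration only.

def flag_hosted_validators(peer_subscriptions, observed_messages):
    """peer_subscriptions: dict peer_id -> set of subscribed subnet ids.
    observed_messages: list of (peer_id, subnet_id, validator_index).
    Returns dict mapping peer_id -> set of validator indices inferred
    to be running on that peer's machine."""
    hosted = {}
    for peer_id, subnet_id, validator_index in observed_messages:
        if subnet_id not in peer_subscriptions.get(peer_id, set()):
            # Message from an unsubscribed subnet: peer likely created it.
            hosted.setdefault(peer_id, set()).add(validator_index)
    return hosted

# Example: a peer subscribed to subnets 12 and 13 sends a subnet-45 message.
subs = {"peer_A": {12, 13}, "peer_B": {7, 8}}
msgs = [
    ("peer_A", 12, 101),  # relayed traffic on a subscribed subnet: no signal
    ("peer_A", 45, 202),  # unsubscribed subnet: strong hosting signal
    ("peer_B", 7, 303),   # also just relayed traffic
]
result = flag_hosted_validators(subs, msgs)
print(result)  # {'peer_A': {202}}
```

In practice the researchers combined this signal with repeated observations over time to reach high confidence, but the core logic is exactly this subscription check.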

The method proved shockingly effective. Using just four nodes over three days, the team successfully located the IP addresses of over 161,000 validators, representing more than 15% of the entire Ethereum network.

Why This Matters: The Risks of Deanonymization

Exposing a validator's IP address is not a trivial matter. It opens the door for targeted attacks that threaten individual operators and the health of the Ethereum network as a whole.

1. Targeted Attacks and Reward Theft

Ethereum announces which validator is scheduled to propose the next block a few minutes in advance. An attacker who knows this validator's IP address can launch a distributed denial-of-service (DDoS) attack, flooding it with traffic and knocking it offline. If the validator misses its four-second window to propose the block, the opportunity passes to the next validator in line. If the attacker is that next validator, they can then claim the block rewards and valuable transaction fees (MEV) that should have gone to the victim.

2. Threats to Network Liveness and Safety

A well-resourced attacker could perform these "sniping" attacks repeatedly, causing the entire blockchain to slow down or halt (a liveness attack). In a more severe scenario, an attacker could use this information to launch sophisticated network-partitioning attacks, potentially causing different parts of the network to disagree on the chain's history, thus compromising its integrity (a safety attack).

3. Revealing a Centralized Reality

The research also shed light on some uncomfortable truths about the network's decentralization:

  • Extreme Concentration: The team found peers hosting a staggering number of validators, including one IP address running over 19,000. The failure of a single machine could have an outsized impact on the network.
  • Dependence on Cloud Services: Roughly 90% of located validators run on cloud providers like AWS and Hetzner, not on the computers of solo home stakers. This represents a significant point of centralization.
  • Hidden Dependencies: Many large staking pools claim their operators are independent. However, the research found instances where validators from different, competing pools were running on the same physical machine, creating hidden systemic risks.

Mitigations: How Can Validators Protect Themselves?

Fortunately, there are ways to defend against this deanonymization technique. The researchers proposed several mitigations:

  • Create More Noise: A validator can choose to subscribe to more than two subnets—or even all 64. This makes it much harder for an observer to distinguish between relayed messages and self-generated ones.
  • Use Multiple Nodes: An operator can separate validator duties across different machines with different IPs. For example, one node could handle attestations while a separate, private node is used only for proposing high-value blocks.
  • Private Peering: Validators can establish trusted, private connections with other nodes to relay their messages, obscuring their true origin within a small, trusted group.
  • Anonymous Broadcasting Protocols: More advanced solutions like Dandelion, which obfuscates a message's origin by passing it along a random "stem" before broadcasting it widely, could be implemented.
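The "more noise" mitigation has a simple intuition: a validator's own attestation carries the telltale fingerprint only when it lands on a subnet its node is not subscribed to, which happens for roughly (64 − k)/64 of attestations when the node subscribes to k of the 64 subnets. A quick back-of-the-envelope check (a simplified model, ignoring the researchers' repeated-observation refinements):

```python
# Expected fraction of a validator's own attestations that carry the
# "unsubscribed subnet" fingerprint, as a function of how many of the
# 64 attestation subnets its node subscribes to. Illustrative model only.
TOTAL_SUBNETS = 64

def leak_fraction(subscribed: int) -> float:
    """Attestations are assigned uniformly across subnets, so only those
    landing outside the node's subscriptions are distinguishable from
    relayed traffic."""
    return (TOTAL_SUBNETS - subscribed) / TOTAL_SUBNETS

print(leak_fraction(2))   # default config: ~0.97 of attestations identifiable
print(leak_fraction(32))  # half the subnets: 0.5
print(leak_fraction(64))  # all subnets: 0.0 -- the signal disappears entirely
```

This is why subscribing to all 64 subnets fully defeats the heuristic, at the cost of substantially more bandwidth.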

Conclusion

This research powerfully illustrates the inherent trade-off between performance and privacy in distributed systems. In its effort to scale, Ethereum's P2P network adopted a design that compromised the anonymity of its most critical participants.

By bringing this vulnerability to light, the researchers have given the Ethereum community the knowledge and tools needed to address it. Their work is a crucial step toward building a more robust, secure, and truly decentralized network for the future.

Expanding Our Horizons: BlockEden.xyz Adds Base, Berachain, and Blast to API Marketplace

· 4 min read

We're thrilled to announce a significant expansion to BlockEden.xyz's API Marketplace with the addition of three cutting-edge blockchain networks: Base, Berachain, and Blast. These new offerings reflect our commitment to providing developers with comprehensive access to the most innovative blockchain infrastructures, enabling seamless development across multiple ecosystems.

API Marketplace Expansion

Base: Coinbase's Ethereum L2 Solution

Base is an Ethereum Layer 2 (L2) solution developed by Coinbase, designed to bring millions of users into the onchain ecosystem. As a secure, low-cost, developer-friendly Ethereum L2, Base combines the robust security of Ethereum with the scalability benefits of optimistic rollups.

Our new Base API endpoint lets developers:

  • Access Base's infrastructure without managing their own nodes
  • Leverage high-performance RPC connections with 99.9% uptime
  • Build applications that benefit from Ethereum's security with lower fees
  • Seamlessly interact with Base's expanding ecosystem of applications

Base is particularly appealing for developers looking to create consumer-facing applications that require Ethereum's security but at a fraction of the cost.

Berachain: Performance Meets EVM Compatibility

Berachain brings a unique approach to blockchain infrastructure, combining high performance with complete Ethereum Virtual Machine (EVM) compatibility. As an emerging network gaining significant attention from developers, Berachain offers:

  • EVM compatibility with enhanced throughput
  • Advanced smart contract capabilities
  • A growing ecosystem of innovative DeFi applications
  • Unique consensus mechanisms optimized for transaction speed

Our Berachain API provides developers with immediate access to this promising network, allowing teams to build and test applications without the complexity of managing infrastructure.

Blast: The First Native Yield L2

Blast stands out as the first Ethereum L2 with native yield for ETH and stablecoins. This innovative approach to yield generation makes Blast particularly interesting for DeFi developers and applications focused on capital efficiency.

Key benefits of our Blast API include:

  • Direct access to Blast's native yield mechanisms
  • Support for building yield-optimized applications
  • Simplified integration with Blast's unique features
  • High-performance RPC connections for seamless interactions

Blast's focus on native yield represents an exciting direction for Ethereum L2 solutions, potentially setting new standards for capital efficiency in the ecosystem.

Seamless Integration Process

Getting started with these new networks is straightforward with BlockEden.xyz:

  1. Visit our API Marketplace and select your desired network
  2. Create an API key through your BlockEden.xyz dashboard
  3. Integrate the endpoint into your development environment using our comprehensive documentation
  4. Start building with confidence, backed by our 99.9% uptime guarantee
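As a minimal sketch of steps 2-3, a JSON-RPC call against one of the new endpoints might look like the following. The URL shape and API key below are placeholders, not the exact BlockEden.xyz format; consult your dashboard and the documentation for the real endpoint:

```python
import json
from urllib import request

# Placeholder endpoint: substitute the real Base/Berachain/Blast URL and
# API key from your BlockEden.xyz dashboard.
ENDPOINT = "https://api.blockeden.xyz/base/YOUR_API_KEY"

def build_rpc_request(method: str, params: list, req_id: int = 1) -> dict:
    """Standard JSON-RPC 2.0 envelope used by EVM-compatible chains."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def latest_block_number(endpoint: str = ENDPOINT) -> int:
    """POST an eth_blockNumber request and decode the hex result."""
    payload = json.dumps(build_rpc_request("eth_blockNumber", [])).encode()
    req = request.Request(endpoint, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)["result"]  # hex string, e.g. "0x1b4"
    return int(result, 16)

# latest_block_number() requires a valid endpoint and network access;
# the request envelope itself looks like this:
print(build_rpc_request("eth_blockNumber", []))
```

Because all three networks are EVM-compatible, the same envelope works across Base, Berachain, and Blast; only the endpoint URL changes.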

Why Choose BlockEden.xyz for These Networks?

BlockEden.xyz continues to distinguish itself through several core offerings:

  • High Availability: Our infrastructure maintains 99.9% uptime across all supported networks
  • Developer-First Approach: Comprehensive documentation and support for seamless integration
  • Unified Experience: Access multiple blockchain networks through a single, consistent interface
  • Competitive Pricing: Our compute unit credit (CUC) system ensures cost-effective scaling

Looking Forward

The addition of Base, Berachain, and Blast to our API Marketplace represents our ongoing commitment to supporting the diverse and evolving blockchain ecosystem. As these networks continue to mature and attract developers, BlockEden.xyz will be there to provide the reliable infrastructure needed to build the next generation of decentralized applications.

We invite developers to explore these new offerings and provide feedback as we continue to enhance our services. Your input is invaluable in helping us refine and expand our API marketplace to meet your evolving needs.

Ready to start building on Base, Berachain, or Blast? Visit BlockEden.xyz API Marketplace today and create your access key to begin your journey!

For the latest updates and announcements, connect with us on Twitter or join our community on Discord.

Sony's Soneium: Bringing Blockchain to the Entertainment World

· 6 min read

In the rapidly evolving landscape of blockchain technology, a familiar name has stepped into the arena with a bold vision. Sony, the entertainment and technology giant, has launched Soneium—an Ethereum Layer-2 blockchain designed to bridge the gap between cutting-edge Web3 innovations and mainstream internet services. But what exactly is Soneium, and why should you care? Let's dive in.

What is Soneium?

Soneium is a Layer-2 blockchain built on top of Ethereum, developed by Sony Block Solutions Labs—a joint venture between Sony Group and Startale Labs. Launched in January 2025 after a successful testnet phase, Soneium aims to "realize the open internet that transcends boundaries" by making blockchain technology accessible, scalable, and practical for everyday use.

Think of it as Sony's attempt to make blockchain as user-friendly as its PlayStations and Walkmans once made gaming and music.

The Tech Behind Soneium

For the tech-curious among us, Soneium is built on Optimism's OP Stack, which means it uses the same optimistic rollup framework as other popular Layer-2 solutions. In plain English? It processes transactions off-chain and only periodically posts compressed data back to Ethereum, making transactions faster and cheaper while maintaining security.

Soneium is fully compatible with the Ethereum Virtual Machine (EVM), so developers familiar with Ethereum can easily deploy their applications on the platform. It also joins Optimism's "Superchain" ecosystem, allowing it to communicate easily with other Layer-2 networks like Coinbase's Base.

What Makes Soneium Special?

While there are already several Layer-2 solutions on the market, Soneium stands out for its focus on entertainment, creative content, and fan engagement—areas where Sony has decades of experience and vast resources.

Imagine buying a movie ticket and receiving an exclusive digital collectible that grants access to bonus content. Or attending a virtual concert where your NFT ticket becomes a memento with special perks. These are the kinds of experiences Sony envisions building on Soneium.

The platform is designed to support:

  • Gaming experiences with faster transactions for in-game assets
  • NFT marketplaces for digital collectibles
  • Fan engagement apps where communities can interact with creators
  • Financial tools for creators and fans
  • Enterprise blockchain solutions

Sony's Partnerships Power Soneium

Sony isn't going it alone. The company has forged strategic partnerships to bolster Soneium's development and adoption:

  • Startale Labs, a Singapore-based blockchain startup led by Sota Watanabe (co-founder of Astar Network), is Sony's key technical partner
  • Optimism Foundation provides the underlying technology
  • Circle ensures that USD Coin (USDC) serves as a primary currency on the network
  • Samsung has made a strategic investment through its venture arm
  • Alchemy, Chainlink, Pyth Network, and The Graph provide essential infrastructure services

Sony is also leveraging its internal divisions—including Sony Pictures, Sony Music Entertainment, and Sony Music Publishing—to pilot Web3 fan engagement projects on Soneium. For example, the platform has already hosted NFT campaigns for the "Ghost in the Shell" franchise and various music artists under Sony's label.

Early Signs of Success

Despite being just a few months old, Soneium has shown promising traction:

  • Its testnet phase saw over 15 million active wallets and processed over 47 million transactions
  • Within the first month of mainnet launch, Soneium attracted over 248,000 on-chain accounts and about 1.8 million addresses interacting with the network
  • The platform has successfully launched several NFT drops, including a collaboration with Web3 music label Coop Records

To fuel growth, Sony and Astar Network launched a 100-day incentive campaign with a 100 million token reward pool, encouraging users to try out apps, supply liquidity, and be active on the platform.

Security and Scalability: A Balancing Act

Security is paramount for Sony, especially as it carries its trusted brand into the blockchain space. Soneium inherits Ethereum's security while adding its own protective measures.

Interestingly, Sony has taken a somewhat controversial approach by blacklisting certain smart contracts and tokens deemed to infringe on intellectual property. While this has raised questions about decentralization, Sony argues that some curation is necessary to protect creators and build trust with mainstream users.

On the scalability front, Soneium's very purpose is to enhance Ethereum's throughput. By processing transactions off-chain, it can handle a much higher volume of transactions at much lower costs—crucial for mass adoption of applications like games or large NFT drops.

The Road Ahead

Sony has outlined a multi-phase roadmap for Soneium:

  1. First year: Onboarding Web3 enthusiasts and early adopters
  2. Within two years: Integrating Sony products like Sony Bank, Sony Music, and Sony Pictures
  3. Within three years: Expanding to enterprises and general applications beyond Sony's ecosystem

The company is gradually rolling out its NFT-driven Fan Marketing Platform, which will allow brands and artists to easily issue NFTs to fans, offering perks like exclusive content and event access.

While Soneium currently relies on ETH for gas fees and uses ASTR (Astar Network's token) for incentives, there's speculation about a potential Soneium native token in the future.

How Soneium Compares to Other Layer-2 Networks

In the crowded Layer-2 market, Soneium faces competition from established players like Arbitrum, Optimism, and Polygon. However, Sony is carving a unique position by leveraging its entertainment empire and focusing on creative use cases.

Unlike purely community-driven Layer-2 networks, Soneium benefits from Sony's brand trust, access to content IP, and a potentially huge user base from existing Sony services.

The trade-off is less decentralization (at least initially) compared to networks like Optimism and Arbitrum, which have issued tokens and implemented community governance.

The Big Picture

Sony's Soneium represents a significant step toward blockchain mass adoption. By focusing on content and fan engagement—areas where Sony excels—the company is positioning Soneium as a bridge between Web3 enthusiasts and everyday consumers.

If Sony can successfully convert even a fraction of its millions of customers into Web3 participants, Soneium could become one of the first truly mainstream blockchain platforms.

The experiment has just begun, but the potential is enormous. As the lines between entertainment, technology, and blockchain continue to blur, Soneium may well be at the forefront of this convergence, bringing blockchain technology to the masses one gaming avatar or music NFT at a time.

MegaETH: The 100,000 TPS Layer-2 Aiming to Supercharge Ethereum

· 9 min read

The Speed Revolution Ethereum Has Been Waiting For?

In the high-stakes world of blockchain scaling solutions, a new contender has emerged that's generating both excitement and controversy. MegaETH is positioning itself as Ethereum's answer to ultra-fast chains like Solana—promising sub-millisecond latency and an astonishing 100,000 transactions per second (TPS).

MegaETH

But these claims come with significant trade-offs. MegaETH is making calculated sacrifices to "Make Ethereum Great Again," raising important questions about the balance between performance, security, and decentralization.

As infrastructure providers who've seen many promising solutions come and go, we at BlockEden.xyz have conducted this analysis to help developers and builders understand what makes MegaETH unique—and what risks to consider before building on it.

What Makes MegaETH Different?

MegaETH is an Ethereum Layer-2 solution that has reimagined blockchain architecture with a singular focus: real-time performance.

While most L2 solutions improve on Ethereum's ~15 TPS by a factor of 10-100x, MegaETH aims for 1,000-10,000x improvement—speeds that would put it in a category of its own.

Revolutionary Technical Approach

MegaETH achieves its extraordinary speed through radical engineering decisions:

  1. Single Sequencer Architecture: Unlike most L2s that use multiple sequencers or plan to decentralize, MegaETH uses a single sequencer for ordering transactions, deliberately choosing performance over decentralization.

  2. Optimized State Trie: A completely redesigned state storage system that can handle terabyte-level state data efficiently, even on nodes with limited RAM.

  3. JIT Bytecode Compilation: Just-in-time compilation of Ethereum smart contract bytecode, bringing execution closer to "bare-metal" speed.

  4. Parallel Execution Pipeline: A multi-core approach that processes transactions in parallel streams to maximize throughput.

  5. Micro Blocks: Targeting ~1ms block times through continuous "streaming" block production rather than batch processing.

  6. EigenDA Integration: Using EigenLayer's data availability solution instead of posting all data to Ethereum L1, reducing costs while maintaining security through Ethereum-aligned validation.

This architecture delivers performance metrics that seem almost impossible for a blockchain:

  • Latency as low as ~10 ms today, with sub-millisecond response as the eventual target
  • 100,000+ TPS throughput
  • EVM compatibility for easy application porting

Testing the Claims: MegaETH's Current Status

As of March 2025, MegaETH's public testnet is live. The initial deployment began on March 6th with a phased rollout, starting with infrastructure partners and dApp teams before opening to broader user onboarding.

Early testnet metrics show:

  • ~1.68 Giga-gas per second throughput
  • ~15ms block times (significantly faster than other L2s)
  • Support for parallel execution that will eventually push performance even higher

The team has indicated that the testnet is running in a somewhat throttled mode, with plans to enable additional parallelization that could double gas throughput to around 3.36 Ggas/sec, moving toward their ultimate target of 10 Ggas/sec (10 billion gas per second).
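To put those gas figures in TPS terms: at the standard 21,000 gas for a simple ETH transfer, throughput is roughly gas-per-second divided by 21,000. This is a rough conversion (real workloads mix far heavier contract calls), but it shows how the gas targets map onto the headline TPS claims:

```python
SIMPLE_TRANSFER_GAS = 21_000  # gas cost of a plain ETH transfer

def gas_to_tps(gas_per_second: float, gas_per_tx: int = SIMPLE_TRANSFER_GAS) -> float:
    """Upper-bound TPS if every transaction were a minimal transfer."""
    return gas_per_second / gas_per_tx

print(gas_to_tps(1.68e9))  # current testnet ~1.68 Ggas/s: ~80,000 transfers/s
print(gas_to_tps(3.36e9))  # with extra parallelization: ~160,000
print(gas_to_tps(10e9))    # 10 Ggas/s target: ~476,000
```

Even the throttled testnet figure comfortably clears the 100,000 TPS headline for simple transfers, though contract-heavy traffic would land well below these ceilings.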

The Security and Trust Model

MegaETH's approach to security represents a significant departure from blockchain orthodoxy. Unlike Ethereum's trust-minimized design with thousands of validating nodes, MegaETH embraces a centralized execution layer with Ethereum as its security backstop.

The "Can't Be Evil" Philosophy

MegaETH employs an optimistic rollup security model with some unique characteristics:

  1. Fraud Proof System: Like other optimistic rollups, MegaETH allows observers to challenge invalid state transitions through fraud proofs submitted to Ethereum.

  2. Verifier Nodes: Independent nodes replicate the sequencer's computations and would initiate fraud proofs if discrepancies are found.

  3. Ethereum Settlement: All transactions are eventually settled on Ethereum, inheriting its security for final state.

This creates what the team calls a "can't be evil" mechanism—the sequencer can't produce invalid blocks or alter state incorrectly without being caught and punished.
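The verifier-and-fraud-proof loop described above can be sketched conceptually. This is a toy illustration of the general optimistic-rollup pattern, not MegaETH's actual protocol; the state transition function and batch contents are stand-ins:

```python
# Conceptual sketch of the optimistic-rollup fraud-proof pattern: a
# verifier node re-executes the sequencer's batch and produces a fraud
# proof on any mismatch. "execute" and the integer state are stand-ins.

def execute(state: int, batch: list[int]) -> int:
    # Stand-in state transition: apply each tx delta to an integer state.
    for tx in batch:
        state += tx
    return state

def verify_batch(prev_state, claimed_state, batch, executor=execute):
    """Verifier node: replay the batch against the previous state and
    challenge the sequencer's claimed result if it doesn't match."""
    recomputed = executor(prev_state, batch)
    if recomputed != claimed_state:
        # In a real rollup this proof would be submitted to Ethereum,
        # reverting the bad state and penalizing the sequencer.
        return {"fraud": True, "expected": recomputed, "claimed": claimed_state}
    return {"fraud": False}

honest = verify_batch(prev_state=100, claimed_state=106, batch=[1, 2, 3])
dishonest = verify_batch(prev_state=100, claimed_state=999, batch=[1, 2, 3])
print(honest)     # {'fraud': False}
print(dishonest)  # mismatch detected: fraud proof goes to Ethereum
```

The security assumption is that at least one honest verifier is watching: the sequencer is centralized, but any invalid state it publishes is detectable and punishable on L1.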

The Centralization Trade-off

The controversial aspect: MegaETH runs with a single sequencer and explicitly has "no plans to ever decentralize the sequencer." This brings two significant risks:

  1. Liveness Risk: If the sequencer goes offline, the network could halt until it recovers or a new sequencer is appointed.

  2. Censorship Risk: The sequencer could theoretically censor certain transactions or users in the short term (though users could ultimately exit via L1).

MegaETH argues these risks are acceptable because:

  • The L2 is anchored to Ethereum for final security
  • Data availability is handled by multiple nodes in EigenDA
  • Any censorship or fraud can be seen and challenged by the community

Use Cases: When Ultra-Fast Execution Matters

MegaETH's real-time capabilities unlock use cases that were previously impractical on slower blockchains:

1. High-Frequency Trading and DeFi

MegaETH enables DEXs with near-instant trade execution and order book updates. Projects already building include:

  • GTE: A real-time spot DEX combining central limit order books and AMM liquidity
  • Teko Finance: A money market for leveraged lending with rapid margin updates
  • Cap: A stablecoin and yield engine that arbitrages across markets
  • Avon: A lending protocol with orderbook-based loan matching

These DeFi applications benefit from MegaETH's throughput to operate with minimal slippage and high-frequency updates.

2. Gaming and Metaverse

The sub-second finality makes fully on-chain games viable without waiting for confirmations:

  • Awe: An open-world 3D game with on-chain actions
  • Biomes: An on-chain metaverse similar to Minecraft
  • Mega Buddies and Mega Cheetah: Collectible avatar series

Such applications can deliver real-time feedback in blockchain games, enabling fast-paced gameplay and on-chain PvP battles.

3. Enterprise Applications

MegaETH's performance makes it suitable for enterprise applications requiring high throughput:

  • Instantaneous payments infrastructure
  • Real-time risk management systems
  • Supply chain verification with immediate finality
  • High-frequency auction systems

The key advantage in all these cases is the ability to run compute-intensive applications with immediate feedback while still being connected to Ethereum's ecosystem.

The Team Behind MegaETH

MegaETH was co-founded by a team with impressive credentials:

  • Li Yilong: PhD in computer science from Stanford specializing in low-latency computing systems
  • Yang Lei: PhD from MIT researching decentralized systems and Ethereum connectivity
  • Shuyao Kong: Former Head of Global Business Development at ConsenSys

The project has attracted notable backers, including Ethereum co-founders Vitalik Buterin and Joseph Lubin as angel investors. Vitalik's involvement is particularly noteworthy, as he rarely invests in specific projects.

Other investors include Sreeram Kannan (founder of EigenLayer), VC firms like Dragonfly Capital, Figment Capital, and Robot Ventures, and influential community figures such as Cobie.

Token Strategy: The Soulbound NFT Approach

MegaETH introduced an innovative token distribution method through "soulbound NFTs" called "The Fluffle." In February 2025, they created 10,000 non-transferable NFTs representing at least 5% of the total MegaETH token supply.

Key aspects of the tokenomics:

  • 5,000 NFTs were sold at 1 ETH each (raising ~$13-14 million)
  • The other 5,000 NFTs were allocated to ecosystem projects and builders
  • The NFTs are soulbound (cannot be transferred), ensuring long-term alignment
  • Implied valuation of around $540 million, extremely high for a pre-launch project
  • The team has raised approximately $30-40 million in venture funding

Eventually, the MegaETH token is expected to serve as the native currency for transaction fees and possibly for staking and governance.

How MegaETH Compares to Competitors

vs. Other Ethereum L2s

Compared to Optimism, Arbitrum, and Base, MegaETH is significantly faster but makes bigger compromises on decentralization:

  • Performance: MegaETH targets ~10 ms latency and 100,000+ TPS, vs. Arbitrum's ~250 ms transaction times and far lower throughput
  • Decentralization: MegaETH uses a single sequencer vs. other L2s' plans for decentralized sequencers
  • Data Availability: MegaETH uses EigenDA vs. other L2s posting data directly to Ethereum

vs. Solana and High-Performance L1s

MegaETH aims to "beat Solana at its own game" while leveraging Ethereum's security:

  • Throughput: MegaETH targets 100k+ TPS vs. Solana's theoretical 65k TPS (typically a few thousand in practice)
  • Latency: MegaETH ~10 ms vs. Solana's ~400 ms finality
  • Decentralization: MegaETH has 1 sequencer vs. Solana's ~1,900 validators

vs. ZK-Rollups (StarkNet, zkSync)

While ZK-rollups offer stronger security guarantees through validity proofs:

  • Speed: MegaETH offers faster user experience without waiting for ZK proof generation
  • Trustlessness: ZK-rollups don't require trust in a sequencer's honesty, providing stronger security
  • Future Plans: MegaETH may eventually integrate ZK proofs, becoming a hybrid solution

MegaETH's positioning is clear: it's the fastest option within the Ethereum ecosystem, sacrificing some decentralization to achieve Web2-like speeds.

The Infrastructure Perspective: What Builders Should Consider

As an infrastructure provider connecting developers to blockchain nodes, BlockEden.xyz sees both opportunities and challenges in MegaETH's approach:

Potential Benefits for Builders

  1. Exceptional User Experience: Applications can offer instant feedback and high throughput, creating Web2-like responsiveness.

  2. EVM Compatibility: Existing Ethereum dApps can port over with minimal changes, unlocking performance without rewrites.

  3. Cost Efficiency: High throughput means lower per-transaction costs for users and applications.

  4. Ethereum Security Backstop: Despite centralization at the execution layer, Ethereum settlement provides a security foundation.

Risk Considerations

  1. Single Point of Failure: The centralized sequencer creates liveness risk—if it goes down, so does your application.

  2. Censorship Vulnerability: Applications could face transaction censorship without immediate recourse.

  3. Early-Stage Technology: MegaETH's novel architecture hasn't been battle-tested at scale with real value.

  4. Dependency on EigenDA: Using a newer data availability solution adds an additional trust assumption.

Infrastructure Requirements

Supporting MegaETH's throughput will require robust infrastructure:

  • High-capacity RPC nodes capable of handling the firehose of data
  • Advanced indexing solutions for real-time data access
  • Specialized monitoring for the unique architecture
  • Reliable bridge monitoring for cross-chain operations

Conclusion: Revolution or Compromise?

MegaETH represents a bold experiment in blockchain scaling—one that deliberately prioritizes performance over decentralization. Whether this approach succeeds depends on whether the market values speed more than decentralized execution.

The coming months will be critical as MegaETH transitions from testnet to mainnet. If it delivers on its performance promises while maintaining sufficient security, it could fundamentally reshape how we think about blockchain scaling. If it stumbles, it will reinforce why decentralization remains a core blockchain value.

For now, MegaETH stands as one of the most ambitious Ethereum scaling solutions to date. Its willingness to challenge orthodoxy has already sparked important conversations about what trade-offs are acceptable in pursuit of mainstream blockchain adoption.

At BlockEden.xyz, we're committed to supporting developers wherever they build, including high-performance networks like MegaETH. Our reliable node infrastructure and API services are designed to help applications thrive across the multi-chain ecosystem, regardless of which approach to scaling ultimately prevails.


Looking to build on MegaETH or need reliable node infrastructure for high-throughput applications? Contact us at info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.

Scaling Blockchains: How Caldera and the RaaS Revolution Are Shaping Web3's Future

· 7 min read

The Web3 Scaling Problem

The blockchain industry faces a persistent challenge: how do we scale to support millions of users without sacrificing security or decentralization?

Ethereum, the leading smart contract platform, processes roughly 15 transactions per second on its base layer. During periods of high demand, this limitation has led to exorbitant gas fees—sometimes exceeding $100 per transaction during NFT mints or DeFi farming frenzies.

This scaling bottleneck presents an existential threat to Web3 adoption. Users accustomed to the instant responsiveness of Web2 applications won't tolerate paying $50 and waiting 3 minutes just to swap tokens or mint an NFT.

Enter the solution that's rapidly reshaping blockchain architecture: Rollups-as-a-Service (RaaS).

Scaling Blockchains

Understanding Rollups-as-a-Service (RaaS)

RaaS platforms enable developers to deploy their own custom blockchain rollups without the complexity of building everything from scratch. These services transform what would normally require a specialized engineering team and months of development into a streamlined, sometimes one-click deployment process.

Why does this matter? Because rollups are the key to blockchain scaling.

Rollups work by:

  • Processing transactions off the main chain (Layer 1)
  • Batching these transactions together
  • Submitting compressed proofs of these transactions back to the main chain

The result? Drastically increased throughput and significantly reduced costs while inheriting security from the underlying Layer 1 blockchain (like Ethereum).
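The cost advantage of the three steps above follows directly from amortization: one L1 submission covers many rolled-up transactions, so each transaction pays only its share of the L1 fee. A simplified model (hypothetical fee numbers; it ignores compression ratios and per-transaction calldata differences):

```python
def per_tx_l1_cost(l1_submission_cost: float, batch_size: int) -> float:
    """Each rolled-up transaction pays an equal share of the single
    L1 posting cost. Simplified: ignores compression and per-tx
    calldata variation."""
    return l1_submission_cost / batch_size

# Hypothetical: a $50 L1 batch posting amortized over growing batch sizes.
for n in (1, 100, 10_000):
    print(n, per_tx_l1_cost(50.0, n))
# 1 tx -> $50.00 each; 100 txs -> $0.50 each; 10,000 txs -> $0.005 each
```

The larger the batch, the closer per-transaction cost gets to the rollup's own (cheap) execution cost, which is why rollups can undercut L1 fees by orders of magnitude while still settling to it.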

"Rollups don't compete with Ethereum—they extend it. They're like specialized Express lanes built on top of Ethereum's highway."

This approach to scaling is so promising that Ethereum officially adopted a "rollup-centric roadmap" in 2020, acknowledging that the future isn't a single monolithic chain, but rather an ecosystem of interconnected, purpose-built rollups.

Caldera: Leading the RaaS Revolution

Among the emerging RaaS providers, Caldera stands out as a frontrunner. Founded in 2023 and having raised $25M from prominent investors including Dragonfly, Sequoia Capital, and Lattice, Caldera has quickly positioned itself as a leading infrastructure provider in the rollup space.

What Makes Caldera Different?

Caldera distinguishes itself in several key ways:

  1. Multi-Framework Support: Unlike competitors who focus on a single rollup framework, Caldera supports major frameworks like Optimism's OP Stack and Arbitrum's Orbit/Nitro technology, giving developers flexibility in their technical approach.

  2. End-to-End Infrastructure: When you deploy with Caldera, you get a complete suite of components: reliable RPC nodes, block explorers, indexing services, and bridge interfaces.

  3. Rich Integration Ecosystem: Caldera comes pre-integrated with 40+ Web3 tools and services, including oracles, faucets, wallets, and cross-chain bridges (LayerZero, Axelar, Wormhole, Connext, and more).

  4. The Metalayer Network: Perhaps Caldera's most ambitious innovation is its Metalayer—a network that connects all Caldera-powered rollups into a unified ecosystem, allowing them to share liquidity and messages seamlessly.

  5. Multi-VM Support: In late 2024, Caldera became the first RaaS to support the Solana Virtual Machine (SVM) on Ethereum, enabling Solana-like high-performance chains that still settle to Ethereum's secure base layer.

Caldera's approach is creating what they call an "everything layer" for rollups—a cohesive network where different rollups can interoperate rather than exist as isolated islands.

Real-World Adoption: Who's Using Caldera?

Caldera has gained significant traction, with over 75 rollups in production as of late 2024. Some notable projects include:

  • Manta Pacific: A highly scalable network for deploying zero-knowledge applications that uses Caldera's OP Stack combined with Celestia for data availability.

  • RARI Chain: Rarible's NFT-focused rollup that processes transactions in under a second and enforces NFT royalties at the protocol level.

  • Kinto: A regulatory-compliant DeFi platform with on-chain KYC/AML and account abstraction capabilities.

  • Injective's inEVM: An EVM-compatible rollup that extends Injective's interoperability, connecting the Cosmos ecosystem with Ethereum-based dApps.

These projects highlight how application-specific rollups enable customization not possible on general-purpose Layer 1s. By late 2024, Caldera's collective rollups had reportedly processed over 300 million transactions for 6+ million unique wallets, with nearly $1 billion in total value locked (TVL).

How RaaS Compares: Caldera vs. Competitors

The RaaS landscape is becoming increasingly competitive, with several notable players:

Conduit

  • Focuses exclusively on Optimism and Arbitrum ecosystems
  • Emphasizes a fully self-serve, no-code experience
  • Powers approximately 20% of Ethereum's mainnet rollups, including Zora

AltLayer

  • Offers "Flashlayers"—disposable, on-demand rollups for temporary needs
  • Focuses on elastic scaling for specific events or high-traffic periods
  • Demonstrated impressive throughput during gaming events (180,000+ daily transactions)

Sovereign Labs

  • Building a Rollup SDK focused on zero-knowledge technologies
  • Aims to enable ZK-rollups on any base blockchain, not just Ethereum
  • Still in development, positioning for the next wave of multi-chain ZK deployment

While these competitors excel in specific niches, Caldera's comprehensive approach—combining a unified rollup network, multi-VM support, and a focus on developer experience—has helped establish it as a market leader.

The Future of RaaS and Blockchain Scaling

RaaS is poised to reshape the blockchain landscape in profound ways:

1. The Proliferation of App-Specific Chains

Industry research suggests we're moving toward a future with potentially millions of rollups, each serving specific applications or communities. With RaaS lowering deployment barriers, every significant dApp could have its own optimized chain.

2. Interoperability as the Critical Challenge

As rollups multiply, the ability to communicate and share value between them becomes crucial. Caldera's Metalayer represents an early attempt to solve this challenge—creating a unified experience across a web of rollups.

3. From Isolated Chains to Networked Ecosystems

The end goal is a seamless multi-chain experience where users hardly need to know which chain they're on. Value and data would flow freely through an interconnected web of specialized rollups, all secured by robust Layer 1 networks.

4. Cloud-Like Blockchain Infrastructure

RaaS is effectively turning blockchain infrastructure into a cloud-like service. Caldera's "Rollup Engine" allows dynamic upgrades and modular components, treating rollups like configurable cloud services that can scale on demand.
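Conceptually, this reduces launching a chain to writing a declarative configuration and handing it to a provisioning service. The sketch below illustrates the idea with invented field names; it does not reflect Caldera's actual Rollup Engine API:

```python
# Hypothetical rollup configuration illustrating the "cloud-like" RaaS model:
# a chain becomes a set of declarative settings. Field names are invented
# for illustration and are not Caldera's real API.

rollup_config = {
    "name": "my-game-chain",
    "framework": "op-stack",         # e.g. OP Stack or Arbitrum Orbit
    "vm": "evm",                     # or "svm" for a Solana Virtual Machine chain
    "settlement_layer": "ethereum",
    "data_availability": "celestia",
    "gas_token": "ETH",
    "integrations": ["layerzero", "axelar"],
}

def validate(config: dict) -> None:
    """Minimal sanity check a provisioning service might run before deploying."""
    required = {"name", "framework", "vm", "settlement_layer", "data_availability"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")

validate(rollup_config)
print("config ok:", rollup_config["name"])
```

Swapping the data-availability layer or the VM becomes a one-line change in configuration rather than a re-architecture, which is exactly the modularity the "cloud-like" analogy points at.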

What This Means for Developers and BlockEden.xyz

At BlockEden.xyz, we see enormous potential in the RaaS revolution. As an infrastructure provider connecting developers to blockchain nodes securely, we're positioned to play a crucial role in this evolving landscape.

The proliferation of rollups means developers need reliable node infrastructure more than ever. A future with thousands of application-specific chains demands robust RPC services with high availability—precisely what BlockEden.xyz specializes in providing.

We're particularly excited about the opportunities in:

  1. Specialized RPC Services for Rollups: As rollups adopt unique features and optimizations, specialized infrastructure becomes crucial.

  2. Cross-Chain Data Indexing: With value flowing between multiple rollups, developers need tools to track and analyze cross-chain activities.

  3. Enhanced Developer Tools: As rollup deployment becomes simpler, the need for sophisticated monitoring, debugging, and analytics tools grows.

  4. Unified API Access: Developers working across multiple rollups need simplified, unified access to diverse blockchain networks.
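What makes unified access tractable is that EVM-compatible rollups all speak the same standard Ethereum JSON-RPC protocol, so one request body works against any endpoint in a chain-to-URL map. A minimal sketch (the endpoint URLs are placeholders, not real BlockEden.xyz routes):

```python
import json

# Sketch of unified multi-rollup API access: the same standard Ethereum
# JSON-RPC 2.0 request works against any EVM-compatible rollup endpoint.
# URLs below are placeholders for illustration only.

ROLLUP_ENDPOINTS = {
    "manta-pacific": "https://rpc.example.invalid/manta",
    "rari-chain": "https://rpc.example.invalid/rari",
}

def make_request(method: str, params=None, req_id: int = 1) -> str:
    """Build a standard JSON-RPC 2.0 request body (eth_* methods are
    identical across EVM chains, which is what makes unification cheap)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params or [],
    })

# One payload, reusable against every configured rollup:
payload = make_request("eth_blockNumber")
for chain, url in ROLLUP_ENDPOINTS.items():
    print(f"[{chain}] POST {url} body={payload}")
```

The provider-side value is everything around this call: routing, failover, rate limiting, and indexing, so the developer only ever sees the map of chains.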

Conclusion: The Modular Blockchain Future

The rise of Rollups-as-a-Service represents a fundamental shift in how we think about blockchain scaling. Rather than forcing all applications onto a single chain, we're moving toward a modular future with specialized chains for specific use cases, all interconnected and secured by robust Layer 1 networks.

Caldera's approach—creating a unified network of rollups with shared liquidity and seamless messaging—offers a glimpse of this future. By making rollup deployment as simple as spinning up a cloud server, RaaS providers are democratizing access to blockchain infrastructure.

At BlockEden.xyz, we're committed to supporting this evolution by providing the reliable node infrastructure and developer tools needed to build in this multi-chain future. As we often say, the future of Web3 isn't a single chain—it's thousands of specialized chains working together.


Looking to build on a rollup, or need reliable node infrastructure for your blockchain project? Contact us at info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.