Programmable Privacy in Blockchain: Off‑Chain Compute with On‑Chain Verification

47 min read
Dora Noda
Software Engineer

Public blockchains provide transparency and integrity at the cost of privacy – every transaction and contract state is exposed to all participants. This openness creates problems like MEV (Miner Extractable Value) attacks, copy-trading, and leakage of sensitive business logic. Programmable privacy aims to solve these issues by allowing computations on private data without revealing the data itself. Two emerging cryptographic paradigms are making this possible: Fully Homomorphic Encryption Virtual Machines (FHE-VM) and Zero-Knowledge (ZK) Coprocessors. These approaches enable off-chain or encrypted computation with on-chain verification, preserving confidentiality while retaining trustless correctness. In this report, we dive deep into FHE-VM and ZK-coprocessor architectures, compare their trade-offs, and explore use cases across finance, identity, healthcare, data markets, and decentralized machine learning.

Fully Homomorphic Encryption Virtual Machine (FHE-VM)

Fully Homomorphic Encryption (FHE) allows arbitrary computations on encrypted data without ever decrypting it. An FHE Virtual Machine integrates this capability into blockchain smart contracts, enabling encrypted contract state and logic. In an FHE-enabled blockchain (often called an fhEVM for EVM-compatible designs), all inputs, contract storage, and outputs remain encrypted throughout execution. This means validators can process transactions and update state without learning any sensitive values, achieving on-chain execution with data confidentiality.

Architecture and Design of FHE-VM

A typical FHE-VM extends a standard smart contract runtime (like the Ethereum Virtual Machine) with native support for encrypted data types and operations. For example, Zama’s FHEVM introduces encrypted integers (euint8, euint32, etc.), encrypted booleans (ebool), and even encrypted arrays as first-class types. Smart contract languages like Solidity are augmented via libraries or new opcodes so developers can perform arithmetic (add, mul, etc.), logical operations, and comparisons directly on ciphertexts. Under the hood, these operations invoke FHE primitives (e.g. using the TFHE library) to manipulate encrypted bits and produce encrypted results.
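
To make this concrete, here is a minimal sketch of a contract operating on encrypted types, written in the style of Zama's fhevm-solidity library. The import path and function names (TFHE.add, TFHE.gt, TFHE.asEuint32) are assumptions for illustration; the exact API differs between library versions.

```solidity
pragma solidity ^0.8.24;

// Assumed import path for the encrypted-type library; adjust to the
// fhevm tooling version actually in use.
import "fhevm/lib/TFHE.sol";

contract EncryptedMath {
    euint32 private total;   // ciphertext handle kept in contract storage
    ebool private overLimit; // encrypted boolean, likewise never revealed

    // Homomorphically add two encrypted values and compare the running total
    // against a public limit, all without decrypting anything.
    function accumulate(euint32 a, euint32 b) internal {
        euint32 sum = TFHE.add(a, b);                     // encrypted addition
        total = TFHE.add(total, sum);                     // update encrypted state
        overLimit = TFHE.gt(total, TFHE.asEuint32(1000)); // encrypted comparison
    }
}
```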

Encrypted state storage is supported so that contract variables remain encrypted in the blockchain state. The execution flow is typically:

  1. Client-Side Encryption: Users encrypt their inputs locally using the public FHE key before sending transactions. The encryption key is public (for encryption and evaluation), while the decryption key remains secret. In some designs, each user manages their own key; in others, a single global FHE key is used (discussed below).
  2. On-Chain Homomorphic Computation: Miners/validators execute the contract with encrypted opcodes. They perform the same deterministic homomorphic operations on the ciphertexts, so consensus can be reached on the encrypted new state. Crucially, validators never see plaintext data – they just see “gibberish” ciphertext but can still process it consistently.
  3. Decryption (Optional): If a result needs to be revealed or used off-chain, an authorized party with the private key can decrypt the output ciphertext. Otherwise, results remain encrypted and can be used as inputs to further transactions (allowing consecutive computations on persistent encrypted state).

A major design consideration is key management. One approach is per-user keys, where each user holds their secret key and only they can decrypt outputs relevant to them. This maximizes privacy (no one else can ever decrypt your data), but homomorphic operations cannot mix data encrypted under different keys without complex multi-key protocols. Another approach, used by Zama’s FHEVM, is a global FHE key: a single public key encrypts all contract data and a distributed set of validators holds shares of the threshold decryption key. The public encryption and evaluation keys are published on-chain, so anyone can encrypt data to the network; the private key is split among validators who can collectively decrypt if needed under a threshold scheme. To prevent validator collusion from compromising privacy, Zama employs a threshold FHE protocol (based on their Noah’s Ark research) with “noise flooding” to make partial decryptions secure. Only if a sufficient quorum of validators cooperates can a plaintext be recovered, for example to serve a read request. In normal operation, however, no single node ever sees plaintext – data remains encrypted on-chain at all times.

Access control is another crucial component. FHE-VM implementations include fine-grained controls to manage who (if anyone) can trigger decryptions or access certain encrypted fields. For instance, Cypher’s fhEVM supports Access Control Lists on ciphertexts, allowing developers to specify which addresses or contracts can interact with or re-encrypt certain data. Some frameworks support re-encryption: the ability to transfer an encrypted value from one user’s key to another’s without exposing plaintext. This is useful for things like data marketplaces, where a data owner can encrypt a dataset with their key, and upon purchase, re-encrypt it to the buyer’s key – all on-chain, without ever decrypting publicly.
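
A sketch of how such access control might look at the contract level, using names modeled on fhEVM-style ACL helpers (TFHE.allow, TFHE.allowThis). Both the function names and the marketplace flow are illustrative assumptions rather than a specific project's API.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract EncryptedListing {
    euint64 private secretPrice; // encrypted data owned by the seller
    address public seller;

    constructor() {
        seller = msg.sender;
    }

    // Store an encrypted price. (In practice the ciphertext would arrive as a
    // certified input; that pattern is shown later in this post.)
    function list(euint64 price) external {
        require(msg.sender == seller, "only seller");
        secretPrice = price;
        TFHE.allowThis(secretPrice);     // this contract may keep computing on it
        TFHE.allow(secretPrice, seller); // the seller may decrypt / re-encrypt it
    }

    // After a purchase, grant the buyer decryption rights to the ciphertext.
    // (Payment and escrow logic is omitted from this sketch.)
    function grantBuyer(address buyer) external {
        require(msg.sender == seller, "only seller");
        TFHE.allow(secretPrice, buyer);
    }
}
```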

Ensuring Correctness and Privacy

One might ask: if all data is encrypted, how do we enforce correctness of contract logic? How can the chain prevent invalid operations if it can’t “see” the values? FHE by itself doesn’t provide a proof of correctness – validators can perform the homomorphic steps, but they can’t inherently tell whether a user’s encrypted input was valid or whether a conditional branch should be taken without decrypting. Zero-knowledge proofs (ZKPs) can complement FHE to close this gap. In an FHE-VM, users typically must provide a ZK proof attesting to certain plaintext conditions whenever needed. Zama’s design, for example, uses a ZK Proof of Plaintext Knowledge (ZKPoK) to accompany each encrypted input. This proves that the user knows the plaintext corresponding to their ciphertext and that it meets expected criteria, without revealing the plaintext itself. Such “certified ciphertexts” prevent a malicious user from submitting a malformed encryption or an out-of-range value. Similarly, for operations that require a decision (e.g. ensuring account balance ≥ withdrawal amount), the user can supply a ZK proof that this condition holds true on the plaintexts before the encrypted operation is executed. In this way, the chain doesn’t decrypt or see the values, but it gains confidence that the encrypted transactions follow the rules.
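
A sketch of how a contract might accept such a certified ciphertext: the caller supplies an input handle plus a proof of plaintext knowledge, and the conversion only yields a usable encrypted value if the proof checks out. The einput type and the TFHE.asEuint64(handle, proof) signature follow the general style of Zama's fhevm but should be treated as assumptions.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract ConfidentialVault {
    mapping(address => euint64) private balances; // encrypted per-user balances

    // The user submits a handle to their encrypted deposit amount plus a ZK
    // proof of plaintext knowledge (e.g. proving the value is well-formed and
    // in range). The conversion reverts if the proof does not verify.
    function deposit(einput encryptedAmount, bytes calldata inputProof) external {
        euint64 amount = TFHE.asEuint64(encryptedAmount, inputProof);
        balances[msg.sender] = TFHE.add(balances[msg.sender], amount);
        TFHE.allow(balances[msg.sender], msg.sender); // depositor may decrypt their own balance
    }
}
```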

Another approach in FHE rollups is to perform off-chain validation with ZKPs. Fhenix (an L2 rollup using FHE) opts for an optimistic model where a separate network component called a Threshold Service Network can decrypt or verify encrypted results, and any incorrect computation can be challenged with a fraud-proof. In general, combining FHE + ZK or fraud proofs ensures that encrypted execution remains trustless. Validators either collectively decrypt only when authorized, or they verify proofs that each encrypted state transition was valid without needing to see plaintext.

Performance considerations: FHE operations are computationally heavy – many orders of magnitude slower than normal arithmetic. For example, a simple 64-bit addition on Ethereum costs ~3 gas, whereas an addition on an encrypted 64-bit integer (euint64) under Zama’s FHEVM costs roughly 188,000 gas. Even an 8-bit add can cost ~94k gas. This enormous overhead means a straightforward implementation on existing nodes would be impractically slow and costly. FHE-VM projects tackle this with optimized cryptographic libraries (like Zama’s TFHE-rs library for binary gate bootstrapping) and custom EVM modifications for performance. For instance, Cypher’s modified Geth client adds new opcodes and optimizes homomorphic instruction execution in C++/assembly to minimize overhead. Nevertheless, achieving usable throughput requires acceleration. Ongoing work includes using GPUs, FPGAs, and even specialized photonic chips to speed up FHE computations. Zama reports their FHE performance improved 100× since 2024 and is targeting thousands of TPS with GPU/FPGA acceleration. Dedicated FHE co-processor servers (such as Optalysys’s LightLocker Node) can plug into validator nodes to offload encrypted operations to hardware, supporting >100 encrypted ERC-20 transfers per second per node. As hardware and algorithms improve, the gap between FHE and plain computation will narrow, enabling private contracts to approach more practical speeds.

Compatibility: A key goal of FHE-VM designs is to remain compatible with existing development workflows. Cypher’s and Zama’s fhEVM implementations allow developers to write contracts in Solidity with minimal changes – using a library to declare encrypted types and operations. The rest of the Ethereum toolchain (Remix, Hardhat, etc.) can still be used, as the underlying modifications are mostly at the client/node level. This lowers the barrier to entry: developers don’t need to be cryptography experts to write a confidential smart contract. For example, a simple addition of two numbers can be written as euint32 c = a + b; and the FHEVM will handle the encryption-specific details behind the scenes. The contracts can even interoperate with normal contracts – e.g. an encrypted contract could output a decrypted result to a standard contract if desired, allowing a mix of private and public parts in one ecosystem.

Current FHE-VM Projects: Several projects are pioneering this space. Zama (a Paris-based FHE startup) developed the core FHEVM concept and libraries (TFHE-rs and an fhevm-solidity library). They do not intend to launch their own chain, but rather provide infrastructure to others. Inco is an L1 blockchain (built on Cosmos SDK with Evmos) that integrated Zama’s FHEVM to create a modular confidential chain. Their testnets (named Gentry and Paillier) showcase encrypted ERC-20 transfers and other private DeFi primitives. Fhenix is an Ethereum Layer-2 optimistic rollup using FHE for privacy. It chose an optimistic (fraud-proof) approach rather than a ZK-rollup due to the heavy cost of doing FHE and ZK together for every block. Fhenix uses the same TFHE-rs library (with some modifications) and introduces a Threshold Service Network for handling decryptions in a decentralized way. There are also independent teams and startups exploring MPC + FHE hybrids. Additionally, Cypher (by Z1 Labs) is building a Layer-3 network focused on AI and privacy, using an fhEVM with features like secret stores and federated learning support. The ecosystem is nascent but growing rapidly, fueled by significant funding – e.g. Zama became a “unicorn” with >$130M raised by 2025 to advance FHE tech.

In summary, an FHE-VM enables privacy-preserving smart contracts by executing all logic on encrypted data on-chain. This paradigm ensures maximum confidentiality – nothing sensitive is ever exposed in transactions or state – while leveraging the existing blockchain consensus for integrity. The cost is increased computational burden on validators and complexity in key management and proof integration. Next, we explore an alternative paradigm that offloads compute entirely off-chain and only uses the chain for verification: the zero-knowledge coprocessor.

Zero-Knowledge Coprocessors (ZK-Coprocessors)

A ZK-coprocessor is a new blockchain architecture pattern where expensive computations are performed off-chain, and a succinct zero-knowledge proof of their correctness is verified on-chain. This allows smart contracts to harness far greater computational power and data than on-chain execution would allow, without sacrificing trustlessness. The term coprocessor is used by analogy to hardware coprocessors (like a math co-processor or GPU) that handle specialized tasks for a CPU. Here, the blockchain’s “CPU” (the native VM like EVM) delegates certain tasks to a zero-knowledge proof system which acts as a cryptographic coprocessor. The ZK-coprocessor returns a result and a proof that the result was computed correctly, which the on-chain contract can verify and then use.

Architecture and Workflow

In a typical setup, a dApp developer identifies parts of their application logic that are too expensive or complex for on-chain execution (e.g. large computations over historical data, heavy algorithms, ML model inference, etc.). They implement those parts as an off-chain program (in a high-level language or circuit DSL) that can produce a zero-knowledge proof of its execution. The on-chain component is a verifier smart contract that checks proofs and makes the results available to the rest of the system. The flow can be summarized as:

  1. Request – The on-chain contract triggers a request for a certain computation to be done off-chain. This could be initiated by a user transaction or by one contract calling into the ZK-coprocessor’s interface. For example, a DeFi contract might call “proveInterestRate(currentState)” or a user calls “queryHistoricalData(query)”.
  2. Off-Chain Execution & Proving – An off-chain service (which could be a decentralized network of provers or a trusted service, depending on the design) picks up the request. It gathers any required data (on-chain state, off-chain inputs, etc.) and executes the computation in a special ZK Virtual Machine (ZKVM) or circuit. During execution, a proof trace is generated. At the end, the service produces a succinct proof (e.g. a SNARK or STARK) attesting that “Computing function F on input X yields output Y” and optionally attesting to data integrity (more on this below).
  3. On-Chain Verification – The proof and result are returned to the blockchain (often via a callback function). The verifier contract checks the proof’s validity using efficient cryptographic verification (pairing checks, etc.). If valid, the contract can now trust the output Y as correct. The result can be stored in state, emitted as an event, or fed into further contract logic. If the proof is invalid or not provided within some time, the request can be considered failed (and potentially some fallback or timeout logic triggers).

Figure 1: Architecture of a ZK Coprocessor (RISC Zero Bonsai example). Off-chain, a program runs on a ZKVM with inputs from the smart contract call. A proof of execution is returned on-chain via a relay contract, which invokes a callback with the verified results.

Critically, the on-chain gas cost for verification is constant (or grows very slowly) regardless of how complex the off-chain computation was. Verifying a succinct proof might cost on the order of a few hundred thousand gas (a fraction of an Ethereum block), but that proof could represent millions of computational steps done off-chain. As one developer quipped, “Want to prove one digital signature? ~$15. Want to prove one million signatures? Also ~$15.” This scalability is a huge win: dApps can offer complex functionalities (big data analytics, elaborate financial models, etc.) without clogging the blockchain.

The main components of a ZK-coprocessor system are:

  • Proof Generation Environment: This can be a general-purpose ZKVM (able to run arbitrary programs) or custom circuits tailored to specific computations. Approaches vary:

    • Some projects use handcrafted circuits for each supported query or function (maximizing efficiency for that function).
    • Others provide a Domain-Specific Language (DSL) or an Embedded DSL that developers use to write their off-chain logic, which is then compiled into circuits (balancing ease-of-use and performance).
    • The most flexible approach is a zkVM: a virtual machine (often based on RISC architectures) where programs can be written in standard languages (Rust, C, etc.) and automatically proven. This sacrifices performance (simulating a CPU in a circuit adds overhead) for maximum developer experience.
  • Data Access and Integrity: A unique challenge is feeding the off-chain computation with the correct data, especially if that data resides on the blockchain (past blocks, contract states, etc.). A naive solution is to have the prover read from an archive node and trust it – but that introduces trust assumptions. ZK-coprocessors instead typically prove that any on-chain data used was indeed authentic by linking to Merkle proofs or state commitments. For example, the query program might take a block number and a Merkle proof of a storage slot or transaction, and the circuit will verify that proof against a known block header hash. Three patterns exist:

    1. Inline Data: Put the needed data on-chain (as input to the verifier) so it can be directly checked. This is very costly for large data and undermines the whole point.
    2. Trust an Oracle: Have an oracle service feed the data to the proof and vouch for it. This is simpler but reintroduces trust in a third party.
    3. Prove Data Inclusion via ZK: Incorporate proofs of data inclusion in the chain’s history within the zero-knowledge circuit itself. This leverages the fact that each Ethereum block header commits to the entire prior state (via state root) and transaction history. By verifying Merkle Patricia proofs of the data within the circuit, the output proof assures the contract that “this computation used genuine blockchain data from block N” with no additional trust needed.

    The third approach is the most trustless and is used by advanced ZK-coprocessors like Axiom and Xpansion (it does increase proving cost, but is preferable for security). For instance, Axiom’s system models Ethereum’s block structure, state trie, and transaction trie inside its circuits, so it can prove statements like “the account X had balance Y at block N” or “a transaction with certain properties occurred in block N”. It leverages the fact that given a recent trusted block hash, one can recursively prove inclusion of historical data without trusting any external party.

  • Verifier Contract: This on-chain contract contains the verifying key and logic to accept or reject proofs. For SNARKs like Groth16 or PLONK, the verifier might do a few elliptic curve pairings; for STARKs, it might do some hash computations. Performance optimizations like aggregation and recursion can minimize on-chain load. For example, RISC Zero’s Bonsai uses a STARK-to-SNARK wrapper: it runs a STARK-based VM off-chain for speed, but then generates a small SNARK proof attesting to the STARK’s validity. This shrinks proof size from hundreds of kilobytes to a few hundred bytes, making on-chain verification feasible and cheap. The Solidity verifier then just checks the SNARK (which is a constant-time operation).

In terms of deployment, ZK-coprocessors can function as layer-2 like networks or as pure off-chain services. Some, like Axiom, started as a specialized service for Ethereum (with Paradigm’s backing) where developers submit queries to Axiom’s prover network and get proofs on-chain. Axiom’s tagline was providing Ethereum contracts “trustless access to all on-chain data and arbitrary expressive compute over it.” It effectively acts as a query oracle where the answers are verified by ZKPs instead of trust. Others, like RISC Zero’s Bonsai, offer a more open platform: any developer can upload a program (compiled to a RISC-V compatible ZKVM) and use Bonsai’s proving service via a relay contract. The relay pattern, as illustrated in Figure 1, involves a contract that mediates requests and responses: the dApp contract calls the relay to ask for a proof, the off-chain service listens for this (e.g. via event or direct call), computes the proof, and then the relay invokes a callback function on the dApp contract with the result and proof. This asynchronous model is necessary because proving may take from seconds to minutes depending on complexity. It introduces a latency (and a liveness assumption that the prover will respond), whereas FHE-VM computations happen synchronously within a block. Designing the application to handle this async workflow (possibly akin to Oracle responses) is part of using a ZK-coprocessor.
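
The relay pattern can be sketched as follows. The interfaces here (IZkRelay, the requestProof call, the callback signature) are hypothetical and only illustrate the asynchronous shape of the interaction; real systems such as Bonsai define their own relay and verifier contracts.

```solidity
pragma solidity ^0.8.24;

// Hypothetical relay interface: the dApp asks for an off-chain program
// (identified by programId) to be run on some input, and later receives a
// callback once a proof has been produced and verified.
interface IZkRelay {
    function requestProof(bytes32 programId, bytes calldata input)
        external
        returns (uint256 requestId);
}

contract RateConsumer {
    IZkRelay public immutable relay;
    bytes32 public immutable programId;      // identifies the off-chain program
    mapping(uint256 => bool) public pending; // track outstanding requests
    uint256 public latestRate;               // verified result, once delivered

    constructor(IZkRelay _relay, bytes32 _programId) {
        relay = _relay;
        programId = _programId;
    }

    // Step 1: ask the coprocessor to compute something heavy off-chain.
    function requestRate(bytes calldata marketData) external {
        uint256 id = relay.requestProof(programId, marketData);
        pending[id] = true;
    }

    // Step 3: the relay calls back with the result after the proof has been
    // checked on-chain (or passes the proof along for verification here).
    function onProofResult(uint256 requestId, uint256 rate) external {
        require(msg.sender == address(relay), "only relay");
        require(pending[requestId], "unknown request");
        delete pending[requestId];
        latestRate = rate; // safe to trust: backed by a verified ZK proof
    }
}
```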

Notable ZK-Coprocessor Projects

  • Axiom: Axiom is a ZK coprocessor tailored for Ethereum, focused originally on proving historical on-chain data queries. It uses the Halo2 proving framework (a Plonk-ish SNARK) to create proofs that incorporate Ethereum’s cryptographic structures. In Axiom’s system, a developer can query things like “what was the state of contract X at block N?” or perform a computation over all transactions in a range. Under the hood, Axiom’s circuits had to implement Ethereum’s state/trie logic, even performing elliptic curve operations and SNARK verification inside the circuit to support recursion. Trail of Bits, in an audit, noted the complexity of Axiom’s Halo2 circuits modeling entire blocks and states. After auditing, Axiom generalized their tech into an OpenVM, allowing arbitrary Rust code to be proved with the same Halo2-based infrastructure. (This mirrors the trend of moving from domain-specific circuits to a more general ZKVM approach.) The Axiom team demonstrated ZK queries that Ethereum natively cannot do, enabling stateless access to any historical data with cryptographic integrity. They have also emphasized security, catching and fixing under-constrained circuit bugs and ensuring soundness. While Axiom’s initial product was shut down during their pivot, their approach remains a landmark in ZK coprocessors.

  • RISC Zero Bonsai: RISC Zero is a ZKVM based on the RISC-V architecture. Their zkVM can execute arbitrary programs (written in Rust, C++ and other languages compiled to RISC-V) and produce a STARK proof of execution. Bonsai is RISC Zero’s cloud service that provides this proving on demand, acting as a coprocessor for smart contracts. To use it, a developer writes a program (say a function that performs complex math or verifies an off-chain API response), uploads it to the Bonsai service, and deploys a corresponding verifier contract. When the contract needs that computation, it calls the Bonsai relay which triggers the proof generation and returns the result via callback. One example application demonstrated was off-chain governance computation: RISC Zero showed a DAO using Bonsai to tally votes and compute complex voting metrics off-chain, then post a proof so that the on-chain Governor contract could trust the outcome with minimal gas cost. RISC Zero’s technology emphasizes that developers can use familiar programming paradigms – for instance, writing a Rust function to compute something – and the heavy lifting of circuit creation is handled by the zkVM. However, proofs can be large, so as noted earlier they implemented a SNARK compression for on-chain verification. In August 2023 they successfully verified RISC Zero proofs on Ethereum’s Sepolia testnet, costing on the order of 300k gas per proof. This opens the door for Ethereum dApps to use Bonsai today as a scaling and privacy solution. (Bonsai is still in alpha, not production-ready, and uses a temporary SNARK setup without a ceremony.)

  • Others: There are numerous other players and research initiatives. Expansion/Xpansion (as mentioned in a blog) uses an embedded DSL approach, where developers can write queries over on-chain data with a specialized language, and it handles proof generation internally. StarkWare’s Cairo and Polygon’s zkEVM are more general ZK-rollup VMs, but their tech could be repurposed for coprocessor-like use by verifying proofs within L1 contracts. We also see projects in the ZKML (ZK Machine Learning) domain, which effectively act as coprocessors to verify ML model inference or training results on-chain. For example, a zkML setup can prove that “a neural network inference on private inputs produced classification X” without revealing the inputs or doing the computation on-chain. These are special cases of the coprocessor concept applied to AI.

Trust assumptions: ZK-coprocessors rely on the soundness of the cryptographic proofs. If the proof system is secure (and any trusted setup is done honestly), then an accepted proof guarantees the computation was correct. No additional trust in the prover is needed – even a malicious prover cannot convince the verifier of a false statement. However, there is a liveness assumption: someone must actually perform the off-chain computation and produce the proof. In practice this might be a decentralized network (with incentives or fees to do the work) or a single service operator. If no one provides the proof, the on-chain request might remain unresolved. Another subtle trust aspect is data availability for off-chain inputs that aren’t on the blockchain. If the computation depends on some private or external data, the verifier can’t know if that data was honestly provided unless additional measures (like data commitments or oracle signatures) are used. But for purely on-chain data computations, the mechanisms described ensure trustlessness equivalent to the chain itself (Axiom argued their proofs offer “security cryptographically equivalent to Ethereum” for historical queries).

Privacy: Zero-knowledge proofs also inherently support privacy – the prover can keep inputs hidden while proving statements about them. In a coprocessor context, this means a proof can allow a contract to use a result that was derived from private data. For example, a proof might show “user’s credit score > 700, so approve loan” without revealing the actual credit score or raw data. Axiom’s use-case was more about publicly known data (blockchain history), so privacy wasn’t the focus there. But RISC Zero’s zkVM could be used to prove assertions about secret data provided by a user: the data stays off-chain and only the needed outcome goes on-chain. It’s worth noting that unlike FHE, a ZK proof doesn’t usually provide ongoing confidentiality of state – it’s a one-time proof. If a workflow needs to maintain a secret state across transactions, one might build it by having the contract store a commitment to the state, with each proof showing a valid state transition from the old commitment to the new one while the secrets stay hidden. This is essentially how zk-rollups for private transactions (like Aztec or Zcash) work. So ZK coprocessors can facilitate fully private state machines, but the implementation is nontrivial; often they are used for one-off computations where either the input or the output (or both) can be private as needed.
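
A minimal sketch of that commitment-based pattern: the contract stores only a commitment to the hidden state, and each update must come with a proof that some valid (hidden) transition links the old commitment to the new one. The IStateTransitionVerifier interface is hypothetical; in practice it would be a generated SNARK/STARK verifier contract.

```solidity
pragma solidity ^0.8.24;

// Hypothetical verifier for proofs of the statement:
// "there exist secret states s, s' with commit(s) = oldC, commit(s') = newC,
//  and s' is a valid transition from s under the application rules."
interface IStateTransitionVerifier {
    function verify(
        bytes calldata proof,
        bytes32 oldCommitment,
        bytes32 newCommitment
    ) external view returns (bool);
}

contract PrivateStateMachine {
    IStateTransitionVerifier public immutable verifier;
    bytes32 public stateCommitment; // only a commitment ever lives on-chain

    constructor(IStateTransitionVerifier _verifier, bytes32 genesisCommitment) {
        verifier = _verifier;
        stateCommitment = genesisCommitment;
    }

    // Anyone can advance the machine, but only with a proof that the hidden
    // transition was valid; the secret state itself never touches the chain.
    function step(bytes calldata proof, bytes32 newCommitment) external {
        require(
            verifier.verify(proof, stateCommitment, newCommitment),
            "invalid transition proof"
        );
        stateCommitment = newCommitment;
    }
}
```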

Developer experience: Using a ZK-coprocessor typically requires learning new tools. Writing custom circuits (the handcrafted-circuit approach above) is quite complex and usually only done for narrow purposes. Higher-level options like DSLs or zkVMs make life easier but still add overhead: the dev must write and deploy off-chain code and manage the interaction. In contrast to FHE-VM where the encryption is mostly handled behind the scenes and the developer writes normal smart contract code, here the developer needs to partition their logic and possibly write in a different language (Rust, etc.) for the off-chain part. However, initiatives like the Noir, Leo, and Circom DSLs or RISC Zero’s approach are rapidly improving accessibility. For instance, RISC Zero provides templates and Foundry integration such that a developer can simulate their off-chain code locally (for correctness) and then seamlessly hook it into Solidity tests via the Bonsai callback. Over time, we can expect development frameworks that abstract away whether a piece of logic is executed via ZK proof or on-chain – the compiler or tooling might decide based on cost.

FHE-VM vs ZK-Coprocessor: Comparison

Both FHE-VMs and ZK-coprocessors enable a form of “compute on private data with on-chain assurance”, but they differ fundamentally in architecture. The table below summarizes key differences:

| Aspect | FHE-VM (Encrypted On-Chain Execution) | ZK-Coprocessor (Off-Chain Proving) |
| --- | --- | --- |
| Where computation happens | Directly on-chain (all nodes execute homomorphic operations on ciphertexts). | Off-chain (a prover or network executes the program; only a proof is verified on-chain). |
| Data confidentiality | Full encryption: data remains encrypted at all times on-chain; validators never see plaintext. Only holders of decryption keys can decrypt outputs. | Zero-knowledge: the prover's private inputs are never revealed on-chain; the proof reveals no secrets beyond what is in the public outputs. However, any data that must affect on-chain state has to be encoded in the output or a commitment. Secrets remain off-chain by default. |
| Trust model | Trust in consensus execution and cryptography: if a majority of validators follow the protocol, encrypted execution is deterministic and correct. No external trust is needed for computation correctness (all nodes recompute it). Must trust FHE scheme security (typically based on lattice hardness) for privacy. In some designs, must also trust that no sufficiently large coalition of validators colludes to misuse threshold keys. | Trust in the proof system's security (soundness of the SNARK/STARK). If the proof verifies, the result is correct with cryptographic certainty; off-chain provers cannot cheat the math. There is a liveness assumption that provers actually do the work. If a trusted setup is used (e.g. a SNARK SRS), it must have been generated honestly, or a transparent/no-setup system must be used. |
| On-chain cost and scalability | High per-transaction cost: homomorphic ops are extremely expensive computationally, and every node must perform them. Gas costs are high (e.g. 100k+ gas for a single 8-bit addition). Complex contracts are limited by what every validator can compute in a block. Throughput is much lower than normal smart contracts unless specialized hardware is employed. Scalability improves with faster cryptography and hardware acceleration, but fundamentally each operation adds to the chain's workload. | Low verification cost: verifying a succinct proof is efficient and constant-size, so on-chain gas is modest (hundreds of thousands of gas for any size of computation). This decouples complexity from on-chain resource limits: large computations incur no extra on-chain cost, so the approach scales in terms of on-chain load. Off-chain, proving time can be significant (minutes or more for huge tasks) and may require powerful machines, but this does not directly slow the blockchain. Overall throughput can be high as long as proofs are generated in time (potentially by parallel prover networks). |
| Latency | Results are available immediately in the same transaction/block, since computation occurs during execution. No additional round-trips: operation is synchronous. However, slow FHE ops can lengthen block processing and thus increase blockchain latency. | Inherently asynchronous. Typically requires one transaction to request and a later transaction (or callback) to deliver the proof/result. This introduces delay (seconds to hours, depending on proof complexity and proving hardware). Not suitable for instant finality within a single transaction; it behaves more like an async job model. |
| Privacy guarantees | Strong: everything (inputs, outputs, intermediate state) can remain encrypted on-chain. Long-lived encrypted state can be updated across multiple transactions without ever being revealed. Only authorized decryption actions (if any) reveal outputs, and those can be controlled via keys/ACLs. However, side channels such as gas usage or event logs must be managed so they don't leak patterns (fhEVM designs strive for data-oblivious execution with constant gas per operation to avoid leaks). | Selective: the proof reveals whatever is in the public outputs or is necessary to verify (e.g. a commitment to the initial state). Designers can ensure that only the intended result is revealed, with all other inputs hidden in zero knowledge. Unlike FHE, the blockchain typically does not store the hidden state; privacy comes from keeping data off-chain entirely. If persistent private state is needed, the contract may store a cryptographic commitment to it (so each state update reveals a new commitment). Privacy is limited by what you choose to prove; you have the flexibility to prove, for example, that a threshold was met without revealing exact values. |
| Integrity enforcement | By design, all validators recompute the next state homomorphically, so if a malicious actor provides a wrong ciphertext result, others detect the mismatch and consensus fails unless everyone gets the same result. Integrity is thus enforced by redundant execution (like a normal blockchain, just on encrypted data). Additional ZK proofs are often used to enforce business rules (e.g. that a user could not violate a constraint) because validators cannot directly check plaintext conditions. | Integrity is enforced by the verifier contract checking the ZK proof. As long as the proof verifies, the result is guaranteed to be consistent with some valid execution of the off-chain program. No honest-majority assumption is needed for correctness: even a single honest verifier (the contract code itself) suffices. The on-chain contract simply rejects any false or missing proof (much as it would reject an invalid signature). One consideration: if the prover aborts or delays, the contract may need fallback logic (or users may need to retry later), but it will never accept incorrect results. |
| Developer experience | Pros: can largely use familiar smart contract languages (Solidity, etc.) with extensions. Confidentiality is handled by the platform; devs mainly decide what to encrypt and who holds keys. Encrypted and normal contracts can compose, preserving DeFi composability (just with encrypted variables). Cons: must understand FHE limitations, e.g. no direct conditional jumps on secret data without special handling and limited circuit depth (though bootstrapping in TFHE allows arbitrarily long computation at the expense of time). Debugging encrypted logic can be tricky since runtime values cannot be inspected without the key. Key management and permissioning also add complexity to contract design. | Pros: potentially any programming language can be used for the off-chain part (especially with a zkVM), and existing code/libraries can be leveraged in the off-chain program (with caveats for ZK-compatibility). No custom cryptography is needed by the developer when using a general zkVM: they write normal code and get a proof. Heavy computation can also use libraries (e.g. machine learning code) that would never run on-chain. Cons: developers must orchestrate off-chain infrastructure or use a proving service. Handling asynchronous workflows and integrating them with on-chain logic requires more design work (e.g. storing a pending state, waiting for a callback). Writing efficient circuits or zkVM code may require learning new constraints (no floating point, so use fixed-point or special primitives; avoid heavy branching that blows up proving time; optimize constraint counts). There is also the burden of handling proof failures, timeouts, etc., which are not concerns in regular Solidity. The tooling ecosystem is growing, but it is a new paradigm for many. |

Both approaches are actively being improved, and we even see convergence: as noted, ZKPs are used inside FHE-VMs for certain checks, and conversely some researchers propose using FHE to keep prover inputs private in ZK (so a cloud prover doesn’t see your secret data). It’s conceivable future systems will combine them – e.g. performing FHE off-chain and then proving the correctness of that to chain, or using FHE on-chain but ZK-proving to light clients that the encrypted ops were done right. Each technique has strengths: FHE-VM offers continuous privacy and real-time interaction at the cost of heavy computation, whereas ZK-coprocessors offer scalability and flexibility at the cost of latency and complexity.

Use Cases and Implications

The advent of programmable privacy unlocks a wealth of new blockchain applications across industries. Below we explore how FHE-VMs and ZK-coprocessors (or hybrids) can empower various domains by enabling privacy-preserving smart contracts and a secure data economy.

Confidential DeFi and Finance

In decentralized finance, privacy can mitigate front-running, protect trading strategies, and satisfy compliance without sacrificing transparency where needed. Confidential DeFi could allow users to interact with protocols without revealing their positions to the world.

  • Private Transactions and Hidden Balances: Using FHE, one can implement confidential token transfers (encrypted ERC-20 balances and transactions) or shielded pools on a blockchain L1. No observer can see how much of a token you hold or transferred, eliminating the risk of targeted attacks based on holdings. ZK proofs can ensure balances stay in sync and no double-spending occurs (similar to Zcash but on smart contract platforms). An example is a confidential AMM (Automated Market Maker) where pool reserves and trades are encrypted on-chain. Arbitrageurs or front-runners cannot exploit the pool because they can’t observe the price slippage until after the trade is settled, reducing MEV. Only after some delay or via an access-controlled mechanism might some data be revealed for audit. (A minimal sketch of a hidden-balance transfer follows this list.)

  • MEV-Resistant Auctions and Trading: Miners and bots exploit transaction transparency to front-run trades. With encryption, you could have an encrypted mempool or batch auctions where orders are submitted in ciphertext. Only after the auction clears do trades decrypt. This concept, sometimes called Fair Order Flow, can be achieved with threshold decryption (multiple validators collectively decrypt the batch) or by proving auction outcomes via ZK without revealing individual bids. For instance, a ZK-coprocessor could take a batch of sealed bids off-chain, compute the auction clearing price, and output just that price and winners with proofs. This preserves fairness and privacy of losing bids.

  • Confidential Lending and Derivatives: In DeFi lending, users might not want to reveal the size of their loans or collateral (it can affect market sentiment or invite exploitation). An FHE-VM can maintain an encrypted loan book where each loan’s details are encrypted. Smart contract logic can still enforce rules like liquidation conditions by operating on encrypted health factors. If a loan’s collateral ratio falls below threshold, the contract (with help of ZK proofs) can flag it for liquidation without ever exposing exact values – it might just produce a yes/no flag in plaintext. Similarly, secret derivatives or options positions could be managed on-chain, with only aggregated risk metrics revealed. This could prevent copy trading and protect proprietary strategies, encouraging more institutional participation.

  • Compliant Privacy: Not all financial contexts want total anonymity; sometimes selective disclosure is needed for regulation. With these tools, we can achieve regulated privacy: for example, trades are private to the public, but a regulated exchange can decrypt or receive proofs about certain properties. One could prove via ZK that “this trade did not involve a blacklisted address and both parties are KYC-verified” without revealing identities to the chain. This balance could satisfy Anti-Money Laundering (AML) rules while still keeping user identities and positions confidential to everyone else. FHE could allow an on-chain compliance officer contract to scan encrypted transactions for risk signals (with a decryption key accessible only under court order, for instance).
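
Returning to the first bullet, a hidden-balance transfer might look like the sketch below, written in the style of an fhEVM confidential ERC-20. The key trick is branch-free logic: TFHE.le and TFHE.select (names assumed from Zama-style libraries) compute the outcome without ever revealing whether the sender actually had sufficient funds.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract ConfidentialToken {
    mapping(address => euint64) private balances; // encrypted balances

    // Transfer an encrypted amount. Observers (and validators) learn only that
    // a transfer was attempted, not the amount or whether it succeeded.
    function transfer(address to, euint64 amount) public {
        ebool canPay = TFHE.le(amount, balances[msg.sender]);
        // If the sender cannot pay, move an encrypted zero instead of
        // reverting, so even the outcome stays confidential.
        euint64 moved = TFHE.select(canPay, amount, TFHE.asEuint64(0));
        balances[msg.sender] = TFHE.sub(balances[msg.sender], moved);
        balances[to] = TFHE.add(balances[to], moved);
        TFHE.allow(balances[msg.sender], msg.sender); // each party may decrypt
        TFHE.allow(balances[to], to);                 // only their own balance
    }
}
```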

Digital Identity and Personal Data

Identity systems stand to gain significantly from on-chain privacy tech. Currently, putting personal credentials or attributes on a public ledger is impractical due to privacy laws and user reluctance. With FHE and ZK, self-sovereign identity can be realized in a privacy-preserving way:

  • Zero-Knowledge Credentials: Using ZK proofs (already common in some identity projects), a user can prove statements like “I am over 18”, “I have a valid driver’s license”, or “I earn above $50k (for credit scoring)” without revealing any other personal info. ZK-coprocessors can enhance this by handling more complex checks off-chain, e.g. proving a user’s credit score is above a threshold by querying a private credit database in an Axiom-like fashion, outputting only a yes/no to the blockchain.

  • Confidential KYC on DeFi: Imagine a DeFi protocol that by law must ensure users are KYC’d. With FHE-VM, a user’s credentials can be stored encrypted on-chain (or referenced via DID), and a smart contract can perform an FHE computation to verify the KYC info meets requirements. For instance, a contract could homomorphically check that name and SSN in an encrypted user profile match a sanctioned users list (also encrypted), or that the user’s country is not restricted. The contract would only get an encrypted “pass/fail” which can be threshold-decrypted by network validators to a boolean flag. Only the fact that the user is allowed or not is revealed, preserving PII confidentiality and aligning with GDPR principles. This selective disclosure ensures compliance and privacy.

  • Attribute-Based Access and Selective Disclosure: Users could hold a bunch of verifiable credentials (age, citizenship, skills, etc.) as encrypted attributes. They can authorize certain dApps to run computations on them without disclosing everything. For example, a decentralized recruitment DApp could filter candidates by performing searches on encrypted resumes (using FHE) – e.g. count years of experience, check for a certification – and only if a match is found, contact the candidate off-chain. The candidate’s private details remain encrypted unless they choose to reveal. ZK proofs can also let users selectively prove they possess a combination of attributes (e.g. over 21 and within a certain ZIP code) without revealing the actual values.

  • Multi-Party Identity Verification: Sometimes a user’s identity needs to be vetted by multiple parties (say, background check by company A, credit check by company B). With homomorphic and ZK tools, each verifier could contribute an encrypted score or approval, and a smart contract can aggregate these to a final decision without exposing individual contributions. For instance, three agencies provide encrypted “pass/fail” bits, and the contract outputs an approval if all three are passes – the user or relying party only sees the final outcome, not which specific agency might have failed them, preserving privacy of the user’s record at each agency. This can reduce bias and stigma associated with, say, one failed check revealing a specific issue.
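
A sketch of that aggregation using encrypted booleans in an fhEVM-style contract (the ebool type and TFHE.and are assumed names): each agency submits an encrypted pass/fail bit, and only the combined verdict is ever made decryptable.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract MultiPartyVetting {
    mapping(address => bool) public isAgency;     // the three approved verifiers
    mapping(address => bool) public hasSubmitted; // prevent double submissions
    ebool private verdict;                        // encrypted running AND of approvals
    uint8 public submissions;

    constructor(address a, address b, address c) {
        isAgency[a] = true;
        isAgency[b] = true;
        isAgency[c] = true;
        verdict = TFHE.asEbool(true); // trivial encryption of "true" (assumed helper)
    }

    // Each agency contributes an encrypted pass/fail bit; no one, including
    // the other agencies, ever learns an individual contribution.
    function submit(ebool passed) external {
        require(isAgency[msg.sender], "not an agency");
        require(!hasSubmitted[msg.sender], "already submitted");
        hasSubmitted[msg.sender] = true;
        verdict = TFHE.and(verdict, passed);
        submissions += 1;
        if (submissions == 3) {
            // Only the combined verdict is made decryptable (e.g. to the
            // relying party) via the ACL / threshold-decryption mechanism.
            TFHE.allowThis(verdict);
        }
    }
}
```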

Healthcare and Sensitive Data Sharing

Healthcare data is highly sensitive and regulated, yet combining data from multiple sources can unlock huge value (for research, insurance, personalized medicine). Blockchain could provide a trust layer for data exchange if privacy is solved. Confidential smart contracts could enable new health data ecosystems:

  • Secure Medical Data Exchange: Patients could store references to their medical records on-chain in encrypted form. An FHE-enabled contract could allow a research institution to run analytics on a cohort of patient data without decrypting it. For example, a contract could compute the average efficacy of a drug across encrypted patient outcomes. Only aggregate statistical results come out decrypted (and perhaps only if a minimum number of patients is included, to prevent re-identification). Patients could receive micropayments for contributing their encrypted data to research, knowing that their privacy is preserved because even the blockchain and researchers only see ciphertext or aggregate proofs. This fosters a data marketplace for healthcare that respects privacy.

  • Privacy-Preserving Insurance Claims: Health insurance claims processing could be automated via smart contracts that verify conditions on medical data without exposing the data to the insurer. A claim could include an encrypted diagnosis code and encrypted treatment cost; the contract, using FHE, checks policy rules (e.g. coverage, deductible) on that encrypted data. It could output an approval and payment amount without ever revealing the actual diagnosis to the insurer’s blockchain (only the patient and doctor had the key). ZK proofs might be used to show that the patient’s data came from a certified hospital’s records (using something like Axiom to verify a hospital’s signature or record inclusion) without revealing the record itself. This ensures patient privacy while preventing fraud.

  • Genomic and Personal Data Computation: Genomic data is extremely sensitive (it’s literally one’s DNA blueprint). However, analyzing genomes can provide valuable health insights. Companies could use FHE-VM to perform computations on encrypted genomes uploaded by users. For instance, a smart contract could run a gene-environment risk model on encrypted genomic data and encrypted environmental data (from wearables perhaps), outputting a risk score that only the user can decrypt. The logic (maybe a polygenic risk score algorithm) is coded in the contract and runs homomorphically, so the genomic data never appears in plain. This way, users get insights without giving companies raw DNA data – mitigating both privacy and data ownership concerns.

  • Epidemiology and Public Health: During situations like pandemics, sharing data is vital for modeling disease spread, but privacy laws can hinder data sharing. ZK coprocessors could allow public health authorities to submit queries like “How many people in region X tested positive in last 24h?” to a network of hospitals’ data via proofs. Each hospital keeps patient test records off-chain but can prove to the authority’s contract the count of positives without revealing who. Similarly, contact tracing could be done by matching encrypted location trails: contracts can compute intersections of encrypted location histories of patients to identify hotspots, outputting only the hotspot locations (and perhaps an encrypted list of affected IDs that only health dept can decrypt). The raw location trails of individuals remain private.

Data Marketplaces and Collaboration

The ability to compute on data without revealing it opens new business models around data sharing. Entities can collaborate on computations knowing their proprietary data will not be exposed:

  • Secure Data Marketplaces: Sellers can make data available in encrypted form on a blockchain marketplace. Buyers can pay to run specific analytics or machine learning models on the encrypted dataset via a smart contract, getting either the trained model or aggregated results. The seller’s raw data is never revealed to the buyer or the public – the buyer might only receive a model (which still might leak some info in weights, but techniques like differential privacy or controlling output granularity can mitigate this). ZK proofs can ensure the buyer that the computation was done correctly over the promised dataset (e.g. the seller can’t cheat by running the model on dummy data because the proof ties it to the committed encrypted dataset). This scenario encourages data sharing: for instance, a company could monetize user behavior data by allowing approved algorithms to run on it under encryption, without giving away the data itself.

  • Federated Learning & Decentralized AI: In decentralized machine learning, multiple parties (e.g. different companies or devices) want to jointly train a model on their combined data without sharing data with each other. FHE-VMs excel here: they can enable federated learning where each party’s model updates are homomorphically aggregated by a contract. Because the updates are encrypted, no participant learns others’ contributions. The contract could even perform parts of the training loop (like gradient descent steps) on-chain under encryption, producing an updated model that only authorized parties can decrypt. ZK can complement this by proving that each party’s update was computed following the training algorithm (preventing a malicious participant from poisoning the model). This means a global model can be trained with full auditability on-chain, yet the training data of each contributor remains private. Use cases include jointly training fraud detection models across banks or improving AI assistants using data from many users without centralizing the raw data.

  • Cross-Organizational Analytics: Consider two companies that want to find their intersection of customers for a partnership campaign without exposing their entire customer lists to each other. They could each encrypt their customer ID lists and upload a commitment. An FHE-enabled contract can compute the intersection on the encrypted sets (using techniques like private set intersection via FHE). The result could be an encrypted list of common customer IDs that only a mutually trusted third-party (or the customers themselves, via some mechanism) can decrypt. Alternatively, a ZK approach: one party proves to the other in zero-knowledge that “we have N customers in common and here is an encryption of those IDs” with a proof that the encryption indeed corresponds to common entries. This way, they can proceed with a campaign to those N customers without ever exchanging their full lists in plaintext. Similar scenarios: computing supply chain metrics across competitors without revealing individual supplier details, or banks collating credit info without sharing full client data.

  • Secure Multi-Party Computation (MPC) on Blockchain: FHE and ZK essentially bring MPC concepts on-chain. Complex business logic spanning multiple organizations can be encoded in a smart contract such that each org’s inputs are secret-shared or encrypted. The contract (as an MPC facilitator) produces outputs like profit splits, cost calculations, or joint risk assessments that everyone can trust. For example, suppose several energy companies want to settle a power-trading marketplace. They could feed their encrypted bids and offers into a smart contract auction; the contract computes the clearing prices and allocations on encrypted bids, and outputs each company’s allocation and cost just to that company (via encryption to their public key). No company sees others’ bids, protecting competitive info, but the auction result is fair and verifiable. This combination of blockchain transparency and MPC privacy could transform enterprise consortia that currently rely on trusted third parties.
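
As a much-simplified, single-item version of that last scenario, the sketch below tracks the highest encrypted bid and its bidder homomorphically, so no bid is ever visible on-chain. TFHE.gt and TFHE.select are assumed fhEVM-style primitives, and bidder bookkeeping is reduced to a public index.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract SealedBidAuction {
    euint64 private highestBid;  // encrypted running maximum
    euint64 private winnerIndex; // encrypted index of the current leader
    uint64 public nextIndex;     // public count of bids received

    // Submit an encrypted bid: competitors cannot see the amount, and the
    // running maximum is updated without any plaintext comparison.
    // (A real auction would also record msg.sender per index and enforce a deadline.)
    function bid(euint64 amount) external {
        ebool isHigher = TFHE.gt(amount, highestBid);
        highestBid = TFHE.select(isHigher, amount, highestBid);
        winnerIndex = TFHE.select(isHigher, TFHE.asEuint64(nextIndex), winnerIndex);
        nextIndex += 1;
    }

    // After close, only the winning index (and optionally the clearing price)
    // would be decrypted via the threshold mechanism; losing bids stay encrypted.
}
```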

Decentralized Machine Learning (ZKML and FHE-ML)

Bringing machine learning to blockchains in a verifiable and private way is an emerging frontier:

  • Verifiable ML Inference: Using ZK proofs, one can prove that “a machine learning model f, when given input x, produces output y” without revealing either x (if it’s private data) or the inner workings of f (if the model weights are proprietary). This is crucial for AI services on blockchain – e.g., a decentralized AI oracle that provides predictions or classifications. A ZK-coprocessor can run the model off-chain (since models can be large and expensive to evaluate) and post a proof of the result. For instance, an oracle could prove the statement “The satellite image provided shows at least 50% tree cover” to support a carbon credit contract, without revealing the satellite image or possibly even the model. This is known as ZKML and projects are working on optimizing circuit-friendly neural nets. It ensures the integrity of AI outputs used in smart contracts (no cheating or arbitrary outputs) and can preserve confidentiality of the input data and model parameters.

  • Training with Privacy and Auditability: Training an ML model is even more computation-intensive, but if achievable, it would allow blockchain-based model marketplaces. Multiple data providers could contribute to training a model under FHE so that the training algorithm runs on encrypted data. The result might be an encrypted model that only the buyer can decrypt. Throughout training, ZK proofs could be supplied periodically to prove that the training was following the protocol (preventing a malicious trainer from inserting a backdoor, for example). While fully on-chain ML training is far off given costs, a hybrid approach could use off-chain compute with ZK proofs for critical parts. One could imagine a decentralized Kaggle-like competition where participants train models on private datasets and submit ZK proofs of the model’s accuracy on encrypted test data to determine a winner – all without revealing the datasets or the test data.

  • Personalized AI and Data Ownership: With these technologies, users could retain ownership of their personal data and still benefit from AI. For example, a user’s mobile device could use FHE to encrypt their usage data and send it to an analytics contract which computes a personalized AI model (like a recommendation model) just for them. The model is encrypted and only the user’s device can decrypt and use it locally. The platform (maybe a social network) never sees the raw data or model, but the user gets the AI benefit. If the platform wants aggregated insights, it could request ZK proofs of certain aggregate patterns from the contract without accessing individual data.

Additional Areas

  • Gaming: On-chain games often struggle with hiding secret information (e.g. hidden card hands, fog-of-war in strategy games). FHE can enable hidden state games where the game logic runs on encrypted state. For example, a poker game contract could shuffle and deal encrypted cards; players get decryptions of their own cards, but the contract and others only see ciphertext. Betting logic can use ZK proofs to ensure a player isn’t bluffing about an action (or to reveal the winning hand at the end in a verifiably fair way). Similarly, random seeds for NFT minting or game outcomes can be generated and proven fair without exposing the seed (preventing manipulation). This can greatly enhance blockchain gaming, allowing it to support the same dynamics as traditional games.

  • Voting and Governance: DAOs could use privacy tech for secret ballots on-chain, eliminating vote buying and pressure. FHE-VM could tally votes that are cast in encrypted form, and only final totals are decrypted. ZK proofs can assure each vote was valid (came from an eligible voter, who hasn’t voted twice) without revealing who voted for what. This provides verifiability (everyone can verify the proofs and tally) while keeping individual votes secret – crucial for unbiased governance.

  • Secure Supply Chain and IoT: In supply chains, partners might want to share proof of certain properties (origin, quality metrics) without exposing full details to competitors. For instance, an IoT sensor on a food shipment could continuously send encrypted temperature data to a blockchain. A contract could use FHE to check if the temperature stayed in a safe range throughout transit. If a threshold was exceeded, it can trigger an alert or penalty, but it doesn’t have to reveal the entire temperature log publicly – maybe only a proof or an aggregate like “90th percentile temp”. This builds trust in supply chain automation while respecting confidentiality of process data.
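
For instance, the cold-chain check in that last bullet could look like the sketch below: each encrypted reading is folded into an encrypted "breached" flag, and only that single bit ever needs to be decrypted. TFHE.gt, TFHE.or, and the trivial-encryption helpers are assumed fhEVM-style names.

```solidity
pragma solidity ^0.8.24;

import "fhevm/lib/TFHE.sol"; // assumed import path

contract ColdChainMonitor {
    euint16 private maxSafeTemp; // encrypted threshold, e.g. tenths of a degree
    ebool private breached;      // encrypted "was the limit ever exceeded?"

    constructor(uint16 thresholdTenths) {
        maxSafeTemp = TFHE.asEuint16(thresholdTenths); // trivially encrypt the threshold
        breached = TFHE.asEbool(false);                // assumed helper for a trivial ebool
    }

    // The IoT gateway pushes encrypted readings; the full temperature log is
    // never revealed, only folded into the encrypted breach flag.
    function report(euint16 readingTenths) external {
        breached = TFHE.or(breached, TFHE.gt(readingTenths, maxSafeTemp));
    }

    // At delivery, the auditor (insurer, buyer) is granted the right to
    // decrypt just the breach flag, not the individual readings.
    function finalize(address auditor) external {
        TFHE.allow(breached, auditor);
    }
}
```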

Each of these use cases leverages the core ability: compute on or verify data without revealing the data. This capability can fundamentally change how we handle sensitive information in decentralized systems. It reduces the trade-off between transparency and privacy that has limited blockchain adoption in areas dealing with private data.

Conclusion

Blockchain technology is entering a new era of programmable privacy, where data confidentiality and smart contract functionality go hand in hand. The paradigms of FHE-VM and ZK-coprocessors, while technically distinct, both strive to expand the scope of blockchain applications by decoupling what we can compute from what we must reveal.

Fully Homomorphic Encryption Virtual Machines keep computations on-chain and encrypted, preserving decentralization and composability but demanding advances in efficiency. Zero-Knowledge coprocessors shift heavy lifting off-chain, enabling virtually unbounded computation under cryptographic guarantees, and are already proving their worth in scaling and enhancing Ethereum. The choice between them (and hybrids thereof) will depend on the use case: if real-time interaction with private state is needed, an FHE approach might be more suitable; if extremely complex computation or integration with existing code is required, a ZK-coprocessor might be the way to go. In many cases, they are complementary – indeed, we see ZK proofs bolstering FHE integrity, and FHE potentially helping ZK by handling private data for provers.

For developers, these technologies will introduce new design patterns. We will think in terms of encrypted variables and proof verification as first-class elements of dApp architecture. Tooling is rapidly evolving: high-level languages and SDKs are abstracting away cryptographic details (e.g. Zama’s libraries making FHE types as easy as native types, or RISC Zero’s templates for proof requests). In a few years, writing a confidential smart contract could feel almost as straightforward as writing a regular one, just with privacy “built-in” by default.

The implications for the data economy are profound. Individuals and enterprises will be more willing to put data or logic on-chain when they can control its visibility. This can unlock cross-organization collaborations, new financial products, and AI models that were previously untenable due to privacy concerns. Regulators, too, may come to embrace these techniques as they allow compliance checks and audits via cryptographic means (e.g. proving taxes are paid correctly on-chain without exposing all transactions).

We are still in the early days – current FHE-VM prototypes have performance limits, and ZK proofs, while much faster than before, can still be a bottleneck for extremely complex tasks. But continuous research and engineering efforts (including specialized hardware, as evidenced by companies like Optalysys pushing optical FHE acceleration) are quickly eroding these barriers. The funding pouring into this space (e.g. Zama’s unicorn status, Paradigm’s investment in Axiom) underscores a strong belief that privacy features will be as fundamental to Web3 as transparency was to Web1/2.

In conclusion, programmable privacy via FHE-VMs and ZK-coprocessors heralds a new class of dApps that are trustless, decentralized, and confidential. From DeFi trades that reveal no details, to health research that protects patient data, to machine learning models trained across the world without exposing raw data – the possibilities are vast. As these technologies mature, blockchain platforms will no longer force the trade-off between utility and privacy, enabling broader adoption in industries that require confidentiality. The future of Web3 is one where *users and organizations can confidently transact and compute with sensitive data on-chain, knowing the blockchain will verify integrity while keeping their secrets safe*.

Sources: The information in this report is drawn from technical documentation and recent research blogs of leading projects in this space, including Cypher’s and Zama’s FHEVM documentation, detailed analyses from Trail of Bits on Axiom’s circuits, RISC Zero’s developer guides and blog posts, as well as industry articles highlighting use cases of confidential blockchain tech. These sources and more have been cited throughout to provide further reading and evidence for the described architectures and applications.