The Trust Gap in On-Chain AI
We keep hearing about AI agents managing DeFi positions, running prediction markets, and generating oracle feeds. But here is the uncomfortable question nobody is answering well enough: how do you actually trust that the AI did what it claims?
Right now, most on-chain AI systems work like this: an off-chain model runs inference, produces an output, and pushes that result on-chain. Maybe there is a multisig. Maybe there is a reputation score for the operator. But at its core, you are trusting someone who says “my model produced X” without any cryptographic evidence that this is true. You cannot verify the model architecture. You cannot verify the weights. You cannot verify the input data. You are essentially taking their word for it.
This is where Zero-Knowledge Machine Learning (ZK-ML) enters the picture, and I believe it is the single most important primitive for making on-chain AI trustworthy.
What ZK-ML Actually Does
At a high level, ZK-ML lets you translate a neural network’s forward pass into a cryptographic arithmetic circuit. When the model runs inference, it generates a zero-knowledge proof (typically a zk-SNARK or zk-STARK) that mathematically certifies: this specific output was derived from this specific model architecture and these specific weights, applied to this specific input. The verifier can confirm this statement without ever seeing the model weights or the input data.
Think of it as a sealed envelope. You hand someone a sealed envelope containing your model and data. The ZK proof is a certificate attached to the outside that says “the answer inside is 42, and I can prove the computation was done correctly,” without anyone ever opening the envelope.
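A toy sketch of that envelope, to make the interface concrete. Everything here is illustrative, not a real library: `fake_prove` and `fake_verify` stand in for a SNARK prover and verifier (in practice you would use a framework such as EZKL), and the hash commitments show what the proof binds together, while the actual zero-knowledge machinery is elided.

```python
import hashlib
import json

def commit(obj) -> str:
    """Hash commitment to weights or inputs (stand-in for a real commitment scheme)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def infer(weights, x):
    """A trivial 'model': a dot product. Stands in for a full forward pass."""
    return sum(w * xi for w, xi in zip(weights, x))

def fake_prove(weights, x):
    """Pretend-prover: a real SNARK would certify this statement cryptographically,
    and the verifier would never see the raw weights or inputs."""
    return {
        "model_commitment": commit(weights),
        "input_commitment": commit(x),
        "claimed_output": infer(weights, x),
    }

def fake_verify(proof, expected_model_commitment) -> bool:
    # The verifier checks the proof against the publicly committed model,
    # without opening the 'envelope' (weights and inputs stay private).
    return proof["model_commitment"] == expected_model_commitment

weights = [0.5, -1.0, 2.0]
public_commitment = commit(weights)          # published on-chain ahead of time
proof = fake_prove(weights, [1.0, 2.0, 3.0])
assert fake_verify(proof, public_commitment)
```

The point of the sketch is the shape of the statement: one commitment pins the model, one pins the input, and the proof ties both to the claimed output.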
The key technical challenge has been the conversion from floating-point arithmetic (what ML models use) to finite field arithmetic (what ZK proofs use). This quantization step necessarily introduces precision loss. Frameworks like EZKL handle this by exporting models to ONNX format and then compiling them into Halo2-based proof circuits. You define your acceptable precision tolerance and the circuit handles the rest.
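A minimal illustration of that quantization step, assuming a plain fixed-point scheme with a tunable scale; real frameworks choose field encodings more carefully, but the precision trade-off looks like this:

```python
# Fixed-point quantization sketch: the float -> integer step that circuit
# arithmetic requires, and the bounded precision loss it introduces.
SCALE = 2 ** 12  # fractional bits; raising this tightens the error bound

def quantize(x: float) -> int:
    return round(x * SCALE)

def dequantize(q: int) -> float:
    return q / SCALE

weights = [0.1234567, -0.9876543, 3.1415926]
roundtrip = [dequantize(quantize(w)) for w in weights]
errors = [abs(w - r) for w, r in zip(weights, roundtrip)]

# Worst-case rounding error per value is bounded by 1 / (2 * SCALE)
assert all(e <= 1 / (2 * SCALE) for e in errors)
```

Choosing `SCALE` is exactly the “acceptable precision tolerance” decision: more fractional bits mean less drift from the float model, but larger circuit values.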
Where We Are in 2026
The progress in the last 18 months has been substantial. A few highlights that changed my perspective:
- Recursive SNARKs are becoming standard. Most ZK-ML frameworks now support proof folding, which means proof size does not scale linearly with model complexity anymore. A ResNet-50 inference proof that used to be over 1 GB can now compress to under 100 KB. This makes on-chain verification economically feasible.
- Proof generation times are dropping into practical territory. We are seeing 1-5 second proof generation for medium-complexity models, which means a DeFi agent can observe the market, compute a decision, generate a proof, and execute a trade all within a single block window on many chains.
- Lagrange’s DeepProve and similar systems now offer dynamic zk-SNARKs that can efficiently update proofs when underlying data changes, without recomputing from scratch. This is critical for continuously running AI agents.
- On-chain verification is getting cheaper. Stellar just shipped native Groth16 zk-SNARK verification in smart contracts using BN254 cryptography. Ethereum L2s with built-in ZK verification have been doing this for a while, and the trend is clearly toward every major chain supporting native proof verification.
The Three Killer Use Cases
1. Trustless Oracle Feeds Powered by AI
Imagine an oracle that aggregates data from multiple sources, runs a trained anomaly detection model to filter out manipulation attempts, and produces a price feed. With ZK-ML, the oracle can prove it ran the exact model it committed to, on the actual data it received, and produced the reported output. No trust in the operator needed. This completely changes the oracle trust model from reputation-based to proof-based.
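To make the flow concrete, here is a toy version of one oracle round. The z-score filter below is a stand-in for a trained anomaly detection model, and the hash receipt is a stand-in for the ZK proof a real deployment would attach; none of this is a real oracle implementation.

```python
import hashlib
import json
import statistics

def filter_outliers(prices, z_cut=2.0):
    """Drop quotes more than z_cut standard deviations from the mean."""
    mu = statistics.mean(prices)
    sd = statistics.pstdev(prices) or 1e-9  # avoid division by zero
    return [p for p in prices if abs(p - mu) / sd <= z_cut]

def oracle_round(prices):
    """Aggregate quotes, filter manipulation attempts, emit feed + receipt."""
    kept = filter_outliers(prices)
    feed = statistics.median(kept)
    receipt = hashlib.sha256(
        json.dumps({"inputs": prices, "output": feed}, sort_keys=True).encode()
    ).hexdigest()
    return feed, receipt

# Seven honest sources plus one manipulated quote.
quotes = [100.1, 100.2, 99.9, 100.0, 100.1, 99.8, 100.2, 250.0]
feed, receipt = oracle_round(quotes)  # the 250.0 quote is filtered out
```

With ZK-ML, the receipt would instead be a proof that this exact filter (the committed model) ran on these exact quotes and produced this feed.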
2. Verifiable Prediction Markets
Prediction markets that use AI resolution agents currently depend on the honesty of whoever runs the model. With ZK-ML, the resolution model’s architecture and weights can be committed on-chain at market creation. When resolution time comes, the operator runs the model on the relevant data and generates a ZK proof that the resolution matches the committed model. Anyone can verify. No disputes needed.
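The commit-at-creation, verify-at-resolution flow can be sketched with a plain hash commitment. The part this toy omits is exactly the ZK proof that the inference itself was honest; the names and model here are hypothetical.

```python
import hashlib
import json

def commit_model(arch: str, weights) -> str:
    """Binding commitment to the resolution model's architecture and weights."""
    blob = json.dumps({"arch": arch, "weights": weights}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

# Market creation: the commitment goes on-chain.
committed = commit_model("logreg-v1", [0.3, -0.7, 1.2])

# Resolution time: the operator must reveal a model matching the commitment.
def verify_resolution(arch, weights, onchain_commitment) -> bool:
    return commit_model(arch, weights) == onchain_commitment

assert verify_resolution("logreg-v1", [0.3, -0.7, 1.2], committed)
assert not verify_resolution("logreg-v1", [0.3, -0.7, 1.3], committed)  # tampered weights
```

The commitment alone only proves *which* model was revealed; the ZK proof is what upgrades this to “and it was actually run on the resolution data.”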
3. Auditable Automated Trading
This is the one I am most excited about. DeFi protocols increasingly use AI for rebalancing, liquidation decisions, and risk parameter adjustments. ZK-ML allows every single decision to carry a cryptographic receipt proving it came from the approved model, not from some backdoor or manual override. The proof generates in 1-5 seconds, the on-chain verification costs a fraction of a regular transaction, and the entire audit trail is immutable.
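A minimal sketch of what such an audit trail could look like, using a hash chain so earlier receipts cannot be rewritten without breaking every later entry. The `proof_id` field is a hypothetical placeholder for the attached ZK proof; this is an illustration of the chaining idea, not a production log.

```python
import hashlib
import json

def append_receipt(log, decision, proof_id):
    """Append a decision receipt that links to the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64  # genesis entry links to zeros
    entry = {"decision": decision, "proof_id": proof_id, "prev": prev}
    # Hash covers the decision, the proof reference, and the back-link.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return log

log = []
append_receipt(log, {"action": "rebalance", "weight": 0.6}, "proof-001")
append_receipt(log, {"action": "liquidate", "position": 42}, "proof-002")
assert log[1]["prev"] == log[0]["hash"]  # the chain links hold
```

On-chain, the chaining comes for free from the ledger itself; the point is that each receipt carries both the decision and a reference to its inference proof.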
The Hard Problems That Remain
I do not want to oversell this. There are real limitations:
- Large language models are still out of reach. We can prove inference for CNNs, small transformers, and various regression or classification models. But GPT-class models with billions of parameters remain computationally infeasible for ZK proving. The proving time and memory requirements scale roughly quadratically with parameter count.
- The quantization accuracy trade-off is non-trivial. Converting from float32 to the fixed-point arithmetic that ZK circuits require means your proved model may produce slightly different outputs than the original. For some applications this is fine. For high-precision financial models, the error bounds need careful analysis.
- Tooling maturity is still early. EZKL and a handful of other frameworks are usable, but the developer experience is nowhere near as smooth as deploying a regular smart contract. Circuit compilation times can be long, debugging is painful, and the documentation assumes deep familiarity with both ML and cryptography.
What I Want to Discuss
I have been building ZK-ML circuits for verifiable inference on DeFi risk models, and the results are promising but the engineering effort is significant. I am curious about this community’s experience and thinking:
- For those building AI agents on-chain: what trust model are you currently using, and would ZK proofs of inference change your architecture?
- For oracle builders: how do you see ZK-ML fitting into existing oracle networks? Complementary to existing validator staking, or a replacement?
- For DeFi protocol designers: what is the minimum proof generation time that would make ZK-ML practical for your use case?
The gap between “AI on blockchain” marketing and actual verifiable AI is enormous right now. ZK-ML is, in my view, the only technology that can credibly close it. But we need the infrastructure, the tooling, and, most importantly, demand from protocol teams to make it happen.
Would love to hear from builders who are working at this intersection or considering it.