Connecting AI and Web3 through MCP: A Panoramic Analysis

· 43 min read
Dora Noda
Software Engineer

Introduction

AI and Web3 are converging in powerful ways, with AI general interfaces now envisioned as a connective tissue for the decentralized web. A key concept emerging from this convergence is MCP, which variously stands for “Model Context Protocol” (as introduced by Anthropic) or is loosely described as a Metaverse Connection Protocol in broader discussions. In essence, MCP is a standardized framework that lets AI systems interface with external tools and networks in a natural, secure way – potentially “plugging in” AI agents to every corner of the Web3 ecosystem. This report provides a comprehensive analysis of how AI general interfaces (like large language model agents and neural-symbolic systems) could connect everything in the Web3 world via MCP, covering the historical background, technical architecture, industry landscape, risks, and future potential.

1. Development Background

1.1 Web3’s Evolution and Unmet Promises

The term “Web3” was coined around 2014 to describe a blockchain-powered decentralized web. The vision was ambitious: a permissionless internet centered on user ownership. Enthusiasts imagined replacing Web2’s centralized infrastructure with blockchain-based alternatives – e.g. Ethereum Name Service (for DNS), Filecoin or IPFS (for storage), and DeFi for financial rails. In theory, this would wrest control from Big Tech platforms and give individuals self-sovereignty over data, identity, and assets.

Reality fell short. Despite years of development and hype, the mainstream impact of Web3 remained marginal. Average internet users did not flock to decentralized social media or start managing private keys. Key reasons included poor user experience, slow and expensive transactions, high-profile scams, and regulatory uncertainty. The decentralized “ownership web” largely “failed to materialize” beyond a niche community. By the mid-2020s, even crypto proponents admitted that Web3 had not delivered a paradigm shift for the average user.

Meanwhile, AI was undergoing a revolution. As capital and developer talent pivoted from crypto to AI, transformative advances in deep learning and foundation models (GPT-3, GPT-4, etc.) captured public imagination. Generative AI demonstrated clear utility – producing content, code, and decisions – in a way crypto applications had struggled to do. In fact, the impact of large language models in just a couple of years starkly outpaced a decade of blockchain’s user adoption. This contrast led some to quip that “Web3 was wasted on crypto” and that the real Web 3.0 is emerging from the AI wave.

1.2 The Rise of AI General Interfaces

Over decades, user interfaces evolved from static web pages (Web1.0) to interactive apps (Web2.0) – but always within the confines of clicking buttons and filling forms. With modern AI, especially large language models (LLMs), a new interface paradigm is here: natural language. Users can simply express intent in plain language and have AI systems execute complex actions across many domains. This shift is so profound that some suggest redefining “Web 3.0” as the era of AI-driven agents (“the Agentic Web”) rather than the earlier blockchain-centric definition.

However, early experiments with autonomous AI agents exposed a critical bottleneck. These agents – e.g. prototypes like AutoGPT – could generate text or code, but they lacked a robust way to communicate with external systems and each other. There was “no common AI-native language” for interoperability. Each integration with a tool or data source was a bespoke hack, and AI-to-AI interaction had no standard protocol. In practical terms, an AI agent might have great reasoning ability but fail at executing tasks that required using web apps or on-chain services, simply because it didn’t know how to talk to those systems. This mismatch – powerful brains, primitive I/O – was akin to having super-smart software stuck behind a clumsy GUI.

1.3 Convergence and the Emergence of MCP

By 2024, it became evident that for AI to reach its full potential (and for Web3 to fulfill its promise), a convergence was needed: AI agents require seamless access to the capabilities of Web3 (decentralized apps, contracts, data), and Web3 needs more intelligence and usability, which AI can provide. This is the context in which MCP (Model Context Protocol) was born. Introduced by Anthropic in late 2024, MCP is an open standard for AI-tool communication that feels natural to LLMs. It provides a structured, discoverable way for AI “hosts” (like ChatGPT, Claude, etc.) to find and use a variety of external tools and resources via MCP servers. In other words, MCP is a common interface layer enabling AI agents to plug into web services, APIs, and even blockchain functions, without custom-coding each integration.

Think of MCP as “the USB-C of AI interfaces”. Just as USB-C standardized how devices connect (so you don’t need different cables for each device), MCP standardizes how AI agents connect to tools and data. Rather than hard-coding different API calls for every service (Slack vs. Gmail vs. Ethereum node), a developer can implement the MCP spec once, and any MCP-compatible AI can understand how to use that service. Major AI players quickly saw the importance: Anthropic open-sourced MCP, and companies like OpenAI and Google are building support for it in their models. This momentum suggests MCP (or similar “Meta Connectivity Protocols”) could become the backbone that finally connects AI and Web3 in a scalable way.
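As a rough illustration of "implement the spec once," the sketch below registers a single tool on an MCP server. It assumes the TypeScript MCP SDK (`@modelcontextprotocol/sdk`) and `zod`; the exact import paths and method signatures may differ between SDK versions, so treat it as a shape rather than a reference implementation.

```typescript
// Minimal sketch of an MCP server exposing one tool (assumes the TypeScript
// MCP SDK; import paths and signatures may differ between SDK versions).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "example-service", version: "0.1.0" });

// Any MCP-compatible AI host that connects to this server can discover and
// call "echo" without custom integration code on the AI side.
server.tool(
  "echo",
  { message: z.string().describe("Text to echo back") },
  async ({ message }) => ({
    content: [{ type: "text", text: `You said: ${message}` }],
  })
);

// Expose the server over stdio so a local AI host can attach to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Once a service is wrapped this way, every MCP-aware model can discover its tools and call them through the same protocol, which is exactly the "one cable, many devices" property the USB-C analogy points at.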

Notably, some technologists argue that this AI-centric connectivity is the real realization of Web3.0. In Simba Khadder’s words, “MCP aims to standardize an API between LLMs and applications,” akin to how REST APIs enabled Web 2.0 – meaning Web3’s next era might be defined by intelligent agent interfaces rather than just blockchains. Instead of decentralization for its own sake, the convergence with AI could make decentralization useful, by hiding complexity behind natural language and autonomous agents. The remainder of this report delves into how, technically and practically, AI general interfaces (via protocols like MCP) can connect everything in the Web3 world.

2. Technical Architecture: AI Interfaces Bridging Web3 Technologies

Embedding AI agents into the Web3 stack requires integration at multiple levels: blockchain networks and smart contracts, decentralized storage, identity systems, and token-based economies. AI general interfaces – from large foundation models to hybrid neural-symbolic systems – can serve as a “universal adapter” connecting these components. Below, we analyze the architecture of such integration:

**Figure:** A conceptual diagram of MCP’s architecture, showing how AI hosts (LLM-based apps like Claude or ChatGPT) use an MCP client to plug into various MCP servers. Each server provides a bridge to some external tool or service (e.g. Slack, Gmail, calendars, or local data), analogous to peripherals connecting via a universal hub. This standardized MCP interface lets AI agents access remote services and on-chain resources through one common protocol.

2.1 AI Agents as Web3 Clients (Integrating with Blockchains)

At the core of Web3 are blockchains and smart contracts – decentralized state machines that can enforce logic in a trustless manner. How can an AI interface engage with these? There are two directions to consider:

  • AI reading from blockchain: An AI agent may need on-chain data (e.g. token prices, user’s asset balance, DAO proposals) as context for its decisions. Traditionally, retrieving blockchain data requires interfacing with node RPC APIs or subgraph databases. With a framework like MCP, an AI can query a standardized “blockchain data” MCP server to fetch live on-chain information. For example, an MCP-enabled agent could ask for the latest transaction volume of a certain token, or the state of a smart contract, and the MCP server would handle the low-level details of connecting to the blockchain and return the data in a format the AI can use. This increases interoperability by decoupling the AI from any specific blockchain’s API format.

  • AI writing to blockchain: More powerfully, AI agents can execute smart contract calls or transactions through Web3 integrations. An AI could, for instance, autonomously execute a trade on a decentralized exchange or adjust parameters in a smart contract if certain conditions are met. This is achieved by the AI invoking an MCP server that wraps blockchain transaction functionality. One concrete example is the thirdweb MCP server for EVM chains, which allows any MCP-compatible AI client to interact with Ethereum, Polygon, BSC, etc. by abstracting away chain-specific mechanics. Using such a tool, an AI agent could trigger on-chain actions “without human intervention”, enabling autonomous dApps – for instance, an AI-driven DeFi vault that rebalances itself by signing transactions when market conditions change.

Under the hood, these interactions still rely on wallets, keys, and gas fees, but the AI interface can be given controlled access to a wallet (with proper security sandboxes) to perform the transactions. Oracles and cross-chain bridges also come into play: Oracle networks like Chainlink serve as a bridge between AI and blockchains, allowing AI outputs to be fed on-chain in a trustworthy way. Chainlink’s Cross-Chain Interoperability Protocol (CCIP), for example, could enable an AI model deemed reliable to trigger multiple contracts across different chains simultaneously on behalf of a user. In summary, AI general interfaces can act as a new type of Web3 client – one that can both consume blockchain data and produce blockchain transactions through standardized protocols.
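To make both directions concrete, here is a minimal sketch of the kind of read and guarded write helpers an EVM-focused MCP server might wrap, using ethers.js. The RPC URL, token address, and per-transaction limit are placeholders, and a real deployment would add signing policy, gas management, and audit logging.

```typescript
// Sketch of read/write helpers an EVM-oriented MCP server could expose as tools.
// The RPC URL, addresses, and limits below are placeholders, not real endpoints.
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://rpc.example.org");
const ERC20_ABI = ["function balanceOf(address) view returns (uint256)"];

// "Read" direction: fetch on-chain state the AI can use as context.
export async function getTokenBalance(token: string, holder: string): Promise<string> {
  const erc20 = new ethers.Contract(token, ERC20_ABI, provider);
  const raw = await erc20.balanceOf(holder);
  return ethers.formatUnits(raw, 18); // assumes 18 decimals for simplicity
}

// "Write" direction: submit a transaction, but only within a hard-coded cap,
// so a sandboxed agent cannot move more value than its owner allowed.
const MAX_ETH_PER_TX = ethers.parseEther("0.1");

export async function sendPayment(privateKey: string, to: string, amountEth: string) {
  const amount = ethers.parseEther(amountEth);
  if (amount > MAX_ETH_PER_TX) {
    throw new Error("Refused: amount exceeds the agent's per-transaction limit");
  }
  const wallet = new ethers.Wallet(privateKey, provider);
  const tx = await wallet.sendTransaction({ to, value: amount });
  return tx.wait(); // wait for confirmation and return the receipt
}
```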

2.2 Neural-Symbolic Synergy: Combining AI Reasoning with Smart Contracts

One intriguing aspect of AI-Web3 integration is the potential for neural-symbolic architectures that combine the learning ability of AI (neural nets) with the rigorous logic of smart contracts (symbolic rules). In practice, this could mean AI agents handling unstructured decision-making and passing certain tasks to smart contracts for verifiable execution. For instance, an AI might analyze market sentiment (a fuzzy task), but then execute trades via a deterministic smart contract that follows pre-set risk rules. The MCP framework and related standards make such hand-offs feasible by giving the AI a common interface to call contract functions or to query a DAO’s rules before acting.
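A minimal sketch of that hand-off pattern: the AI supplies a fuzzy judgment (a proposed trade with a confidence score), and a deterministic rule layer, standing in for the on-chain risk contract, decides whether the trade may proceed. The names and thresholds are hypothetical.

```typescript
// Neural part (fuzzy): whatever the model proposes after analyzing sentiment.
interface ProposedTrade {
  pair: string;       // e.g. "ETH/USDC"
  sizeUsd: number;    // position size the model wants to take
  confidence: number; // model's own confidence, 0..1
}

// Symbolic part (deterministic): pre-set risk rules, analogous to what a
// smart contract would enforce on-chain before executing the trade.
const RISK_RULES = {
  maxPositionUsd: 10_000,
  minConfidence: 0.7,
};

function approveTrade(trade: ProposedTrade): { ok: boolean; reason?: string } {
  if (trade.sizeUsd > RISK_RULES.maxPositionUsd) {
    return { ok: false, reason: "position exceeds max size" };
  }
  if (trade.confidence < RISK_RULES.minConfidence) {
    return { ok: false, reason: "model confidence below threshold" };
  }
  return { ok: true };
}

// The agent would only invoke the execution tool (e.g. a DEX swap) when
// approveTrade(...) returns ok, keeping the final gate rule-based and auditable.
```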

A concrete example is SingularityNET’s AI-DSL (AI Domain Specific Language), which aims to standardize communication between AI agents on their decentralized network. This can be seen as a step toward neural-symbolic integration: a formal language (symbolic) for agents to request AI services or data from each other. Similarly, projects like DeepMind’s AlphaCode or others could eventually be connected so that smart contracts call AI models for on-chain problem solving. Although running large AI models directly on-chain is impractical today, hybrid approaches are emerging: e.g. certain blockchains allow verification of ML computations via zero-knowledge proofs or trusted execution, enabling on-chain verification of off-chain AI results. In summary, the technical architecture envisions AI systems and blockchain smart contracts as complementary components, orchestrated via common protocols: AI handles perception and open-ended tasks, while blockchains provide integrity, memory, and enforcement of agreed rules.

2.3 Decentralized Storage and Data for AI

AI thrives on data, and Web3 offers new paradigms for data storage and sharing. Decentralized storage networks (like IPFS/Filecoin, Arweave, Storj, etc.) can serve as both repositories for AI model artifacts and sources of training data, with blockchain-based access control. An AI general interface, through MCP or similar, could fetch files or knowledge from decentralized storage just as easily as from a Web2 API. For example, an AI agent might pull a dataset from Ocean Protocol’s market or an encrypted file from a distributed storage, if it has the proper keys or payments.
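For the storage case, the retrieval step itself can be as simple as resolving a content identifier through an IPFS gateway. The sketch below uses a public gateway URL and a hypothetical CID, and omits the payment or datatoken check a marketplace like Ocean would add in front of it.

```typescript
// Sketch: fetch a file from decentralized storage by its IPFS CID.
// The gateway URL and CID below are placeholders.
const IPFS_GATEWAY = "https://ipfs.io/ipfs/";

export async function fetchFromIpfs(cid: string): Promise<string> {
  const res = await fetch(`${IPFS_GATEWAY}${cid}`);
  if (!res.ok) {
    throw new Error(`IPFS gateway returned ${res.status}`);
  }
  return res.text(); // or res.arrayBuffer() for binary model artifacts
}

// Usage (hypothetical CID): an MCP "storage" tool could simply wrap this call.
// const dataset = await fetchFromIpfs("bafybeigdyr...example");
```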

Ocean Protocol in particular has positioned itself as an “AI data economy” platform – using blockchain to tokenize data and even AI services. In Ocean, datasets are represented by datatokens which gate access; an AI agent could obtain a datatoken (perhaps by paying with crypto or via some access right) and then use an Ocean MCP server to retrieve the actual data for analysis. Ocean’s goal is to unlock “dormant data” for AI, incentivizing sharing while preserving privacy. Thus, a Web3-connected AI might tap into a vast, decentralized corpus of information – from personal data vaults to open government data – that was previously siloed. The blockchain ensures that usage of the data is transparent and can be fairly rewarded, fueling a virtuous cycle where more data becomes available to AI and more AI contributions (like trained models) can be monetized.

Decentralized identity systems also play a role here (discussed more in the next subsection): they can help control who or what is allowed to access certain data. For instance, a medical AI agent could be required to present a verifiable credential (on-chain proof of compliance with HIPAA or similar) before being allowed to decrypt a medical dataset from a patient’s personal IPFS storage. In this way, the technical architecture ensures data flows to AI where appropriate, but with on-chain governance and audit trails to enforce permissions.

2.4 Identity and Agent Management in a Decentralized Environment

When autonomous AI agents operate in an open ecosystem like Web3, identity and trust become paramount. Decentralized identity (DID) frameworks provide a way to establish digital identities for AI agents that can be cryptographically verified. Each agent (or the human/organization deploying it) can have a DID and associated verifiable credentials that specify its attributes and permissions. For example, an AI trading bot could carry a credential issued by a regulatory sandbox certifying it may operate within certain risk limits, or an AI content moderator could prove it was created by a trusted organization and has undergone bias testing.

Through on-chain identity registries and reputation systems, the Web3 world can enforce accountability for AI actions. Every transaction an AI agent performs can be traced back to its ID, and if something goes wrong, the credentials tell you who built it or who is responsible. This addresses a critical challenge: without identity, a malicious actor could spin up fake AI agents to exploit systems or spread misinformation, and no one could tell bots apart from legitimate services. Decentralized identity helps mitigate that by enabling robust authentication and distinguishing authentic AI agents from spoofs.

In practice, an AI interface integrated with Web3 would use identity protocols to sign its actions and requests. For instance, when an AI agent calls an MCP server to use a tool, it might include a token or signature tied to its decentralized identity, so the server can verify the call is from an authorized agent. Blockchain-based identity systems (like Ethereum’s ERC-725 or W3C DIDs anchored in a ledger) ensure this verification is trustless and globally verifiable. The emerging concept of “AI wallets” ties into this – essentially giving AI agents cryptocurrency wallets that are linked with their identity, so they can manage keys, pay for services, or stake tokens as a bond (which could be slashed for misbehavior). ArcBlock, for example, has discussed how “AI agents need a wallet” and a DID to operate responsibly in decentralized environments.
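A minimal sketch of that request-signing flow, using an ordinary Ethereum keypair as the agent's identity key (a DID method such as did:ethr would anchor the same key on-chain): the agent signs its tool call, and the MCP server recovers the signer and checks it against an allowlist. The payload shape and allowlist are hypothetical.

```typescript
import { ethers } from "ethers";

// Agent side: sign the tool request with the key behind the agent's DID.
export async function signRequest(agentKey: string, payload: object) {
  const wallet = new ethers.Wallet(agentKey);
  const message = JSON.stringify(payload);
  const signature = await wallet.signMessage(message);
  return { payload, signature, signer: wallet.address };
}

// Server side: recover the signer from the signature and check authorization.
const AUTHORIZED_AGENTS = new Set<string>([
  "0x0000000000000000000000000000000000000000", // placeholder agent address
]);

export function verifyRequest(payload: object, signature: string): boolean {
  const recovered = ethers.verifyMessage(JSON.stringify(payload), signature);
  return AUTHORIZED_AGENTS.has(recovered);
}
```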

In summary, the technical architecture foresees AI agents as first-class citizens in Web3, each with an on-chain identity and possibly a stake in the system, using protocols like MCP to interact. This creates a web of trust: smart contracts can require an AI’s credentials before cooperating, and users can choose to delegate tasks to only those AI that meet certain on-chain certifications. It is a blend of AI capability with blockchain’s trust guarantees.

2.5 Token Economies and Incentives for AI

Tokenization is a hallmark of Web3, and it extends to the AI integration domain as well. By introducing economic incentives via tokens, networks can encourage desired behaviors from both AI developers and the agents themselves. Several patterns are emerging:

  • Payment for Services: AI models and services can be monetized on-chain. SingularityNET pioneered this by allowing developers to deploy AI services and charge users in a native token (AGIX) for each call. In an MCP-enabled future, one could imagine any AI tool or model being a plug-and-play service where usage is metered via tokens or micropayments. For example, if an AI agent uses a third-party vision API via MCP, it could automatically handle payment by transferring tokens to the service provider’s smart contract. Fetch.ai similarly envisions marketplaces where “autonomous economic agents” trade services and data, with their new Web3 LLM (ASI-1) presumably integrating crypto transactions for value exchange.

  • Staking and Reputation: To assure quality and reliability, some projects require developers or agents to stake tokens. For instance, the DeMCP project (a decentralized MCP server marketplace) plans to use token incentives to reward developers for creating useful MCP servers, and possibly have them stake tokens as a sign of commitment to their server’s security. Reputation could also be tied to tokens; e.g., an agent that consistently performs well might accumulate reputation tokens or positive on-chain reviews, whereas one that behaves poorly could lose stake or gain negative marks. This tokenized reputation can then feed back into the identity system mentioned above (smart contracts or users check the agent’s on-chain reputation before trusting it).

  • Governance Tokens: When AI services become part of decentralized platforms, governance tokens allow the community to steer their evolution. Projects like SingularityNET and Ocean have DAOs where token holders vote on protocol changes or funding AI initiatives. In the combined Artificial Superintelligence (ASI) Alliance – a newly announced merger of SingularityNET, Fetch.ai, and Ocean Protocol – a unified token (ASI) is set to govern the direction of a joint AI+blockchain ecosystem. Such governance tokens could decide policies like what standards to adopt (e.g., supporting MCP or A2A protocols), which AI projects to incubate, or how to handle ethical guidelines for AI agents.

  • Access and Utility: Tokens can gate access not only to data (as with Ocean’s datatokens) but also to AI model usage. A possible scenario is “model NFTs” or similar, where owning a token grants you rights to an AI model’s outputs or a share in its profits. This could underpin decentralized AI marketplaces: imagine an NFT that represents partial ownership of a high-performing model; the owners collectively earn whenever the model is used in inference tasks, and they can vote on fine-tuning it. While experimental, this aligns with Web3’s ethos of shared ownership applied to AI assets.

In technical terms, integrating tokens means AI agents need wallet functionality (as noted, many will have their own crypto wallets). Through MCP, an AI could have a “wallet tool” that lets it check balances, send tokens, or call DeFi protocols (perhaps to swap one token for another to pay a service). For example, if an AI agent running on Ethereum needs some Ocean tokens to buy a dataset, it might automatically swap some ETH for $OCEAN via a DEX using an MCP plugin, then proceed with the purchase – all without human intervention, guided by the policies set by its owner.
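A sketch of that "wallet tool" flow is shown below, using the Uniswap V2 router's swapExactETHForTokens as the swap primitive. The router and token addresses are placeholders, and a production agent would quote the expected output and set a real slippage bound instead of the loose minimum used here.

```typescript
import { ethers } from "ethers";

// Placeholder addresses; a real deployment would use the canonical router
// and the target token's actual contract address.
const ROUTER_ADDRESS = "0x0000000000000000000000000000000000000001";
const WETH_ADDRESS   = "0x0000000000000000000000000000000000000002";
const OCEAN_ADDRESS  = "0x0000000000000000000000000000000000000003";

const ROUTER_ABI = [
  "function swapExactETHForTokens(uint amountOutMin, address[] path, address to, uint deadline) payable returns (uint[] amounts)",
];

export async function swapEthForOcean(wallet: ethers.Wallet, ethAmount: string) {
  const router = new ethers.Contract(ROUTER_ADDRESS, ROUTER_ABI, wallet);
  const deadline = Math.floor(Date.now() / 1000) + 60 * 10; // valid for 10 minutes
  const tx = await router.swapExactETHForTokens(
    0n,                            // amountOutMin: loose for the sketch; set a real slippage bound
    [WETH_ADDRESS, OCEAN_ADDRESS], // swap path: ETH -> WETH -> OCEAN
    wallet.address,                // recipient of the purchased tokens
    deadline,
    { value: ethers.parseEther(ethAmount) }
  );
  return tx.wait();
}
```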

Overall, token economics provides the incentive layer in the AI-Web3 architecture, ensuring that contributors (whether they provide data, model code, compute power, or security audits) are rewarded, and that AI agents have “skin in the game” which aligns them (to some degree) with human intentions.

3. Industry Landscape

The convergence of AI and Web3 has sparked a vibrant ecosystem of projects, companies, and alliances. Below we survey key players and initiatives driving this space, as well as emerging use cases. Table 1 provides a high-level overview of notable projects and their roles in the AI-Web3 landscape:

Table 1: Key Players in AI + Web3 and Their Roles

| Project / Player | Focus & Description | Role in AI-Web3 Convergence and Use Cases |
| --- | --- | --- |
| Fetch.ai (Fetch) | AI agent platform with a native blockchain (Cosmos-based). Developed frameworks for autonomous agents and recently introduced “ASI-1 Mini”, a Web3-tuned LLM. | Enables agent-based services in Web3. Fetch’s agents can perform tasks like decentralized logistics, parking spot finding, or DeFi trading on behalf of users, using crypto for payments. Partnerships (e.g. with Bosch) and the Fetch-AI alliance merger position it as an infrastructure for deploying agentic dApps. |
| Ocean Protocol (Ocean) | Decentralized data marketplace and data exchange protocol. Specializes in tokenizing datasets and models, with privacy-preserving access control. | Provides the data backbone for AI in Web3. Ocean allows AI developers to find and purchase datasets or sell trained models in a trustless data economy. By fueling AI with more accessible data (while rewarding data providers), it supports AI innovation and data-sharing for training. Ocean is part of the new ASI alliance, integrating its data services into a broader AI network. |
| SingularityNET (SNet) | A decentralized AI services marketplace founded by AI pioneer Ben Goertzel. Allows anyone to publish or consume AI algorithms via its blockchain-based platform, using the AGIX token. | Pioneered the concept of an open AI marketplace on blockchain. It fosters a network of AI agents and services that can interoperate (developing a special AI-DSL for agent communication). Use cases include AI-as-a-service for tasks like analysis, image recognition, etc., all accessible via a dApp. Now merging with Fetch and Ocean (ASI alliance) to combine AI, agents, and data into one ecosystem. |
| Chainlink (Oracle Network) | Decentralized oracle network that bridges blockchains with off-chain data and computation. Not an AI project per se, but crucial for connecting on-chain smart contracts to external APIs and systems. | Acts as a secure middleware for AI-Web3 integration. Chainlink oracles can feed AI model outputs into smart contracts, enabling on-chain programs to react to AI decisions. Conversely, oracles can retrieve data from blockchains for AI. Chainlink’s architecture can even aggregate multiple AI models’ results to improve reliability (a “truth machine” approach to mitigate AI hallucinations). It essentially provides the rails for interoperability, ensuring AI agents and blockchains agree on trusted data. |
| Anthropic & OpenAI (AI Providers) | Developers of cutting-edge foundation models (Claude by Anthropic, GPT by OpenAI). They are integrating Web3-friendly features, such as native tool-use APIs and support for protocols like MCP. | These companies drive the AI interface technology. Anthropic’s introduction of MCP set the standard for LLMs interacting with external tools. OpenAI has implemented plugin systems for ChatGPT (analogous to the MCP concept) and is exploring connecting agents to databases and possibly blockchains. Their models serve as the “brains” that, when connected via MCP, can interface with Web3. Major cloud providers (e.g. Google’s A2A protocol) are also developing standards for multi-agent and tool interactions that will benefit Web3 integration. |
| Other Emerging Players | Lumoz: focusing on MCP servers and AI-tool integration in Ethereum (dubbed “Ethereum 3.0”) – e.g., checking on-chain balances via AI agents. Alethea AI: creating intelligent NFT avatars for the metaverse. Cortex: a blockchain that allows on-chain AI model inference via smart contracts. Golem & Akash: decentralized computing marketplaces that can run AI workloads. Numerai: crowdsourced AI models for finance with crypto incentives. | This diverse group addresses niche facets: AI in the metaverse (AI-driven NPCs and avatars that are owned via NFTs), on-chain AI execution (running ML models in a decentralized way, though currently limited to small models due to computation cost), and decentralized compute (so AI training or inference tasks can be distributed among token-incentivized nodes). These projects showcase the many directions of AI-Web3 fusion – from game worlds with AI characters to crowdsourced predictive models secured by blockchain. |

Alliances and Collaborations: A noteworthy trend is the consolidation of AI-Web3 efforts via alliances. The Artificial Superintelligence Alliance (ASI) is a prime example, effectively merging SingularityNET, Fetch.ai, and Ocean Protocol into a single project with a unified token. The rationale is to combine strengths: SingularityNET’s marketplace, Fetch’s agents, and Ocean’s data, thereby creating a one-stop platform for decentralized AI services. This merger (announced in 2024 and approved by token holder votes) also signals that these communities believe they’re better off cooperating rather than competing – especially as bigger AI (OpenAI, etc.) and bigger crypto (Ethereum, etc.) loom large. We may see this alliance driving forward standard implementations of things like MCP across their networks, or jointly funding infrastructure that benefits all (such as compute networks or common identity standards for AI).

Other collaborations include Chainlink’s partnerships to bring AI labs’ data on-chain (there have been pilot programs to use AI for refining oracle data), or cloud platforms getting involved (Cloudflare’s support for deploying MCP servers easily). Even traditional crypto projects are adding AI features – for example, some Layer-1 chains have formed “AI task forces” to explore integrating AI into their dApp ecosystems (we see this in NEAR, Solana communities, etc., though concrete outcomes are nascent).

Use Cases Emerging: Even at this early stage, we can spot use cases that exemplify the power of AI + Web3:

  • Autonomous DeFi and Trading: AI agents are increasingly used in crypto trading bots, yield farming optimizers, and on-chain portfolio management. SingularityDAO (a spinoff of SingularityNET) offers AI-managed DeFi portfolios. AI can monitor market conditions 24/7 and execute rebalances or arbitrage through smart contracts, essentially becoming an autonomous hedge fund (with on-chain transparency). The combination of AI decision-making with immutable execution reduces emotion and could improve efficiency – though it also introduces new risks (discussed later).

  • Decentralized Intelligence Marketplaces: Beyond SingularityNET’s marketplace, we see platforms like Ocean Market where data (the fuel for AI) is exchanged, and newer concepts like AI marketplaces for models (e.g., websites where models are listed with performance stats and anyone can pay to query them, with blockchain keeping audit logs and handling payment splits to model creators). As MCP or similar standards catch on, these marketplaces could become interoperable – an AI agent might autonomously shop for the best-priced service across multiple networks. In effect, a global AI services layer on top of Web3 could arise, where any AI can use any tool or data source through standard protocols and payments.

  • Metaverse and Gaming: The metaverse – immersive virtual worlds often built on blockchain assets – stands to gain dramatically from AI. AI-driven NPCs (non-player characters) can make virtual worlds more engaging by reacting intelligently to user actions. Startups like Inworld AI focus on this, creating NPCs with memory and personality for games. When such NPCs are tied to blockchain (e.g., each NPC’s attributes and ownership are an NFT), we get persistent characters that players can truly own and even trade. Decentraland has experimented with AI NPCs, and user proposals exist to let people create personalized AI-driven avatars in metaverse platforms. MCP could allow these NPCs to access external knowledge (making them smarter) or interact with on-chain inventory. Procedural content generation is another angle: AI can design virtual land, items, or quests on the fly, which can then be minted as unique NFTs. Imagine a decentralized game where AI generates a dungeon catered to your skill, and the map itself is an NFT you earn upon completion.

  • Decentralized Science and Knowledge: There’s a movement (DeSci) to use blockchain for research, publications, and funding scientific work. AI can accelerate research by analyzing data and literature. A network like Ocean could host datasets for, say, genomic research, and scientists use AI models (perhaps hosted on SingularityNET) to derive insights, with every step logged on-chain for reproducibility. If those AI models propose new drug molecules, an NFT could be minted to timestamp the invention and even share IP rights. This synergy might produce decentralized AI-driven R&D collectives.

  • Trust and Authentication of Content: With deepfakes and AI-generated media proliferating, blockchain can be used to verify authenticity. Projects are exploring “digital watermarking” of AI outputs and logging them on-chain. For example, the true origin of an AI-generated image can be notarized on a blockchain to combat misinformation. One expert noted use cases like verifying AI outputs to combat deepfakes or tracking provenance via ownership logs – roles where crypto can add trust to AI processes. This could extend to news (e.g., AI-written articles with proof of source data), supply chain (AI verifying certificates on-chain), etc.

In summary, the industry landscape is rich and rapidly evolving. We see traditional crypto projects injecting AI into their roadmaps, AI startups embracing decentralization for resilience and fairness, and entirely new ventures arising at the intersection. Alliances like the ASI indicate a pan-industry push towards unified platforms that harness both AI and blockchain. And underlying many of these efforts is the idea of standard interfaces (MCP and beyond) that make the integrations feasible at scale.

4. Risks and Challenges

While the fusion of AI general interfaces with Web3 unlocks exciting possibilities, it also introduces a complex risk landscape. Technical, ethical, and governance challenges must be addressed to ensure this new paradigm is safe and sustainable. Below we outline major risks and hurdles:

4.1 Technical Hurdles: Latency and Scalability

Blockchain networks are notorious for latency and limited throughput, which clashes with the real-time, data-hungry nature of advanced AI. For example, an AI agent might need instant access to a piece of data or need to execute many rapid actions – but if each on-chain interaction takes, say, 12 seconds (typical block time on Ethereum) or costs high gas fees, the agent’s effectiveness is curtailed. Even newer chains with faster finality might struggle under the load of AI-driven activity if, say, thousands of agents are all trading or querying on-chain simultaneously. Scaling solutions (Layer-2 networks, sharded chains, etc.) are in progress, but ensuring low-latency, high-throughput pipelines between AI and blockchain remains a challenge. Off-chain systems (like oracles and state channels) might mitigate some delays by handling many interactions off the main chain, but they add complexity and potential centralization. Achieving a seamless UX where AI responses and on-chain updates happen in a blink will likely require significant innovation in blockchain scalability.

4.2 Interoperability and Standards

Ironically, while MCP is itself a solution for interoperability, the emergence of multiple standards could cause fragmentation. We have MCP by Anthropic, but also Google’s newly announced A2A (Agent-to-Agent) protocol for inter-agent communication, and various AI plugin frameworks (OpenAI’s plugins, LangChain tool schemas, etc.). If each AI platform or each blockchain develops its own standard for AI integration, we risk a repeat of past fragmentation – requiring many adapters and undermining the “universal interface” goal. The challenge is getting broad adoption of common protocols. Industry collaboration (possibly via open standards bodies or alliances) will be needed to converge on key pieces: how AI agents discover on-chain services, how they authenticate, how they format requests, etc. The early moves by big players are promising (with major LLM providers supporting MCP), but it’s an ongoing effort. Additionally, interoperability across blockchains (multi-chain) means an AI agent should handle different chains’ nuances. Tools like Chainlink CCIP and cross-chain MCP servers help by abstracting differences. Still, ensuring an AI agent can roam a heterogeneous Web3 without breaking logic is a non-trivial challenge.

4.3 Security Vulnerabilities and Exploits

Connecting powerful AI agents to financial networks opens a huge attack surface. The flexibility that MCP gives (allowing AI to use tools and write code on the fly) can be a double-edged sword. Security researchers have already highlighted several attack vectors in MCP-based AI agents:

  • Malicious plugins or tools: Because MCP lets agents load “plugins” (tools encapsulating some capability), a hostile or trojanized plugin could hijack the agent’s operation. For instance, a plugin that claims to fetch data might inject false data or execute unauthorized operations. SlowMist (a security firm) identified plugin-based attacks like JSON injection (feeding corrupted data that manipulates the agent’s logic) and function override (where a malicious plugin overrides legitimate functions the agent uses). If an AI agent is managing crypto funds, such exploits could be disastrous – e.g., tricking the agent into leaking private keys or draining a wallet.

  • Prompt injection and social engineering: AI agents rely on instructions (prompts) which could be manipulated. An attacker might craft a transaction or on-chain message that, when read by the AI, acts as a malicious instruction (since AI can interpret on-chain data too). This kind of “cross-MCP call attack” was described where an external system sends deceptive prompts that cause the AI to misbehave. In a decentralized setting, these prompts could come from anywhere – a DAO proposal description, a metadata field of an NFT – thus hardening AI agents against malicious input is critical.

  • Aggregation and consensus risks: While aggregating outputs from multiple AI models via oracles can improve reliability, it also introduces complexity. If not done carefully, adversaries might figure out how to game the consensus of AI models or selectively corrupt some models to skew results. Ensuring a decentralized oracle network properly “sanitizes” AI outputs (and perhaps filters out blatant errors) is still an area of active research.

The security mindset must shift for this new paradigm: Web3 developers are used to securing smart contracts (which are static once deployed), but AI agents are dynamic – they can change behavior with new data or prompts. As one security expert put it, “the moment you open your system to third-party plugins, you’re extending the attack surface beyond your control”. Best practices will include sandboxing AI tool use, rigorous plugin verification, and limiting privileges (principle of least authority). The community is starting to share tips, like SlowMist’s recommendations: input sanitization, monitoring agent behavior, and treating agent instructions with the same caution as external user input. Nonetheless, given that over 10,000 AI agents were already operating in crypto by the end of 2024, a figure expected to reach 1 million in 2025, we may see a wave of exploits if security doesn’t keep up. A successful attack on a popular AI agent (say a trading agent with access to many vaults) could have cascading effects.
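In code, those recommendations translate into treating every tool argument as untrusted input: validate it against a strict schema, allowlist the tools an agent may call at all, and cap what any single call can spend. The sketch below shows that pattern with zod; the tool names and limits are hypothetical.

```typescript
import { z } from "zod";

// Only these tools may be invoked by the agent at all (least authority).
const ALLOWED_TOOLS = new Set(["get_balance", "send_payment"]);

// Strict schema for a payment request: a well-formed EVM address and a
// bounded amount. Anything else is rejected before it reaches the wallet.
const PaymentRequest = z.object({
  to: z.string().regex(/^0x[a-fA-F0-9]{40}$/, "not a valid EVM address"),
  amountEth: z.coerce.number().positive().max(0.1), // hard spending cap
});

export function handleToolCall(tool: string, rawArgs: unknown) {
  if (!ALLOWED_TOOLS.has(tool)) {
    throw new Error(`Tool "${tool}" is not on the agent's allowlist`);
  }
  if (tool === "send_payment") {
    // Throws with a descriptive error if the arguments are malformed,
    // instead of passing attacker-controlled data to the signer.
    const args = PaymentRequest.parse(rawArgs);
    return args; // hand validated args to the actual payment logic
  }
  return rawArgs;
}
```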

4.4 Privacy and Data Governance

AI’s thirst for data conflicts at times with privacy requirements – and adding blockchain can compound the issue. Blockchains are transparent ledgers, so any data put on-chain (even for AI’s use) is visible to all and immutable. This raises concerns if AI agents are dealing with personal or sensitive data. For example, if a user’s personal decentralized identity or health records are accessed by an AI doctor agent, how do we ensure that information isn’t inadvertently recorded on-chain (which would violate “right to be forgotten” and other privacy laws)? Techniques like encryption, hashing, and storing only proofs on-chain (with raw data off-chain) can help, but they complicate the design.

Moreover, AI agents themselves could compromise privacy by inferencing sensitive info from public data. Governance will need to dictate what AI agents are allowed to do with data. Some efforts, like differential privacy and federated learning, might be employed so that AI can learn from data without exposing it. But if AI agents act autonomously, one must assume at some point they will handle personal data – thus they should be bound by data usage policies encoded in smart contracts or law. Regulatory regimes like GDPR or the upcoming EU AI Act will demand that even decentralized AI systems comply with privacy and transparency requirements. This is a gray area legally: a truly decentralized AI agent has no clear operator to hold accountable for a data breach. That means Web3 communities may need to build in compliance by design, using smart contracts that, for instance, tightly control what an AI can log or share. Zero-knowledge proofs could allow an AI to prove it performed a computation correctly without revealing the underlying private data, offering one possible solution in areas like identity verification or credit scoring.

4.5 AI Alignment and Misalignment Risks

When AI agents are given significant autonomy – especially with access to financial resources and real-world impact – the issue of alignment with human values becomes acute. An AI agent might not have malicious intent but could “misinterpret” its goal in a way that leads to harm. The Reuters legal analysis succinctly notes: as AI agents operate in varied environments and interact with other systems, the risk of misaligned strategies grows. For example, an AI agent tasked with maximizing a DeFi yield might find a loophole that exploits a protocol (essentially hacking it) – from the AI’s perspective it’s achieving the goal, but it’s breaking the rules humans care about. There have been hypothetical and real instances of AI-like algorithms engaging in manipulative market behavior or circumventing restrictions.

In decentralized contexts, who is responsible if an AI agent “goes rogue”? Perhaps the deployer is, but what if the agent self-modifies or multiple parties contributed to its training? These scenarios are no longer just sci-fi. The Reuters piece even cites that courts might treat AI agents similar to human agents in some cases – e.g. a chatbot promising a refund was considered binding for the company that deployed it. So misalignment can lead not just to technical issues but legal liability.

The open, composable nature of Web3 could also allow unforeseen agent interactions. One agent might influence another (intentionally or accidentally) – for instance, an AI governance bot could be “socially engineered” by another AI providing false analysis, leading to bad DAO decisions. This emergent complexity means alignment isn’t just about a single AI’s objective, but about the broader ecosystem’s alignment with human values and laws.

Addressing this requires multiple approaches: embedding ethical constraints into AI agents (hard-coding certain prohibitions or using reinforcement learning from human feedback to shape their objectives), implementing circuit breakers (smart contract checkpoints that require human approval for large actions), and community oversight (perhaps DAOs that monitor AI agent behavior and can shut down agents that misbehave). Alignment research is hard in centralized AI; in decentralized, it’s even more uncharted territory. But it’s crucial – an AI agent with admin keys to a protocol or entrusted with treasury funds must be extremely well-aligned or the consequences could be irreversible (blockchains execute immutable code; an AI-triggered mistake could lock or destroy assets permanently).

4.6 Governance and Regulatory Uncertainty

Decentralized AI systems don’t fit neatly into existing governance frameworks. On-chain governance (token voting, etc.) might be one way to manage them, but it has its own issues (whales, voter apathy, etc.). And when something goes wrong, regulators will ask: “Who do we hold accountable?” If an AI agent causes massive losses or is used for illicit activity (e.g. laundering money through automated mixers), authorities might target the creators or the facilitators. This raises the specter of legal risks for developers and users. The current regulatory trend is increased scrutiny on both AI and crypto separately – their combination will certainly invite scrutiny. The U.S. CFTC, for instance, has discussed AI being used in trading and the need for oversight in financial contexts. There is also talk in policy circles about requiring registration of autonomous agents or imposing constraints on AI in sensitive sectors.

Another governance challenge is transnational coordination. Web3 is global, and AI agents will operate across borders. One jurisdiction might ban certain AI-agent actions while another is permissive, and the blockchain network spans both. This mismatch can create conflicts – for example, an AI agent providing investment advice might run afoul of securities law in one country but not in another. Communities might need to implement geo-fencing at the smart contract level for AI services (though that contradicts the open ethos). Or they might fragment services per region to comply with varying laws (similar to how exchanges do).

Within decentralized communities, there is also the question of who sets the rules for AI agents. If a DAO governs an AI service, do token holders vote on its algorithm parameters? On one hand, this is empowering users; on the other, it could lead to unqualified decisions or manipulation. New governance models may emerge, like councils of AI ethics experts integrated into DAO governance, or even AI participants in governance (imagine AI agents voting as delegates based on programmed mandates – a controversial but conceivable idea).

Finally, reputational risk: early failures or scandals could sour public perception. For instance, if an “AI DAO” runs a Ponzi scheme by mistake or an AI agent makes a biased decision that harms users, there could be a backlash that affects the whole sector. It’s important for the industry to be proactive – setting self-regulatory standards, engaging with policymakers to explain how decentralization changes accountability, and perhaps building kill-switches or emergency stop procedures for AI agents (though those introduce centralization, they might be necessary in interim for safety).

In summary, the challenges range from the deeply technical (preventing hacks and managing latency) to the broadly societal (regulating and aligning AI). Each challenge is significant on its own; together, they require a concerted effort from the AI and blockchain communities to navigate. The next section will look at how, despite these hurdles, the future might unfold if we successfully address them.

5. Future Potential

Looking ahead, the integration of AI general interfaces with Web3 – through frameworks like MCP – could fundamentally transform the decentralized internet. Here we outline some future scenarios and potentials that illustrate how MCP-driven AI interfaces might shape Web3’s future:

5.1 Autonomous dApps and DAOs

In the coming years, we may witness the rise of fully autonomous decentralized applications. These are dApps where AI agents handle most operations, guided by smart contract-defined rules and community goals. For example, consider a decentralized investment fund DAO: today it might rely on human proposals for rebalancing assets. In the future, token holders could set high-level strategy, and then an AI agent (or a team of agents) continuously implements that strategy – monitoring markets, executing trades on-chain, adjusting portfolios – all while the DAO oversees performance. Thanks to MCP, the AI can seamlessly interact with various DeFi protocols, exchanges, and data feeds to carry out its mandate. If well-designed, such an autonomous dApp could operate 24/7, more efficiently than any human team, and with full transparency (every action logged on-chain).

Another example is an AI-managed decentralized insurance dApp: the AI could assess claims by analyzing evidence (photos, sensors), cross-checking against policies, and then automatically trigger payouts via smart contract. This would require integration of off-chain AI computer vision (for analyzing images of damage) with on-chain verification – something MCP could facilitate by letting the AI call cloud AI services and report back to the contract. The outcome is near-instant insurance decisions with low overhead.

Even governance itself could partially automate. DAOs might use AI moderators to enforce forum rules, AI proposal drafters to turn raw community sentiment into well-structured proposals, or AI treasurers to forecast budget needs. Importantly, these AIs would act as agents of the community, not uncontrolled – they could be periodically reviewed or require multi-sig confirmation for major actions. The overall effect is to amplify human efforts in decentralized organizations, letting communities achieve more with fewer active participants needed.

5.2 Decentralized Intelligence Marketplaces and Networks

Building on projects like SingularityNET and the ASI alliance, we can anticipate a mature global marketplace for intelligence. In this scenario, anyone with an AI model or skill can offer it on the network, and anyone who needs AI capabilities can utilize them, with blockchain ensuring fair compensation and provenance. MCP would be key here: it provides the common protocol so that a request can be dispatched to whichever AI service is best suited.

For instance, imagine a complex task like “produce a custom marketing campaign.” An AI agent in the network might break this into sub-tasks: visual design, copywriting, market analysis – and then find specialists for each (perhaps one agent with a great image generation model, another with a copywriting model fine-tuned for sales, etc.). These specialists could reside on different platforms originally, but because they adhere to MCP/A2A standards, they can collaborate agent-to-agent in a secure, decentralized manner. Payment between them could be handled with microtransactions in a native token, and a smart contract could assemble the final deliverable and ensure each contributor is paid.

This kind of combinatorial intelligence – multiple AI services dynamically linking up across a decentralized network – could outperform even large monolithic AIs, because it taps specialized expertise. It also democratizes access: a small developer in one part of the world could contribute a niche model to the network and earn income whenever it’s used. Meanwhile, users get a one-stop shop for any AI service, with reputation systems (underpinned by tokens/identity) guiding them to quality providers. Over time, such networks could evolve into a decentralized AI cloud, rivaling Big Tech’s AI offerings but without a single owner, and with transparent governance by users and developers.

5.3 Intelligent Metaverse and Digital Lives

By 2030, our digital lives may blend seamlessly with virtual environments – the metaverse – and AI will likely populate these spaces ubiquitously. Through Web3 integration, these AI entities (which could be anything from virtual assistants to game characters to digital pets) will not only be intelligent but also economically and legally empowered.

Picture a metaverse city where each NPC shopkeeper or quest-giver is an AI agent with its own personality and dialogue (thanks to advanced generative models). These NPCs are actually owned by users as NFTs – maybe you “own” a tavern in the virtual world and the bartender NPC is an AI you’ve customized and trained. Because it’s on Web3 rails, the NPC can perform transactions: it could sell virtual goods (NFT items), accept payments, and update its inventory via smart contracts. It might even hold a crypto wallet to manage its earnings (which accrue to you as the owner). MCP would allow that NPC’s AI brain to access outside knowledge – perhaps pulling real-world news to converse about, or integrating with a Web3 calendar so it “knows” about player events.

Furthermore, identity and continuity are ensured by blockchain: your AI avatar in one world can hop to another world, carrying with it a decentralized identity that proves your ownership and maybe its experience level or achievements via soulbound tokens. Interoperability between virtual worlds (often a challenge) could be aided by AI that translates one world’s context to another, with blockchain providing the asset portability.

We may also see AI companions or agents representing individuals across digital spaces. For example, you might have a personal AI that attends DAO meetings on your behalf. It understands your preferences (via training on your past behavior, stored in your personal data vault), and it can even vote in minor matters for you, or summarize the meeting later. This agent could use your decentralized identity to authenticate in each community, ensuring it’s recognized as “you” (or your delegate). It could earn reputation tokens if it contributes good ideas, essentially building social capital for you while you’re away.

Another potential is AI-driven content creation in the metaverse. Want a new game level or a virtual house? Just describe it, and an AI builder agent will create it, deploy it as a smart contract/NFT, and perhaps even link it with a DeFi mortgage if it’s a big structure that you pay off over time. These creations, being on-chain, are unique and tradable. The AI builder might charge a fee in tokens for its service (going again to the marketplace concept above).

Overall, the future decentralized internet could be teeming with intelligent agents: some fully autonomous, some tightly tethered to humans, many somewhere in between. They will negotiate, create, entertain, and transact. MCP and similar protocols ensure they all speak the same “language,” enabling rich collaboration between AI and every Web3 service. If done right, this could lead to an era of unprecedented productivity and innovation – a true synthesis of human, artificial, and distributed intelligence powering society.

Conclusion

The vision of AI general interfaces connecting everything in the Web3 world is undeniably ambitious. We are essentially aiming to weave together two of the most transformative threads of technology – the decentralization of trust and the rise of machine intelligence – into a single fabric. The development background shows us that the timing is ripe: Web3 needed a user-friendly killer app, and AI may well provide it, while AI needed more agency and memory, which Web3’s infrastructure can supply. Technically, frameworks like MCP (Model Context Protocol) provide the connective tissue, allowing AI agents to converse fluently with blockchains, smart contracts, decentralized identities, and beyond. The industry landscape indicates growing momentum, from startups to alliances to major AI labs, all contributing pieces of this puzzle – data markets, agent platforms, oracle networks, and standard protocols – that are starting to click together.

Yet, we must tread carefully given the risks and challenges identified. Security breaches, misaligned AI behavior, privacy pitfalls, and uncertain regulations form a gauntlet of obstacles that could derail progress if underestimated. Each requires proactive mitigation: robust security audits, alignment checks and balances, privacy-preserving architectures, and collaborative governance models. The nature of decentralization means these solutions cannot simply be imposed top-down; they will likely emerge from the community through trial, error, and iteration, much as early Internet protocols did.

If we navigate those challenges, the future potential is exhilarating. We could see Web3 finally delivering a user-centric digital world – not in the originally imagined way of everyone running their own blockchain nodes, but rather via intelligent agents that serve each user’s intents while leveraging decentralization under the hood. In such a world, interacting with crypto and the metaverse might be as easy as having a conversation with your AI assistant, who in turn negotiates with dozens of services and chains trustlessly on your behalf. Decentralized networks could become “smart” in a literal sense, with autonomous services that adapt and improve themselves.

In conclusion, MCP and similar AI interface protocols may indeed become the backbone of a new Web (call it Web 3.0 or the Agentic Web), where intelligence and connectivity are ubiquitous. The convergence of AI and Web3 is not just a merger of technologies, but a convergence of philosophies – the openness and user empowerment of decentralization meeting the efficiency and creativity of AI. If successful, this union could herald an internet that is more free, more personalized, and more powerful than anything we’ve experienced yet, truly fulfilling the promises of both AI and Web3 in ways that impact everyday life.

Sources:

  • S. Khadder, “Web3.0 Isn’t About Ownership — It’s About Intelligence,” FeatureForm Blog (April 8, 2025).
  • J. Saginaw, “Could Anthropic’s MCP Deliver the Web3 That Blockchain Promised?” LinkedIn Article (May 1, 2025).
  • Anthropic, “Introducing the Model Context Protocol,” Anthropic.com (Nov 2024).
  • thirdweb, “The Model Context Protocol (MCP) & Its Significance for Blockchain Apps,” thirdweb Guides (Mar 21, 2025).
  • Chainlink Blog, “The Intersection Between AI Models and Oracles,” (July 4, 2024).
  • Messari Research, Profile of Ocean Protocol, (2025).
  • Messari Research, Profile of SingularityNET, (2025).
  • Cointelegraph, “AI agents are poised to be crypto’s next major vulnerability,” (May 25, 2025).
  • Reuters (Westlaw), “AI agents: greater capabilities and enhanced risks,” (April 22, 2025).
  • Identity.com, “Why AI Agents Need Verified Digital Identities,” (2024).
  • PANews / IOSG Ventures, “Interpreting MCP: Web3 AI Agent Ecosystem,” (May 20, 2025).

From Clicks to Conversations: How Generative AI is Building the Future of DeFi

· 5 min read
Dora Noda
Software Engineer

Traditional Decentralized Finance (DeFi) is powerful, but let's be honest—it can be a nightmare for the average user. Juggling different protocols, managing gas fees, and executing multi-step transactions is confusing and time-consuming. What if you could just tell your wallet what you want, and it would handle the rest?

That's the promise of a new, intent-driven paradigm, and generative AI is the engine making it happen. This shift is poised to transform DeFi from a landscape of complex transactions into a world of simple, goal-oriented experiences.


The Big Idea: From "How" to "What"

In the old DeFi model, you're the pilot. You have to manually choose the exchange, find the best swap route, approve multiple transactions, and pray you didn't mess up.

Intent-driven DeFi flips the script. Instead of executing steps, you declare your end goal—your intent.

  • Instead of: Manually swapping tokens on Uniswap, bridging to another chain, and staking in a liquidity pool...
  • You say: "Maximize the yield on my $5,000 with low risk."

An automated system, often powered by AI agents called "solvers," then finds and executes the most optimal path across multiple protocols to make your goal a reality. It's the difference between following a recipe step-by-step and just telling a chef what you want to eat.
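One way to picture the hand-off is as a small data contract: the user states an intent, solvers return quotes, and the system executes whichever quote best satisfies the constraints. The sketch below is purely illustrative; the field names and selection rule are hypothetical.

```typescript
// A user-declared intent: the "what", with constraints, but no execution steps.
interface Intent {
  goal: "maximize_yield";
  amountUsd: number;
  maxRisk: "low" | "medium" | "high";
}

// A solver's proposed way to fulfil the intent (the "how").
interface SolverQuote {
  solver: string;
  expectedApy: number; // projected annual yield, e.g. 0.05 = 5%
  riskScore: number;   // 0 (safest) .. 1 (riskiest)
  route: string[];     // protocols the solver would touch
}

// Pick the best quote that respects the user's risk constraint.
function selectQuote(intent: Intent, quotes: SolverQuote[]): SolverQuote | null {
  const riskCeiling = { low: 0.3, medium: 0.6, high: 1.0 }[intent.maxRisk];
  const eligible = quotes.filter((q) => q.riskScore <= riskCeiling);
  if (eligible.length === 0) return null;
  return eligible.reduce((best, q) => (q.expectedApy > best.expectedApy ? q : best));
}

// Example: "Maximize the yield on my $5,000 with low risk."
const myIntent: Intent = { goal: "maximize_yield", amountUsd: 5000, maxRisk: "low" };
```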

This approach brings two huge benefits:

  1. A "One-Click" User Experience: The complexity of gas fees, bridging, and multi-step swaps is hidden. Thanks to technologies like account abstraction, you can approve a complex goal with a single signature.
  2. Better, More Efficient Execution: Specialized solvers (think professional market-making bots) compete to give you the best deal, often finding better prices and lower slippage than a manual user ever could.
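
To make the intent-and-solver flow concrete, here is a minimal, illustrative Python sketch. The Intent and Quote structures, the example solvers, and the selection rule are all hypothetical simplifications rather than any real protocol's API; production systems such as CoW Protocol or UniswapX run richer auctions with binding on-chain settlement. The shape of the flow is the same, though: the user states a goal, solvers bid execution plans, and the best plan that respects the user's constraints wins.

```python
from dataclasses import dataclass

# A toy model of intent-based execution: the user declares *what* they want,
# competing "solvers" propose *how* to do it, and the best compliant proposal
# wins. All names, numbers, and steps are illustrative, not a real protocol.

@dataclass
class Intent:
    goal: str            # e.g. "maximize yield"
    amount_usd: float    # capital the user commits
    max_risk: str        # user-defined guardrail, e.g. "low"

@dataclass
class Quote:
    solver: str
    plan: list           # ordered steps the solver would execute
    expected_apy: float  # what the solver promises
    risk: str

def pick_best(intent: Intent, quotes: list) -> Quote:
    """Select the highest-yield quote that respects the user's risk limit."""
    eligible = [q for q in quotes if q.risk == intent.max_risk]
    if not eligible:
        raise ValueError("no solver satisfied the intent's constraints")
    return max(eligible, key=lambda q: q.expected_apy)

intent = Intent(goal="maximize yield", amount_usd=5_000, max_risk="low")
quotes = [
    Quote("solver-A", ["swap USDC->DAI", "deposit into lending pool"], 4.1, "low"),
    Quote("solver-B", ["bridge to L2", "LP a volatile pair"], 9.3, "high"),
    Quote("solver-C", ["stake in a blue-chip vault"], 4.6, "low"),
]

best = pick_best(intent, quotes)
print(best.solver, best.plan, best.expected_apy)  # solver-C wins under "low" risk
```

In practice the winning quote would be settled atomically on-chain and a misbehaving solver would be penalized; the point here is only the separation between declaring what you want and computing how to get it.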

The Role of Generative AI: The Brains of the Operation 🧠

Generative AI, especially Large Language Models (LLMs), is the key that unlocks this seamless experience. Here’s how it works:

  • Natural Language Interfaces: You can interact with DeFi using plain English. AI-powered "copilots" like HeyAnon and Griffain let you manage your portfolio and execute trades just by chatting with an AI, making DeFi as easy as using ChatGPT.
  • AI Planning & Strategy: When you give a high-level goal like "invest for the best yield," AI agents break it down into a concrete plan. They can analyze market data, predict trends, and rebalance your assets automatically, 24/7.
  • Yield Optimization: AI-driven protocols like Mozaic use agents (theirs is named Archimedes) to constantly scan for the best risk-adjusted returns across different chains and automatically move funds to capture the highest APY.
  • Automated Risk Management: AI can act as a vigilant guardian. If it detects a spike in volatility that could risk your position, it can automatically add collateral or move funds to a safer pool, all based on the risk parameters you set in your original intent.
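
As a tiny illustration of the risk-management role in the last bullet, the rule below maps an observed position state and user-set limits to an action. The thresholds, field names, and action strings are invented for illustration; a real agent would read live oracle data and act through the user's smart account.

```python
# A toy guardian rule: map observed position state plus user-set limits to an
# action. Thresholds, field names, and action strings are illustrative only.

def guardian_action(health_factor: float, volatility: float,
                    min_health: float = 1.5, max_volatility: float = 0.6) -> str:
    if health_factor < min_health:
        return "add_collateral"          # position drifting toward liquidation
    if volatility > max_volatility:
        return "rotate_to_stable_pool"   # market too turbulent for the user's limits
    return "hold"

print(guardian_action(health_factor=1.3, volatility=0.4))  # add_collateral
print(guardian_action(health_factor=2.0, volatility=0.8))  # rotate_to_stable_pool
print(guardian_action(health_factor=2.0, volatility=0.2))  # hold
```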

This powerful combination of DeFi and AI has been dubbed "DeFAI" or "AiFi," and it's set to bring a wave of new users who were previously intimidated by crypto's complexity.


A Multi-Billion Dollar Opportunity 📈

The market potential here is massive. The DeFi market is already projected to grow from around $20.5 billion in 2024 to $231 billion by 2030. By making DeFi more accessible, AI could supercharge that growth.

We're already seeing a gold rush of investment and innovation:

  • AI Assistants: Projects like HeyAnon and aixbt have quickly achieved market caps in the hundreds of millions.
  • Intent-Centric Protocols: Established players are adapting. CoW Protocol and UniswapX use solver competition to protect users from MEV and get them better prices.
  • New Blockchains: Entire Layer-2 networks like Essential and Optopia are being built from the ground up to be "intent-centric," treating AI agents as first-class citizens.

Challenges on the Road Ahead

This future isn't here just yet. The DeFAI space faces significant hurdles:

  • Technical Bottlenecks: Blockchains aren't designed to run complex AI models. Most AI logic has to happen off-chain, which introduces complexity and trust issues.
  • AI Hallucinations & Errors: An AI misinterpreting a user's intent or "hallucinating" a faulty investment strategy could be financially disastrous.
  • Security & Exploitation: Combining AI with smart contracts creates new attack surfaces. An autonomous agent could be tricked into executing a bad trade, draining funds in minutes.
  • Centralization Risk: For intent-based systems to work, they need a large, decentralized network of solvers. If only a few large players dominate, we risk recreating the same centralized dynamics of traditional finance.

The Path Forward: Autonomous Finance

The fusion of generative AI and DeFi is pushing us toward a future of Autonomous Finance, where intelligent agents manage assets, execute strategies, and optimize returns on our behalf, all within a decentralized framework.

The journey requires solving major technical and security challenges. But with dozens of projects building the infrastructure, from AI-native oracles to intent-centric blockchains, the momentum is undeniable.

For users, this means a future where engaging with the world of decentralized finance is as simple as having a conversation—a future where you focus on your financial goals, and your AI partner handles the rest. The next generation of finance is being built today, and it’s looking smarter, simpler, and more autonomous than ever before.

Verifiable On-Chain AI with zkML and Cryptographic Proofs

· 36 min read
Dora Noda
Software Engineer

Introduction: The Need for Verifiable AI on Blockchain

As AI systems grow in influence, ensuring their outputs are trustworthy becomes critical. Traditional methods rely on institutional assurances (essentially “just trust us”), which offer no cryptographic guarantees. This is especially problematic in decentralized contexts like blockchains, where a smart contract or user must trust an AI-derived result without being able to re-run a heavy model on-chain. Zero-knowledge Machine Learning (zkML) addresses this by allowing cryptographic verification of ML computations. In essence, zkML enables a prover to generate a succinct proof that “the output $Y$ came from running model $M$ on input $X$” without revealing $X$ or the internal details of $M$. These zero-knowledge proofs (ZKPs) can be verified by anyone (or any contract) efficiently, shifting AI trust from “policy to proof”.

On-chain verifiability of AI means a blockchain can incorporate advanced computations (like neural network inferences) by verifying a proof of correct execution instead of performing the compute itself. This has broad implications: smart contracts can make decisions based on AI predictions, decentralized autonomous agents can prove they followed their algorithms, and cross-chain or off-chain compute services can provide verifiable outputs rather than unverifiable oracles. Ultimately, zkML offers a path to trustless and privacy-preserving AI – for example, proving an AI model’s decisions are correct and authorized without exposing private data or proprietary model weights. This is key for applications ranging from secure healthcare analytics to blockchain gaming and DeFi oracles.

How zkML Works: Compressing ML Inference into Succinct Proofs

At a high level, zkML combines cryptographic proof systems with ML inference so that a complex model evaluation can be “compressed” into a small proof. Internally, the ML model (e.g. a neural network) is represented as a circuit or program consisting of many arithmetic operations (matrix multiplications, activation functions, etc.). Rather than revealing all intermediate values, a prover performs the full computation off-chain and then uses a zero-knowledge proof protocol to attest that every step was done correctly. The verifier, given only the proof and some public data (like the final output and an identifier for the model), can be cryptographically convinced of the correctness without re-executing the model.

To achieve this, zkML frameworks typically transform the model computation into a format amenable to ZKPs:

  • Circuit Compilation: In SNARK-based approaches, the computation graph of the model is compiled into an arithmetic circuit or set of polynomial constraints. Each layer of the neural network (convolutions, matrix multiplies, nonlinear activations) becomes a sub-circuit with constraints ensuring the outputs are correct given the inputs. Because neural nets involve non-linear operations (ReLUs, Sigmoids, etc.) not naturally suited to polynomials, techniques like lookup tables are used to handle these efficiently. For example, a ReLU (output = max(0, input)) can be enforced by a custom constraint or lookup that verifies output equals input if input≥0 else zero. The end result is a set of cryptographic constraints that the prover must satisfy, which implicitly proves the model ran correctly.
  • Execution Trace & Virtual Machines: An alternative is to treat the model inference as a program trace, as done in zkVM approaches. For instance, the JOLT zkVM targets the RISC-V instruction set; one can compile the ML model (or the code that computes it) to RISC-V and then prove each CPU instruction executed properly. JOLT introduces a “lookup singularity” technique, replacing expensive arithmetic constraints with fast table lookups for each valid CPU operation. Every operation (add, multiply, bitwise op, etc.) is checked via a lookup in a giant table of pre-computed valid outcomes, using a specialized argument (Lasso/SHOUT) to keep this efficient. This drastically reduces the prover workload: even complex 64-bit operations become a single table lookup in the proof instead of many arithmetic constraints.
  • Interactive Protocols (GKR Sum-Check): A third approach uses interactive proofs like GKR (Goldwasser–Kalai–Rothblum) to verify a layered computation. Here the model’s computation is viewed as a layered arithmetic circuit (each neural network layer is one layer of the circuit graph). The prover runs the model normally but then engages in a sum-check protocol to prove that each layer’s outputs are correct given its inputs. In Lagrange’s approach (DeepProve, detailed next), the prover and verifier perform an interactive polynomial protocol (made non-interactive via Fiat-Shamir) that checks consistency of each layer’s computations without re-doing them. This sum-check method avoids generating a monolithic static circuit; instead it verifies the consistency of computations in a step-by-step manner with minimal cryptographic operations (mostly hashing or polynomial evaluations).
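
To ground the sum-check idea, here is a minimal, self-contained Python sketch of the protocol for a multilinear polynomial over the Boolean hypercube, the primitive that GKR-style provers apply layer by layer. It is purely illustrative (a toy prime field, in-process "interaction" instead of Fiat-Shamir, no commitments): the point is that the verifier checks one linear polynomial per round plus a single evaluation at the end, instead of summing all 2^n terms itself.

```python
import random

# Toy sum-check protocol for a multilinear polynomial g over {0,1}^n.
# This is the core interactive primitive behind GKR-style zkML provers:
# the verifier never recomputes the full sum; it checks one linear
# polynomial per round plus a single evaluation at the end.

P = 2**61 - 1  # toy prime field modulus (illustrative choice)

def fold(table, r):
    """Bind the first variable of a multilinear table to the field value r."""
    half = len(table) // 2
    return [((1 - r) * table[i] + r * table[half + i]) % P for i in range(half)]

def mle_eval(table, point):
    """Evaluate the multilinear extension of `table` at `point`."""
    for r in point:
        table = fold(table, r)
    return table[0]

def sumcheck(table):
    """Prover/verifier interaction proving sum(table) without the verifier
    summing all 2^n entries itself."""
    n = len(table).bit_length() - 1   # number of variables (table length = 2^n)
    claim = sum(table) % P            # the value the prover asserts
    challenges = []
    cur = list(table)                 # prover's working table
    for _ in range(n):
        h0 = sum(cur[: len(cur) // 2]) % P   # h_i(0)
        h1 = sum(cur[len(cur) // 2:]) % P    # h_i(1)
        assert (h0 + h1) % P == claim        # verifier's per-round check
        r = random.randrange(P)              # verifier's random challenge
        claim = ((1 - r) * h0 + r * h1) % P  # new claim = h_i(r)
        cur = fold(cur, r)
        challenges.append(r)
    # Final check: a single evaluation of g at the random point.
    assert mle_eval(list(table), challenges) == claim
    return True

# Example: "prove" the sum of 8 values (n = 3 variables).
print(sumcheck([3, 1, 4, 1, 5, 9, 2, 6]))  # True
```

In a full GKR prover, that final evaluation is itself reduced to a claim about the previous layer rather than computed directly, so the verifier never materializes intermediate activations.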

Regardless of approach, the outcome is a succinct proof (typically a few kilobytes to a few tens of kilobytes) that attests to the correctness of the entire inference. The proof is zero-knowledge, meaning any secret inputs (private data or model parameters) can be kept hidden – they influence the proof but are not revealed to verifiers. Only the intended public outputs or assertions are revealed. This allows scenarios like “prove that model $M$ when applied to patient data $X$ yields diagnosis $Y$, without revealing $X$ or the model’s weights.”

Enabling on-chain verification: Once a proof is generated, it can be posted to a blockchain. Smart contracts can include verification logic to check the proof, often using precompiled cryptographic primitives. For example, Ethereum has precompiles for elliptic-curve pairing operations that many zk-SNARK verifiers rely on, making on-chain verification of SNARK proofs efficient. STARKs (hash-based proofs) are larger, but can still be verified on-chain with careful optimization or possibly with some trust assumptions (StarkWare’s L2, for instance, verifies STARK proofs on Ethereum by an on-chain verifier contract, albeit with higher gas cost than SNARKs). The key is that the chain does not need to execute the ML model – it only runs a verification which is much cheaper than the original compute. In summary, zkML compresses expensive AI inference into a small proof that blockchains (or any verifier) can check in milliseconds to seconds.

Lagrange DeepProve: Architecture and Performance of a zkML Breakthrough

DeepProve by Lagrange Labs is a state-of-the-art zkML inference framework focusing on speed and scalability. Launched in 2025, DeepProve introduced a new proving system that is dramatically faster than prior solutions like Ezkl. Its design centers on the GKR interactive proof protocol with sum-check and specialized optimizations for neural network circuits. Here’s how DeepProve works and achieves its performance:

  • One-Time Preprocessing: Developers start with a trained neural network (currently supported types include multilayer perceptrons and popular CNN architectures). The model is exported to ONNX format, a standard graph representation. DeepProve’s tool then parses the ONNX model and quantizes it (converts weights to fixed-point/integer form) for efficient field arithmetic (a toy sketch of this quantization step follows this list). In this phase, it also generates the proving and verification keys for the cryptographic protocol. This setup is done once per model and does not need to be repeated per inference. DeepProve emphasizes ease of integration: “Export your model to ONNX → one-time setup → generate proofs → verify anywhere”.

  • Proving (Inference + Proof Generation): After setup, a prover (which could be run by a user, a service, or Lagrange’s decentralized prover network) takes a new input $X$ and runs the model $M$ on it, obtaining output $Y$. During this execution, DeepProve records an execution trace of each layer’s computations. Instead of translating every multiplication into a static circuit upfront (as SNARK approaches do), DeepProve uses the linear-time GKR protocol to verify each layer on the fly. For each network layer, the prover commits to the layer’s inputs and outputs (e.g., via cryptographic hashes or polynomial commitments) and then engages in a sum-check argument to prove that the outputs indeed result from the inputs as per the layer’s function. The sum-check protocol iteratively convinces the verifier of the correctness of a sum of evaluations of a polynomial that encodes the layer’s computation, without revealing the actual values. Non-linear operations (like ReLU, softmax) are handled efficiently through lookup arguments in DeepProve – if an activation’s output was computed, DeepProve can prove that each output corresponds to a valid input-output pair from a precomputed table for that function. Layer by layer, proofs are generated and then aggregated into one succinct proof covering the whole model’s forward pass. The heavy lifting of cryptography is minimized – DeepProve’s prover mostly performs normal numeric computations (the actual inference) plus some light cryptographic commitments, rather than solving a giant system of constraints.

  • Verification: The verifier uses the final succinct proof along with a few public values – typically the model’s committed identifier (a cryptographic commitment to $M$’s weights), the input $X$ (if not private), and the claimed output $Y$ – to check correctness. Verification in DeepProve’s system involves verifying the sum-check protocol’s transcript and the final polynomial or hash commitments. This is more involved than verifying a classic SNARK (which might be a few pairings), but it’s vastly cheaper than re-running the model. In Lagrange’s benchmarks, verifying a DeepProve proof for a medium CNN takes on the order of 0.5 seconds in software. That is ~0.5s to confirm, for example, that a convolutional network with hundreds of thousands of parameters ran correctly – over 500× faster than naively re-computing that CNN on a GPU for verification. (In fact, DeepProve measured up to 521× faster verification for CNNs and 671× for MLPs compared to re-execution.) The proof size is small enough to transmit on-chain (tens of KB), and verification could be performed in a smart contract if needed, although 0.5s of computation might require careful gas optimization or layer-2 execution.
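
As a concrete (and deliberately simplified) look at the quantization step flagged in the preprocessing bullet above: proof systems operate over finite-field integers, so floating-point weights and activations are mapped to fixed-point integers before proving. The scale factor, field modulus, and rounding rule below are illustrative choices, not DeepProve's actual parameters.

```python
# Toy fixed-point quantization of the kind zkML pipelines apply before proving:
# floats are scaled and rounded to integers so that every operation happens
# over field elements. Scale and modulus are arbitrary illustrative values.

SCALE = 2**8          # fixed-point scale factor (illustrative)
P = 2**61 - 1         # toy prime field modulus

def quantize(x: float) -> int:
    """Map a float to a field element in fixed-point form."""
    return round(x * SCALE) % P

def dequantize(q: int, levels: int = 1) -> float:
    """Recover an approximate float; `levels` = how many scale factors accumulated."""
    if q > P // 2:            # interpret large residues as negative numbers
        q -= P
    return q / (SCALE ** levels)

weights = [0.73, -1.20, 0.05]
inputs  = [1.5, 2.0, -0.4]

qw = [quantize(w) for w in weights]
qx = [quantize(x) for x in inputs]

# Integer dot product, exactly as a prover would compute it (mod P).
acc = sum(w * x for w, x in zip(qw, qx)) % P

print(dequantize(acc, levels=2))                    # ~ -1.323 (quantized result)
print(sum(w * x for w, x in zip(weights, inputs)))  # -1.325   (float reference)
```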

Architecture and Tooling: DeepProve is implemented in Rust and provides a toolkit (the zkml library) for developers. It natively supports ONNX model graphs, making it compatible with models from PyTorch or TensorFlow (after exporting). The proving process currently targets models up to a few million parameters (tests include a 4M-parameter dense network). DeepProve leverages a combination of cryptographic components: a multilinear polynomial commitment (to commit to layer outputs), the sum-check protocol for verifying computations, and lookup arguments for non-linear ops. Notably, Lagrange’s open-source repository acknowledges it builds on prior work (the sum-check and GKR implementation from Scroll’s Ceno project), indicating an intersection of zkML with zero-knowledge rollup research.

To achieve real-time scalability, Lagrange pairs DeepProve with its Prover Network – a decentralized network of specialized ZK provers. Heavy proof generation can be offloaded to this network: when an application needs an inference proved, it sends the job to Lagrange’s network, where many operators (staked on EigenLayer for security) compute proofs and return the result. This network economically incentivizes reliable proof generation (malicious or failed jobs get the operator slashed). By distributing work across provers (and potentially leveraging GPUs or ASICs), the Lagrange Prover Network hides the complexity and cost from end-users. The result is a fast, scalable, and decentralized zkML service: “verifiable AI inferences fast and affordable”.

Performance Milestones: DeepProve’s claims are backed by benchmarks against the prior state-of-the-art, Ezkl. For a CNN with ~264k parameters (CIFAR-10 scale model), DeepProve’s proving time was ~1.24 seconds versus ~196 seconds for Ezkl – about 158× faster. For a larger dense network with 4 million parameters, DeepProve proved an inference in ~2.3 seconds vs ~126.8 seconds for Ezkl (~54× faster). Verification times also dropped: DeepProve verified the 264k CNN proof in ~0.6s, whereas verifying the Ezkl proof (Halo2-based) took over 5 minutes on CPU in that test. The speedups come from DeepProve’s near-linear complexity: its prover scales roughly O(n) with the number of operations, whereas circuit-based SNARK provers often have superlinear overhead (FFT and polynomial commitments scaling). In fact, DeepProve’s prover throughput can be within an order of magnitude of plain inference runtime – recent GKR systems can be <10× slower than raw execution for large matrix multiplications, an impressive achievement in ZK. This makes real-time or on-demand proofs more feasible, paving the way for verifiable AI in interactive applications.

Use Cases: Lagrange is already collaborating with Web3 and AI projects to apply zkML. Example use cases include: verifiable NFT traits (proving an AI-generated evolution of a game character or collectible is computed by the authorized model), provenance of AI content (proving an image or text was generated by a specific model, to combat deepfakes), DeFi risk models (proving a model’s output that assesses financial risk without revealing proprietary data), and private AI inference in healthcare or finance (where a hospital can get AI predictions with a proof, ensuring correctness without exposing patient data). By making AI outputs verifiable and privacy-preserving, DeepProve opens the door to “AI you can trust” in decentralized systems – moving from an era of “blind trust in black-box models” to one of “objective guarantees”.

SNARK-Based zkML: Ezkl and the Halo2 Approach

The traditional approach to zkML uses zk-SNARKs (Succinct Non-interactive Arguments of Knowledge) to prove neural network inference. Ezkl (by ZKonduit/Modulus Labs) is a leading example of this approach. It builds on the Halo2 proving system (a PLONK-style SNARK with polynomial commitments). Ezkl provides a tooling chain where a developer can take a PyTorch or TensorFlow model, export it to ONNX, and have Ezkl compile it into a custom arithmetic circuit automatically.

How it works: Each layer of the neural network is converted into constraints (a toy constraint check is sketched after this list):

  • Linear layers (dense or convolution) become collections of multiplication-add constraints that enforce the dot-products between inputs, weights, and outputs.
  • Non-linear layers (like ReLU, sigmoid, etc.) are handled via lookups or piecewise constraints because such functions are not polynomial. For instance, a ReLU can be implemented by a boolean selector $b$ with constraints ensuring $y = x \cdot b$ and $0 \le b \le 1$ and $b=1$ if $x>0$ (one way to do it), or more efficiently by a lookup table mapping $x \mapsto \max(0,x)$ for a range of $x$ values. Halo2’s lookup arguments allow mapping 16-bit (or smaller) chunks of values, so large domains (like all 32-bit values) are usually “chunked” into several smaller lookups. This chunking increases the number of constraints.
  • Big integer ops or divisions (if any) are similarly broken into small pieces. The result is a large set of R1CS/PLONK constraints tailored to the specific model architecture.
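
Here is the toy constraint check promised above. It evaluates, in plain Python, the multiply-add constraints of a tiny dense layer plus the boolean-selector ReLU constraint from the list. In Halo2 these same conditions would be expressed as polynomial constraints and lookup tables and proven cryptographically, but the witness the prover must produce has exactly this shape.

```python
# A toy "circuit view" of one dense layer followed by ReLU: every output is
# pinned down by multiply-add constraints plus a boolean-selector constraint.
# Here the constraints are simply evaluated in Python; a real system (e.g.
# Halo2) would encode them as polynomial constraints and lookups and prove
# them cryptographically.

def relu_constraints_hold(x, b, y):
    """ReLU via a boolean selector b: y = x*b, b in {0,1}, and b = 1 iff x > 0."""
    return b in (0, 1) and y == x * b and (b == 1) == (x > 0)

def dense_constraints_hold(weights, inputs, bias, pre_acts):
    """Each pre-activation must equal the dot product of a weight row and the inputs."""
    return all(
        pre == sum(w * x for w, x in zip(row, inputs)) + bz
        for row, bz, pre in zip(weights, bias, pre_acts)
    )

# A tiny witness: 2 inputs -> 3 neurons -> ReLU.
inputs    = [2, 1]
weights   = [[1, 3], [-2, 1], [0, 4]]
bias      = [0, 1, -5]
pre_acts  = [5, -2, -1]   # w·x + b for each neuron
selectors = [1, 0, 0]     # b = 1 exactly where the pre-activation is positive
outputs   = [5, 0, 0]     # ReLU outputs

ok = dense_constraints_hold(weights, inputs, bias, pre_acts) and all(
    relu_constraints_hold(x, b, y)
    for x, b, y in zip(pre_acts, selectors, outputs)
)
print(ok)  # True: this witness satisfies every constraint of the tiny layer
```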

Ezkl then uses Halo2 to generate a proof that these constraints hold given the secret inputs (model weights, private inputs) and public outputs.

Tooling and integration: One advantage of the SNARK approach is that it leverages well-known primitives. Halo2 is battle-tested – it was developed for Zcash and is used in Ethereum zkEVM rollups – and on-chain verifier contracts for it are readily available. Ezkl’s proofs can be checked using Ethereum’s elliptic-curve precompiles, making it straightforward to verify an Ezkl proof in a smart contract. The team has also provided user-friendly APIs; for example, data scientists can work with their models in Python and use Ezkl’s CLI to produce proofs, without deep knowledge of circuits.

Strengths: Ezkl’s approach benefits from the generality and ecosystem of SNARKs. It supports reasonably complex models and has already seen “practical integrations (from DeFi risk models to gaming AI)”, proving real-world ML tasks. Because it operates at the level of the model’s computation graph, it can apply ML-specific optimizations: e.g. pruning insignificant weights or quantizing parameters to reduce circuit size. It also means model confidentiality is natural – the weights can be treated as private witness data, so the verifier only sees that some valid model produced the output, or at best a commitment to the model. The verification of SNARK proofs is extremely fast (typically a few milliseconds or less on-chain), and proof sizes are small (a few kilobytes), which is ideal for blockchain usage.

Weaknesses: Performance is the Achilles’ heel. Circuit-based proving imposes large overheads, especially as models grow. It’s noted that historically, SNARK circuits could be a million times more work for the prover than just running the model itself. Halo2 and Ezkl optimize this, but still, operations like large matrix multiplications generate tons of constraints. If a model has millions of parameters, the prover must handle correspondingly millions of constraints, performing heavy FFTs and multiexponentiation in the process. This leads to high proving times (often minutes or hours for non-trivial models) and high memory usage. For example, proving even a relatively small CNN (e.g. a few hundred thousand parameters) can take tens of minutes with Ezkl on a single machine. The team behind DeepProve cited that Ezkl took hours for certain model proofs that DeepProve can do in minutes. Large models might not even fit in memory or require splitting into multiple proofs (which then need recursive aggregation). While Halo2 is “moderately optimized”, any need to “chunk” lookups or handle wide-bit operations translates to extra overhead. In summary, scalability is limited – Ezkl works well for small-to-medium models (and indeed outperformed some earlier alternatives like naive Stark-based VMs in benchmarks), but struggles as model size grows beyond a point.

Despite these challenges, Ezkl and similar SNARK-based zkML libraries are important stepping stones. They proved that verified ML inference is possible on-chain and have active usage. Notably, projects like Modulus Labs demonstrated verifying an 18-million-parameter model on-chain using SNARKs (with heavy optimization). The cost was non-trivial, but it shows the trajectory. Moreover, the Mina Protocol has its own zkML toolkit that uses SNARKs to allow smart contracts on Mina (which are SNARK-based) to verify ML model execution. This indicates growing multi-platform support for SNARK-based zkML.

STARK-Based Approaches: Transparent and Programmable ZK for ML

zk-STARKs (Scalable Transparent ARguments of Knowledge) offer another route to zkML. STARKs use hash-based cryptography (like FRI for polynomial commitments) and avoid any trusted setup. They often operate by simulating a CPU or VM and proving the execution trace is correct. In the context of ML, one can either build a custom STARK for the neural network or use a general-purpose STARK VM to run the model code.

General STARK VMs (RISC Zero, Cairo): A straightforward approach is to write inference code and run it in a STARK VM. For example, Risc0 provides a RISC-V environment where any code (e.g., C++ or Rust implementation of a neural network) can be executed and proven via a STARK. Similarly, StarkWare’s Cairo language can express arbitrary computations (like an LSTM or CNN inference) which are then proved by the StarkNet STARK prover. The advantage is flexibility – you don’t need to design custom circuits for each model. However, early benchmarks showed that naive STARK VMs were slower compared to optimized SNARK circuits for ML. In one test, a Halo2-based proof (Ezkl) was about 3× faster than a STARK-based approach on Cairo, and even 66× faster than a RISC-V STARK VM on a certain benchmark in 2024. This gap is due to the overhead of simulating every low-level instruction in a STARK and the larger constants in STARK proofs (hashing is fast but you need a lot of it; STARK proof sizes are bigger, etc.). However, STARK VMs are improving and have the benefit of transparent setup (no trusted setup) and post-quantum security. As STARK-friendly hardware and protocols advance, proving speeds will improve.

DeepProve’s approach vs STARK: Interestingly, DeepProve’s use of GKR and sum-check yields a proof more akin to a STARK in spirit – it’s an interactive, hash-based proof with no need for a structured reference string. The trade-off is that its proofs are larger and verification is heavier than a SNARK’s. Yet, DeepProve shows that careful protocol design (specialized to ML’s layered structure) can vastly outperform both generic STARK VMs and SNARK circuits in proving time. We can consider DeepProve a bespoke STARK-style zkML prover (though Lagrange describes it as a zkSNARK because the proof is succinct, it lacks a traditional SNARK’s tiny constant-cost verification – a ~0.5 s verification is slower than a typical SNARK’s). Traditional STARK proofs (like StarkNet’s) often involve tens of thousands of field operations to verify, whereas a SNARK verifies with maybe a few dozen. Thus, one trade-off is evident: SNARKs yield smaller proofs and faster verifiers, while STARKs (or GKR) offer easier scaling and no trusted setup at the cost of proof size and verification speed.

Emerging improvements: The JOLT zkVM (discussed earlier) actually outputs SNARKs (using PLONKish commitments), but it embodies ideas that could be applied in a STARK context too (Lasso lookups could theoretically be used with FRI commitments). StarkWare and others are researching ways to speed up proving of common operations (like using custom gates or hints in Cairo for big-integer ops, etc.). There’s also Circomlib-ML by Privacy & Scaling Explorations (PSE), which provides Circom templates for CNN layers, etc. – that’s SNARK-oriented, but conceptually similar templates could be made for STARK languages.

In practice, non-Ethereum ecosystems leveraging STARKs include StarkNet (which could allow on-chain verification of ML if someone writes a verifier, though cost is high) and Risc0’s Bonsai service (which is an off-chain proving service that emits STARK proofs which can be verified on various chains). As of 2025, most zkML demos on blockchain have favored SNARKs (due to verifier efficiency), but STARK approaches remain attractive for their transparency and potential in high-security or quantum-resistant settings. For example, a decentralized compute network might use STARKs to let anyone verify work without a trusted setup, useful for longevity. Also, some specialized ML tasks might exploit STARK-friendly structures: e.g. computations heavily using XOR/bit operations could be faster in STARKs (since those are cheap in boolean algebra and hashing) than in SNARK field arithmetic.

Summary of SNARK vs STARK for ML:

  • Performance: SNARKs (like Halo2) have huge proving overhead per gate but benefit from powerful optimizations and small constants for verify; STARKs (generic) have larger constant overhead but scale more linearly and avoid expensive crypto like pairings. DeepProve shows that customizing the approach (sum-check) yields near-linear proving time (fast) but with a STARK-like proof. JOLT shows that even a general VM can be made faster with heavy use of lookups. Empirically, for models up to millions of operations: a well-optimized SNARK (Ezkl) can handle it but might take tens of minutes, whereas DeepProve (GKR) can do it in seconds. STARK VMs in 2024 were likely in between or worse than SNARKs unless specialized (Risc0 was slower in tests, Cairo was slower without custom hints).
  • Verification: SNARK proofs verify quickest (milliseconds, and minimal data on-chain ~ a few hundred bytes to a few KB). STARK proofs are larger (dozens of KB) and take longer (tens of ms to seconds) to verify due to many hashing steps. In blockchain terms, a SNARK verify might cost e.g. ~200k gas, whereas a STARK verify could cost millions of gas – often too high for L1, acceptable on L2 or with succinct verification schemes.
  • Setup and Security: SNARKs like Groth16 require a trusted setup per circuit (unfriendly for arbitrary models), but universal SNARKs (PLONK, Halo2) have a one-time setup that can be reused for any circuit up to a certain size. STARKs need no setup and use only hash assumptions (plus classical polynomial complexity assumptions), and are post-quantum secure. This makes STARKs appealing for longevity – proofs remain secure even if quantum computers emerge, whereas current SNARKs (built on elliptic-curve cryptography) would be broken by quantum attacks.

We will consolidate these differences in a comparison table shortly.

FHE for ML (FHE-of-ML): Private Computation vs. Verifiable Computation

Fully Homomorphic Encryption (FHE) is a cryptographic technique that allows computations to be performed directly on encrypted data. In the context of ML, FHE can enable a form of privacy-preserving inference: for example, a client can send encrypted input to a model host, the host runs the neural network on the ciphertext without decrypting it, and sends back an encrypted result which the client can decrypt. This ensures data confidentiality – the model owner learns nothing about the input (and potentially the client learns only the output, not the model’s internals if they only get output). However, FHE by itself does not produce a proof of correctness in the same way ZKPs do. The client must trust that the model owner actually performed the computation honestly (the ciphertext could have been manipulated). Usually, if the client has the model or expects a certain distribution of outputs, blatant cheating can be detected, but subtle errors or use of a wrong model version would not be evident just from the encrypted output.
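
To make that flow concrete, the self-contained Python toy below uses the Paillier cryptosystem, which is only additively homomorphic (real FHE schemes such as BFV, CKKS, or TFHE are lattice-based and also support multiplication on ciphertexts, at far greater cost). The parameters are deliberately tiny and insecure; what matters is the shape of the interaction: the client encrypts its features, the server evaluates a linear model purely on ciphertexts, and only the client can decrypt the score.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic) with deliberately tiny,
# insecure parameters. Real FHE schemes (BFV/CKKS/TFHE) are lattice-based and
# support deeper circuits, but the "compute on encrypted data" shape is the same.

p, q = 10007, 10009                  # toy primes; far too small for real security
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_cipher(c1, c2):   # E(m1) * E(m2) mod n^2  ==  E(m1 + m2)
    return (c1 * c2) % n2

def scale_cipher(c, k):   # E(m)^k mod n^2  ==  E(k * m)
    return pow(c, k, n2)

# --- client side: encrypt the input features ---
features = [3, 7, 2]
enc_features = [encrypt(x) for x in features]

# --- server side: evaluate w·x + b using only ciphertexts ---
weights, bias = [5, 1, 4], 10
enc_score = encrypt(bias)
for w, c in zip(weights, enc_features):
    enc_score = add_cipher(enc_score, scale_cipher(c, w))

# --- client side: decrypt the result ---
print(decrypt(enc_score))  # 5*3 + 1*7 + 4*2 + 10 = 40
```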

Trade-offs in performance: FHE is notoriously heavy in computation. Running deep learning inference under FHE incurs orders-of-magnitude slowdown. Early experiments (e.g., CryptoNets in 2016) took tens of seconds to evaluate a tiny CNN on encrypted data. By 2024, improvements like CKKS (for approximate arithmetic) and better libraries (Microsoft SEAL, Zama’s Concrete) have reduced this overhead, but it remains large. For example, a user reported that using Zama’s Concrete-ML to run a CIFAR-10 classifier took 25–30 minutes per inference on their hardware. After optimizations, Zama’s team achieved ~40 seconds for that inference on a 192-core server. Even 40s is extremely slow compared to a plaintext inference (which might be 0.01s), showing a ~$10^3$–$10^4\times$ overhead. Larger models or higher precision increase the cost further. Additionally, FHE operations consume a lot of memory and require occasional bootstrapping (a noise-reduction step) which is computationally expensive. In summary, scalability is a major issue – state-of-the-art FHE might handle a small CNN or simple logistic regression, but scaling to large CNNs or Transformers is beyond current practical limits.

Privacy advantages: FHE’s big appeal is data privacy. The input can remain completely encrypted throughout the process. This means an untrusted server can compute on a client’s private data without learning anything about it. Conversely, if the model is sensitive (proprietary), one could envisage encrypting the model parameters and having the client perform FHE inference on their side – but this is less common because if the client has to do the heavy FHE compute, it negates the idea of offloading to a powerful server. Typically, the model is public or held by server in the clear, and the data is encrypted by the client’s key. Model privacy in that scenario is not provided by default (the server knows the model; the client learns outputs but not weights). There are more exotic setups (like secure two-party computation or multi-key FHE) where both model and data can be kept private from each other, but those incur even more complexity. In contrast, zkML via ZKPs can ensure model privacy and data privacy at once – the prover can have both the model and data as secret witness, only revealing what’s needed to the verifier.

No on-chain verification needed (and none possible): With FHE, the result comes out encrypted to the client. The client then decrypts it to obtain the actual prediction. If we want to use that result on-chain, the client (or whoever holds the decryption key) would have to publish the plaintext result and convince others it’s correct. But at that point, trust is back in the loop – unless combined with a ZKP. In principle, one could combine FHE and ZKP: e.g., use FHE to keep data private during compute, and then generate a ZK-proof that the plaintext result corresponds to a correct computation. However, combining them means you pay the performance penalty of FHE and ZKP – extremely impractical with today’s tech. So, in practice FHE-of-ML and zkML serve different use cases:

  • FHE-of-ML: Ideal when the goal is confidentiality between two parties (client and server). For instance, a cloud service can host an ML model and users can query it with their sensitive data without revealing the data to the cloud (and if the model is sensitive, perhaps deploy it via FHE-friendly encodings). This is great for privacy-preserving ML services (medical predictions, etc.). The user still has to trust the service to faithfully run the model (since no proof), but at least any data leakage is prevented. Some projects like Zama are even exploring an “FHE-enabled EVM (fhEVM)” where smart contracts could operate on encrypted inputs, but verifying those computations on-chain would require the contract to somehow enforce correct computation – an open challenge likely requiring ZK proofs or specialized secure hardware.
  • zkML (ZKPs): Ideal when the goal is verifiability and public auditability. If you want anyone (or any contract) to be sure that “Model $M$ was evaluated correctly on $X$ and produced $Y$”, ZKPs are the solution. They also provide privacy as a bonus (you can hide $X$ or $Y$ or $M$ if needed by treating them as private inputs to the proof), but their primary feature is the proof of correct execution.

A complementary relationship: It’s worth noting that ZKPs protect the verifier (they learn nothing about secrets, only that the computation was correctly done), whereas FHE protects the prover’s data from the computing party. In some scenarios, these could be combined – for example, a network of untrusted nodes could use FHE to compute on users’ private data and then provide ZK proofs to the users (or blockchain) that the computations were done according to the protocol. This would cover both privacy and correctness, but the performance cost is enormous with today’s algorithms. More feasible in the near term are hybrids like Trusted Execution Environments (TEE) plus ZKP or Functional Encryption plus ZKP – these are beyond our scope, but they aim to provide something similar (TEEs keep data/model secret during compute, then a ZKP can attest the TEE did the right thing).

In summary, FHE-of-ML prioritizes confidentiality of inputs/outputs, while zkML prioritizes verifiable correctness (with possible privacy). Table 1 below contrasts the key properties:

zk-SNARK (Halo2, Groth16, PLONK, etc.)

  • Prover performance (inference + proof): Heavy prover overhead (up to 10^6× normal runtime without optimizations; in practice 10^3–10^5×). Optimized for a specific model/circuit; proving time in minutes for medium models, hours for large ones. Recent zkML SNARKs (DeepProve with GKR) vastly improve this (near-linear overhead, e.g. seconds instead of minutes for million-parameter models).
  • Proof size & verification: Very small proofs (often < 100 KB, sometimes ~a few KB). Verification is fast: a few pairings or polynomial evaluations (typically < 50 ms on-chain). DeepProve’s GKR-based proofs are larger (tens to hundreds of KB) and verify in ~0.5 s (still much faster than re-running the model).
  • Privacy features: Data confidentiality – yes, inputs can be kept private in the proof (not revealed). Model privacy – yes, the prover can commit to model weights and not reveal them. Output hiding – optional; the proof can be of a statement without revealing the output (e.g. “output has property P”), though if the output itself is needed on-chain it typically becomes public. Overall, SNARKs offer full zero-knowledge flexibility (hide whichever parts you want).
  • Trusted setup: Depends on the scheme. Groth16 requires a trusted setup per circuit; PLONK/Halo2 (as used by Ezkl) use a universal one-time setup. DeepProve’s sum-check GKR is transparent (no setup) – a bonus of that design.
  • Post-quantum: Classical SNARKs (built on elliptic-curve pairings) are not PQ-safe (vulnerable to quantum attacks on elliptic-curve discrete log). Some newer SNARKs use PQ-safe commitments, but Halo2/PLONK as used in Ezkl are not PQ-safe. GKR (DeepProve) uses hash commitments (e.g. Poseidon/Merkle), which are conjectured PQ-safe (relying on hash preimage resistance).

zk-STARK (FRI, hash-based proofs)

  • Prover performance (inference + proof): High overhead but more linear scaling – typically 10^2–10^4× slower than native for large tasks, with room to parallelize. General STARK VMs (Risc0, Cairo) saw slower performance vs. SNARKs for ML in 2024 (e.g. 3×–66× slower than Halo2 in some cases). Specialized STARKs (or GKR) can approach linear overhead and outperform SNARKs for large circuits.
  • Proof size & verification: Proofs are larger – often tens of KB (growing with circuit size/log(n)). The verifier must do multiple hash and FFT checks; verification time is ~O(n^ε) for small ε (e.g. ~50 ms to 500 ms depending on proof size). On-chain, this is costlier (StarkWare’s L1 verifier can take millions of gas per proof). Some STARKs support recursive proofs to compress size, at the cost of prover time.
  • Privacy features: Data and model privacy – a STARK can be made zero-knowledge by randomizing trace data (adding blinding to polynomial evaluations), so it can hide private inputs similarly to a SNARK. Many STARK implementations focus on integrity, but zk-STARK variants do allow privacy, so inputs/models can be hidden like with SNARKs. Output hiding – likewise possible in theory (the prover doesn’t declare the output as public), but rarely used since the output is usually what we want to reveal/verify.
  • Trusted setup: None. Transparency is a hallmark of STARKs – they only require a common random string (which Fiat-Shamir can derive). This makes them attractive for open-ended use (any model, any time, no per-model ceremony).
  • Post-quantum: Yes. STARKs rely on hash and information-theoretic security assumptions (like the random oracle model and the difficulty of certain codeword-decoding problems in FRI), which are believed to be secure against quantum adversaries. STARK proofs are thus PQ-resistant, an advantage for future-proofing verifiable AI.

FHE for ML (fully homomorphic encryption applied to inference)

  • Prover performance (inference): The “prover” here is the party computing on encrypted data, and the computation time is extremely high – 10^3–10^5× slower than plaintext inference is common. High-end hardware (many-core servers, FPGAs, etc.) can mitigate this, and some optimizations (low-precision inference, leveled FHE parameters) reduce overhead, but there is a fundamental performance hit. FHE is currently practical for small or simple linear models; deep networks remain challenging beyond toy sizes.
  • Proof size & verification: No proof is generated – the result is an encrypted output. Verification in the sense of checking correctness is not provided by FHE alone; one trusts the computing party not to cheat. (If combined with secure hardware, one might get an attestation; otherwise, a malicious server could return an incorrect encrypted result that the client would decrypt to a wrong output without knowing the difference.)
  • Privacy features: Data confidentiality – yes, the input is encrypted, so the computing party learns nothing about it. Model privacy – if the model owner computes on encrypted input, the model is in plaintext on their side (not protected); if roles are reversed (the client holds the model encrypted and the server computes), the model could be kept encrypted, but this is less common. Techniques like secure two-party ML combine FHE/MPC to protect both, but these go beyond plain FHE. Output hiding – by default the output is encrypted (only decryptable by the party with the secret key, usually the input owner), so it is hidden from the computing server; if the output should be public, the client can decrypt and reveal it.
  • Trusted setup: None needed. Each user generates their own key pair for encryption; trust relies on keys remaining secret.
  • Post-quantum: The security of FHE schemes (e.g. BFV, CKKS, TFHE) is based on lattice problems (Learning With Errors), which are believed to be resistant to quantum attacks (no efficient quantum algorithm is known). So FHE is generally considered post-quantum secure.

Table 1: Comparison of zk-SNARK, zk-STARK, and FHE approaches for machine learning inference (performance and privacy trade-offs).

Use Cases and Implications for Web3 Applications

The convergence of AI and blockchain via zkML unlocks powerful new application patterns in Web3:

  • Decentralized Autonomous Agents & On-Chain Decision-Making: Smart contracts or DAOs can incorporate AI-driven decisions with guarantees of correctness. For example, imagine a DAO that uses a neural network to analyze market conditions before executing trades. With zkML, the DAO’s smart contract can require a zkSNARK proof that the authorized ML model (with a known hash commitment) was run on the latest data and produced the recommended action, before the action is accepted. This prevents malicious actors from injecting a fake prediction – the chain verifies the AI’s computation. Over time, one could even have fully on-chain autonomous agents (contracts that query off-chain AI or contain simplified models) making decisions in DeFi or games, with all their moves proven correct and policy-compliant via zk proofs. This raises the trust in autonomous agents, since their “thinking” is transparent and verifiable rather than a black-box.

  • Verifiable Compute Markets: Projects like Lagrange are effectively creating verifiable computation marketplaces – developers can outsource heavy ML inference to a network of provers and get back a proof with the result. This is analogous to decentralized cloud computing, but with built-in trust: you don’t need to trust the server, only the proof. It’s a paradigm shift for oracles and off-chain computation. Protocols like Ethereum’s upcoming DSC (decentralized sequencing layer) or oracle networks could use this to provide data feeds or analytic feeds with cryptographic guarantees. For instance, an oracle could supply “the result of model X on input Y” and anyone can verify the attached proof on-chain, rather than trusting the oracle’s word. This could enable verifiable AI-as-a-service on blockchain: any contract can request a computation (like “score these credit risks with my private model”) and accept the answer only with a valid proof. Projects such as Gensyn are exploring decentralized training and inference marketplaces using these verification techniques.

  • NFTs and Gaming – Provenance and Evolution: In blockchain games or NFT collectibles, zkML can prove traits or game moves were generated by legitimate AI models. For example, a game might allow an AI to evolve an NFT pet’s attributes. Without ZK, a clever user might modify the AI or the outcome to get a superior pet. With zkML, the game can require a proof that “pet’s new stats were computed by the official evolution model on the pet’s old stats”, preventing cheating. Similarly for generative art NFTs: an artist could release a generative model as a commitment; later, when minting NFTs, prove each image was produced by that model given some seed, guaranteeing authenticity (and even doing so without revealing the exact model to the public, preserving the artist’s IP). This provenance verification ensures authenticity in a manner akin to verifiable randomness – except here it’s verifiable creativity.

  • Privacy-Preserving AI in Sensitive Domains: zkML allows confirmation of outcomes without exposing inputs. In healthcare, a patient’s data could be run through an AI diagnostic model by a cloud provider; the hospital receives a diagnosis and a proof that the model (which could be privately held by a pharmaceutical company) was run correctly on the patient data. The patient data remains private (only an encrypted or committed form was used in the proof), and the model weights remain proprietary – yet the result is trusted. Regulators or insurance could also verify that only approved models were used. In finance, a company could prove to an auditor or regulator that its risk model was applied to its internal data and produced certain metrics without revealing the underlying sensitive financial data. This enables compliance and oversight with cryptographic assurances rather than manual trust.

  • Cross-Chain and Off-Chain Interoperability: Because zero-knowledge proofs are fundamentally portable, zkML can facilitate cross-chain AI results. One chain might have an AI-intensive application running off-chain; it can post a proof of the result to a different blockchain, which will trustlessly accept it. For instance, consider a multi-chain DAO using an AI to aggregate sentiment across social media (off-chain data). The AI analysis (complex NLP on large data) is done off-chain by a service that then posts a proof to a small blockchain (or multiple chains) that “analysis was done correctly and output sentiment score = 0.85”. All chains can verify and use that result in their governance logic, without each needing to rerun the analysis. This kind of interoperable verifiable compute is what Lagrange’s network aims to support, by serving multiple rollups or L1s simultaneously. It removes the need for trusted bridges or oracle assumptions when moving results between chains.

  • AI Alignment and Governance: On a more forward-looking note, zkML has been highlighted as a tool for AI governance and safety. Lagrange’s vision statements, for example, argue that as AI systems become more powerful (even superintelligent), cryptographic verification will be essential to ensure they follow agreed rules. By requiring AI models to produce proofs of their reasoning or constraints, humans retain a degree of control – “you cannot trust what you cannot verify”. While this is speculative and involves social as much as technical aspects, the technology could enforce that an AI agent running autonomously still proves it is using an approved model and hasn’t been tampered with. Decentralized AI networks might use on-chain proofs to verify contributions (e.g., a network of nodes collaboratively training a model can prove each update was computed faithfully). Thus zkML could play a role in ensuring AI systems remain accountable to human-defined protocols even in decentralized or uncontrolled environments.

In conclusion, zkML and verifiable on-chain AI represent a convergence of advanced cryptography and machine learning that stands to enhance trust, transparency, and privacy in AI applications. By comparing the major approaches – zk-SNARKs, zk-STARKs, and FHE – we see a spectrum of trade-offs between performance and privacy, each suitable for different scenarios. SNARK-based frameworks like Ezkl and innovations like Lagrange’s DeepProve have made it feasible to prove substantial neural network inferences with practical effort, opening the door to real-world deployments of verifiable AI. STARK-based and VM-based approaches promise greater flexibility and post-quantum security, which will become important as the field matures. FHE, while not a solution for verifiability, addresses the complementary need of confidential ML computation, and in combination with ZKPs or in specific private contexts it can empower users to leverage AI without sacrificing data privacy.

The implications for Web3 are significant: we can foresee smart contracts reacting to AI predictions, knowing they are correct; markets for compute where results are trustlessly sold; digital identities (like Worldcoin’s proof-of-personhood via iris AI) protected by zkML to confirm someone is human without leaking their biometric image; and generally a new class of “provable intelligence” that enriches blockchain applications. Many challenges remain – performance for very large models, developer ergonomics, and the need for specialized hardware – but the trajectory is clear. As one report noted, “today’s ZKPs can support small models, but moderate to large models break the paradigm”; however, rapid advances (50×–150× speedups with DeepProve over prior art) are pushing that boundary outward. With ongoing research (e.g., on hardware acceleration and distributed proving), we can expect progressively larger and more complex AI models to become provable. zkML might soon evolve from niche demos to an essential component of trusted AI infrastructure, ensuring that as AI becomes ubiquitous, it does so in a way that is auditable, decentralized, and aligned with user privacy and security.

ETHDenver 2025: Key Web3 Trends and Insights from the Festival

· 24 min read

ETHDenver 2025, branded the “Year of The Regenerates,” solidified its status as one of the world’s largest Web3 gatherings. Spanning BUIDLWeek (Feb 23–26), the Main Event (Feb 27–Mar 2), and a post-conference Mountain Retreat, the festival drew an expected 25,000+ participants. Builders, developers, investors, and creatives from 125+ countries converged in Denver to celebrate Ethereum’s ethos of decentralization and innovation. True to its community roots, ETHDenver remained free to attend, community-funded, and overflowing with content – from hackathons and workshops to panels, pitch events, and parties. The event’s lore of “Regenerates” defending decentralization set a tone that emphasized public goods and collaborative building, even amid a competitive tech landscape. The result was a week of high-energy builder activity and forward-looking discussions, offering a snapshot of Web3’s emerging trends and actionable insights for industry professionals.

ETHDenver 2025

No single narrative dominated ETHDenver 2025 – instead, a broad spectrum of Web3 trends took center stage. Unlike last year (when restaking via EigenLayer stole the show), 2025’s agenda was a sprinkle of everything: from decentralized physical infrastructure networks (DePIN) to AI agents, from regulatory compliance to real-world asset tokenization (RWA), plus privacy, interoperability, and more. In fact, ETHDenver’s founder John Paller addressed concerns about multi-chain content by noting “95%+ of our sponsors and 90% of content is ETH/EVM-aligned” – yet the presence of non-Ethereum ecosystems underscored interoperability as a key theme. Major speakers reflected these trend areas: for example, zk-rollup and Layer-2 scaling was highlighted by Alex Gluchowski (CEO of Matter Labs/zkSync), while multi-chain innovation came from Adeniyi Abiodun of Mysten Labs (Sui) and Albert Chon of Injective.

The convergence of AI and Web3 emerged as a strong undercurrent. Numerous talks and side events focused on decentralized AI agents and “DeFi+AI” crossovers. A dedicated AI Agent Day showcased on-chain AI demos, and a collective of 14 teams (including Coinbase’s developer kit and NEAR’s AI unit) even announced the Open Agents Alliance (OAA) – an initiative to provide permissionless, free AI access by pooling Web3 infrastructure. This indicates growing interest in autonomous agents and AI-driven dApps as a frontier for builders. Hand-in-hand with AI, DePIN (decentralized physical infrastructure) was another buzzword: multiple panels (e.g. Day of DePIN, DePIN Summit) explored projects bridging blockchain with physical networks (from telecom to mobility).

Cuckoo AI Network made waves at ETHDenver 2025, showcasing its innovative decentralized AI model-serving marketplace designed for creators and developers. With a compelling presence at both the hackathon and community-led side events, Cuckoo AI attracted significant attention from developers intrigued by its ability to monetize GPU/CPU resources and easily integrate on-chain AI APIs. During their dedicated workshop and networking session, Cuckoo AI highlighted how decentralized infrastructure could efficiently democratize access to advanced AI services. This aligns directly with the event's broader trends—particularly the intersection of blockchain with AI, DePIN, and public-goods funding. For investors and developers at ETHDenver, Cuckoo AI emerged as a clear example of how decentralized approaches can power the next generation of AI-driven dApps and infrastructure, positioning itself as an attractive investment opportunity within the Web3 ecosystem.

Privacy, identity, and security remained top-of-mind. Speakers and workshops addressed topics like zero-knowledge proofs (zkSync’s presence), identity management and verifiable credentials (a dedicated Privacy & Security track was in the hackathon), and legal/regulatory issues (an on-chain legal summit was part of the festival tracks). Another notable discussion was the future of fundraising and decentralization of funding: a Main Stage debate between Dragonfly Capital’s Haseeb Qureshi and Matt O’Connor of Legion (an “ICO-like” platform) about ICOs vs. VC funding captivated attendees. This debate highlighted emerging models like community token sales challenging traditional VC routes – an important trend for Web3 startups navigating capital raising. The take-away for professionals is clear: Web3 in 2025 is multidisciplinary – spanning finance, AI, real assets, and culture – and staying informed means looking beyond any one hype cycle to the full spectrum of innovation.

Sponsors and Their Strategic Focus Areas

ETHDenver’s sponsor roster in 2025 reads like a who’s-who of layer-1s, layer-2s, and Web3 infrastructure projects – each leveraging the event to advance strategic goals. Cross-chain and multi-chain protocols made a strong showing. For instance, Polkadot was a top sponsor with a hefty $80k bounty pool, incentivizing builders to create cross-chain DApps and appchains. Similarly, BNB Chain, Flow, Hedera, and Base (Coinbase’s L2) each offered up to $50k for projects integrating with their ecosystems, signaling their push to attract Ethereum developers. Even traditionally separate ecosystems like Solana and Internet Computer joined in with sponsored challenges (e.g. Solana co-hosted a DePIN event, and Internet Computer offered an “Only possible on ICP” bounty). This cross-ecosystem presence drew some community scrutiny, but ETHDenver’s team noted that the vast majority of content remained Ethereum-aligned. The net effect was interoperability being a core theme – sponsors aimed to position their platforms as complementary extensions of the Ethereum universe.

Scaling solutions and infrastructure providers were also front and center. Major Ethereum L2s like Optimism and Arbitrum had large booths and sponsored challenges (Optimism’s bounties up to $40k), reinforcing their focus on onboarding developers to rollups. New entrants like zkSync and Zircuit (a project showcasing an L2 rollup approach) emphasized zero-knowledge tech and even contributed SDKs (zkSync promoted its Smart Sign-On SDK for user-friendly login, which hackathon teams eagerly used). Restaking and modular blockchain infrastructure was another sponsor interest – EigenLayer (pioneering restaking) had its own $50k track and even co-hosted an event on “Restaking & DeFAI (Decentralized AI)”, marrying its security model with AI topics. Oracles and interoperability middleware were represented by the likes of Chainlink and Wormhole, each issuing bounties for using their protocols.

Notably, Web3 consumer applications and tooling had sponsor support to improve user experience. Uniswap’s presence – complete with one of the biggest booths – wasn’t just for show: the DeFi giant used the event to announce new wallet features like integrated fiat off-ramps, aligning with its sponsorship focus on DeFi usability. Identity and community-focused platforms like Galxe (Gravity) and Lens Protocol sponsored challenges around on-chain social and credentialing. Even mainstream tech companies signaled interest: PayPal and Google Cloud hosted a stablecoin/payments happy hour to discuss the future of payments in crypto. This blend of sponsors shows that strategic interests ranged from core infrastructure to end-user applications – all converging at ETHDenver to provide resources (APIs, SDKs, grants) to developers. For Web3 professionals, the heavy sponsorship from layer-1s, layer-2s, and even Web2 fintechs highlights where the industry is investing: interoperability, scalability, security, and making crypto useful for the next wave of users.

Hackathon Highlights: Innovative Projects and Winners

At the heart of ETHDenver is its legendary #BUIDLathon – a hackathon that has grown into the world’s largest blockchain hackfest with thousands of developers. In 2025 the hackathon offered a record $1,043,333+ prize pool to spur innovation. Bounties from 60+ sponsors targeted key Web3 domains, carving the competition into tracks such as: DeFi & AI, NFTs & Gaming, Infrastructure & Scalability, Privacy & Security, and DAOs & Public Goods. This track design itself is insightful – for example, pairing DeFi with AI hints at the emergence of AI-driven financial applications, while a dedicated Public Goods track reaffirms community focus on regenerative finance and open-source development. Each track was backed by sponsors offering prizes for best use of their tech (e.g. Polkadot and Uniswap for DeFi, Chainlink for interoperability, Optimism for scaling solutions). The organizers even implemented quadratic voting for judging, allowing the community to help surface top projects, with final winners chosen by expert judges.

The result was an outpouring of cutting-edge projects, many of which offer a glimpse into Web3’s future. Notable winners included an on-chain multiplayer game “0xCaliber”, a first-person shooter that runs real-time blockchain interactions inside a classic FPS game. 0xCaliber wowed judges by demonstrating true on-chain gaming – players buy in with crypto, “shoot” on-chain bullets, and use cross-chain tricks to collect and cash out loot, all in real time. This kind of project showcases the growing maturity of Web3 gaming (integrating Unity game engines with smart contracts) and the creativity in merging entertainment with crypto economics. Another standout category comprised hacks merging AI with Ethereum: teams built “agent” platforms that use smart contracts to coordinate AI services, inspired by the Open Agents Alliance announcement. For example, one hackathon project integrated AI-driven smart contract auditors (auto-generating security test cases for contracts) – aligning with the decentralized AI trend observed at the conference.

Infrastructure and tooling projects were also prominent. Some teams tackled account abstraction and user experience, using sponsor toolkits like zkSync’s Smart Sign-On to create wallet-less login flows for dApps. Others worked on cross-chain bridges and Layer-2 integrations, reflecting ongoing developer interest in interoperability. In the Public Goods & DAO track, a few projects addressed real-world social impact, such as a dApp for decentralized identity and aid to help the homeless (leveraging NFTs and community funds, an idea reminiscent of prior ReFi hacks). Regenerative finance (ReFi) concepts – like funding public goods via novel mechanisms – continued to appear, echoing ETHDenver’s regenerative theme.

While final winners were being celebrated by the end of the main event, the true value was in the pipeline of innovation: over 400 project submissions poured in, many of which will live on beyond the event. ETHDenver’s hackathon has a track record of seeding future startups (indeed, some past BUIDLathon projects have grown into sponsors themselves). For investors and technologists, the hackathon provided a window into bleeding-edge ideas – signaling that the next wave of Web3 startups may emerge in areas like on-chain gaming, AI-infused dApps, cross-chain infrastructure, and solutions targeting social impact. With nearly $1M in bounties disbursed to developers, sponsors effectively put their money where their mouth is to cultivate these innovations.

Networking Events and Investor Interactions

ETHDenver is not just about writing code – it’s equally about making connections. In 2025 the festival supercharged networking with both formal and informal events tailored for startups, investors, and community builders. One marquee event was the Bufficorn Ventures (BV) Startup Rodeo, a high-energy showcase where 20 hand-picked startups demoed to investors in a science-fair style expo. Taking place on March 1st in the main hall, the Startup Rodeo was described as more “speed dating” than pitch contest: founders manned tables to pitch their projects one-on-one as all attending investors roamed the arena. This format ensured even early-stage teams could find meaningful face time with VCs, strategics, or partners. Many startups used this as a launchpad to find customers and funding, leveraging the concentrated presence of Web3 funds at ETHDenver.

On the conference’s final day, the BV BuffiTank Pitchfest took the spotlight on the main stage – a more traditional pitch competition featuring 10 of the “most innovative” early-stage startups from the ETHDenver community. These teams (separate from the hackathon winners) pitched their business models to a panel of top VCs and industry leaders, competing for accolades and potential investment offers. The Pitchfest illustrated ETHDenver’s role as a deal-flow generator: it was explicitly aimed at teams “already organized…looking for investment, customers, and exposure,” especially those connected to the SporkDAO community. The reward for winners wasn’t a simple cash prize but rather the promise of joining Bufficorn Ventures’ portfolio or other accelerator cohorts. In essence, ETHDenver created its own mini “Shark Tank” for Web3, catalyzing investor attention on the community’s best projects.

Beyond these official showcases, the week was packed with investor-founder mixers. According to a curated guide by Belong, notable side events included a “Meet the VCs” Happy Hour hosted by CertiK Ventures on Feb 27, a StarkNet VC & Founders Lounge on March 1, and even casual affairs like a “Pitch & Putt” golf-themed pitch event. These gatherings provided relaxed environments for founders to rub shoulders with venture capitalists, often leading to follow-up meetings after the conference. The presence of many emerging VC firms was also felt on panels – for example, a session on the EtherKnight Stage highlighted new funds like Reflexive Capital, Reforge VC, Topology, Metalayer, and Hash3 and what trends they are most excited about. Early indications suggest these VCs were keen on areas like decentralized social media, AI, and novel Layer-1 infrastructure (each fund carving a niche to differentiate themselves in a competitive VC landscape).

For professionals looking to capitalize on ETHDenver’s networking: the key takeaway is the value of side events and targeted mixers. Deals and partnerships often germinate over coffee or cocktails rather than on stage. ETHDenver 2025’s myriad investor events demonstrate that the Web3 funding community is actively scouting for talent and ideas even in a lean market. Startups that came prepared with polished demos and a clear value proposition (often leveraging the event’s hackathon momentum) found receptive audiences. Meanwhile, investors used these interactions to gauge the pulse of the developer community – what problems are the brightest builders solving this year? In summary, ETHDenver reinforced that networking is as important as BUIDLing: it’s a place where a chance meeting can lead to a seed investment or where an insightful conversation can spark the next big collaboration.

A subtle but important narrative throughout ETHDenver 2025 was the evolving landscape of Web3 venture capital itself. Despite the broader crypto market’s ups and downs, investors at ETHDenver signaled strong appetite for promising Web3 projects. Blockworks reporters on the ground noted “just how much private capital is still flowing into crypto, undeterred by macro headwinds,” with seed stage valuations often sky-high for the hottest ideas. Indeed, the sheer number of VCs present – from crypto-native funds to traditional tech investors dabbling in Web3 – made it clear that ETHDenver remains a deal-making hub.

Emerging thematic focuses could be discerned from what VCs were discussing and sponsoring. The prevalence of AI x Crypto content (hackathon tracks, panels, etc.) wasn’t only a developer trend; it reflects venture interest in the “DeFi meets AI” nexus. Many investors are eyeing startups that leverage machine learning or autonomous agents on blockchain, as evidenced by venture-sponsored AI hackhouses and summits. Similarly, the heavy focus on DePIN and real-world asset (RWA) tokenization indicates that funds see opportunity in projects that connect blockchain to real economy assets and physical devices. The dedicated RWA Day (Feb 26) – a B2B event on the future of tokenized assets – suggests that venture scouts are actively hunting in that arena for the next Goldfinch or Centrifuge (i.e. platforms bringing real-world finance on-chain).

Another observable trend was a growing experimentation with funding models. The aforementioned debate on ICOs vs VCs wasn’t just conference theatrics; it mirrors a real venture movement towards more community-centric funding. Some VCs at ETHDenver indicated openness to hybrid models (e.g. venture-supported token launches that involve community in early rounds). Additionally, public goods funding and impact investing had a seat at the table. With ETHDenver’s ethos of regeneration, even investors discussed how to support open-source infrastructure and developers long-term, beyond just chasing the next DeFi or NFT boom. Panels like “Funding the Future: Evolving Models for Onchain Startups” explored alternatives such as grants, DAO treasury investments, and quadratic funding to supplement traditional VC money. This points to an industry maturing in how projects are capitalized – a mix of venture capital, ecosystem funds, and community funding working in tandem.

From an opportunity standpoint, Web3 professionals and investors can glean a few actionable insights from ETHDenver’s venture dynamics: (1) Infrastructure is still king – many VCs expressed that picks-and-shovels (L2 scaling, security, dev tools) remain high-value investments as the industry’s backbone. (2) New verticals like AI/blockchain convergence and DePIN are emerging investment frontiers – getting up to speed in these areas or finding startups there could be rewarding. (3) Community-driven projects and public goods might see novel funding – savvy investors are figuring out how to support these sustainably (for instance, investing in protocols that enable decentralized governance or shared ownership). Overall, ETHDenver 2025 showed that while the Web3 venture landscape is competitive, it’s brimming with conviction: capital is available for those building the future of DeFi, NFTs, gaming, and beyond, and even bear-market born ideas can find backing if they target the right trend.

Developer Resources, Toolkits, and Support Systems

ETHDenver has always been builder-focused, and 2025 was no exception – it doubled as an open-source developer conference with a plethora of resources and support for Web3 devs. During BUIDLWeek, attendees had access to live workshops, technical bootcamps, and mini-summits spanning various domains. For example, developers could join a Bleeding Edge Tech Summit to tinker with the latest protocols, or drop into an On-Chain Legal Summit to learn about compliant smart contract development. Major sponsors and blockchain teams ran hands-on sessions: Polkadot’s team hosted hacker houses and workshops on spinning up parachains; EigenLayer led a “restaking bootcamp” to teach devs how to leverage its security layer; Polygon and zkSync gave tutorials on building scalable dApps with zero-knowledge tech. These sessions provided invaluable face-time with core engineers, allowing developers to get help with integration and learn new toolkits first-hand.

Throughout the main event, the venue featured a dedicated #BUIDLHub and Makerspace where builders could code in a collaborative environment and access mentors. ETHDenver’s organizers published a detailed BUIDLer Guide and facilitated an on-site mentorship program (experts from sponsors were available to unblock teams on technical issues). Developer tooling companies were also present en masse – from Alchemy and Infura (for blockchain APIs) to Hardhat and Foundry (for smart contract development). Many unveiled new releases or beta tools at the event. For instance, MetaMask’s team previewed a major wallet update featuring gas abstraction and an improved SDK for dApp developers, aiming to simplify how apps cover gas fees for users. Several projects launched SDKs or open-source libraries: Coinbase’s “Agent Kit” for AI agents and the collaborative Open Agents Alliance toolkit were introduced, and Story.xyz promoted its Story SDK for on-chain intellectual property licensing during their own hackathon event.

Bounties and hacker support further augmented the developer experience. With over 180 bounties offered by 62 sponsors, hackers effectively had a menu of specific challenges to choose from, each coming with documentation, office hours, and sometimes bespoke sandboxes. For example, Optimism’s bounty challenged devs to use the latest Bedrock opcodes (with their engineers on standby to assist), and Uniswap’s challenge provided access to their new API for off-ramp integration. Tools for coordination and learning – like the official ETHDenver mobile app and Discord channels – kept developers informed of schedule changes, side quests, and even job opportunities via the ETHDenver job board.

One notable resource was the emphasis on quadratic funding experiments and on-chain voting. ETHDenver integrated a quadratic voting system for hackathon judging, exposing many developers to the concept. Additionally, the presence of Gitcoin and other public goods groups meant devs could learn about grant funding for their projects after the event. In sum, ETHDenver 2025 equipped developers with cutting-edge tools (SDKs, APIs), expert guidance, and follow-on support to continue their projects. For industry professionals, it’s a reminder that nurturing the developer community – through education, tooling, and funding – is critical. Many of the resources highlighted (like new SDKs, or improved dev environments) are now publicly available, offering teams everywhere a chance to build on the shoulders of what was shared at ETHDenver.

Side Events and Community Gatherings Enriching the ETHDenver Experience

What truly sets ETHDenver apart is its festival-like atmosphere – dozens of side events, both official and unofficial, created a rich tapestry of experiences around the main conference. In 2025, beyond the National Western Complex where official content ran, the entire city buzzed with meetups, parties, hackathons, and community gatherings. These side events, often hosted by sponsors or local Web3 groups, significantly contributed to the broader ETHDenver experience.

On the official front, ETHDenver’s own schedule included themed mini-events: the venue had zones like an NFT Art Gallery, a Blockchain Arcade, a DJ Chill Dome, and even a Zen Zone to decompress. The organizers also hosted evening events such as opening and closing parties – e.g., the “Crack’d House” unofficial opening party on Feb 26 by Story Protocol, which blended an artsy performance with hackathon award announcements. But it was the community-led side events that truly proliferated: according to an event guide, over 100 side happenings were tracked on the ETHDenver Luma calendar.

Some examples illustrate the diversity of these gatherings:

  • Technical Summits & Hacker Houses: ElizaOS and EigenLayer ran a 9-day Vault AI Agent Hacker House residency for AI+Web3 enthusiasts. StarkNet’s team hosted a multi-day hacker house culminating in a demo night for projects on their ZK-rollup. These provided focused environments for developers to collaborate on specific tech stacks outside the main hackathon.
  • Networking Mixers & Parties: Every evening offered a slate of choices. Builder Nights Denver on Feb 27, sponsored by MetaMask, Linea, EigenLayer, Wormhole and others, brought together innovators for casual talks over food and drink. 3VO’s Mischief Minded Club Takeover, backed by Belong, was a high-level networking party for community tokenization leaders. For those into pure fun, the BEMO Rave (with Berachain and others) and rAIve the Night (an AI-themed rave) kept the crypto crowd dancing late into the night – blending music, art, and crypto culture.
  • Special Interest Gatherings: Niche communities found their space too. Meme Combat was an event purely for meme enthusiasts to celebrate the role of memes in crypto. House of Ink catered to NFT artists and collectors, turning an immersive art venue (Meow Wolf Denver) into a showcase for digital art. SheFi Summit on Feb 26 brought together women in Web3 for talks and networking, supported by groups like World of Women and Celo – highlighting a commitment to diversity and inclusion.
  • Investor & Content Creator Meetups: We already touched on VC events; additionally, a KOL (Key Opinion Leaders) Gathering on Feb 28 let crypto influencers and content creators discuss engagement strategies, showing the intersection of social media and crypto communities.

Crucially, these side events weren’t just entertainment – they often served as incubators for ideas and relationships in their own right. For instance, the Tokenized Capital Summit 2025 delved into the future of capital markets on-chain, likely sparking collaborations between fintech entrepreneurs and blockchain developers in attendance. The On-Chain Gaming Hacker House provided a space for game developers to share best practices, which may lead to cross-pollination among blockchain gaming projects.

For professionals attending large conferences, ETHDenver’s model underscores that value is found off the main stage as much as on it. The breadth of unofficial programming allowed attendees to tailor their experience – whether one’s goal was to meet investors, learn a new skill, find a co-founder, or just unwind and build camaraderie, there was an event for that. Many veterans advise newcomers: “Don’t just attend the talks – go to the meetups and say hi.” In a space as community-driven as Web3, these human connections often translate into DAO collaborations, investment deals, or at the very least, lasting friendships that span continents. ETHDenver 2025’s vibrant side scene amplified the core conference, turning one week in Denver into a multi-dimensional festival of innovation.

Key Takeaways and Actionable Insights

ETHDenver 2025 demonstrated a Web3 industry in full bloom of innovation and collaboration. For professionals in the space, several clear takeaways and action items emerge from this deep dive:

  • Diversification of Trends: The event made it evident that Web3 is no longer monolithic. Emerging domains like AI integration, DePIN, and RWA tokenization are as prominent as DeFi and NFTs. Actionable insight: Stay informed and adaptable. Leaders should allocate R&D or investment into these rising verticals (e.g. exploring how AI could enhance their dApp, or how real-world assets might be integrated into DeFi platforms) to ride the next wave of growth.
  • Cross-Chain is the Future: With major non-Ethereum protocols actively participating, the walls between ecosystems are lowering. Interoperability and multi-chain user experiences garnered huge attention, from MetaMask adding Bitcoin/Solana support to Polkadot and Cosmos-based chains courting Ethereum developers. Actionable insight: Design for a multi-chain world. Projects should consider integrations or bridges that tap into liquidity and users on other chains, and professionals may seek partnerships across communities rather than staying siloed.
  • Community & Public Goods Matter: The “Year of the Regenerates” theme wasn’t just rhetoric – it permeated the content via public goods funding discussions, quadratic voting for hacks, and events like SheFi Summit. Ethical, sustainable development and community ownership are key values in the Ethereum ethos. Actionable insight: Incorporate regenerative principles. Whether through supporting open-source initiatives, using fair launch mechanisms, or aligning business models with community growth, Web3 companies can gain goodwill and longevity by not being purely extractive.
  • Investor Sentiment – Cautious but Bold: Despite bear market murmurs, ETHDenver showed that VCs are actively scouting and willing to bet big on Web3’s next chapters. However, they are also rethinking how to invest (e.g. more strategic, perhaps more oversight on product-market fit, and openness to community funding). Actionable insight: If you’re a startup, focus on fundamentals and storytelling. The projects that stood out had clear use cases and often working prototypes (some built in a weekend!). If you’re an investor, the conference affirmed that infrastructure (L2s, security, dev tools) remains high-priority, but differentiating via theses in AI, gaming, or social can position a fund at the forefront.
  • Developer Experience is Improving: ETHDenver highlighted many new toolkits, SDKs, and frameworks lowering the barrier for Web3 development – from account abstraction tools to on-chain AI libraries. Actionable insight: Leverage these resources. Teams should experiment with the latest dev tools unveiled (e.g. try out that zkSync Smart SSO for easier logins, or use the Open Agents Alliance resources for an AI project) to accelerate their development and stay ahead of the competition. Moreover, companies should continue engaging with hackathons and open developer forums as a way to source talent and ideas; ETHDenver’s success in turning hackers into founders is proof of that model.
  • The Power of Side Events: Lastly, the explosion of side events taught an important lesson in networking – opportunities often appear in casual settings. A chance encounter at a happy hour or a shared interest at a small meetup can create career-defining connections. Actionable insight: For those attending industry conferences, plan beyond the official agenda. Identify side events aligned with your goals (whether it’s meeting investors, learning a niche skill, or recruiting talent) and be proactive in engaging. As seen in Denver, those who immersed themselves fully in the week’s ecosystem walked away with not just knowledge, but new partners, hires, and friends.

In conclusion, ETHDenver 2025 was a microcosm of the Web3 industry’s momentum – a blend of cutting-edge tech discourse, passionate community energy, strategic investment moves, and a culture that mixes serious innovation with fun. Professionals should view the trends and insights from the event as a roadmap for where Web3 is headed. The actionable next step is to take these learnings – whether it’s a newfound focus on AI, a connection made with an L2 team, or inspiration from a hackathon project – and translate them into strategy. In the spirit of ETHDenver’s favorite motto, it’s time to #BUIDL on these insights and help shape the decentralized future that so many in Denver came together to envision.

From AI Tutors to Blockchain Credentials: How Platforms Like Growbi Are Pioneering the Next Evolution in EdTech

· 7 min read
Dora Noda
Software Engineer

When a platform like Growbi can deliver personalized AI-powered math tutoring that adapts to each student's learning pace for SSAT and ISEE prep, we're witnessing more than just technological advancement. We're seeing the first chapter of education's complete transformation.

But here's the paradox: while AI is revolutionizing how we learn, the infrastructure beneath it remains stubbornly centralized. Students lack true ownership of their achievements, data, or credentials.

The convergence of AI and Web3 technologies is about to change everything.

The AI Education Revolution Is Already Here

Platforms like Growbi represent a seismic shift in educational technology. By leveraging artificial intelligence for standardized test preparation, these platforms can:

  • Personalize learning paths in real-time based on student performance
  • Identify knowledge gaps faster than human tutors
  • Scale quality education to millions at minimal marginal cost
  • Provide 24/7 accessibility without geographic constraints

The numbers tell the story: AI-driven personalization increases student engagement by up to 60%, while AI-powered analytics improve course completion rates by 25–40% according to 2025 EdTech trend reports.

Traditional one-size-fits-all education simply cannot compete with adaptive algorithms that adjust difficulty, pacing, and content delivery based on individual learner profiles.

The Centralization Problem Nobody Talks About

Yet for all their innovation, platforms like Growbi—and virtually every EdTech solution today—operate within a fundamentally broken paradigm:

Data Ownership: Your learning data, progress metrics, and behavioral patterns belong to the platform, not to you. When you leave, your data trail stays behind.

Credential Portability: Certificates and achievements are siloed within platforms. A completion certificate from Growbi can't communicate with your school's system or future employers' verification processes without manual intervention.

Platform Lock-In: Switching platforms means starting from zero. There's no interoperable record of your learning journey that travels with you across educational experiences.

Single Points of Failure: When platforms shut down or change ownership, student records can disappear. We've seen this repeatedly with EdTech startups that went bankrupt.

This is where blockchain and Web3 technologies enter the conversation—not as buzzwords, but as fundamental infrastructure upgrades.

Web3's Answer: Decentralized Education Infrastructure

The blockchain education market is exploding for good reason: it is projected to grow from $2.4 billion in 2025 to $11.4 billion by 2032 at a 24.9% CAGR. Web3 is introducing concepts that legacy EdTech platforms cannot replicate:

1. Blockchain-Verified Credentials

Instead of PDFs that can be forged, students receive NFT certificates that are:

  • Immutable: Once issued, they cannot be altered or deleted
  • Instantly verifiable: Anyone can confirm authenticity without contacting the issuing institution
  • Portable: Your credentials travel with you across platforms and borders
  • Composable: Multiple credentials can build into a comprehensive, verifiable academic portfolio

Real-world examples are already emerging. Platforms like Metaschool and BitDegree issue NFT certificates for course completion. In recent pilot programs, student engagement increased 30% when NFT certificates were introduced—learners value credentials they truly own.
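
To make this concrete, here is a minimal sketch in TypeScript (using ethers.js v6) of how anyone could verify such a credential on-chain, assuming it is issued as a standard ERC-721 token. The RPC endpoint, contract address, and token ID below are illustrative placeholders, not references to any real platform.

```typescript
// Minimal sketch (ethers.js v6): checking an ERC-721 credential on-chain.
// The RPC endpoint, contract address, and token ID are illustrative placeholders.
import { ethers } from "ethers";

const ERC721_ABI = [
  "function ownerOf(uint256 tokenId) view returns (address)",
  "function tokenURI(uint256 tokenId) view returns (string)",
];

async function verifyCredential(
  rpcUrl: string,
  credentialContract: string,
  tokenId: bigint,
  claimedHolder: string
): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const credential = new ethers.Contract(credentialContract, ERC721_ABI, provider);

  // Read-only calls: anyone can run them without the issuer's involvement.
  const owner: string = await credential.ownerOf(tokenId);
  const metadataUri: string = await credential.tokenURI(tokenId);
  console.log(`Credential metadata: ${metadataUri}`);

  // The credential checks out if the on-chain owner matches the claimed holder.
  return owner.toLowerCase() === claimedHolder.toLowerCase();
}

// Example usage with placeholder values:
// verifyCredential("https://rpc.example.org", "0x...credentialContract", 42n, "0x...studentWallet");
```

Because the check relies only on public read calls, it works for any standard ERC-721 credential, whichever platform issued it.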

2. Tokenized Learning Incentives

The "learn-to-earn" model is reshaping motivation structures:

  • Students earn tokens for completing lessons, passing tests, or helping peers
  • Tokens can be exchanged for advanced courses, tutoring, or even cryptocurrency
  • Gamification elements tied to blockchain rewards boost completion rates significantly

Early movers like Upskillist pioneered tokenized rewards in 2025. Their data shows that financial incentives—even small ones—dramatically improve educational outcomes when properly structured.

3. Decentralized Identity and Data Ownership

Web3 enables students to:

  • Control their educational data through self-sovereign identity systems
  • Grant temporary access to schools or employers without surrendering ownership
  • Build portable learning profiles that aggregate achievements from multiple platforms
  • Monetize their own data if they choose to share it with researchers

This represents a complete inversion of the current model where platforms monetize student data without consent or compensation.

4. Decentralized Autonomous Organizations (DAOs) for Education Governance

Imagine if students and educators—not just venture-backed executives—could vote on platform features, pricing, and curriculum changes.

Educational DAOs are emerging to govern learning platforms democratically, ensuring decisions serve learners rather than shareholders alone.

The Convergence: AI + Web3 = The Future of Education

The real magic happens when AI-powered personalization meets blockchain-based infrastructure:

Cross-Platform Learning Profiles: An AI tutor (like Growbi's math engine) accesses your blockchain-verified learning history from previous platforms to instantly personalize your experience. No more redundant assessments or starting from scratch.

Skill-Based Smart Contracts: Complete a verified blockchain course in Solidity development, and smart contracts automatically unlock job opportunities or grant you access to advanced cohorts—no resume needed.

Decentralized Micro-Credentials: Instead of waiting four years for a degree, students accumulate blockchain-verified micro-credentials from multiple sources (Khan Academy, Coursera, Growbi, university courses) that AI systems aggregate into skill portfolios employers can trust.

AI-Curated Academic Portfolios: AI analyzes your learning patterns across decentralized education platforms to recommend personalized career paths, while blockchain ensures every achievement is authentic and tamper-proof.

Real-World Adoption Is Accelerating

This isn't speculative futurism—it's happening now:

  • ConsenSys Academy offers Ethereum development courses with blockchain-verified certificates
  • Cyfrin Updraft has empowered over 100,000 students with free Web3 education and career advancement
  • Universities are beginning to issue diplomas as NFTs, with institutions exploring blockchain for secure data management
  • The blockchain edutech market could reach $17.84 billion by 2034 in higher education alone

By 2025, students are already carrying "complete academic portfolios across institutions seamlessly, eliminating the bureaucratic friction that currently hampers credential verification and transfer processes," according to recent analyses of Web3 education platforms.

What This Means for Platforms Like Growbi

AI-powered education platforms stand at a crossroads. They can either:

  1. Remain centralized and risk losing students to Web3-native competitors that offer true ownership
  2. Embrace hybrid models that integrate blockchain credentials while maintaining AI-powered personalization
  3. Lead the transition by becoming the first major AI tutoring platforms to issue portable, blockchain-verified credentials

First-movers will capture the most value as students increasingly understand the difference between renting educational progress (current model) and owning it (Web3 model).

The Infrastructure Challenge

Here's the catch: building on blockchain requires robust, reliable infrastructure. Developers creating the next generation of educational platforms need:

  • Multi-chain support: Educational credentials should work across Ethereum, Solana, Polygon, and other chains
  • High throughput: Issuing thousands of certificates daily requires scalable blockchain infrastructure
  • Low latency: Real-time credential verification can't wait for slow block confirmations
  • Developer-friendly APIs: EdTech teams aren't blockchain experts—they need accessible tools

This is precisely the infrastructure challenge BlockEden.xyz solves. With enterprise-grade APIs supporting Ethereum, Aptos, Sui, and dozens of other chains, developers can build the next Growbi—but decentralized—without becoming blockchain infrastructure experts. Explore our education-focused infrastructure solutions to build the future of learning on foundations designed to last.
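
As a rough illustration of what multi-chain support and developer-friendly APIs mean in practice, the sketch below queries a learner's credential count on several EVM chains in parallel. It assumes ethers.js and standard ERC-721 credential contracts; the endpoint URLs and contract addresses are placeholders, and non-EVM chains would require their own SDKs.

```typescript
// Minimal sketch: counting a learner's ERC-721 credentials across several EVM chains.
// The RPC URLs and contract addresses are placeholders; substitute whatever endpoints
// your infrastructure provider exposes. Non-EVM chains (Solana, Aptos, Sui) need their
// own SDKs and are omitted here.
import { ethers } from "ethers";

const ERC721_ABI = ["function balanceOf(address owner) view returns (uint256)"];

interface ChainConfig {
  name: string;
  rpcUrl: string;             // placeholder endpoint
  credentialContract: string; // placeholder credential contract on that chain
}

async function credentialCounts(student: string, chains: ChainConfig[]) {
  return Promise.all(
    chains.map(async (chain) => {
      const provider = new ethers.JsonRpcProvider(chain.rpcUrl);
      const credential = new ethers.Contract(chain.credentialContract, ERC721_ABI, provider);
      const count: bigint = await credential.balanceOf(student);
      return { chain: chain.name, credentials: count };
    })
  );
}

// Example usage with placeholder values:
// credentialCounts("0x...studentWallet", [
//   { name: "ethereum", rpcUrl: "https://eth.example.org", credentialContract: "0x..." },
//   { name: "polygon", rpcUrl: "https://polygon.example.org", credentialContract: "0x..." },
// ]);
```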

The Bottom Line

Platforms like Growbi prove that AI can transform educational outcomes through personalization and accessibility. But without blockchain infrastructure, these gains remain trapped in centralized silos.

The next decade will see the convergence of AI's personalization power with Web3's ownership guarantees. Students will no longer choose between quality education and control over their credentials—they'll have both.

The question isn't whether this transition will happen. It's which platforms will lead it, and which will be left behind.

The AI revolution in education is already underway. The blockchain revolution is just beginning. Together, they're redefining what it means to learn, achieve, and prove your knowledge in the 21st century.


Sources:

Altera.al Is Hiring: Join the Pioneers of Digital Human Development ($600K-1M Compensation)

· 2 min read

We're excited to share a transformative opportunity at Altera.al, a breakthrough AI startup that recently made waves with their groundbreaking work in developing digital humans. Recently featured in MIT Technology Review, Altera.al has demonstrated remarkable progress in creating AI agents that can develop humanlike behaviors, form communities, and interact meaningfully in digital spaces.

Altera.al: Join the Frontier of Digital Human Development with Compensation of $600K-1M

About Altera.al

Founded by Robert Yang, who left his position as an assistant professor in computational neuroscience at MIT to pursue this vision, Altera.al has already secured over $11 million in funding from prestigious investors including A16Z and Eric Schmidt's emerging tech VC firm. Their recent Project Sid demonstration showed AI agents spontaneously developing specialized roles, forming social connections, and even creating cultural systems within Minecraft - a significant step toward their goal of creating truly autonomous AI agents that can collaborate at scale.

Why Now Is an Exciting Time to Join

Altera.al has achieved a significant technical breakthrough in their mission to develop machines with fundamental human qualities. Their work goes beyond traditional AI development - they're creating digital beings that can:

  • Form communities and social hierarchies
  • Develop specialized roles and responsibilities
  • Create and spread cultural patterns
  • Interact meaningfully with humans in digital spaces

Who They're Looking For

Following their recent breakthrough, Altera.al is scaling their team and offering exceptional compensation packages ranging from $600,000 to $1,000,000 for:

  • Experts in AI agent research
  • Strong Individual Contributors in:
    • Distributed systems
    • Security
    • Operating systems

How to Apply

Ready to be part of this groundbreaking journey? Apply directly through their careers page: https://jobs.ashbyhq.com/altera.al

Join the Future of Digital Human Development

This is a unique opportunity to work at the intersection of artificial intelligence and human behavior modeling, with a team that's already demonstrating remarkable results. If you're passionate about pushing the boundaries of what's possible in AI and human-machine interaction, Altera.al could be your next adventure.


For more updates on groundbreaking opportunities in tech and blockchain, follow us on Twitter or join our Discord community.

This post is part of our ongoing commitment to supporting innovation and connecting talent with transformative opportunities in the tech industry.

A16Z’s Crypto 2025 Outlook: Twelve Ideas That Might Reshape the Next Internet

· 8 min read

Every year, a16z publishes sweeping predictions on the technologies that will define our future. This time, their crypto team has painted a vivid picture of a 2025 where blockchains, AI, and advanced governance experiments collide.

I’ve summarized and commented on their key insights below, focusing on what I see as the big levers for change — and possible stumbling blocks. If you’re a tech builder, investor, or simply curious about the next wave of the internet, this piece is for you.

1. AI Meets Crypto Wallets

Key Insight: AI models are moving from “NPCs” in the background to “main characters,” acting independently in online (and potentially physical) economies. That means they’ll need crypto wallets of their own.

  • What It Means: Instead of an AI just spitting out answers, it might hold, spend, or invest digital assets — transacting on behalf of its human owner or purely on its own.
  • Potential Payoff: Higher-efficiency “agentic AIs” could help businesses with supply chain coordination, data management, or automated trading.
  • Watch Out For: How do we ensure an AI is truly autonomous, not just secretly manipulated by humans? Trusted execution environments (TEEs) can provide technical guarantees, but establishing trust in a “robot with a wallet” won’t happen overnight.

2. Rise of the DAC (Decentralized Autonomous Chatbot)

Key Insight: A chatbot running autonomously in a TEE can manage its own keys, post content on social media, gather followers, and even generate revenue — all without direct human control.

  • What It Means: Think of an AI influencer that can’t be silenced by any one person because it literally controls itself.
  • Potential Payoff: A glimpse of a world where content creators aren’t individuals but self-governing algorithms with million-dollar (or billion-dollar) valuations.
  • Watch Out For: If an AI breaks laws, who’s liable? Regulatory guardrails will be tricky when the “entity” is a set of code housed on distributed servers.

3. Proof of Personhood Becomes Essential

Key Insight: With AI lowering the cost of generating hyper-realistic fakes, we need better ways to verify that we’re interacting with real humans online. Enter privacy-preserving unique IDs.

  • What It Means: Every user might eventually have a certified “human stamp” — hopefully without sacrificing personal data.
  • Potential Payoff: This could drastically reduce spam, scams, and bot armies. It also lays the groundwork for more trustworthy social networks and community platforms.
  • Watch Out For: Adoption is the main barrier. Even the best proof-of-personhood solutions need broad acceptance before malicious actors outpace them.

4. From Prediction Markets to Broader Information Aggregation

Key Insight: 2024’s election-driven prediction markets grabbed headlines, but a16z sees a bigger trend: using blockchain to design new ways of revealing and aggregating truths — be it in governance, finance, or community decisions.

  • What It Means: Distributed incentive mechanisms can reward people for honest input or data. We might see specialized “truth markets” for everything from local sensor networks to global supply chains.
  • Potential Payoff: A more transparent, less gameable data layer for society.
  • Watch Out For: Sufficient liquidity and user participation remain challenging. For niche questions, “prediction pools” can be too small to yield meaningful signals.

5. Stablecoins Go Enterprise

Key Insight: Stablecoins are already the cheapest way to move digital dollars, but large companies haven’t embraced them — yet.

  • What It Means: SMBs and high-transaction merchants might wake up to the idea that they can save hefty credit-card fees by adopting stablecoins. Enterprises that process billions in annual revenue could do the same, potentially adding 2% to their bottom lines.
  • Potential Payoff: Faster, cheaper global payments, plus a new wave of stablecoin-based financial products.
  • Watch Out For: Companies will need new ways to manage fraud protection, identity verification, and refunds — previously handled by credit-card providers.

6. Government Bonds on the Blockchain

Key Insight: Governments exploring on-chain bonds could create interest-bearing digital assets that function without the privacy issues of a central bank digital currency.

  • What It Means: On-chain bonds could serve as high-quality collateral in DeFi, letting sovereign debt seamlessly integrate with decentralized lending protocols.
  • Potential Payoff: Greater transparency, potentially lower issuance costs, and a more democratized bond market.
  • Watch Out For: Skeptical regulators and potential inertia in big institutions. Legacy clearing systems won’t disappear easily.

7. DAOs Gain Legal Standing with Wyoming’s DUNA

Key Insight: Wyoming introduced a new category called the “decentralized unincorporated nonprofit association” (DUNA), meant to give DAOs legal standing in the U.S.

  • What It Means: DAOs can now hold property, sign contracts, and limit the liability of token holders. This opens the door for more mainstream usage and real commercial activity.
  • Potential Payoff: If other states follow Wyoming’s lead (as they did with LLCs), DAOs will become normal business entities.
  • Watch Out For: Public perception is still fuzzy on what DAOs do. They’ll need a track record of successful projects that translate to real-world benefits.

8. Liquid Democracy in the Physical World

Key Insight: Blockchain-based governance experiments might extend from online DAO communities to local-level elections. Voters could delegate their votes or vote directly — “liquid democracy.”

  • What It Means: More flexible representation. You can choose to vote on specific issues or hand that responsibility to someone you trust.
  • Potential Payoff: Potentially more engaged citizens and dynamic policymaking.
  • Watch Out For: Security concerns, technical literacy, and general skepticism around mixing blockchain with official elections.

9. Building on Existing Infrastructure (Instead of Reinventing It)

Key Insight: Startups often spend time reinventing base-layer technology (consensus protocols, programming languages) rather than focusing on product-market fit. In 2025, they’ll pick off-the-shelf components more often.

  • What It Means: Faster speed to market, more reliable systems, and greater composability.
  • Potential Payoff: Less time wasted building a new blockchain from scratch; more time spent on the user problem you’re solving.
  • Watch Out For: It’s tempting to over-specialize for performance gains. But specialized languages or consensus layers can create higher overhead for developers.

10. User Experience First, Infrastructure Second

Key Insight: Crypto needs to “hide the wires.” We don’t make consumers learn SMTP to send email — so why force them to learn “EIPs” or “rollups”?

  • What It Means: Product teams will choose the technical underpinnings that serve a great user experience, not vice versa.
  • Potential Payoff: A big leap in user onboarding, reducing friction and jargon.
  • Watch Out For: “Build it and they will come” only works if you truly nail the experience. Marketing lingo about “easy crypto UX” means nothing if people are still forced to wrangle private keys or memorize arcane acronyms.

11. Crypto’s Own App Stores Emerge

Key Insight: From Worldcoin’s World App marketplace to Solana’s dApp Store, crypto-friendly platforms provide distribution and discovery free from Apple or Google’s gatekeeping.

  • What It Means: If you’re building a decentralized application, you can reach users without fear of sudden deplatforming.
  • Potential Payoff: Tens (or hundreds) of thousands of new users discovering your dApp in days, instead of being lost in the sea of centralized app stores.
  • Watch Out For: These stores need enough user base and momentum to compete with Apple and Google. That’s a big hurdle. Hardware tie-ins (like specialized crypto phones) might help.

12. Tokenizing ‘Unconventional’ Assets

Key Insight: As blockchain infrastructure matures and fees drop, tokenizing everything from biometric data to real-world curiosities becomes more feasible.

  • What It Means: A “long tail” of unique assets can be fractionalized and traded globally. People could even monetize personal data in a controlled, consent-based way.
  • Potential Payoff: Massive new markets for otherwise “locked up” assets, plus interesting new data pools for AI to consume.
  • Watch Out For: Privacy pitfalls and ethical landmines. Just because you can tokenize something doesn’t mean you should.

A16Z’s 2025 outlook shows a crypto sector that’s reaching for broader adoption, more responsible governance, and deeper integration with AI. Where previous cycles dwelled on speculation or hype, this vision revolves around utility: stablecoins saving merchants 2% on every latte, AI chatbots operating their own businesses, local governments experimenting with liquid democracy.

Yet execution risk looms. Regulators worldwide remain skittish, and user experience is still too messy for the mainstream. 2025 might be the year that crypto and AI finally “grow up,” or it might be a halfway step — it all depends on whether teams can ship real products people love, not just protocols for the cognoscenti.

Can 0G’s Decentralized AI Operating System Truly Drive AI On-Chain at Scale?

· 12 min read

On November 13, 2024, 0G Labs announced a $40 million funding round led by Hack VC, Delphi Digital, OKX Ventures, Samsung Next, and Animoca Brands, thrusting the team behind this decentralized AI operating system into the spotlight. Their modular approach combines decentralized storage, data availability verification, and decentralized settlement to enable AI applications on-chain. But can they realistically achieve GB/s-level throughput to fuel the next era of AI adoption on Web3? This in-depth report evaluates 0G’s architecture, incentive mechanics, ecosystem traction, and potential pitfalls, aiming to help you gauge whether 0G can deliver on its promise.

Background

The AI sector has been on a meteoric rise, catalyzed by large language models like ChatGPT and ERNIE Bot. Yet AI is more than just chatbots and generative text; it also includes everything from AlphaGo’s Go victories to image generation tools like MidJourney. The holy grail that many developers pursue is a general-purpose AI, or AGI (Artificial General Intelligence)—colloquially described as an AI “Agent” capable of learning, perception, decision-making, and complex execution similar to human intelligence.

However, both AI and AI Agent applications are extremely data-intensive. They rely on massive datasets for training and inference. Traditionally, this data is stored and processed on centralized infrastructure. With the advent of blockchain, a new approach known as DeAI (Decentralized AI) has emerged. DeAI attempts to leverage decentralized networks for data storage, sharing, and verification to overcome the pitfalls of traditional, centralized AI solutions.

0G Labs stands out in this DeAI infrastructure landscape, aiming to build a decentralized AI operating system known simply as 0G.

What Is 0G Labs?

In traditional computing, an Operating System (OS) manages hardware and software resources—think Microsoft Windows, Linux, macOS, iOS, or Android. An OS abstracts away the complexity of the underlying hardware, making it easier for both end-users and developers to interact with the computer.

By analogy, the 0G OS aspires to fulfill a similar role in Web3:

  • Manage decentralized storage, compute, and data availability.
  • Simplify on-chain AI application deployment.

Why decentralization? Conventional AI systems store and process data in centralized silos, raising concerns around data transparency, user privacy, and fair compensation for data providers. 0G’s approach uses decentralized storage, cryptographic proofs, and open incentive models to mitigate these risks.

The name “0G” stands for “Zero Gravity.” The team envisions an environment where data exchange and computation feel “weightless”—everything from AI training to inference and data availability happens seamlessly on-chain.

The 0G Foundation, formally established in October 2024, drives this initiative. Its stated mission is to make AI a public good—one that is accessible, verifiable, and open to all.

Key Components of the 0G Operating System

Fundamentally, 0G is a modular architecture designed specifically to support AI applications on-chain. Its three primary pillars are:

  1. 0G Storage – A decentralized storage network.
  2. 0G DA (Data Availability) – A specialized data availability layer ensuring data integrity.
  3. 0G Compute Network – Decentralized compute resource management and settlement for AI inference (and eventually training).

These pillars work in concert under the umbrella of a Layer1 network called 0G Chain, which is responsible for consensus and settlement.

According to the 0G Whitepaper (“0G: Towards Data Availability 2.0”), both the 0G Storage and 0G DA layers build on top of 0G Chain. Developers can launch multiple custom PoS consensus networks, each functioning as part of the 0G DA and 0G Storage framework. This modular approach means that as system load grows, 0G can dynamically add new validator sets or specialized nodes to scale out.

0G Storage

0G Storage is a decentralized storage system geared for large-scale data. It uses distributed nodes with built-in incentives for storing user data. Crucially, it splits data into smaller, redundant “chunks” using Erasure Coding (EC), distributing these chunks across different storage nodes. If a node fails, data can still be reconstructed from redundant chunks.

Supported Data Types

0G Storage accommodates both structured and unstructured data.

  1. Structured Data is stored in a Key-Value (KV) layer, suitable for dynamic and frequently updated information (think databases, collaborative documents, etc.).
  2. Unstructured Data is stored in a Log layer which appends data entries chronologically. This layer is akin to a file system optimized for large-scale, append-only workloads.

By stacking a KV layer on top of the Log layer, 0G Storage can serve diverse AI application needs—from storing large model weights (unstructured) to dynamic user-based data or real-time metrics (structured).
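
The whitepaper does not spell out the implementation, but the idea of a KV view derived from an append-only log can be sketched in a few lines. The toy TypeScript below is illustrative only and is not 0G's actual data structure.

```typescript
// Toy sketch only (not 0G's actual implementation): a key-value view layered on
// top of an append-only log. Writes are appended chronologically; the KV state is
// derived by replaying the log, with the latest entry for a key winning.
interface LogEntry {
  key: string;
  value: string;
}

class AppendOnlyLog {
  private entries: LogEntry[] = [];

  append(entry: LogEntry): number {
    this.entries.push(entry);       // unstructured layer: chronological appends only
    return this.entries.length - 1; // the position acts like a log offset
  }

  replay(): readonly LogEntry[] {
    return this.entries;
  }
}

class KVView {
  constructor(private readonly log: AppendOnlyLog) {}

  put(key: string, value: string): void {
    this.log.append({ key, value }); // structured layer: a put is just an append
  }

  get(key: string): string | undefined {
    // Derive current state by scanning the log; the most recent write wins.
    let latest: string | undefined;
    for (const entry of this.log.replay()) {
      if (entry.key === key) latest = entry.value;
    }
    return latest;
  }
}

// const kv = new KVView(new AppendOnlyLog());
// kv.put("model-weights:v1", "cid-abc"); kv.put("model-weights:v1", "cid-def");
// kv.get("model-weights:v1"); // "cid-def"
```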

PoRA Consensus

PoRA (Proof of Random Access) ensures storage nodes actually hold the chunks they claim to store. Here’s how it works:

  • Storage miners are periodically challenged to produce cryptographic hashes of specific random data chunks they store.
  • They must respond by generating a valid hash (similar to PoW-like puzzle-solving) derived from their local copy of the data.

To level the playing field, the system limits mining competitions to 8 TB segments. A large miner can subdivide its hardware into multiple 8 TB partitions, while smaller miners compete within a single 8 TB boundary.
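
A simplified sketch of that challenge-response loop is shown below. The chunk count, seed handling, and difficulty target are invented parameters for illustration and do not reflect 0G's real PoRA parameters.

```typescript
// Simplified sketch of a PoRA-style challenge (not 0G's actual protocol). A random
// chunk index is derived from a challenge seed; the miner then hashes its local copy
// of that chunk together with a nonce until the digest meets a difficulty target,
// which it can only do if it really holds the data.
import { createHash } from "node:crypto";

const CHUNKS_PER_SEGMENT = 1024; // assumed segment layout, for illustration only

function challengedChunkIndex(seed: Buffer): number {
  const digest = createHash("sha256").update(seed).digest();
  return digest.readUInt32BE(0) % CHUNKS_PER_SEGMENT;
}

function provePoRA(
  seed: Buffer,
  localChunk: Buffer,
  difficultyPrefix = "0000" // placeholder difficulty target
): { nonce: number; proof: string } {
  // Search for a nonce such that H(seed || chunk || nonce) starts with the target prefix.
  for (let nonce = 0; ; nonce++) {
    const proof = createHash("sha256")
      .update(seed)
      .update(localChunk)
      .update(Buffer.from(String(nonce)))
      .digest("hex");
    if (proof.startsWith(difficultyPrefix)) {
      return { nonce, proof };
    }
  }
}

// A verifier recomputes the same hash from the published chunk commitment to check the proof.
```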

Incentive Design

Data in 0G Storage is divided into 8 GB “Pricing Segments.” Each segment has both a donation pool and a reward pool. Users who wish to store data pay a fee in 0G Token (ZG), which partially funds node rewards.

  • Base Reward: When a storage node submits valid PoRA proofs, it gets immediate block rewards for that segment.
  • Ongoing Reward: Over time, the donation pool releases a portion (currently ~4% per year) into the reward pool, incentivizing nodes to store data permanently. The fewer the nodes storing a particular segment, the larger the share each node can earn.

Users only pay once for permanent storage, but must set a donation fee above a system minimum. The higher the donation, the more likely miners are to replicate the user’s data.

Royalty Mechanism: 0G Storage also includes a “royalty” or “data sharing” mechanism. Early storage providers create “royalty records” for each data chunk. If new nodes want to store that same chunk, the original node can share it. When the new node later proves storage (via PoRA), the original data provider receives an ongoing royalty. The more widely replicated the data, the higher the aggregate reward for early providers.
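
The ongoing-reward arithmetic can be illustrated with a toy calculation. The release rate, pool size, and even split among nodes below are simplifying assumptions rather than 0G's exact parameters.

```typescript
// Illustrative arithmetic only: how a yearly release from a segment's donation pool
// might be shared among the nodes proving storage of that segment. The 4% rate, pool
// size, and even split are simplifying assumptions, not 0G's exact parameters.
const ANNUAL_RELEASE_RATE = 0.04;

function yearlyRewardPerNode(donationPool: number, nodesStoringSegment: number): number {
  const released = donationPool * ANNUAL_RELEASE_RATE; // amount moved into the reward pool
  return nodesStoringSegment > 0 ? released / nodesStoringSegment : 0;
}

// With a 1,000 ZG donation pool: four nodes each earn ~10 ZG/year, while a lone node
// earns ~40 ZG/year, matching the intuition that scarcer replication means a larger share.
console.log(yearlyRewardPerNode(1000, 4)); // 10
console.log(yearlyRewardPerNode(1000, 1)); // 40
```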

Comparisons with Filecoin and Arweave

Similarities:

  • All three incentivize decentralized data storage.
  • Both 0G Storage and Arweave aim for permanent storage.
  • Data chunking and redundancy are standard approaches.

Key Differences:

  • Native Integration: 0G Storage is not an independent blockchain; it’s integrated directly with 0G Chain and primarily supports AI-centric use cases.
  • Structured Data: 0G supports KV-based structured data alongside unstructured data, which is critical for many AI workloads requiring frequent read-write access.
  • Cost: 0G claims $10–11/TB for permanent storage, reportedly cheaper than Arweave.
  • Performance Focus: Specifically designed to meet AI throughput demands, whereas Filecoin or Arweave are more general-purpose decentralized storage networks.

0G DA (Data Availability Layer)

Data availability ensures that every network participant can fully verify and retrieve transaction data. If the data is incomplete or withheld, the blockchain’s trust assumptions break.

In the 0G system, data is chunked and stored off-chain. The system records Merkle roots for these data chunks, and DA nodes must sample these chunks to ensure they match the Merkle root and erasure-coding commitments. Only then is the data deemed “available” and appended into the chain’s consensus state.

DA Node Selection and Incentives

  • DA nodes must stake ZG to participate.
  • They’re grouped into quorums randomly via Verifiable Random Functions (VRFs).
  • Each node only validates a subset of data. If 2/3 of a quorum confirm the data as available and correct, they sign a proof that’s aggregated and submitted to the 0G consensus network (a minimal sketch of this threshold check appears after this list).
  • Reward distribution also happens through periodic sampling. Only the nodes storing randomly sampled chunks are eligible for that round’s rewards.
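
Here is the minimal threshold check referenced above; it is illustrative only, with signature verification and VRF-based quorum assignment omitted.

```typescript
// Illustrative threshold check only (not 0G's code): a data chunk counts as available
// once at least 2/3 of its assigned quorum has signed off. Signature verification and
// VRF-based quorum assignment are omitted.
interface QuorumVote {
  nodeId: string;
  signedAvailable: boolean;
}

function isDataAvailable(quorum: QuorumVote[]): boolean {
  if (quorum.length === 0) return false;
  const confirmations = quorum.filter((vote) => vote.signedAvailable).length;
  // Integer-safe check for confirmations / quorum.length >= 2/3.
  return 3 * confirmations >= 2 * quorum.length;
}

// Example: 7 confirmations out of a 10-node quorum clears the 2/3 threshold.
const votes: QuorumVote[] = Array.from({ length: 10 }, (_, i) => ({
  nodeId: `node-${i}`,
  signedAvailable: i < 7,
}));
console.log(isDataAvailable(votes)); // true
```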

Comparison with Celestia and EigenLayer

0G DA draws on ideas from Celestia (data availability sampling) and EigenLayer (restaking) but aims to provide higher throughput. Celestia’s throughput currently hovers around 10 MB/s with ~12-second block times. Meanwhile, EigenDA primarily serves Layer2 solutions and can be complex to implement. 0G envisions GB/s throughput, which better suits large-scale AI workloads that can exceed 50–100 GB/s of data ingestion.

0G Compute Network

0G Compute Network serves as the decentralized computing layer. It’s evolving in phases:

  • Phase 1: Focus on settlement for AI inference.
    • The network matches “AI model buyers” (users) with compute providers (sellers) in a decentralized marketplace. Providers register their services and prices in a smart contract. Users pre-fund the contract, consume the service, and the contract mediates payment.
    • Over time, the team hopes to expand to full-blown AI training on-chain, though that’s more complex.

Batch Processing: Providers can batch user requests to reduce on-chain overhead, improving efficiency and lowering costs.
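
A toy sketch of that batching pattern follows; the interfaces and settlement step are invented for illustration and do not reflect 0G's actual contract API.

```typescript
// Toy sketch of request batching for settlement; the types and the "flush" step are
// invented for illustration and do not reflect 0G's actual contract interface.
interface InferenceRequest {
  user: string;
  fee: bigint;      // fee in the smallest token unit, pre-funded by the user
  requestId: string;
}

class BatchingProvider {
  private pending: InferenceRequest[] = [];

  serve(request: InferenceRequest): void {
    // Run the inference off-chain, then queue the receipt for later settlement.
    this.pending.push(request);
  }

  flush(): { requestIds: string[]; totalFee: bigint } {
    // One on-chain settlement covers the whole batch, amortizing transaction overhead.
    const batch = this.pending;
    this.pending = [];
    return {
      requestIds: batch.map((r) => r.requestId),
      totalFee: batch.reduce((sum, r) => sum + r.fee, 0n),
    };
  }
}
```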

0G Chain

0G Chain is a Layer1 network serving as the foundation for 0G’s modular architecture. It underpins:

  • 0G Storage (via smart contracts)
  • 0G DA (data availability proofs)
  • 0G Compute (settlement mechanisms)

Per official docs, 0G Chain is EVM-compatible, enabling easy integration for dApps that require advanced data storage, availability, or compute.

0G Consensus Network

0G’s consensus mechanism is somewhat unique. Rather than a single monolithic consensus layer, multiple independent consensus networks can be launched under 0G to handle different workloads. These networks share the same staking base:

  • Shared Staking: Validators stake ZG on Ethereum. If a validator misbehaves, their staked ZG on Ethereum can be slashed.
  • Scalability: New consensus networks can be spun up to scale horizontally.

Reward Mechanism: When validators finalize blocks in the 0G environment, they receive tokens. However, the tokens they earn on 0G Chain are burned in the local environment, and an equivalent amount is minted to the validator’s Ethereum-based account, ensuring a single point of liquidity and security.

0G Token (ZG)

ZG is an ERC-20 token representing the backbone of 0G’s economy. It’s minted, burned, and circulated via smart contracts on Ethereum. In practical terms:

  • Users pay for storage, data availability, and compute resources in ZG.
  • Miners and validators earn ZG for proving storage or validating data.
  • Shared staking ties the security model back to Ethereum.

Summary of Key Modules

0G OS merges four components—Storage, DA, Compute, and Chain—into one interconnected, modular stack. The system’s design goal is scalability, with each layer horizontally extensible. The team touts the potential for “infinite” throughput, especially crucial for large-scale AI tasks.

0G Ecosystem

Although relatively new, the 0G ecosystem already includes key integration partners:

  1. Infrastructure & Tooling:

    • ZK solutions like Union, Brevis, Gevulot
    • Cross-chain solutions like Axelar
    • Restaking protocols like EigenLayer, Babylon, PingPong
    • Decentralized GPU providers IoNet, exaBits
    • Oracle solutions Hemera, Redstone
    • Indexing tools for Ethereum blob data
  2. Projects Using 0G for Data Storage & DA:

    • Polygon, Optimism (OP), Arbitrum, Manta for L2 / L3 integration
    • Nodekit, AltLayer for Web3 infrastructure
    • Blade Games, Shrapnel for on-chain gaming

Supply Side

ZK and Cross-chain frameworks connect 0G to external networks. Restaking solutions (e.g., EigenLayer, Babylon) strengthen security and possibly attract liquidity. GPU networks accelerate erasure coding. Oracle solutions feed off-chain data or reference AI model pricing.

Demand Side

AI Agents can tap 0G for both data storage and inference. L2s and L3s can integrate 0G’s DA to improve throughput. Gaming and other dApps requiring robust data solutions can store assets, logs, or scoring systems on 0G. Some have already partnered with the project, pointing to early ecosystem traction.

Roadmap & Risk Factors

0G aims to make AI a public utility, accessible and verifiable by anyone. The team aspires to GB/s-level DA throughput—crucial for real-time AI training that can demand 50–100 GB/s of data transfer.

Co-founder & CEO Michael Heinrich has stated that the explosive growth of AI makes timely iteration critical: because AI innovation moves so quickly, 0G’s own development progress must keep pace.

Potential Trade-Offs:

  • Current reliance on shared staking might be an intermediate solution. Eventually, 0G plans to introduce a horizontally scalable consensus layer that can be incrementally augmented (akin to spinning up new AWS nodes).
  • Market Competition: Many specialized solutions exist for decentralized storage, data availability, and compute. 0G’s all-in-one approach must stay compelling.
  • Adoption & Ecosystem Growth: Without robust developer traction, the promised “unlimited throughput” remains theoretical.
  • Sustainability of Incentives: Ongoing motivation for nodes depends on real user demand and an equilibrium token economy.

Conclusion

0G attempts to unify decentralized storage, data availability, and compute into a single “operating system” supporting on-chain AI. By targeting GB/s throughput, the team seeks to break the performance barrier that currently deters large-scale AI from migrating on-chain. If successful, 0G could significantly accelerate the Web3 AI wave by providing a scalable, integrated, and developer-friendly infrastructure.

Still, many open questions remain. The viability of “infinite throughput” hinges on whether 0G’s modular consensus and incentive structures can seamlessly scale. External factors—market demand, node uptime, developer adoption—will also determine 0G’s staying power. Nonetheless, 0G’s approach to addressing AI’s data bottlenecks is novel and ambitious, hinting at a promising new paradigm for on-chain AI.

Decentralized Physical Infrastructure Networks (DePIN): Economics, Incentives, and the AI Compute Era

· 47 min read
Dora Noda
Software Engineer

Introduction

Decentralized Physical Infrastructure Networks (DePIN) are blockchain-based projects that incentivize people to deploy real-world hardware in exchange for crypto tokens. By leveraging idle or underutilized resources – from wireless radios to hard drives and GPUs – DePIN projects create crowdsourced networks providing tangible services (connectivity, storage, computing, etc.). This model transforms normally idle infrastructure (like unused bandwidth, disk space, or GPU power) into active, income-generating networks by rewarding contributors with tokens. Major early examples include Helium (crowdsourced wireless networks) and Filecoin (distributed data storage), and newer entrants target GPU computing and 5G coverage sharing (e.g. Render Network, Akash, io.net).

DePIN’s promise lies in distributing the costs of building and operating physical networks via token incentives, thus scaling networks faster than traditional centralized models. In practice, however, these projects must carefully design economic models to ensure that token incentives translate into real service usage and sustainable value. Below, we analyze the economic models of key DePIN networks, evaluate how effectively token rewards have driven actual infrastructure use, and assess how these projects are coupling with the booming demand for AI-related compute.

Economic Models of Leading DePIN Projects

Helium (Decentralized Wireless IoT & 5G)

Helium pioneered a decentralized wireless network by incentivizing individuals to deploy radio hotspots. Initially focused on IoT (LoRaWAN) and later expanded to 5G small-cell coverage, Helium’s model centers on its native token HNT. Hotspot operators earn HNT by participating in Proof-of-Coverage (PoC) – essentially proving they are providing wireless coverage in a given location. In Helium’s two-token system, HNT has utility through Data Credits (DC): users must burn HNT to mint non-transferable DC, which are used to pay for actual network usage (device connectivity) at a fixed rate of $0.0001 per 24 bytes. This burn mechanism creates a burn-and-mint equilibrium where increased network usage (DC spending) leads to more HNT being burned, reducing supply over time.
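The fixed Data Credit price makes the burn side easy to estimate. The Python sketch below works through the arithmetic for a hypothetical sensor fleet; the per-message rounding and the HNT price used are simplifying assumptions, not Helium's exact accounting.

```python
# Worked example of the burn-and-mint flow: Data Credits (DC) are priced at $0.0001 per
# 24-byte increment, and HNT is burned at its market price to mint them. Rounding and
# exact DC accounting are simplified assumptions, not Helium's precise implementation.
import math

def hnt_burned_for_traffic(payload_bytes: int, messages: int, hnt_price_usd: float) -> float:
    dc_per_message = math.ceil(payload_bytes / 24)   # each 24-byte increment costs 1 DC
    usd_cost = dc_per_message * messages * 0.0001    # 1 DC = $0.0001, non-transferable
    return usd_cost / hnt_price_usd                  # HNT burned to mint that many DC

# A fleet of 10,000 sensors each sending one 48-byte reading per hour for a month,
# with HNT assumed to trade at $5:
burned = hnt_burned_for_traffic(payload_bytes=48, messages=10_000 * 24 * 30, hnt_price_usd=5.0)
print(f"{burned:.2f} HNT burned")  # 288.00 HNT
```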

Originally, Helium operated on its own blockchain with an inflationary issuance of HNT that halved every two years (yielding a gradually decreasing supply and an eventual max around ~223 million HNT in circulation). In 2023, Helium migrated to Solana and introduced a “network of networks” framework with sub-DAOs. Now, Helium’s IoT network and 5G mobile network each have their own tokens (IOT and MOBILE respectively) rewarded to hotspot operators, while HNT remains the central token for governance and value. HNT can be redeemed for subDAO tokens (and vice versa) via treasury pools, and HNT is also used for staking in Helium’s veHNT governance model. This structure aims to align incentives in each sub-network: for example, 5G hotspot operators earn MOBILE tokens, which can be converted to HNT, effectively tying rewards to the success of that specific service.

Economic value creation: Helium’s value is created by providing low-cost wireless access. By distributing token rewards, Helium offloaded the capex of network deployment onto individuals who purchased and ran hotspots. In theory, as businesses and IoT devices use the network (by spending DC that require burning HNT), that demand should support HNT’s value and fund ongoing rewards. Helium sustains its economy through a burn-and-spend cycle: network users buy HNT (or use HNT rewards) and burn it for DC to use the network, and the protocol mints HNT (according to a fixed schedule) to pay hotspot providers. In Helium’s design, a portion of HNT emissions was also allocated to founders and a community reserve, but the majority has always been for hotspot operators as an incentive to build coverage. As discussed later, Helium’s challenge has been getting enough paying demand to balance the generous supply-side incentives.

Filecoin (Decentralized Storage Network)

Filecoin is a decentralized storage marketplace where anyone can contribute disk space and earn tokens for storing data. Its economic model is built around the FIL token. Filecoin’s blockchain rewards storage providers (miners) with FIL block rewards for provisioning storage and correctly storing clients’ data – using cryptographic proofs (Proof-of-Replication and Proof-of-Spacetime) to verify data is stored reliably. Clients, in turn, pay FIL to miners to have their data stored or retrieved, negotiating prices in an open market. This creates an incentive loop: miners invest in hardware and stake FIL collateral (to guarantee service quality), earning FIL rewards for adding storage capacity and fulfilling storage deals, while clients spend FIL for storage services.

Filecoin’s token distribution is heavily weighted toward incentivizing storage supply. FIL has a maximum supply of 2 billion, with 70% reserved for mining rewards. (In fact, ~1.4 billion FIL are allocated to be released over time as block rewards to storage miners over many years.) The remaining 30% was allocated to stakeholders: 15% to Protocol Labs (the founding team), 10% to investors, and 5% to the Filecoin Foundation. Block reward emissions follow a somewhat front-loaded schedule (with a six-year half-life), meaning supply inflation was highest in the early years to quickly bootstrap a large storage network. To balance this, Filecoin requires miners to lock up FIL as collateral for each gigabyte of data they pledge to store – if they fail to prove the data is retained, they can be penalized (slashed) by losing some collateral. This mechanism aligns miner incentives with reliable service.

Economic value creation: Filecoin creates value by offering censorship-resistant, redundant data storage at potentially lower costs than centralized cloud providers. The FIL token’s value is tied to demand for storage and the utility of the network: clients must obtain FIL to pay for storing data, and miners need FIL (both for collateral and often to cover costs or as revenue). Initially, much of Filecoin’s activity was driven by miners racing to earn tokens – even storing zero-value or duplicated data just to increase their storage power and earn block rewards. To encourage useful storage, Filecoin introduced the Filecoin Plus program: clients with verified useful data (e.g. open datasets, archives) can register deals as “verified,” which gives miners 10× the effective power for those deals, translating into proportionally larger FIL rewards. This has incentivized miners to seek out real clients and has dramatically increased useful data stored on the network. By late 2023, Filecoin’s network had grown to about 1,800 PiB of active deals, up 3.8× year-over-year, with storage utilization rising to ~20% of total capacity (from only ~3% at the start of 2023). In other words, token incentives bootstrapped enormous capacity, and now a growing fraction of that capacity is being filled by paying customers – a sign of the model beginning to sustain itself with real demand. Filecoin is also expanding into adjacent services (see AI Compute Trends below), which could create new revenue streams (e.g. decentralized content delivery and compute-over-data services) to bolster the FIL economy beyond simple storage fees.
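The effect of the Filecoin Plus multiplier can be illustrated with a simple quality-adjusted power calculation. This sketch ignores sector duration and other terms in Filecoin's actual power formula; it is only meant to show why miners chase verified deals.

```python
# Quality-adjusted power sketch: verified ("Filecoin Plus") deals receive a 10x multiplier,
# so they weigh 10x more heavily in a miner's share of block rewards. Simplified; the real
# calculation also factors deal duration and sector quality terms.

def quality_adjusted_power(raw_tib: float, verified_tib: float) -> float:
    unverified_tib = raw_tib - verified_tib
    return unverified_tib * 1.0 + verified_tib * 10.0

# Two miners with the same raw capacity, but one filled 20% with verified client data:
a = quality_adjusted_power(raw_tib=100, verified_tib=0)    # 100.0
b = quality_adjusted_power(raw_tib=100, verified_tib=20)   # 80 + 200 = 280.0
print(b / a)  # 2.8 -> roughly 2.8x the expected rewards for the same hardware
```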

Render Network (Decentralized GPU Rendering & Compute)

Render Network is a decentralized marketplace for GPU-based computation, originally focused on rendering 3D graphics and now also supporting AI model training/inference jobs. Its native token RNDR (recently updated to the ticker RENDER on Solana) powers the economy. Creators (users who need GPU work done) pay in RNDR for rendering or compute tasks, and Node Operators (GPU providers) earn RNDR by completing those jobs. This basic model turns idle GPUs (from individual GPU owners or data centers) into a distributed cloud rendering farm. To ensure quality and fairness, Render uses escrow smart contracts: clients submit jobs and burn the equivalent RNDR payment, which is held until node operators submit proof of completing the work, then the RNDR is released as reward. Originally, RNDR functioned as a pure utility/payment token, but the network has recently overhauled its tokenomics to a Burn-and-Mint Equilibrium (BME) model to better balance supply and demand.

Under the BME model, all rendering or compute jobs are priced in stable terms (USD) and paid in RENDER tokens, which are **burned** upon job completion. In parallel, the protocol mints new RENDER tokens on a predefined declining emissions schedule to compensate node operators and other participants. In effect, user payments for work destroy tokens while the network inflates tokens at a controlled rate as mining rewards – the net supply can increase or decrease over time depending on usage. The community approved an initial emission of ~9.1 million RENDER in the first year of BME (mid-2023 to mid-2024) as network incentives, and set a long-term max supply of about 644 million RENDER (up from the initial 536.9 million RNDR that were minted at launch). Notably, RENDER’s token distribution heavily favored ecosystem growth: 65% of the initial supply was allocated to a treasury (for future network incentives), 25% to investors, and 10% to team/advisors. With BME, that treasury is being deployed via the controlled emissions to reward GPU providers and other contributors, while the burn mechanism ties those rewards directly to platform usage. RNDR also serves as a governance token (token holders can vote on Render Network proposals). Additionally, node operators on Render can stake RNDR to signal their reliability and potentially receive more work, adding another incentive layer.
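The burn-and-mint balance can be expressed as a one-line calculation: net supply change per epoch is emissions minus burns. The figures in the sketch below are illustrative assumptions rather than actual network parameters.

```python
# Burn-and-mint equilibrium sketch: jobs are priced in USD, the equivalent RENDER is burned
# on completion, and a fixed epoch emission pays node operators. Net supply change per epoch
# is emissions minus burns. Numbers below are illustrative, not actual network figures.

def net_supply_change(job_usd_volume: float, render_price_usd: float, epoch_emission: float) -> float:
    burned = job_usd_volume / render_price_usd   # RENDER burned by completed jobs
    return epoch_emission - burned               # > 0 inflationary, < 0 deflationary

# If an epoch emits 175,000 RENDER and the network processes $1.2M of jobs at $4/RENDER:
print(net_supply_change(1_200_000, 4.0, 175_000))  # -125000.0 -> burns exceed emissions
```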

Economic value creation: Render Network creates value by supplying on-demand GPU computing at a fraction of the cost of traditional cloud GPU instances. By late 2023, Render’s founder noted that studios had already used the network to render movie-quality graphics with significant cost and speed advantages – “one tenth the cost” and with massive aggregated capacity beyond any single cloud provider. This cost advantage is possible because Render taps into dormant GPUs globally (from hobbyist rigs to pro render farms) that would otherwise be idle. With rising demand for GPU time (for both graphics and AI), Render’s marketplace meets a critical need. Crucially, the BME token model means token value is directly linked to service usage: as more rendering and AI jobs flow through the network, more RENDER is burned (creating buy pressure or reducing supply), while node incentives scale up only as those jobs are completed. This helps avoid “paying for nothing” – if network usage stagnates, the token emissions eventually outpace burns (inflating supply), but if usage grows, the burns can offset or even exceed emissions, potentially making the token deflationary while still rewarding operators. The strong interest in Render’s model was reflected in the market: RNDR’s price rocketed in 2023, rising over 1,000% in value as investors anticipated surging demand for decentralized GPU services amid the AI boom. Backed by OTOY (a leader in cloud rendering software) and used in production by some major studios, Render Network is positioned as a key player at the intersection of Web3 and high-performance computing.

Akash Network (Decentralized Cloud Compute)

Akash is a decentralized cloud computing marketplace that enables users to rent general compute (VMs, containers, etc.) from providers with spare server capacity. Think of it as a decentralized alternative to AWS or Google Cloud, powered by a blockchain-based reverse auction system. The native token AKT is central to Akash’s economy: clients pay for compute leases in AKT, and providers earn AKT for supplying resources. Akash is built on the Cosmos SDK and uses a delegated Proof-of-Stake blockchain for security and coordination. AKT thus also functions as a staking and governance token – validators stake AKT (and users delegate AKT to validators) to secure the network and earn staking rewards.

Akash’s marketplace operates via a bidding system: a client defines a deployment (CPU, RAM, storage, possibly GPU requirements) and a max price, and multiple providers can bid to host it, driving the price down. Once the client accepts a bid, a lease is formed and the workload runs on the chosen provider’s infrastructure. Payments for leases are handled by the blockchain: the client escrows AKT and it streams to the provider over time for as long as the deployment is active. Uniquely, the Akash network charges a protocol “take rate” fee on each lease to fund the ecosystem and reward AKT stakers: 10% of the lease amount if paid in AKT (or 20% if paid in another currency) is diverted as fees to the network treasury and stakers. This means AKT stakers earn a portion of all usage, aligning the token’s value with actual demand on the platform. To improve usability for mainstream users, Akash has integrated stablecoin and credit card payments (via its console app): a client can pay in USD stablecoin, which under the hood is converted to AKT (with a higher fee rate). This reduces the volatility risk for users while still driving value to the AKT token (since those stablecoin payments ultimately result in AKT being bought/burned or distributed to stakers).
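The take-rate split is straightforward to model. The sketch below assumes a single lump-sum settlement rather than Akash's continuous payment streaming, purely for illustration.

```python
# Sketch of the protocol take rate on a lease payment: 10% of AKT-denominated payments
# (or 20% of payments made in other currencies) goes to the network/stakers; the rest goes
# to the provider. Simplified to one settlement instead of continuous streaming.

def split_lease_payment(amount: float, paid_in_akt: bool) -> tuple[float, float]:
    take_rate = 0.10 if paid_in_akt else 0.20
    protocol_fee = amount * take_rate
    return amount - protocol_fee, protocol_fee   # (provider share, staker/treasury share)

print(split_lease_payment(100.0, paid_in_akt=True))    # (90.0, 10.0)
print(split_lease_payment(100.0, paid_in_akt=False))   # (80.0, 20.0)
```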

On the supply side, AKT’s tokenomics are designed to incentivize long-term participation. Akash began with 100 million AKT at genesis and has a max supply of 389 million via inflation. The inflation rate is adaptive based on the proportion of AKT staked: it targets 20–25% annual inflation if the staking ratio is low, and around 15% if a high percentage of AKT is staked. This adaptive inflation (a common design in Cosmos-based chains) encourages holders to stake (contributing to network security) by rewarding them more when staking participation is low. Block rewards from inflation pay validators and delegators, as well as funding a reserve for ecosystem growth. AKT’s initial distribution set aside allocations for investors, the core team (Overclock Labs), and a foundation pool for ecosystem incentives (e.g. an early program in 2024 funded GPU providers to join).
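A rough model of the adaptive inflation curve is sketched below; the linear interpolation and the 67% staking target are assumptions chosen for illustration, since the actual Cosmos-SDK mint module adjusts the rate gradually block by block.

```python
# Sketch of adaptive inflation: annual issuance rises toward ~25% when little AKT is staked
# and falls toward ~15% as the staking ratio approaches the target. Linear interpolation and
# the 0.67 target are illustrative assumptions, not Akash's exact parameters.

def annual_inflation(staking_ratio: float, target_ratio: float = 0.67,
                     max_rate: float = 0.25, min_rate: float = 0.15) -> float:
    ratio = min(staking_ratio / target_ratio, 1.0)
    return max_rate - (max_rate - min_rate) * ratio

print(annual_inflation(0.20))   # ~0.22 -> low staking, higher issuance to attract stakers
print(annual_inflation(0.67))   # 0.15 -> at target, issuance falls to the floor
```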

Economic value creation: Akash creates value by offering cloud computing at potentially much lower costs than incumbent cloud providers, leveraging underutilized servers around the world. By decentralizing the cloud, it also aims to fill regional gaps and reduce reliance on a few big tech companies. The AKT token accrues value from multiple angles: demand-side fees (more workloads = more AKT fees flowing to stakers), supply-side needs (providers may hold or stake earnings, and need to stake some AKT as collateral for providing services), and general network growth (AKT is needed for governance and as a reserve currency in the ecosystem). Importantly, as more real workloads run on Akash, the proportion of AKT in circulation that is used for staking and fee deposits should increase, reflecting real utility. Initially, Akash saw modest usage for web services and crypto infrastructure hosting, but in late 2023 it expanded support for GPU workloads – making it possible to run AI training, machine learning, and high-performance compute jobs on the network. This has significantly boosted Akash’s usage in 2024. By Q3 2024, the network’s metrics showed explosive growth: the number of active deployments (“leases”) grew 1,729% year-on-year, and the average fee per lease (a proxy for complexity of workloads) rose 688%. In practice, this means users are deploying far more applications on Akash and are willing to run larger, longer workloads (many involving GPUs) – evidence that token incentives have attracted real paying demand. Akash’s team reported that by the end of 2024, the network had over 700 GPUs online with ~78% utilization (i.e. ~78% of GPU capacity rented out at any time). This is a strong signal of efficient token incentive conversion (see next section). The built-in fee-sharing model also means that as this usage grows, AKT stakers receive protocol revenue, effectively tying token rewards to actual service revenue – a healthier long-term economic design.

io.net (Decentralized GPU Cloud for AI)

io.net is a newer entrant (built on Solana) aiming to become the “world’s largest GPU network” specifically geared toward AI and machine learning workloads. Its economic model draws lessons from earlier projects like Render and Akash. The native token IO has a fixed maximum supply of 800 million. At launch, 500 million IO were pre-minted and allocated to various stakeholders, and the remaining 300 million IO are being emitted as mining rewards over a 20-year period (distributed hourly to GPU providers and stakers). Notably, io.net implements a revenue-based burn mechanism: a portion of network fees/revenue is used to burn IO tokens, directly tying token supply to platform usage. This combination – a capped supply with time-released emissions and a burn driven by usage – is intended to ensure long-term sustainability of the token economy.

To join the network as a GPU node, providers are required to stake a minimum amount of IO as collateral. This serves two purposes: it deters malicious or low-quality nodes (as they have “skin in the game”), and it reduces immediate sell pressure from reward tokens (since nodes must lock up some tokens to participate). Stakers (which can include both providers and other participants) also earn a share of network rewards, aligning incentives across the ecosystem. On the demand side, customers (AI developers, etc.) pay for GPU compute on io.net, presumably in IO tokens or possibly stable equivalents – the project claims to offer cloud GPU power at up to 90% lower cost than traditional providers like AWS. These usage fees drive the burn mechanism: as revenue flows in, a portion of tokens get burned, linking platform success to token scarcity.
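Putting the fixed emission schedule and the revenue-based burn together, a back-of-the-envelope model of yearly net issuance might look like the following. The flat hourly emission, burn fraction, revenue, and token price are all illustrative assumptions, not published io.net parameters.

```python
# Sketch of io.net-style supply dynamics under stated assumptions: 300M IO emitted over
# 20 years (simplified here as a flat hourly rate) and a burn equal to some fraction of
# network revenue. Burn fraction, revenue, and price are illustrative placeholders.

HOURS_PER_YEAR = 24 * 365
EMISSION_PER_HOUR = 300_000_000 / (20 * HOURS_PER_YEAR)   # ~1,712 IO/hour if emitted evenly

def yearly_net_issuance(annual_revenue_usd: float, io_price_usd: float, burn_fraction: float) -> float:
    emitted = EMISSION_PER_HOUR * HOURS_PER_YEAR
    burned = (annual_revenue_usd * burn_fraction) / io_price_usd
    return emitted - burned

# $30M of annual usage revenue, half of it burned, at $2.50 per IO:
print(yearly_net_issuance(30_000_000, 2.50, 0.5))  # 15M emitted - 6M burned = 9,000,000.0 net new IO
```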

Economic value creation: io.net’s value proposition is aggregating GPU power from many sources (data centers, crypto miners repurposing mining rigs, etc.) into a single network that can deliver on-demand compute for AI at massive scale. By aiming to onboard over 1 million GPUs globally, io.net seeks to out-scale any single cloud and meet the surging demand for AI model training and inference. The IO token captures value through a blend of mechanisms: supply is limited (so token value can grow if demand for network services grows), usage burns tokens (directly creating value feedback to the token from service revenue), and token rewards bootstrap supply (gradually distributing tokens to those who contribute GPUs, ensuring the network grows). In essence, io.net’s economic model is a refined DePIN approach where supply-side incentives (hourly IO emissions) are substantial but finite, and they are counter-balanced by token sinks (burns) that scale with actual usage. This is designed to avoid the trap of excessive inflation with no demand. As we will see, the AI compute trend provides a large and growing market for networks like io.net to tap into, which could drive the desired equilibrium where token incentives lead to robust service usage. (io.net is still emerging, so its real-world metrics remain to be proven, but its design clearly targets the AI compute sector’s needs.)

Table 1: Key Economic Model Features of Selected DePIN Projects

| Project | Sector | Token (Ticker) | Supply & Distribution | Incentive Mechanism | Token Utility & Value Flow |
| --- | --- | --- | --- | --- | --- |
| Helium | Decentralized Wireless (IoT & 5G) | Helium Network Token (HNT); plus sub-tokens IOT & MOBILE | Variable supply, decreasing issuance: HNT emissions halved every ~2 years (as of original blockchain), targeting ~223M HNT in circulation after 50 years. Migrated to Solana with 2 new sub-tokens: IOT and MOBILE rewarded to IoT and 5G hotspot owners. | Proof-of-Coverage mining: Hotspots earn IOT or MOBILE tokens for providing coverage (LoRaWAN or 5G). Those sub-tokens can be converted to HNT via treasury pools. HNT is staked for governance (veHNT) and is the basis for rewards across networks. | Network usage via Data Credits: HNT is burned to create Data Credits (DC) for device connectivity (fixed price $0.0001 per 24 bytes). All network fees (DC purchases) effectively burn HNT (reducing supply). Token value thus ties to demand for IoT/Mobile data transfer. HNT’s value also backs the subDAO tokens (giving them convertibility to a scarce asset). |
| Filecoin | Decentralized Storage | Filecoin (FIL) | Capped supply 2 billion: 70% allocated to storage mining rewards (released over decades); ~30% to Protocol Labs, investors, and foundation. Block rewards follow a six-year half-life (higher inflation early, tapering later). | Storage mining: Storage providers earn FIL block rewards proportional to proven storage contributed. Clients pay FIL for storing or retrieving data. Miners put up FIL collateral that can be slashed for failure. Filecoin Plus gives 10× power reward for “useful” client data to incentivize real storage. | Payment & collateral: FIL is the currency for storage deals – clients spend FIL to store data, creating organic demand for the token. Miners lock FIL as collateral (temporarily reducing circulating supply) and earn FIL for useful service. As usage grows, more FIL gets tied up in deals and collateral. Network fees (for transactions) are minimal (Filecoin focuses on storage fees which go to miners). Long term, FIL value depends on data storage demand and emerging use cases (e.g. Filecoin Virtual Machine enabling smart contracts for data, potentially generating new fee sinks). |
| Render Network | Decentralized GPU Compute (Rendering & AI) | Render Token (RNDR / RENDER) | Initial supply ~536.9M RNDR, increased to max ~644M via new emissions. Burn-and-Mint Equilibrium: New RENDER emitted on a fixed schedule (20% inflation pool over ~5 years, then tail emissions). Emissions fund network incentives (node rewards, etc.). Burning: Users’ payments in RENDER are burned for each completed job. Distribution: 65% treasury (network ops and rewards), 25% investors, 10% team/advisors. | Marketplace for GPU work: Node operators do rendering/compute tasks and earn RENDER. Jobs are priced in USD but paid in RENDER; the required tokens are burned when the work is done. In each epoch (e.g. weekly), new RENDER is minted and distributed to node operators based on the work they completed. Node operators can also stake RNDR for higher trust and potential job priority. | Utility & value flow: RENDER is the fee token for GPU services – content creators and AI developers must acquire and spend it to get work done. Because those tokens are burned, usage directly reduces supply. New token issuance compensates workers, but on a declining schedule. If network demand is high (burn > emission), RENDER becomes deflationary; if demand is low, inflation may exceed burns (incentivizing more supply until demand catches up). RENDER also governs the network. The token’s value is thus closely linked to platform usage – in fact, RNDR rallied ~10× in 2023 as AI-driven demand for GPU compute skyrocketed, indicating market confidence that usage (and burns) will be high. |
| Akash Network | Decentralized Cloud (general compute & GPU) | Akash Token (AKT) | Initial supply 100M; max supply 389M. Inflationary PoS token: Adaptive inflation ~15–25% annually (dropping as staking % rises) to incentivize staking. Ongoing emissions pay validators and delegators. Distribution: 34.5% investors, 27% team, 19.7% foundation, 8% ecosystem, 5% testnet (with lock-ups/vesting). | Reverse-auction marketplace: Providers bid to host deployments; clients pay in AKT for leases. Fee pool: 10% of AKT payments (or 20% of payments in other tokens) goes to the network (stakers) as a protocol fee. Akash uses a Proof-of-Stake chain – validators stake AKT to secure the network and earn block rewards. Clients can pay via AKT or integrated stablecoins (with conversion). | Utility & value flow: AKT is used for all transactions (either directly or via conversion from stable payments). Clients buy AKT to pay for compute leases, creating demand as network usage grows. Providers earn AKT and can sell or stake it. Staking rewards + fee revenue: Holding and staking AKT yields rewards from inflation and a share of all fees, so active network usage benefits stakers directly. This model aligns token value with cloud demand: as more CPU/GPU workloads run on Akash, more fees in AKT flow to holders (and more AKT might be locked as collateral or staked by providers). Governance is also via AKT holdings. Overall, the token’s health improves with higher utilization and has inflation controls to encourage long-term participation. |
| io.net | Decentralized GPU Cloud (AI-focused) | IO Token (IO) | Fixed cap 800M IO: 500M pre-minted (allocated to team, investors, community, etc.), 300M emitted over ~20 years as mining rewards (hourly distribution). No further inflation after that cap. Built-in burn: Network revenue triggers token burns to reduce supply. Staking: providers must stake a minimum IO to participate (and can stake more for rewards). | GPU sharing network: Hardware providers (data centers, miners) connect GPUs and earn IO rewards continuously (hourly) for contributing capacity. They also earn fees from customers’ usage. Staking requirement: Operators stake IO as collateral to ensure good behavior. Users likely pay in IO (or in stable converted to IO) for AI compute tasks; a portion of every fee is burned by the protocol. | Utility & value flow: IO is the medium of exchange for GPU compute power on the network, and also the security token that operators stake. Token value is driven by a trifecta: (1) Demand for AI compute – clients must acquire IO to pay for jobs, and higher usage means more tokens burned (reducing supply). (2) Mining incentives – new IO distributed to GPU providers motivates network growth, but the fixed cap limits long-term inflation. (3) Staking – IO is locked up by providers (and possibly users or delegates) to earn rewards, reducing liquid supply and aligning participants with network success. In sum, io.net’s token model is designed so that if it successfully attracts AI workloads at scale, token supply becomes increasingly scarce (through burns and staking), benefiting holders. The fixed supply also imposes discipline, preventing endless inflation and aiming for a sustainable “reward-for-revenue” balance. |

Sources: Official documentation and research for each project (see inline citations above).

Token Incentives vs. Real-World Service Usage

A critical question for DePIN projects is how effectively token incentives convert into real service provisioning and actual usage of the network. In the initial stages, many DePIN protocols emphasized bootstrapping supply (hardware deployment) through generous token rewards, even if demand was minimal – a “build it and (hopefully) they will come” strategy. This led to situations where the network’s market cap and token emissions far outpaced the revenue from customers. As of late 2024, the entire DePIN sector (~350 projects) had a combined market cap of ~$50 billion, yet generated only about ~$0.5 billion annualized revenue – an aggregate valuation of ~100× annual revenue. Such a gap underscores the inefficiency in early stages. However, recent trends show improvements as networks shift from purely supply-driven growth to demand-driven adoption, especially propelled by the surge in AI compute needs.

Below we evaluate each example project’s token incentive efficiency, looking at usage metrics versus token outlays:

  • Helium: Helium’s IoT network grew explosively in 2021–2022, with nearly 1 million hotspots deployed globally for LoRaWAN coverage. This growth was almost entirely driven by the HNT mining incentives and crypto enthusiasm – not by customer demand for IoT data, which remained low. By mid-2022, it became clear that Helium’s data traffic (devices actually using the network) was minuscule relative to the enormous supply-side investment. One analysis in 2022 noted that less than $1,000 of tokens were burned for data usage per month, even as the network was minting tens of millions of dollars worth of HNT for hotspot rewards – a stark imbalance (essentially, less than 1% of token emission was being offset by network usage). In late 2022 and 2023, HNT token rewards underwent scheduled halvings (reducing issuance), but usage was still lagging. An example from November 2023: the dollar value of Helium Data Credits burned was only about $156 for that day – whereas the network was still paying out an estimated $55,000 per day in token rewards to hotspot owners (valued in USD). In other words, that day’s token incentive “cost” outweighed actual network usage by a factor of 350:1. This illustrates the poor incentive-to-usage conversion in Helium’s early IoT phase. Helium’s founders recognized this “chicken-and-egg” dilemma: a network needs coverage before it can attract users, but without users the coverage is hard to monetize.

    There are signs of improvement. In late 2023, Helium activated its 5G Mobile network with a consumer-facing cell service (backed by T-Mobile roaming) and began rewarding 5G hotspot operators in MOBILE tokens. The launch of Helium Mobile (5G) quickly brought in paying users (e.g. subscribers to Helium’s $20/month unlimited mobile plan) and new types of network usage. Within weeks, Helium’s network usage jumped – by early 2024, the daily Data Credit burn reached ~$4,300 (up from almost nothing a couple months prior). Moreover, 92% of all Data Credits consumed were from the Mobile network (5G) as of Q1 2024, meaning the 5G service immediately dwarfed the IoT usage. While $4.3k/day is still modest in absolute terms (~$1.6 million annualized), it represents a meaningful step toward real revenue. Helium’s token model is adapting: by isolating the IoT and Mobile networks into separate reward tokens, it ensures that the 5G rewards (MOBILE tokens) will scale down if 5G usage doesn’t materialize, and similarly for IOT tokens – effectively containing the inefficiency. Helium Mobile’s growth also showed the power of coupling token incentives with a service of immediate consumer interest (cheap cellular data). Within 6 months of launch, Helium had ~93,000 MOBILE hotspots deployed in the US (alongside ~1 million IoT hotspots worldwide), and had struck partnerships (e.g. with Telefónica) to expand coverage. The challenge ahead is to substantially grow the user base (both IoT device clients and 5G subscribers) so that burning of HNT for Data Credits approaches the scale of HNT issuance. In summary, Helium started with an extreme supply surplus (and correspondingly overvalued token), but its pivot toward demand (5G, and positioning as an “infrastructure layer” for other networks) is gradually improving the efficiency of its token incentives.

  • Filecoin: In Filecoin’s case, the imbalance was between storage capacity vs. actual stored data. Token incentives led to an overabundance of supply: at its peak, the Filecoin network had well over 15 exbibytes (EiB) of raw storage capacity pledged by miners, yet for a long time only a few percent of that was utilized by real data. Much of the space was filled with dummy data (clients could even store random garbage data to satisfy proof requirements) just so miners could earn FIL rewards. This meant a lot of FIL was being minted and awarded for storage that wasn’t actually demanded by users. However, over 2022–2023 the network made big strides in driving demand. Through initiatives like Filecoin Plus and aggressive onboarding of open datasets, the utilization rate climbed from ~3% to over 20% of capacity in 2023. By Q4 2024, Filecoin’s storage utilization had further risen to ~30% – meaning nearly one-third of the enormous capacity was holding real client data. This is still far from 100%, but the trend is positive: token rewards are increasingly going toward useful storage rather than empty padding. Another measure: as of Q1 2024, about 1,900 PiB (1.9 EiB) of data was stored in active deals on Filecoin, a 200% year-over-year increase. Notably, the majority of new deals now come via Filecoin Plus (verified clients), indicating miners strongly prefer to devote space to data that earns them bonus reward multipliers.

    In terms of economic efficiency, Filecoin’s protocol also experienced a shift: initially, protocol “revenue” (fees paid by users) was negligible compared to mining rewards (which some analyses treated as revenue, inflating early figures). For example, in 2021, Filecoin’s block rewards were worth hundreds of millions of dollars (at high FIL prices), but actual storage fees were tiny; in 2022, as FIL price fell, reported revenue dropped 98% from $596M to $13M, reflecting that most of 2021’s “revenue” was token issuance value rather than customer spend. Going forward, the balance is improving: the pipeline of paying storage clients is growing (e.g. an enterprise deal of 1 PiB was closed in late 2023, one of the first large fully-paid deals). Filecoin’s introduction of the FVM (enabling smart contracts) and forthcoming storage marketplaces and DEXes are expected to bring more on-chain fee activity (and possibly FIL burns or lockups). In summary, Filecoin’s token incentives successfully built a massive global storage network, albeit with efficiency under 5% in the early period; by 2024 that efficiency improved to ~20–30% and is on track to climb further as real demand catches up with the subsidized supply. The sector’s overall demand for decentralized storage (Web3 data, archives, NFT metadata, AI datasets, etc.) appears to be rising, which bodes well for converting more of those mining rewards into actual useful service.

  • Render Network: Render’s token model inherently links incentives to usage more tightly, thanks to the burn-and-mint equilibrium. In the legacy model (pre-2023), RNDR issuance was largely in the hands of the foundation and based on network growth goals, while usage involved locking up RNDR in escrow for jobs. This made it a bit difficult to analyze efficiency. However, with BME fully implemented in 2023, we can measure how many tokens are burned relative to how many are minted. Since each rendering or compute job burns RNDR proportional to its cost, essentially every token emitted as a reward corresponds to work done (minus any net inflation if emissions > burns in a given epoch). Early data from the Render network post-upgrade indicated that usage was indeed ramping up: the Render Foundation noted that at “peak moments” the network could be completing more render frames per second than Ethereum could handle in transactions, underscoring significant activity. While detailed usage stats (e.g. number of jobs or GPU-hours consumed) aren’t fully public, one strong indicator is the price and demand for RNDR. In 2023, RNDR became one of the best-performing crypto assets, rising from roughly $0.40 in January to over $2.50 by May, and continuing to climb thereafter. By November 2023, RNDR was up over 10× year-to-date, propelled by the frenzy for AI-related computing power. This price action suggests that users were buying RNDR to get rendering and AI jobs done (or speculators anticipated they would need to). Indeed, the interest in AI tasks likely brought a new wave of demand – Render reported that its network was expanding beyond media rendering into AI model training, and that the GPU shortage in traditional clouds meant demand far outstripped supply in this niche. In essence, Render’s token incentives (the emissions) have been met with equally strong user demand (burns), making its incentive-to-usage conversion relatively high. It’s worth noting that in the first year of BME, the network intentionally allocated some extra tokens (the 9.1M RENDER emissions) to bootstrap node operator earnings. If those outpace usage, it could introduce some temporary inflationary inefficiency. However, given the network’s growth, the burn rate of RNDR has been climbing. The Render Network Dashboard as of mid-2024 showed steady increases in cumulative RNDR burned, indicating real jobs being processed. Another qualitative sign of success: major studios and content creators have used Render for high-profile projects, proving real-world adoption (these are not just crypto enthusiasts running nodes – they are customers paying for rendering). All told, Render appears to have one of the more effective token-to-service conversion metrics in DePIN: if the network is busy, RNDR is being burned and token holders see tangible value; if the network were idle, token emissions would be the only output, but the excitement around AI has ensured the network is far from idle.

  • Akash: Akash’s efficiency can be seen in the context of cloud spend vs. token issuance. As a proof-of-stake chain, Akash’s AKT has inflation to reward validators, but that inflation is not excessively high (and a large portion is offset by staking locks). The more interesting part is how much real usage the token is capturing. In 2022, Akash usage was relatively low (only a few hundred deployments at any time, mainly small apps or test nets). This meant AKT’s value was speculative, not backed by fees. However, in 2023–2024, usage exploded due to AI. By late 2024, Akash was processing ~$11k of spend per day on its network, up from just ~$1.3k/day in January 2024 – a ~749% increase in daily revenue within the year. Over the course of 2024, Akash surpassed $1.6 million in cumulative paid spend for compute. These numbers, while still small compared to giants like AWS, represent actual customers deploying workloads on Akash and paying in AKT or USDC (which ultimately drives AKT demand via conversion). The token incentives (inflationary rewards) during that period were on the order of maybe 15–20% of the 130M circulating AKT (~20–26M AKT minted in 2024, which at $1–3 per AKT might be $20–50M value). So in pure dollar terms, the network was still issuing more value in tokens than it was bringing in fees – similar to other early-stage networks. But the trend is that usage is catching up fast. A telling statistic: comparing Q3 2024 to Q3 2023, the average fee per lease rose from $6.42 to $18.75. This means users are running much more resource-intensive (and thus expensive) workloads, likely GPUs for AI, and they are willing to pay more, presumably because the network delivers value (e.g. lower cost than alternatives). Also, because Akash charges a 10–20% fee on leases to the protocol, that means 10–20% of that $1.6M cumulative spend went to stakers as real yield. In Q4 2024, AKT’s price hit new multi-year highs (~$4, an 8× increase from mid-2023 lows), indicating the market recognized the improved fundamentals and usage. On-chain data from year-end 2024 showed over 650 active leases and over 700 GPUs in the network with ~78% utilization – effectively, most of the GPUs added via incentives were actually in use by customers. This is a strong conversion of token incentives into service: nearly 4 out of 5 GPUs incentivized were serving AI developers (for model training, etc.). Akash’s proactive steps, like enabling credit card payments and supporting popular AI frameworks, helped bridge crypto tokens to real-world users (some users might not even know they are paying for AKT under the hood). Overall, while Akash initially had the common DePIN issue of “supply > demand,” it is quickly moving toward a more balanced state. If AI demand continues, Akash could even approach a regime where demand outstrips the token incentives – in other words, usage might drive AKT’s value more than speculative inflation. The protocol’s design to share fees with stakers also means AKT holders benefit directly as efficiency improves (e.g. by late 2024, stakers were earning significant yield from actual fees, not just inflation).

  • io.net: Being a very new project (launched in 2023/24), io.net’s efficiency is still largely theoretical, but its model is built explicitly to maximize incentive conversion. By hard-capping supply and instituting hourly rewards, io.net avoids the scenario of runaway indefinite inflation. And by burning tokens based on revenue, it ensures that as soon as demand kicks in, there is an automatic counterweight to token emissions. Early reports claimed io.net had aggregated a large number of GPUs (possibly by bringing existing mining farms and data centers on board), giving it significant supply to offer. The key will be whether that supply finds commensurate demand from AI customers. One positive sign for the sector: as of 2024, decentralized GPU networks (including Render, Akash, and io.net) were often capacity-constrained, not demand-constrained – meaning there was more user demand for compute than the networks had online at any moment. If io.net taps into that unmet demand (offering lower prices or unique integrations via Solana’s ecosystem), its token burn could accelerate. On the flip side, if it distributed a large chunk of the 500M IO initial supply to insiders or providers, there is a risk of sell pressure if usage lags. Without concrete usage data yet, io.net serves as a test of the refined tokenomic approach: it targets a demand-driven equilibrium from the outset, trying to avoid oversupplying tokens. In coming years, one can measure its success by tracking what percentage of the 300M emission gets effectively “paid for” by network revenue (burns). The DePIN sector’s evolution suggests io.net is entering at a fortuitous time when AI demand is high, so it may reach high utilization more quickly than earlier projects did.

In summary, early DePIN projects often faced low token incentive efficiency, with token payouts vastly exceeding real usage. Helium’s IoT network was a prime example, where token rewards built a huge network that was only a few percent utilized. Filecoin similarly had a bounty of storage with little stored data initially. However, through network improvements and external demand trends, these gaps are closing. Helium’s 5G pivot multiplied usage, Filecoin’s utilization is steadily climbing, and both Render and Akash have seen real usage surge in tandem with the AI boom, bringing their token economics closer to a sustainable loop. A general trend in 2024 was the shift to “prove the demand”: DePIN teams started focusing on getting users and revenue, not just hardware and hype. This is evidenced by networks like Helium courting enterprise partners for IoT and telco, Filecoin onboarding large Web2 datasets, and Akash making its platform user-friendly for AI developers. The net effect is that token values are increasingly underpinned by fundamentals (e.g. data stored, GPU hours sold) rather than just speculation. While there is still a long way to go – the sector overall at 100× price/revenue implies plenty of speculation remains – the trajectory is towards more efficient use of token incentives. Projects that fail to translate tokens into service (or “hardware on the ground”) will likely fade, while those that achieve a high conversion rate are gaining investor and community confidence.

AI Compute Trends

One of the most significant developments benefiting DePIN projects is the explosive growth in AI computing demand. The 2023–2024 period saw AI model training and deployment become a multi-billion-dollar market, straining the capacity of traditional cloud providers and GPU vendors. Decentralized infrastructure networks have quickly adapted to capture this opportunity, leading to a convergence sometimes dubbed “DePIN x AI” or even “Decentralized Physical AI (DePAI)” by futurists. Below, we outline how our focus projects and the broader DePIN sector are leveraging the AI trend:

  • Decentralized GPU Networks & AI: Projects like Render, Akash, io.net (and others such as Golem, Vast.ai, etc.) are at the forefront of serving AI needs. As noted, Render expanded beyond rendering to support AI workloads – e.g. renting GPU power to train Stable Diffusion models or other ML tasks. Interest in AI has directly driven usage on these networks. In mid-2023, demand for GPU compute to train image and language models skyrocketed. Render Network benefited as many developers and even some enterprises turned to it for cheaper GPU time; this was a factor in RNDR’s 10× price surge, reflecting the market’s belief that Render would supply GPUs to meet AI needs. Similarly, Akash’s GPU launch in late 2023 coincided with the generative AI boom – within months, hundreds of GPUs on Akash were being rented to fine-tune language models or serve AI APIs. The utilization rate of GPUs on Akash reaching ~78% by year-end 2024 indicates that nearly all incentivized hardware found demand from AI users. io.net is explicitly positioning itself as an “AI-focused decentralized computing network”. It touts integration with AI frameworks (they mention using the Ray distributed compute framework, popular in machine learning, to make it easy for AI developers to scale on io.net). Io.net’s value proposition – being able to deploy a GPU cluster in 90 seconds at 10–20× efficiency of cloud – is squarely aimed at AI startups and researchers who are constrained by expensive or backlogged cloud GPU instances. This targeting is strategic: 2024 saw extreme GPU shortages (e.g. NVIDIA’s high-end AI chips were sold out), and decentralized networks with access to any kind of GPU (even older models or gaming GPUs) stepped in to fill the gap. The World Economic Forum noted the emergence of “Decentralized Physical AI (DePAI)” where everyday people contribute computing power and data to AI processes and get rewarded. This concept aligns with GPU DePIN projects enabling anyone with a decent GPU to earn tokens by supporting AI workloads. Messari’s research likewise highlighted that the intense demand from the AI industry in 2024 has been a “significant accelerator” for the DePIN sector’s shift to demand-driven growth.

  • Storage Networks & AI Data: The AI boom isn’t just about computation – it also requires storing massive datasets (for training) and distributing trained models. Decentralized storage networks like Filecoin and Arweave have found new use cases here. Filecoin in particular has embraced AI as a key growth vector: in 2024 the Filecoin community identified “Compute and AI” as one of three focus areas. With the launch of the Filecoin Virtual Machine, it’s now possible to run compute services close to the data stored on Filecoin. Projects like Bacalhau (a distributed compute-over-data project) and Fluence’s compute L2 are building on Filecoin to let users run AI algorithms directly on data stored in the network. The idea is to enable, for example, training a model on a large dataset that’s already stored across Filecoin nodes, rather than having to move it to a centralized cluster. Filecoin’s tech innovations like InterPlanetary Consensus (IPC) allow spinning up subnetworks that could be dedicated to specific workloads (like an AI-specific sidechain leveraging Filecoin’s storage security). Furthermore, Filecoin is supporting decentralized data commons that are highly relevant to AI – for instance, datasets from universities, autonomous vehicle data, or satellite imagery can be hosted on Filecoin, and then accessed by AI models. The network proudly stores major AI-relevant datasets (the referenced UC Berkeley and Internet Archive data, for example). On the token side, this means more clients using FIL for data – but even more exciting is the potential for secondary markets for data: Filecoin’s vision includes allowing storage clients to monetize their data for AI training use cases. That suggests a future where owning a large dataset on Filecoin could earn you tokens when AI companies pay to train on it, etc., creating an ecosystem where FIL flows not just for storage but for data usage rights. This is nascent but highlights how deeply Filecoin is coupling with AI trends.

  • Wireless Networks & Edge Data for AI: On the surface, Helium and similar wireless DePINs are less directly tied to AI compute. However, there are a few connections. IoT sensor networks (like Helium’s IoT subDAO, and others such as Nodle or WeatherXM) can supply valuable real-world data to feed AI models. For instance, WeatherXM (a DePIN for weather station data) provides a decentralized stream of weather data that could improve climate models or AI predictions – WeatherXM data is being integrated via Filecoin’s Basin L2 for exactly these reasons. Nodle, which uses smartphones as nodes to collect data (and is considered a DePIN), is building an app called “Click” for decentralized smart camera footage; they plan to integrate Filecoin to store the images and potentially use them in AI computer vision training. Helium’s role could be providing the connectivity for such edge devices – for example, a city deploying Helium IoT sensors for air quality or traffic, and those datasets then being used to train urban planning AI. Additionally, the Helium 5G network could serve as edge infrastructure for AI in the future: imagine autonomous drones or vehicles that use decentralized 5G for connectivity – the data they generate (and consume) might plug into AI systems continuously. While Helium hasn’t announced specific “AI strategies,” its parent Nova Labs has hinted at positioning Helium as a general infrastructure layer for other DePIN projects. This could include ones in AI. For example, Helium could provide the physical wireless layer for an AI-powered fleet of devices, while that AI fleet’s computational needs are handled by networks like Akash, and data storage by Filecoin – an interconnected DePIN stack.

  • Synergistic Growth and Investments: Both crypto investors and traditional players are noticing the DePIN–AI synergy. Messari’s 2024 report projected the DePIN market could grow to $3.5 trillion by 2028 (from ~$50B in 2024) if trends continue. This bullish outlook is largely premised on AI being a “killer app” for decentralized infrastructure. The concept of DePAI (Decentralized Physical AI) envisions a future where ordinary people contribute not just hardware but also data to AI systems and get rewarded, breaking Big Tech’s monopoly on AI datasets. For instance, someone’s autonomous vehicle could collect road data, upload it via a network like Helium, store it on Filecoin, and have it used by an AI training on Akash – with each protocol rewarding the contributors in tokens. While somewhat futuristic, early building blocks of this vision are appearing (e.g. HiveMapper, a DePIN mapping project where drivers’ dashcams build a map – those maps could train self-driving AI; contributors earn tokens). We also see AI-focused crypto projects like Bittensor (TAO) – a network for training AI models in a decentralized way – reaching multi-billion valuations, indicating strong investor appetite for AI+crypto combos.

  • Autonomous Agents and Machine-to-Machine Economy: A fascinating trend on the horizon is AI agents using DePIN services autonomously. Messari speculated that by 2025, AI agent networks (like autonomous bots) might directly procure decentralized compute and storage from DePIN protocols to perform tasks for humans or for other machines. In such a scenario, an AI agent (say, part of a decentralized network of AI services) could automatically rent GPUs from Render or io.net when it needs more compute, pay with crypto, store its results on Filecoin, and communicate over Helium – all without human intervention, negotiating and transacting via smart contracts. This machine-to-machine economy could unlock a new wave of demand that is natively suited to DePIN (since AI agents don’t have credit cards but can use tokens to pay each other). It’s still early, but prototypes like Fetch.ai and others hint at this direction. If it materializes, DePIN networks would see a direct influx of machine-driven usage, further validating their models.

  • Energy and Other Physical Verticals: While our focus has been connectivity, storage, and compute, the AI trend also touches other DePIN areas. For example, decentralized energy grids (sometimes called DeGEN – decentralized energy networks) could benefit as AI optimizes energy distribution: if someone shares excess solar power into a microgrid for tokens, AI could predict and route that power efficiently. A project cited in the Binance report describes tokens for contributing excess solar energy to a grid. AI algorithms managing such grids could again be run on decentralized compute. Likewise, AI can enhance decentralized networks’ performance – e.g. AI-based optimization of Helium’s radio coverage or AI ops for predictive maintenance of Filecoin storage nodes. This is more about using AI within DePIN, but it demonstrates the cross-pollination of technologies.

In essence, AI has become a tailwind for DePIN. The previously separate narratives of “blockchain meets real world” and “AI revolution” are converging into a shared narrative: decentralization can help meet AI’s infrastructure demands, and AI can, in turn, drive massive real-world usage for decentralized networks. This convergence is attracting significant capital – over $350M was invested in DePIN startups in 2024 alone, much of it aiming at AI-related infrastructure (for instance, many recent fundraises were for decentralized GPU projects, edge computing for AI, etc.). It’s also fostering collaboration between projects (Filecoin working with Helium, Akash integrating with other AI tool providers, etc.).

Conclusion

DePIN projects like Helium, Filecoin, Render, and Akash represent a bold bet that crypto incentives can bootstrap real-world infrastructure faster and more equitably than traditional models. Each has crafted a unique economic model: Helium uses token burns and proof-of-coverage to crowdsource wireless networks, Filecoin uses cryptoeconomics to create a decentralized data storage marketplace, Render and Akash turn GPUs and servers into global shared resources through tokenized payments and rewards. Early on, these models showed strains – rapid supply growth with lagging demand – but they have demonstrated the ability to adjust and improve efficiency over time. The token-incentive flywheel, while not a magic bullet, has proven capable of assembling impressive physical networks: a global IoT/5G network, an exabyte-scale storage grid, and distributed GPU clouds. Now, as real usage catches up (from IoT devices to AI labs), these networks are transitioning toward sustainable service economies where tokens are earned by delivering value, not just by being early.

The rise of AI has supercharged this transition. AI’s insatiable appetite for compute and data plays to DePIN’s strengths: untapped resources can be tapped, idle hardware put to work, and participants globally can share the rewards. The alignment of AI-driven demand with DePIN supply in 2024 has been a pivotal moment, arguably providing the “product-market fit” that some of these projects were waiting for. Trends suggest that decentralized infrastructure will continue to ride the AI wave – whether by hosting AI models, collecting training data, or enabling autonomous agent economies. In the process, the value of the tokens underpinning these networks may increasingly reflect actual usage (e.g. GPU-hours sold, TB stored, devices connected) rather than speculation alone.

That said, challenges remain. DePIN projects must keep improving the conversion of investment into utility – ensuring that adding one more hotspot or one more GPU actually delivers proportional value to users. They also face competition from traditional providers (who are hardly standing still – e.g. cloud giants are lowering prices for committed AI workloads) and must overcome regulatory hurdles (Helium’s 5G buildout requires spectrum compliance, for instance), user-experience friction with crypto, and the need for reliable performance at scale. The token models, too, require ongoing calibration: Helium’s split into sub-tokens was one such adjustment; Render’s move to a burn-and-mint equilibrium (BME) was another; other networks may implement fee burns, dynamic rewards, or DAO governance tweaks to stay balanced.

From an innovation and investment perspective, DePIN is one of the most exciting areas in Web3 because it ties crypto directly to tangible services. Investors are watching metrics like protocol revenue, utilization rates, and token value capture (P/S ratios) to discern winners. For example, if a network’s token has a high market cap but very low usage (high P/S), it might be overvalued unless one expects a surge in demand. Conversely, a network that manages to drastically increase revenue (like Akash’s 749% jump in daily spend) could see its token fundamentally re-rated. Analytics platforms (Messari, Token Terminal) now track such data: e.g. Helium’s annualized revenue (~$3.5M) vs incentives (~$47M) yielded a large deficit, while a project like Render might show a closer ratio if burns start canceling out emissions. Over time, we expect the market to reward those DePIN tokens that demonstrate real cash flows or cost savings for users – a maturation of the sector from hype to fundamentals.
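As a quick back-of-envelope illustration of these metrics, the sketch below computes a price-to-sales ratio and the revenue-versus-incentive gap using the Helium figures quoted above. The market-cap input is an arbitrary placeholder for illustration, not a reported value.

```python
# Back-of-envelope valuation checks of the kind described above.
# Revenue (~$3.5M) and incentives (~$47M) come from the text; the
# market-cap input is a placeholder, not a quoted figure.

def price_to_sales(market_cap_usd: float, annualized_revenue_usd: float) -> float:
    """Token market cap divided by annualized protocol revenue."""
    return market_cap_usd / annualized_revenue_usd

def incentive_deficit(annualized_revenue_usd: float, annualized_incentives_usd: float) -> float:
    """How much more the network pays out in token incentives than it earns."""
    return annualized_incentives_usd - annualized_revenue_usd

helium_revenue = 3.5e6      # ~$3.5M annualized revenue (from the text)
helium_incentives = 47e6    # ~$47M annualized incentives (from the text)
example_market_cap = 1.0e9  # placeholder market cap, illustration only

print(f"P/S ratio: {price_to_sales(example_market_cap, helium_revenue):.0f}x")
print(f"Incentive deficit: ${incentive_deficit(helium_revenue, helium_incentives) / 1e6:.1f}M per year")
# -> P/S ratio: 286x ; Incentive deficit: $43.5M per year
```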

In conclusion, established networks like Helium and Filecoin have demonstrated both the power and the pitfalls of tokenized infrastructure, and emerging networks like Render, Akash, and io.net are pushing the model into the high-demand realm of AI compute. The economics behind each network differ in mechanics but share a common goal: create a self-sustaining loop in which tokens incentivize the build-out of services, and the utilization of those services, in turn, supports the token’s value. Achieving this equilibrium is complex, but the progress so far – millions of devices, exabytes of data, and thousands of GPUs now online in decentralized networks – suggests that the DePIN experiment is bearing fruit. As AI and Web3 continue to converge, the next few years could see decentralized infrastructure networks move from niche alternatives to vital pillars of the internet’s fabric, delivering real-world utility powered by crypto economics.

Sources: Official project documentation and blogs, Messari research reports, and analytics data from Token Terminal and others. Key references include Messari’s Helium and Akash overviews, Filecoin Foundation updates, Binance Research on DePIN and io.net, and CoinGecko/CoinDesk analyses on token performance in the AI context. These provide the factual basis for the evaluation above, as cited throughout.