
Can 0G’s Decentralized AI Operating System Truly Drive AI On-Chain at Scale?

· 12 min read

On November 13, 2024, 0G Labs announced a $40 million funding round led by Hack VC, with participation from Delphi Digital, OKX Ventures, Samsung Next, and Animoca Brands, thrusting the team behind this decentralized AI operating system into the spotlight. Their modular approach combines decentralized storage, data availability verification, and decentralized settlement to enable AI applications on-chain. But can they realistically achieve GB/s-level throughput to fuel the next era of AI adoption on Web3? This in-depth report evaluates 0G’s architecture, incentive mechanics, ecosystem traction, and potential pitfalls to help you gauge whether 0G can deliver on its promise.

Background

The AI sector has been on a meteoric rise, catalyzed by large language models like ChatGPT and ERNIE Bot. Yet AI is more than chatbots and generative text: it spans everything from AlphaGo’s Go victories to image-generation tools like Midjourney. The holy grail many developers pursue is artificial general intelligence (AGI), colloquially described as an AI “Agent” capable of learning, perception, decision-making, and complex execution comparable to human intelligence.

However, both AI and AI Agent applications are extremely data-intensive. They rely on massive datasets for training and inference. Traditionally, this data is stored and processed on centralized infrastructure. With the advent of blockchain, a new approach known as DeAI (Decentralized AI) has emerged. DeAI attempts to leverage decentralized networks for data storage, sharing, and verification to overcome the pitfalls of traditional, centralized AI solutions.

0G Labs stands out in this DeAI infrastructure landscape, aiming to build a decentralized AI operating system known simply as 0G.

What Is 0G Labs?

In traditional computing, an Operating System (OS) manages hardware and software resources—think Microsoft Windows, Linux, macOS, iOS, or Android. An OS abstracts away the complexity of the underlying hardware, making it easier for both end-users and developers to interact with the computer.

By analogy, the 0G OS aspires to fulfill a similar role in Web3:

  • Manage decentralized storage, compute, and data availability.
  • Simplify on-chain AI application deployment.

Why decentralization? Conventional AI systems store and process data in centralized silos, raising concerns around data transparency, user privacy, and fair compensation for data providers. 0G’s approach uses decentralized storage, cryptographic proofs, and open incentive models to mitigate these risks.

The name “0G” stands for “Zero Gravity.” The team envisions an environment where data exchange and computation feel “weightless”—everything from AI training to inference and data availability happens seamlessly on-chain.

The 0G Foundation, formally established in October 2024, drives this initiative. Its stated mission is to make AI a public good—one that is accessible, verifiable, and open to all.

Key Components of the 0G Operating System

Fundamentally, 0G is a modular architecture designed specifically to support AI applications on-chain. Its three primary pillars are:

  1. 0G Storage – A decentralized storage network.
  2. 0G DA (Data Availability) – A specialized data availability layer ensuring data integrity.
  3. 0G Compute Network – Decentralized compute resource management and settlement for AI inference (and eventually training).

These pillars work in concert under the umbrella of a Layer1 network called 0G Chain, which is responsible for consensus and settlement.

According to the 0G Whitepaper (“0G: Towards Data Availability 2.0”), both the 0G Storage and 0G DA layers build on top of 0G Chain. Developers can launch multiple custom PoS consensus networks, each functioning as part of the 0G DA and 0G Storage framework. This modular approach means that as system load grows, 0G can dynamically add new validator sets or specialized nodes to scale out.

0G Storage

0G Storage is a decentralized storage system geared for large-scale data. It uses distributed nodes with built-in incentives for storing user data. Crucially, it splits data into smaller, redundant “chunks” using Erasure Coding (EC), distributing these chunks across different storage nodes. If a node fails, data can still be reconstructed from redundant chunks.
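To make the redundancy idea concrete, here is a minimal sketch of chunking with a single XOR parity chunk. It is illustrative only: the chunk count, sizes, and the parity scheme are assumptions, and a production system like 0G’s would use a Reed-Solomon-style code that tolerates multiple simultaneous losses.

```python
from functools import reduce

# Illustrative erasure-coding sketch (not 0G's actual codec): split data
# into k chunks plus one XOR parity chunk, so any single lost chunk can
# be rebuilt from the rest.

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int) -> list:
    size = -(-len(data) // k)  # ceil division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    chunks.append(reduce(xor, chunks))  # parity chunk
    return chunks

def recover(chunks: list) -> list:
    missing = [i for i, c in enumerate(chunks) if c is None]
    assert len(missing) <= 1, "single parity tolerates one lost chunk"
    if missing:
        chunks[missing[0]] = reduce(xor, [c for c in chunks if c is not None])
    return chunks

pieces = split_with_parity(b"large model weights go here", k=4)
pieces[2] = None  # simulate a failed storage node
assert b"".join(recover(pieces)[:4]).rstrip(b"\0") == b"large model weights go here"
```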

Supported Data Types

0G Storage accommodates both structured and unstructured data.

  1. Structured Data is stored in a Key-Value (KV) layer, suitable for dynamic and frequently updated information (think databases, collaborative documents, etc.).
  2. Unstructured Data is stored in a Log layer which appends data entries chronologically. This layer is akin to a file system optimized for large-scale, append-only workloads.

By stacking a KV layer on top of the Log layer, 0G Storage can serve diverse AI application needs—from storing large model weights (unstructured) to dynamic user-based data or real-time metrics (structured).

PoRA Consensus

PoRA (Proof of Random Access) ensures storage nodes actually hold the chunks they claim to store. Here’s how it works:

  • Storage miners are periodically challenged to produce cryptographic hashes of specific random data chunks they store.
  • They must respond by generating a valid hash derived from their local copy of the data (akin to PoW puzzle-solving).

To level the playing field, the system limits mining competitions to 8 TB segments. A large miner can subdivide its hardware into multiple 8 TB partitions, while smaller miners compete within a single 8 TB boundary.
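The following toy loop shows the shape of such a challenge-response: the chunk index is derived from the challenge itself, so a miner that doesn’t hold the data can’t answer without fetching it. All names and parameters here are hypothetical simplifications; real PoRA works over sealed data inside the 8 TB partitions described above and uses a protocol-defined difficulty target.

```python
import hashlib
import os

# Toy PoRA-style challenge/response (hypothetical simplification).
CHUNKS = [os.urandom(256) for _ in range(1024)]  # miner's local storage

def respond(seed: bytes, nonce: int) -> bytes:
    # The chunk index depends on the challenge, so the miner must actually
    # hold the data to answer quickly.
    h = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    idx = int.from_bytes(h, "big") % len(CHUNKS)
    return hashlib.sha256(seed + CHUNKS[idx]).digest()

def mine(seed: bytes, difficulty_bits: int) -> int:
    nonce = 0
    # Grind nonces until the response hash has `difficulty_bits` leading zeros.
    while int.from_bytes(respond(seed, nonce), "big") >> (256 - difficulty_bits):
        nonce += 1
    return nonce

print(mine(b"epoch-challenge-seed", difficulty_bits=8))
```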

Incentive Design

Data in 0G Storage is divided into 8 GB “Pricing Segments.” Each segment has both a donation pool and a reward pool. Users who wish to store data pay a fee in 0G Token (ZG), which partially funds node rewards.

  • Base Reward: When a storage node submits valid PoRA proofs, it gets immediate block rewards for that segment.
  • Ongoing Reward: Over time, the donation pool releases a portion (currently ~4% per year) into the reward pool, incentivizing nodes to store data permanently. The fewer the nodes storing a particular segment, the larger the share each node can earn.

Users only pay once for permanent storage, but must set a donation fee above a system minimum. The higher the donation, the more likely miners are to replicate the user’s data.
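A back-of-the-envelope sketch of the ongoing-reward dynamic: the ~4% annual release comes from the description above, while the pool size is a made-up input. It shows why under-replicated segments pay more per node.

```python
# Ongoing-reward sketch: the donation pool releases ~4%/year into the
# reward pool (per the text); fewer replicas -> larger per-node payout.
ANNUAL_RELEASE_RATE = 0.04

def yearly_reward_per_node(donation_pool_zg: float, nodes_storing: int) -> float:
    released = donation_pool_zg * ANNUAL_RELEASE_RATE
    return released / nodes_storing

for n in (1, 4, 16):
    print(n, yearly_reward_per_node(10_000, n))  # 400.0, 100.0, 25.0 ZG/node
```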

Royalty Mechanism: 0G Storage also includes a “royalty” or “data sharing” mechanism. Early storage providers create “royalty records” for each data chunk. If new nodes want to store that same chunk, the original node can share it. When the new node later proves storage (via PoRA), the original data provider receives an ongoing royalty. The more widely replicated the data, the higher the aggregate reward for early providers.

Comparisons with Filecoin and Arweave

Similarities:

  • All three incentivize decentralized data storage.
  • Both 0G Storage and Arweave aim for permanent storage.
  • Data chunking and redundancy are standard approaches.

Key Differences:

  • Native Integration: 0G Storage is not an independent blockchain; it’s integrated directly with 0G Chain and primarily supports AI-centric use cases.
  • Structured Data: 0G supports KV-based structured data alongside unstructured data, which is critical for many AI workloads requiring frequent read-write access.
  • Cost: 0G claims $10–11/TB for permanent storage, reportedly cheaper than Arweave.
  • Performance Focus: Specifically designed to meet AI throughput demands, whereas Filecoin or Arweave are more general-purpose decentralized storage networks.

0G DA (Data Availability Layer)

Data availability ensures that every network participant can fully verify and retrieve transaction data. If the data is incomplete or withheld, the blockchain’s trust assumptions break.

In the 0G system, data is chunked and stored off-chain. The system records Merkle roots for these data chunks, and DA nodes must sample these chunks to ensure they match the Merkle root and erasure-coding commitments. Only then is the data deemed “available” and appended into the chain’s consensus state.

DA Node Selection and Incentives

  • DA nodes must stake ZG to participate.
  • They’re grouped into quorums randomly via Verifiable Random Functions (VRFs).
  • Each node only validates a subset of data. If 2/3 of a quorum confirm the data as available and correct, they sign a proof that’s aggregated and submitted to the 0G consensus network.
  • Reward distribution also happens through periodic sampling. Only the nodes storing randomly sampled chunks are eligible for that round’s rewards.
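A simplified sketch of that availability check follows. Real DA nodes verify per-chunk Merkle proofs and erasure-coding commitments on random samples; here each simulated node just recomputes the root over the chunks it received, and the quorum logic applies the 2/3 threshold.

```python
import hashlib

# Simplified DA availability check: nodes vote on whether the chunks they
# see match the claimed Merkle root; a 2/3 quorum deems the data available.

def merkle_root(chunks):
    layer = [hashlib.sha256(c).digest() for c in chunks]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate odd leaf
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

def node_confirms(received_chunks, claimed_root) -> bool:
    return merkle_root(received_chunks) == claimed_root

def quorum_available(votes) -> bool:
    return 3 * sum(votes) >= 2 * len(votes)  # 2/3 signing threshold

chunks = [f"chunk-{i}".encode() for i in range(8)]
root = merkle_root(chunks)
votes = [node_confirms(chunks, root) for _ in range(9)]
print(quorum_available(votes))  # True: data deemed available
```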

Comparison with Celestia and EigenLayer

0G DA draws on ideas from Celestia (data availability sampling) and EigenLayer (restaking) but aims to provide higher throughput. Celestia’s throughput currently hovers around 10 MB/s with ~12-second block times. Meanwhile, EigenDA primarily serves Layer2 solutions and can be complex to implement. 0G envisions GB/s throughput, which better suits large-scale AI workloads that can exceed 50–100 GB/s of data ingestion.

0G Compute Network

0G Compute Network serves as the decentralized computing layer. It’s evolving in phases:

  • Phase 1: Focus on settlement for AI inference.
  • The network matches “AI model buyers” (users) with compute providers (sellers) in a decentralized marketplace. Providers register their services and prices in a smart contract. Users pre-fund the contract, consume the service, and the contract mediates payment.
  • Over time, the team hopes to expand to full-blown AI training on-chain, though that’s more complex.

Batch Processing: Providers can batch user requests to reduce on-chain overhead, improving efficiency and lowering costs.
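A minimal sketch of the pre-fund, consume, and settle flow, including batched settlement, is shown below. The contract shape, names, and fields are hypothetical; 0G’s actual contracts are not reproduced here.

```python
# Hypothetical marketplace sketch: providers register prices, users
# pre-fund an escrow, and served requests settle in batches to cut
# on-chain overhead (as the text describes).

class ComputeMarket:
    def __init__(self):
        self.prices = {}    # provider -> price per request (ZG)
        self.balances = {}  # user -> pre-funded escrow balance

    def register(self, provider: str, price: float):
        self.prices[provider] = price

    def deposit(self, user: str, amount: float):
        self.balances[user] = self.balances.get(user, 0.0) + amount

    def settle_batch(self, user: str, provider: str, n_requests: int) -> float:
        cost = self.prices[provider] * n_requests
        assert self.balances[user] >= cost, "insufficient escrow"
        self.balances[user] -= cost
        return cost  # amount paid out to the provider

m = ComputeMarket()
m.register("gpu-provider-1", price=0.002)
m.deposit("alice", 10.0)
paid = m.settle_batch("alice", "gpu-provider-1", n_requests=500)  # 1.0 ZG
```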

0G Chain

0G Chain is a Layer1 network serving as the foundation for 0G’s modular architecture. It underpins:

  • 0G Storage (via smart contracts)
  • 0G DA (data availability proofs)
  • 0G Compute (settlement mechanisms)

Per official docs, 0G Chain is EVM-compatible, enabling easy integration for dApps that require advanced data storage, availability, or compute.

0G Consensus Network

0G’s consensus design is distinctive. Rather than a single monolithic consensus layer, multiple independent consensus networks can be launched under 0G to handle different workloads. These networks share the same staking base:

  • Shared Staking: Validators stake ZG on Ethereum. If a validator misbehaves, their staked ZG on Ethereum can be slashed.
  • Scalability: New consensus networks can be spun up to scale horizontally.

Reward Mechanism: When validators finalize blocks in the 0G environment, they receive tokens. However, the tokens they earn on 0G Chain are burned in the local environment, and the validator’s Ethereum-based account is minted an equivalent amount, ensuring a single point of liquidity and security.
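The burn-on-0G, mint-on-Ethereum accounting can be sketched as a pair of ledgers. This is a simplification of the described mechanism; the real system involves bridge messages and Ethereum-side slashing conditions.

```python
# Two-ledger sketch of the reward bridging described above: rewards earned
# in the 0G environment are burned locally and minted on Ethereum, keeping
# liquidity unified in one place.

zg_chain = {"validator1": 0.0}  # rewards earned in the 0G environment
ethereum = {"validator1": 0.0}  # canonical ZG balances on Ethereum

def finalize_block(validator: str, reward: float):
    zg_chain[validator] += reward

def bridge_rewards(validator: str):
    amount = zg_chain[validator]
    zg_chain[validator] = 0.0      # burn in the local environment
    ethereum[validator] += amount  # mint the equivalent on Ethereum

finalize_block("validator1", 5.0)
bridge_rewards("validator1")
print(ethereum["validator1"])      # 5.0 ZG, now liquid on Ethereum
```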

0G Token (ZG)

ZG is an ERC-20 token that forms the backbone of 0G’s economy. It’s minted, burned, and circulated via smart contracts on Ethereum. In practical terms:

  • Users pay for storage, data availability, and compute resources in ZG.
  • Miners and validators earn ZG for proving storage or validating data.
  • Shared staking ties the security model back to Ethereum.

Summary of Key Modules

0G OS merges four components—Storage, DA, Compute, and Chain—into one interconnected, modular stack. The system’s design goal is scalability, with each layer horizontally extensible. The team touts the potential for “infinite” throughput, especially crucial for large-scale AI tasks.

0G Ecosystem

Although relatively new, the 0G ecosystem already includes key integration partners:

  1. Infrastructure & Tooling:

    • ZK solutions like Union, Brevis, Gevulot
    • Cross-chain solutions like Axelar
    • Restaking protocols like EigenLayer, Babylon, PingPong
    • Decentralized GPU providers like io.net and exaBits
    • Oracle solutions like Hemera and Redstone
    • Indexing tools for Ethereum blob data
  2. Projects Using 0G for Data Storage & DA:

    • Polygon, Optimism (OP), Arbitrum, Manta for L2 / L3 integration
    • Nodekit, AltLayer for Web3 infrastructure
    • Blade Games, Shrapnel for on-chain gaming

Supply Side

ZK and Cross-chain frameworks connect 0G to external networks. Restaking solutions (e.g., EigenLayer, Babylon) strengthen security and possibly attract liquidity. GPU networks accelerate erasure coding. Oracle solutions feed off-chain data or reference AI model pricing.

Demand Side

AI Agents can tap 0G for both data storage and inference. L2s and L3s can integrate 0G’s DA to improve throughput. Gaming and other dApps requiring robust data solutions can store assets, logs, or scoring systems on 0G. Some have already partnered with the project, pointing to early ecosystem traction.

Roadmap & Risk Factors

0G aims to make AI a public utility, accessible and verifiable by anyone. The team aspires to GB/s-level DA throughput—crucial for real-time AI training that can demand 50–100 GB/s of data transfer.

Co-founder & CEO Michael Heinrich has stated that the explosive growth of AI makes timely iteration critical. The pace of AI innovation is fast; 0G’s own dev progress must keep up.

Potential Trade-Offs:

  • Current reliance on shared staking might be an intermediate solution. Eventually, 0G plans to introduce a horizontally scalable consensus layer that can be incrementally augmented (akin to spinning up new AWS nodes).
  • Market Competition: Many specialized solutions exist for decentralized storage, data availability, and compute. 0G’s all-in-one approach must stay compelling.
  • Adoption & Ecosystem Growth: Without robust developer traction, the promised “unlimited throughput” remains theoretical.
  • Sustainability of Incentives: Ongoing motivation for nodes depends on real user demand and an equilibrium token economy.

Conclusion

0G attempts to unify decentralized storage, data availability, and compute into a single “operating system” supporting on-chain AI. By targeting GB/s throughput, the team seeks to break the performance barrier that currently deters large-scale AI from migrating on-chain. If successful, 0G could significantly accelerate the Web3 AI wave by providing a scalable, integrated, and developer-friendly infrastructure.

Still, many open questions remain. The viability of “infinite throughput” hinges on whether 0G’s modular consensus and incentive structures can seamlessly scale. External factors—market demand, node uptime, developer adoption—will also determine 0G’s staying power. Nonetheless, 0G’s approach to addressing AI’s data bottlenecks is novel and ambitious, hinting at a promising new paradigm for on-chain AI.

Decentralized Physical Infrastructure Networks (DePIN): Economics, Incentives, and the AI Compute Era

· 47 min read
Dora Noda
Software Engineer

Introduction

Decentralized Physical Infrastructure Networks (DePIN) are blockchain-based projects that incentivize people to deploy real-world hardware in exchange for crypto tokens. By leveraging idle or underutilized resources – from wireless radios to hard drives and GPUs – DePIN projects create crowdsourced networks providing tangible services (connectivity, storage, computing, etc.). This model transforms normally idle infrastructure (like unused bandwidth, disk space, or GPU power) into active, income-generating networks by rewarding contributors with tokens. Major early examples include Helium (crowdsourced wireless networks) and Filecoin (distributed data storage), and newer entrants target GPU computing and 5G coverage sharing (e.g. Render Network, Akash, io.net).

DePIN’s promise lies in distributing the costs of building and operating physical networks via token incentives, thus scaling networks faster than traditional centralized models. In practice, however, these projects must carefully design economic models to ensure that token incentives translate into real service usage and sustainable value. Below, we analyze the economic models of key DePIN networks, evaluate how effectively token rewards have driven actual infrastructure use, and assess how these projects are coupling with the booming demand for AI-related compute.

Economic Models of Leading DePIN Projects

Helium (Decentralized Wireless IoT & 5G)

Helium pioneered a decentralized wireless network by incentivizing individuals to deploy radio hotspots. Initially focused on IoT (LoRaWAN) and later expanded to 5G small-cell coverage, Helium’s model centers on its native token HNT. Hotspot operators earn HNT by participating in Proof-of-Coverage (PoC) – essentially proving they are providing wireless coverage in a given location. In Helium’s two-token system, HNT has utility through Data Credits (DC): users must burn HNT to mint non-transferable DC, which are used to pay for actual network usage (device connectivity) at a fixed rate of $0.00001 per 24-byte message. This burn mechanism creates a burn-and-mint equilibrium where increased network usage (DC spending) leads to more HNT being burned, reducing supply over time.
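To see the burn mechanics in numbers, here is a small worked example. The $0.00001-per-24-byte Data Credit rate is from the description above; the HNT price is a made-up input.

```python
# Worked burn-and-mint example (HNT price is hypothetical; the DC rate
# follows the text: one $0.00001 Data Credit pays for a 24-byte message).
DC_PRICE_USD = 0.00001

def hnt_burned(messages: int, bytes_per_msg: int, hnt_price_usd: float) -> float:
    dcs = messages * -(-bytes_per_msg // 24)  # ceil to 24-byte units
    return dcs * DC_PRICE_USD / hnt_price_usd

# 10M device messages of 24 bytes each, at a hypothetical $5/HNT:
print(hnt_burned(10_000_000, 24, 5.0))  # $100 of DC -> 20 HNT burned
```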

Originally, Helium operated on its own blockchain with an inflationary issuance of HNT that halved every two years (yielding gradually decreasing issuance and an eventual cap of ~223 million HNT in circulation). In 2023, Helium migrated to Solana and introduced a “network of networks” framework with sub-DAOs. Now, Helium’s IoT network and 5G mobile network each have their own tokens (IOT and MOBILE respectively) rewarded to hotspot operators, while HNT remains the central token for governance and value. HNT can be redeemed for subDAO tokens (and vice versa) via treasury pools, and HNT is also used for staking in Helium’s veHNT governance model. This structure aims to align incentives in each sub-network: for example, 5G hotspot operators earn MOBILE tokens, which can be converted to HNT, effectively tying rewards to the success of that specific service.

Economic value creation: Helium’s value is created by providing low-cost wireless access. By distributing token rewards, Helium offloaded the capex of network deployment onto individuals who purchased and ran hotspots. In theory, as businesses and IoT devices use the network (by spending DC that require burning HNT), that demand should support HNT’s value and fund ongoing rewards. Helium sustains its economy through a burn-and-spend cycle: network users buy HNT (or use HNT rewards) and burn it for DC to use the network, and the protocol mints HNT (according to a fixed schedule) to pay hotspot providers. In Helium’s design, a portion of HNT emissions was also allocated to founders and a community reserve, but the majority has always been for hotspot operators as an incentive to build coverage. As discussed later, Helium’s challenge has been getting enough paying demand to balance the generous supply-side incentives.

Filecoin (Decentralized Storage Network)

Filecoin is a decentralized storage marketplace where anyone can contribute disk space and earn tokens for storing data. Its economic model is built around the FIL token. Filecoin’s blockchain rewards storage providers (miners) with FIL block rewards for provisioning storage and correctly storing clients’ data – using cryptographic proofs (Proof-of-Replication and Proof-of-Spacetime) to verify data is stored reliably. Clients, in turn, pay FIL to miners to have their data stored or retrieved, negotiating prices in an open market. This creates an incentive loop: miners invest in hardware and stake FIL collateral (to guarantee service quality), earning FIL rewards for adding storage capacity and fulfilling storage deals, while clients spend FIL for storage services.

Filecoin’s token distribution is heavily weighted toward incentivizing storage supply. FIL has a maximum supply of 2 billion, with 70% reserved for mining rewards. (In fact, ~1.4 billion FIL are allocated to be released over time as block rewards to storage miners over many years.) The remaining 30% was allocated to stakeholders: 15% to Protocol Labs (the founding team), 10% to investors, and 5% to the Filecoin Foundation. Block reward emissions follow a somewhat front-loaded schedule (with a six-year half-life), meaning supply inflation was highest in the early years to quickly bootstrap a large storage network. To balance this, Filecoin requires miners to lock up FIL as collateral for each gigabyte of data they pledge to store – if they fail to prove the data is retained, they can be penalized (slashed) by losing some collateral. This mechanism aligns miner incentives with reliable service.

Economic value creation: Filecoin creates value by offering censorship-resistant, redundant data storage at potentially lower costs than centralized cloud providers. The FIL token’s value is tied to demand for storage and the utility of the network: clients must obtain FIL to pay for storing data, and miners need FIL (both for collateral and often to cover costs or as revenue). Initially, much of Filecoin’s activity was driven by miners racing to earn tokens – even storing zero-value or duplicated data just to increase their storage power and earn block rewards. To encourage useful storage, Filecoin introduced the Filecoin Plus program: clients with verified useful data (e.g. open datasets, archives) can register deals as “verified,” which gives miners 10× the effective power for those deals, translating into proportionally larger FIL rewards. This has incentivized miners to seek out real clients and has dramatically increased useful data stored on the network. By late 2023, Filecoin’s network had grown to about 1,800 PiB of active deals, up 3.8× year-over-year, with storage utilization rising to ~20% of total capacity (from only ~3% at the start of 2023). In other words, token incentives bootstrapped enormous capacity, and now a growing fraction of that capacity is being filled by paying customers – a sign of the model beginning to sustain itself with real demand. Filecoin is also expanding into adjacent services (see AI Compute Trends below), which could create new revenue streams (e.g. decentralized content delivery and compute-over-data services) to bolster the FIL economy beyond simple storage fees.
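The Filecoin Plus effect on miner economics can be sketched with quality-adjusted power. The 10× multiplier for verified deals is from the description above; the proportional reward split is a simplification of Filecoin’s actual reward math.

```python
# Quality-adjusted power sketch: verified (Filecoin Plus) deals count 10x,
# and rewards here are split in proportion to QA power (simplified).

def qa_power(raw_tib: float, verified_tib: float) -> float:
    return (raw_tib - verified_tib) + 10 * verified_tib

miners = {"a": qa_power(100, 0), "b": qa_power(100, 50)}  # b serves FIL+ deals
total = sum(miners.values())
shares = {m: round(p / total, 3) for m, p in miners.items()}
print(shares)  # b earns ~5.5x a's share for the same raw capacity
```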

Render Network (Decentralized GPU Rendering & Compute)

Render Network is a decentralized marketplace for GPU-based computation, originally focused on rendering 3D graphics and now also supporting AI model training/inference jobs. Its native token RNDR (recently updated to the ticker RENDER on Solana) powers the economy. Creators (users who need GPU work done) pay in RNDR for rendering or compute tasks, and Node Operators (GPU providers) earn RNDR by completing those jobs. This basic model turns idle GPUs (from individual GPU owners or data centers) into a distributed cloud rendering farm. To ensure quality and fairness, Render uses escrow smart contracts: clients submit jobs and burn the equivalent RNDR payment, which is held until node operators submit proof of completing the work, then the RNDR is released as reward. Originally, RNDR functioned as a pure utility/payment token, but the network has recently overhauled its tokenomics to a Burn-and-Mint Equilibrium (BME) model to better balance supply and demand.

Under the BME model, all rendering or compute jobs are priced in stable terms (USD) and paid in RENDER tokens, which are burned upon job completion. In parallel, the protocol mints new RENDER tokens on a predefined declining emissions schedule to compensate node operators and other participants. In effect, user payments for work destroy tokens while the network inflates tokens at a controlled rate as mining rewards – the net supply can increase or decrease over time depending on usage. The community approved an initial emission of ~9.1 million RENDER in the first year of BME (mid-2023 to mid-2024) as network incentives, and set a long-term max supply of about 644 million RENDER (up from the initial 536.9 million RNDR that were minted at launch). Notably, RENDER’s token distribution heavily favored ecosystem growth: 65% of the initial supply was allocated to a treasury (for future network incentives), 25% to investors, and 10% to team/advisors. With BME, that treasury is being deployed via the controlled emissions to reward GPU providers and other contributors, while the burn mechanism ties those rewards directly to platform usage. RNDR also serves as a governance token (token holders can vote on Render Network proposals). Additionally, node operators on Render can stake RNDR to signal their reliability and potentially receive more work, adding another incentive layer.
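A toy epoch ledger makes the equilibrium visible: user payments (priced in USD) burn tokens, while a scheduled emission mints new ones, so net supply can rise or fall with usage. All figures below are hypothetical.

```python
# Toy burn-and-mint-equilibrium ledger for one epoch. Only the mechanism
# (burn user payments, mint a scheduled emission) follows the text; the
# numbers are made up.

def bme_epoch(supply: float, usd_paid_for_jobs: float,
              render_price_usd: float, scheduled_emission: float) -> float:
    burned = usd_paid_for_jobs / render_price_usd  # jobs are priced in USD
    return supply - burned + scheduled_emission

s = 536_900_000.0
s = bme_epoch(s, usd_paid_for_jobs=400_000, render_price_usd=2.0,
              scheduled_emission=175_000)
# burned 200k > minted 175k -> net supply fell this epoch (deflationary)
print(s)
```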

Economic value creation: Render Network creates value by supplying on-demand GPU computing at a fraction of the cost of traditional cloud GPU instances. By late 2023, Render’s founder noted that studios had already used the network to render movie-quality graphics with significant cost and speed advantages – “one tenth the cost” and with massive aggregated capacity beyond any single cloud provider. This cost advantage is possible because Render taps into dormant GPUs globally (from hobbyist rigs to pro render farms) that would otherwise be idle. With rising demand for GPU time (for both graphics and AI), Render’s marketplace meets a critical need. Crucially, the BME token model means token value is directly linked to service usage: as more rendering and AI jobs flow through the network, more RENDER is burned (creating buy pressure or reducing supply), while node incentives scale up only as those jobs are completed. This helps avoid “paying for nothing” – if network usage stagnates, the token emissions eventually outpace burns (inflating supply), but if usage grows, the burns can offset or even exceed emissions, potentially making the token deflationary while still rewarding operators. The strong interest in Render’s model was reflected in the market: RNDR’s price rocketed in 2023, rising over 1,000% in value as investors anticipated surging demand for decentralized GPU services amid the AI boom. Backed by OTOY (a leader in cloud rendering software) and used in production by some major studios, Render Network is positioned as a key player at the intersection of Web3 and high-performance computing.

Akash Network (Decentralized Cloud Compute)

Akash is a decentralized cloud computing marketplace that enables users to rent general compute (VMs, containers, etc.) from providers with spare server capacity. Think of it as a decentralized alternative to AWS or Google Cloud, powered by a blockchain-based reverse auction system. The native token AKT is central to Akash’s economy: clients pay for compute leases in AKT, and providers earn AKT for supplying resources. Akash is built on the Cosmos SDK and uses a delegated Proof-of-Stake blockchain for security and coordination. AKT thus also functions as a staking and governance token – validators stake AKT (and users delegate AKT to validators) to secure the network and earn staking rewards.

Akash’s marketplace operates via a bidding system: a client defines a deployment (CPU, RAM, storage, possibly GPU requirements) and a max price, and multiple providers can bid to host it, driving the price down. Once the client accepts a bid, a lease is formed and the workload runs on the chosen provider’s infrastructure. Payments for leases are handled by the blockchain: the client escrows AKT and it streams to the provider over time for as long as the deployment is active. Uniquely, the Akash network charges a protocol “take rate” fee on each lease to fund the ecosystem and reward AKT stakers: 10% of the lease amount if paid in AKT (or 20% if paid in another currency) is diverted as fees to the network treasury and stakers. This means AKT stakers earn a portion of all usage, aligning the token’s value with actual demand on the platform. To improve usability for mainstream users, Akash has integrated stablecoin and credit card payments (via its console app): a client can pay in USD stablecoin, which under the hood is converted to AKT (with a higher fee rate). This reduces the volatility risk for users while still driving value to the AKT token (since those stablecoin payments ultimately result in AKT being bought/burned or distributed to stakers).

On the supply side, AKT’s tokenomics are designed to incentivize long-term participation. Akash began with 100 million AKT at genesis and has a max supply of 389 million via inflation. The inflation rate is adaptive based on the proportion of AKT staked: it targets 20–25% annual inflation if the staking ratio is low, and around 15% if a high percentage of AKT is staked. This adaptive inflation (a common design in Cosmos-based chains) encourages holders to stake (contributing to network security) by rewarding them more when staking participation is low. Block rewards from inflation pay validators and delegators, as well as funding a reserve for ecosystem growth. AKT’s initial distribution set aside allocations for investors, the core team (Overclock Labs), and a foundation pool for ecosystem incentives (e.g. an early program in 2024 funded GPU providers to join).
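Two of these AKT mechanics are easy to sketch: the protocol take rate on leases and staking-ratio-dependent inflation. The 10%/20% fee split and the ~15–25% inflation band come from the description above; the linear interpolation is an assumed simplification of the real curve.

```python
# AKT mechanics sketch: take-rate fee on leases plus adaptive inflation
# that falls as the staking ratio rises (interpolation shape is assumed).

def protocol_fee(lease_amount: float, paid_in_akt: bool) -> float:
    return lease_amount * (0.10 if paid_in_akt else 0.20)

def inflation_rate(staked_ratio: float,
                   low: float = 0.15, high: float = 0.25) -> float:
    # Low staking participation -> higher inflation to attract stakers.
    return high - (high - low) * min(max(staked_ratio, 0.0), 1.0)

print(protocol_fee(100.0, paid_in_akt=True))  # 10.0 AKT routed to stakers
print(inflation_rate(0.30))                   # 0.22 -> 22% annual inflation
```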

Economic value creation: Akash creates value by offering cloud computing at potentially much lower costs than incumbent cloud providers, leveraging underutilized servers around the world. By decentralizing the cloud, it also aims to fill regional gaps and reduce reliance on a few big tech companies. The AKT token accrues value from multiple angles: demand-side fees (more workloads = more AKT fees flowing to stakers), supply-side needs (providers may hold or stake earnings, and need to stake some AKT as collateral for providing services), and general network growth (AKT is needed for governance and as a reserve currency in the ecosystem). Importantly, as more real workloads run on Akash, the proportion of AKT in circulation that is used for staking and fee deposits should increase, reflecting real utility. Initially, Akash saw modest usage for web services and crypto infrastructure hosting, but in late 2023 it expanded support for GPU workloads – making it possible to run AI training, machine learning, and high-performance compute jobs on the network. This has significantly boosted Akash’s usage in 2024. By Q3 2024, the network’s metrics showed explosive growth: the number of active deployments (“leases”) grew 1,729% year-on-year, and the average fee per lease (a proxy for complexity of workloads) rose 688%. In practice, this means users are deploying far more applications on Akash and are willing to run larger, longer workloads (many involving GPUs) – evidence that token incentives have attracted real paying demand. Akash’s team reported that by the end of 2024, the network had over 700 GPUs online with ~78% utilization (i.e. ~78% of GPU capacity rented out at any time). This is a strong signal of efficient token incentive conversion (see next section). The built-in fee-sharing model also means that as this usage grows, AKT stakers receive protocol revenue, effectively tying token rewards to actual service revenue – a healthier long-term economic design.

io.net (Decentralized GPU Cloud for AI)

io.net is a newer entrant (built on Solana) aiming to become the “world’s largest GPU network” specifically geared toward AI and machine learning workloads. Its economic model draws lessons from earlier projects like Render and Akash. The native token IO has a fixed maximum supply of 800 million. At launch, 500 million IO were pre-minted and allocated to various stakeholders, and the remaining 300 million IO are being emitted as mining rewards over a 20-year period (distributed hourly to GPU providers and stakers). Notably, io.net implements a revenue-based burn mechanism: a portion of network fees/revenue is used to burn IO tokens, directly tying token supply to platform usage. This combination – a capped supply with time-released emissions and a burn driven by usage – is intended to ensure long-term sustainability of the token economy.

To join the network as a GPU node, providers are required to stake a minimum amount of IO as collateral. This serves two purposes: it deters malicious or low-quality nodes (as they have “skin in the game”), and it reduces immediate sell pressure from reward tokens (since nodes must lock up some tokens to participate). Stakers (which can include both providers and other participants) also earn a share of network rewards, aligning incentives across the ecosystem. On the demand side, customers (AI developers, etc.) pay for GPU compute on io.net, presumably in IO tokens or possibly stable equivalents – the project claims to offer cloud GPU power at up to 90% lower cost than traditional providers like AWS. These usage fees drive the burn mechanism: as revenue flows in, a portion of tokens get burned, linking platform success to token scarcity.
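A rough sketch of the capped-emission-plus-revenue-burn design: the 300M IO emitted over ~20 years and the revenue-driven burn come from the description above, while the flat hourly schedule and the burn fraction are assumptions for illustration.

```python
# Capped-emission vs. revenue-burn sketch. The 300M/20-year emission and
# the revenue burn are from the text; the flat hourly schedule and 50%
# burn fraction are illustrative assumptions.

TOTAL_EMISSION = 300_000_000
HOURS_20_YEARS = 20 * 365 * 24

def hourly_emission() -> float:
    return TOTAL_EMISSION / HOURS_20_YEARS  # assume a flat schedule

def hourly_supply_delta(revenue_usd: float, io_price_usd: float,
                        burn_fraction: float = 0.5) -> float:
    burned = revenue_usd * burn_fraction / io_price_usd
    return hourly_emission() - burned

print(round(hourly_emission()))               # ~1712 IO emitted per hour
print(round(hourly_supply_delta(5_000, 2.0))) # ~462 IO net hourly inflation
```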

Economic value creation: io.net’s value proposition is aggregating GPU power from many sources (data centers, crypto miners repurposing mining rigs, etc.) into a single network that can deliver on-demand compute for AI at massive scale. By aiming to onboard over 1 million GPUs globally, io.net seeks to out-scale any single cloud and meet the surging demand for AI model training and inference. The IO token captures value through a blend of mechanisms: supply is limited (so token value can grow if demand for network services grows), usage burns tokens (directly creating value feedback to the token from service revenue), and token rewards bootstrap supply (gradually distributing tokens to those who contribute GPUs, ensuring the network grows). In essence, io.net’s economic model is a refined DePIN approach where supply-side incentives (hourly IO emissions) are substantial but finite, and they are counter-balanced by token sinks (burns) that scale with actual usage. This is designed to avoid the trap of excessive inflation with no demand. As we will see, the AI compute trend provides a large and growing market for networks like io.net to tap into, which could drive the desired equilibrium where token incentives lead to robust service usage. (io.net is still emerging, so its real-world metrics remain to be proven, but its design clearly targets the AI compute sector’s needs.)

Table 1: Key Economic Model Features of Selected DePIN Projects

| Project | Sector | Token (Ticker) | Supply & Distribution | Incentive Mechanism | Token Utility & Value Flow |
| --- | --- | --- | --- | --- | --- |
| Helium | Decentralized Wireless (IoT & 5G) | Helium Network Token (HNT); plus sub-tokens IOT & MOBILE | Variable supply, decreasing issuance: HNT emissions halved every ~2 years (as of original blockchain), targeting ~223M HNT in circulation after 50 years. Migrated to Solana with 2 new sub-tokens: IOT and MOBILE rewarded to IoT and 5G hotspot owners. | Proof-of-Coverage mining: Hotspots earn IOT or MOBILE tokens for providing coverage (LoRaWAN or 5G). Those sub-tokens can be converted to HNT via treasury pools. HNT is staked for governance (veHNT) and is the basis for rewards across networks. | Network usage via Data Credits: HNT is burned to create Data Credits (DC) for device connectivity (fixed price $0.00001 per 24-byte message). All network fees (DC purchases) effectively burn HNT (reducing supply). Token value thus ties to demand for IoT/Mobile data transfer. HNT’s value also backs the subDAO tokens (giving them convertibility to a scarce asset). |
| Filecoin | Decentralized Storage | Filecoin (FIL) | Capped supply 2 billion: 70% allocated to storage mining rewards (released over decades); ~30% to Protocol Labs, investors, and foundation. Block rewards follow a six-year half-life (higher inflation early, tapering later). | Storage mining: Storage providers earn FIL block rewards proportional to proven storage contributed. Clients pay FIL for storing or retrieving data. Miners put up FIL collateral that can be slashed for failure. Filecoin Plus gives 10× power reward for “useful” client data to incentivize real storage. | Payment & collateral: FIL is the currency for storage deals – clients spend FIL to store data, creating organic demand for the token. Miners lock FIL as collateral (temporarily reducing circulating supply) and earn FIL for useful service. As usage grows, more FIL gets tied up in deals and collateral. Network fees (for transactions) are minimal (Filecoin focuses on storage fees which go to miners). Long term, FIL value depends on data storage demand and emerging use cases (e.g. Filecoin Virtual Machine enabling smart contracts for data, potentially generating new fee sinks). |
| Render Network | Decentralized GPU Compute (Rendering & AI) | Render Token (RNDR / RENDER) | Initial supply ~536.9M RNDR, increased to max ~644M via new emissions. Burn-and-Mint Equilibrium: new RENDER emitted on a fixed schedule (20% inflation pool over ~5 years, then tail emissions). Emissions fund network incentives (node rewards, etc.). Burning: users’ payments in RENDER are burned for each completed job. Distribution: 65% treasury (network ops and rewards), 25% investors, 10% team/advisors. | Marketplace for GPU work: Node operators do rendering/compute tasks and earn RENDER. Jobs are priced in USD but paid in RENDER; the required tokens are burned when the work is done. In each epoch (e.g. weekly), new RENDER is minted and distributed to node operators based on the work they completed. Node operators can also stake RNDR for higher trust and potential job priority. | RENDER is the fee token for GPU services – content creators and AI developers must acquire and spend it to get work done. Because those tokens are burned, usage directly reduces supply. New token issuance compensates workers, but on a declining schedule. If network demand is high (burn > emission), RENDER becomes deflationary; if demand is low, inflation may exceed burns (incentivizing more supply until demand catches up). RENDER also governs the network. The token’s value is thus closely linked to platform usage – in fact, RNDR rallied ~10× in 2023 as AI-driven demand for GPU compute skyrocketed, indicating market confidence that usage (and burns) will be high. |
| Akash Network | Decentralized Cloud (general compute & GPU) | Akash Token (AKT) | Initial supply 100M; max supply 389M. Inflationary PoS token: adaptive inflation ~15–25% annually (dropping as staking % rises) to incentivize staking. Ongoing emissions pay validators and delegators. Distribution: 34.5% investors, 27% team, 19.7% foundation, 8% ecosystem, 5% testnet (with lock-ups/vesting). | Reverse-auction marketplace: Providers bid to host deployments; clients pay in AKT for leases. Fee pool: 10% of AKT payments (or 20% of payments in other tokens) goes to the network (stakers) as a protocol fee. Akash uses a Proof-of-Stake chain – validators stake AKT to secure the network and earn block rewards. Clients can pay via AKT or integrated stablecoins (with conversion). | AKT is used for all transactions (either directly or via conversion from stable payments). Clients buy AKT to pay for compute leases, creating demand as network usage grows. Providers earn AKT and can sell or stake it. Staking rewards + fee revenue: holding and staking AKT yields rewards from inflation and a share of all fees, so active network usage benefits stakers directly. This model aligns token value with cloud demand: as more CPU/GPU workloads run on Akash, more fees in AKT flow to holders (and more AKT might be locked as collateral or staked by providers). Governance is also via AKT holdings. Overall, the token’s health improves with higher utilization and has inflation controls to encourage long-term participation. |
| io.net | Decentralized GPU Cloud (AI-focused) | IO Token (IO) | Fixed cap 800M IO: 500M pre-minted (allocated to team, investors, community, etc.), 300M emitted over ~20 years as mining rewards (hourly distribution). No further inflation after that cap. Built-in burn: network revenue triggers token burns to reduce supply. Staking: providers must stake a minimum IO to participate (and can stake more for rewards). | GPU sharing network: Hardware providers (data centers, miners) connect GPUs and earn IO rewards continuously (hourly) for contributing capacity. They also earn fees from customers’ usage. Staking requirement: operators stake IO as collateral to ensure good behavior. Users likely pay in IO (or in stable converted to IO) for AI compute tasks; a portion of every fee is burned by the protocol. | IO is the medium of exchange for GPU compute power on the network, and also the security token that operators stake. Token value is driven by a trifecta: (1) Demand for AI compute – clients must acquire IO to pay for jobs, and higher usage means more tokens burned (reducing supply). (2) Mining incentives – new IO distributed to GPU providers motivates network growth, but the fixed cap limits long-term inflation. (3) Staking – IO is locked up by providers (and possibly users or delegates) to earn rewards, reducing liquid supply and aligning participants with network success. In sum, io.net’s token model is designed so that if it successfully attracts AI workloads at scale, token supply becomes increasingly scarce (through burns and staking), benefiting holders. The fixed supply also imposes discipline, preventing endless inflation and aiming for a sustainable “reward-for-revenue” balance. |

Sources: Official documentation and research for each project (see inline citations above).

Token Incentives vs. Real-World Service Usage

A critical question for DePIN projects is how effectively token incentives convert into real service provisioning and actual usage of the network. In the initial stages, many DePIN protocols emphasized bootstrapping supply (hardware deployment) through generous token rewards, even if demand was minimal – a “build it and (hopefully) they will come” strategy. This led to situations where the network’s market cap and token emissions far outpaced the revenue from customers. As of late 2024, the entire DePIN sector (~350 projects) had a combined market cap of ~$50 billion, yet generated only about ~$0.5 billion annualized revenue – an aggregate valuation of ~100× annual revenue. Such a gap underscores the inefficiency in early stages. However, recent trends show improvements as networks shift from purely supply-driven growth to demand-driven adoption, especially propelled by the surge in AI compute needs.

Below we evaluate each example project’s token incentive efficiency, looking at usage metrics versus token outlays:

  • Helium: Helium’s IoT network grew explosively in 2021–2022, with nearly 1 million hotspots deployed globally for LoRaWAN coverage. This growth was almost entirely driven by the HNT mining incentives and crypto enthusiasm – not by customer demand for IoT data, which remained low. By mid-2022, it became clear that Helium’s data traffic (devices actually using the network) was minuscule relative to the enormous supply-side investment. One analysis in 2022 noted that less than $1,000 of tokens were burned for data usage per month, even as the network was minting tens of millions of dollars worth of HNT for hotspot rewards – a stark imbalance (essentially, <1% of token emission was being offset by network usage). In late 2022 and 2023, HNT token rewards underwent scheduled halvings (reducing issuance), but usage was still lagging. An example from November 2023: the dollar value of Helium Data Credits burned was only about $156 for that day – whereas the network was still paying out an estimated $55,000 per day in token rewards to hotspot owners (valued in USD). In other words, that day’s token incentive “cost” outweighed actual network usage by a factor of 350:1. This illustrates the poor incentive-to-usage conversion in Helium’s early IoT phase. Helium’s founders recognized this “chicken-and-egg” dilemma: a network needs coverage before it can attract users, but without users the coverage is hard to monetize.

    There are signs of improvement. In late 2023, Helium activated its 5G Mobile network with a consumer-facing cell service (backed by T-Mobile roaming) and began rewarding 5G hotspot operators in MOBILE tokens. The launch of Helium Mobile (5G) quickly brought in paying users (e.g. subscribers to Helium’s $20/month unlimited mobile plan) and new types of network usage. Within weeks, Helium’s network usage jumped – by early 2024, the daily Data Credit burn reached ~$4,300 (up from almost nothing a couple months prior). Moreover, 92% of all Data Credits consumed were from the Mobile network (5G) as of Q1 2024, meaning the 5G service immediately dwarfed the IoT usage. While $4.3k/day is still modest in absolute terms (~$1.6 million annualized), it represents a meaningful step toward real revenue. Helium’s token model is adapting: by isolating the IoT and Mobile networks into separate reward tokens, it ensures that the 5G rewards (MOBILE tokens) will scale down if 5G usage doesn’t materialize, and similarly for IOT tokens – effectively containing the inefficiency. Helium Mobile’s growth also showed the power of coupling token incentives with a service of immediate consumer interest (cheap cellular data). Within 6 months of launch, Helium had ~93,000 MOBILE hotspots deployed in the US (alongside ~1 million IoT hotspots worldwide), and had struck partnerships (e.g. with Telefónica) to expand coverage. The challenge ahead is to substantially grow the user base (both IoT device clients and 5G subscribers) so that burning of HNT for Data Credits approaches the scale of HNT issuance. In summary, Helium started with an extreme supply surplus (and correspondingly overvalued token), but its pivot toward demand (5G, and positioning as an “infrastructure layer” for other networks) is gradually improving the efficiency of its token incentives.

  • Filecoin: In Filecoin’s case, the imbalance was between storage capacity and actual stored data. Token incentives led to an overabundance of supply: at its peak, the Filecoin network had well over 15 exbibytes (EiB) of raw storage capacity pledged by miners, yet for a long time only a few percent of that was utilized by real data. Much of the space was filled with dummy data (miners could even seal random garbage data to satisfy proof requirements) just so they could earn FIL rewards. This meant a lot of FIL was being minted and awarded for storage that wasn’t actually demanded by users. However, over 2022–2023 the network made big strides in driving demand. Through initiatives like Filecoin Plus and aggressive onboarding of open datasets, the utilization rate climbed from ~3% to over 20% of capacity in 2023. By Q4 2024, Filecoin’s storage utilization had further risen to ~30% – meaning nearly one-third of the enormous capacity was holding real client data. This is still far from 100%, but the trend is positive: token rewards are increasingly going toward useful storage rather than empty padding. Another measure: as of Q1 2024, about 1,900 PiB (1.9 EiB) of data was stored in active deals on Filecoin, a 200% year-over-year increase. Notably, the majority of new deals now come via Filecoin Plus (verified clients), indicating miners strongly prefer to devote space to data that earns them bonus reward multipliers.

    In terms of economic efficiency, Filecoin’s protocol also experienced a shift: initially, protocol “revenue” (fees paid by users) was negligible compared to mining rewards (which some analyses treated as revenue, inflating early figures). For example, in 2021, Filecoin’s block rewards were worth hundreds of millions of dollars (at high FIL prices), but actual storage fees were tiny; in 2022, as FIL price fell, reported revenue dropped 98% from $596M to $13M, reflecting that most of 2021’s “revenue” was token issuance value rather than customer spend. Going forward, the balance is improving: the pipeline of paying storage clients is growing (e.g. an enterprise deal of 1 PiB was closed in late 2023, one of the first large fully-paid deals). Filecoin’s introduction of the FVM (enabling smart contracts) and forthcoming storage marketplaces and DEXes are expected to bring more on-chain fee activity (and possibly FIL burns or lockups). In summary, Filecoin’s token incentives successfully built a massive global storage network, albeit with efficiency under 5% in the early period; by 2024 that efficiency improved to ~20–30% and is on track to climb further as real demand catches up with the subsidized supply. The sector’s overall demand for decentralized storage (Web3 data, archives, NFT metadata, AI datasets, etc.) appears to be rising, which bodes well for converting more of those mining rewards into actual useful service.

  • Render Network: Render’s token model inherently links incentives to usage more tightly, thanks to the burn-and-mint equilibrium. In the legacy model (pre-2023), RNDR issuance was largely in the hands of the foundation and based on network growth goals, while usage involved locking up RNDR in escrow for jobs. This made it difficult to analyze efficiency. However, with BME fully implemented in 2023, we can measure how many tokens are burned relative to minted. Since each rendering or compute job burns RNDR proportional to its cost, essentially every token emitted as a reward corresponds to work done (minus any net inflation if emissions > burns in a given epoch). Early data from the Render network post-upgrade indicated that usage was indeed ramping up: the Render Foundation noted that at “peak moments” the network could be completing more render frames per second than Ethereum could handle in transactions, underscoring significant activity. While detailed usage stats (e.g. number of jobs or GPU-hours consumed) aren’t fully public, one strong indicator is the price and demand for RNDR. In 2023, RNDR became one of the best-performing crypto assets, rising from roughly $0.40 in January to over $2.50 by May, and continuing to climb thereafter. By November 2023, RNDR was up over 10× year-to-date, propelled by the frenzy for AI-related computing power. This price action suggests that users were buying RNDR to get rendering and AI jobs done (or speculators anticipated they would need to). Indeed, the interest in AI tasks likely brought a new wave of demand – Render reported that its network was expanding beyond media rendering into AI model training, and that the GPU shortage in traditional clouds meant demand far outstripped supply in this niche. In essence, Render’s token incentives (the emissions) have been met with equally strong user demand (burns), making its incentive-to-usage conversion relatively high. It’s worth noting that in the first year of BME, the network intentionally allocated some extra tokens (the 9.1M RENDER emissions) to bootstrap node operator earnings. If those outpace usage, it could introduce some temporary inflationary inefficiency. However, given the network’s growth, the burn rate of RNDR has been climbing. The Render Network Dashboard as of mid-2024 showed steady increases in cumulative RNDR burned, indicating real jobs being processed. Another qualitative sign of success: major studios and content creators have used Render for high-profile projects, proving real-world adoption (these are not just crypto enthusiasts running nodes – they are customers paying for rendering). All told, Render appears to have one of the more effective token-to-service conversion metrics in DePIN: if the network is busy, RNDR is being burned and token holders see tangible value; if the network were idle, token emissions would be the only output, but the excitement around AI has ensured the network is far from idle.

  • Akash: Akash’s efficiency can be seen in the context of cloud spend vs. token issuance. As a proof-of-stake chain, Akash’s AKT has inflation to reward validators, but that inflation is not excessively high (and a large portion is offset by staking locks). The more interesting part is how much real usage the token is capturing. In 2022, Akash usage was relatively low (only a few hundred deployments at any time, mainly small apps or test nets). This meant AKT’s value was speculative, not backed by fees. However, in 2023–2024, usage exploded due to AI. By late 2024, Akash was processing ~$11k of spend per day on its network, up from just ~$1.3k/day in January 2024 – a ~749% increase in daily revenue within the year. Over the course of 2024, Akash surpassed $1.6 million in cumulative paid spend for compute. These numbers, while still small compared to giants like AWS, represent actual customers deploying workloads on Akash and paying in AKT or USDC (which ultimately drives AKT demand via conversion). The token incentives (inflationary rewards) during that period were on the order of maybe 15–20% of the 130M circulating AKT (~20–26M AKT minted in 2024, which at $1–3 per AKT might be $20–50M value). So in pure dollar terms, the network was still issuing more value in tokens than it was bringing in fees – similar to other early-stage networks. But the trend is that usage is catching up fast. A telling statistic: comparing Q3 2024 to Q3 2023, the average fee per lease rose from $6.42 to $18.75. This means users are running much more resource-intensive (and thus expensive) workloads, likely GPUs for AI, and they are willing to pay more, presumably because the network delivers value (e.g. lower cost than alternatives). Also, because Akash charges a 10–20% fee on leases to the protocol, that means 10–20% of that $1.6M cumulative spend went to stakers as real yield. In Q4 2024, AKT’s price hit new multi-year highs (~$4, an 8× increase from mid-2023 lows), indicating the market recognized the improved fundamentals and usage. On-chain data from year-end 2024 showed over 650 active leases and over 700 GPUs in the network with ~78% utilization – effectively, most of the GPUs added via incentives were actually in use by customers. This is a strong conversion of token incentives into service: nearly 4 out of 5 GPUs incentivized were serving AI developers (for model training, etc.). Akash’s proactive steps, like enabling credit card payments and supporting popular AI frameworks, helped bridge crypto tokens to real-world users (some users might not even know they are paying for AKT under the hood). Overall, while Akash initially had the common DePIN issue of “supply > demand,” it is quickly moving toward a more balanced state. If AI demand continues, Akash could even approach a regime where demand outstrips the token incentives – in other words, usage might drive AKT’s value more than speculative inflation. The protocol’s design to share fees with stakers also means AKT holders benefit directly as efficiency improves (e.g. by late 2024, stakers were earning significant yield from actual fees, not just inflation).

  • io.net: Being a very new project (launched in 2023/24), io.net’s efficiency is still largely theoretical, but its model is built explicitly to maximize incentive conversion. By hard-capping supply and instituting hourly rewards, io.net avoids the scenario of runaway indefinite inflation. And by burning tokens based on revenue, it ensures that as soon as demand kicks in, there is an automatic counterweight to token emissions. Early reports claimed io.net had aggregated a large number of GPUs (possibly by bringing existing mining farms and data centers on board), giving it significant supply to offer. The key will be whether that supply finds commensurate demand from AI customers. One positive sign for the sector: as of 2024, decentralized GPU networks (including Render, Akash, and io.net) were often capacity-constrained, not demand-constrained – meaning there was more user demand for compute than the networks had online at any moment. If io.net taps into that unmet demand (offering lower prices or unique integrations via Solana’s ecosystem), its token burn could accelerate. On the flip side, if it distributed a large chunk of the 500M IO initial supply to insiders or providers, there is a risk of sell pressure if usage lags. Without concrete usage data yet, io.net serves as a test of the refined tokenomic approach: it targets a demand-driven equilibrium from the outset, trying to avoid oversupplying tokens. In coming years, one can measure its success by tracking what percentage of the 300M emission gets effectively “paid for” by network revenue (burns). The DePIN sector’s evolution suggests io.net is entering at a fortuitous time when AI demand is high, so it may reach high utilization more quickly than earlier projects did.

In summary, early DePIN projects often faced low token incentive efficiency, with token payouts vastly exceeding real usage. Helium’s IoT network was a prime example, where token rewards built a huge network that was only a few percent utilized. Filecoin similarly had a bounty of storage with little stored data initially. However, through network improvements and external demand trends, these gaps are closing. Helium’s 5G pivot multiplied usage, Filecoin’s utilization is steadily climbing, and both Render and Akash have seen real usage surge in tandem with the AI boom, bringing their token economics closer to a sustainable loop. A general trend in 2024 was the shift to “prove the demand”: DePIN teams started focusing on getting users and revenue, not just hardware and hype. This is evidenced by networks like Helium courting enterprise partners for IoT and telco, Filecoin onboarding large Web2 datasets, and Akash making its platform user-friendly for AI developers. The net effect is that token values are increasingly underpinned by fundamentals (e.g. data stored, GPU hours sold) rather than just speculation. While there is still a long way to go – the sector overall at 100× price/revenue implies plenty of speculation remains – the trajectory is towards more efficient use of token incentives. Projects that fail to translate tokens into service (or “hardware on the ground”) will likely fade, while those that achieve a high conversion rate are gaining investor and community confidence.

One of the most significant developments benefiting DePIN projects is the explosive growth in AI computing demand. The year 2023–2024 saw AI model training and deployment become a multi-billion-dollar market, straining the capacity of traditional cloud providers and GPU vendors. Decentralized infrastructure networks have quickly adapted to capture this opportunity, leading to a convergence sometimes dubbed “DePIN x AI” or even “Decentralized Physical AI (DePAI)” by futurists. Below, we outline how our focus projects and the broader DePIN sector are leveraging the AI trend:

  • Decentralized GPU Networks & AI: Projects like Render, Akash, io.net (and others such as Golem, Vast.ai, etc.) are at the forefront of serving AI needs. As noted, Render expanded beyond rendering to support AI workloads – e.g. renting GPU power to train Stable Diffusion models or run other ML tasks. Interest in AI has directly driven usage on these networks. In mid-2023, demand for GPU compute to train image and language models skyrocketed. Render Network benefited as many developers and even some enterprises turned to it for cheaper GPU time; this was a factor in RNDR's 10× price surge, reflecting the market's belief that Render would supply GPUs to meet AI needs. Similarly, Akash's GPU launch in late 2023 coincided with the generative AI boom – within months, hundreds of GPUs on Akash were being rented to fine-tune language models or serve AI APIs. The utilization rate of GPUs on Akash reaching ~78% by year-end 2024 indicates that nearly all incentivized hardware found demand from AI users. io.net is explicitly positioning itself as an "AI-focused decentralized computing network". It touts integration with AI frameworks – for example, the Ray distributed compute framework popular in machine learning – to make it easy for AI developers to scale on io.net. io.net's value proposition – being able to deploy a GPU cluster in 90 seconds at 10–20× the cost-efficiency of traditional cloud – is squarely aimed at AI startups and researchers who are constrained by expensive or backlogged cloud GPU instances. This targeting is strategic: 2024 saw extreme GPU shortages (e.g. NVIDIA's high-end AI chips were sold out), and decentralized networks with access to any kind of GPU (even older models or gaming GPUs) stepped in to fill the gap. The World Economic Forum noted the emergence of "Decentralized Physical AI (DePAI)" where everyday people contribute computing power and data to AI processes and get rewarded. This concept aligns with GPU DePIN projects enabling anyone with a decent GPU to earn tokens by supporting AI workloads. Messari's research likewise highlighted that the intense demand from the AI industry in 2024 has been a "significant accelerator" for the DePIN sector's shift to demand-driven growth.

  • Storage Networks & AI Data: The AI boom isn’t just about computation – it also requires storing massive datasets (for training) and distributing trained models. Decentralized storage networks like Filecoin and Arweave have found new use cases here. Filecoin in particular has embraced AI as a key growth vector: in 2024 the Filecoin community identified “Compute and AI” as one of three focus areas. With the launch of the Filecoin Virtual Machine, it’s now possible to run compute services close to the data stored on Filecoin. Projects like Bacalhau (a distributed compute-over-data project) and Fluence’s compute L2 are building on Filecoin to let users run AI algorithms directly on data stored in the network. The idea is to enable, for example, training a model on a large dataset that’s already stored across Filecoin nodes, rather than having to move it to a centralized cluster. Filecoin’s tech innovations like InterPlanetary Consensus (IPC) allow spinning up subnetworks that could be dedicated to specific workloads (like an AI-specific sidechain leveraging Filecoin’s storage security). Furthermore, Filecoin is supporting decentralized data commons that are highly relevant to AI – for instance, datasets from universities, autonomous vehicle data, or satellite imagery can be hosted on Filecoin, and then accessed by AI models. The network proudly stores major AI-relevant datasets (the referenced UC Berkeley and Internet Archive data, for example). On the token side, this means more clients using FIL for data – but even more exciting is the potential for secondary markets for data: Filecoin’s vision includes allowing storage clients to monetize their data for AI training use cases. That suggests a future where owning a large dataset on Filecoin could earn you tokens when AI companies pay to train on it, etc., creating an ecosystem where FIL flows not just for storage but for data usage rights. This is nascent but highlights how deeply Filecoin is coupling with AI trends.

  • Wireless Networks & Edge Data for AI: On the surface, Helium and similar wireless DePINs are less directly tied to AI compute. However, there are a few connections. IoT sensor networks (like Helium’s IoT subDAO, and others such as Nodle or WeatherXM) can supply valuable real-world data to feed AI models. For instance, WeatherXM (a DePIN for weather station data) provides a decentralized stream of weather data that could improve climate models or AI predictions – WeatherXM data is being integrated via Filecoin’s Basin L2 for exactly these reasons. Nodle, which uses smartphones as nodes to collect data (and is considered a DePIN), is building an app called “Click” for decentralized smart camera footage; they plan to integrate Filecoin to store the images and potentially use them in AI computer vision training. Helium’s role could be providing the connectivity for such edge devices – for example, a city deploying Helium IoT sensors for air quality or traffic, and those datasets then being used to train urban planning AI. Additionally, the Helium 5G network could serve as edge infrastructure for AI in the future: imagine autonomous drones or vehicles that use decentralized 5G for connectivity – the data they generate (and consume) might plug into AI systems continuously. While Helium hasn’t announced specific “AI strategies,” its parent Nova Labs has hinted at positioning Helium as a general infrastructure layer for other DePIN projects. This could include ones in AI. For example, Helium could provide the physical wireless layer for an AI-powered fleet of devices, while that AI fleet’s computational needs are handled by networks like Akash, and data storage by Filecoin – an interconnected DePIN stack.

  • Synergistic Growth and Investments: Both crypto investors and traditional players are noticing the DePIN–AI synergy. Messari’s 2024 report projected the DePIN market could grow to $3.5 trillion by 2028 (from ~$50B in 2024) if trends continue. This bullish outlook is largely premised on AI being a “killer app” for decentralized infrastructure. The concept of DePAI (Decentralized Physical AI) envisions a future where ordinary people contribute not just hardware but also data to AI systems and get rewarded, breaking Big Tech’s monopoly on AI datasets. For instance, someone’s autonomous vehicle could collect road data, upload it via a network like Helium, store it on Filecoin, and have it used by an AI training on Akash – with each protocol rewarding the contributors in tokens. While somewhat futuristic, early building blocks of this vision are appearing (e.g. HiveMapper, a DePIN mapping project where drivers’ dashcams build a map – those maps could train self-driving AI; contributors earn tokens). We also see AI-focused crypto projects like Bittensor (TAO) – a network for training AI models in a decentralized way – reaching multi-billion valuations, indicating strong investor appetite for AI+crypto combos.

  • Autonomous Agents and Machine-to-Machine Economy: A fascinating trend on the horizon is AI agents using DePIN services autonomously. Messari speculated that by 2025, AI agent networks (like autonomous bots) might directly procure decentralized compute and storage from DePIN protocols to perform tasks for humans or for other machines. In such a scenario, an AI agent (say, part of a decentralized network of AI services) could automatically rent GPUs from Render or io.net when it needs more compute, pay with crypto, store its results on Filecoin, and communicate over Helium – all without human intervention, negotiating and transacting via smart contracts. This machine-to-machine economy could unlock a new wave of demand that is natively suited to DePIN (since AI agents don’t have credit cards but can use tokens to pay each other). It’s still early, but prototypes like Fetch.ai and others hint at this direction. If it materializes, DePIN networks would see a direct influx of machine-driven usage, further validating their models.

  • Energy and Other Physical Verticals: While our focus has been connectivity, storage, and compute, the AI trend also touches other DePIN areas. For example, decentralized energy grids (sometimes called DeGEN – decentralized energy networks) could benefit as AI optimizes energy distribution: if someone shares excess solar power into a microgrid for tokens, AI could predict and route that power efficiently. A project cited in the Binance report describes tokens for contributing excess solar energy to a grid. AI algorithms managing such grids could again be run on decentralized compute. Likewise, AI can enhance decentralized networks’ performance – e.g. AI-based optimization of Helium’s radio coverage or AI ops for predictive maintenance of Filecoin storage nodes. This is more about using AI within DePIN, but it demonstrates the cross-pollination of technologies.

In essence, AI has become a tailwind for DePIN. The previously separate narratives of “blockchain meets real world” and “AI revolution” are converging into a shared narrative: decentralization can help meet AI’s infrastructure demands, and AI can, in turn, drive massive real-world usage for decentralized networks. This convergence is attracting significant capital – over $350M was invested in DePIN startups in 2024 alone, much of it aiming at AI-related infrastructure (for instance, many recent fundraises were for decentralized GPU projects, edge computing for AI, etc.). It’s also fostering collaboration between projects (Filecoin working with Helium, Akash integrating with other AI tool providers, etc.).

Conclusion

DePIN projects like Helium, Filecoin, Render, and Akash represent a bold bet that crypto incentives can bootstrap real-world infrastructure faster and more equitably than traditional models. Each has crafted a unique economic model: Helium uses token burns and proof-of-coverage to crowdsource wireless networks, Filecoin uses cryptoeconomics to create a decentralized data storage marketplace, Render and Akash turn GPUs and servers into global shared resources through tokenized payments and rewards. Early on, these models showed strains – rapid supply growth with lagging demand – but they have demonstrated the ability to adjust and improve efficiency over time. The token-incentive flywheel, while not a magic bullet, has proven capable of assembling impressive physical networks: a global IoT/5G network, an exabyte-scale storage grid, and distributed GPU clouds. Now, as real usage catches up (from IoT devices to AI labs), these networks are transitioning toward sustainable service economies where tokens are earned by delivering value, not just by being early.

The rise of AI has supercharged this transition. AI’s insatiable appetite for compute and data plays to DePIN’s strengths: untapped resources can be tapped, idle hardware put to work, and participants globally can share the rewards. The alignment of AI-driven demand with DePIN supply in 2024 has been a pivotal moment, arguably providing the “product-market fit” that some of these projects were waiting for. Trends suggest that decentralized infrastructure will continue to ride the AI wave – whether by hosting AI models, collecting training data, or enabling autonomous agent economies. In the process, the value of the tokens underpinning these networks may increasingly reflect actual usage (e.g. GPU-hours sold, TB stored, devices connected) rather than speculation alone.

That said, challenges remain. DePIN projects must continue improving conversion of investment to utility – ensuring that adding one more hotspot or one more GPU actually adds proportional value to users. They also face competition from traditional providers (who are hardly standing still – e.g. cloud giants are lowering prices for committed AI workloads) and must overcome issues like regulatory hurdles (Helium’s 5G needs spectrum compliance, etc.), user experience friction with crypto, and the need for reliable performance at scale. The token models, too, require ongoing calibration: for instance, Helium splitting into sub-tokens was one such adjustment; Render’s BME was another; others may implement fee burns, dynamic rewards, or even DAO governance tweaks to stay balanced.

From an innovation and investment perspective, DePIN is one of the most exciting areas in Web3 because it ties crypto directly to tangible services. Investors are watching metrics like protocol revenue, utilization rates, and token value capture (P/S ratios) to discern winners. For example, if a network’s token has a high market cap but very low usage (high P/S), it might be overvalued unless one expects a surge in demand. Conversely, a network that manages to drastically increase revenue (like Akash’s 749% jump in daily spend) could see its token fundamentally re-rated. Analytics platforms (Messari, Token Terminal) now track such data: e.g. Helium’s annualized revenue (~$3.5M) vs incentives (~$47M) yielded a large deficit, while a project like Render might show a closer ratio if burns start canceling out emissions. Over time, we expect the market to reward those DePIN tokens that demonstrate real cash flows or cost savings for users – a maturation of the sector from hype to fundamentals.
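This screening logic is easy to mechanize. Below is a minimal sketch using the Helium revenue and incentive figures quoted above; the market-cap input is a placeholder assumption, not a quoted figure.

def screen(name, market_cap, annual_revenue, annual_incentives):
    """Print a simple DePIN fundamentals screen: P/S and incentive coverage."""
    ps = market_cap / annual_revenue
    coverage = annual_revenue / annual_incentives
    print(f"{name}: P/S ~{ps:,.0f}x; revenue covers {coverage:.0%} of incentives")

# Revenue (~$3.5M) and incentives (~$47M) are from the text; the market cap
# below is an illustrative placeholder, not a quoted valuation.
screen("Helium", market_cap=1_000_000_000,
       annual_revenue=3_500_000, annual_incentives=47_000_000)

A high P/S combined with low incentive coverage flags the "incentives outrunning usage" condition discussed throughout this report; a falling P/S and rising coverage would signal the re-rating scenario described above.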

In conclusion, established networks like Helium and Filecoin have proven the power and pitfalls of tokenized infrastructure, and emerging networks like Render, Akash, and io.net are pushing the model into the high-demand realm of AI compute. The economics behind each network differ in mechanics but share a common goal: create a self-sustaining loop where tokens incentivize the build-out of services, and the utilization of those services, in turn, supports the token’s value. Achieving this equilibrium is complex, but the progress so far – millions of devices, exabytes of data, and thousands of GPUs now online in decentralized networks – suggests that the DePIN experiment is bearing fruit. As AI and Web3 continue to converge, the next few years could see decentralized infrastructure networks move from niche alternatives to vital pillars of the internet’s fabric, delivering real-world utility powered by crypto economics.

Sources: Official project documentation and blogs, Messari research reports, and analytics data from Token Terminal and others. Key references include Messari’s Helium and Akash overviews, Filecoin Foundation updates, Binance Research on DePIN and io.net, and CoinGecko/CoinDesk analyses on token performance in the AI context. These provide the factual basis for the evaluation above, as cited throughout.

Decentralized AI: Permissionless LLM Inference on BlockEden.xyz

· 5 min read
Dora Noda
Software Engineer

BlockEden.xyz, known for its Remote Procedure Call (RPC) infrastructure, is expanding into AI inference services. This evolution leverages its open-source, permissionless design to create a marketplace where model researchers, hardware operators, API providers, and users interact seamlessly. The network's Relay Mining algorithm ensures a transparent and verifiable service, presenting a unique opportunity for large-model AI researchers to monetize their work without maintaining infrastructure.

The Core Problem

The AI landscape faces significant challenges, including:

  • Restricted Model-Serving Environments: The cost and complexity of resource-intensive infrastructure limit AI researchers' ability to experiment with various models.
  • Unsustainable Business Models for Open-Source Innovation: Independent engineers struggle to monetize their work and end up relying on major infrastructure providers.
  • Unequal Market Access: Enterprise-grade models dominate, leaving mid-tier models and users underserved.

BlockEden.xyz’s Unique Value Proposition

BlockEden.xyz addresses these issues by decoupling the infrastructure layer from the product and services layer, ensuring an open and decentralized framework. This setup enables high-quality service delivery and aligns incentives among all network participants.

Key benefits include:

  • Established Network: BlockEden.xyz's existing service network streamlines model access and service quality.
  • Separation of Concerns: Each stakeholder focuses on their strengths, improving overall ecosystem efficiency.
  • Incentive Alignment: Cryptographic proofs and performance measurements drive competition and transparency.
  • Permissionless Models & Supply: An open marketplace for cost-effective hardware supply.

Decentralized AI Inference Stakeholders

Model Providers: Coordinators

Coordinators manage the product and services layer, optimizing service quality and providing seamless access for applications. They also discreetly verify supplier integrity by posing as regular users, yielding unbiased performance assessments.
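To picture how such blind audits might work, here is a conceptual sketch: probes are routed through the same code path as ordinary user requests, so a supplier cannot special-case audit traffic. This illustrates the idea only – it is not BlockEden.xyz's actual implementation, and all names and thresholds are invented.

import random
import time

def probe_supplier(send_request, prompts, latency_slo=2.0, n_probes=3):
    """Blind-probe a supplier with requests indistinguishable from user traffic."""
    k = min(n_probes, len(prompts))
    passed = 0
    for prompt in random.sample(prompts, k):
        start = time.monotonic()
        reply = send_request(prompt)   # same code path an ordinary request takes
        elapsed = time.monotonic() - start
        if reply and elapsed <= latency_slo:
            passed += 1
    return passed / k                  # 1.0 = every probe met the latency SLO

# Usage (send_request is whatever hypothetical client call reaches the supplier):
# score = probe_supplier(my_client.infer, candidate_prompts)

Because the probe looks like any other request, a supplier that throttles suspected audits would degrade real user traffic too, which is exactly what makes the assessment unbiased.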

Model Users: Applications

Applications typically use first-party coordinators but can also access the network through a third-party coordinator, or directly, for enhanced privacy and cost savings. Direct access allows for diverse use-case experimentation and eliminates intermediary costs.

Model Suppliers: Hardware Operators

Suppliers run inference nodes to earn tokens. Their competencies in DevOps, hardware maintenance, and logging are crucial for network growth. The permissionless approach encourages participation from various hardware providers, including those with idle or dormant resources.

Model Sources: Engineers & Researchers

Researchers and institutions that open-source models can earn revenue based on usage. This model incentivizes innovation without the need for infrastructure maintenance, providing a sustainable business model for open-source contributors.
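As a rough illustration of usage-based payouts, the sketch below splits a fixed share of each model's inference fees back to its authors. The accounting rule, share percentage, and model names are hypothetical, not a documented BlockEden.xyz mechanism.

def settle_model_revenue(usage_fees, author_share=0.10):
    """Hypothetical rule: pay each model's authors a fixed cut of the
    inference fees their model generated in a billing period."""
    return {model: round(fees * author_share, 2)
            for model, fees in usage_fees.items()}

# Fees (in tokens) accrued per model over a billing period -- invented numbers.
fees = {"llama-2-70b": 1_250.0, "mistral-7b": 430.0}
print(settle_model_revenue(fees))   # {'llama-2-70b': 125.0, 'mistral-7b': 43.0}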

Working with Cuckoo Network

BlockEden.xyz collaborates with Cuckoo Network to revolutionize AI inference through a decentralized and permissionless infrastructure. This partnership focuses on leveraging both platforms' strengths to create a seamless and efficient ecosystem for AI model deployment and monetization.

Key Collaboration Areas

  • Infrastructure Integration: Combining BlockEden.xyz's robust RPC infrastructure with Cuckoo Network's decentralized model-serving capabilities to offer a scalable and resilient AI inference service.
  • Model Distribution: Facilitating the distribution of open-source AI models across the network, enabling researchers to reach a broader audience and monetize their innovations without the need for extensive infrastructure.
  • Quality Assurance: Implementing mechanisms for continuous monitoring and assessment of model performance and supplier integrity, ensuring high-quality service delivery and reliability.
  • Economic Incentives: Aligning economic incentives across all stakeholders through cryptographic proofs and performance-based rewards, fostering a competitive and transparent marketplace.
  • Privacy and Security: Enhancing privacy-preserving operations and secure model inference through advanced technologies like Trusted Execution Environments (TEE) and decentralized data storage solutions.
  • Community and Support: Building a supportive community for AI researchers and developers, providing resources, guidance, and incentives to drive innovation and adoption within the decentralized AI ecosystem.

By partnering with Cuckoo Network, BlockEden.xyz aims to create a holistic and decentralized approach to AI inference, empowering researchers, developers, and users with a robust, transparent, and efficient platform for AI model deployment and utilization. You can now try the decentralized text-to-image API at https://blockeden.xyz/api-marketplace/cuckoo-ai.
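As a taste of what a call might look like, here is a hedged sketch using Python's requests library. The endpoint path, payload fields, and response shape are assumptions for illustration only – consult the API marketplace page linked above for the real interface.

import requests

# Hypothetical request shape -- the actual path, fields, and response format
# may differ; see https://blockeden.xyz/api-marketplace/cuckoo-ai for the
# documented interface. Get a key from https://blockeden.xyz/dash.
API_KEY = "YOUR_BLOCKEDEN_API_KEY"

resp = requests.post(
    f"https://api.blockeden.xyz/cuckoo/text-to-image/{API_KEY}",
    json={"prompt": "a lighthouse at dawn, watercolor"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # assumed to return an image URL or base64 payload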

Input/Output of a Decentralized Inference Network

LLM Inputs to Cuckoo Network:

  • Open-source models
  • Demand from end-users or Applications
  • Aggregated supply from commodity hardware
  • Quality of service guarantees

LLM Outputs from Cuckoo Network:

  • No downtime
  • Seamless model experimentation
  • Public model evaluation
  • Privacy-preserving operations
  • Censorship-free models

Web3 Ecosystem Integrations

BlockEden.xyz's RPC protocol can integrate with other Web3 protocols to enhance Decentralized AI (DecAI):

Data & Storage Networks: Seamless integration with decentralized storage solutions like Filecoin/IPFS and Arweave for model storage and data integrity.

Compute Networks: Complementary services leveraging decentralized computing layers like Akash and Render, supporting both dedicated and idle hardware.

Inference Networks: Flexible deployment models and robust ecosystems supporting diverse inference tasks.

Applications: AI agents, consumer apps, and IoT devices benefit from DecAI inference for personalized services, data privacy, and edge decision-making.

Summary

BlockEden.xyz's established infrastructure and economic design unlock new opportunities for open-source AI. By providing a decentralized and verifiable service, it bridges the gap between open-source AI and Web3, enabling innovative, sustainable, and reliable services. This approach allows for greater model diversity, better market access for SMEs, and a new business model for open-source researchers. Future developments will continue to expand the ecosystem, ensuring BlockEden.xyz remains a robust and adaptable solution in the evolving AI and blockchain landscapes.

Unveiling the Integration of OpenAI ChatGPT API in BlockEden.xyz's API Marketplace

· 4 min read
Dora Noda
Software Engineer

We are glad to announce that BlockEden.xyz, the go-to API marketplace for web3 developers, has added a new, powerful capability – the OpenAI API. Yes, you heard it right! Developers, tech enthusiasts, and AI pioneers can now leverage the cutting-edge machine learning models offered by OpenAI, directly through BlockEden's API Marketplace.

Before we dive into the how-to guide, let's understand what the OpenAI API brings to the table. The OpenAI API is a gateway to AI models developed by OpenAI, such as the industry-renowned GPT-3, the state-of-the-art transformer-based language model known for its remarkable ability to understand and generate human-like text. The API enables developers to use this advanced technology for a variety of applications, including drafting emails, writing code, answering questions, creating written content, tutoring, language translation, and much more.

Now, let's see how you can incorporate the power of OpenAI API into your applications using BlockEden.xyz. You can do it in three ways: using Python, using JavaScript (Node.js), or using curl directly from the command line. In this blog, we're going to provide the basic setup for each method, using a simple "Hello, World!" example.

The API key below is public, subject to change, and rate-limited. Get your own BLOCKEDEN_API_KEY from https://blockeden.xyz/dash instead.

Python:

Using Python, you can use the OpenAI API as shown in the following snippet:

import openai

# The BlockEden.xyz key travels in the base URL, so the OpenAI-style key can
# stay empty. Swap in your own key from https://blockeden.xyz/dash.
BLOCKEDEN_API_KEY = "8UuXzatAZYDBJC6YZTKD"
openai.api_key = ""
openai.api_base = "https://api.blockeden.xyz/openai/" + BLOCKEDEN_API_KEY + "/v1"

# Uses the pre-1.0 openai Python package (pip install "openai<1.0").
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": "hello, world!"}],
    temperature=0,
    max_tokens=2048,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0,
)

print(response["choices"])

JavaScript (Node.js):

You can also utilize the OpenAI API with JavaScript. Here's how you can do it:

const { Configuration, OpenAIApi } = require("openai");

// The BlockEden.xyz key travels in the base path, so no separate OpenAI key
// is needed. Swap in your own key from https://blockeden.xyz/dash.
const BLOCKEDEN_API_KEY = "8UuXzatAZYDBJC6YZTKD";
const configuration = new Configuration({
  basePath: "https://api.blockeden.xyz/openai/" + BLOCKEDEN_API_KEY + "/v1",
});
const openai = new OpenAIApi(configuration); // openai v3.x Node SDK

(async () => {
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-16k",
    messages: [{ role: "user", content: "hello, world!" }],
    temperature: 0,
    max_tokens: 2048,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
  });

  console.log(JSON.stringify(response.data.choices, null, 2));
})();

cURL:

Last but not least, you can call the OpenAI API using curl directly from your terminal:

curl https://api.blockeden.xyz/openai/8UuXzatAZYDBJC6YZTKD/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo-16k",
    "messages": [{"role": "user", "content": "hello, world!"}],
    "temperature": 0,
    "max_tokens": 2048,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }'

So, what's next? Dive in, experiment, and discover how you can leverage the power of the OpenAI API for your projects, be it for chatbots, content generation, or any other NLP-based application. The possibilities are as vast as your imagination. With BlockEden.xyz's seamless OpenAI integration, let's redefine the boundaries of what's possible.

For more information on OpenAI's capabilities, models, and usage, visit the official OpenAI documentation.

Happy Coding!

What is BlockEden.xyz

BlockEden.xyz is an API marketplace powering DApps of all sizes for Sui, Aptos, Solana, and 12 EVM blockchains. Why do our customers choose us?

  1. High availability. We have maintained 99.9% uptime since our first API – the Aptos mainnet launch.
  2. Inclusive API offerings and community. Our services have expanded to include Sui, Ethereum, IoTeX, Solana, Polygon, Polygon zkEVM, Filecoin, Harmony, BSC, Arbitrum, Optimism, Gnosis, Arbitrum Nova & EthStorage Galileo. Our community 10x.pub has 4000+ web3 innovators from Silicon Valley, Seattle, and NYC.
  3. Security. With over $45 million worth of tokens staked with us, our clients trust us to provide reliable and secure solutions for their web3 and blockchain needs.

We provide a comprehensive suite of services designed to empower every participant in the blockchain space, focusing on three key areas:

  • For blockchain protocol builders, we ensure robust security and decentralization by operating nodes and making long-term ecosystem contributions.
  • For DApp developers, we build user-friendly APIs to streamline development and unleash the full potential of decentralized applications.
  • For token holders, we offer a reliable staking service to maximize rewards and optimize asset management.