DePIN's AI Pivot: How Decentralized Infrastructure Became the GPU Cloud That Big Tech Didn't Build

9 min read
Dora Noda
Software Engineer

The three highest-revenue DePIN projects in 2026 have one thing in common: they all sell GPU compute to AI companies. Not storage. Not wireless bandwidth. Not sensor data. Compute — the single most constrained resource in the global technology stack.

That fact alone tells you everything about where Decentralized Physical Infrastructure Networks have landed after years of searching for product-market fit. The sector that once ran on token incentives and speculative flywheel economics now generates real revenue from the most demanding buyers in tech: AI model developers who need GPUs yesterday.

From Token Flywheel to Revenue Engine

DePIN's origin story was elegantly simple: use token rewards to bootstrap physical infrastructure networks that would be too expensive for any single company to build. Contributors supply hardware — GPUs, routers, sensors, storage drives — and earn tokens in return. As the network grows, the token appreciates, attracting more contributors in a virtuous cycle.

The problem was that most early DePIN projects never closed the demand side. Contributors flooded in chasing token rewards, but actual paying customers remained scarce. The flywheel spun, but only in one direction.

The AI boom changed that equation entirely.

Global AI compute demand is growing 37% year-over-year through 2026, and GPU infrastructure spending is projected to surge from $83 billion in 2025 to $353 billion by 2030. Meanwhile, cloud GPU waitlists at AWS, Azure, and Google Cloud stretch weeks to months for high-end hardware. Enterprises training models or running inference at scale face a brutal supply constraint — exactly the kind of market failure that decentralized networks are built to solve.

The Numbers Behind the Pivot

The DePIN sector now spans over 1,500 active projects with a combined market capitalization between $30 billion and $50 billion. But the revenue concentration tells a sharper story: AI-related DePIN projects account for 48% of the sector's total market cap, and the top three revenue-generating projects — Aethir, Virtuals Protocol, and io.net — are all focused on decentralized AI compute.

Venture capital has followed the signal. Over $744 million was invested across 165+ DePIN startups between January 2024 and July 2025, with total sector funding approaching $1 billion in 2025 alone. In January 2026, Escape Velocity raised a $62 million fund specifically to back DePIN founders, a bet that the infrastructure layer is ready for institutional-grade workloads.

The World Economic Forum went further, projecting the DePIN category will grow to $3.5 trillion by 2028 — a 70-fold expansion from current levels. The WEF report coined a new term for the convergence: "DePAI" (Decentralized Physical AI), recognizing that AI workloads have become the primary commercial driver for decentralized infrastructure.

Akash Network: From Marketplace to Hyperscaler

Akash Network exemplifies the pivot from decentralized marketplace to AI-ready compute provider. The platform reported 428% year-over-year growth in usage through late 2025, with GPU utilization consistently above 80% — metrics that would make any cloud provider envious.

New lease creation surged 42% quarter-over-quarter in Q3 2025, rising from 19,000 to 27,000 leases, signaling genuine demand rather than speculative activity. These are paid compute jobs, not airdrop farmers.

The most significant development came with Starcluster, a protocol-owned compute system that represents Akash's boldest bet yet. Starcluster combines centrally managed datacenters with Akash's decentralized GPU marketplace to form what the team calls a "planetary mesh" optimized for AI training and inference. The initiative includes acquiring approximately 7,200 NVIDIA GB200 GPUs — the same hardware powering the latest generation of frontier AI models — operated by vetted, enterprise-grade datacenter "Nodekeepers."

This hybrid architecture acknowledges a practical reality: pure decentralization has limits for workloads demanding deterministic latency and massive parallel throughput. Starcluster bridges the gap between crypto-native idealism and enterprise requirements, positioning Akash to compete for hyperscale AI contracts that previously defaulted to AWS or CoreWeave.

Render Network: The Inference Play

While Akash targets the full compute spectrum, Render Network has made a strategic bet on AI inference — the workload category projected to account for two-thirds of total AI compute usage by 2026.

The reasoning is sound. Training large models requires concentrated clusters of thousands of GPUs running for weeks. Inference — running trained models to generate predictions, text, or images — is far more distributed. It happens everywhere, all the time, and increasingly at the edge. This workload profile maps naturally onto a decentralized network of GPU nodes scattered across geographies.

Render launched Dispersed in December 2025 as its dedicated compute subnet for AI inference and edge machine learning. The platform has scaled to over 5,600 active GPU nodes worldwide and is integrating enterprise-grade NVIDIA H200 hardware to expand capacity for demanding workloads.

Real-world adoption is moving beyond proof-of-concept. Jember, an AI financial trust company, uses Render's infrastructure for asynchronous inference workflows, demonstrating how distributed compute can power verifiable AI systems in regulated industries. THINK deploys Render nodes to support the Think Agent Standard, a permissionless protocol for building on-chain AI agents — a use case that barely existed a year ago.

At CES 2026, Render showcased partnerships targeting explosive GPU demand for edge ML workloads, marking a successful expansion from its creative rendering origins to general-purpose AI compute. The network's dual-use capability — serving both 3D rendering and AI inference — provides revenue diversification that pure-play AI compute networks lack.

io.net: The Aggregation Layer

io.net takes a different architectural approach. Rather than building a vertically integrated compute network, io.net positions itself as an aggregation layer that sources and pools GPU supply at scale, then presents that capacity through a cloud-like abstraction for buyers.

This aggregation-first model addresses one of DePIN's persistent challenges: fragmented supply. Individual GPU contributors range from gaming PC owners with idle RTX 4090s to small datacenter operators with racks of A100s. Without aggregation, this supply is too heterogeneous and unreliable for enterprise workloads. io.net's abstraction layer standardizes the experience, making decentralized GPU capacity feel like a conventional cloud API.

The approach trades some decentralization purity for practical usability — a trade-off that increasingly defines the winning DePIN strategies.

The Cost Advantage That Actually Matters

DePIN's most compelling value proposition is brutally simple: decentralized compute costs 50-85% less than centralized cloud equivalents.

This isn't a marginal savings. For an AI startup spending $500,000 monthly on AWS GPU instances, switching to decentralized compute could reduce costs to $75,000-$250,000. At enterprise scale, the savings compound into competitive advantage — or survival.

The cost structure works because DePIN networks monetize existing idle capacity rather than building purpose-built datacenters. Contributors bear the capital expenditure. The protocol handles orchestration. Buyers pay only for compute consumed. There are no massive upfront buildouts, no real estate costs, no cooling infrastructure to amortize across the price.

This model has limits — not every workload tolerates the latency variance and reliability trade-offs of distributed compute. But for inference, batch processing, fine-tuning, and many training configurations, the economics are increasingly decisive.

From Speculation to Infrastructure

The DePIN sector's AI pivot represents something rare in crypto: a genuine transition from speculation-driven to utility-driven economics. When a network's revenue comes from AI companies paying for GPU-hours rather than traders speculating on token price, the valuation framework fundamentally changes.

Several structural trends reinforce this trajectory:

  • Inference dominance: As AI shifts from training to deployment, the proportion of compute spent on inference grows. Inference is inherently distributed and latency-tolerant for many applications, favoring decentralized architectures.

  • Enterprise cost pressure: AI infrastructure costs are unsustainable for most companies at current cloud pricing. The 50-85% cost reduction offered by DePIN networks creates genuine pull demand, not just crypto-native experimentation.

  • Regulatory tailwinds: Data sovereignty requirements in the EU, India, and other jurisdictions favor geographically distributed compute that can process data locally. DePIN's distributed node architecture is a natural fit.

  • Hardware democratization: Each GPU generation makes the previous one cheaper but still capable. DePIN networks extend the productive life of hardware that cloud providers would retire, creating an ever-growing supply pool.

What Could Go Wrong

The bull case is compelling, but significant risks remain.

Reliability at scale is unproven for mission-critical AI workloads. Enterprise SLAs demand 99.99% uptime, and decentralized networks haven't consistently delivered that level of reliability. Akash's Starcluster hybrid approach tacitly acknowledges this gap.

Regulatory uncertainty around decentralized compute could create headwinds. If governments decide that GPU marketplaces need the same compliance frameworks as cloud providers (data residency, export controls on compute), the cost advantage narrows.

Centralization pressure is perhaps the deepest irony. The most successful DePIN networks are becoming more centralized over time — adding protocol-owned hardware, vetting node operators, and implementing quality-of-service guarantees. At some point, the distinction between a "decentralized GPU cloud" and "a cloud provider with token incentives" becomes philosophical.

The Road to $3.5 Trillion

The World Economic Forum's $3.5 trillion projection for DePIN by 2028 would require the sector to grow roughly 70-fold from current levels. That sounds audacious until you consider the addressable market: global cloud infrastructure spending alone exceeds $300 billion annually, and AI is adding hundreds of billions more.

DePIN doesn't need to replace cloud computing. It needs to capture the overflow — the workloads that can't get GPU access at any price, the startups that can't afford cloud margins, the inference tasks that benefit from edge proximity. Even a single-digit percentage of the global compute market would validate the WEF's thesis.

The projects that win this race will be those that solve the hardest problem in DePIN: making decentralized infrastructure boring. Not revolutionary, not disruptive, not Web3-native — just cheaper, faster, and available when the alternative isn't. In 2026, that's exactly what Akash, Render, and io.net are building toward.

For developers and enterprises exploring decentralized compute infrastructure, BlockEden.xyz provides enterprise-grade blockchain API services that complement DePIN networks — enabling applications to interact with on-chain compute marketplaces, monitor network performance, and build on the infrastructure layer powering the AI economy.