
Akave's Zero-Egress Bet: Can Flat-Rate DePIN Storage Actually Unseat AWS S3 for AI?

11 min read
Dora Noda
Software Engineer

Pull 2 terabytes of training data from AWS S3 to your GPU cluster and the bill arrives before the model does: roughly $184 in egress charges, on top of storage, on top of PUT/GET requests. Do it twice a day across a dozen experiments and the surprise line item starts to rival the storage itself. For AI teams, the cloud bill has become an economics problem disguised as an infrastructure problem — and an Austin-based DePIN startup named Akave thinks flat-rate, egress-free storage is the lever that finally breaks it.

Akave raised $6.65 million in March 2026 to build what it calls "the world's first decentralized enterprise data layer for AI and analytics." Its pitch is unusually specific: $14.99 per terabyte per month, zero egress fees, S3-compatible, backed by Filecoin for archival durability, with cryptographic receipts for every write. That's it. No tiers, no request fees, no bandwidth meter ticking every time a training container pulls a dataset. The question isn't whether the pricing is attractive — it obviously is. The question is whether the architecture can hold up as AI workloads scale into petabytes, and whether enterprises will trust a DePIN-backed stack for data they'd previously only hand to a hyperscaler.

The Egress Tax That Ate AI Budgets

AWS S3's sticker price is not the problem. Standard storage runs about $0.023/GB per month in us-east-1, which works out to roughly $920/month for a 40TB training corpus — annoying but manageable. Egress is where the math breaks. After the first 100GB free, S3 egress to the internet starts at $0.09/GB, stepping down slowly to $0.05/GB above 150TB. Pull 10TB of training data out to an external GPU provider and you're looking at $921.60 in transfer alone. Do it repeatedly — which is what AI pipelines actually do — and the "hidden" egress charge eclipses storage within a quarter.
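To make that arithmetic concrete, here is a minimal sketch of how the egress bill compounds against a flat-rate, zero-egress plan. The tier boundaries are an approximation of AWS's published internet-egress pricing, and the re-read pattern is a hypothetical; real bills add request and storage charges on top.

```python
# Rough comparison of AWS S3 internet egress (approximate public tiers)
# versus a flat-rate, zero-egress plan. Illustrative only.

S3_EGRESS_TIERS = [           # (tier size in GB, $/GB)
    (100, 0.00),              # first 100 GB/month free
    (10 * 1024, 0.09),        # first ~10 TB
    (40 * 1024, 0.085),       # next 40 TB
    (100 * 1024, 0.07),       # next 100 TB
    (float("inf"), 0.05),     # over 150 TB
]

def s3_egress_cost(gb: float) -> float:
    """Cost in USD to move `gb` gigabytes out of S3 to the internet."""
    cost, remaining = 0.0, gb
    for tier_gb, price in S3_EGRESS_TIERS:
        used = min(remaining, tier_gb)
        cost += used * price
        remaining -= used
        if remaining <= 0:
            break
    return cost

if __name__ == "__main__":
    pull_tb = 10          # dataset pulled to an external GPU cluster
    pulls_per_month = 8   # re-reads across experiments and epochs
    gb = pull_tb * 1024 * pulls_per_month
    print(f"S3 egress:     ${s3_egress_cost(gb):,.2f}/month")
    print("Zero-egress:   $0.00/month in transfer (storage billed separately)")
```

Run the numbers for a single 10TB pull and you land near the $900 figure above; run them for a realistic training cadence and egress quietly becomes the dominant line item.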

This is not a pricing quirk. It's an architectural choice that assumes storage and compute live together inside one cloud. The moment an AI team splits them — because GPU capacity sits at CoreWeave, Lambda, or an on-prem cluster while data still sits in S3 — every epoch, every checkpoint restore, every data-parallel reread becomes a billable event. AI data fabrics multiply this problem: datasets get duplicated across preprocessing, training, validation, and analytics stages, each boundary potentially a paywall.

The industry's informal workaround has been CloudFront: transfers from S3 to CloudFront are free, so teams route training data through a CDN that was never designed for the job. It's a tell. When customers are architecturally twisting themselves to avoid a line item, the line item is no longer pricing — it's a tax.

What Akave Is Actually Selling

Akave Cloud is deliberately boring in the way serious infrastructure needs to be boring. The interface is S3-compatible — same SDKs, same GET and PUT semantics — so migrating a training pipeline is closer to changing an endpoint than rewriting code. Pricing is a single flat rate: $14.99 per terabyte per month, no egress, no per-request fees, no retrieval penalties. If your container pulls 500GB or 2TB of training data, it costs exactly $0 in transfer.
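In practice, "closer to changing an endpoint than rewriting code" looks like the sketch below. The endpoint URL and credentials are placeholders, not Akave's actual values; any S3-compatible gateway is configured the same way in boto3.

```python
import boto3

# Minimal sketch of the "change the endpoint, keep the code" migration path.
# The endpoint URL and credentials are placeholders, not real Akave values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://o3.example-gateway.com",  # hypothetical S3-compatible gateway
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Existing pipeline code keeps using the same GET/PUT semantics.
s3.upload_file("train_shard_0001.tar", "training-data", "shards/0001.tar")
s3.download_file("training-data", "shards/0001.tar", "/tmp/shard_0001.tar")
```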

Underneath the familiar API, the architecture looks nothing like S3. Data is chunked, encrypted client-side, and distributed across the Akave network using 16-of-32 Reed-Solomon erasure coding — any 16 of 32 shards can reconstruct an object — which Akave claims delivers 11 nines of durability. Long-term archival is anchored to Filecoin, the same network that underwrites a growing share of decentralized storage economics. Every write generates an on-chain receipt, and every retrieval is cryptographically verifiable — which matters less for cat photos and a lot more for AI training artifacts that regulators, auditors, or downstream model consumers may need to verify were unmodified.
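The bookkeeping behind a k-of-n erasure code is simple enough to sketch in a few lines, using the 16-of-32 parameters described above. This is the overhead and fault-tolerance math only, not Akave's actual encoder.

```python
# Back-of-the-envelope properties of a k-of-n Reed-Solomon erasure code,
# using the 16-of-32 parameters described above (assumed, not verified).
DATA_SHARDS = 16       # k: shards needed to reconstruct an object
TOTAL_SHARDS = 32      # n: shards actually stored across providers

storage_overhead = TOTAL_SHARDS / DATA_SHARDS       # 2.0x raw bytes stored
tolerated_failures = TOTAL_SHARDS - DATA_SHARDS     # survive losing 16 shards

object_size_gb = 4.0
raw_stored_gb = object_size_gb * storage_overhead

print(f"Storage overhead:       {storage_overhead:.1f}x")
print(f"Shard losses tolerated: {tolerated_failures} of {TOTAL_SHARDS}")
print(f"A {object_size_gb:.0f} GB object occupies ~{raw_stored_gb:.0f} GB across the network")
```

The trade is explicit: roughly double the raw bytes stored in exchange for surviving the loss of half the shards, which is where durability claims in the many-nines range come from.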

The flagship piece for enterprises is the O3 gateway, an S3-compatible front door that can be hosted by Akave or self-hosted inside a customer's own infrastructure. The self-hosted version is the tell: teams with strict data residency or sovereignty requirements run O3 locally, hold their own encryption keys, and define their own access policies while still benefiting from the distributed backend. For sectors that historically couldn't touch decentralized storage — healthcare data, defense-adjacent AI, EU-regulated workloads — that configuration is meaningful.

Customers already running production workloads include Intuizi, LaserSETI, and 375ai, and the cap table reads like a who's-who of protocol-aligned capital: Protocol Labs, Filecoin Foundation, Avalanche, Blockchain Builders Fund, No Limit Holdings, Blockchange, Lightshift, and Big Brain Holdings. A partnership with Akash Network bundles decentralized GPU compute at around 70% below hyperscaler prices with Akave's zero-egress storage into what both companies are marketing as "sovereign AI infrastructure."

Reading the Room: Where Akave Sits in the Storage Stack

The decentralized storage landscape has matured dramatically. In January 2026, Filecoin launched Onchain Cloud on mainnet, positioning itself as a full-stack decentralized alternative to AWS with compute, verifiable retrieval, and automated payments. Storacha Forge, one of the earliest Onchain Cloud services, offers warm storage at $5.99 per terabyte. The broader DePIN sector has grown from roughly $5.2 billion in market cap in 2024 to over $19 billion by late 2025 — close to 270% growth — as AI demand, enterprise adoption, and DePIN infrastructure quality all crossed usability thresholds at roughly the same time.

Against that backdrop, Akave occupies a specific niche that neither Filecoin nor Arweave natively fills:

  • Filecoin is brilliant at long-tail archival and economic incentives but historically required deals, retrieval markets, and tooling that don't look like S3. Akave essentially packages Filecoin's durability into an S3-compatible interface with a flat rate.
  • Arweave sells permanence: one-time payment, indefinite storage, no retrieval guarantees. That's the right tool for immutable artifacts — NFT assets, on-chain documents, compliance archives — but a poor fit for the hot, mutable datasets AI training pipelines churn through.
  • Cloudflare R2 already offers zero egress and is the centralized benchmark Akave's pricing explicitly targets. R2 wins on latency, ecosystem integrations, and track record; Akave counters with sovereignty, verifiability, and a trust model that doesn't depend on a single provider's uptime — a point sharpened by the global Cloudflare outage in November 2025 that exposed how many "decentralized" apps still lived on one company's edge.
  • MinIO, the open-source self-hosted S3 alternative, recently shifted to a source-only model that spooked enterprises that had built stacks assuming a predictable community edition. Akave has been quietly pitching itself as a migration target for MinIO users who want self-hosted ergonomics without shouldering the full operations burden themselves.

The clearest way to understand Akave is as a pricing and interface arbitrage on decentralized storage primitives: take Filecoin's durability, wrap it in S3 semantics, put a flat-rate meter on top, and sell the result to AI teams who are already bleeding on egress.

Why Timing Matters: The Power and Data Gravity Pincer

At NVIDIA GTC 2026, Jensen Huang described AI as a "five-layer cake" with energy forming the foundation — every unit of machine intelligence ultimately a conversion of electricity into computation. The Department of Energy and Lawrence Berkeley National Laboratory project US data centers could consume up to 12% of total US electricity by 2030, up from about 4.4% today (roughly 176 TWh). The IEA's 2026 projection has global data centers hitting 1,000 TWh this year — Japan-scale power consumption, dedicated to compute.

The knock-on effect is that where data sits increasingly determines where compute can run. Hyperscalers are supply-constrained on power. GPU capacity is popping up wherever grid interconnects allow: Texas, the Nordics, the Middle East, secondary US markets. If your training data is pinned to us-east-1 and your GPUs are in Reykjavík or Abu Dhabi, you're paying egress to move bits to the silicon. Zero-egress, compute-agnostic storage turns data into a first-class citizen of a multi-cloud, multi-geography world — exactly the world AI economics is now forcing.

That's the real reason a pricing model like Akave's lands now rather than three years ago. When compute was abundant and cheap, egress was a rounding error. In an AI-constrained grid, egress is strategy.

The Skeptical Case: What Could Go Wrong

Three legitimate concerns temper the bull case.

First, latency and throughput at petabyte scale. AI training pipelines are bandwidth-hungry and latency-sensitive. S3 isn't just cheap storage with a nice API — it's a globally distributed edge network with decades of optimization. Akave's erasure coding and decentralized retrieval add hops. Production customers like 375ai suggest it's viable for common workloads, but teams considering multi-hundred-gigabit-per-second training feeds should benchmark carefully before committing.
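A first-pass benchmark doesn't need much more than the sketch below: a single-stream download timed against any S3-compatible endpoint, run from the same network as the GPU nodes. Endpoint, bucket, and key are placeholders; real training feeds use many parallel streams, so treat the result as a floor, not a ceiling.

```python
import time
import boto3

# Rough single-stream throughput check against an S3-compatible endpoint.
# Endpoint, bucket, and key below are placeholders.
s3 = boto3.client("s3", endpoint_url="https://o3.example-gateway.com")

bucket, key = "training-data", "shards/0001.tar"
size_bytes = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

start = time.perf_counter()
with open("/dev/null", "wb") as sink:            # discard bytes; measure transfer only
    s3.download_fileobj(bucket, key, sink)
elapsed = time.perf_counter() - start

print(f"{size_bytes / 1e9:.2f} GB in {elapsed:.1f}s "
      f"= {size_bytes * 8 / 1e9 / elapsed:.2f} Gbit/s per stream")
```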

Second, enterprise procurement inertia. Flat pricing is great; so is sovereignty. But enterprise security, legal, and compliance teams move on a timescale measured in quarters, and DePIN is still a novel procurement category for most Fortune 500 CIOs. Akave's self-hosted O3 gateway is partially an answer to this — "it's our hardware running their software" is easier to approve than "our data lives on a blockchain" — but the sales cycle is real.

Third, economics are only cheap if the network stays healthy. Filecoin and Akave's incentive layers assume a population of storage providers willing to underwrite capacity at the offered price. If AI demand spikes faster than supply, flat pricing either compresses provider margins or quietly gets re-tiered. Hyperscalers can subsidize; DePIN networks have to balance.

None of these are fatal. All of them mean Akave's challenge is less about whether the cost pitch lands and more about whether the operational story is boring enough for a Fortune 500 SRE to sign off.

The Bigger Pattern: Storage as a Wedge Into AI Infrastructure

The most interesting thing about Akave isn't the $14.99 price tag. It's what the price tag is trying to accomplish strategically. Storage is a low-margin commodity, but it's also the layer with the most data gravity — whoever owns the dataset owns the default answer to "where should we train?" and eventually "where should we inference?" The Akash x Akave partnership is a clear signal of this: decentralized GPU compute at 70% below hyperscaler prices means nothing if your data lives somewhere that charges you to leave. Bundle them, and the economics become an integrated alternative to the AWS stack rather than two discounts stapled together.

Expect this pattern to repeat across the DePIN-for-AI category through 2026. Storage networks will court compute networks, compute networks will court inference gateways, and inference gateways will court agent frameworks — all trying to assemble a vertical that can quote a single, predictable price against what is still, from the customer's perspective, a single bundled hyperscaler experience. The winners will be the ones who feel like infrastructure, not like crypto.

Akave is a credible early contender because it refuses to look like crypto at the surface: S3 endpoint, flat rate, audit-friendly receipts, real customers. The decentralized bits are under the hood, where — if Akave is right — they should be.


For developers building the next generation of Web3 and AI-native applications, BlockEden.xyz provides enterprise-grade RPC, indexing, and API infrastructure across 25+ chains, with the reliability profile serious production workloads demand. Explore our API marketplace to build on infrastructure designed for the long haul.
