
281 posts tagged with "AI"

Artificial intelligence and machine learning applications


Qwen Goes Onchain: How 0G × Alibaba Cloud Rewired the AI Stack for Autonomous Agents

· 10 min read
Dora Noda
Software Engineer

For the first time in the short history of AI, a hyperscaler has handed the keys to its flagship large language model to a blockchain. On April 21, 2026, the 0G Foundation and Alibaba Cloud announced a partnership that makes Qwen — the world's most-downloaded open-source LLM family — directly callable by autonomous agents on-chain, with inference priced in tokens rather than API keys.

Read that again. No account signup. No credit card. No rate-limit form. An agent with a wallet can just call Qwen3.6 and pay per million tokens in $0G, the same way a contract calls a Uniswap pool. That single architectural change — treating foundation-model inference as a programmable resource instead of a SaaS product — may be the most consequential crypto-AI story of the year.

Bittensor's Two-Front Governance Crisis: Latent 11 Inherits the Codebase as TAO Bleeds $900M

· 11 min read
Dora Noda
Software Engineer

In the same three weeks that Bittensor co-founder Const proposed rewriting the network's voting rights and Covenant AI walked away from its three flagship subnets, a quieter event reshaped the protocol's future even more profoundly: on April 2, 2026, the Opentensor Foundation transferred ownership of nine core GitHub repositories — including the Bittensor Python SDK and the btcli command-line tool — to a new entity called Latent 11.

The handoff was framed as decentralization. In practice, it concentrates control of Bittensor's only client implementation in a single new organization, at the exact moment the network's governance is unraveling. It is the rare crypto story where every plausible reading — bullish, bearish, and existential — depends on what happens in the next six months.

Bittensor's SN3 Bets the Network on a Trillion-Parameter Training Run

· 11 min read
Dora Noda
Software Engineer

In March 2026, a few dozen anonymous miners on home internet connections trained a 72-billion-parameter language model that scored within striking distance of Meta's Llama 2 70B. Six weeks later, the team that led that effort walked out, dumped $10 million worth of TAO, and called Bittensor's decentralization "theatre." Now the surviving community wants to do it again — at fourteen times the scale, in roughly four weeks, with the entire decentralized AI thesis riding on the result.

This is the story of how Bittensor's Subnet 3 — recently rebranded Teutonic after the Covenant AI exit — talked itself into a 1-trillion-parameter training run timed to land squarely in Grayscale's TAO ETF SEC review window. It's a wager that the protocol's incentive layer is more important than the people who built it, and that the same network that survived a governance crisis can ship the "DeepSeek moment" for decentralized AI before regulators decide whether to let Wall Street buy in.

How a 72B model became the high-water mark for permissionless AI

The story starts on March 10, 2026, when Subnet 3 — then operating under the name Templar — announced Covenant-72B, a 72-billion-parameter model trained on roughly 1.1 trillion tokens by more than 70 independent miners coordinating across the public internet. It was, by a wide margin, the largest decentralized LLM pre-training run ever completed.

The benchmark that mattered: an MMLU score of 67.1, putting Covenant-72B in the same neighborhood as Meta's Llama 2 70B — a model produced by one of the best-funded AI labs on the planet. NVIDIA CEO Jensen Huang publicly compared the effort to a "modern folding@home for AI." Templar's subnet token surged, and at peak its market valuation crossed $1.5 billion.

The technical breakthrough wasn't the model architecture. It was the coordination layer. Two pieces did the heavy lifting:

  • SparseLoCo, a communication-efficient training algorithm that reduced inter-node bandwidth requirements by 146x through sparsification, 2-bit quantization, and error feedback. Without it, a frontier-scale training run on residential internet would be physically impossible — gradient sync alone would saturate every miner's connection.
  • Gauntlet, Bittensor's blockchain-validated incentive system that scored each miner's contribution via loss evaluation and OpenSkill rankings, paying TAO to the high-quality nodes and slashing the rest.
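The sparsification-plus-error-feedback idea behind communication-efficient trainers like SparseLoCo can be illustrated with a toy top-k compressor. This is a generic sketch of the technique class, not Templar's actual implementation; the function name and the 100x figure are illustrative:

```python
import numpy as np

def sparsify_with_error_feedback(grad, residual, k):
    """Keep only the k largest-magnitude entries of (grad + residual);
    carry everything dropped forward as residual for the next step."""
    corrected = grad + residual                 # add back previously dropped mass
    idx = np.argsort(np.abs(corrected))[-k:]    # indices of the top-k entries
    sparse = np.zeros_like(corrected)
    sparse[idx] = corrected[idx]                # this is all that goes on the wire
    new_residual = corrected - sparse           # error feedback: remember the rest
    return sparse, new_residual

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
residual = np.zeros(1000)
sparse, residual = sparsify_with_error_feedback(grad, residual, k=10)
# Only 10 of 1000 values are nonzero, roughly 100x less data to transmit,
# before any 2-bit quantization is layered on top.
print(np.count_nonzero(sparse))  # 10
```

The error-feedback term is what keeps aggressive compression from destroying convergence: nothing is discarded permanently, only deferred to a later step.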

Together they produced something genuinely new: a permissionless network of anonymous contributors, coordinating only through cryptographic incentives, training a model competitive with billion-dollar lab outputs.

Then it broke.

The Covenant exit: $900 million erased in twelve hours

On April 10, 2026, Sam Dare — founder of Covenant AI, the team behind three of Bittensor's most valuable subnets (SN3 Templar, SN39 Basilica, and SN81 Grail) — announced he was leaving. Within hours he liquidated approximately 37,000 TAO, roughly $10.2 million, and published a parting accusation: that co-founder Jacob Steeves ("Const") wielded centralized control over the protocol, and that Bittensor's decentralization was performance, not architecture.

The market reaction was immediate. TAO crashed 20–28% depending on the measurement window, erasing roughly $650–900 million in market cap inside a 12-hour span. Subnet alpha tokens fared worse — Grail (SN81) was down 67% at the bottom. Around $10 million in long positions was liquidated.

Two facts blunted the panic:

  1. The subnets didn't die. Community miners restarted SN3, SN39, and SN81 from open-source code without a central operator. The infrastructure Covenant built was, in fact, recoverable from the public artifacts — which arguably proves the decentralization thesis Dare disputed.
  2. 70% of TAO supply remained staked through the disruption. Long-term holders didn't follow Dare to the exit.

But the network had a credibility problem. If Covenant — the team that delivered Bittensor's marquee technical achievement — could leave at the top and crater the token, what stops the next subnet operator from doing the same?

The Conviction Mechanism: locking in the people who can leave

Const's response landed on April 20, 2026, ten days after Dare walked. BIT-0011, branded the Conviction Mechanism, proposes a Locked Stake regime that forces subnet owners to time-lock TAO for months or years in exchange for a "conviction score" that maps to voting rights and subnet ownership.

The mechanics:

  • The conviction score starts at 100% and decays over 30-day intervals if tokens aren't replenished into the lock-up.
  • Voting power and ownership rights diminish in lockstep with the decay, making sudden capital flight economically expensive rather than just embarrassing.
  • The system targets the mature subnets first — SN3, SN39, and SN81 — exactly the three that Covenant ran.
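Public summaries of BIT-0011 specify the 30-day decay interval but not the decay rate, so the curve can only be sketched with a placeholder. The 10%-per-interval figure below is an assumed stand-in, not the proposal's actual parameter:

```python
def conviction_score(days_since_topup, decay_per_interval=0.10, interval_days=30):
    """Hypothetical conviction decay: BIT-0011 describes 30-day decay
    intervals, but the 10% rate here is an assumed placeholder."""
    intervals = days_since_topup // interval_days
    return max(0.0, (1 - decay_per_interval) ** intervals)

# A subnet owner who stops replenishing the lock-up:
for days in (0, 30, 90, 180):
    print(days, round(conviction_score(days), 3))
```

Whatever the real rate turns out to be, the shape is the point: voting power erodes step-wise the moment an owner stops feeding the lock, making a Covenant-style exit a slow, visible bleed rather than a clean break.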

The dark joke: BIT-0011 was reportedly drafted by Sam Dare himself before his exit. The departing founder wrote the rules designed to prevent founders from departing.

The proposal addresses a real structural weakness — subnet operators could previously dump positions with no governance penalty — but it also concentrates power in the hands of long-term lockers, which is its own form of centralization. Whether that's the right trade depends on what you think Bittensor's main risk is: founder defection or oligarchic capture.

Teutonic and the trillion-parameter moonshot

Against that backdrop, the rebranded Teutonic subnet (SN3, formerly Templar) has committed publicly to a 1-trillion-parameter decentralized training run for mid-to-late May 2026. That's roughly 14x the scale of Covenant-72B, on the same fundamental architecture, with a community-restored team rather than the original Covenant engineers.

The strategic timing is impossible to miss. Grayscale filed its S-1 amendment for the spot Bittensor Trust ETF (proposed ticker GTAO) on NYSE Arca on April 2, 2026. The SEC's decision window is currently tracked for August 2026. A successful 1T-parameter training run in May would land at the peak of regulator deliberation — exactly when "is this a real technology or a meme?" becomes the load-bearing question. Grayscale already raised TAO's weighting inside its broader AI fund to 43.06% on April 7, the largest single-asset reallocation that fund has ever made.

The bull case writes itself: ship a credible 1T-parameter decentralized model, become the "DeepSeek moment" the ETF approval needs to justify institutional inflow, and reprice the entire decentralized AI category in one quarter.

The bear case is engineering, not marketing.

Why scaling decentralized training is hard in ways frontier labs don't face

Centralized 1T+ models — GPT-5, Claude 4.7 Opus, Gemini 2.5 Ultra — are trained inside facilities where every GPU is wired to every other GPU through purpose-built fabrics like NVLink and InfiniBand, with sub-microsecond latencies and terabit-per-second bandwidth. Even in those conditions, gradient synchronization is the bottleneck. Published research consistently finds that over 90% of LLM training time can be spent on communication rather than compute when scaling is naive.

Teutonic's miners are coordinating across ~100ms WAN latencies on residential internet. The only reason Covenant-72B was possible at all is SparseLoCo's 146x compression of communication volume. Pushing to 1T parameters changes the math in three uncomfortable ways:

  1. Gradient size scales roughly linearly with parameter count. A 14x model means 14x as much data to synchronize per step, even before considering optimizer state.
  2. Cross-node coordination overhead historically scales super-linearly with worker count. If Teutonic doubles its node pool from ~70 to ~256, the all-reduce communication cost doesn't just double — it can grow by 4–10x depending on topology.
  3. Failure modes compound. A node dropping out mid-step in a 70-node network is a small slashing event. In a 256-node network running 14x larger gradients, the same drop can stall the entire training round.
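The first point is easy to make concrete. Assuming fp16 gradients and taking the 146x compression figure at face value (both simplifications; optimizer state and topology effects are ignored), the per-step payload looks like this:

```python
def grad_bytes(params, bytes_per_value=2, compression=1.0):
    """Raw gradient payload per synchronization step, in bytes,
    assuming fp16 (2 bytes/value) and a flat compression ratio."""
    return params * bytes_per_value / compression

GB = 1024**3
for params in (72e9, 1e12):
    raw = grad_bytes(params)
    compressed = grad_bytes(params, compression=146)
    print(f"{params/1e9:g}B params: raw {raw/GB:.0f} GiB/step, "
          f"compressed {compressed/GB:.2f} GiB/step")
```

Even with 146x compression, a 1T-parameter model pushes over 12 GiB per sync step, per direction, through residential connections. That is the uncomfortable arithmetic behind point one, before the super-linear coordination costs of points two and three even enter.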

None of this is unsolvable. There's a body of decentralized training research — heterogeneous low-bandwidth pre-training, FusionLLM, communication-computation overlap, delayed gradient compensation — that targets exactly this regime. But almost all of it has been validated at the 7B–70B scale. A 1T-parameter run on geographically distributed commodity hardware would be a research contribution in its own right, not just a product launch.

The honest read: Teutonic is taking on a research-grade engineering challenge with a marketing-grade deadline. Either it works and becomes the credibility event the entire dTAO ecosystem needs, or it stalls publicly during the SEC's most attentive review window.

The decentralized AI training landscape Teutonic must survive

Teutonic isn't the only project trying to claim the "credible decentralized 1T-param" milestone in 2026. The competitive map is filling out fast:

  • Gensyn launched its mainnet on April 22, 2026 — the same day this article goes out — pairing the launch with Delphi Markets, an AI-driven matching layer for compute jobs. By close of day Gensyn was reporting compute capacity equivalent to 5,000+ NVIDIA H100s. Where Bittensor sells permissionless coordination plus a token-incentive flywheel, Gensyn is positioning as a verifiable AI compute marketplace with cryptographic proofs of correct execution.
  • Ritual has gone in the opposite direction, leaning into inference rather than training. Its Infernet technology lets any smart contract request an AI output and receive cryptographic proof that the specified model was used unmodified. That's the "verifiable AI in DeFi" thesis, not the "train frontier models from scratch" thesis.
  • Ambient and Origins Network are making adjacent bets — different incentive designs, different verification strategies, similar long-term goal of breaking centralized labs' monopoly on frontier training.

These projects don't directly compete on the same milestone, but they all compete for the same finite pool of attention and capital. If Gensyn's mainnet captures the "decentralized AI is here" narrative through commercial workloads, Teutonic's May training run becomes a referendum on whether Bittensor's specific approach — subnet competition plus token-weighted incentives — is the right architecture or the first iteration that gets surpassed.

Why this matters beyond TAO

Three things are getting tested simultaneously over the next four to six weeks:

Whether decentralized training scales. If Teutonic succeeds, the "Bitcoin of decentralized AI compute" thesis survives. If it fails, the Covenant exit reads as the moment subnet-based training peaked — a 72B ceiling rather than a 72B foundation.

Whether the Conviction Mechanism is the right governance fix. Locking in subnet operators prevents another Covenant-style dump but creates a new failure mode where long-term lockers can entrench. Bitcoin Core's distributed maintainer model, Solana Labs' continued centralized core development, and Sui's Mysten Labs concentration are three different answers to the same question — whether protocol complexity demands a strong central maintainer the community must trust. Bittensor is now running its own version of that experiment in real time.

Whether the ETF window forces decentralized AI to ship on TradFi's calendar. The SEC's August decision window is a hard deadline for a narrative that wants to be "DeepSeek moment" rather than "interesting research project." That's a healthy forcing function or a recipe for over-promising — depending on what gets shipped.

For builders watching from the infrastructure side, the underlying signal is simpler: AI agents and decentralized training networks are about to generate a new tier of on-chain query load — model registry lookups, attestation proofs, gradient checkpoint hashes, subnet performance data — that doesn't fit neatly into the human-facing dApp pattern existing RPC infrastructure was built for.

BlockEden.xyz provides enterprise-grade RPC and indexing infrastructure across 27+ chains for teams building the AI-meets-crypto stack. Explore our API marketplace to build on rails designed for both human and machine traffic.


InfoFi Is the New DeFi: How Information Finance Became Web3's $10B Sector in 2026

· 12 min read
Dora Noda
Software Engineer

In March 2026, prediction markets traded $25.7 billion in a single month. That is more notional volume than most mid-cap equity indices. It is not a bubble, and it is not a meme. It is the clearest signal yet that a new asset class — information itself — has finally found a price.

Welcome to InfoFi.

For years, crypto tried to financialize everything: loans, art, cat pictures, liquidity positions, even carbon. But the one thing markets have always struggled to price — the quality of a prediction, the trust of a person, the value of a dataset — stayed stubbornly analog. That changed in 2026. Three previously separate experiments (prediction markets, on-chain reputation, and AI data marketplaces) converged into a single sector with a single thesis: put skin in the game behind information, and the information gets better.

Wall Street has a name for this thesis. It calls it Information Finance. And on current trajectory, InfoFi will cross $10 billion in sector value before the end of this year.

The Great Miner Pivot: Why Public Bitcoin Miners Dumped 32,000 BTC in Q1 2026 to Become AI Companies

· 11 min read
Dora Noda
Software Engineer

In the first three months of 2026, publicly listed Bitcoin miners liquidated more BTC than they sold in all of 2025 combined — a record 32,000+ coins shoveled out of treasuries to fund a mass migration into artificial intelligence infrastructure. Marathon Digital alone offloaded 15,133 BTC for roughly $1.1 billion in March. Riot Platforms sold 3,778 BTC for $289.5 million. Core Scientific liquidated $175 million worth in January and signaled it would dump "substantially all" remaining holdings before the quarter closed.

This is not a margin call. It is a reclassification. The companies once marketed to investors as "the public market's purest Bitcoin proxy" are quietly becoming something else entirely: high-density power providers that happen to run some ASICs on the side. And the deeper that transformation goes, the louder the question becomes — what happens to Bitcoin's security backbone when the people who built it stop caring whether it survives?

Virtuals Protocol + BitRobot: When AI Agents Start Paying Robots

· 11 min read
Dora Noda
Software Engineer

The first time an autonomous on-chain agent paid a physical robot to pick up a coffee cup, no human was in the loop. No purchase order. No invoice. No bank wire. Just a smart contract, an x402 micropayment, and a humanoid arm that obeyed because the money cleared. That moment, quiet and uncelebrated, marked the dissolution of a boundary that the AI agent narrative had treated as load-bearing for two years: the wall between digital agents that trade tokens and physical machines that move atoms.

Virtuals Protocol's Q1 2026 integration with BitRobot Network is the first production system to dismantle that wall at scale. By wiring 17,000+ on-chain AI agents into a Solana-based subnet of robotic infrastructure, Virtuals has done something the embodied AI thesis has been gesturing at since OpenAI's robotics demos in 2018 but never quite delivered: it has given software agents wallets, identities, and task queues that reach into warehouses, sidewalks, and coffee shops. The implications run from a $4.44 billion embodied AI market in 2025 toward a projected $23 billion by 2030, and they reframe what "agentic commerce" actually means.

From Digital Trading to Physical Tasks

For most of 2024 and 2025, AI agent tokens lived in a tightly-bounded sandbox. Agents on Virtuals, ai16z, and similar platforms posted on social media, traded memecoins, ran DeFi strategies, and occasionally made each other laugh. Critics correctly noted that this was a closed loop — agents transacting with agents about things that only existed on chain. The real economy, the one with shipping pallets and delivery vans and broken HVAC units, remained untouched.

BitRobot changes the topology of that loop. Co-developed by FrodoBots Lab and Protocol Labs after an $8 million seed round backed by Solana Ventures, Virtuals Protocol, and Solana co-founders Anatoly Yakovenko and Raj Gokal, BitRobot is structured as a constellation of subnets. Each subnet contributes one specialized output that embodied AI needs: navigation data, manipulation skills, simulation environments, or model evaluation. Subnet 5, called SeeSaw, was launched directly with Virtuals as a partnership product — users record short videos of mundane tasks like tying shoelaces or folding laundry, upload them, and earn token rewards while the data trains the next generation of robotic policy models.

The numbers tell the adoption story bluntly. SeeSaw has already logged more than 500,000 completed tasks since its iOS launch in October 2025. The first on-chain agent to actually drive a physical machine, called SAM, is operating humanoid robots around the clock and posting its observations to X. None of this requires that you believe in the agent economy as a religious matter. It requires only that you accept the data: machine-controlled actions are now being initiated by smart contracts, paid for in tokens, and verified by on-chain evaluators.

The Three-Layer Standards Stack

What makes the Virtuals + BitRobot integration more than a one-off demo is the standards work happening underneath it. Three Ethereum and HTTP-level protocols arrived in early 2026 to make agent-to-machine commerce composable rather than artisanal:

  • x402 is an HTTP payment standard that lets agents settle micropayments in the same handshake as an API call. Built on the long-dormant HTTP 402 status code, it processed roughly $600 million in AI micropayments in its first months of production use, with Google Cloud and AWS adopting it as a billing primitive for agent-driven inference.
  • ERC-8004 is an Ethereum identity and reputation standard for AI agents. It answers the question every counterparty needs answered before signing a contract: who is this agent, what is its track record, and is it trustworthy enough to do business with?
  • ERC-8183, jointly launched by the Ethereum Foundation's dAI team and Virtuals Protocol on March 10, 2026, is the commercial layer. It introduces a job escrow primitive in which a Client deposits funds, a Provider executes the work, and an Evaluator verifies completion before the escrow releases.

The shorthand is useful: x402 says "how to pay," ERC-8004 says "who you are paying," ERC-8183 says "how to settle a dispute when the cleaning robot leaves a streak on your floor." Together they form an internet-native commerce stack designed for parties that cannot rely on courts, credit cards, or chargebacks. For embodied AI, that stack is not a luxury. It is the only available substrate, because legal contracts struggle to accommodate counterparties that are software agents owned by other software agents managed by token holders scattered across forty jurisdictions.
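The escrow pattern the three standards compose into can be sketched as a toy state machine. The role names mirror the Client/Provider/Evaluator description above, but the interface is illustrative, not ERC-8183's actual ABI:

```python
from enum import Enum, auto

class JobState(Enum):
    FUNDED = auto()      # client has deposited
    SUBMITTED = auto()   # provider claims the work is done
    RELEASED = auto()    # evaluator approved; provider is paid
    REFUNDED = auto()    # evaluator rejected; client is refunded

class JobEscrow:
    """Toy escrow mirroring the Client/Provider/Evaluator flow described
    above. Names, checks, and flow are illustrative sketches only."""
    def __init__(self, client, provider, evaluator, amount):
        self.client, self.provider, self.evaluator = client, provider, evaluator
        self.amount, self.state = amount, JobState.FUNDED

    def submit_work(self, caller, result_hash):
        assert caller == self.provider and self.state is JobState.FUNDED
        self.result_hash = result_hash
        self.state = JobState.SUBMITTED

    def evaluate(self, caller, approved):
        assert caller == self.evaluator and self.state is JobState.SUBMITTED
        self.state = JobState.RELEASED if approved else JobState.REFUNDED
        return self.provider if approved else self.client  # who receives the funds

escrow = JobEscrow("agent_client", "robot_provider", "evaluator_agent", amount=100)
escrow.submit_work("robot_provider", result_hash="0xabc...")
payee = escrow.evaluate("evaluator_agent", approved=True)
print(payee)  # robot_provider
```

The structural insight is that the Evaluator, not either transacting party, holds the release authority, which is what lets two mutually anonymous agents do business without courts or chargebacks.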

Why Solana for Robots, Ethereum for Commerce

The Virtuals + BitRobot integration is quietly multi-chain in a way that reveals architectural intent. BitRobot lives on Solana because robot data collection is a high-throughput, low-margin activity — paying contributors fractions of a cent for each video clip demands the kind of fee economics Ethereum L1 cannot provide. Virtuals, born on Base and active on Arbitrum, lives where institutional liquidity and the bulk of the agent commerce standards reside. The integration uses Solana for the physical-world data layer and Ethereum-aligned chains for the commerce layer.

This is the same pattern that crystallized in 2024 around stablecoin payments: Tron and Solana for the cheap, frequent transactions; Ethereum for the high-value, low-frequency settlements. The machine economy appears to be inheriting that division of labor rather than collapsing it. Anyone betting on a single-chain winner for embodied AI is likely to be disappointed, because the workload is naturally bimodal.

Comparing the Embodied AI Approaches

The Virtuals + BitRobot model is not the only attempt to commercialize embodied AI in 2026, and it is worth setting it against the alternatives:

  • Figure AI has raised over a billion dollars to build centralized humanoid robots for warehouse and manufacturing customers. Figure's economic model is classical capital equipment leasing: customers pay monthly for robot-hours. There is no token, no permissionless contributor base, and no mechanism for a third-party developer to extend or specialize the robots without going through Figure's commercial team.
  • Tesla Optimus is corporate-controlled in the deepest sense. The robots, the training data, the policy models, and the deployment decisions all live inside one company. Optimus is impressive engineering, but it sits entirely outside any open economic protocol.
  • OpenMind is pursuing what its team calls an "Android for robotics" — an open platform layer where any robot manufacturer can run a shared operating system. The philosophy overlaps with BitRobot's, but OpenMind has explicitly avoided crypto rails so far, betting that hardware OEMs are still uncomfortable with token-mediated incentives.
  • peaq Network is the closest philosophical cousin. peaq's Layer 1 has onboarded more than 3.3 million machines with verified identities and processed over 200 million transactions across 60 DePIN applications, framing itself as the foundational chain for the machine economy. The difference is that peaq is bottom-up infrastructure, while Virtuals + BitRobot is top-down composition of an existing agent economy with an existing robotics dataset.

The real question is not which approach wins. It is whether the open, multi-chain, token-incentivized model produces enough velocity in data collection and agent deployment to outrun the centralized alternatives before they lock in winner-take-most network effects.

The Market Math

The embodied AI market was valued at roughly $4.44 billion in 2025 and is projected to grow at a 39% CAGR to reach $23 billion by 2030, according to Research and Markets. The broader robotics technology market sits at $108 billion in 2025 and is on track to reach $376 billion by 2034 at a 15% CAGR. These are not crypto-native markets, but they are the addressable surface that crypto-native infrastructure now claims to coordinate.
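The projections are internally consistent, as a quick compound-growth check shows (illustrative arithmetic only):

```python
def project(value, cagr, years):
    """Compound annual growth: value * (1 + cagr)^years."""
    return value * (1 + cagr) ** years

# $4.44B embodied AI in 2025 at 39% CAGR through 2030 (5 years):
print(f"${project(4.44, 0.39, 5):.1f}B")  # roughly $23B, matching the cited figure
# $108B robotics in 2025 at 15% CAGR through 2034 (9 years):
print(f"${project(108, 0.15, 9):.0f}B")
```

The second line lands near the cited $376B; the quoted 15% CAGR is rounded, so the compounded figure overshoots slightly.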

Stack on top of that the AI-crypto sector itself, which trades at a combined market cap of roughly $52 billion and counts Virtuals among its largest sub-protocols. Virtuals processed $13.23 billion in monthly trading volume in late 2025 and powers agents like Ethy AI, which has handled more than 2 million autonomous transactions. The capital is concentrated, the agent inventory is real, and the bridges to physical machinery are now live. The remaining question is how much of that $23 billion embodied AI TAM gets channeled through token-mediated rails versus traditional procurement contracts.

The bullish case is that any sufficiently autonomous robotic fleet will need a payment layer that operates without human approval at every transaction, and that requirement maps cleanly onto stablecoin-and-token rails rather than ACH transfers. The bearish case is that enterprise customers will demand SOC 2 compliance, KYC counterparties, and traditional contractual remedies that crypto-native systems cannot easily offer, pushing the embodied AI market toward boring centralized procurement no matter what the agents do under the hood.

What This Means for Builders

For developers and infrastructure providers, the Virtuals + BitRobot integration creates several concrete openings worth tracking:

  • Data labeling and contribution markets are no longer hypothetical. SeeSaw's 500,000 tasks suggest that consumer-grade contributors will participate in robot training when the rewards are denominated in liquid tokens. This is the closest thing to a working scaled DePIN flywheel for AI training data.
  • Agent reputation as a service becomes a real product category once ERC-8004 has counterparties who care. Agents that can prove uptime, dispute history, and successful job completion will command higher rates and access to higher-value escrowed work.
  • Multi-chain abstraction matters more, not less. Builders who have to bridge Solana data layers to Ethereum commerce layers to Base agent-spawning environments will need infrastructure that hides the seams. Reliable RPC, consistent indexing, and unified API access across these chains is the difference between a working agent and an idle one.
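The third point can be made concrete with a minimal sketch: one call signature for every chain, backed by a lookup-table JSON-RPC wrapper. The endpoint URLs are placeholders, not real BlockEden routes:

```python
import json
import urllib.request

# Hypothetical endpoint map: URLs are placeholders, not real provider routes.
RPC_ENDPOINTS = {
    "ethereum": "https://rpc.example.com/eth",
    "base": "https://rpc.example.com/base",
    "solana": "https://rpc.example.com/solana",
}

def build_payload(method, params):
    """Standard JSON-RPC 2.0 request body, shared by all three chains."""
    return json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": method, "params": params})

def rpc_call(chain, method, params):
    """One call signature for every chain; the seam-hiding here is just a
    lookup table, but agents need exactly this uniformity."""
    req = urllib.request.Request(
        RPC_ENDPOINTS[chain],
        data=build_payload(method, params).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

# Usage (not executed here): rpc_call("ethereum", "eth_blockNumber", [])
#                            rpc_call("solana", "getSlot", [])
```

In production the lookup table becomes failover pools, chain-specific method translation, and unified indexing, but the contract with the agent stays this simple: chain name in, result out.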

The Closing Frame

The Virtuals + BitRobot integration is not yet a transformed economy. It is a working prototype of one. The 17,000 agents managing physical robots are doing so at a pace measured in thousands of transactions per day, not millions, and the use cases skew toward training data collection rather than mission-critical industrial automation. Skeptics will point out, fairly, that the gap between SAM driving a humanoid for X clout and an autonomous fleet of warehouse robots negotiating contracts with a logistics company is enormous.

But the boundary that mattered most has been crossed. On-chain identity, on-chain payment, and on-chain dispute resolution now extend to physical actuators. Whatever the embodied AI market becomes between now and 2030, a meaningful share of it will run on rails that look more like Virtuals + BitRobot than like SAP. The question for the next eighteen months is which subnet, which standard, and which chain captures the most useful workloads first.

BlockEden.xyz provides enterprise-grade RPC and indexing infrastructure across Solana, Base, Ethereum, and other chains powering the AI agent and machine economy stack. Explore our API marketplace to build agent-driven applications on infrastructure designed for the multi-chain era.


Akave's Zero-Egress Bet: Can Flat-Rate DePIN Storage Actually Unseat AWS S3 for AI?

· 11 min read
Dora Noda
Software Engineer

Pull 2 terabytes of training data from AWS S3 to your GPU cluster and the bill arrives before the model does: roughly $184 in egress charges, on top of storage, on top of PUT/GET requests. Do it twice a day across a dozen experiments and the surprise line item starts to rival the storage itself. For AI teams, the cloud bill has become an economics problem disguised as an infrastructure problem — and an Austin-based DePIN startup named Akave thinks flat-rate, egress-free storage is the lever that finally breaks it.

Akave raised $6.65 million in March 2026 to build what it calls "the world's first decentralized enterprise data layer for AI and analytics." Its pitch is unusually specific: $14.99 per terabyte per month, zero egress fees, S3-compatible, backed by Filecoin for archival durability, with cryptographic receipts for every write. That's it. No tiers, no request fees, no bandwidth meter ticking every time a training container pulls a dataset. The question isn't whether the pricing is attractive — it obviously is. The question is whether the architecture can hold up as AI workloads scale into petabytes, and whether enterprises will trust a DePIN-backed stack for data they'd previously only hand to a hyperscaler.

The Egress Tax That Ate AI Budgets

AWS S3's sticker price is not the problem. Standard storage runs about $0.023/GB per month in us-east-1, which works out to roughly $920/month for a 40TB training corpus — annoying but manageable. Egress is where the math breaks. After the first 100GB free, S3 egress to the internet starts at $0.09/GB, stepping down slowly to $0.05/GB above 150TB. Pull 10TB of training data out to an external GPU provider and you're looking at $921.60 in transfer alone. Do it repeatedly — which is what AI pipelines actually do — and the "hidden" egress charge eclipses storage within a quarter.
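A tiny calculator makes the comparison explicit. It uses the flat $0.09/GB simplification implied by the $921.60 figure above (real AWS pricing has more tiers and a free allowance) and Akave's advertised flat rate:

```python
TB = 1024  # GB per TB, matching the $921.60-for-10TB arithmetic above

def s3_egress(gb, rate=0.09):
    """Flat-rate approximation of S3 internet egress; real AWS pricing
    includes a free 100GB tier and steps down at higher volumes."""
    return gb * rate

def akave_monthly(tb_stored, rate=14.99):
    """Akave's advertised flat storage rate; egress is $0."""
    return tb_stored * rate

# A single 10 TB pull out of S3 vs a full month of 10 TB stored on Akave:
print(f"S3 egress, one 10 TB pull:       ${s3_egress(10 * TB):,.2f}")
print(f"Akave, 10 TB stored for a month: ${akave_monthly(10):,.2f}")
```

One S3 egress event costs more than six months of flat-rate storage for the same dataset, which is the whole pitch in two lines of arithmetic.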

This is not a pricing quirk. It's an architectural choice that assumes storage and compute live together inside one cloud. The moment an AI team splits them — because GPU capacity sits at CoreWeave, Lambda, or an on-prem cluster while data still sits in S3 — every epoch, every checkpoint restore, every data-parallel reread becomes a billable event. AI data fabrics multiply this problem: datasets get duplicated across preprocessing, training, validation, and analytics stages, each boundary potentially a paywall.

The industry's informal workaround has been CloudFront: S3-to-CloudFront in-region transfer is free, so teams route data through a CDN that was never designed for the job. It's a tell. When customers contort their architecture to avoid a line item, that line item is no longer pricing. It's a tax.

What Akave Is Actually Selling

Akave Cloud is deliberately boring in the way serious infrastructure needs to be boring. The interface is S3-compatible — same SDKs, same GET and PUT semantics — so migrating a training pipeline is closer to changing an endpoint than rewriting code. Pricing is a single flat rate: $14.99 per terabyte per month, no egress, no per-request fees, no retrieval penalties. Whether your container pulls 500GB or 2TB of training data, transfer costs exactly $0.

Underneath the familiar API, the architecture looks nothing like S3. Data is chunked, encrypted client-side, and distributed across the Akave network using 16-of-32 Reed-Solomon erasure coding, which Akave claims delivers 11 nines of durability. Long-term archival is anchored to Filecoin, the same network that underwrites a growing share of decentralized storage economics. Every write generates an on-chain receipt, and every retrieval is cryptographically verifiable — which matters less for cat photos and a lot more for AI training artifacts that regulators, auditors, or downstream model consumers may need to verify were unmodified.
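Why erasure coding produces such extreme durability numbers can be shown with a first-order combinatorial model. Assume a 16-data/32-total shard layout, independent shard failures, and a hypothetical 1% per-shard failure probability; real durability modeling also accounts for repair rates and correlated failures:

```python
from math import comb

def loss_probability(n, k, p):
    """P(data loss) for k-of-n erasure coding: the data is lost only if
    more than n - k shards fail, assuming independent failures with
    per-shard failure probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(n - k + 1, n + 1))

# Illustrative parameters only: 16 data shards encoded to 32 total,
# hypothetical 1% per-shard failure probability over the period.
print(f"{loss_probability(32, 16, 0.01):.3e}")
```

Losing data requires 17 simultaneous failures out of 32, so the probability collapses combinatorially. That is the mechanism behind "11 nines" claims, though the marketing number depends heavily on the failure and repair assumptions fed into the model.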

The flagship piece for enterprises is the O3 gateway, an S3-compatible front door that can be hosted by Akave or self-hosted inside a customer's own infrastructure. The self-hosted option is the revealing detail: teams with strict data residency or sovereignty requirements run O3 locally, hold their own encryption keys, and define their own access policies while still benefiting from the distributed backend. For sectors that historically couldn't touch decentralized storage — healthcare data, defense-adjacent AI, EU-regulated workloads — that configuration is meaningful.

Customer logos already include Intuizi, LaserSETI, and 375ai running production workloads, and the cap table reads like a who's-who of protocol-aligned capital: Protocol Labs, Filecoin Foundation, Avalanche, Blockchain Builders Fund, No Limit Holdings, Blockchange, Lightshift, and Big Brain Holdings. A partnership with Akash Network bundles decentralized GPU compute at around 70% below hyperscaler prices with Akave's zero-egress storage into what both companies are marketing as "sovereign AI infrastructure."

Reading the Room: Where Akave Sits in the Storage Stack

The decentralized storage landscape has matured dramatically. In January 2026, Filecoin launched Onchain Cloud on mainnet, positioning itself as a full-stack decentralized alternative to AWS with compute, verifiable retrieval, and automated payments. Storacha Forge, one of the earliest Onchain Cloud services, offers warm storage at $5.99 per terabyte. The broader DePIN sector has grown from roughly $5.2 billion in market cap in 2024 to over $19 billion by late 2025 — close to 270% growth — as AI demand, enterprise adoption, and DePIN infrastructure quality all crossed usability thresholds at roughly the same time.

Against that backdrop, Akave occupies a specific niche that neither Filecoin nor Arweave natively fills:

  • Filecoin is brilliant at long-tail archival and economic incentives but historically required deals, retrieval markets, and tooling that don't look like S3. Akave essentially packages Filecoin's durability into an S3-compatible interface with a flat rate.
  • Arweave sells permanence: one-time payment, indefinite storage, no retrieval guarantees. That's the right tool for immutable artifacts — NFT assets, on-chain documents, compliance archives — but a poor fit for the hot, mutable datasets AI training pipelines churn through.
  • Cloudflare R2 already offers zero egress and is the centralized benchmark Akave's pricing explicitly targets. R2 wins on latency, ecosystem integrations, and track record; Akave counters with sovereignty, verifiability, and a trust model that doesn't depend on a single provider's uptime — a point sharpened by the global Cloudflare outage in November 2025 that exposed how many "decentralized" apps still lived on one company's edge.
  • MinIO, the open-source self-hosted S3 alternative, recently shifted to a source-only model that spooked enterprises who'd built stacks assuming predictable community editions. Akave has been quietly pitching itself as a migration target for MinIO users who wanted self-host ergonomics without assuming their own operations burden.

The clearest way to understand Akave is as a pricing and interface arbitrage on decentralized storage primitives: take Filecoin's durability, wrap it in S3 semantics, put a flat-rate meter on top, and sell the result to AI teams who are already bleeding on egress.

Why Timing Matters: The Power and Data Gravity Pincer

At NVIDIA GTC 2026, Jensen Huang described AI as a "five-layer cake" with energy forming the foundation — every unit of machine intelligence ultimately a conversion of electricity into computation. The Department of Energy and Lawrence Berkeley National Laboratory project US data centers could consume up to 12% of total US electricity by 2030, up from about 4.4% today (roughly 176 TWh). The IEA's 2026 projection has global data centers hitting 1,000 TWh this year — Japan-scale power consumption, dedicated to compute.

The knock-on effect is that where data sits increasingly determines where compute can run. Hyperscalers are supply-constrained on power. GPU capacity is popping up wherever grid interconnects allow: Texas, the Nordics, the Middle East, secondary US markets. If your training data is pinned to us-east-1 and your GPUs are in Reykjavík or Abu Dhabi, you're paying egress to move bits to the silicon. Zero-egress, compute-agnostic storage turns data into a first-class citizen of a multi-cloud, multi-geography world — exactly the world AI economics is now forcing.

That's the real reason a pricing model like Akave's lands now rather than three years ago. When compute was abundant and cheap, egress was a rounding error. In an AI-constrained grid, egress is strategy.

The Skeptical Case: What Could Go Wrong

Three legitimate concerns temper the bull case.

First, latency and throughput at petabyte scale. AI training pipelines are bandwidth-hungry and latency-sensitive. S3 isn't just cheap storage with a nice API — it's a globally distributed edge network with decades of optimization. Akave's erasure coding and decentralized retrieval add hops. Production customers like 375ai suggest it's viable for common workloads, but teams considering multi-hundred-gigabit-per-second training feeds should benchmark carefully before committing.
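
"Benchmark carefully" can start very small. A minimal sketch of a read-throughput check — the URL is a placeholder, and a real benchmark would repeat reads, vary object sizes, and measure from the actual GPU site:

```javascript
// Minimal read-throughput check for any S3-compatible backend.
// The URL is a placeholder; run this from the same network position
// as your GPUs, repeatedly, before committing a pipeline.
function gbitsPerSecond(bytes, seconds) {
  return (bytes * 8) / 1e9 / seconds;
}

async function benchmarkRead(url) {
  const start = performance.now();
  const body = await (await fetch(url)).arrayBuffer();
  const seconds = (performance.now() - start) / 1000;
  return gbitsPerSecond(body.byteLength, seconds);
}

// Sanity check on the unit math: 1.25 GB read in 10 s is 1 Gbps.
console.log(gbitsPerSecond(1.25e9, 10)); // 1
```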

Second, enterprise procurement inertia. Flat pricing is great; so is sovereignty. But enterprise security, legal, and compliance teams move on a timescale measured in quarters, and DePIN is still a novel procurement category for most Fortune 500 CIOs. Akave's self-hosted O3 gateway is partially an answer to this — "it's our hardware running their software" is easier to approve than "our data lives on a blockchain" — but the sales cycle is real.

Third, economics are only cheap if the network stays healthy. Filecoin and Akave's incentive layers assume a population of storage providers willing to underwrite capacity at the offered price. If AI demand spikes faster than supply, flat pricing either compresses provider margins or quietly gets re-tiered. Hyperscalers can subsidize; DePIN networks have to balance.

None of these are fatal. All of them mean Akave's challenge is less about whether the cost pitch lands and more about whether the operational story is boring enough for a Fortune 500 SRE to sign off.

The Bigger Pattern: Storage as a Wedge Into AI Infrastructure

The most interesting thing about Akave isn't the $14.99 price tag. It's what the price tag is trying to accomplish strategically. Storage is a low-margin commodity, but it's also the layer with the most data gravity — whoever owns the dataset owns the default answer to "where should we train?" and eventually "where should we inference?" The Akash x Akave partnership is a clear signal of this: decentralized GPU compute at 70% below hyperscaler prices means nothing if your data lives somewhere that charges you to leave. Bundle them, and the economics become an integrated alternative to the AWS stack rather than two discounts stapled together.

Expect this pattern to repeat across the DePIN-for-AI category through 2026. Storage networks will court compute networks, compute networks will court inference gateways, and inference gateways will court agent frameworks — all trying to assemble a vertical that can quote a single, predictable price against what is still, from the customer's perspective, a single bundled hyperscaler experience. The winners will be the ones who feel like infrastructure, not like crypto.

Akave is a credible early contender because it refuses to look like crypto at the surface: S3 endpoint, flat rate, audit-friendly receipts, real customers. The decentralized bits are under the hood, where — if Akave is right — they should be.


For developers building the next generation of Web3 and AI-native applications, BlockEden.xyz provides enterprise-grade RPC, indexing, and API infrastructure across 25+ chains, with the reliability profile serious production workloads demand. Explore our API marketplace to build on infrastructure designed for the long haul.

Sources

Bittensor's Conviction Test: Can Locked TAO Save Decentralized AI After the Covenant Shock?

· 9 min read
Dora Noda
Software Engineer

On March 10, 2026, a network of roughly 70 strangers scattered across the open internet finished training a 72-billion-parameter language model that beat LLaMA-2-70B on MMLU. Six weeks later, the same network was trying to stop itself from falling apart.

That whiplash — from a historic technical milestone to a full-blown governance crisis — is the story of Bittensor in 2026. And the fix on the table, a strange new primitive called the Conviction Mechanism, may be the most important governance experiment in crypto-AI this year.

Chrome 146 Shipped WebMCP. Web3 Just Got Its Biggest Distribution Unlock Ever.

· 10 min read
Dora Noda
Software Engineer

On March 10, 2026, Google quietly shipped Chrome 146 to stable. Buried in the release notes — behind yet another round of password-manager tweaks and a tab-groups redesign — was a browser API that will reshape Web3 distribution more than any wallet launch of the last five years.

It's called WebMCP. It lives at navigator.modelContext. And it just gave 3.83 billion Chrome users a native path to transact on-chain without ever installing a wallet.

The quiet feature that breaks the wallet-install bottleneck

For a decade, Web3's growth math looked like this: acquire user → convince user to install MetaMask → convince user to fund wallet → convince user to sign a transaction. Every one of those steps bled 40–70% of the funnel. The entire "crypto UX" discourse has been a running post-mortem on the MetaMask dependency.

WebMCP — the Web Model Context Protocol — removes the first three steps by moving the transaction surface into the browser itself.

Developed jointly by Google and Microsoft engineers and incubated through the W3C's Web Machine Learning community group, WebMCP adapts Anthropic's Model Context Protocol (MCP) for the browser. Any website can now register structured "tools" that AI agents running inside Chrome can discover and call directly, bypassing DOM scraping, button-clicking heuristics, and screen-reader simulation. Google engineer Khushal Sagar described the ambition in one sentence: WebMCP aims to be "the USB-C of AI agent interactions with the web."

That framing undersells what it means for crypto. USB-C standardized hardware connectors. WebMCP standardizes the interface between 3.83 billion browser users, their AI agents, and every on-chain service those agents might need to pay, swap, or settle against.

What Chrome 146 actually shipped

The API surface is deliberately minimal. A site calls navigator.modelContext.registerTool() to expose a named action — say, swapTokens or signPermit — with a JSON schema for its inputs and an execute() handler for its logic. Agents in the browser enumerate those tools the same way they enumerate any MCP server: by asking for a capability list, reading the schema, and invoking with typed parameters.

There are two ways to register:

  • Declarative API: HTML form attributes define standard actions. Zero JavaScript.
  • Imperative API: registerTool(), unregisterTool(), provideContext(), and clearContext() let dynamic apps update their tool surface as state changes.

Both paths present the agent with the same thing — a named tool with a typed contract. No more "find the button that says Confirm," no more brittle Playwright scripts, no more LLM-guessed XPaths. The website tells the agent, in a structured way, what it can do.
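
A sketch of what the imperative path looks like for a hypothetical swapTokens tool. The name/description/inputSchema descriptor shape follows the MCP tool convention described above; whether Chrome's registerTool() accepts exactly this object is an assumption, and the handler logic is illustrative, not a real DEX integration:

```javascript
// Hypothetical WebMCP tool registration. Descriptor shape follows the
// MCP tool convention; schema fields and handler logic are illustrative.
const swapToolDescriptor = {
  name: 'swapTokens',
  description: 'Swap an exact amount of one token for another.',
  inputSchema: {
    type: 'object',
    properties: {
      fromToken: { type: 'string' },
      toToken: { type: 'string' },
      amount: { type: 'number', exclusiveMinimum: 0 },
    },
    required: ['fromToken', 'toToken', 'amount'],
  },
};

// Pure validation the execute() handler runs before touching a chain.
function validateSwapInput(input) {
  const { fromToken, toToken, amount } = input ?? {};
  return typeof fromToken === 'string' &&
         typeof toToken === 'string' &&
         fromToken !== toToken &&
         typeof amount === 'number' && amount > 0;
}

// Register only where the API actually exists (Chrome 146+).
if (typeof navigator !== 'undefined' && navigator.modelContext) {
  navigator.modelContext.registerTool({
    ...swapToolDescriptor,
    async execute(input) {
      if (!validateSwapInput(input)) return { error: 'invalid input' };
      return { status: 'quoted', ...input }; // real handler routes the swap
    },
  });
}
```

The agent never sees a button or a DOM — it sees the schema, fills the typed parameters, and gets a structured result back.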

Chrome 146 Canary carried the feature behind a chrome://flags toggle in February 2026. Stable promotion landed March 10. Microsoft Edge 147 followed within days. That is effectively the entire desktop browser market — Chrome plus Chromium derivatives clear 75% of global browser share, and Statcounter puts Chrome alone at 67.72% in 2026.

Why Web3 protocols are racing to publish WebMCP endpoints

The implications for agentic crypto commerce are immediate, and the protocols paying attention have already started moving.

Consider the stack as it exists today:

  • MCP — how agents discover and call tools.
  • x402 — HTTP 402 revived, pioneered by Coinbase, enabling instant stablecoin payments over plain HTTP. Over 50 million transactions processed by early 2026, with Solana handling roughly 65% of x402 volume across Base, Solana, and BNB Chain.
  • AP2 (Agent Payments Protocol) — Google's coordination layer, built with Coinbase, the Ethereum Foundation, and MetaMask, with an explicit "A2A x402 extension" for crypto settlement.
  • ERC-8004 — Ethereum's emerging agent-execution primitive.
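
The x402 leg of that stack is simple enough to sketch end-to-end. The loop below follows the spec as we understand it — call, receive HTTP 402 with payment requirements, attach a payment proof, retry — but the X-PAYMENT header name should be checked against the current spec, and buildPayment is a stub standing in for a real stablecoin signer:

```javascript
// Minimal x402 client loop: request, get 402 + payment requirements,
// attach proof, retry. Header name per the x402 spec as we understand
// it; buildPayment is a stub for a real signer.
async function fetchWithX402(url, buildPayment, fetchImpl = fetch) {
  const first = await fetchImpl(url);
  if (first.status !== 402) return first;          // no payment required
  const requirements = await first.json();         // what the server accepts
  const proof = await buildPayment(requirements);  // sign a stablecoin payment
  return fetchImpl(url, { headers: { 'X-PAYMENT': proof } });
}
```

The point is that there is no account, no API key, and no session — the entire commercial relationship fits inside two HTTP round trips.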

Before Chrome 146, this stack lived in server-side agent frameworks. An autonomous agent calling a paid API had to run inside someone's managed runtime — OpenAI's Custom Actions, Anthropic's MCP-hosted tools, a Zapier-style broker. The user surface was a chat window, and the distribution bottleneck was whichever AI app the user happened to open that day.

WebMCP collapses that. The browser becomes the runtime. The agent lives one tab over from the website it's transacting with. And crucially, the payment flow doesn't need a pre-installed wallet — the MetaMask+AP2+x402 consortium has already designed the path where a Chrome-native agent negotiates a stablecoin payment, routes it through a user-consented signer, and receives a structured confirmation back as a tool response.

The Linux Foundation's April 2026 announcement that it will house the newly-formed x402 Foundation isn't a coincidence. x402 needs a neutral standards home precisely because Chrome, Edge, and every AI agent vendor are about to treat it as the default payment primitive for WebMCP-exposed tools.

The numbers that make this a category-defining moment

A few data points to anchor scale:

  • 3.83 billion Chrome users worldwide in 2026, per consolidated Statcounter and DemandSage figures.
  • 67.72% global browser market share, up slightly year-over-year — this is not a declining distribution channel.
  • $8 billion in agentic commerce transaction value already flowing in 2026, projected to reach $3.5 trillion by 2031 (Juniper Research).
  • 50+ million x402 transactions processed by Q1 2026, with weekly volume crossing 500,000 by late 2025.
  • 40% of enterprise applications expected to embed task-specific AI agents by end-2026 (Gartner).
  • IDC pegs agentic AI at 10–15% of total IT spending in 2026.

Now multiply: if even 1% of Chrome's 3.83 billion users activate a WebMCP-capable agent (and Google is aggressively pushing Gemini integration in exactly this direction), that is 38 million agent-wielding users with one-click access to any WebMCP-enabled crypto service. No wallet install. No seed phrase ceremony. No "what's gas?" drop-off.

That's a distribution unlock crypto has never had.

The architectural race: who gets to be the wallet?

WebMCP doesn't pick a wallet. That's both its genius and the thing about to trigger a months-long knife fight between incumbents.

Three camps are already staking positions:

  1. Custodial exchange wallets (Coinbase Agentic Wallet, Binance Web3 Wallet). Fastest UX, compliance-friendly, but reintroduces a centralized signer. Coinbase's head start with x402 and Browserbase integration makes it the obvious default for retail agent flows.
  2. Self-custody incumbents (MetaMask, Rabby). MetaMask explicitly positioned itself in the AP2 launch: "Blockchains are the natural payment layer for agents." Their pitch is composability plus true self-custody — the agent negotiates, but the user signs.
  3. Programmatic wallet infrastructure (Privy, Turnkey, MoonPay Open Wallet Standard, Polygon Agent CLI). These target the developer layer: a WebMCP tool that internally creates a scoped, spending-limited wallet for the agent itself, with no human key management at all.

None of these require the user to have anything pre-installed. The agent calls the WebMCP tool, the tool orchestrates the wallet path, and the user gets a single consent prompt. The friction that defined Web3 onboarding for a decade compresses into one modal.

The historical parallel: Service Workers and the PWA unlock

If you want a template for how this plays out, look at Chrome 40 in January 2015, when Service Workers shipped to stable and quietly created the Progressive Web App ecosystem. Nobody noticed on day one. Within two years, every major retail site had a PWA strategy, Twitter Lite was shipping dramatically faster load times in emerging markets, and the mobile web stopped losing ground to native apps for the first time since 2010.

WebMCP has the same shape: boring release-notes entry, fundamental platform capability, multi-year compounding adoption. The companies that ship WebMCP endpoints in Q2 2026 will own the agent-routed traffic when Google flips on Gemini-in-Chrome default agent mode — which every signal suggests is the Chrome 150 or 151 release.

For Web3 protocols, that means the window to be a first-class WebMCP citizen is measured in months, not years. A DEX that exposes swapTokens as a structured tool gets routed by every agent that needs to rebalance a portfolio. A stablecoin issuer that exposes mint and redeem captures every AP2 payment flow that needs on-ramp. A node/API provider that exposes RPC methods as MCP tools becomes the default compute layer for the entire agent economy.

What builders should do on Monday

Three concrete moves, in order of leverage:

  1. Audit your existing API surface for WebMCP-able actions. Anything already behind a REST or GraphQL endpoint is a candidate. Pick the five highest-intent actions (swap, bridge, mint, stake, query-balance) and wrap them with navigator.modelContext.registerTool() behind a feature flag.
  2. Decide your payment posture. Will you accept x402 directly? Require AP2 handshake? Gate tools behind user session cookies? The answer determines whether agents can transact autonomously or require human-in-the-loop. For most protocols, x402 + per-tool spending caps is the right default.
  3. Publish a /.well-known/mcp.json manifest. Chrome 146 doesn't require it yet, but the spec is heading toward automatic tool discovery via well-known URIs. Protocols that publish manifests early will be indexed by agent registries (including the ones Anthropic and Google are building) before their competitors exist in those indexes at all.

The distribution story for Web3 has always been "wait for users to come to us." Chrome 146 inverts it: now agents come to you, at browser scale, with payment rails pre-negotiated. The protocols that show up as structured tools will be the ones the machine economy uses. The ones that don't will be invisible.

BlockEden.xyz powers the RPC and indexing infrastructure that makes WebMCP-exposed Web3 tools fast and reliable across 20+ chains. If you're building agent-ready endpoints, explore our API marketplace — we've already optimized for the high-frequency, low-latency call patterns autonomous agents generate.

Sources