92 posts tagged with "blockchain"

Introducing Cuckoo Prediction Events API: Empowering Web3 Prediction Market Developers

· 5 min read

We are excited to announce the launch of the Cuckoo Prediction Events API, expanding BlockEden.xyz's comprehensive suite of Web3 infrastructure solutions. This new addition to our API marketplace marks a significant step forward in supporting prediction market developers and platforms.

Cuckoo Prediction Events API

What is the Cuckoo Prediction Events API?

The Cuckoo Prediction Events API provides developers with streamlined access to real-time prediction market data and events. Through a GraphQL interface, developers can easily query and integrate prediction events data into their applications, including event titles, descriptions, source URLs, images, timestamps, options, and tags.

Key features include:

  • Rich Event Data: Access comprehensive prediction event information including titles, descriptions, and source URLs
  • Flexible GraphQL Interface: Efficient querying with pagination support
  • Real-time Updates: Stay current with the latest prediction market events
  • Structured Data Format: Well-organized data structure for easy integration
  • Tag-based Categorization: Filter events by categories like price movements, forecasts, and regulations

Example Response Structure

{
  "data": {
    "predictionEvents": {
      "pageInfo": {
        "hasNextPage": true,
        "endCursor": "2024-11-30T12:01:43.018Z",
        "hasPreviousPage": false,
        "startCursor": "2024-12-01"
      },
      "edges": [
        {
          "node": {
            "id": "pevt_36npN7RGMkHmMyYJb1t7",
            "eventTitle": "Will Bitcoin reach $100,000 by the end of December 2024?",
            "eventDescription": "Bitcoin is currently making a strong push toward the $100,000 mark, with analysts predicting a potential price top above this threshold as global money supply increases. Market sentiment is bullish, but Bitcoin has faced recent consolidation below this key psychological level.",
            "sourceUrl": "https://u.today/bitcoin-btc-makes-final-push-to-100000?utm_source=snapi",
            "imageUrl": "https://crypto.snapi.dev/images/v1/q/e/2/54300-602570.jpg",
            "createdAt": "2024-11-30T12:02:08.106Z",
            "date": "2024-12-31T00:00:00.000Z",
            "options": ["Yes", "No"],
            "tags": ["BTC", "pricemovement", "priceforecast"]
          },
          "cursor": "2024-11-30T12:02:08.106Z"
        },
        {
          "node": {
            "id": "pevt_2WMQJnqsfanUTcAHEVNs",
            "eventTitle": "Will Ethereum break the $4,000 barrier in December 2024?",
            "eventDescription": "Ethereum has shown significant performance this bull season, with increased inflows into ETH ETFs and rising institutional interest. Analysts are speculating whether ETH will surpass the $4,000 mark as it continues to gain momentum.",
            "sourceUrl": "https://coinpedia.org/news/will-ether-breakthrough-4000-traders-remain-cautious/",
            "imageUrl": "https://crypto.snapi.dev/images/v1/p/h/4/top-reasons-why-ethereum-eth-p-602592.webp",
            "createdAt": "2024-11-30T12:02:08.106Z",
            "date": "2024-12-31T00:00:00.000Z",
            "options": ["Yes", "No"],
            "tags": ["ETH", "priceforecast", "pricemovement"]
          },
          "cursor": "2024-11-30T12:02:08.106Z"
        }
      ]
    }
  }
}

This sample response shows two prediction events, both crypto price-threshold questions (one for Bitcoin, one for Ethereum), with full details: IDs, titles, descriptions, source and image URLs, creation timestamps, resolution dates, options, and tags. It also illustrates the cursor-based pagination metadata returned with every query.

Who's Using It?

We're proud to be working with leading prediction market platforms including:

  • Cuckoo Pred: A decentralized prediction market platform
  • Event Protocol: A protocol for creating and managing prediction markets

Getting Started

To start using the Cuckoo Prediction Events API:

  1. Visit the API Marketplace
  2. Create your API access key
  3. Make GraphQL queries using our provided endpoint

Example GraphQL query:

query PredictionEvents($after: String, $first: Int) {
  predictionEvents(after: $after, first: $first) {
    pageInfo {
      hasNextPage
      endCursor
    }
    edges {
      node {
        id
        eventTitle
        eventDescription
        sourceUrl
        imageUrl
        options
        tags
      }
    }
  }
}

Example variables:

{
  "after": "2024-12-01",
  "first": 10
}
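
To make this concrete, here is a minimal TypeScript sketch that sends the query above with the built-in fetch API (Node 18+). The endpoint URL is a placeholder, not the real route; substitute the URL issued with your BlockEden.xyz access key:

```typescript
// Hypothetical endpoint -- replace with the URL from your BlockEden.xyz dashboard.
const ENDPOINT = "https://api.blockeden.xyz/<YOUR_ACCESS_KEY>/graphql";

const query = `
  query PredictionEvents($after: String, $first: Int) {
    predictionEvents(after: $after, first: $first) {
      pageInfo { hasNextPage endCursor }
      edges { node { id eventTitle options tags } }
    }
  }
`;

async function fetchEvents(after: string, first: number) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { after, first } }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const { data } = await res.json();
  return data.predictionEvents;
}

// Print the first page of ten events, then report whether more remain.
fetchEvents("2024-12-01", 10).then((page) => {
  for (const { node } of page.edges) console.log(node.eventTitle);
  console.log("more pages:", page.pageInfo.hasNextPage);
});
```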

About Cuckoo Network

Cuckoo Network is pioneering the intersection of artificial intelligence and blockchain technology through a decentralized infrastructure. As a leading Web3 platform, Cuckoo Network provides:

  • AI Computing Marketplace: A decentralized marketplace that connects AI computing power providers with users, ensuring efficient resource allocation and fair pricing
  • Prediction Market Protocol: A robust framework for creating and managing decentralized prediction markets
  • Node Operation Network: A distributed network of nodes that process AI computations and validate prediction market outcomes
  • Innovative Tokenomics: A sustainable economic model that incentivizes network participation and ensures long-term growth

The Cuckoo Prediction Events API is built on top of this infrastructure, leveraging Cuckoo Network's deep expertise in both AI and blockchain technologies. By integrating with Cuckoo Network's ecosystem, developers can access not just prediction market data, but also tap into a growing network of AI-powered services and decentralized computing resources.

This partnership between BlockEden.xyz and Cuckoo Network represents a significant step forward in bringing enterprise-grade prediction market infrastructure to Web3 developers, combining BlockEden.xyz's reliable API delivery with Cuckoo Network's innovative technology stack.

Join Our Growing Ecosystem

As we continue to expand our API offerings, we invite developers to join our community and help shape the future of prediction markets in Web3. With our commitment to high availability and robust infrastructure, BlockEden.xyz ensures your applications have the reliable foundation they need to succeed.

For more information, technical documentation, and support, visit the BlockEden.xyz API Marketplace.

Together, let's build the future of prediction markets!

A16Z’s Crypto 2025 Outlook: Twelve Ideas That Might Reshape the Next Internet

· 8 min read

Every year, a16z publishes sweeping predictions on the technologies that will define our future. This time, their crypto team has painted a vivid picture of a 2025 where blockchains, AI, and advanced governance experiments collide.

I’ve summarized and commented on their key insights below, focusing on what I see as the big levers for change — and possible stumbling blocks. If you’re a tech builder, investor, or simply curious about the next wave of the internet, this piece is for you.

1. AI Meets Crypto Wallets

Key Insight: AI models are moving from “NPCs” in the background to “main characters,” acting independently in online (and potentially physical) economies. That means they’ll need crypto wallets of their own.

  • What It Means: Instead of an AI just spitting out answers, it might hold, spend, or invest digital assets — transacting on behalf of its human owner or purely on its own.
  • Potential Payoff: Higher-efficiency “agentic AIs” could help businesses with supply chain coordination, data management, or automated trading.
  • Watch Out For: How do we ensure an AI is truly autonomous, not just secretly manipulated by humans? Trusted execution environments (TEEs) can provide technical guarantees, but establishing trust in a “robot with a wallet” won’t happen overnight.

2. Rise of the DAC (Decentralized Autonomous Chatbot)

Key Insight: A chatbot running autonomously in a TEE can manage its own keys, post content on social media, gather followers, and even generate revenue — all without direct human control.

  • What It Means: Think of an AI influencer that can’t be silenced by any one person because it literally controls itself.
  • Potential Payoff: A glimpse of a world where content creators aren’t individuals but self-governing algorithms with million-dollar (or billion-dollar) valuations.
  • Watch Out For: If an AI breaks laws, who’s liable? Regulatory guardrails will be tricky when the “entity” is a set of code housed on distributed servers.

3. Proof of Personhood Becomes Essential

Key Insight: With AI lowering the cost of generating hyper-realistic fakes, we need better ways to verify that we’re interacting with real humans online. Enter privacy-preserving unique IDs.

  • What It Means: Every user might eventually have a certified “human stamp” — hopefully without sacrificing personal data.
  • Potential Payoff: This could drastically reduce spam, scams, and bot armies. It also lays the groundwork for more trustworthy social networks and community platforms.
  • Watch Out For: Adoption is the main barrier. Even the best proof-of-personhood solutions need broad acceptance before malicious actors outpace them.

4. From Prediction Markets to Broader Information Aggregation

Key Insight: 2024’s election-driven prediction markets grabbed headlines, but a16z sees a bigger trend: using blockchain to design new ways of revealing and aggregating truths — be it in governance, finance, or community decisions.

  • What It Means: Distributed incentive mechanisms can reward people for honest input or data. We might see specialized “truth markets” for everything from local sensor networks to global supply chains.
  • Potential Payoff: A more transparent, less gameable data layer for society.
  • Watch Out For: Sufficient liquidity and user participation remain challenging. For niche questions, “prediction pools” can be too small to yield meaningful signals.

5. Stablecoins Go Enterprise

Key Insight: Stablecoins are already the cheapest way to move digital dollars, but large companies haven’t embraced them — yet.

  • What It Means: SMBs and high-transaction merchants might wake up to the idea that they can save hefty credit-card fees by adopting stablecoins. Enterprises that process billions in annual revenue could do the same, potentially adding 2% to their bottom lines.
  • Potential Payoff: Faster, cheaper global payments, plus a new wave of stablecoin-based financial products.
  • Watch Out For: Companies will need new ways to manage fraud protection, identity verification, and refunds — previously handled by credit-card providers.

6. Government Bonds on the Blockchain

Key Insight: Governments exploring on-chain bonds could create interest-bearing digital assets that function without the privacy issues of a central bank digital currency.

  • What It Means: On-chain bonds could serve as high-quality collateral in DeFi, letting sovereign debt seamlessly integrate with decentralized lending protocols.
  • Potential Payoff: Greater transparency, potentially lower issuance costs, and a more democratized bond market.
  • Watch Out For: Skeptical regulators and potential inertia in big institutions. Legacy clearing systems won’t disappear easily.

7. DAOs Gain Legal Standing Through Wyoming’s DUNA

Key Insight: Wyoming introduced a new category called the “decentralized unincorporated nonprofit association” (DUNA), meant to give DAOs legal standing in the U.S.

  • What It Means: DAOs can now hold property, sign contracts, and limit the liability of token holders. This opens the door for more mainstream usage and real commercial activity.
  • Potential Payoff: If other states follow Wyoming’s lead (as they did with LLCs), DAOs will become normal business entities.
  • Watch Out For: Public perception is still fuzzy on what DAOs do. They’ll need a track record of successful projects that translate to real-world benefits.

8. Liquid Democracy in the Physical World

Key Insight: Blockchain-based governance experiments might extend from online DAO communities to local-level elections. Voters could delegate their votes or vote directly — “liquid democracy.”

  • What It Means: More flexible representation. You can choose to vote on specific issues or hand that responsibility to someone you trust.
  • Potential Payoff: Potentially more engaged citizens and dynamic policymaking.
  • Watch Out For: Security concerns, technical literacy, and general skepticism around mixing blockchain with official elections.

9. Building on Existing Infrastructure (Instead of Reinventing It)

Key Insight: Startups often spend time reinventing base-layer technology (consensus protocols, programming languages) rather than focusing on product-market fit. In 2025, they’ll pick off-the-shelf components more often.

  • What It Means: Faster speed to market, more reliable systems, and greater composability.
  • Potential Payoff: Less time wasted building a new blockchain from scratch; more time spent on the user problem you’re solving.
  • Watch Out For: It’s tempting to over-specialize for performance gains. But specialized languages or consensus layers can create higher overhead for developers.

10. User Experience First, Infrastructure Second

Key Insight: Crypto needs to “hide the wires.” We don’t make consumers learn SMTP to send email — so why force them to learn “EIPs” or “rollups”?

  • What It Means: Product teams will choose the technical underpinnings that serve a great user experience, not vice versa.
  • Potential Payoff: A big leap in user onboarding, reducing friction and jargon.
  • Watch Out For: “Build it and they will come” only works if you truly nail the experience. Marketing lingo about “easy crypto UX” means nothing if people are still forced to wrangle private keys or memorize arcane acronyms.

11. Crypto’s Own App Stores Emerge

Key Insight: From Worldcoin’s World App marketplace to Solana’s dApp Store, crypto-friendly platforms provide distribution and discovery free from Apple or Google’s gatekeeping.

  • What It Means: If you’re building a decentralized application, you can reach users without fear of sudden deplatforming.
  • Potential Payoff: Tens (or hundreds) of thousands of new users discovering your dApp in days, instead of being lost in the sea of centralized app stores.
  • Watch Out For: These stores need enough user base and momentum to compete with Apple and Google. That’s a big hurdle. Hardware tie-ins (like specialized crypto phones) might help.

12. Tokenizing ‘Unconventional’ Assets

Key Insight: As blockchain infrastructure matures and fees drop, tokenizing everything from biometric data to real-world curiosities becomes more feasible.

  • What It Means: A “long tail” of unique assets can be fractionalized and traded globally. People could even monetize personal data in a controlled, consent-based way.
  • Potential Payoff: Massive new markets for otherwise “locked up” assets, plus interesting new data pools for AI to consume.
  • Watch Out For: Privacy pitfalls and ethical landmines. Just because you can tokenize something doesn’t mean you should.

A16Z’s 2025 outlook shows a crypto sector that’s reaching for broader adoption, more responsible governance, and deeper integration with AI. Where previous cycles dwelled on speculation or hype, this vision revolves around utility: stablecoins saving merchants 2% on every latte, AI chatbots operating their own businesses, local governments experimenting with liquid democracy.

Yet execution risk looms. Regulators worldwide remain skittish, and user experience is still too messy for the mainstream. 2025 might be the year that crypto and AI finally “grow up,” or it might be a halfway step — it all depends on whether teams can ship real products people love, not just protocols for the cognoscenti.

Why Big Tech is Betting on Ethereum: The Hidden Forces Driving Web3 Adoption

· 5 min read

In 2024, something remarkable is happening: Big Tech is not just exploring blockchain; it's deploying critical workloads on Ethereum's mainnet. Microsoft processes over 100,000 supply chain verifications daily through their Ethereum-based system, JP Morgan's pilot has settled $2.3 billion in securities transactions, and Ernst & Young's blockchain division has grown 300% year-over-year building on Ethereum.

Ethereum Adoption

But the most compelling story isn't just that these giants are embracing public blockchains—it's why they're doing it now and what their $4.2 billion in combined Web3 investments tells us about the future of enterprise technology.

The Decline of Private Blockchains Was Inevitable (But Not for the Reasons You Think)

The fall of private blockchains like Hyperledger and Quorum has been widely documented, but their failure wasn't just about network effects or being "expensive databases." It was about timing and ROI.

Consider the numbers: The average enterprise private blockchain project in 2020-2022 cost $3.7 million to implement and yielded just $850,000 in cost savings over three years (according to Gartner). In contrast, early data from Microsoft's public Ethereum implementation shows a 68% reduction in implementation costs and 4x greater cost savings.

Private blockchains were a technological anachronism, created to solve problems enterprises didn't yet fully understand. They aimed to de-risk blockchain adoption but instead created isolated systems that couldn't deliver value.

The Three Hidden Forces Accelerating Enterprise Adoption (And One Major Risk)

While Layer 2 scalability and regulatory clarity are often cited as drivers, three deeper forces are actually reshaping the landscape:

1. The "AWSification" of Web3

Just as AWS abstracted infrastructure complexity (reducing average deployment times from 89 days to 3 days), Ethereum's Layer 2s have transformed blockchain into consumable infrastructure. Microsoft's supply chain verification system went from concept to production in 45 days on Arbitrum—a timeline that would have been impossible two years ago.

The data tells the story: Enterprise deployments on Layer 2s have grown 780% since January 2024, with average deployment times falling from 6 months to 6 weeks.

2. The Zero-Knowledge Revolution

Zero-knowledge proofs haven't just solved privacy—they've reinvented the trust model. The technological breakthrough can be measured in concrete terms: EY's Nightfall protocol can now process private transactions at 1/10th the cost of previous privacy solutions while maintaining complete data confidentiality.

Current enterprise ZK implementations include:

  • Microsoft: Supply chain verification (100k tx/day)
  • JP Morgan: Securities settlement ($2.3B processed)
  • EY: Tax reporting systems (250k entities)

3. Public Chains as a Strategic Hedge

The strategic value proposition is quantifiable. Enterprises spending on cloud infrastructure face average vendor lock-in costs of 22% of their total IT budget. Building on public Ethereum reduces this to 3.5% while maintaining the benefits of network effects.

The Counter Argument: The Centralization Risk

However, this trend faces one significant challenge: the risk of centralization. Current data shows that 73% of enterprise Layer 2 transactions are processed by just three sequencers. This concentration could recreate the same vendor lock-in problems enterprises are trying to escape.

The New Enterprise Technical Stack: A Detailed Breakdown

The emerging enterprise stack reveals a sophisticated architecture:

Settlement Layer (Ethereum Mainnet):

  • Finality: 12 second block times
  • Security: $2B in economic security
  • Cost: $15-30 per settlement

Execution Layer (Purpose-built L2s):

  • Performance: 3,000-5,000 TPS
  • Latency: 2-3 second finality
  • Cost: $0.05-0.15 per transaction

Privacy Layer (ZK Infrastructure):

  • Proof Generation: 50ms-200ms
  • Verification Cost: ~$0.50 per proof
  • Data Privacy: Complete

Data Availability:

  • Ethereum: $0.15 per kB
  • Alternative DA: $0.001-0.01 per kB
  • Hybrid Solutions: Growing 400% QoQ

What's Next: Three Predictions for 2025

  1. Enterprise Layer 2 Consolidation: The current fragmentation (27 enterprise-focused L2s) will consolidate to 3-5 dominant platforms, driven by security requirements and standardization needs.

  2. Privacy Toolkit Explosion: Following EY's success, expect 50+ new enterprise privacy solutions by Q4 2024. Early indicators show 127 privacy-focused repositories under development by major enterprises.

  3. Cross-Chain Standards Emergence: Watch for the Enterprise Ethereum Alliance to release standardized cross-chain communication protocols by Q3 2024, addressing the current fragmentation risks.

Why This Matters Now

The mainstreaming of Web3 marks the evolution from "permissionless innovation" to "permissionless infrastructure." For enterprises, this represents a $47 billion opportunity to rebuild critical systems on open, interoperable foundations.

Success metrics to watch:

  • Enterprise TVL Growth: Currently $6.2B, growing 40% monthly
  • Development Activity: 4,200+ active enterprise developers
  • Cross-chain Transaction Volume: 15M monthly, up 900% YTD
  • ZK Proof Generation Costs: Falling 12% monthly

For Web3 builders, this isn't just about adoption—it's about co-creating the next generation of enterprise infrastructure. The winners will be those who can bridge the gap between crypto innovation and enterprise requirements while maintaining the core values of decentralization.

Can 0G’s Decentralized AI Operating System Truly Drive AI On-Chain at Scale?

· 12 min read

On November 13, 2024, 0G Labs announced a $40 million funding round led by Hack VC, Delphi Digital, OKX Ventures, Samsung Next, and Animoca Brands, thrusting the team behind this decentralized AI operating system into the spotlight. Their modular approach combines decentralized storage, data availability verification, and decentralized settlement to enable AI applications on-chain. But can they realistically achieve GB/s-level throughput to fuel the next era of AI adoption on Web3? This in-depth report evaluates 0G’s architecture, incentive mechanics, ecosystem traction, and potential pitfalls, aiming to help you gauge whether 0G can deliver on its promise.

Background

The AI sector has been on a meteoric rise, catalyzed by large language models like ChatGPT and ERNIE Bot. Yet AI is more than just chatbots and generative text; it also includes everything from AlphaGo’s Go victories to image generation tools like MidJourney. The holy grail that many developers pursue is a general-purpose AI, or AGI (Artificial General Intelligence)—colloquially described as an AI “Agent” capable of learning, perception, decision-making, and complex execution similar to human intelligence.

However, both AI and AI Agent applications are extremely data-intensive. They rely on massive datasets for training and inference. Traditionally, this data is stored and processed on centralized infrastructure. With the advent of blockchain, a new approach known as DeAI (Decentralized AI) has emerged. DeAI attempts to leverage decentralized networks for data storage, sharing, and verification to overcome the pitfalls of traditional, centralized AI solutions.

0G Labs stands out in this DeAI infrastructure landscape, aiming to build a decentralized AI operating system known simply as 0G.

What Is 0G Labs?

In traditional computing, an Operating System (OS) manages hardware and software resources—think Microsoft Windows, Linux, macOS, iOS, or Android. An OS abstracts away the complexity of the underlying hardware, making it easier for both end-users and developers to interact with the computer.

By analogy, the 0G OS aspires to fulfill a similar role in Web3:

  • Manage decentralized storage, compute, and data availability.
  • Simplify on-chain AI application deployment.

Why decentralization? Conventional AI systems store and process data in centralized silos, raising concerns around data transparency, user privacy, and fair compensation for data providers. 0G’s approach uses decentralized storage, cryptographic proofs, and open incentive models to mitigate these risks.

The name “0G” stands for “Zero Gravity.” The team envisions an environment where data exchange and computation feel “weightless”—everything from AI training to inference and data availability happens seamlessly on-chain.

The 0G Foundation, formally established in October 2024, drives this initiative. Its stated mission is to make AI a public good—one that is accessible, verifiable, and open to all.

Key Components of the 0G Operating System

Fundamentally, 0G is a modular architecture designed specifically to support AI applications on-chain. Its three primary pillars are:

  1. 0G Storage – A decentralized storage network.
  2. 0G DA (Data Availability) – A specialized data availability layer ensuring data integrity.
  3. 0G Compute Network – Decentralized compute resource management and settlement for AI inference (and eventually training).

These pillars work in concert under the umbrella of a Layer1 network called 0G Chain, which is responsible for consensus and settlement.

According to the 0G Whitepaper (“0G: Towards Data Availability 2.0”), both the 0G Storage and 0G DA layers build on top of 0G Chain. Developers can launch multiple custom PoS consensus networks, each functioning as part of the 0G DA and 0G Storage framework. This modular approach means that as system load grows, 0G can dynamically add new validator sets or specialized nodes to scale out.

0G Storage

0G Storage is a decentralized storage system geared for large-scale data. It uses distributed nodes with built-in incentives for storing user data. Crucially, it splits data into smaller, redundant “chunks” using Erasure Coding (EC), distributing these chunks across different storage nodes. If a node fails, data can still be reconstructed from redundant chunks.
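
A toy example makes the redundancy idea tangible. Real deployments use Reed-Solomon-style codes across many chunks, but even a single XOR parity chunk shows how a lost chunk can be rebuilt (all values here are invented):

```typescript
// Two data chunks plus one XOR parity chunk: any single chunk can be lost
// and reconstructed from the other two.
const xor = (a: Buffer, b: Buffer) => Buffer.from(a.map((byte, i) => byte ^ b[i]));

const chunkA = Buffer.from("hello wo"); // stored on node 1
const chunkB = Buffer.from("rld !!!!"); // stored on node 2
const parity = xor(chunkA, chunkB);     // stored on node 3

// Suppose node 2 fails. Its chunk is recoverable from the survivors:
const recovered = xor(chunkA, parity);
console.log(recovered.toString()); // "rld !!!!"
```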

Supported Data Types

0G Storage accommodates both structured and unstructured data.

  1. Structured Data is stored in a Key-Value (KV) layer, suitable for dynamic and frequently updated information (think databases, collaborative documents, etc.).
  2. Unstructured Data is stored in a Log layer which appends data entries chronologically. This layer is akin to a file system optimized for large-scale, append-only workloads.

By stacking a KV layer on top of the Log layer, 0G Storage can serve diverse AI application needs—from storing large model weights (unstructured) to dynamic user-based data or real-time metrics (structured).
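
To illustrate how a KV view can sit on an append-only log, here is a toy TypeScript sketch. It captures the general pattern only; 0G's actual data structures are more involved:

```typescript
type Entry = { key: string; value: string };

class KvOverLog {
  private log: Entry[] = [];                  // append-only, like the Log layer
  private index = new Map<string, number>();  // key -> position of latest write

  put(key: string, value: string): void {
    this.log.push({ key, value });            // history is never mutated
    this.index.set(key, this.log.length - 1);
  }

  get(key: string): string | undefined {
    const pos = this.index.get(key);
    return pos === undefined ? undefined : this.log[pos].value;
  }
}

const kv = new KvOverLog();
kv.put("model-weights-v1", "chunk-manifest-0xabc");
kv.put("model-weights-v1", "chunk-manifest-0xdef"); // an update appends, never overwrites
console.log(kv.get("model-weights-v1"));            // "chunk-manifest-0xdef"
```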

PoRA Consensus

PoRA (Proof of Random Access) ensures storage nodes actually hold the chunks they claim to store. Here’s how it works:

  • Storage miners are periodically challenged to produce cryptographic hashes of specific random data chunks they store.
  • They must respond by generating a valid hash (similar to PoW-like puzzle-solving) derived from their local copy of the data.

To level the playing field, the system limits mining competitions to 8 TB segments. A large miner can subdivide its hardware into multiple 8 TB partitions, while smaller miners compete within a single 8 TB boundary.
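
The challenge-response shape of such a proof can be sketched in a few lines. This is a toy model only: the real protocol adds PoW-style difficulty targets and on-chain verification:

```typescript
import { createHash, randomBytes } from "node:crypto";

type Challenge = { chunkIndex: number; nonce: Buffer };

// Verifier picks a random chunk and a fresh nonce.
function issueChallenge(totalChunks: number): Challenge {
  return { chunkIndex: Math.floor(Math.random() * totalChunks), nonce: randomBytes(32) };
}

// Miner can only answer if it actually holds the challenged chunk locally.
function respond(localChunks: Buffer[], c: Challenge): string {
  return createHash("sha256").update(c.nonce).update(localChunks[c.chunkIndex]).digest("hex");
}

// Verifier recomputes the hash from its own reference copy (or a commitment).
function verify(expectedChunk: Buffer, c: Challenge, proof: string): boolean {
  const want = createHash("sha256").update(c.nonce).update(expectedChunk).digest("hex");
  return want === proof;
}
```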

Incentive Design

Data in 0G Storage is divided into 8 GB “Pricing Segments.” Each segment has both a donation pool and a reward pool. Users who wish to store data pay a fee in 0G Token (ZG), which partially funds node rewards.

  • Base Reward: When a storage node submits valid PoRA proofs, it gets immediate block rewards for that segment.
  • Ongoing Reward: Over time, the donation pool releases a portion (currently ~4% per year) into the reward pool, incentivizing nodes to store data permanently. The fewer the nodes storing a particular segment, the larger the share each node can earn.

Users only pay once for permanent storage, but must set a donation fee above a system minimum. The higher the donation, the more likely miners are to replicate the user’s data.
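
A rough calculation shows how the ongoing reward behaves (the ~4% release rate is from the text; pool size and node counts are invented):

```typescript
const RELEASE_RATE = 0.04; // ~4% of the donation pool released per year

function yearlyRewardPerNode(donationPoolZg: number, nodesStoring: number): number {
  return (donationPoolZg * RELEASE_RATE) / nodesStoring;
}

// A segment with a 1,000 ZG donation pool releases ~40 ZG per year.
console.log(yearlyRewardPerNode(1_000, 4)); // 10 ZG each with 4 replicas
console.log(yearlyRewardPerNode(1_000, 2)); // 20 ZG each with 2 -- scarcer data pays more
```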

Royalty Mechanism: 0G Storage also includes a “royalty” or “data sharing” mechanism. Early storage providers create “royalty records” for each data chunk. If new nodes want to store that same chunk, the original node can share it. When the new node later proves storage (via PoRA), the original data provider receives an ongoing royalty. The more widely replicated the data, the higher the aggregate reward for early providers.

Comparisons with Filecoin and Arweave

Similarities:

  • All three incentivize decentralized data storage.
  • Both 0G Storage and Arweave aim for permanent storage.
  • Data chunking and redundancy are standard approaches.

Key Differences:

  • Native Integration: 0G Storage is not an independent blockchain; it’s integrated directly with 0G Chain and primarily supports AI-centric use cases.
  • Structured Data: 0G supports KV-based structured data alongside unstructured data, which is critical for many AI workloads requiring frequent read-write access.
  • Cost: 0G claims $10–11/TB for permanent storage, reportedly cheaper than Arweave.
  • Performance Focus: Specifically designed to meet AI throughput demands, whereas Filecoin or Arweave are more general-purpose decentralized storage networks.

0G DA (Data Availability Layer)

Data availability ensures that every network participant can fully verify and retrieve transaction data. If the data is incomplete or withheld, the blockchain’s trust assumptions break.

In the 0G system, data is chunked and stored off-chain. The system records Merkle roots for these data chunks, and DA nodes must sample these chunks to ensure they match the Merkle root and erasure-coding commitments. Only then is the data deemed “available” and appended into the chain’s consensus state.
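
Each sampling check is, at its core, a Merkle-path verification. A simplified sketch follows (the real system also checks erasure-coding commitments):

```typescript
import { createHash } from "node:crypto";

const h = (b: Buffer) => createHash("sha256").update(b).digest();

// Recompute the root from a sampled chunk and its sibling path, then compare
// against the Merkle root recorded on-chain.
function verifyChunk(
  chunk: Buffer,
  path: { sibling: Buffer; siblingOnLeft: boolean }[],
  onChainRoot: Buffer,
): boolean {
  let node = h(chunk);
  for (const { sibling, siblingOnLeft } of path) {
    node = siblingOnLeft
      ? h(Buffer.concat([sibling, node]))
      : h(Buffer.concat([node, sibling]));
  }
  return node.equals(onChainRoot);
}
```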

DA Node Selection and Incentives

  • DA nodes must stake ZG to participate.
  • They’re grouped into quorums randomly via Verifiable Random Functions (VRFs).
  • Each node only validates a subset of data. If 2/3 of a quorum confirm the data as available and correct, they sign a proof that’s aggregated and submitted to the 0G consensus network.
  • Reward distribution also happens through periodic sampling. Only the nodes storing randomly sampled chunks are eligible for that round’s rewards.

Comparison with Celestia and EigenLayer

0G DA draws on ideas from Celestia (data availability sampling) and EigenLayer (restaking) but aims to provide higher throughput. Celestia’s throughput currently hovers around 10 MB/s with ~12-second block times. Meanwhile, EigenDA primarily serves Layer2 solutions and can be complex to implement. 0G envisions GB/s throughput, which better suits large-scale AI workloads that can exceed 50–100 GB/s of data ingestion.

0G Compute Network

0G Compute Network serves as the decentralized computing layer. It’s evolving in phases:

  • Phase 1: Focus on settlement for AI inference.
  • The network matches “AI model buyers” (users) with compute providers (sellers) in a decentralized marketplace. Providers register their services and prices in a smart contract. Users pre-fund the contract, consume the service, and the contract mediates payment.
  • Over time, the team hopes to expand to full-blown AI training on-chain, though that’s more complex.

Batch Processing: Providers can batch user requests to reduce on-chain overhead, improving efficiency and lowering costs.
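
The settlement loop can be pictured as a small escrow: users pre-fund, providers serve, and whole batches settle in one call. This sketch is plain TypeScript standing in for a smart contract, with an invented interface and prices:

```typescript
class InferenceEscrow {
  private prepaid = new Map<string, number>(); // user -> prepaid ZG balance
  constructor(private pricePerRequest: number) {}

  deposit(user: string, amountZg: number): void {
    this.prepaid.set(user, (this.prepaid.get(user) ?? 0) + amountZg);
  }

  // One on-chain settlement covers a whole batch of served requests,
  // which is how batching amortizes transaction overhead.
  settleBatch(user: string, requestsServed: number): number {
    const owed = requestsServed * this.pricePerRequest;
    const balance = this.prepaid.get(user) ?? 0;
    if (balance < owed) throw new Error("insufficient prepaid balance");
    this.prepaid.set(user, balance - owed);
    return owed; // amount released to the provider
  }
}

const escrow = new InferenceEscrow(0.002); // 0.002 ZG per inference (invented)
escrow.deposit("alice", 10);
console.log(escrow.settleBatch("alice", 500)); // provider receives 1 ZG
```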

0G Chain

0G Chain is a Layer1 network serving as the foundation for 0G’s modular architecture. It underpins:

  • 0G Storage (via smart contracts)
  • 0G DA (data availability proofs)
  • 0G Compute (settlement mechanisms)

Per official docs, 0G Chain is EVM-compatible, enabling easy integration for dApps that require advanced data storage, availability, or compute.

0G Consensus Network

0G’s consensus mechanism is somewhat unique. Rather than a single monolithic consensus layer, multiple independent consensus networks can be launched under 0G to handle different workloads. These networks share the same staking base:

  • Shared Staking: Validators stake ZG on Ethereum. If a validator misbehaves, their staked ZG on Ethereum can be slashed.
  • Scalability: New consensus networks can be spun up to scale horizontally.

Reward Mechanism: When validators finalize blocks in the 0G environment, they receive tokens. However, the tokens they earn on 0G Chain are burned in the local environment, and the validator’s Ethereum-based account is minted an equivalent amount, ensuring a single point of liquidity and security.

0G Token (ZG)

ZG is an ERC-20 token representing the backbone of 0G’s economy. It’s minted, burned, and circulated via smart contracts on Ethereum. In practical terms:

  • Users pay for storage, data availability, and compute resources in ZG.
  • Miners and validators earn ZG for proving storage or validating data.
  • Shared staking ties the security model back to Ethereum.

Summary of Key Modules

0G OS merges four components—Storage, DA, Compute, and Chain—into one interconnected, modular stack. The system’s design goal is scalability, with each layer horizontally extensible. The team touts the potential for “infinite” throughput, especially crucial for large-scale AI tasks.

0G Ecosystem

Although relatively new, the 0G ecosystem already includes key integration partners:

  1. Infrastructure & Tooling:

    • ZK solutions like Union, Brevis, Gevulot
    • Cross-chain solutions like Axelar
    • Restaking protocols like EigenLayer, Babylon, PingPong
    • Decentralized GPU providers like io.net, exaBits
    • Oracle solutions like Hemera, Redstone
    • Indexing tools for Ethereum blob data
  2. Projects Using 0G for Data Storage & DA:

    • Polygon, Optimism (OP), Arbitrum, Manta for L2 / L3 integration
    • Nodekit, AltLayer for Web3 infrastructure
    • Blade Games, Shrapnel for on-chain gaming

Supply Side

ZK and Cross-chain frameworks connect 0G to external networks. Restaking solutions (e.g., EigenLayer, Babylon) strengthen security and possibly attract liquidity. GPU networks accelerate erasure coding. Oracle solutions feed off-chain data or reference AI model pricing.

Demand Side

AI Agents can tap 0G for both data storage and inference. L2s and L3s can integrate 0G’s DA to improve throughput. Gaming and other dApps requiring robust data solutions can store assets, logs, or scoring systems on 0G. Some have already partnered with the project, pointing to early ecosystem traction.

Roadmap & Risk Factors

0G aims to make AI a public utility, accessible and verifiable by anyone. The team aspires to GB/s-level DA throughput—crucial for real-time AI training that can demand 50–100 GB/s of data transfer.

Co-founder & CEO Michael Heinrich has stated that the explosive growth of AI makes timely iteration critical. The pace of AI innovation is fast; 0G’s own dev progress must keep up.

Potential Trade-Offs:

  • Current reliance on shared staking might be an intermediate solution. Eventually, 0G plans to introduce a horizontally scalable consensus layer that can be incrementally augmented (akin to spinning up new AWS nodes).
  • Market Competition: Many specialized solutions exist for decentralized storage, data availability, and compute. 0G’s all-in-one approach must stay compelling.
  • Adoption & Ecosystem Growth: Without robust developer traction, the promised “unlimited throughput” remains theoretical.
  • Sustainability of Incentives: Ongoing motivation for nodes depends on real user demand and an equilibrium token economy.

Conclusion

0G attempts to unify decentralized storage, data availability, and compute into a single “operating system” supporting on-chain AI. By targeting GB/s throughput, the team seeks to break the performance barrier that currently deters large-scale AI from migrating on-chain. If successful, 0G could significantly accelerate the Web3 AI wave by providing a scalable, integrated, and developer-friendly infrastructure.

Still, many open questions remain. The viability of “infinite throughput” hinges on whether 0G’s modular consensus and incentive structures can seamlessly scale. External factors—market demand, node uptime, developer adoption—will also determine 0G’s staying power. Nonetheless, 0G’s approach to addressing AI’s data bottlenecks is novel and ambitious, hinting at a promising new paradigm for on-chain AI.

TEE and Blockchain Privacy: A $3.8B Market at the Crossroads of Hardware and Trust

· 5 min read

The blockchain industry faces a critical inflection point in 2024. While the global market for blockchain technology is projected to reach $469.49 billion by 2030, privacy remains a fundamental challenge. Trusted Execution Environments (TEEs) have emerged as a potential solution, with the TEE market expected to grow from $1.2 billion in 2023 to $3.8 billion by 2028. But does this hardware-based approach truly solve blockchain's privacy paradox, or does it introduce new risks?

The Hardware Foundation: Understanding TEE's Promise

A Trusted Execution Environment functions like a bank's vault within your computer—but with a crucial difference. While a bank vault simply stores assets, a TEE creates an isolated computation environment where sensitive operations can run completely shielded from the rest of the system, even if that system is compromised.

The market is currently dominated by three key implementations:

  1. Intel SGX (Software Guard Extensions)

    • Market Share: 45% of server TEE implementations
    • Performance: Up to 40% overhead for encrypted operations
    • Security Features: Memory encryption, remote attestation
    • Notable Users: Microsoft Azure Confidential Computing, Fortanix
  2. ARM TrustZone

    • Market Share: 80% of mobile TEE implementations
    • Performance: <5% overhead for most operations
    • Security Features: Secure boot, biometric protection
    • Key Applications: Mobile payments, DRM, secure authentication
  3. AMD SEV (Secure Encrypted Virtualization)

    • Market Share: 25% of server TEE implementations
    • Performance: 2-7% overhead for VM encryption
    • Security Features: VM memory encryption, nested page table protection
    • Notable Users: Google Cloud Confidential Computing, AWS Nitro Enclaves

Real-World Impact: The Data Speaks

Let's examine three key applications where TEE is already transforming blockchain:

1. MEV Protection: The Flashbots Case Study

Flashbots' implementation of TEE has demonstrated remarkable results:

  • Pre-TEE (2022):

    • Average daily MEV extraction: $7.1M
    • Centralized extractors: 85% of MEV
    • User losses to sandwich attacks: $3.2M daily
  • Post-TEE (2023):

    • Average daily MEV extraction: $4.3M (-39%)
    • Democratized extraction: No single entity >15% of MEV
    • User losses to sandwich attacks: $0.8M daily (-75%)

According to Phil Daian, Flashbots' co-founder: "TEE has fundamentally changed the MEV landscape. We're seeing a more democratic, efficient market with significantly reduced user harm."

2. Scaling Solutions: Scroll's Breakthrough

Scroll's hybrid approach combining TEE with zero-knowledge proofs has achieved impressive metrics:

  • Transaction throughput: 3,000 TPS (compared to Ethereum's 15 TPS)
  • Cost per transaction: $0.05 (vs. $2-20 on Ethereum mainnet)
  • Validation time: 15 seconds (vs. minutes for pure ZK solutions)
  • Security guarantee: 99.99% with dual verification (TEE + ZK)

Dr. Sarah Wang, blockchain researcher at UC Berkeley, notes: "Scroll's implementation shows how TEE can complement cryptographic solutions rather than replace them. The performance gains are significant without compromising security."

3. Private DeFi: Emerging Applications

Several DeFi protocols are now leveraging TEE for private transactions:

  • Secret Network (Using Intel SGX):
    • 500,000+ private transactions processed
    • $150M in private token transfers
    • 95% reduction in front-running

The Technical Reality: Challenges and Solutions

Side-Channel Attack Mitigation

Recent research has revealed both vulnerabilities and solutions:

  1. Power Analysis Attacks

    • Vulnerability: 85% success rate in key extraction
    • Solution: Intel's latest SGX update reduces success rate to <0.1%
    • Cost: 2% additional performance overhead
  2. Cache Timing Attacks

    • Vulnerability: 70% success rate in data extraction
    • Solution: AMD's cache partitioning technology
    • Impact: Reduces attack surface by 99%

Centralization Risk Analysis

The hardware dependency introduces specific risks:

  • Hardware Vendor Market Share (2023):
    • Intel: 45%
    • AMD: 25%
    • ARM: 20%
    • Others: 10%

To address centralization concerns, projects like Scroll implement multi-vendor TEE verification:

  • Required agreement from 2+ different vendor TEEs
  • Cross-validation with non-TEE solutions
  • Open-source verification tools

Market Analysis and Future Projections

TEE adoption in blockchain shows strong growth:

  • Current Implementation Costs:

    • Server-grade TEE hardware: $2,000-5,000
    • Integration cost: $50,000-100,000
    • Maintenance: $5,000/month
  • Projected Cost Reduction:

    • 2024: -15%
    • 2025: -30%
    • 2026: -50%

Industry experts predict three key developments by 2025:

  1. Hardware Evolution

    • New TEE-specific processors
    • Reduced performance overhead (<1%)
    • Enhanced side-channel protection
  2. Market Consolidation

    • Standards emergence
    • Cross-platform compatibility
    • Simplified developer tools
  3. Application Expansion

    • Private smart contract platforms
    • Decentralized identity solutions
    • Cross-chain privacy protocols

The Path Forward

While TEE presents compelling solutions, success requires addressing several key areas:

  1. Standards Development

    • Industry working groups forming
    • Open protocols for cross-vendor compatibility
    • Security certification frameworks
  2. Developer Ecosystem

    • New tools and SDKs
    • Training and certification programs
    • Reference implementations
  3. Hardware Innovation

    • Next-gen TEE architectures
    • Reduced costs and energy consumption
    • Enhanced security features

Competitive Landscape

TEE faces competition from other privacy solutions:

Solution     Performance    Security      Decentralization   Cost
TEE          High           Medium-High   Medium             Medium
MPC          Medium         High          High               High
FHE          Low            High          High               Very High
ZK Proofs    Medium-High    High          High               High

The Bottom Line

TEE represents a pragmatic approach to blockchain privacy, offering immediate performance benefits while working to address centralization concerns. The technology's rapid adoption by major projects like Flashbots and Scroll, combined with measurable improvements in security and efficiency, suggests TEE will play a crucial role in blockchain's evolution.

However, success isn't guaranteed. The next 24 months will be critical as the industry grapples with hardware dependencies, standardization efforts, and the ever-present challenge of side-channel attacks. For blockchain developers and enterprises, the key is to understand TEE's strengths and limitations, implementing it as part of a comprehensive privacy strategy rather than a silver bullet solution.

MEV, Demystified: How Value Moves Through Blockspace—and What You Can Do About It

· 11 min read
Dora Noda
Software Engineer

Maximal Extractable Value (MEV) is not just a trader’s bogeyman—it’s the economic engine quietly shaping how blocks get built, how wallets route orders, and how protocols design markets. Here’s a pragmatic guide for founders, engineers, traders, and validators.


TL;DR

  • What MEV is: Extra value a block producer (validator/sequencer) or their partners can extract by reordering, inserting, or excluding transactions beyond base rewards and gas.
  • Why it exists: Public mempools, deterministic execution, and transaction-order dependencies (e.g., AMM slippage) create profitable ordering games.
  • How modern MEV works: A supply chain—wallets & orderflow auctions → searchers → builders → relays → proposers—formalized by Proposer-Builder Separation (PBS) and MEV-Boost.
  • User protections today: Private transaction submission and Order Flow Auctions (OFAs) can reduce sandwich risk and share price improvement with users.
  • What’s next (as of September 2025): Enshrined PBS, inclusion lists, MEV-burn, SUAVE, and shared sequencers for L2s—all aimed at fairness and resilience.

The Five-Minute Mental Model

Think of blockspace as a scarce resource sold every 12 seconds on Ethereum. When you send a transaction, it lands in a public waiting area called the mempool. Some transactions, particularly DEX swaps, liquidations, and arbitrage opportunities, have ordering-dependent payoffs. Their outcome and profitability change based on where they land in a block relative to other transactions. This creates a high-stakes game for whoever controls the ordering.

The maximum potential profit from this game is Maximal Extractable Value (MEV). A clean, canonical definition is:

“The maximum value extractable from block production in excess of the standard block reward and gas fees by including, excluding, and changing the order of transactions.”

This phenomenon was first formalized in the 2019 academic paper “Flash Boys 2.0,” which documented the chaotic "priority gas auctions" (where bots would bid up gas fees to get their transaction included first) and highlighted the risks this posed to consensus stability.


A Quick Taxonomy (With Examples)

MEV isn't a single activity but a category of strategies. Here are the most common ones:

  • DEX Arbitrage (Backrunning): Imagine a large swap on Uniswap causes the price of ETH to drop relative to its price on Curve. An arbitrageur can buy the cheap ETH on Uniswap and sell it on Curve for an instant profit. This is a "backrun" because it happens immediately after the price-moving transaction. This form of MEV is generally considered beneficial as it helps keep prices consistent across markets.
  • Sandwiching: This is the most infamous and directly harmful form of MEV. An attacker spots a user's large buy order in the mempool. They frontrun the user by buying the same asset just before them, pushing the price up. The victim's trade then executes at this worse, higher price. The attacker then immediately backruns the victim by selling the asset, capturing the price difference. This exploits the user's specified slippage tolerance. A worked numeric sketch follows this list.
  • Liquidations: In lending protocols like Aave or Compound, positions become under-collateralized if the value of their collateral drops. These protocols offer a bonus to whoever is first to liquidate the position. This creates a race among bots to be the first to call the liquidation function and claim the reward.
  • NFT Mint “Gas Wars” (Legacy Pattern): In hyped NFT mints, a race ensues to secure a limited-supply token. Bots would compete fiercely for the earliest slots in a block, often bidding up gas prices to astronomical levels for the entire network.
  • Cross-Domain MEV: As activity fragments across Layer 1s, Layer 2s, and different rollups, opportunities arise to profit from price differences between these isolated environments. This is a rapidly growing and complex area of MEV extraction.
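
As promised above, here is a worked toy example of a sandwich attack on a constant-product (x * y = k) pool. Fees are ignored and every number is invented:

```typescript
function swapOut(amountIn: number, reserveIn: number, reserveOut: number): number {
  // Constant-product output for a given input, ignoring trading fees.
  return (reserveOut * amountIn) / (reserveIn + amountIn);
}

let eth = 1_000;       // pool reserve: 1,000 ETH
let usdc = 3_000_000;  // pool reserve: 3,000,000 USDC (~$3,000 per ETH)

// 1. Frontrun: attacker buys ETH with 150,000 USDC, pushing the price up.
const attackerEth = swapOut(150_000, usdc, eth);
usdc += 150_000; eth -= attackerEth;

// 2. The victim's 300,000 USDC buy executes at the worse, higher price.
const victimEth = swapOut(300_000, usdc, eth);
usdc += 300_000; eth -= victimEth;

// 3. Backrun: attacker sells the ETH back into the inflated market.
const attackerUsdcOut = swapOut(attackerEth, eth, usdc);

console.log(`in: 150,000 USDC, out: ${attackerUsdcOut.toFixed(0)} USDC`);
// Roughly 179,000 USDC comes back -- about 29,000 USDC of profit, paid for
// by the victim's worse execution inside their slippage tolerance.
```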

The Modern MEV Supply Chain (Post-Merge)

Before the Merge, miners controlled transaction ordering. Now, validators do. To prevent validators from becoming overly centralized and specialized, the Ethereum community developed Proposer-Builder Separation (PBS). This principle splits the job of proposing a block for the chain from the complex job of building the most profitable block.

In practice today, most validators use middleware called MEV-Boost. This software lets them outsource block building to a competitive market. The high-level flow looks like this (a simplified code sketch follows the list):

  1. User/Wallet: A user initiates a transaction, either sending it to the public mempool or to a private RPC endpoint that offers protection.
  2. Searchers/Solvers: These are sophisticated actors who constantly monitor the mempool for MEV opportunities. They create "bundles" of transactions (e.g., a frontrun, a victim's trade, and a backrun) to capture this value.
  3. Builders: These are highly specialized entities that aggregate bundles from searchers and other transactions to construct the most profitable block possible. They compete against each other to create the highest-value block.
  4. Relays: These act as trusted middlemen. Builders submit their blocks to relays, which check them for validity and hide the contents from the proposer until it's signed. This prevents the proposer from stealing the builder's hard work.
  5. Proposer/Validator: The validator running MEV-Boost queries multiple relays and simply chooses the most profitable block header offered. They sign it blindly, without seeing the contents, and collect the payment from the winning builder.
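
The proposer's side of this flow reduces to "ask every relay for its best bid, sign the highest one blind." The sketch below illustrates that logic only; the HTTP route and response shape are invented and do not match the real builder-API specification:

```typescript
type Bid = { relay: string; valueWei: bigint; blindedHeader: string };

async function getBid(relay: string, slot: number): Promise<Bid | null> {
  try {
    const res = await fetch(`${relay}/bid?slot=${slot}`); // hypothetical route
    if (!res.ok) return null;
    const { valueWei, blindedHeader } = await res.json();
    return { relay, valueWei: BigInt(valueWei), blindedHeader };
  } catch {
    return null; // slow or offline relays simply drop out of the auction
  }
}

async function chooseHeader(relays: string[], slot: number): Promise<Bid> {
  const bids = (await Promise.all(relays.map((r) => getBid(r, slot))))
    .filter((b): b is Bid => b !== null);
  if (bids.length === 0) throw new Error("no relay returned a bid");
  // The proposer signs the most profitable header without seeing its contents.
  return bids.reduce((best, b) => (b.valueWei > best.valueWei ? b : best));
}
```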

While PBS has successfully broadened access to block building, it has also led to centralization among a small set of high-performance builders and relays. Recent studies show that a handful of builders produce the vast majority of blocks on Ethereum, which is an ongoing concern for the network's long-term decentralization and censorship resistance.


Why MEV Can Be Harmful

  • Direct User Cost: Sandwich attacks and other forms of frontrunning result in worse execution quality for users. You pay more for an asset or receive less than you should have, with the difference being captured by a searcher.
  • Consensus Risk: In extreme cases, MEV can threaten the stability of the blockchain itself. Before the Merge, "time-bandit" attacks were a theoretical concern where miners could be incentivized to re-organize the blockchain to capture a past MEV opportunity, undermining finality.
  • Market Structure Risk: The MEV supply chain can create powerful incumbents. Exclusive order flow deals between wallets and builders can create paywalls for user transactions, entrenching builder/relay oligopolies and threatening the core principles of neutrality and censorship resistance.

What Actually Works Today (Practical Mitigations)

You are not powerless against harmful MEV. A suite of tools and best practices has emerged to protect users and align the ecosystem.

For Users and Traders

  • Use a Private Submission Path: Services like Flashbots Protect offer a "protect" RPC endpoint for your wallet. Sending your transaction through it keeps it out of the public mempool, making it invisible to sandwich bots. Some services can even refund you a portion of the MEV extracted from your trade.
  • Prefer OFA-Backed Routers: Order Flow Auctions (OFAs) are a powerful defense. Instead of sending your swap to the mempool, routers like CoW Swap or UniswapX send your intent to a competitive marketplace of solvers. These solvers compete to give you the best possible price, effectively returning any potential MEV back to you as price improvement.
  • Tighten Slippage: For illiquid pairs, manually set a low slippage tolerance (e.g., 0.1%) to limit the maximum profit a sandwich attacker can extract. Breaking large trades into smaller chunks can also help.

For Wallets & Dapps

  • Integrate an OFA: By default, route user transactions through an Order Flow Auction. This is the most effective way to protect users from sandwich attacks and provide them with superior execution quality.
  • Offer Private RPC as Default: Make protected RPCs the default setting in your wallet or dapp. Allow power users to configure their builder and relay preferences to fine-tune the trade-off between privacy and inclusion speed.
  • Measure Execution Quality: Don't just assume your routing is optimal. Benchmark your execution against public mempool routing and quantify the price improvement gained from OFAs and private submission.

For Validators

  • Run MEV-Boost: Participate in the PBS market to maximize your staking rewards.
  • Diversify: Connect to a diverse set of relays and builders to avoid dependence on a single provider and enhance network resilience. Monitor your rewards and block inclusion rates to ensure you are well-connected.

L2s & the Rise of SEV (Sequencer Extractable Value)

Layer 2 rollups don't eliminate MEV; they just change its name. Rollups concentrate ordering power in a single entity called the sequencer, creating Sequencer Extractable Value (SEV). Empirical research shows that MEV is widespread on L2s, though often with lower profit margins than on L1.

To combat the centralization risk of a single sequencer per rollup, concepts like shared sequencers are emerging. These are decentralized marketplaces that allow multiple rollups to share a single, neutral entity for transaction ordering, aiming to arbitrate cross-rollup MEV more fairly.


What’s Coming Next (And Why It Matters)

The work to tame MEV is far from over. Several major protocol-level upgrades are on the horizon:

  • Enshrined PBS (ePBS): This aims to move Proposer-Builder Separation directly into the Ethereum protocol itself, reducing the reliance on trusted, centralized relays and hardening the network's security guarantees.
  • Inclusion Lists (EIP-7547): This proposal gives proposers a way to force a builder to include a specific set of transactions. It's a powerful tool to combat censorship, ensuring that even transactions with low fees can eventually make it onto the chain.
  • MEV-Burn: Similar to how EIP-1559 burns a portion of the base gas fee, MEV-burn proposes to burn a portion of builder payments. This would smooth out MEV revenue spikes, reduce incentives for destabilizing behavior, and redistribute value back to all ETH holders.
  • SUAVE (Single Unifying Auction for Value Expression): A project by Flashbots to create a decentralized, privacy-preserving auction layer for orderflow. The goal is to create a more open and fair market for block building and combat the trend toward exclusive, centralized deals.
  • OFA Standardization: As auctions become the norm, work is underway to create formal metrics and open tooling to quantify and compare the price improvement offered by different routers, raising the bar for execution quality across the entire ecosystem.

A Founder’s Checklist (Ship MEV-Aware Products)

  • Default to Privacy: Route user flow through private submission or encrypted intents-based systems.
  • Design for Auctions, Not Races: Avoid "first-come, first-served" mechanics that create latency games. Leverage batch auctions or OFAs to create fair and efficient markets.
  • Instrument Everything: Log slippage, effective price versus oracle price, and the opportunity cost of your routing decisions. Be transparent with your users about their execution quality.
  • Diversify Dependencies: Rely on multiple builders and relays today. Prepare your infrastructure for the transition to enshrined PBS tomorrow.
  • Plan for L2s: If you're building a multichain application, account for SEV and cross-domain MEV in your design.

Developer FAQ

  • Is MEV “bad” or “illegal”? MEV is an unavoidable byproduct of open, deterministic blockchain markets. Some forms, like arbitrage and liquidations, are essential for market efficiency. Others, like sandwiching, are purely extractive and harmful to users. The goal isn't to eliminate MEV but to design mechanisms that minimize the harm and align extraction with user benefit and network security. Its legal status is complex and varies by jurisdiction.
  • Does private transaction submission guarantee no sandwiches? It significantly reduces your exposure by keeping your transaction out of the public mempool where most bots are looking. When combined with an OFA, it's a very strong defense. However, no system is perfect, and guarantees depend on the specific policies of the private relay and builders you use.
  • Why not just “turn MEV off”? You can't. As long as there are on-chain markets with price inefficiencies (which is always), there will be profit in correcting them. Trying to eliminate it entirely would likely break useful economic functions. The more productive path is to manage and redistribute it through better mechanism design like ePBS, inclusion lists, and MEV-burn.

Further Reading

  • Canonical definition & overview: Ethereum.org—MEV docs
  • Origins & risks: Flash Boys 2.0 (Daian et al., 2019)
  • PBS/MEV-Boost primer: Flashbots docs and MEV-Boost in a Nutshell
  • OFA research: Uniswap Labs—Quantifying Price Improvement in Order Flow Auctions
  • ePBS & MEV-burn: Ethereum Research forum discussions
  • L2 MEV evidence: Empirical analyses across major rollups (e.g., "Analyzing the Extraction of MEV Across Layer-2 Rollups")

Bottom Line

MEV isn’t a glitch; it’s an incentive gradient inherent to blockchains. The winning approach is not denial—it’s mechanism design. The goal is to make value extraction contestable, transparent, and user-aligned. If you’re building, bake this awareness into your product from day one. If you’re trading, insist your tools do it for you. The ecosystem is rapidly converging on this more mature, resilient future—now is the time to design for it.

Decentralized Physical Infrastructure Networks (DePIN): Economics, Incentives, and the AI Compute Era

· 47 min read
Dora Noda
Software Engineer

Introduction

Decentralized Physical Infrastructure Networks (DePIN) are blockchain-based projects that incentivize people to deploy real-world hardware in exchange for crypto tokens. By leveraging idle or underutilized resources – from wireless radios to hard drives and GPUs – DePIN projects create crowdsourced networks providing tangible services (connectivity, storage, computing, etc.). This model transforms normally idle infrastructure (like unused bandwidth, disk space, or GPU power) into active, income-generating networks by rewarding contributors with tokens. Major early examples include Helium (crowdsourced wireless networks) and Filecoin (distributed data storage), while newer entrants target GPU computing (e.g. Render Network, Akash, io.net) and 5G coverage sharing.

DePIN’s promise lies in distributing the costs of building and operating physical networks via token incentives, thus scaling networks faster than traditional centralized models. In practice, however, these projects must carefully design economic models to ensure that token incentives translate into real service usage and sustainable value. Below, we analyze the economic models of key DePIN networks, evaluate how effectively token rewards have driven actual infrastructure use, and assess how these projects are coupling with the booming demand for AI-related compute.

Economic Models of Leading DePIN Projects

Helium (Decentralized Wireless IoT & 5G)

Helium pioneered a decentralized wireless network by incentivizing individuals to deploy radio hotspots. Initially focused on IoT (LoRaWAN) and later expanded to 5G small-cell coverage, Helium’s model centers on its native token HNT. Hotspot operators earn HNT by participating in Proof-of-Coverage (PoC) – essentially proving they are providing wireless coverage in a given location. In Helium’s two-token system, HNT has utility through Data Credits (DC): users must burn HNT to mint non-transferable DC, which are used to pay for actual network usage (device connectivity) at a fixed rate of $0.00001 per 24-byte message. This burn mechanism creates a burn-and-mint equilibrium where increased network usage (DC spending) leads to more HNT being burned, reducing supply over time.
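
As a quick illustration of the burn math described above, the sketch below computes how much HNT a workload would burn, given the fixed DC price and a hypothetical HNT market price (the function name and inputs are ours, not Helium's API):

```typescript
// Hypothetical illustration of Helium's Data Credit (DC) burn math.
const DC_PRICE_USD = 0.00001; // fixed USD price per DC
const BYTES_PER_DC = 24;      // one DC pays for one 24-byte message

function hntBurnedForPayload(payloadBytes: number, hntPriceUsd: number): number {
  const dcNeeded = Math.ceil(payloadBytes / BYTES_PER_DC);
  const usdCost = dcNeeded * DC_PRICE_USD;
  return usdCost / hntPriceUsd; // HNT burned to mint those DC
}

// A fleet sending 1 million 24-byte uplinks, with HNT at a hypothetical $5:
console.log(hntBurnedForPayload(24_000_000, 5)); // => 2 HNT burned
```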

Originally, Helium operated on its own blockchain with an inflationary issuance of HNT that halved every two years (yielding a gradually decreasing supply and an eventual max around ~223 million HNT in circulation). In 2023, Helium migrated to Solana and introduced a “network of networks” framework with sub-DAOs. Now, Helium’s IoT network and 5G mobile network each have their own tokens (IOT and MOBILE respectively) rewarded to hotspot operators, while HNT remains the central token for governance and value. HNT can be redeemed for subDAO tokens (and vice versa) via treasury pools, and HNT is also used for staking in Helium’s veHNT governance model. This structure aims to align incentives in each sub-network: for example, 5G hotspot operators earn MOBILE tokens, which can be converted to HNT, effectively tying rewards to the success of that specific service.

Economic value creation: Helium’s value is created by providing low-cost wireless access. By distributing token rewards, Helium offloaded the capex of network deployment onto individuals who purchased and ran hotspots. In theory, as businesses and IoT devices use the network (by spending DC that require burning HNT), that demand should support HNT’s value and fund ongoing rewards. Helium sustains its economy through a burn-and-spend cycle: network users buy HNT (or use HNT rewards) and burn it for DC to use the network, and the protocol mints HNT (according to a fixed schedule) to pay hotspot providers. In Helium’s design, a portion of HNT emissions was also allocated to founders and a community reserve, but the majority has always been for hotspot operators as an incentive to build coverage. As discussed later, Helium’s challenge has been getting enough paying demand to balance the generous supply-side incentives.

Filecoin (Decentralized Storage Network)

Filecoin is a decentralized storage marketplace where anyone can contribute disk space and earn tokens for storing data. Its economic model is built around the FIL token. Filecoin’s blockchain rewards storage providers (miners) with FIL block rewards for provisioning storage and correctly storing clients’ data – using cryptographic proofs (Proof-of-Replication and Proof-of-Spacetime) to verify data is stored reliably. Clients, in turn, pay FIL to miners to have their data stored or retrieved, negotiating prices in an open market. This creates an incentive loop: miners invest in hardware and stake FIL collateral (to guarantee service quality), earning FIL rewards for adding storage capacity and fulfilling storage deals, while clients spend FIL for storage services.

Filecoin’s token distribution is heavily weighted toward incentivizing storage supply. FIL has a maximum supply of 2 billion, with 70% reserved for mining rewards. (In fact, ~1.4 billion FIL are allocated to be released over time as block rewards to storage miners over many years.) The remaining 30% was allocated to stakeholders: 15% to Protocol Labs (the founding team), 10% to investors, and 5% to the Filecoin Foundation. Block reward emissions follow a somewhat front-loaded schedule (with a six-year half-life), meaning supply inflation was highest in the early years to quickly bootstrap a large storage network. To balance this, Filecoin requires miners to lock up FIL as collateral proportional to the storage they pledge – if they fail to prove the data is retained, they can be penalized (slashed) by losing some collateral. This mechanism aligns miner incentives with reliable service.
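
The six-year half-life can be made concrete with a small sketch. This models only a pure exponential-decay schedule over the full ~1.4B FIL mining allocation; in reality Filecoin splits emissions between "simple minting" and a network-growth-linked "baseline minting" component, so treat the numbers as illustrative:

```typescript
// Cumulative FIL minted under a pure exponential-decay schedule with a
// six-year half-life. Real Filecoin emissions also include a baseline
// (network-growth-linked) component, so this is a simplification.
const MINING_ALLOCATION_FIL = 1_400_000_000; // ~70% of the 2B cap
const HALF_LIFE_YEARS = 6;

function cumulativeMinted(years: number): number {
  return MINING_ALLOCATION_FIL * (1 - Math.pow(2, -years / HALF_LIFE_YEARS));
}

console.log(cumulativeMinted(6));  // 700,000,000 after one half-life
console.log(cumulativeMinted(12)); // 1,050,000,000 after two half-lives
```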

Economic value creation: Filecoin creates value by offering censorship-resistant, redundant data storage at potentially lower costs than centralized cloud providers. The FIL token’s value is tied to demand for storage and the utility of the network: clients must obtain FIL to pay for storing data, and miners need FIL (both for collateral and often to cover costs or as revenue). Initially, much of Filecoin’s activity was driven by miners racing to earn tokens – even storing zero-value or duplicated data just to increase their storage power and earn block rewards. To encourage useful storage, Filecoin introduced the Filecoin Plus program: clients with verified useful data (e.g. open datasets, archives) can register deals as “verified,” which gives miners 10× the effective power for those deals, translating into proportionally larger FIL rewards. This has incentivized miners to seek out real clients and has dramatically increased useful data stored on the network. By late 2023, Filecoin’s network had grown to about 1,800 PiB of active deals, up 3.8× year-over-year, with storage utilization rising to ~20% of total capacity (from only ~3% at the start of 2023). In other words, token incentives bootstrapped enormous capacity, and now a growing fraction of that capacity is being filled by paying customers – a sign of the model beginning to sustain itself with real demand. Filecoin is also expanding into adjacent services (see AI Compute Trends below), which could create new revenue streams (e.g. decentralized content delivery and compute-over-data services) to bolster the FIL economy beyond simple storage fees.
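
The Filecoin Plus 10× multiplier is easiest to see as a quality-adjusted power calculation. The sketch below is a simplified version of the idea (the real protocol computes power per sector with deal-weight terms, not per miner in PiB):

```typescript
// Simplified quality-adjusted power (QAP): verified Filecoin Plus deals
// count 10x toward a miner's share of block rewards.
function qualityAdjustedPower(rawPiB: number, verifiedPiB: number): number {
  const regularPiB = rawPiB - verifiedPiB; // committed capacity or unverified deals
  return regularPiB * 1 + verifiedPiB * 10;
}

// A miner with 10 PiB raw capacity, of which 2 PiB are verified deals:
console.log(qualityAdjustedPower(10, 2)); // 8 + 20 = 28 PiB effective power
```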

Render Network (Decentralized GPU Rendering & Compute)

Render Network is a decentralized marketplace for GPU-based computation, originally focused on rendering 3D graphics and now also supporting AI model training/inference jobs. Its native token RNDR (recently updated to the ticker RENDER on Solana) powers the economy. Creators (users who need GPU work done) pay in RNDR for rendering or compute tasks, and Node Operators (GPU providers) earn RNDR by completing those jobs. This basic model turns idle GPUs (from individual GPU owners or data centers) into a distributed cloud rendering farm. To ensure quality and fairness, Render uses escrow smart contracts: clients submit jobs and escrow the equivalent RNDR payment, which is held until node operators submit proof of completed work, at which point the RNDR is released as their reward. Originally, RNDR functioned as a pure utility/payment token, but the network has recently overhauled its tokenomics to a Burn-and-Mint Equilibrium (BME) model to better balance supply and demand.

Under the BME model, all rendering or compute jobs are priced in stable terms (USD) and paid in RENDER tokens, which are burned upon job completion. In parallel, the protocol mints new RENDER tokens on a predefined declining emissions schedule to compensate node operators and other participants. In effect, user payments for work destroy tokens while the network inflates tokens at a controlled rate as mining rewards – the net supply can increase or decrease over time depending on usage. The community approved an initial emission of ~9.1 million RENDER in the first year of BME (mid-2023 to mid-2024) as network incentives, and set a long-term max supply of about 644 million RENDER (up from the initial 536.9 million RNDR that were minted at launch). Notably, RENDER’s token distribution heavily favored ecosystem growth: 65% of the initial supply was allocated to a treasury (for future network incentives), 25% to investors, and 10% to team/advisors. With BME, that treasury is being deployed via the controlled emissions to reward GPU providers and other contributors, while the burn mechanism ties those rewards directly to platform usage. RNDR also serves as a governance token (token holders can vote on Render Network proposals). Additionally, node operators on Render can stake RNDR to signal their reliability and potentially receive more work, adding another incentive layer.
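
A back-of-envelope sketch of the BME flow: jobs are priced in USD, the equivalent RENDER is burned, and a scheduled emission pays node operators. The inputs below are hypothetical, not protocol constants:

```typescript
// Net supply change per epoch under burn-and-mint equilibrium (BME).
function netSupplyChange(
  epochEmission: number,  // RENDER minted to node operators this epoch
  jobVolumeUsd: number,   // USD value of work completed this epoch
  renderPriceUsd: number  // prevailing RENDER market price
): number {
  const burned = jobVolumeUsd / renderPriceUsd; // payments burned on completion
  return epochEmission - burned; // > 0 net inflation, < 0 net deflation
}

// Heavy usage: $1.5M of jobs at $3/RENDER vs. 400k RENDER emitted:
console.log(netSupplyChange(400_000, 1_500_000, 3)); // -100000 (net burn)
```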

Economic value creation: Render Network creates value by supplying on-demand GPU computing at a fraction of the cost of traditional cloud GPU instances. By late 2023, Render’s founder noted that studios had already used the network to render movie-quality graphics with significant cost and speed advantages – “one tenth the cost” and with massive aggregated capacity beyond any single cloud provider. This cost advantage is possible because Render taps into dormant GPUs globally (from hobbyist rigs to pro render farms) that would otherwise be idle. With rising demand for GPU time (for both graphics and AI), Render’s marketplace meets a critical need. Crucially, the BME token model means token value is directly linked to service usage: as more rendering and AI jobs flow through the network, more RENDER is burned (creating buy pressure or reducing supply), while node incentives scale up only as those jobs are completed. This helps avoid “paying for nothing” – if network usage stagnates, the token emissions eventually outpace burns (inflating supply), but if usage grows, the burns can offset or even exceed emissions, potentially making the token deflationary while still rewarding operators. The strong interest in Render’s model was reflected in the market: RNDR’s price rocketed in 2023, rising over 1,000% in value as investors anticipated surging demand for decentralized GPU services amid the AI boom. Backed by OTOY (a leader in cloud rendering software) and used in production by some major studios, Render Network is positioned as a key player at the intersection of Web3 and high-performance computing.

Akash Network (Decentralized Cloud Compute)

Akash is a decentralized cloud computing marketplace that enables users to rent general compute (VMs, containers, etc.) from providers with spare server capacity. Think of it as a decentralized alternative to AWS or Google Cloud, powered by a blockchain-based reverse auction system. The native token AKT is central to Akash’s economy: clients pay for compute leases in AKT, and providers earn AKT for supplying resources. Akash is built on the Cosmos SDK and uses a delegated Proof-of-Stake blockchain for security and coordination. AKT thus also functions as a staking and governance token – validators stake AKT (and users delegate AKT to validators) to secure the network and earn staking rewards.

Akash’s marketplace operates via a bidding system: a client defines a deployment (CPU, RAM, storage, possibly GPU requirements) and a max price, and multiple providers can bid to host it, driving the price down. Once the client accepts a bid, a lease is formed and the workload runs on the chosen provider’s infrastructure. Payments for leases are handled by the blockchain: the client escrows AKT and it streams to the provider over time for as long as the deployment is active. Uniquely, the Akash network charges a protocol “take rate” fee on each lease to fund the ecosystem and reward AKT stakers: 10% of the lease amount if paid in AKT (or 20% if paid in another currency) is diverted as fees to the network treasury and stakers. This means AKT stakers earn a portion of all usage, aligning the token’s value with actual demand on the platform. To improve usability for mainstream users, Akash has integrated stablecoin and credit card payments (via its console app): a client can pay in USD stablecoin, which under the hood is converted to AKT (with a higher fee rate). This reduces the volatility risk for users while still driving value to the AKT token (since those stablecoin payments ultimately result in AKT being bought/burned or distributed to stakers).
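
The take-rate mechanics are simple enough to sketch directly from the rates above (10% for AKT payments, 20% for other currencies); the function and the basis-point framing are ours:

```typescript
// Sketch of Akash's protocol take rate on a lease. Basis points avoid
// floating-point drift; names are illustrative.
function settleLease(amountUsd: number, paidInAkt: boolean) {
  const takeRateBps = paidInAkt ? 1_000 : 2_000; // 10% or 20%
  const protocolFee = (amountUsd * takeRateBps) / 10_000; // to stakers/treasury
  const providerPayout = amountUsd - protocolFee;
  return { protocolFee, providerPayout };
}

console.log(settleLease(100, true));  // { protocolFee: 10, providerPayout: 90 }
console.log(settleLease(100, false)); // { protocolFee: 20, providerPayout: 80 }
```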

On the supply side, AKT’s tokenomics are designed to incentivize long-term participation. Akash began with 100 million AKT at genesis and has a max supply of 389 million via inflation. The inflation rate is adaptive based on the proportion of AKT staked: it targets 20–25% annual inflation if the staking ratio is low, and around 15% if a high percentage of AKT is staked. This adaptive inflation (a common design in Cosmos-based chains) encourages holders to stake (contributing to network security) by rewarding them more when staking participation is low. Block rewards from inflation pay validators and delegators, as well as funding a reserve for ecosystem growth. AKT’s initial distribution set aside allocations for investors, the core team (Overclock Labs), and a foundation pool for ecosystem incentives (e.g. an early program in 2024 funded GPU providers to join).
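
Here is a sketch of how such adaptive inflation behaves, using a simple linear interpolation between the bounds quoted above (the exact Cosmos SDK parameters and rate-of-change rules differ):

```typescript
// Illustrative adaptive inflation: higher issuance when the staked ratio is
// low (to attract stakers), tapering as participation rises.
function targetInflation(stakedRatio: number): number {
  const MAX_INFLATION = 0.25; // ~25% when little AKT is staked
  const MIN_INFLATION = 0.15; // ~15% when staking participation is high
  return MAX_INFLATION - (MAX_INFLATION - MIN_INFLATION) * stakedRatio;
}

console.log(targetInflation(0.2)); // ~0.23: strong incentive to stake
console.log(targetInflation(0.8)); // ~0.17: incentive tapers off
```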

Economic value creation: Akash creates value by offering cloud computing at potentially much lower costs than incumbent cloud providers, leveraging underutilized servers around the world. By decentralizing the cloud, it also aims to fill regional gaps and reduce reliance on a few big tech companies. The AKT token accrues value from multiple angles: demand-side fees (more workloads = more AKT fees flowing to stakers), supply-side needs (providers may hold or stake earnings, and need to stake some AKT as collateral for providing services), and general network growth (AKT is needed for governance and as a reserve currency in the ecosystem). Importantly, as more real workloads run on Akash, the proportion of AKT in circulation that is used for staking and fee deposits should increase, reflecting real utility. Initially, Akash saw modest usage for web services and crypto infrastructure hosting, but in late 2023 it expanded support for GPU workloads – making it possible to run AI training, machine learning, and high-performance compute jobs on the network. This has significantly boosted Akash’s usage in 2024. By Q3 2024, the network’s metrics showed explosive growth: the number of active deployments (“leases”) grew 1,729% year-on-year, and the average fee per lease (a proxy for complexity of workloads) rose 688%. In practice, this means users are deploying far more applications on Akash and are willing to run larger, longer workloads (many involving GPUs) – evidence that token incentives have attracted real paying demand. Akash’s team reported that by the end of 2024, the network had over 700 GPUs online with ~78% utilization (i.e. ~78% of GPU capacity rented out at any time). This is a strong signal of efficient token incentive conversion (see next section). The built-in fee-sharing model also means that as this usage grows, AKT stakers receive protocol revenue, effectively tying token rewards to actual service revenue – a healthier long-term economic design.

io.net (Decentralized GPU Cloud for AI)

io.net is a newer entrant (built on Solana) aiming to become the “world’s largest GPU network” specifically geared toward AI and machine learning workloads. Its economic model draws lessons from earlier projects like Render and Akash. The native token IO has a fixed maximum supply of 800 million. At launch, 500 million IO were pre-minted and allocated to various stakeholders, and the remaining 300 million IO are being emitted as mining rewards over a 20-year period (distributed hourly to GPU providers and stakers). Notably, io.net implements a revenue-based burn mechanism: a portion of network fees/revenue is used to burn IO tokens, directly tying token supply to platform usage. This combination – a capped supply with time-released emissions and a burn driven by usage – is intended to ensure long-term sustainability of the token economy.
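
For intuition, here is what the stated schedule implies if the 300M IO emission pool were released at a uniform hourly rate (an assumption on our part; the actual curve may decline over time), alongside a toy version of the revenue-linked burn:

```typescript
// Uniform-rate reading of io.net's 300M IO / ~20-year hourly emission schedule.
const EMISSION_POOL_IO = 300_000_000;
const HOURS_IN_20_YEARS = 20 * 365 * 24; // 175,200 hours

const ioPerHour = EMISSION_POOL_IO / HOURS_IN_20_YEARS;
console.log(ioPerHour.toFixed(2)); // ~1712.33 IO emitted per hour

// Toy revenue-linked burn: some share of fee revenue buys back and burns IO.
function hourlyNetIssuance(
  feeRevenueUsd: number,
  burnShare: number,  // fraction of revenue used for burns (assumption)
  ioPriceUsd: number
): number {
  const burned = (feeRevenueUsd * burnShare) / ioPriceUsd;
  return ioPerHour - burned; // < 0 means usage outpaces emissions
}
```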

To join the network as a GPU node, providers are required to stake a minimum amount of IO as collateral. This serves two purposes: it deters malicious or low-quality nodes (as they have “skin in the game”), and it reduces immediate sell pressure from reward tokens (since nodes must lock up some tokens to participate). Stakers (which can include both providers and other participants) also earn a share of network rewards, aligning incentives across the ecosystem. On the demand side, customers (AI developers, etc.) pay for GPU compute on io.net, presumably in IO tokens or possibly stable equivalents – the project claims to offer cloud GPU power at up to 90% lower cost than traditional providers like AWS. These usage fees drive the burn mechanism: as revenue flows in, a portion of tokens get burned, linking platform success to token scarcity.

Economic value creation: io.net’s value proposition is aggregating GPU power from many sources (data centers, crypto miners repurposing mining rigs, etc.) into a single network that can deliver on-demand compute for AI at massive scale. By aiming to onboard over 1 million GPUs globally, io.net seeks to out-scale any single cloud and meet the surging demand for AI model training and inference. The IO token captures value through a blend of mechanisms: supply is limited (so token value can grow if demand for network services grows), usage burns tokens (directly creating value feedback to the token from service revenue), and token rewards bootstrap supply (gradually distributing tokens to those who contribute GPUs, ensuring the network grows). In essence, io.net’s economic model is a refined DePIN approach where supply-side incentives (hourly IO emissions) are substantial but finite, and they are counter-balanced by token sinks (burns) that scale with actual usage. This is designed to avoid the trap of excessive inflation with no demand. As we will see, the AI compute trend provides a large and growing market for networks like io.net to tap into, which could drive the desired equilibrium where token incentives lead to robust service usage. (io.net is still emerging, so its real-world metrics remain to be proven, but its design clearly targets the AI compute sector’s needs.)

Table 1: Key Economic Model Features of Selected DePIN Projects

| Project | Sector | Token (Ticker) | Supply & Distribution | Incentive Mechanism | Token Utility & Value Flow |
| --- | --- | --- | --- | --- | --- |
| Helium | Decentralized wireless (IoT & 5G) | Helium Network Token (HNT); plus sub-tokens IOT & MOBILE | Variable supply, decreasing issuance: HNT emissions halved every ~2 years (on the original blockchain), targeting ~223M HNT in circulation after 50 years. Migrated to Solana with two new sub-tokens, IOT and MOBILE, rewarded to IoT and 5G hotspot owners. | Proof-of-Coverage mining: hotspots earn IOT or MOBILE tokens for providing coverage (LoRaWAN or 5G). Those sub-tokens can be converted to HNT via treasury pools. HNT is staked for governance (veHNT) and is the basis for rewards across networks. | Network usage via Data Credits: HNT is burned to create Data Credits (DC) for device connectivity (fixed price $0.00001 per 24 bytes). All network fees (DC purchases) effectively burn HNT, reducing supply, so token value ties to demand for IoT/mobile data transfer. HNT’s value also backs the subDAO tokens (giving them convertibility to a scarce asset). |
| Filecoin | Decentralized storage | Filecoin (FIL) | Capped supply 2 billion: 70% allocated to storage mining rewards (released over decades); ~30% to Protocol Labs, investors, and the foundation. Block rewards follow a six-year half-life (higher inflation early, tapering later). | Storage mining: storage providers earn FIL block rewards proportional to proven storage contributed. Clients pay FIL for storing or retrieving data. Miners put up FIL collateral that can be slashed for failure. Filecoin Plus gives a 10× power reward for “useful” client data to incentivize real storage. | FIL is the currency for storage deals – clients spend FIL to store data, creating organic demand for the token. Miners lock FIL as collateral (temporarily reducing circulating supply) and earn FIL for useful service. As usage grows, more FIL gets tied up in deals and collateral. Network transaction fees are minimal (Filecoin focuses on storage fees, which go to miners). Long term, FIL value depends on data storage demand and emerging use cases (e.g. the Filecoin Virtual Machine enabling smart contracts for data, potentially generating new fee sinks). |
| Render Network | Decentralized GPU compute (rendering & AI) | Render Token (RNDR / RENDER) | Initial supply ~536.9M RNDR, increased to a max of ~644M via new emissions. Burn-and-mint equilibrium: new RENDER emitted on a fixed schedule (20% inflation pool over ~5 years, then tail emissions) to fund network incentives; users’ payments in RENDER are burned for each completed job. Distribution: 65% treasury (network ops and rewards), 25% investors, 10% team/advisors. | Marketplace for GPU work: node operators complete rendering/compute tasks and earn RENDER. Jobs are priced in USD but paid in RENDER; the required tokens are burned when the work is done. In each epoch (e.g. weekly), new RENDER is minted and distributed to node operators based on the work they completed. Node operators can also stake RNDR for higher trust and potential job priority. | RENDER is the fee token for GPU services – content creators and AI developers must acquire and spend it to get work done. Because those tokens are burned, usage directly reduces supply, while new issuance compensates workers on a declining schedule. If network demand is high (burn > emission), RENDER becomes deflationary; if demand is low, inflation may exceed burns (incentivizing more supply until demand catches up). RENDER also governs the network. The token’s value is thus closely linked to platform usage – RNDR rallied ~10× in 2023 as AI-driven demand for GPU compute skyrocketed, indicating market confidence that usage (and burns) will be high. |
| Akash Network | Decentralized cloud (general compute & GPU) | Akash Token (AKT) | Initial supply 100M; max supply 389M. Inflationary PoS token: adaptive inflation of ~15–25% annually (dropping as the staking ratio rises) to incentivize staking; ongoing emissions pay validators and delegators. Distribution: 34.5% investors, 27% team, 19.7% foundation, 8% ecosystem, 5% testnet (with lock-ups/vesting). | Reverse-auction marketplace: providers bid to host deployments; clients pay in AKT for leases. Fee pool: 10% of AKT payments (or 20% of payments in other tokens) goes to the network (stakers) as a protocol fee. Akash runs a proof-of-stake chain – validators stake AKT to secure the network and earn block rewards. Clients can pay via AKT or integrated stablecoins (with conversion). | AKT is used for all transactions (either directly or via conversion from stablecoin payments). Clients buy AKT to pay for compute leases, creating demand as network usage grows; providers earn AKT and can sell or stake it. Holding and staking AKT yields rewards from inflation plus a share of all fees, so active network usage benefits stakers directly. As more CPU/GPU workloads run on Akash, more fees in AKT flow to holders (and more AKT may be locked as collateral or staked by providers). Governance is also via AKT holdings. Overall, the token’s health improves with higher utilization, with inflation controls encouraging long-term participation. |
| io.net | Decentralized GPU cloud (AI-focused) | IO Token (IO) | Fixed cap 800M IO: 500M pre-minted (allocated to team, investors, community, etc.), 300M emitted over ~20 years as mining rewards (hourly distribution); no further inflation after the cap. Built-in burn: network revenue triggers token burns to reduce supply. Staking: providers must stake a minimum IO to participate (and can stake more for rewards). | GPU-sharing network: hardware providers (data centers, miners) connect GPUs and earn IO rewards continuously (hourly) for contributing capacity, plus fees from customers’ usage. Operators stake IO as collateral to ensure good behavior. Users likely pay in IO (or in stablecoins converted to IO) for AI compute tasks; a portion of every fee is burned by the protocol. | IO is the medium of exchange for GPU compute on the network and the security token that operators stake. Token value is driven by a trifecta: (1) demand for AI compute – clients must acquire IO to pay for jobs, and higher usage means more tokens burned; (2) mining incentives – new IO distributed to GPU providers motivates network growth, while the fixed cap limits long-term inflation; (3) staking – IO is locked up by providers (and possibly users or delegates) to earn rewards, reducing liquid supply and aligning participants with network success. If io.net successfully attracts AI workloads at scale, token supply becomes increasingly scarce (through burns and staking), benefiting holders; the fixed supply imposes discipline and aims for a sustainable “reward-for-revenue” balance. |

Sources: Official documentation and research for each project (see inline citations above).

Token Incentives vs. Real-World Service Usage

A critical question for DePIN projects is how effectively token incentives convert into real service provisioning and actual usage of the network. In the initial stages, many DePIN protocols emphasized bootstrapping supply (hardware deployment) through generous token rewards, even if demand was minimal – a “build it and (hopefully) they will come” strategy. This led to situations where the network’s market cap and token emissions far outpaced the revenue from customers. As of late 2024, the entire DePIN sector (~350 projects) had a combined market cap of ~$50 billion, yet generated only about ~$0.5 billion annualized revenue – an aggregate valuation of ~100× annual revenue. Such a gap underscores the inefficiency in early stages. However, recent trends show improvements as networks shift from purely supply-driven growth to demand-driven adoption, especially propelled by the surge in AI compute needs.

Below we evaluate each example project’s token incentive efficiency, looking at usage metrics versus token outlays:

  • Helium: Helium’s IoT network grew explosively in 2021–2022, with nearly 1 million hotspots deployed globally for LoRaWAN coverage. This growth was almost entirely driven by the HNT mining incentives and crypto enthusiasm – not by customer demand for IoT data, which remained low. By mid-2022, it became clear that Helium’s data traffic (devices actually using the network) was minuscule relative to the enormous supply-side investment. One analysis in 2022 noted that less than $1,000 of tokens were burned for data usage per month, even as the network was minting tens of millions of dollars worth of HNT for hotspot rewards – a stark imbalance (essentially, <1% of token emission was being offset by network usage). In late 2022 and 2023, HNT token rewards underwent scheduled halvings (reducing issuance), but usage was still lagging. An example from November 2023: the dollar value of Helium Data Credits burned was only about $156 for that day – whereas the network was still paying out an estimated $55,000 per day in token rewards to hotspot owners (valued in USD). In other words, that day’s token incentive “cost” outweighed actual network usage by a factor of 350:1. This illustrates the poor incentive-to-usage conversion in Helium’s early IoT phase. Helium’s founders recognized this “chicken-and-egg” dilemma: a network needs coverage before it can attract users, but without users the coverage is hard to monetize.

    There are signs of improvement. In late 2023, Helium activated its 5G Mobile network with a consumer-facing cell service (backed by T-Mobile roaming) and began rewarding 5G hotspot operators in MOBILE tokens. The launch of Helium Mobile (5G) quickly brought in paying users (e.g. subscribers to Helium’s $20/month unlimited mobile plan) and new types of network usage. Within weeks, Helium’s network usage jumped – by early 2024, the daily Data Credit burn reached ~$4,300 (up from almost nothing a couple months prior). Moreover, 92% of all Data Credits consumed were from the Mobile network (5G) as of Q1 2024, meaning the 5G service immediately dwarfed the IoT usage. While $4.3k/day is still modest in absolute terms (~$1.6 million annualized), it represents a meaningful step toward real revenue. Helium’s token model is adapting: by isolating the IoT and Mobile networks into separate reward tokens, it ensures that the 5G rewards (MOBILE tokens) will scale down if 5G usage doesn’t materialize, and similarly for IOT tokens – effectively containing the inefficiency. Helium Mobile’s growth also showed the power of coupling token incentives with a service of immediate consumer interest (cheap cellular data). Within 6 months of launch, Helium had ~93,000 MOBILE hotspots deployed in the US (alongside ~1 million IoT hotspots worldwide), and had struck partnerships (e.g. with Telefónica) to expand coverage. The challenge ahead is to substantially grow the user base (both IoT device clients and 5G subscribers) so that burning of HNT for Data Credits approaches the scale of HNT issuance. In summary, Helium started with an extreme supply surplus (and correspondingly overvalued token), but its pivot toward demand (5G, and positioning as an “infrastructure layer” for other networks) is gradually improving the efficiency of its token incentives.

  • Filecoin: In Filecoin’s case, the imbalance was between storage capacity vs. actual stored data. Token incentives led to an overabundance of supply: at its peak, the Filecoin network had well over 15 exbibytes (EiB) of raw storage capacity pledged by miners, yet for a long time only a few percent of that was utilized by real data. Much of the space was filled with dummy data (miners could even commit sectors of random data to satisfy proof requirements) purely to earn FIL rewards. This meant a lot of FIL was being minted and awarded for storage that wasn’t actually demanded by users. However, over 2022–2023 the network made big strides in driving demand. Through initiatives like Filecoin Plus and aggressive onboarding of open datasets, the utilization rate climbed from ~3% to over 20% of capacity in 2023. By Q4 2024, Filecoin’s storage utilization had further risen to ~30% – meaning nearly one-third of the enormous capacity was holding real client data. This is still far from 100%, but the trend is positive: token rewards are increasingly going toward useful storage rather than empty padding. Another measure: as of Q1 2024, about 1,900 PiB (1.9 EiB) of data was stored in active deals on Filecoin, a 200% year-over-year increase. Notably, the majority of new deals now come via Filecoin Plus (verified clients), indicating miners strongly prefer to devote space to data that earns them bonus reward multipliers.

    In terms of economic efficiency, Filecoin’s protocol also experienced a shift: initially, protocol “revenue” (fees paid by users) was negligible compared to mining rewards (which some analyses treated as revenue, inflating early figures). For example, in 2021, Filecoin’s block rewards were worth hundreds of millions of dollars (at high FIL prices), but actual storage fees were tiny; in 2022, as FIL price fell, reported revenue dropped 98% from $596M to $13M, reflecting that most of 2021’s “revenue” was token issuance value rather than customer spend. Going forward, the balance is improving: the pipeline of paying storage clients is growing (e.g. an enterprise deal of 1 PiB was closed in late 2023, one of the first large fully-paid deals). Filecoin’s introduction of the FVM (enabling smart contracts) and forthcoming storage marketplaces and DEXes are expected to bring more on-chain fee activity (and possibly FIL burns or lockups). In summary, Filecoin’s token incentives successfully built a massive global storage network, albeit with efficiency under 5% in the early period; by 2024 that efficiency improved to ~20–30% and is on track to climb further as real demand catches up with the subsidized supply. The sector’s overall demand for decentralized storage (Web3 data, archives, NFT metadata, AI datasets, etc.) appears to be rising, which bodes well for converting more of those mining rewards into actual useful service.

  • Render Network: Render’s token model inherently links incentives to usage more tightly, thanks to the burn-and-mint equilibrium. In the legacy model (pre-2023), RNDR issuance was largely in the hands of the foundation and based on network growth goals, while usage involved locking up RNDR in escrow for jobs. This made efficiency difficult to analyze. However, with BME fully implemented in 2023, we can measure how many tokens are burned relative to how many are minted. Since each rendering or compute job burns RNDR proportional to its cost, essentially every token emitted as a reward corresponds to work done (minus any net inflation if emissions > burns in a given epoch). Early data from the Render network post-upgrade indicated that usage was indeed ramping up: the Render Foundation noted that at “peak moments” the network could be completing more render frames per second than Ethereum could handle in transactions, underscoring significant activity. While detailed usage stats (e.g. number of jobs or GPU-hours consumed) aren’t comprehensively published, one strong indicator is the price and demand for RNDR. In 2023, RNDR became one of the best-performing crypto assets, rising from roughly $0.40 in January to over $2.50 by May, and continuing to climb thereafter. By November 2023, RNDR was up over 10× year-to-date, propelled by the frenzy for AI-related computing power. This price action suggests that users were buying RNDR to get rendering and AI jobs done (or speculators anticipated they would need to). Indeed, the interest in AI tasks likely brought a new wave of demand – Render reported that its network was expanding beyond media rendering into AI model training, and that the GPU shortage in traditional clouds meant demand far outstripped supply in this niche. In essence, Render’s token incentives (the emissions) have been met with equally strong user demand (burns), making its incentive-to-usage conversion relatively high. It’s worth noting that in the first year of BME, the network intentionally allocated some extra tokens (the 9.1M RENDER emissions) to bootstrap node operator earnings. If those outpace usage, it could introduce some temporary inflationary inefficiency. However, given the network’s growth, the burn rate of RNDR has been climbing. The Render Network Dashboard as of mid-2024 showed steady increases in cumulative RNDR burned, indicating real jobs being processed. Another qualitative sign of success: major studios and content creators have used Render for high-profile projects, proving real-world adoption (these are not just crypto enthusiasts running nodes – they are customers paying for rendering). All told, Render appears to have one of the more effective token-to-service conversion metrics in DePIN: if the network is busy, RNDR is being burned and token holders see tangible value; if the network were idle, token emissions would be the only output, but the excitement around AI has ensured the network is far from idle.

  • Akash: Akash’s efficiency can be seen in the context of cloud spend vs. token issuance. As a proof-of-stake chain, Akash’s AKT has inflation to reward validators, but that inflation is not excessively high (and a large portion is offset by staking locks). The more interesting part is how much real usage the token is capturing. In 2022, Akash usage was relatively low (only a few hundred deployments at any time, mainly small apps or test nets). This meant AKT’s value was speculative, not backed by fees. However, in 2023–2024, usage exploded due to AI. By late 2024, Akash was processing ~$11k of spend per day on its network, up from just ~$1.3k/day in January 2024 – a ~749% increase in daily revenue within the year. Over the course of 2024, Akash surpassed $1.6 million in cumulative paid spend for compute. These numbers, while still small compared to giants like AWS, represent actual customers deploying workloads on Akash and paying in AKT or USDC (which ultimately drives AKT demand via conversion). The token incentives (inflationary rewards) during that period were roughly 15–20% of the 130M circulating AKT (~20–26M AKT minted in 2024, which at $1–3 per AKT might be $20–50M value). So in pure dollar terms, the network was still issuing more value in tokens than it was bringing in fees – similar to other early-stage networks. But the trend is that usage is catching up fast. A telling statistic: comparing Q3 2024 to Q3 2023, the average fee per lease rose from $6.42 to $18.75. This means users are running much more resource-intensive (and thus expensive) workloads, likely GPUs for AI, and they are willing to pay more, presumably because the network delivers value (e.g. lower cost than alternatives). Also, because Akash charges a 10–20% fee on leases to the protocol, that means 10–20% of that $1.6M cumulative spend went to stakers as real yield. In Q4 2024, AKT’s price hit new multi-year highs (~$4, an 8× increase from mid-2023 lows), indicating the market recognized the improved fundamentals and usage. On-chain data from year-end 2024 showed over 650 active leases and over 700 GPUs in the network with ~78% utilization – effectively, most of the GPUs added via incentives were actually in use by customers. This is a strong conversion of token incentives into service: nearly 4 out of 5 GPUs incentivized were serving AI developers (for model training, etc.). Akash’s proactive steps, like enabling credit card payments and supporting popular AI frameworks, helped bridge crypto tokens to real-world users (some users might not even know they are paying with AKT under the hood). Overall, while Akash initially had the common DePIN issue of “supply > demand,” it is quickly moving toward a more balanced state. If AI demand continues, Akash could even approach a regime where demand outstrips the token incentives – in other words, usage might drive AKT’s value more than speculative inflation. The protocol’s design to share fees with stakers also means AKT holders benefit directly as efficiency improves (e.g. by late 2024, stakers were earning significant yield from actual fees, not just inflation).

  • io.net: Being a very new project (launched in 2023/24), io.net’s efficiency is still largely theoretical, but its model is built explicitly to maximize incentive conversion. By hard-capping supply and instituting hourly rewards, io.net avoids the scenario of runaway indefinite inflation. And by burning tokens based on revenue, it ensures that as soon as demand kicks in, there is an automatic counterweight to token emissions. Early reports claimed io.net had aggregated a large number of GPUs (possibly by bringing existing mining farms and data centers on board), giving it significant supply to offer. The key will be whether that supply finds commensurate demand from AI customers. One positive sign for the sector: as of 2024, decentralized GPU networks (including Render, Akash, and io.net) were often capacity-constrained, not demand-constrained – meaning there was more user demand for compute than the networks had online at any moment. If io.net taps into that unmet demand (offering lower prices or unique integrations via Solana’s ecosystem), its token burn could accelerate. On the flip side, if it distributed a large chunk of the 500M IO initial supply to insiders or providers, there is a risk of sell pressure if usage lags. Without concrete usage data yet, io.net serves as a test of the refined tokenomic approach: it targets a demand-driven equilibrium from the outset, trying to avoid oversupplying tokens. In coming years, one can measure its success by tracking what percentage of the 300M emission gets effectively “paid for” by network revenue (burns). The DePIN sector’s evolution suggests io.net is entering at a fortuitous time when AI demand is high, so it may reach high utilization more quickly than earlier projects did.

In summary, early DePIN projects often faced low token incentive efficiency, with token payouts vastly exceeding real usage. Helium’s IoT network was a prime example, where token rewards built a huge network that was only a few percent utilized. Filecoin similarly had a bounty of storage with little stored data initially. However, through network improvements and external demand trends, these gaps are closing. Helium’s 5G pivot multiplied usage, Filecoin’s utilization is steadily climbing, and both Render and Akash have seen real usage surge in tandem with the AI boom, bringing their token economics closer to a sustainable loop. A general trend in 2024 was the shift to “prove the demand”: DePIN teams started focusing on getting users and revenue, not just hardware and hype. This is evidenced by networks like Helium courting enterprise partners for IoT and telco, Filecoin onboarding large Web2 datasets, and Akash making its platform user-friendly for AI developers. The net effect is that token values are increasingly underpinned by fundamentals (e.g. data stored, GPU hours sold) rather than just speculation. While there is still a long way to go – the sector overall at 100× price/revenue implies plenty of speculation remains – the trajectory is towards more efficient use of token incentives. Projects that fail to translate tokens into service (or “hardware on the ground”) will likely fade, while those that achieve a high conversion rate are gaining investor and community confidence.
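
One way to standardize the comparisons above is a simple "incentive efficiency" ratio: the share of a network's daily token emissions (in USD) that is offset by paying demand. The helper and the second data point's fixed-emissions assumption are ours:

```typescript
// Share of daily token emissions (in USD) offset by paying usage (in USD).
function incentiveEfficiency(dailyUsageUsd: number, dailyEmissionsUsd: number): number {
  return dailyUsageUsd / dailyEmissionsUsd;
}

// Helium IoT, Nov 2023 figures quoted above: ~$156 burned vs. ~$55k emitted.
console.log(incentiveEfficiency(156, 55_000)); // ~0.003 (0.3%)

// Early 2024, after the 5G launch (~$4,300/day burned), holding the emissions
// estimate fixed purely for illustration:
console.log(incentiveEfficiency(4_300, 55_000)); // ~0.078 (7.8%)
```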

AI Compute Trends

One of the most significant developments benefiting DePIN projects is the explosive growth in AI computing demand. In 2023–2024, AI model training and deployment became a multi-billion-dollar market, straining the capacity of traditional cloud providers and GPU vendors. Decentralized infrastructure networks have quickly adapted to capture this opportunity, leading to a convergence sometimes dubbed “DePIN x AI” or even “Decentralized Physical AI (DePAI)” by futurists. Below, we outline how our focus projects and the broader DePIN sector are leveraging the AI trend:

  • Decentralized GPU Networks & AI: Projects like Render, Akash, io.net (and others such as Golem, Vast.ai, etc.) are at the forefront of serving AI needs. As noted, Render expanded beyond rendering to support AI workloads – e.g. renting GPU power to train Stable Diffusion models or other ML tasks. Interest in AI has directly driven usage on these networks. In mid-2023, demand for GPU compute to train image and language models skyrocketed. Render Network benefited as many developers and even some enterprises turned to it for cheaper GPU time; this was a factor in RNDR’s 10× price surge, reflecting the market’s belief that Render would supply GPUs to meet AI needs. Similarly, Akash’s GPU launch in late 2023 coincided with the generative AI boom – within months, hundreds of GPUs on Akash were being rented to fine-tune language models or serve AI APIs. The utilization rate of GPUs on Akash reaching ~78% by year-end 2024 indicates that nearly all incentivized hardware found demand from AI users. io.net is explicitly positioning itself as an “AI-focused decentralized computing network.” It touts integration with AI frameworks (for example, the Ray distributed compute framework, popular in machine learning, which makes it easy for AI developers to scale on io.net). io.net’s value proposition – deploying a GPU cluster in 90 seconds at a claimed 10–20× cost advantage over traditional cloud – is squarely aimed at AI startups and researchers who are constrained by expensive or backlogged cloud GPU instances. This targeting is strategic: 2024 saw extreme GPU shortages (e.g. NVIDIA’s high-end AI chips were sold out), and decentralized networks with access to any kind of GPU (even older models or gaming GPUs) stepped in to fill the gap. The World Economic Forum noted the emergence of “Decentralized Physical AI (DePAI)” where everyday people contribute computing power and data to AI processes and get rewarded. This concept aligns with GPU DePIN projects enabling anyone with a decent GPU to earn tokens by supporting AI workloads. Messari’s research likewise highlighted that the intense demand from the AI industry in 2024 has been a “significant accelerator” for the DePIN sector’s shift to demand-driven growth.

  • Storage Networks & AI Data: The AI boom isn’t just about computation – it also requires storing massive datasets (for training) and distributing trained models. Decentralized storage networks like Filecoin and Arweave have found new use cases here. Filecoin in particular has embraced AI as a key growth vector: in 2024 the Filecoin community identified “Compute and AI” as one of three focus areas. With the launch of the Filecoin Virtual Machine, it’s now possible to run compute services close to the data stored on Filecoin. Projects like Bacalhau (a distributed compute-over-data project) and Fluence’s compute L2 are building on Filecoin to let users run AI algorithms directly on data stored in the network. The idea is to enable, for example, training a model on a large dataset that’s already stored across Filecoin nodes, rather than having to move it to a centralized cluster. Filecoin’s tech innovations like InterPlanetary Consensus (IPC) allow spinning up subnetworks that could be dedicated to specific workloads (like an AI-specific sidechain leveraging Filecoin’s storage security). Furthermore, Filecoin is supporting decentralized data commons that are highly relevant to AI – for instance, datasets from universities, autonomous vehicle data, or satellite imagery can be hosted on Filecoin, and then accessed by AI models. The network already stores major AI-relevant datasets (UC Berkeley research data and Internet Archive collections, for example). On the token side, this means more clients using FIL for data – but even more exciting is the potential for secondary markets for data: Filecoin’s vision includes allowing storage clients to monetize their data for AI training use cases. That suggests a future where owning a large dataset on Filecoin could earn its owner tokens when AI companies pay to train on it, creating an ecosystem where FIL flows not just for storage but for data usage rights. This is nascent but highlights how deeply Filecoin is coupling with AI trends.

  • Wireless Networks & Edge Data for AI: On the surface, Helium and similar wireless DePINs are less directly tied to AI compute. However, there are a few connections. IoT sensor networks (like Helium’s IoT subDAO, and others such as Nodle or WeatherXM) can supply valuable real-world data to feed AI models. For instance, WeatherXM (a DePIN for weather station data) provides a decentralized stream of weather data that could improve climate models or AI predictions – WeatherXM data is being integrated via Filecoin’s Basin L2 for exactly these reasons. Nodle, which uses smartphones as nodes to collect data (and is considered a DePIN), is building an app called “Click” for decentralized smart camera footage; they plan to integrate Filecoin to store the images and potentially use them in AI computer vision training. Helium’s role could be providing the connectivity for such edge devices – for example, a city deploying Helium IoT sensors for air quality or traffic, and those datasets then being used to train urban planning AI. Additionally, the Helium 5G network could serve as edge infrastructure for AI in the future: imagine autonomous drones or vehicles that use decentralized 5G for connectivity – the data they generate (and consume) might plug into AI systems continuously. While Helium hasn’t announced specific “AI strategies,” its parent Nova Labs has hinted at positioning Helium as a general infrastructure layer for other DePIN projects. This could include ones in AI. For example, Helium could provide the physical wireless layer for an AI-powered fleet of devices, while that AI fleet’s computational needs are handled by networks like Akash, and data storage by Filecoin – an interconnected DePIN stack.

  • Synergistic Growth and Investments: Both crypto investors and traditional players are noticing the DePIN–AI synergy. Messari’s 2024 report projected the DePIN market could grow to $3.5 trillion by 2028 (from ~$50B in 2024) if trends continue. This bullish outlook is largely premised on AI being a “killer app” for decentralized infrastructure. The concept of DePAI (Decentralized Physical AI) envisions a future where ordinary people contribute not just hardware but also data to AI systems and get rewarded, breaking Big Tech’s monopoly on AI datasets. For instance, someone’s autonomous vehicle could collect road data, upload it via a network like Helium, store it on Filecoin, and have it used by an AI training on Akash – with each protocol rewarding the contributors in tokens. While somewhat futuristic, early building blocks of this vision are appearing (e.g. HiveMapper, a DePIN mapping project where drivers’ dashcams build a map – those maps could train self-driving AI; contributors earn tokens). We also see AI-focused crypto projects like Bittensor (TAO) – a network for training AI models in a decentralized way – reaching multi-billion valuations, indicating strong investor appetite for AI+crypto combos.

  • Autonomous Agents and Machine-to-Machine Economy: A fascinating trend on the horizon is AI agents using DePIN services autonomously. Messari speculated that by 2025, AI agent networks (like autonomous bots) might directly procure decentralized compute and storage from DePIN protocols to perform tasks for humans or for other machines. In such a scenario, an AI agent (say, part of a decentralized network of AI services) could automatically rent GPUs from Render or io.net when it needs more compute, pay with crypto, store its results on Filecoin, and communicate over Helium – all without human intervention, negotiating and transacting via smart contracts. This machine-to-machine economy could unlock a new wave of demand that is natively suited to DePIN (since AI agents don’t have credit cards but can use tokens to pay each other). It’s still early, but prototypes like Fetch.ai and others hint at this direction. If it materializes, DePIN networks would see a direct influx of machine-driven usage, further validating their models.

  • Energy and Other Physical Verticals: While our focus has been connectivity, storage, and compute, the AI trend also touches other DePIN areas. For example, decentralized energy grids (sometimes called DeGEN – decentralized energy networks) could benefit as AI optimizes energy distribution: if someone shares excess solar power into a microgrid for tokens, AI could predict and route that power efficiently. A project cited in the Binance report describes tokens for contributing excess solar energy to a grid. AI algorithms managing such grids could again be run on decentralized compute. Likewise, AI can enhance decentralized networks’ performance – e.g. AI-based optimization of Helium’s radio coverage or AI ops for predictive maintenance of Filecoin storage nodes. This is more about using AI within DePIN, but it demonstrates the cross-pollination of technologies.

In essence, AI has become a tailwind for DePIN. The previously separate narratives of “blockchain meets real world” and “AI revolution” are converging into a shared narrative: decentralization can help meet AI’s infrastructure demands, and AI can, in turn, drive massive real-world usage for decentralized networks. This convergence is attracting significant capital – over $350M was invested in DePIN startups in 2024 alone, much of it aiming at AI-related infrastructure (for instance, many recent fundraises were for decentralized GPU projects, edge computing for AI, etc.). It’s also fostering collaboration between projects (Filecoin working with Helium, Akash integrating with other AI tool providers, etc.).

Conclusion

DePIN projects like Helium, Filecoin, Render, and Akash represent a bold bet that crypto incentives can bootstrap real-world infrastructure faster and more equitably than traditional models. Each has crafted a unique economic model: Helium uses token burns and proof-of-coverage to crowdsource wireless networks, Filecoin uses cryptoeconomics to create a decentralized data storage marketplace, Render and Akash turn GPUs and servers into global shared resources through tokenized payments and rewards. Early on, these models showed strains – rapid supply growth with lagging demand – but they have demonstrated the ability to adjust and improve efficiency over time. The token-incentive flywheel, while not a magic bullet, has proven capable of assembling impressive physical networks: a global IoT/5G network, an exabyte-scale storage grid, and distributed GPU clouds. Now, as real usage catches up (from IoT devices to AI labs), these networks are transitioning toward sustainable service economies where tokens are earned by delivering value, not just by being early.

The rise of AI has supercharged this transition. AI’s insatiable appetite for compute and data plays to DePIN’s strengths: untapped resources can be tapped, idle hardware put to work, and participants globally can share the rewards. The alignment of AI-driven demand with DePIN supply in 2024 has been a pivotal moment, arguably providing the “product-market fit” that some of these projects were waiting for. Trends suggest that decentralized infrastructure will continue to ride the AI wave – whether by hosting AI models, collecting training data, or enabling autonomous agent economies. In the process, the value of the tokens underpinning these networks may increasingly reflect actual usage (e.g. GPU-hours sold, TB stored, devices connected) rather than speculation alone.

That said, challenges remain. DePIN projects must continue improving conversion of investment to utility – ensuring that adding one more hotspot or one more GPU actually adds proportional value to users. They also face competition from traditional providers (who are hardly standing still – e.g. cloud giants are lowering prices for committed AI workloads) and must overcome issues like regulatory hurdles (Helium’s 5G needs spectrum compliance, etc.), user experience friction with crypto, and the need for reliable performance at scale. The token models, too, require ongoing calibration: for instance, Helium splitting into sub-tokens was one such adjustment; Render’s BME was another; others may implement fee burns, dynamic rewards, or even DAO governance tweaks to stay balanced.

From an innovation and investment perspective, DePIN is one of the most exciting areas in Web3 because it ties crypto directly to tangible services. Investors are watching metrics like protocol revenue, utilization rates, and token value capture (P/S ratios) to discern winners. For example, if a network’s token has a high market cap but very low usage (high P/S), it might be overvalued unless one expects a surge in demand. Conversely, a network that manages to drastically increase revenue (like Akash’s 749% jump in daily spend) could see its token fundamentally re-rated. Analytics platforms (Messari, Token Terminal) now track such data: e.g. Helium’s annualized revenue (~$3.5M) vs incentives (~$47M) yielded a large deficit, while a project like Render might show a closer ratio if burns start canceling out emissions. Over time, we expect the market to reward those DePIN tokens that demonstrate real cash flows or cost savings for users – a maturation of the sector from hype to fundamentals.

In conclusion, established networks like Helium and Filecoin have proven the power and pitfalls of tokenized infrastructure, and emerging networks like Render, Akash, and io.net are pushing the model into the high-demand realm of AI compute. The economics behind each network differ in mechanics but share a common goal: create a self-sustaining loop where tokens incentivize the build-out of services, and the utilization of those services, in turn, supports the token’s value. Achieving this equilibrium is complex, but the progress so far – millions of devices, exabytes of data, and thousands of GPUs now online in decentralized networks – suggests that the DePIN experiment is bearing fruit. As AI and Web3 continue to converge, the next few years could see decentralized infrastructure networks move from niche alternatives to vital pillars of the internet’s fabric, delivering real-world utility powered by crypto economics.

Sources: Official project documentation and blogs, Messari research reports, and analytics data from Token Terminal and others. Key references include Messari’s Helium and Akash overviews, Filecoin Foundation updates, Binance Research on DePIN and io.net, and CoinGecko/CoinDesk analyses on token performance in the AI context. These provide the factual basis for the evaluation above, as cited throughout.

Sui Network Reliability Engineering (NRE) Tools: A Complete Guide for Node Operators

· 6 min read
Dora Noda
Software Engineer

The Sui blockchain has rapidly gained attention for its innovative approach to scalability and performance. For developers and infrastructure teams looking to run Sui nodes reliably, Mysten Labs has created a comprehensive set of Network Reliability Engineering (NRE) tools that streamline deployment, configuration, and management processes.

In this guide, we'll explore the Sui NRE repository and show you how to leverage these powerful tools for your Sui node operations.

ERC-4337: Revolutionizing Ethereum with Account Abstraction

· 3 min read
Dora Noda
Software Engineer

Hello and welcome back to our blockchain blog! Today we are diving into ERC-4337, a proposal that introduces account abstraction to Ethereum without requiring any consensus-layer protocol changes. Instead, the proposal relies on higher-layer infrastructure to achieve its goals. Let's explore what ERC-4337 has to offer and how it addresses the limitations that externally owned accounts (EOAs) impose on the current Ethereum ecosystem.

What is ERC-4337?

ERC-4337 is a proposal that introduces account abstraction to Ethereum through the use of a separate mempool and a new type of pseudo-transaction object called a UserOperation. Users send UserOperation objects into this alternative mempool, where a special class of actors called bundlers package them into a transaction that makes a handleOps call to a dedicated contract known as the EntryPoint. These transactions are then included in a block like any other. The shape of a UserOperation is sketched below.
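
As a concrete reference, here is a minimal TypeScript sketch of the UserOperation fields as they appear in the v0.6 reference implementation; treat the exact field list as indicative, since the specification has continued to evolve.

```typescript
// Sketch of the ERC-4337 UserOperation shape (per the v0.6 reference
// implementation; later versions adjust some fields). Hex-encoded values
// are represented as strings for simplicity.

interface UserOperation {
  sender: string;               // the smart contract account sending the op
  nonce: string;                // anti-replay value managed by the EntryPoint
  initCode: string;             // account deployment code, or "0x" if already deployed
  callData: string;             // the call the account should execute
  callGasLimit: string;         // gas for the execution call
  verificationGasLimit: string; // gas for validateUserOp
  preVerificationGas: string;   // gas to compensate the bundler's overhead
  maxFeePerGas: string;         // EIP-1559-style fee fields
  maxPriorityFeePerGas: string;
  paymasterAndData: string;     // optional sponsor info, or "0x"
  signature: string;            // validated by the account, not the protocol
}
```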

The proposal aims to achieve several goals:

  1. Enable users to use smart contract wallets with arbitrary verification logic as their primary accounts.
  2. Completely remove any need for users to also have EOAs.
  3. Ensure decentralization by allowing any bundler to participate in the process of including account-abstracted user operations.
  4. Enable all activity to happen over a public mempool, eliminating the need for users to know direct communication addresses of specific actors.
  5. Avoid trust assumptions on bundlers.
  6. Avoid requiring any Ethereum consensus changes for faster adoption.
  7. Support other use cases such as privacy-preserving applications, atomic multi-operations, paying transaction fees with ERC-20 tokens, and developer-sponsored transactions.

Backwards Compatibility

Since ERC-4337 does not change the consensus layer, there are no direct backwards compatibility issues for Ethereum. However, pre-ERC-4337 accounts are not easily compatible with the new system because they lack the necessary validateUserOp function. This can be addressed by creating an ERC-4337 compatible account that re-implements the verification logic as a wrapper and setting it as the original account’s trusted op submitter.

Reference Implementation

For those interested in diving deeper into the technical details of ERC-4337, a reference implementation is available at https://github.com/eth-infinitism/account-abstraction/tree/main/contracts.
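
For a sense of how clients interact with bundlers in practice, below is a hedged sketch that submits a UserOperation (using the type sketched earlier) via the eth_sendUserOperation JSON-RPC method defined alongside the proposal. The bundler URL and EntryPoint address are placeholders, not real endpoints.

```typescript
// Submit a UserOperation to a bundler via JSON-RPC. The method name comes
// from the ERC-4337 bundler spec; BUNDLER_URL and ENTRY_POINT are placeholders.

const BUNDLER_URL = "https://bundler.example.com/rpc"; // hypothetical endpoint
const ENTRY_POINT = "0x..."; // address of the deployed EntryPoint contract

async function sendUserOperation(userOp: UserOperation): Promise<string> {
  const res = await fetch(BUNDLER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendUserOperation",
      params: [userOp, ENTRY_POINT],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result; // the userOpHash, which can later be polled for a receipt
}
```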

Security Considerations

The entry point contract for ERC-4337 must be heavily audited and formally verified, because it serves as a central trust point for the entire system. This design reduces the auditing and formal-verification burden for individual accounts, but it concentrates security risk in the entry point contract itself.

Verification should cover two primary claims (a simplified sketch of the flow follows this list):

  1. Safety against arbitrary hijacking: The entry point only calls an account generically if validateUserOp for that specific account has passed.
  2. Safety against fee draining: If the entry point calls validateUserOp and passes, it must also make the generic call with calldata equal to op.calldata.
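
To see why these two claims are the crux, here is a deliberately simplified sketch of the two-phase handleOps flow, written in TypeScript for readability and reusing the UserOperation type sketched earlier. The real EntryPoint is a Solidity contract, and every name below is illustrative, not the actual implementation.

```typescript
// Simplified, illustrative sketch of the EntryPoint's two-phase handleOps
// flow. The real contract is Solidity; names and details here are indicative.

declare function account(addr: string): {
  validateUserOp(op: UserOperation): Promise<boolean>;
  call(data: string): Promise<void>;
};

async function handleOps(ops: UserOperation[]): Promise<void> {
  // Phase 1: verification. No account is called generically until its own
  // validateUserOp has passed (the "safety against hijacking" claim).
  for (const op of ops) {
    const ok = await account(op.sender).validateUserOp(op);
    if (!ok) throw new Error(`validation failed for ${op.sender}`);
  }

  // Phase 2: execution. The generic call uses exactly op.callData, so fees
  // are only charged for calls the account itself validated (the "safety
  // against fee draining" claim).
  for (const op of ops) {
    await account(op.sender).call(op.callData);
  }
}
```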

Conclusion

ERC-4337 is an exciting proposal that aims to introduce account abstraction to Ethereum without requiring consensus-layer protocol changes. By using higher-layer infrastructure, it opens up new possibilities for decentralization, flexibility, and various use cases. While there are security considerations to address, this proposal has the potential to greatly improve the Ethereum ecosystem and user experience.