
Building Gas-less Experiences with Sui Paymaster: Architecture and Implementation Guide

· 10 min read
Dora Noda
Software Engineer

Imagine a world where users can interact with your dApp seamlessly, without needing to hold any native tokens (SUI). This is no longer a distant dream. With Sui's Gas Station (also known as a Paymaster), developers can cover gas fees on behalf of their users, completely removing one of the biggest barriers for new entrants to Web3 and enabling a truly frictionless on-chain experience.

This article provides a complete guide to upgrading your dApp to be gas-less. We'll dive deep into the core concepts of the Sui Paymaster, its architecture, implementation patterns, and best practices.

1. Background and Core Concepts: What is a Sponsored Transaction?

In the world of blockchain, every transaction requires a network fee, or "gas." For users accustomed to the seamless experiences of Web2, this is a significant cognitive and operational hurdle. Sui addresses this challenge at the protocol level with Sponsored Transactions.

The core idea is simple: allow one party (the Sponsor) to pay the SUI gas fees for another party's (the User) transaction. This way, even if a user has zero SUI in their wallet, they can still successfully initiate on-chain actions.

Paymaster ≈ Gas Station

In the Sui ecosystem, the logic for sponsoring transactions is typically handled by an off-chain or on-chain service called a Gas Station or Paymaster. Its primary responsibilities include:

  1. Evaluating the Transaction: It receives a user's gas-less transaction data (GasLessTransactionData).
  2. Providing Gas: It locks and allocates the necessary gas fee for the transaction. This is usually managed through a gas pool composed of many SUI Coin objects.
  3. Generating a Sponsor Signature: After approving the sponsorship, the Gas Station signs the transaction with its private key (SponsorSig), certifying its willingness to pay the fee.
  4. Returning the Signed Transaction: It sends back the TransactionData, which now includes the gas data and the sponsor's signature, to await the user's final signature.

In short, a Gas Station acts as a refueling service for your dApp's users, ensuring their "vehicles" (transactions) can travel smoothly on the Sui network.
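
To make these responsibilities concrete, here is a minimal Rust sketch of a Gas Station's sponsorship path. All type and field names (the GasLessTransactionData struct, the pool of coin IDs, the placeholder signature) are simplified stand-ins for illustration, not the actual Sui SDK types:

// Minimal sketch of a Gas Station's core decision loop. The types and field
// names here are hypothetical stand-ins, not the real Sui SDK structures.

use std::collections::HashSet;

// Hypothetical stand-in for GasLessTransactionData: the user's intent
// without any gas payment information attached.
struct GasLessTransactionData {
    sender: String,
    payload: Vec<u8>,
}

// Hypothetical stand-in for the sponsor-completed transaction data.
struct SponsoredTransactionData {
    sender: String,
    payload: Vec<u8>,
    gas_coin_id: String,
    gas_budget: u64,
    sponsor_signature: String,
}

struct GasStation {
    // Pool of pre-split SUI coin object IDs reserved for sponsorship.
    gas_pool: Vec<String>,
    // Addresses that are not eligible for sponsorship (abuse, rate limits, ...).
    blacklist: HashSet<String>,
    gas_budget: u64,
}

impl GasStation {
    // Steps 1-4 above: evaluate, provide gas, sign, return.
    fn sponsor(&mut self, tx: GasLessTransactionData) -> Option<SponsoredTransactionData> {
        // 1. Evaluate the transaction (eligibility / business rules).
        if self.blacklist.contains(&tx.sender) {
            return None;
        }
        // 2. Provide gas: take one coin out of the pool for exclusive use.
        let gas_coin_id = self.gas_pool.pop()?;
        // 3. Generate the sponsor signature (placeholder for a real keypair signature).
        let sponsor_signature = format!("sig_over({}, {})", tx.sender, gas_coin_id);
        // 4. Return the transaction data, now carrying gas info and the sponsor
        //    signature, so the user can add their own final signature.
        Some(SponsoredTransactionData {
            sender: tx.sender,
            payload: tx.payload,
            gas_coin_id,
            gas_budget: self.gas_budget,
            sponsor_signature,
        })
    }
}

fn main() {
    let mut station = GasStation {
        gas_pool: vec!["coin_a".into(), "coin_b".into()],
        blacklist: HashSet::new(),
        gas_budget: 10_000_000,
    };
    let request = GasLessTransactionData { sender: "0xuser".into(), payload: vec![1, 2, 3] };
    if let Some(sponsored) = station.sponsor(request) {
        println!("sponsored with gas coin {} and budget {}", sponsored.gas_coin_id, sponsored.gas_budget);
    }
}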

2. High-Level Architecture and Interaction Flow

A typical gas-less transaction involves coordination between the user, the dApp frontend, the Gas Station, and a Sui Full Node. The interaction sequence is as follows:

Flow Breakdown:

  1. The User performs an action in the dApp UI, which constructs a transaction data package without any gas information.
  2. The dApp sends this data to its designated Gas Station to request sponsorship.
  3. The Gas Station verifies the request's validity (e.g., checks if the user is eligible for sponsorship), then populates the transaction with a Gas Coin and its signature, returning the semi-complete transaction to the dApp.
  4. The User sees the full transaction details in their wallet (e.g., "Purchase one NFT") and provides the final signature. This is a crucial step that ensures the user maintains consent and control over their actions.
  5. The dApp broadcasts the complete transaction, containing both the user's and the sponsor's signatures, to a Sui Full Node.
  6. After the transaction is finalized on-chain, the Gas Station can confirm this by listening for on-chain events or receipts, then notify the dApp backend via a webhook to close the loop on the business process.

3. Three Core Interaction Models

You can use the following three interaction models individually or in combination to suit your business needs.

Model 1: User-Initiated → Sponsor-Approved (Most Common)

This is the standard model, suitable for the vast majority of in-dApp interactions.

  1. User constructs GasLessTransactionData: The user performs an action within the dApp.
  2. Sponsor adds GasData and signs: The dApp backend sends the transaction to the Gas Station, which approves it, attaches a Gas Coin, and adds its signature.
  3. User reviews and gives final signature: The user confirms the final transaction details in their wallet and signs it. The dApp then submits it to the network.

This model strikes an excellent balance between security and user experience.

Model 2: Sponsor-Initiated Airdrops/Incentives

This model is perfect for airdrops, user incentives, or batch asset distributions.

  1. Sponsor pre-fills TransactionData + signs: The Sponsor (typically the project team) pre-constructs most of the transaction (e.g., airdropping an NFT to a specific address) and attaches its sponsorship signature.
  2. User's second signature makes it effective: The user only needs to sign this "pre-approved" transaction for it to be executed.

This creates an extremely smooth user experience. With just one click to confirm, users can claim rewards or complete tasks, dramatically increasing the conversion rates of marketing campaigns.

Model 3: Wildcard GasData (Credit Line Model)

This is a more flexible and permission-based model.

  1. Sponsor transfers a GasData object: The Sponsor first creates one or more Gas Coin objects with a specific budget and transfers ownership directly to the user.
  2. User spends freely within the budget: The user can then freely use these Gas Coins to pay for any transactions they initiate within the budget's limits and validity period.
  3. Gas Coin is returned: Once depleted or expired, the Gas Coin object can be designed to be automatically destroyed or returned to the Sponsor.

This model is equivalent to giving the user a limited-time, limited-budget "gas fee credit card," suitable for scenarios requiring a high degree of user autonomy, such as offering a free-to-play experience during a game season.

4. Typical Application Scenarios

The power of the Sui Paymaster lies not just in solving the gas fee problem, but also in its ability to deeply integrate with business logic to create new possibilities.

Scenario 1: Paywalls

Many content platforms or dApp services require users to meet certain criteria (e.g., hold a VIP NFT, reach a certain membership level) to access features. The Paymaster can implement this logic perfectly.

  • Flow: A user requests an action → the dApp backend verifies the user's qualifications (e.g., NFT ownership) → if eligible, it calls the Paymaster to sponsor the gas fee; if not, it simply denies the signing request.
  • Advantage: This model is inherently resistant to bots and abuse. Since the sponsorship decision is made on the backend, malicious users cannot bypass the qualification check to drain gas funds.

Scenario 2: One-Click Checkout

In e-commerce or in-game purchase scenarios, simplifying the payment process is critical.

  • Flow: The user clicks "Buy Now" on a checkout page. The dApp constructs a transaction that includes the business logic (e.g., transfer_nft_to_user). The user only needs to sign to approve the business transaction in their wallet, without worrying about gas. The gas fee is covered by the dApp's Sponsor.
  • Advantage: You can encode business parameters like an order_id directly into the ProgrammableTransactionBlock, enabling precise on-chain attribution for backend orders.

Scenario 3: Data Attribution

Accurate data tracking is fundamental to business optimization.

  • Flow: When constructing the transaction, write a unique identifier (like an order_hash) into the transaction's parameters or into an event that will be emitted upon execution.
  • Advantage: When the Gas Station receives the on-chain receipt for a successful transaction, it can easily extract this order_hash by parsing the event or transaction data. This allows for a precise mapping between on-chain state changes and specific backend orders or user actions.
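
As a rough illustration of this mapping, the sketch below models a gas station backend that scans emitted events for an order_hash and marks the corresponding backend order as settled. The event and order structures are hypothetical placeholders, not actual Sui event types:

// Minimal sketch of on-chain-to-backend attribution. The event layout is a
// hypothetical example; real events would be parsed from the transaction
// effects returned by a Sui full node.
use std::collections::HashMap;

// Hypothetical shape of an event emitted by the dApp's Move module.
struct EmittedEvent {
    event_type: String, // e.g. "my_app::checkout::PurchaseCompleted"
    order_hash: String, // identifier written into the transaction
}

struct Order {
    id: u64,
    status: String,
}

// Map successful on-chain receipts back to backend orders by order_hash.
fn attribute_receipt(events: &[EmittedEvent], orders_by_hash: &mut HashMap<String, Order>) {
    for event in events {
        if event.event_type.ends_with("PurchaseCompleted") {
            if let Some(order) = orders_by_hash.get_mut(&event.order_hash) {
                order.status = "settled_on_chain".to_string();
            }
        }
    }
}

fn main() {
    let mut orders = HashMap::new();
    orders.insert("0xabc123".to_string(), Order { id: 42, status: "pending".to_string() });

    let receipt_events = vec![EmittedEvent {
        event_type: "my_app::checkout::PurchaseCompleted".to_string(),
        order_hash: "0xabc123".to_string(),
    }];

    attribute_receipt(&receipt_events, &mut orders);
    println!("order {} status: {}", orders["0xabc123"].id, orders["0xabc123"].status);
}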

5. Code Skeleton (Based on the Rust SDK)

Here is a simplified code snippet demonstrating the core interaction steps.

// Assume tx_builder, sponsor, and wallet have been initialized

// Step 1: On the user or dApp side, construct a gas-less transaction
let gasless_transaction_data = tx_builder.build_gasless_transaction_data(false)?;

// Step 2: On the Sponsor (Gas Station) side, receive the gasless_transaction_data,
// fill it with a Gas Coin, and return the transaction data with the Sponsor's signature.
// The sponsor_transaction_block function handles gas allocation and signing internally.
let sponsored_transaction = sponsor.sponsor_transaction_block(gasless_transaction_data, user_address, gas_budget)?;

// Step 3: The dApp sends the sponsored_transaction back to the user,
// who signs and executes it with their wallet.
let response = wallet.sign_and_execute_transaction_block(&sponsored_transaction)?;

For a complete implementation, refer to the official Sui documentation's Gas Station Tutorial, which offers out-of-the-box code examples.

6. Risks and Protection

While powerful, deploying a Gas Station in a production environment requires careful consideration of the following risks:

  • Equivocation (Double-Spending): A malicious user might try to use the same Gas Coin for multiple transactions in parallel, which would cause the Gas Coin to be locked by the Sui network. This can be effectively mitigated by assigning a unique Gas Coin per user or transaction, maintaining a blacklist, and rate-limiting signing requests.
  • Gas Pool Management: In high-concurrency scenarios, a single large-value Gas Coin can become a performance bottleneck. The Gas Station service must be capable of automatically splitting large SUI Coins into many smaller-value Gas Coins and efficiently reclaiming them after use. Professional Gas Station providers like Shinami offer mature, managed solutions for this.
  • Authorization and Rate Limiting: You must establish strict authorization and rate-limiting policies. For instance, manage sponsorship limits and frequencies based on user IP, wallet address, or API tokens to prevent the service from being drained by malicious actors.
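
The following sketch models the gas-pool bookkeeping described above purely in memory: splitting a funding balance into many small gas coins, leasing exactly one coin per sponsored transaction, and reclaiming the remainder afterwards. It is an illustrative simplification; a production gas station would split and merge real SUI coin objects on-chain:

// Minimal in-memory sketch of gas pool management: split a large balance into
// many small gas coins, lease one coin per request, and reclaim leftovers.
// Real implementations split and merge actual SUI coin objects on-chain.
struct GasCoin {
    id: u64,
    balance: u64, // in MIST
}

struct GasPool {
    available: Vec<GasCoin>,
    next_id: u64,
}

impl GasPool {
    // Split a funding balance into `n` equal smaller coins.
    fn refill(&mut self, total_balance: u64, n: u64) {
        let per_coin = total_balance / n;
        for _ in 0..n {
            self.available.push(GasCoin { id: self.next_id, balance: per_coin });
            self.next_id += 1;
        }
    }

    // Lease exactly one coin per sponsored transaction to avoid equivocation
    // (the same coin being referenced by two transactions in parallel).
    fn lease(&mut self) -> Option<GasCoin> {
        self.available.pop()
    }

    // Return a coin (minus whatever gas was actually consumed) to the pool.
    fn reclaim(&mut self, mut coin: GasCoin, gas_used: u64) {
        coin.balance = coin.balance.saturating_sub(gas_used);
        if coin.balance > 0 {
            self.available.push(coin);
        }
    }
}

fn main() {
    let mut pool = GasPool { available: Vec::new(), next_id: 0 };
    pool.refill(1_000_000_000, 100); // 1 SUI split into 100 gas coins
    if let Some(coin) = pool.lease() {
        // ... sponsor a transaction with `coin` ...
        pool.reclaim(coin, 3_000_000);
    }
    println!("coins available: {}", pool.available.len());
}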

7. Ecosystem Tools

The Sui ecosystem already offers a rich set of tools to simplify Paymaster development and deployment:

  • Official SDKs (Rust/TypeScript): Include high-level APIs like sponsor_transaction_block(), significantly reducing integration complexity.
  • Shinami Gas Station: Provides an all-in-one managed service, including automated Gas Coin splitting/reclaiming, detailed metrics monitoring, and webhook notifications, allowing developers to focus on business logic.
  • Enoki / Mysten Demos: The community and Mysten Labs also provide open-source Paymaster implementations that can be used as a reference for building your own service.

8. Implementation Checklist

Ready to upgrade your dApp to the gas-less era? Go through this checklist before you start:

  • Plan Your Funding Flow: Define the Sponsor's funding source, budget, and replenishment strategy. Set up monitoring and alerts for key metrics (e.g., gas pool balance, consumption rate).
  • Reserve Attribution Fields: When designing your transaction parameters, be sure to reserve fields for business identifiers like order_id or user_id.
  • Deploy Anti-Abuse Policies: You must implement strict authorization, rate-limiting, and logging mechanisms before going live.
  • Rehearse on Testnet: Whether building your own service or integrating a third-party Gas Station, always conduct thorough concurrency and stress testing on a testnet or devnet first.
  • Continuously Optimize: After launch, continuously track transaction success rates, failure reasons, and gas costs. Fine-tune your budget and strategies based on the data.

Conclusion

The Sui Paymaster (Gas Station) is more than just a tool for covering user gas fees. It's a powerful paradigm that elegantly combines a "zero SUI on-chain" user experience with the business need for "order-level on-chain attribution" within a single, atomic transaction. It paves the way for Web2 users to enter Web3 and provides developers with unprecedented flexibility for business customization.

With an increasingly mature ecosystem of tools and the current low gas costs on the Sui network, there has never been a better time to upgrade your dApp's payment and interaction flows to the gas-less era.

Introducing SUI Token Staking on BlockEden.xyz: Earn 2.08% APY with One-Click Simplicity

· 7 min read
Dora Noda
Software Engineer

We're happy to announce the launch of SUI token staking on BlockEden.xyz! Starting today, you can stake your SUI tokens directly through our platform and earn a 2.08% APY while supporting the security and decentralization of the SUI network.

What's New: A Seamless SUI Staking Experience

Our new staking feature brings institutional-grade staking to everyone with a simple, intuitive interface that makes earning rewards effortless.

Key Features

One-Click Staking

Staking SUI has never been easier. Simply connect your Suisplash wallet, enter the amount of SUI you wish to stake, and approve the transaction. You'll start earning rewards almost immediately without any complex procedures.

Competitive Rewards

Earn a competitive 2.08% APY on your staked SUI. Our 8% commission fee is transparent, ensuring you know exactly what to expect. Rewards are distributed daily upon the completion of each epoch.

Trusted Validator

Join a growing community that has already staked over 22 million SUI with the BlockEden.xyz validator. We have a proven track record of reliable validation services, supported by enterprise-grade infrastructure that ensures 99.9% uptime.

Flexible Management

Your assets remain flexible. Staking is instant, meaning your rewards begin to accumulate right away. Should you need to access your funds, you can initiate the unstaking process at any time. Your SUI will be available after the standard SUI network unbonding period of 24-48 hours. You can track your stakes and rewards in real-time through our dashboard.

Why Stake SUI with BlockEden.xyz?

Choosing a validator is a critical decision. Here’s why BlockEden.xyz is a sound choice for your staking needs.

Reliability You Can Trust

BlockEden.xyz has been a cornerstone of blockchain infrastructure since our inception. Our validator infrastructure powers enterprise applications and has maintained exceptional uptime across multiple networks, ensuring consistent reward generation.

Transparent & Fair

We believe in complete transparency. There are no hidden fees—just a clear 8% commission on the rewards you earn. You can monitor your staking performance with real-time reporting and verify our validator's activity on-chain.

  • Open Validator Address: 0x3b5664bb0f8bb4a8be77f108180a9603e154711ab866de83c8344ae1f3ed4695

Seamless Integration

Our platform is designed for simplicity. There's no need to create an account; you can stake directly from your wallet. The experience is optimized for the Suisplash wallet, and our clean, intuitive interface is built for both beginners and experts.

How to Get Started

Getting started with SUI staking on BlockEden.xyz takes less than two minutes.

Step 1: Visit the Staking Page

Navigate to blockeden.xyz/dash/stake. You can begin the process immediately without any account registration.

Step 2: Connect Your Wallet

If you don't have it already, install the Suisplash wallet. Click the "Connect Wallet" button on our staking page and approve the connection in the wallet extension. Your SUI balance will be displayed automatically.

Step 3: Choose Your Stake Amount

Enter the amount of SUI you want to stake (minimum 1 SUI). You can use the "MAX" button to conveniently stake your entire available balance, leaving a small amount for gas fees. A summary will show your stake amount and estimated annual rewards.

Step 4: Confirm & Start Earning

Click "Stake SUI" and approve the final transaction in your wallet. Your new stake will appear on the dashboard in real-time, and you will begin accumulating rewards immediately.

Staking Economics: What You Need to Know

Understanding the mechanics of staking is key to managing your assets effectively.

Reward Structure

  • Base APY: 2.08% annually
  • Reward Frequency: Distributed every epoch (approximately 24 hours)
  • Commission: 8% of earned rewards
  • Compounding: Rewards are added to your wallet and can be re-staked to achieve compound growth.

Example Earnings

Here is a straightforward breakdown of potential earnings based on a 2.08% APY, after the 8% commission fee.

Stake Amount | Annual Rewards | Monthly Rewards | Daily Rewards
100 SUI | ~2.08 SUI | ~0.17 SUI | ~0.0057 SUI
1,000 SUI | ~20.8 SUI | ~1.73 SUI | ~0.057 SUI
10,000 SUI | ~208 SUI | ~17.3 SUI | ~0.57 SUI

Note: These are estimates. Actual rewards may vary based on network conditions.
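
For readers who want to reproduce the table, the arithmetic behind these estimates is simply a flat APY prorated by month and day. The short Rust sketch below is a hypothetical helper for illustration only, not part of any BlockEden.xyz tooling:

// Simple estimate of staking rewards used in the table above: a flat APY
// prorated by month and day. Actual SUI rewards vary per epoch with network conditions.
fn estimated_rewards(stake: f64, apy_percent: f64) -> (f64, f64, f64) {
    let annual = stake * apy_percent / 100.0;
    let monthly = annual / 12.0;
    let daily = annual / 365.0;
    (annual, monthly, daily)
}

fn main() {
    for stake in [100.0, 1_000.0, 10_000.0] {
        let (annual, monthly, daily) = estimated_rewards(stake, 2.08);
        println!(
            "{:>8} SUI -> ~{:.2} SUI/year, ~{:.2} SUI/month, ~{:.4} SUI/day",
            stake, annual, monthly, daily
        );
    }
}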

Risk Considerations

Staking involves certain risks that you should be aware of:

  • Unbonding Period: When you unstake, your SUI is subject to a 24-48 hour unbonding period where it is inaccessible and does not earn rewards.
  • Validator Risk: While we maintain high standards, any validator carries operational risks. Choosing a reputable validator like BlockEden.xyz is important.
  • Network Risk: Staking is a form of network participation and is subject to the inherent risks of the underlying blockchain protocol.
  • Market Risk: The market value of the SUI token can fluctuate, which will affect the total value of your staked assets.

Technical Excellence

Enterprise Infrastructure

Our validator nodes are built on a foundation of technical excellence. We utilize redundant systems distributed across multiple geographic regions to ensure high availability. Our infrastructure is under 24/7 monitoring with automated failover capabilities, and a professional operations team manages the system around the clock. We also conduct regular security audits and compliance checks.

Open Source & Transparency

We are committed to the principles of open source. Our staking integration is built to be transparent, allowing users to inspect the underlying processes. Real-time metrics are publicly available on SUI network explorers, and our fee structure is completely open with no hidden costs. We also actively participate in community governance to support the SUI ecosystem.

Supporting the SUI Ecosystem

By staking with BlockEden.xyz, you're doing more than just earning rewards. You are actively contributing to the health and growth of the entire SUI network.

  • Network Security: Your stake adds to the total amount securing the SUI network, making it more robust against potential attacks.
  • Decentralization: Supporting independent validators like BlockEden.xyz enhances the network's resilience and prevents centralization.
  • Ecosystem Growth: The commission fees we earn are reinvested into maintaining and developing critical infrastructure.
  • Innovation: Revenue supports our research and development of new tools and services for the blockchain community.

Security & Best Practices

Please prioritize the security of your assets.

Wallet Security

  • Never share your private keys or seed phrase with anyone.
  • Use a hardware wallet for storing and staking large amounts.
  • Always verify transaction details in your wallet before signing.
  • Keep your wallet software updated to the latest version.

Staking Safety

  • If you are new to staking, start with a small amount to familiarize yourself with the process.
  • Consider diversifying your stake across multiple reputable validators to reduce risk.
  • Regularly monitor your staked assets and rewards.
  • Ensure you understand the unbonding period before you commit your funds.

Join the Future of SUI Staking

The launch of SUI staking on BlockEden.xyz is more than a new feature; it's a gateway to active participation in the decentralized economy. Whether you're an experienced DeFi user or just beginning your journey, our platform provides a simple and secure way to earn rewards while contributing to the future of the SUI network.

Ready to start earning?

Visit blockeden.xyz/dash/stake and stake your first SUI tokens today!


About BlockEden.xyz

BlockEden.xyz is a leading blockchain infrastructure provider offering reliable, scalable, and secure services to developers, enterprises, and the broader Web3 community. From API services to validator operations, we're committed to building the foundation for a decentralized future.

  • Founded: 2021
  • Networks Supported: 15+ blockchain networks
  • Enterprise Clients: 500+ companies worldwide
  • Total Value Secured: $100M+ across all networks

Follow us on Twitter, join our Discord, and explore our full suite of services at BlockEden.xyz.


Disclaimer: This blog post is for informational purposes only and does not constitute financial advice. Cryptocurrency staking involves risks, including the potential loss of principal. Please conduct your own research and consider your risk tolerance before staking.

Decentralized AI Inference Markets: Bittensor, Gensyn, and Cuckoo AI

· 71 min read
Dora Noda
Software Engineer

Introduction

Decentralized AI inference/training markets aim to harness global compute resources and community models in a trustless way. Projects like Bittensor, Gensyn, and Cuckoo Network (Cuckoo AI) illustrate how blockchain technology can power open AI marketplaces. Each platform tokenizes key AI assets – computing power, machine learning models, and sometimes data – into on-chain economic units. In the following, we delve into the technical architectures underpinning these networks, how they tokenize resources, their governance and incentive structures, methods for tracking model ownership, revenue-sharing mechanisms, and the attack surfaces (e.g. sybil attacks, collusion, freeloading, poisoning) that arise. A comparative table at the end summarizes all key dimensions across Bittensor, Gensyn, and Cuckoo AI.

Technical Architectures

Bittensor: Decentralized “Neural Internet” on Subnets

Bittensor is built on a custom Layer-1 blockchain (the Subtensor chain, based on Substrate) that coordinates a network of AI model nodes across many specialized subnets. Each subnet is an independent mini-network focusing on a particular AI task (for example, a subnet for language generation, another for image generation, etc.). Participants in Bittensor take on distinct roles:

  • Miners – they run machine learning models on their hardware and provide inference answers (or even perform training) for the subnet’s task. In essence, a miner is a node hosting an AI model that will answer queries.
  • Validators – they query miners’ models with prompts and evaluate the quality of the responses, forming an opinion on which miners are contributing valuable results. Validators effectively score the performance of miners.
  • Subnet Owners – they create and define subnets, setting the rules for what tasks are done and how validation is performed in that subnet. A subnet owner could, for example, specify that a subnet is for a certain dataset or modality and define the validation procedure.
  • Delegators – token holders who do not run nodes can delegate (stake) their Bittensor tokens (TAO) to miners or validators to back the best performers and earn a share of rewards (similar to staking in proof-of-stake networks).

Bittensor’s consensus mechanism is novel: instead of traditional block validation, Bittensor uses the Yuma consensus which is a form of “proof-of-intelligence.” In Yuma consensus, validators’ evaluations of miners are aggregated on-chain to determine reward distribution. Every 12-second block, the network mints new TAO tokens and distributes them according to the consensus of validators on which miners provided useful work. Validators’ scores are combined in a stake-weighted median scheme: outlier opinions are clipped and honest majority opinion prevails. This means if most validators agree a miner was high-quality, that miner will get a strong reward; if a validator deviates far from others (possibly due to collusion or error), that validator is penalized by earning less. In this way, Bittensor’s blockchain coordinates a miner–validator feedback loop: miners compete to produce the best AI outputs, and validators curate and rank those outputs, with both sides earning tokens proportional to the value they add. This architecture is often described as a “decentralized neural network” or “global brain,” where models learn from each other’s signals and evolve collectively. Notably, Bittensor recently upgraded its chain to support EVM compatibility (for smart contracts) and introduced dTAO, a system of subnet-specific tokens and staking (explained later) to further decentralize control of resource allocation.
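
As a rough sketch of this idea (an illustration of the principle, not Bittensor's actual Yuma implementation), the snippet below computes a stake-weighted median of validator scores for one miner and clips outlier scores to that consensus value:

// Simplified sketch of the idea behind Yuma consensus: for one miner, take the
// stake-weighted median of validator scores, then clip each validator's score
// toward that consensus value so outliers carry little weight.
fn stake_weighted_median(scores: &[(f64, f64)]) -> f64 {
    // scores: (validator_stake, score_for_miner)
    let mut sorted: Vec<(f64, f64)> = scores.to_vec();
    sorted.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    let total_stake: f64 = sorted.iter().map(|(s, _)| *s).sum();
    let mut cumulative = 0.0;
    for (stake, score) in &sorted {
        cumulative += stake;
        if cumulative >= total_stake / 2.0 {
            return *score;
        }
    }
    sorted.last().map(|(_, s)| *s).unwrap_or(0.0)
}

fn clipped_scores(scores: &[(f64, f64)]) -> Vec<f64> {
    let consensus = stake_weighted_median(scores);
    // Scores above consensus are clipped down to it; the honest majority prevails.
    scores.iter().map(|(_, s)| (*s).min(consensus)).collect()
}

fn main() {
    // Three honest validators rate a miner around 0.8; one outlier (colluding?) says 1.0.
    let scores = vec![(100.0, 0.8), (80.0, 0.82), (120.0, 0.79), (50.0, 1.0)];
    println!("consensus score: {:.2}", stake_weighted_median(&scores));
    println!("clipped scores: {:?}", clipped_scores(&scores));
}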

Gensyn: Trustless Distributed Compute Protocol

Gensyn approaches decentralized AI from the angle of a distributed computing protocol for machine learning. Its architecture connects developers (submitters) who have AI tasks (like training a model or running an inference job) with compute providers (solvers) around the world who have spare GPU/TPU resources. Originally, Gensyn planned a Substrate L1 chain, but it pivoted to building on Ethereum as a rollup for stronger security and liquidity. The Gensyn network is thus an Ethereum Layer-2 (an Ethereum rollup) that coordinates job postings and payments, while computation happens off-chain on the providers’ hardware.

A core innovation of Gensyn’s design is its verification system for off-chain work. Gensyn uses a combination of optimistic verification (fraud proofs) and cryptographic techniques to ensure that when a solver claims to have run a training/inference task, the result is correct. In practice, the protocol involves multiple participant roles:

  • Submitter – the party requesting a job (for example, someone who needs a model trained). They pay the network’s fee and provide the model/data or the specification of the task.
  • Solver – a node that bids for and executes the ML task on their hardware. They will train the model or run the inference as requested, then submit the results and a proof of computation.
  • Verifier/Challenger – nodes that can audit or spot-check the solver’s work. Gensyn implements a Truebit-style scheme where by default a solver’s result is accepted, but a verifier can challenge it within a window if they suspect an incorrect computation. In a challenge, an interactive “binary search” through the computation steps (a fraud proof protocol) is used to pinpoint any discrepancy. This allows the chain to resolve disputes by performing only a minimal critical part of the computation on-chain, rather than redoing the entire expensive task.

Crucially, Gensyn is designed to avoid the massive redundancy of naive approaches. Instead of having many nodes all repeat the same ML job (which would destroy cost savings), Gensyn’s “proof-of-learning” approach uses training metadata to verify that learning progress was made. For example, a solver might provide cryptographic hashes or checkpoints of intermediate model weights and a succinct proof that these progressed according to the training updates. This probabilistic proof-of-learning can be checked much more cheaply than re-running the entire training, enabling trustless verification without full replication. Only if a verifier detects an anomaly would a heavier on-chain computation be triggered as a last resort. This approach dramatically reduces overhead compared to brute-force verification, making decentralized ML training more feasible. Gensyn’s architecture thus heavily emphasizes crypto-economic game design: solvers put down a stake or bond, and if they cheat (submitting wrong results), they lose that stake to honest verifiers who catch them. By combining blockchain coordination (for payments and dispute resolution) with off-chain compute and clever verification, Gensyn creates a marketplace for ML compute that can tap into idle GPUs anywhere while maintaining trustlessness. The result is a hyperscale “compute protocol” where any developer can access affordable, globally-distributed training power on demand.
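
To illustrate the dispute-resolution idea (using toy integer states rather than real training checkpoints and proofs-of-learning), the sketch below bisects two checkpoint traces to find the single step that must be re-executed to settle a challenge:

// Simplified sketch of an interactive verification game: solver and challenger
// both claim a sequence of state checkpoints; a binary search finds the first
// step where their claims diverge, and only that single step needs to be
// re-executed on-chain. Illustrative only; Gensyn's actual protocol works on
// training checkpoints and proofs-of-learning, not toy integers.
fn run_step(state: u64, step: u64) -> u64 {
    // Stand-in for one step of deterministic computation (e.g. one training update).
    state.wrapping_mul(31).wrapping_add(step)
}

// Returns the index of the first step where the two parties' checkpoint claims differ.
fn find_divergent_step(solver_trace: &[u64], challenger_trace: &[u64]) -> Option<usize> {
    let (mut lo, mut hi) = (0usize, solver_trace.len() - 1);
    if solver_trace[hi] == challenger_trace[hi] {
        return None; // no disagreement at the end, so nothing to dispute
    }
    while hi - lo > 1 {
        let mid = (lo + hi) / 2;
        if solver_trace[mid] == challenger_trace[mid] {
            lo = mid; // agreement up to mid: divergence is later
        } else {
            hi = mid; // disagreement at mid: divergence is at or before mid
        }
    }
    Some(hi)
}

fn main() {
    // Build a correct trace, then a cheating solver trace that goes wrong at step 6.
    let mut honest = vec![1u64];
    for step in 1..=10 {
        honest.push(run_step(*honest.last().unwrap(), step));
    }
    let mut cheating = honest.clone();
    for i in 6..cheating.len() {
        cheating[i] += 1; // corrupted from step 6 onward
    }

    if let Some(step) = find_divergent_step(&cheating, &honest) {
        // Only this one step is re-executed on-chain to settle the dispute.
        let recomputed = run_step(honest[step - 1], step as u64);
        println!("first divergent step: {step}, on-chain recompute = {recomputed}");
    }
}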

Cuckoo AI: Full-Stack Decentralized AI Service Platform

Cuckoo Network (or Cuckoo AI) takes a more vertically integrated approach, aiming to provide end-to-end decentralized AI services rather than just raw compute. Cuckoo built its own blockchain (initially a Layer-1 called Cuckoo Chain on Arbitrum Orbit, an Ethereum-compatible rollup framework) to orchestrate everything: it not only matches jobs to GPUs, but also hosts AI applications and handles payments in one system. The design is full-stack: it combines a blockchain for transactions and governance, a decentralized GPU/CPU resource layer, and user-facing AI applications and APIs on top. In other words, Cuckoo integrates all three layers – blockchain, compute, and AI application – within a single platform.

Participants in Cuckoo fall into four groups:

  • AI App Builders (Coordinators) – these are developers who deploy AI models or services onto Cuckoo. For example, a developer might host a Stable Diffusion image generator or an LLM chatbot as a service. They run Coordinator Nodes, which are responsible for managing their service: accepting user requests, splitting them into tasks, and assigning those tasks to miners. Coordinators stake the native token ($CAI) to join the network and gain the right to utilize miners. They essentially act as layer-2 orchestrators that interface between users and the GPU providers.
  • GPU/CPU Miners (Task Nodes) – these are the resource providers. Miners run the Cuckoo task client and contribute their hardware to perform inference tasks for the AI apps. For instance, a miner might be assigned an image generation request (with a given model and prompt) by a coordinator and use their GPU to compute the result. Miners also must stake $CAI to ensure commitment and good behavior. They earn token rewards for each task they complete correctly.
  • End Users – the consumers of the AI applications. They interact via Cuckoo’s web portal or APIs (for example, generating art via CooVerse or chatting with AI personalities). Users can either pay with crypto for each use or possibly contribute their own computing (or stake) to offset usage costs. An important aspect is censorship resistance: if one coordinator (service provider) is blocked or goes down, users can switch to another serving the same application, since multiple coordinators could host similar models in the decentralized network.
  • Stakers (Delegators) – community members who do not run AI services or mining hardware can still participate by staking $CAI on those who do. By voting with their stake on trusted coordinators or miners, they help signal reputation and in return earn a share of network rewards. This design builds a Web3 reputation layer: good actors attract more stake (and thus trust and rewards), while bad actors lose stake and reputation. Even end users can stake in some cases, aligning them with the network’s success.

The Cuckoo chain (now in the process of transitioning from a standalone chain to a shared-security rollup) tracks all these interactions. When a user invokes an AI service, the coordinator node creates on-chain task assignments for miners. The miners execute the tasks off-chain and return results to the coordinator, which validates them (e.g., checking that the output image or text is not gibberish) and delivers the final result to the user. The blockchain handles payment settlement: for each task, the coordinator’s smart contract pays the miner in $CAI (often aggregating micropayments into daily payouts). Cuckoo emphasizes trustlessness and transparency – all participants stake tokens and all task assignments and completions are recorded, so cheating is discouraged by the threat of losing stake and by public visibility of performance. The network’s modular design means new AI models or use-cases can be added easily: while it started with text-to-image generation as a proof of concept, its architecture is general enough to support other AI workloads (e.g. language model inference, audio transcription, etc.).
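
A minimal sketch of this assignment-and-settlement loop is shown below. The structures and reward amounts are illustrative placeholders, not Cuckoo's actual contract interfaces; they simply model assigning tasks to staked miners, accruing per-task micropayments, and aggregating them into one payout per miner:

// Minimal sketch of the coordinator/miner settlement loop described above:
// tasks are assigned to staked miners, each completed task accrues a CAI
// micropayment, and accrued amounts are aggregated into one payout per miner.
use std::collections::HashMap;

struct Miner {
    address: String,
    stake: u64, // staked CAI, required to receive assignments
}

struct Coordinator {
    miners: Vec<Miner>,
    pending_payouts: HashMap<String, u64>, // miner address -> accrued CAI
    next: usize,
}

impl Coordinator {
    // Round-robin assignment over miners that have stake posted.
    fn assign_task(&mut self) -> Option<String> {
        let eligible: Vec<usize> = (0..self.miners.len())
            .filter(|&i| self.miners[i].stake > 0)
            .collect();
        if eligible.is_empty() {
            return None;
        }
        let idx = eligible[self.next % eligible.len()];
        self.next += 1;
        Some(self.miners[idx].address.clone())
    }

    // Record a completed, validated task as an accrued micropayment.
    fn record_completion(&mut self, miner: &str, reward: u64) {
        *self.pending_payouts.entry(miner.to_string()).or_insert(0) += reward;
    }

    // Aggregate micropayments into one settlement per miner (e.g. a daily payout).
    fn settle(&mut self) -> Vec<(String, u64)> {
        self.pending_payouts.drain().collect()
    }
}

fn main() {
    let mut coord = Coordinator {
        miners: vec![
            Miner { address: "0xminer_a".into(), stake: 500 },
            Miner { address: "0xminer_b".into(), stake: 300 },
        ],
        pending_payouts: HashMap::new(),
        next: 0,
    };
    for _ in 0..4 {
        if let Some(miner) = coord.assign_task() {
            coord.record_completion(&miner, 2); // e.g. 2 CAI per image generation
        }
    }
    for (miner, amount) in coord.settle() {
        println!("pay {miner} {amount} CAI");
    }
}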

A notable aspect of Cuckoo’s architecture is that it initially launched its own Layer-1 blockchain to maximize throughput for AI transactions (peaking at 300k daily transactions during testing). This allowed custom optimizations for AI task scheduling. However, the team found maintaining a standalone L1 costly and complex, and as of mid-2025 they decided to sunset the custom chain and migrate to a rollup/AVS (Actively Validated Service) model on Ethereum. This means Cuckoo will inherit security from Ethereum or an L2 like Arbitrum, rather than running its own consensus, but will continue to operate its decentralized AI marketplace on that shared security layer. The change is intended to improve economic security (leveraging Ethereum’s robustness) and let the Cuckoo team focus on product rather than low-level chain maintenance. In summary, Cuckoo’s architecture creates a decentralized AI-serving platform where anyone can plug in hardware or deploy an AI model service, and users globally can access AI apps with lower cost and less reliance on Big Tech infrastructure.

Asset Tokenization Mechanisms

A common theme of these networks is converting compute, models, and data into on-chain assets or economic units that can be traded or monetized. However, each project focuses on tokenizing these resources in different ways:

  • Computing Power: All three platforms turn compute work into reward tokens. In Bittensor, useful computation (inference or training done by a miner) is quantified via validator scores and rewarded with TAO tokens each block. Essentially, Bittensor “measures” intelligence contributed and mints TAO as a commodity representing that contribution. Gensyn explicitly treats compute as a commodity – its protocol creates a marketplace where GPU time is the product, and the price is set by supply-demand in token terms. Developers buy compute using the token, and providers earn tokens by selling their hardware cycles. The Gensyn team notes that any digital resource (compute, data, algorithms) can be represented and traded in a similar trustless market. Cuckoo tokenizes compute via an ERC-20 token $CAI issued as payment for completed tasks. GPU providers essentially “mine” CAI by doing AI inference work. Cuckoo’s system creates on-chain records of tasks, so one can think of each completed GPU task as an atomic unit of work that is paid for in tokens. The premise across all three is that otherwise idle or inaccessible compute power becomes a tokenized, liquid asset – either through protocol-level token emissions (as in Bittensor and early Cuckoo) or through an open market of buy/sell orders for compute jobs (as in Gensyn).

  • AI Models: Representing AI models as on-chain assets (e.g. NFTs or tokens) is still nascent. Bittensor does not tokenize the models themselves – the models remain off-chain in the miners’ ownership. Instead, Bittensor indirectly puts a value on models by rewarding the ones that perform well. In effect, a model’s “intelligence” is turned into TAO earnings, but there isn’t an NFT that represents the model weights or permits others to use the model. Gensyn’s focus is on compute transactions, not explicitly on creating tokens for models. A model in Gensyn is typically provided by a developer off-chain (perhaps open-source or proprietary), trained by solvers, and returned – there is no built-in mechanism to create a token that owns the model or its IP. (That said, the Gensyn marketplace could potentially facilitate trading model artifacts or checkpoints if parties choose, but the protocol itself views models as the content of computation rather than a tokenized asset.) Cuckoo sits somewhere in between: it speaks of “AI agents” and models integrated into the network, but currently there isn’t a non-fungible token representing each model. Instead, a model is deployed by an app builder and then served via the network. The usage rights to that model are implicitly tokenized in that the model can earn $CAI when it’s used (via the coordinator who deploys it). All three platforms acknowledge the concept of model tokenization – for example, giving communities ownership of models via tokens – but practical implementations are limited. As an industry, tokenizing AI models (e.g. as NFTs with ownership rights and profit share) is still being explored. Bittensor’s approach of models exchanging value with each other is a form of “model marketplace” without explicit token per model. The Cuckoo team notes that decentralized model ownership is promising to lower barriers vs. centralized AI, but it requires effective methods to verify model outputs and usage on-chain. In summary, compute power is immediately tokenized (it’s straightforward to pay tokens for work done), whereas models are indirectly or aspirationally tokenized (rewarded for their outputs, possibly represented by stake or reputation, but not yet treated as transferable NFTs on these platforms).

  • Data: Data tokenization remains the hardest. None of Bittensor, Gensyn, or Cuckoo have fully generalized on-chain data marketplaces integrated (where datasets are traded with enforceable usage rights). Bittensor nodes might train on various datasets, but those datasets are not part of the on-chain system. Gensyn could allow a developer to provide a dataset for training, but the protocol does not tokenize that data – it’s simply provided off-chain for the solver to use. Cuckoo similarly doesn’t tokenize user data; it primarily handles data (like user prompts or outputs) in a transient way for inference tasks. The Cuckoo blog explicitly states that “decentralized data remains challenging to tokenize” despite being a critical resource. Data is sensitive (privacy and ownership issues) and hard to handle with current blockchain tech. So, while compute is being commoditized and models are beginning to be, data largely stays off-chain except for special cases (some projects outside these three are experimenting with data unions and token rewards for data contributions, but that’s outside our current scope). In summary, compute power is now an on-chain commodity in these networks, models are valued through tokens but not individually tokenized as assets yet, and data tokenization is still an open problem (beyond acknowledging its importance).

Governance and Incentives

A robust governance and incentive design is crucial for these decentralized AI networks to function autonomously and fairly. Here we examine how each platform governs itself (who makes decisions, how upgrades or parameter changes occur) and how they align participant incentives through token economics.

  • Bittensor Governance: In its early stages, Bittensor’s development and subnet parameters were largely controlled by the core team and a set of 64 “root” validators on the main subnet. This was a point of centralization – a few powerful validators had outsized influence on reward allocations, leading to what some called an “oligarchic voting system”. To address this, Bittensor introduced dTAO (decentralized TAO) governance in 2025. The dTAO system shifted resource allocation to be market-driven and community-controlled. Concretely, TAO holders can stake their tokens into subnet-specific liquidity pools (essentially, they “vote” on which subnets should get more network emission) and receive alpha tokens that represent ownership in those subnet pools. Subnets that attract more stake will have a higher alpha token price and get a larger share of the daily TAO emission, whereas unpopular or underperforming subnets will see capital (and thus emissions) flow away. This creates a feedback loop: if a subnet produces valuable AI services, more people stake TAO to it (seeking rewards), which gives that subnet more TAO to reward its participants, fostering growth. If a subnet stagnates, stakers withdraw to more lucrative subnets. In effect, TAO holders collectively govern the network’s focus by financially signaling which AI domains deserve more resources. This is a form of on-chain governance by token-weight, aligned to economic outcomes. Aside from resource allocation, major protocol upgrades or parameter changes likely still go through governance proposals where TAO holders vote (Bittensor has a mechanism for on-chain proposals and referenda managed by the Bittensor Foundation and an elected council, similar to Polkadot’s governance). Over time, one can expect Bittensor’s governance to become increasingly decentralized, with the foundation stepping back as the community (via TAO stake) steers things like inflation rate, new subnet approval, etc. The transition to dTAO is a big step in that direction, replacing centralized decision-makers with an incentive-aligned market of token stakeholders.

  • Bittensor Incentives: Bittensor’s incentive structure is tightly woven into its consensus. Every block (12 seconds), exactly 1 TAO is newly minted and split among the contributors of each subnet based on performance. The default split for each subnet’s block reward is 41% to miners, 41% to validators, and 18% to the subnet owner. This ensures all roles are rewarded: miners earn for doing inference work, validators earn for their evaluation effort, and subnet owners (who may have bootstrapped the data/task for that subnet) earn a residual for providing the “marketplace” or task design. Those percentages are fixed in protocol and aim to align everyone’s incentives toward high-quality AI output. The Yuma consensus mechanism further refines incentives by weighting rewards according to quality scores – a miner that provides better answers (as per validator consensus) gets a higher portion of that 41%, and a validator that closely follows honest consensus gets more of the validator portion. Poor performers get pruned out economically. Additionally, delegators (stakers) who back a miner or validator will typically receive a share of that node’s earnings (nodes often set a commission and give the rest to their delegators, similar to staking in PoS networks). This allows passive TAO holders to support the best contributors and earn yield, further reinforcing meritocracy. Bittensor’s token (TAO) is thus a utility token: it’s required for registration of new miners (miners must spend a small amount of TAO to join, which fights sybil spam) and can be staked to increase influence or earn via delegation. It is also envisioned as a payment token if external users want to consume services from Bittensor’s network (for instance, paying TAO to query a language model on Bittensor), though the internal reward mechanism has been the primary “economy” so far. The overall incentive philosophy is to reward “valuable intelligence” – i.e. models that help produce good AI outcomes – and to create a competition that continually improves the quality of models in the network.

  • Gensyn Governance: Gensyn’s governance model is structured to evolve from core-team control to community control as the network matures. Initially, Gensyn will have a Gensyn Foundation and an elected council that oversee protocol upgrades and treasury decisions. This council is expected to be composed of core team members and early community leaders at first. Gensyn plans a Token Generation Event (TGE) for its native token (often referred to as GENS), after which governance power would increasingly be in the hands of token holders via on-chain voting. The foundation’s role is to represent the protocol’s interests and ensure a smooth transition to full decentralization. In practice, Gensyn will likely have on-chain proposal mechanisms where changes to parameters (e.g., verification game length, fee rates) or upgrades are voted on by the community. Because Gensyn is being implemented as an Ethereum rollup, governance might also tie into Ethereum’s security (for example, using upgrade keys for the rollup contract that eventually turn over to a DAO of token holders). The decentralization and governance section of the Gensyn litepaper emphasizes that the protocol must ultimately be globally owned, aligning with the ethos that the “network for machine intelligence” should belong to its users and contributors. In summary, Gensyn’s governance starts semi-centralized but is architected to become a DAO where GENS token holders (potentially weighted by stake or participation) make decisions collectively.

  • Gensyn Incentives: The economic incentives in Gensyn are straightforward market dynamics supplemented by crypto-economic security. Developers (clients) pay for ML tasks in the Gensyn token, and Solvers earn tokens by completing those tasks correctly. The price for compute cycles is determined by an open market – presumably, developers can put tasks up with a bounty and solvers may bid or simply take it if the price meets their expectation. This ensures that as long as there is supply of idle GPUs, competition will drive the cost down to a fair rate (Gensyn’s team projects up to 80% cost reduction compared to cloud prices, as the network finds the cheapest available hardware globally). On the flip side, solvers have the incentive of earning tokens for work; their hardware that might otherwise sit idle now generates revenue. To ensure quality, Gensyn requires solvers to stake collateral when they take on a job – if they cheat or produce an incorrect result and are caught, they lose that stake (it can be slashed and awarded to the honest verifier). Verifiers are incentivized by the chance to earn a “jackpot” reward if they catch a fraudulent solver, similar to Truebit’s design of periodically rewarding verifiers who successfully identify incorrect computation. This keeps solvers honest and motivates some nodes to act as watchmen. In an optimal scenario (no cheating), solvers simply earn the task fee and the verifier role is mostly idle (or one of the participating solvers might double as a verifier on others). Gensyn’s token thus serves as both gas currency for purchasing compute and as stake collateral that secures the protocol. The litepaper mentions a testnet with non-permanent tokens and that early testnet participants will be rewarded at the TGE with real tokens. This indicates Gensyn allocated some token supply for bootstrapping – rewarding early adopters, test solvers, and community members. In the long run, fees from real jobs should sustain the network. There may also be a small protocol fee (a percentage of each task payment) that goes into a treasury or is burned; this detail isn’t confirmed yet, but many marketplace protocols include a fee to fund development or token buy-and-burn. In summary, Gensyn’s incentives align around honest completion of ML jobs: do the work, get paid; try to cheat, lose stake; verify others, earn if you catch cheats. This creates a self-policing economic system aimed at achieving reliable distributed computation.

  • Cuckoo Governance: Cuckoo Network built governance into its ecosystem from day one, though it is still in a developing phase. The $CAI token is explicitly a governance token in addition to its utility roles. Cuckoo’s philosophy is that GPU node operators, app developers, and even end users should have a say in the network’s evolution – reflecting its community-driven vision. In practice, important decisions (like protocol upgrades or economic changes) would be decided by token-weighted votes, presumably through a DAO mechanism. For example, Cuckoo could hold on-chain votes for changing the reward distribution or adopting a new feature, and $CAI holders (including miners, devs, and users) would vote. Already, on-chain voting is used as a reputation system: Cuckoo requires each role to stake tokens, and then community members can vote (perhaps by delegating stake or through governance modules) on which coordinators or miners are trustworthy. This affects reputation scores and could influence task scheduling (e.g., a coordinator with more votes might attract more users, or a miner with more votes might get assigned more tasks). It’s a blend of governance and incentive – using governance tokens to establish trust. The Cuckoo Foundation or core team has guided the project’s direction so far (for example, making the recent call to sunset the L1 chain), but their blog indicates a commitment to move towards decentralized ownership. They identified that running their own chain incurred high overhead and that pivoting to a rollup will allow more open development and integration with existing ecosystems. It’s likely that once on a shared layer (like Ethereum), Cuckoo will implement a more traditional DAO for upgrades, with the community voting using CAI.

  • Cuckoo Incentives: The incentive design for Cuckoo has two phases: the initial bootstrapping phase with fixed token allocations, and a future state with usage-driven revenue sharing. On launch, Cuckoo conducted a “fair launch” distribution of 1 billion CAI tokens. 51% of the supply was set aside for the community, allocated as:

    • Mining Rewards: 30% of total supply reserved to pay GPU miners for performing AI tasks.
    • Staking Rewards: 11% of supply for those who stake and help secure the network.
    • Airdrops: 5% to early users and community members as an adoption incentive.
    • (Another 5% was for developer grants to encourage building on Cuckoo.)

    This large allocation means that in the early network, miners and stakers were rewarded from an emission pool, even if actual user demand was low. Indeed, Cuckoo’s initial phase featured high APY yields for staking and mining, which successfully attracted participants but also “yield farmers” who were only there for tokens. The team noted that many users left once the reward rates fell, indicating those incentives were not tied to genuine usage. Having learned from this, Cuckoo is shifting to a model where rewards correlate directly with real AI workload. In the future (and partially already), when an end user pays for an AI inference, that payment (in CAI or possibly another accepted token converted to CAI) will be split among the contributors:

    • GPU miners will receive the majority share for the compute they provided.
    • Coordinator (app developer) will take a portion as the service provider who supplied the model and handled the request.
    • Stakers who have delegated to those miners or coordinators might get a small cut or inflationary reward, to continue incentivizing the backing of reliable nodes.
    • Network/Treasury might retain a fee for ongoing development or to fund future incentives (or the fee could be zero/nominal to maximize user affordability).

    Essentially, Cuckoo is moving toward a revenue-sharing model: if an AI app on Cuckoo generates earnings, those earnings are distributed to all contributors of that service in a fair way. This aligns incentives so that participants benefit from actual usage rather than just inflation. Already, the network required all parties to stake CAI – this means miners and coordinators earn not just a flat reward but also possibly stake-based rewards (for example, a coordinator might earn higher rewards if many users stake on them or if they themselves stake more, similar to how proof-of-stake validators earn). In terms of user incentives, Cuckoo also introduced things like an airdrop portal and faucets (which some users gamed) to seed initial activity. Going forward, users might be incentivized via token rebates for using the services or via governance rewards for participating in curation (e.g., maybe earning small tokens for rating outputs or contributing data). The bottom line is that Cuckoo’s token ($CAI) is multi-purpose: it is the gas/fee token on the chain (all transactions and payments use it), it’s used for staking and voting, and it’s the unit of reward for work done. Cuckoo explicitly mentions it wants to tie token rewards to service-level KPIs (key performance indicators) – for example, uptime, query throughput, user satisfaction – to avoid purely speculative incentives. This reflects a maturing of the token economy from simple liquidity mining to a more sustainable, utility-driven model.
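
As a rough sketch of such a usage-driven split, a single user payment could be divided among the contributors along the lines below. The exact percentages have not been published, so the numbers in this sketch are purely illustrative assumptions:

// Minimal sketch of the usage-driven revenue sharing described above: a user's
// payment for one inference is split between the GPU miner, the coordinator
// (app developer), stakers, and a network treasury. The percentages below are
// illustrative assumptions for the sketch, not Cuckoo's published values.
struct RevenueSplit {
    miner: f64,
    coordinator: f64,
    stakers: f64,
    treasury: f64,
}

fn split_payment(payment_cai: f64) -> RevenueSplit {
    RevenueSplit {
        miner: payment_cai * 0.70,       // majority share to the compute provider
        coordinator: payment_cai * 0.20, // service provider hosting the model
        stakers: payment_cai * 0.05,     // delegators backing those nodes
        treasury: payment_cai * 0.05,    // ongoing development / future incentives
    }
}

fn main() {
    let split = split_payment(10.0); // a 10 CAI inference request
    println!(
        "miner {:.2}, coordinator {:.2}, stakers {:.2}, treasury {:.2} CAI",
        split.miner, split.coordinator, split.stakers, split.treasury
    );
}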

Model Ownership and IP Attribution

Handling intellectual property (IP) and ownership rights of AI models is a complex aspect of decentralized AI networks. Each platform has taken a slightly different stance, and generally this is an evolving area with no complete solution yet:

  • Bittensor: Models in Bittensor are provided by the miner nodes, and those miners retain full control over their model weights (which are never published on-chain). Bittensor doesn’t explicitly track who “owns” a model beyond the fact that it’s running at a certain wallet address. If a miner leaves, their model leaves with them. Thus, IP attribution in Bittensor is off-chain: if a miner uses a proprietary model, there is nothing on-chain that enforces or even knows that. Bittensor’s philosophy encourages open contributions (many miners might use open-source models like GPT-J or others) and the network rewards the performance of those models. One could say Bittensor creates a reputation score for models (via the validator rankings), and that is a form of acknowledging the model’s value, but the rights to the model itself are not tokenized or distributed. Notably, subnet owners in Bittensor could be seen as owning a piece of IP: they define a task (which might include a dataset or method). The subnet owner mints an NFT (called a subnet UID) when creating a subnet, and that NFT entitles them to 18% of rewards in that subnet. This effectively tokenizes the creation of a model marketplace (the subnet), if not the model instances. If one considers the subnet’s definition (say a speech recognition task with a particular dataset) as IP, that is at least recorded and rewarded. But individual model weights that miners train – there’s no on-chain ownership record of those. Attribution comes in the form of rewards paid to that miner’s address. Bittensor does not currently implement a system where, for example, multiple people could jointly own a model and get automatic revenue share – the person running the model (miner) gets the reward and it’s up to them off-chain to honor any IP licenses of the model they used.

  • Gensyn: In Gensyn, model ownership is straightforward in that the submitter (the one who wants a model trained) provides the model architecture and data, and after training, they receive the resulting model artifact. The solvers performing the work do not have rights over the model; they are like contractors getting paid for service. Gensyn’s protocol thus assumes the traditional IP model: if you had legal rights to the model and data you submitted, you still have them after it’s trained – the compute network doesn’t claim any ownership. Gensyn does mention that the marketplace could also trade algorithms and data like any other resource. This hints at a scenario where someone could offer a model or algorithm for use in the network, possibly for a fee, thus tokenizing access to that model. For example, a model creator might put their pre-trained model on Gensyn and allow others to fine-tune it via the network for a fee (this would effectively monetize the model IP). While the protocol doesn’t enforce license terms, one could encode payment requirements: a smart contract could require a fee to unlock the model weights to a solver. However, these are speculative use cases – Gensyn’s primary design is about enabling training jobs. As for attribution, if multiple parties contribute to a model (say one provides data, another provides compute), that would likely be handled by whatever contract or agreement they set up before using Gensyn (e.g., a smart contract could split the payment among data provider and compute provider). Gensyn itself doesn’t track “this model was built by X, Y, Z” on-chain beyond the record of which addresses were paid for the job. Summarily, model IP in Gensyn remains with the submitter, and any attribution or licensing must be handled through the legal agreements outside the protocol or through custom smart contracts built on top of it.

  • Cuckoo: In Cuckoo’s ecosystem, model creators (AI app builders) are first-class participants – they deploy the AI service. If an app builder fine-tunes a language model or develops a custom model and hosts it on Cuckoo, that model is essentially their property and they act as the service owner. Cuckoo doesn’t seize any ownership; instead, it provides the infrastructure for them to monetize usage. For instance, if a developer deploys a chatbot AI, users can interact with it and the developer (plus miners) earn CAI from each interaction. The platform thus attributes usage revenue to the model creator but does not explicitly publish the model weights or turn them into an NFT. In fact, to run the model on miners’ GPUs, the coordinator node likely has to send the model (or runtime) to the miner in some form. This raises IP questions: could a malicious miner copy the model weights and distribute them? In a decentralized network, that risk exists if proprietary models are used. Cuckoo’s current focus has been on fairly open models (Stable Diffusion, LLaMA-derived models, etc.) and on building a community, so we haven’t yet seen an enforcement of IP rights via smart contracts. The platform could potentially integrate tools like encrypted model execution or secure enclaves in the future for IP protection, but nothing specific is mentioned in documentation. What it does track is who provided the model service for each task – since the coordinator is an on-chain identity, all usage of their model is accounted to them, and they automatically get their share of rewards. If one were to hand off or sell a model to someone else, effectively they’d transfer control of the coordinator node (perhaps even just give them the private key or NFT if the coordinator role was tokenized). At present, community ownership of models (via token shares) isn’t implemented, but Cuckoo’s vision hints at decentralized community-driven AI, so they may explore letting people collectively fund or govern an AI model. The tokenization of models beyond individual ownership is still an open area across these networks – it’s recognized as a goal (to let communities own AI models rather than corporations), but practically it requires solutions for the above IP and verification challenges.

In summary, model ownership in Bittensor, Gensyn, and Cuckoo is handled off-chain by traditional means: the person or entity running or submitting the model is effectively the owner. The networks provide attribution in the form of economic rewards (paying the model’s contributor for their IP or effort). None of the three has a built-in license or royalty enforcement on model usage at the smart contract level yet. The attribution comes through reputation and reward: e.g., Bittensor’s best models gain high reputation scores (which is public record) and more TAO, which is an implicit credit to their creators. Over time, we may see features like NFT-bound model weights or decentralized licenses to better track IP, but currently the priority has been on making the networks function and incentivize contributions. All agree that verifying model provenance and outputs is key to enabling true model asset markets, and research is ongoing in this direction.

Revenue Sharing Structures

All three platforms must decide how to divide the economic pie when multiple parties collaborate to produce a valuable AI output. Who gets paid, and how much, when an AI service is used or when tokens are emitted? Each has a distinct revenue sharing model:

  • Bittensor: As mentioned under incentives, Bittensor’s revenue distribution is protocol-defined at the block level: 41% to miners, 41% to validators, and 18% to the subnet owner for each block’s TAO issuance. This is effectively a built-in revenue split for the value generated in each subnet. The subnet owner’s share (18%) acts like a royalty for the “model/task design” or for bootstrapping that subnet’s ecosystem. Miners and validators getting equal shares ensures that without validation, miners don’t get rewarded (and vice versa) – they are symbiotic and each gets an equal portion of the rewards minted. If we consider an external user paying TAO to query a model, the Bittensor whitepaper envisions that payment also being split similarly between the miner who answers and the validators who helped vet the answer (the exact split could be determined by the protocol or market forces). Additionally, delegators who stake on miners/validators are effectively partners – typically, a miner/validator will share a percentage of their earned TAO with their delegators (this is configurable, but often the majority goes to delegators). So, if a miner earned 1 TAO from a block, that might be divided 80/20 between their delegators and themselves, for example, based on stake. This means even non-operators get a share of the network’s revenue proportional to their support. With the introduction of dTAO, another layer of sharing was added: those who stake into a subnet’s pool get alpha tokens, which entitle them to some of that subnet’s emissions (like yield farming). In effect, anyone can take a stake in a particular subnet’s success and receive a fraction of miner/validator rewards via holding alpha tokens (alpha tokens appreciate as the subnet attracts more usage and emissions). To sum up, Bittensor’s revenue sharing is fixed by code for the main roles, and further shared by social/staking arrangements. It’s a relatively transparent, rule-based split – every block, participants know exactly how the 1 TAO is allocated, and thus know their “earnings” per contribution. This clarity is one reason Bittensor is sometimes likened to Bitcoin for AI – a deterministic monetary issuance where participants’ reward is mathematically set. (A toy numerical example of this split appears in the sketch after this list.)

  • Gensyn: Revenue sharing in Gensyn is more dynamic and market-driven, since tasks are individually priced. When a submitter creates a job, they attach a reward (say X tokens) they are willing to pay. A solver who completes the job gets that X (minus any network fee). If a verifier is involved, typically there is a rule such as: if no fraud is detected, the solver keeps the full payment; if fraud is detected, the solver is slashed – losing some or all of their stake – and that slashed amount is given to the verifier as a reward. So verifiers don’t earn from every task, only when they catch a bad result (plus possibly a small baseline fee for participating, depending on implementation). There isn’t a built-in concept of paying a model owner here because the assumption is that the submitter either is the model owner or has rights to use the model. One could imagine a scenario where a submitter is fine-tuning someone else’s pretrained model and a portion of the payment goes to the original model creator – but that would have to be handled off-protocol (e.g., by an agreement or a separate smart contract that splits the token payment accordingly). Gensyn’s protocol-level sharing is essentially client -> solver (-> verifier). The token model likely includes some allocation for the protocol treasury or foundation; for instance, a small percentage of every task’s payment might go to a treasury which could be used to fund development or insurance pools (this is not explicitly stated in available docs, but many protocols do it). Also, early on, Gensyn may subsidize solvers via inflation: testnet users are promised rewards at TGE, which is effectively revenue share from the initial token distribution (early solvers and supporters get a chunk of tokens for helping bootstrap, akin to an airdrop or mining reward). Over time, as real jobs dominate, inflationary rewards would taper, and solver income would mainly come from user payments. Gensyn’s approach can be summarized as a fee-for-service revenue model: the network facilitates a direct payment from those who need work done to those who do the work, with verifiers and possibly token stakers taking cuts only when they play a role in securing that service. (The sketch after this list includes a toy version of this settlement flow.)

  • Cuckoo: Cuckoo’s revenue sharing has evolved. Initially, because there weren’t many paying end users, revenue sharing was essentially inflation sharing: the 30% mining and 11% staking allocations from the token supply meant that miners and stakers were sharing the tokens issued by the network’s fair launch pool. In practice, Cuckoo ran things like daily CAI payouts to miners proportional to tasks completed. Those payouts largely came from the mining reward allocation (which is part of the fixed supply reserved). This is similar to how many Layer-1 blockchains distribute block rewards to miners/validators – it wasn’t tied to actual usage by external users; it was more to incentivize participation and growth. However, as highlighted in their July 2025 blog, this led to usage that was incentivized by token farming rather than real demand. The next stage for Cuckoo is a true revenue-sharing model based on service fees. In this model, when an end user uses, say, the image generation service and pays $1 (in crypto terms), that $1 worth of tokens might be split roughly as follows: $0.70 to the miner who did the GPU work, $0.20 to the app developer (coordinator) who provided the model and interface, and $0.10 to stakers or the network treasury. (Note: the exact ratios are hypothetical; Cuckoo has not publicly specified them yet, but this illustrates the concept.) This way, all contributors to delivering the service get a cut of the revenue. This is loosely analogous to a ride-sharing economy, but for AI: the GPU miner (the “vehicle”) gets the majority, the coordinator who built the model service (the “driver”/platform) gets a cut, and the platform’s governance/stakers may take a small fee. Cuckoo’s mention of “revenue-share models and token rewards tied directly to usage metrics” suggests that if a particular service or node handles a lot of volume, its operators and supporters will earn more. They are moving away from flat yields for just locking tokens (which was the case with their staking APY initially). In concrete terms: if you stake on a coordinator that ends up powering a very popular AI app, you could earn a portion of that app’s fees – a true staking-as-investing-in-utility scenario, rather than staking just for inflation. This aligns everyone’s incentives toward attracting real users who pay for AI services, which in turn feeds value back to token holders. It’s worth noting that Cuckoo’s chain also charged transaction fees (gas), so miners who produced blocks (initially, GPU miners also contributed to block production on the Cuckoo chain) earned gas fees too. With the chain shut down and the migration to a rollup, gas fees will likely be minimal (or paid on Ethereum), so the main revenue becomes the AI service fees themselves. In summary, Cuckoo is transitioning from a subsidy-driven model (the network pays participants from its token pool) to a demand-driven model (participants earn from actual user payments). The token will still play a role in staking and governance, but the day-to-day earnings of miners and app devs should increasingly come from users buying AI services. This model is more sustainable long-term and closely mirrors Web2 SaaS revenue sharing, but implemented via smart contracts and tokens for transparency.
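
To make the three revenue-sharing flows concrete, here is a minimal Python sketch of each, using the figures stated above: Bittensor’s 41/41/18 block split (with a hypothetical 80/20 delegator/operator arrangement), Gensyn’s pay-per-job settlement with slashing, and Cuckoo’s illustrative 70/20/10 usage-fee split. The function names, the network-fee parameter, and the refund-on-fraud behavior are assumptions made for this sketch, not documented protocol rules.

```python
# Toy sketches of the three revenue-sharing flows described above.
# All function names and parameters are illustrative; none come from protocol code.

def bittensor_block_split(block_emission=1.0, delegator_share=0.80):
    """Bittensor: protocol-fixed 41/41/18 split of each block's TAO issuance,
    plus a hypothetical 80/20 delegator/operator arrangement for one miner."""
    miners_pool = 0.41 * block_emission
    return {
        "miners": miners_pool,
        "validators": 0.41 * block_emission,
        "subnet_owner": 0.18 * block_emission,
        # If one miner earned the whole miners' pool this block:
        "example_delegator_take": miners_pool * delegator_share,
        "example_miner_take": miners_pool * (1 - delegator_share),
    }

def gensyn_settlement(reward, solver_stake, fraud_proven, network_fee=0.0):
    """Gensyn: pay-per-job settlement. The solver keeps the reward unless a
    verifier proves fraud, in which case the solver's stake is slashed and
    awarded to the verifier. (The refund to the submitter on fraud is an
    assumption for this sketch, not a documented rule.)"""
    if fraud_proven:
        return {"solver": 0.0, "verifier": solver_stake, "submitter_refund": reward}
    return {"solver": reward - network_fee, "verifier": 0.0, "submitter_refund": 0.0}

def cuckoo_fee_split(user_payment):
    """Cuckoo: hypothetical usage-fee split among miner, coordinator, and stakers;
    the 70/20/10 ratio is the illustrative example from the text above."""
    return {
        "gpu_miner": 0.70 * user_payment,
        "coordinator": 0.20 * user_payment,
        "stakers_or_treasury": 0.10 * user_payment,
    }

print(bittensor_block_split())
print(gensyn_settlement(reward=100.0, solver_stake=50.0, fraud_proven=True))
print(cuckoo_fee_split(1.0))
```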

Attack Surfaces and Vulnerabilities

Decentralizing AI introduces several incentive and security challenges. We now analyze key attack vectors – sybil attacks, collusion, freeloading, and data/model poisoning – and how each platform mitigates or remains vulnerable to them:

  • Sybil Attacks (fake identities): In an open network, an attacker might create many identities (nodes) to gain disproportionate rewards or influence.

    • Bittensor: Sybil resistance is provided primarily by the cost of entry. To register a new miner or validator on Bittensor, one must spend or stake TAO – this could be a burn or a bonding requirement. This means creating N fake nodes incurs N times the cost, making large sybil swarms expensive. Additionally, Bittensor’s consensus ties influence to stake and performance; a sybil with no stake or poor performance earns little. An attacker would have to invest heavily and also have their sybil nodes actually contribute useful work to get any significant reward (which is not a typical sybil strategy). That said, if an attacker does have a lot of capital, they could acquire a majority of TAO and register many validators or miners – effectively a sybil by wealth. This overlaps with the 51% attack scenario: if a single entity controls >50% of staked TAO in a subnet, they can heavily sway consensus. Bittensor’s dTAO introduction helps a bit here: it spreads out influence across subnets and requires community staking support for subnets to thrive, making it harder for one entity to control everything. Still, sybil attacks by a well-funded adversary remain a concern – the arXiv analysis explicitly notes that stake is quite concentrated now, so the barrier to a majority attack isn’t as high as desired. To mitigate this, proposals like stake caps per wallet (e.g. capping effective stake at the 88th percentile to prevent any one wallet from dominating) have been suggested. In summary, Bittensor relies on stake-weighted identity (you can’t cheaply spawn identities without proportional stake) to handle sybils; it’s reasonably effective except against a very resourceful attacker.
    • Gensyn: Sybil attacks in Gensyn would manifest as an attacker spinning up many solver or verifier nodes to game the system. Gensyn’s defense is purely economic and cryptographic – identities per se don’t matter, but doing work or posting collateral does. If an attacker creates 100 fake solver nodes but they have no jobs or no stake, they achieve nothing. To win tasks, a sybil node would have to bid competitively and have the hardware to do the work. If they underbid without capacity, they’ll fail and lose stake. Similarly, an attacker could create many verifier identities hoping to be chosen to verify (if the protocol randomly selects verifiers). But if there are too many, the network or job poster might limit the number of active verifiers. Also, verifiers need to potentially perform the computation to check it, which is costly; having many fake verifiers doesn’t help unless you can actually verify results. A more pertinent sybil angle in Gensyn is if an attacker tries to fill the network with bogus jobs or responses to waste others’ time. That is mitigated by requiring deposit from submitters too (a malicious submitter posting fake jobs loses their payment or deposit). Overall, Gensyn’s use of required stakes/bonds and random selection for verification means an attacker gains little by having multiple identities unless they also bring proportional resources. It becomes a costlier attack rather than a cheap one. The optimistic security model assumes at least one honest verifier – sybils would have to overwhelm and be all the verifiers to consistently cheat, which again circles back to owning a majority of stake or computing power. Gensyn’s sybil resistance is thus comparable to an optimistic rollup’s: as long as there’s one honest actor, sybils can’t cause systemic harm easily.
    • Cuckoo: Sybil attack prevention in Cuckoo relies on staking and community vetting. Every role in Cuckoo (miner, coordinator, even user in some cases) requires staking $CAI. This immediately raises the cost of sybil identities – an attacker making 100 dummy miners would need to acquire and lock stake for each. Moreover, Cuckoo’s design has a human/community element: new nodes need to earn reputation via on-chain voting. A sybil army of fresh nodes with no reputation is unlikely to be assigned many tasks or trusted by users. Coordinators in particular have to attract users; a fake coordinator with no track record wouldn’t get usage. For miners, coordinators can see their performance stats (successful tasks, etc.) on Cuckoo Scan and will prefer reliable miners. Cuckoo also had a relatively small number of miners (40 GPUs at one point in beta), so any odd influx of many nodes would be noticeable. The potential weak point is if the attacker also farms the reputation system – e.g., they stake a lot of CAI on their sybil nodes to make them look reputable or create fake “user” accounts to upvote themselves. This is theoretically possible, but since it’s all token-curated, it costs tokens to do so (you’d essentially be voting with your own stake on your own nodes). The Cuckoo team can also adjust the staking and reward parameters if sybil behavior is observed (especially now that it’s becoming a more centralized rollup service; they can pause or slash bad actors). All told, sybils are kept at bay by requiring skin in the game (stake) and needing community approval. No one can just waltz in with hundreds of fake GPUs and reap rewards without a significant investment that honest participants could better spend on real hardware and stake.
  • Collusion: Here we consider multiple participants colluding to game the system – for example, validators and miners colluding in Bittensor, or solvers and verifiers colluding in Gensyn, etc.

    • Bittensor: Collusion has been identified as a real concern. In the original design, a handful of validators could collude to always upvote certain miners or themselves, skewing reward distribution unfairly (this was observed as power concentration in the root subnet). The Yuma consensus provides some defense: by taking a median of validator scores and penalizing those that deviate, it prevents a small colluding group from dramatically boosting a target unless they are the majority. In other words, if 3 out of 10 validators collude to give a miner a super high score but the other 7 do not, the colluders’ outlier scores get clipped and the miner’s reward is based on the median score (so collusion fails to significantly help; a toy numerical sketch of this clipping rule appears after this list). However, if the colluders form >50% of the validators (or >50% of stake among validators), they effectively are the consensus – they can agree on false high scores and the median will reflect their view. This is the classic 51% attack scenario. Unfortunately, the arXiv study found some Bittensor subnets where a coalition of just 1–2% of participants (in terms of count) controlled a majority of stake, due to heavy token concentration. This means collusion by a few big holders was a credible threat. The mitigation Bittensor is pursuing via dTAO is to democratize influence: by letting any TAO holder direct stake to subnets, it dilutes the power of closed validator groups. Also, proposals like concave staking (diminishing returns for outsized stake) and stake caps are aimed at breaking the ability of one colluding entity to gather too much voting power. Bittensor’s security assumption is now similar to proof-of-stake: no single entity (or cartel) controlling >50% of active stake. As long as that holds, collusion is limited because honest validators will override bad scoring and colluding subnet owners can’t arbitrarily boost their own rewards. Finally, on collusion between subnet owners and validators (e.g., a subnet owner bribing validators to rate their subnet’s miners highly), dTAO removes direct validator control, replacing it with token-holder decisions. It’s harder to collude with “the market” unless you buy out the token supply – in which case it’s not really collusion, it’s a takeover. So Bittensor’s main anti-collusion techniques are algorithmic consensus (median clipping) and broad token distribution.

    • Gensyn: Collusion in Gensyn would likely involve a solver and verifier (or multiple verifiers) colluding to cheat the system. For instance, a solver could produce a fake result and a colluding verifier could intentionally not challenge it (or even attest that it’s correct, if the protocol asked verifiers to sign off). To mitigate this, Gensyn’s security model requires at least one honest verifier. If all verifiers are colluding with the solver, then a bad result goes unchallenged. Gensyn addresses this by encouraging many independent verifiers (anyone can verify) and by the game theory that a verifier could earn a large reward by breaking from the collusion and challenging (because they’d get the solver’s stake). Essentially, even if there’s a group agreeing to collude, each member has an incentive to defect and claim the bounty for themselves – a classic Prisoner’s Dilemma setup. The hope is that this keeps collusion groups small or ineffective. Another potential collusion is between multiple solvers to bid up prices or monopolize tasks. However, since developers can choose where to post tasks (and tasks are not identical units that can be monopolized easily), solver collusion on price would be hard to coordinate globally – any non-colluding solver could underbid to win the work. The open market dynamic counters pricing collusion, assuming at least some competitive participants. One more angle: verifier collusion to grief solvers – e.g., verifiers falsely accusing honest solvers to steal their stake. Gensyn’s fraud proof is binary and on-chain; a false accusation would fail when the on-chain re-computation finds no error, and presumably the malicious verifier would then lose something (perhaps a deposit or reputation). So a collusion of verifiers trying to sabotage solvers would be caught by the protocol’s verification process. In summary, Gensyn’s architecture is robust as long as at least one party in any colluding set has an incentive to be honest – a property of optimistic verification, akin to needing only one honest watcher to eventually expose a fraud. Collusion is theoretically possible if an attacker could control all verifiers and solvers in a task (like a majority of the network), but then they could just cheat without needing collusion per se. The cryptoeconomic incentives are arranged to make sustaining collusion irrational.

    • Cuckoo: Collusion in Cuckoo could happen in a few ways:

      1. A coordinator colluding with miners – for example, a coordinator could always assign tasks to a set of friendly miners and split rewards, ignoring other honest miners. Since coordinators have discretion in task scheduling, this can happen. However, if the friendly miners are subpar, end users might notice slow or poor service and leave, so the coordinator is disincentivized from pure favoritism that hurts quality. If the collusion is to manipulate rewards (say, submitting fake tasks to give miners tokens), that would be detected on-chain (e.g., many tasks with identical inputs or no actual user) and can be penalized. Cuckoo’s on-chain transparency means any unusual patterns could be flagged by the community or the core team. Also, because all participants stake, a colluding coordinator-miner ring stands to lose their stake if caught abusing the system (for instance, if governance decides to slash them for fraud).
      2. Miners colluding among themselves – they might share information or form a cartel to, say, all vote for each other in reputation or all refuse to serve a particular coordinator to extract higher fees. These scenarios are less likely: reputation voting is done by stakers (including users), not by the miners themselves voting for each other. And refusing service would only drive coordinators to find other miners or raise alarms. Given the relatively small scale currently, any collusion would be hard to hide.
      3. Collusion to manipulate governance – large CAI holders could collude to pass proposals in their favor (like setting an exorbitant fee or redirecting the treasury). This is a risk in any token governance. The best mitigation is widely distributing the token (Cuckoo’s fair launch gave 51% to community) and having active community oversight. Also, since Cuckoo pivoted away from L1, immediate on-chain governance might be limited until they resettle on a new chain; the team likely retains a multisig control in the interim, which ironically prevents collusion by malicious outsiders at the expense of being centralized temporarily. Overall, Cuckoo leans on transparency and staking to handle collusion. There is an element of trust in coordinators to behave because they want to attract users in a competitive environment. If collusion leads to poorer service or obvious reward gaming, stakeholders can vote out or stop staking on bad actors, and the network can slash or block them. The fairly open nature (anyone can become a coordinator or miner if they stake) means collusion would require a large coordinated effort that would be evident. It’s not as mathematically prevented as in Bittensor or Gensyn, but the combination of economic stake and community governance provides a check.
  • Freeloading (Free-rider problems): This refers to participants trying to reap rewards without contributing equivalent value – e.g., a validator that doesn’t actually evaluate but still earns, or a miner who copies others’ answers instead of computing, or users farming rewards without providing useful input.

    • Bittensor: A known free-rider issue in Bittensor is “weight copying” by lazy validators. A validator could simply copy the majority opinion (or another validator’s scores) instead of independently evaluating miners. By doing so, they avoid the cost of running AI queries but still get rewards if their submitted scores look consensus-aligned. Bittensor combats this by measuring each validator’s consensus alignment and informational contribution. If a validator always just copies others, they may align well (so they don’t get penalized heavily), but they add no unique value. The protocol developers have discussed giving higher rewards to validators that provide accurate but not purely redundant evaluations. Techniques like noise infusion (deliberately giving validators slightly different queries) could force them to actually work rather than copy – though it’s unclear if that’s implemented. The arXiv analysis suggests performance-weighted emission and composite scoring methods to better link validator effort to reward. As for miners, one possible free-rider behavior would be a miner querying other miners and relaying the answer (a form of plagiarism). Bittensor’s design (with decentralized queries) might allow a miner’s model to call others via its own dendrite. If a miner just relays another’s answer, a good validator might catch it because the answer might not match the miner’s claimed model capabilities consistently. This is tricky to detect algorithmically, but a miner that never computes original results should eventually score poorly on some queries and lose reputation. Another free-rider scenario is delegators earning rewards without doing AI work. That is intentional (to involve token holders), so it is not an attack – but it does mean some token emissions go to people who only staked. Bittensor justifies this as aligning incentives, not wasted rewards. In short, Bittensor acknowledges the validator free-rider issue and is tuning incentives (like giving validators trust scores that boost those who don’t stray or copy). Their solution is essentially rewarding effort and correctness more explicitly, so that doing nothing or blindly copying yields less TAO over time.
    • Gensyn: In Gensyn, free-riders would find it hard to earn, because one must either provide compute or catch someone cheating to get tokens. A solver cannot “fake” work – they have to submit either a valid proof or risk slashing. There is no mechanism to get paid without doing the task. A verifier could theoretically sit idle and hope others catch frauds – but then they earn nothing (because only the one who raises the fraud proof gets the reward). If too many verifiers try to free-ride (not actually re-compute tasks), then a fraudulent solver might slip through because no one is checking. Gensyn’s incentive design addresses this by the jackpot reward: it only takes one active verifier to catch a cheat and get a big payout, so it’s rational for at least one to always do the work. Others not doing work don’t harm the network except by being useless; they also get no reward. So the system naturally filters out free-riders: only those verifiers who actually verify will make profit in the long run (others spend resources on nodes for nothing or very rarely snag a reward by chance). The protocol might also randomize which verifier gets the opportunity to challenge to discourage all verifiers from assuming “someone else will do it.” Since tasks are paid individually, there isn’t an analog of “staking rewards without work” aside from testnet incentives which are temporary. One area to watch is multi-task optimization: a solver might try to re-use work between tasks or secretly outsource it to someone cheaper (like use a centralized cloud) – but that’s not really harmful freeloading; if they deliver correct results on time, it doesn’t matter how they did it. That’s more like arbitrage than an attack. In summary, Gensyn’s mechanism design leaves little room for freeloaders to gain, because every token distributed corresponds to a job done or a cheat punished.
    • Cuckoo: Cuckoo’s initial phase inadvertently created a free-rider issue: the airdrop and high-yield staking attracted users who were only there to farm tokens. These users would cycle tokens through faucets or game the airdrop tasks (for example, continuously using free test prompts or creating many accounts to claim rewards) without contributing to long-term network value. Cuckoo recognized this as a problem – essentially, people were “using” the network not for AI output but for speculative reward gain. The decision to end the L1 chain and refocus was partly to shake off these incentive misalignments. By tying future token rewards to actual usage (i.e., you earn because the service is actually being used by paying customers), the free-rider appeal diminishes. There is also a miner-side freeloading scenario: a miner could join, get assigned tasks, and somehow not perform them but still claim the reward. However, the coordinator verifies results – if a miner returns no output or bad output, the coordinator won’t count it as a completed task, so the miner wouldn’t get paid. Miners might also try to cherry-pick easy tasks and drop hard ones (for instance, if some prompts are slower, a miner might disconnect to avoid them). This could be an issue, but coordinators can note a miner’s reliability. If a miner frequently drops, the coordinator can stop assigning to them or slash their stake (if such a mechanism exists), or simply not reward them. User freeloading – since many AI services have free trials, a user could spam requests to get outputs without paying (if there’s a subsidized model). That’s not so much a protocol-level issue as a service-level one; each coordinator can decide how to handle free usage (e.g., requiring a small payment or throttling). Because Cuckoo initially gave out freebies (like free AI image generations to attract users), some took advantage, but that was part of expected growth marketing. As those promotions end, users will have to pay, so there is no free lunch left to exploit. Overall, Cuckoo’s new strategy of mapping token distribution to real utility is explicitly aimed at eliminating the free-rider problem of “mining tokens for doing meaningless loops”.
  • Data or Model Poisoning: This refers to maliciously introducing bad data or behaviors such that the AI models degrade or outputs are manipulated, as well as issues of harmful or biased content being contributed.

    • Bittensor: Data poisoning in Bittensor would mean a miner intentionally giving incorrect or harmful answers, or validators purposefully mis-evaluating good answers as bad. If a miner outputs garbage or malicious content consistently, validators will give low scores, and that miner will earn little and eventually drop off – the economic incentive is to provide quality, so “poisoning” others yields no benefit to the attacker (unless their goal is purely sabotage at their own expense). Could a malicious miner poison others? In Bittensor, miners don’t directly train each other (at least not by design – there’s no global model being updated that could be poisoned). Each miner’s model is separate. They do learn in the sense that a miner could take interesting samples from others to fine-tune themselves, but that’s entirely optional and up to each. If a malicious actor spammed nonsense answers, honest validators would filter that out (they’d score it low), so it wouldn’t significantly influence any honest miner’s training process (plus, a miner would likely use high-scoring peers’ knowledge, not low-scoring ones). So classical data poisoning (injecting bad training data to corrupt a model) is minimal in Bittensor’s current setup. The more relevant risk is model response manipulation: e.g., a miner that outputs subtly biased or dangerous content that is not obvious to validators. However, since validators are also human-designed or at least algorithmic agents, blatant toxicity or error is likely caught (some subnets might even have AI validators checking for unsafe content). A worst-case scenario is if an attacker somehow had a majority of validators and miners colluding to push a certain incorrect output as “correct” – they could then bias the network’s consensus on responses (like all colluding validators upvote a malicious answer). But for an external user to be harmed by that, they’d have to actually query the network and trust the output. Bittensor is still in a phase where it’s building capability, not widely used for critical queries by end-users. By the time it is, one hopes it will have content filtering and diversity of validators to mitigate such risks. On the validator side, a malicious validator could feed poisoned evaluations – e.g., consistently downvote a certain honest miner to eliminate competition. With enough stake, they might succeed in pushing that miner out (if the miner’s rewards drop so low they leave). This is an attack on the incentive mechanism. Again, if they are not majority, the median clipping will thwart an outlier validator. If they are majority, it merges with the collusion/51% scenario – any majority can rewrite rules. The solution circles back to decentralization: keep any one entity from dominating. In summary, Bittensor’s design inherently penalizes poisoned data/model contributions via its scoring system – bad contributions get low weight and thus low reward. There isn’t a permanent model repository to poison; everything is dynamic and continuously evaluated. This provides resilience: the network can gradually “forget” or ignore bad actors as their contributions are filtered out by validators.
    • Gensyn: If a solver wanted to poison a model being trained (like introduce a backdoor or bias during training), they could try to do so covertly. The Gensyn protocol would verify that the training proceeded according to the specified algorithm (stochastic gradient descent steps, etc.), but it wouldn’t necessarily detect if the solver introduced a subtle backdoor trigger that doesn’t show up in normal validation metrics. This is a more insidious problem – it’s not a failure of the computation, it’s a manipulation within the allowed degrees of freedom of training (like adjusting weights towards a trigger phrase). Detecting that is an active research problem in ML security. Gensyn doesn’t have a special mechanism for model poisoning beyond the fact that the submitter could evaluate the final model on a test set of their choosing. A savvy submitter should always test the returned model; if they find it fails on some inputs or has odd behavior, they may dispute the result or refuse payment. Perhaps the protocol could allow a submitter to specify certain acceptance criteria (like “model must achieve at least X accuracy on this secret test set”) and if the solver’s result fails, the solver doesn’t get paid in full. This would deter poisoning because the attacker wouldn’t meet the eval criteria. However, if the poison doesn’t impact accuracy on normal tests, it could slip through. Verifiers in Gensyn only check computation integrity, not model quality, so they wouldn’t catch intentional overfitting or trojans as long as the training logs look valid. So, this remains a trust issue at the task level: the submitter has to trust either that the solver won’t poison the model or use methods like ensembling multiple training results from different solvers to dilute any single solver’s influence. Another angle is data poisoning: if the submitter provides training data, a malicious solver could ignore that data and train on something else or add garbage data. But that would likely reduce accuracy, which the submitter would notice in the output model’s performance. The solver would then not get full payment (since presumably they want to meet a performance target). So poisoning that degrades performance is self-defeating for the solver’s reward. Only a poison that is performance-neutral but malicious (a backdoor) is a real danger, and that is outside the scope of typical blockchain verification – it’s a machine learning security challenge. Gensyn’s best mitigation is likely social: use known reputable models, have multiple training runs, use open source tools. On inference tasks (if Gensyn is also used for inference jobs), a colluding solver could return incorrect outputs that bias a certain answer. But verifiers would catch wrong outputs if they run the same model, so that’s less a poisoning and more just cheating, which the fraud proofs address. To sum up, Gensyn secures the process, not the intent. It ensures the training/inference was done correctly, but not that the result is good or free of hidden nastiness. That remains an open problem, and Gensyn’s whitepaper likely doesn’t fully solve that yet (few do).
    • Cuckoo: Since Cuckoo currently focuses on inference (serving existing models), the risk of data/model poisoning is relatively limited to output manipulation or content poisoning. A malicious miner might try to tamper with the model they are given to run – e.g., if provided a Stable Diffusion checkpoint, they could swap it with a different model that perhaps inserts some subtle watermark or advertisement into every image. However, the coordinator (who is the model owner) typically sends tasks with an expectation of the output format; if a miner returns off-spec outputs consistently, the coordinator will flag and ban that miner. Also, miners can’t easily modify a model without affecting its outputs noticeably. Another scenario is if Cuckoo introduces community-trained models: then miners or data providers might try to poison the training data (for example, feed in lots of wrong labels or biased text). Cuckoo would need to implement validation of crowd-sourced data or weighting of contributors. This isn’t yet a feature, but the team’s interest in personalized AI (like their mention of AI life coach or learning apps) means they might eventually handle user-provided training data, which will require careful checks. On content safety, since Cuckoo miners perform inference, one could worry about them outputting harmful content even if the model wouldn’t normally. But miners don’t have an incentive to alter outputs arbitrarily – they are paid for correct computation, not creativity. If anything, a malicious miner might skip doing the full computation to save time (e.g., return a blurry image or a generic response). The coordinator or user would see that and downrate that miner (and likely not pay for that task). Privacy is another facet: a malicious miner might leak or log user data (like if a user input sensitive text or images). This isn’t poisoning, but it’s an attack on confidentiality. Cuckoo’s privacy stance is that it’s exploring privacy-preserving methods (mention of a privacy-preserving VPN in the ecosystem suggests future focus). They could incorporate techniques like secure enclaves or split inference to keep data private from miners. Not implemented yet, but a known consideration. Finally, Cuckoo’s blog emphasizes verifying model outputs effectively and ensuring secure decentralized model operation as key to making model tokenization viable. This indicates they are aware that to truly decentralize AI, one must guard against things like poisoned outputs or malfunctioning models. Possibly they intend to use a combination of cryptoeconomic incentives (stake slash for bad actors) and user rating systems (users can flag bad outputs, and those miners lose reputation). The reputation system can help here: if a miner returns even one obviously malicious or incorrect result, users/coordinators can downvote them, heavily impacting their future earning ability. Knowing this, miners are incentivized to be consistently correct and not slip in any poison. In essence, Cuckoo relies on trust but verify: it’s more traditional in that if someone misbehaves, you identify and remove them (with stake loss as punishment). It doesn’t yet have specialized defenses for subtle model poisoning, but the structure of having specific app owners (coordinators) in charge adds a layer of supervision – those owners will be motivated to ensure nothing compromises their model’s integrity, as their own revenue and reputation depend on it.
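
As a rough numerical illustration of two defenses discussed above – the stake-weighted median clipping behind Yuma’s outlier resistance and the percentile-based stake cap proposed in the research cited earlier – here is a toy Python sketch. It is a simplified model of the idea, not Bittensor’s actual implementation, and the example numbers are invented.

```python
# Toy illustration of stake-weighted median clipping and a percentile stake cap.
# This is a simplified model of the ideas discussed above, not the actual Yuma code;
# the example numbers are invented.

def weighted_median(scores, stakes):
    """Return the stake-weighted median of validator scores for one miner."""
    pairs = sorted(zip(scores, stakes))
    total = sum(stakes)
    cumulative = 0.0
    for score, stake in pairs:
        cumulative += stake
        if cumulative >= total / 2:
            return score
    return pairs[-1][0]

def cap_stakes(stakes, percentile=0.88):
    """Cap each stake at the given percentile of the stake distribution
    (an illustration of the proposed mitigation, not a live parameter)."""
    ordered = sorted(stakes)
    cap = ordered[min(int(percentile * len(ordered)), len(ordered) - 1)]
    return [min(s, cap) for s in stakes]

# One whale validator (stake 200) reports an inflated score of 0.99 for a miner,
# while nine honest validators (stake 8 each) report scores around 0.40.
scores = [0.99, 0.41, 0.40, 0.39, 0.40, 0.42, 0.38, 0.40, 0.41, 0.40]
stakes = [200.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0]

print(weighted_median(scores, stakes))               # 0.99 – the whale dictates the score
print(weighted_median(scores, cap_stakes(stakes)))   # 0.40 – the cap restores the honest median
```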

In conclusion, while decentralized AI networks introduce new attack surfaces, they also deploy a mix of cryptographic, game-theoretic, and community-governance defenses: Sybil resistance is largely handled by requiring economic stake for participation. Collusion resistance comes from alignment of incentives (honest behavior is more profitable) and consensus mechanisms that limit the impact of small colluding groups. Free-rider prevention is achieved by closely tying rewards to actual useful work and penalizing or eliminating those who contribute nothing. Poisoning and related attacks remain challenging, but the systems mitigate blatant cases via continuous evaluation and the ability to slash or eject malicious actors. These platforms are actively researching and iterating on these designs – as evidenced by Bittensor’s ongoing tweaks to Yuma and dTAO, and Cuckoo’s shift in tokenomics – to ensure a secure, self-sustaining decentralized AI ecosystem.

Comparative Evaluation

To highlight the differences and similarities of Bittensor, Gensyn, and Cuckoo AI, the following table provides a side-by-side comparison across key dimensions:

| Dimension | Bittensor (TAO) | Gensyn | Cuckoo AI (CAI) |
| --- | --- | --- | --- |
| Technical Stack | Custom L1 (Substrate-based Subtensor chain) with 93+ specialized AI subnets. EVM-compatible (after a recent upgrade) on its own chain. | Ethereum-based rollup (originally planned as an L1, now an ETH rollup). Off-chain compute with on-chain verification. | Launched as an Arbitrum Orbit Layer-2 chain (EVM rollup). Full-stack platform (own chain + compute + app UI). Migrating from a custom L1 to Ethereum shared security (rollup/AVS). |
| Primary Focus | Decentralized AI network of models (“neural internet”). Nodes contribute to collective model inference and training across tasks (LLM, vision, etc.). | Decentralized compute marketplace for ML. Emphasis on off-chain model training and inference by global GPUs, verifying the work via blockchain. | Decentralized AI service platform. Focus on model serving/inference (e.g. generative art, LLM APIs) using distributed GPU miners. Integrates end-user applications with a backend GPU marketplace. |
| Key Roles | Subnet Owner: defines task & validation in a subnet (earns 18% of rewards).<br/>Miners: run AI models (inference/training), provide answers.<br/>Validators: pose queries & score miners’ outputs (curate quality).<br/>Delegators: stake TAO on miners/validators to amplify their weight and earn a share. | Submitter (Developer): posts an ML job (with model/data) and payment.<br/>Solver: computes the task on their hardware, submits the result.<br/>Verifier (Watcher): checks the solver’s result; can challenge via fraud proof if wrong.<br/>(No distinct “owner” role since the submitter provides the model; governance roles via token holders.) | AI App Builder (Coordinator): deploys the AI model service, stakes CAI, manages tasks to miners.<br/>Miner (GPU/CPU Provider): stakes CAI, performs assigned inference tasks, returns results.<br/>End User: uses AI apps (pays in crypto or contributes resources).<br/>Staker (Delegator): stakes on coordinators/miners, votes in governance, earns a share of rewards. |
| Consensus & Verification | Yuma Consensus: custom “proof-of-intelligence” – validators’ scores of AI output are aggregated (stake-weighted median) to determine miner rewards. The underlying chain consensus is PoS-like (Substrate) for blocks, but block validity hinges on the AI consensus each epoch. Resistant to outlier scoring and collusion up to 50% of stake. | Optimistic verification (Truebit-style): assume the solver’s result is correct unless a verifier challenges. Uses interactive on-chain fraud proofs to pinpoint any incorrect step. Also implementing cryptographic proofs-of-computation (proof-of-learning) to validate training progress without re-execution. Ethereum provides base consensus for transactions. | Proof-of-Stake chain + task validation by coordinators: the Cuckoo Chain used PoS validators for block production (initially, miners also helped secure blocks). AI task results are verified by coordinator nodes (who check miner outputs against expected model behavior). No specialized crypto proofs yet – relies on stake and reputation (trustless to the extent that misbehavior leads to slashing or downvoting rather than automatic math-proof detection). Transitioning to Ethereum consensus (rollup) for ledger security. |
| Token & Utility | TAO token: native currency on Subtensor. Used for staking (required to register and influence consensus), transaction fees/payments (e.g. paying for AI queries), and as the reward for contributions (mining/validating). TAO has continuous inflation (1 TAO per 12s block), which drives the reward mechanism. Also used in governance (dTAO staking to subnets). | Gensyn token (ERC-20, name TBA): the protocol’s unit for payments (developers pay solvers in it). Functions as stake collateral (solvers/verifiers bond tokens and get slashed for faults). Will be used in governance (voting on protocol upgrades via the Gensyn Foundation’s DAO). No details on supply yet; likely a portion allocated to incentivize early adoption (testnet, etc.). | CAI token (ERC-20): native token of the Cuckoo Chain (1 billion fixed supply). Multi-purpose: gas fee for transactions on the Cuckoo Chain, staking for network roles (miners and coordinators must lock CAI), governance voting on protocol decisions, and rewards for contributions (mining/staking rewards came from the initial allocation). Also has meme appeal (community token aspect). |
| Asset Tokenization | Compute: yes – AI compute work is tokenized via TAO rewards (think of TAO as “gas” for intelligence). Models: indirectly – models earn TAO based on performance, but models/weights themselves are not on-chain assets (no NFTs for models). Subnet ownership is tokenized (subnet owner NFT + alpha tokens) to represent a share in a model marketplace. Data: not tokenized (data is off-chain; Bittensor focuses on model outputs rather than datasets). | Compute: yes – idle compute becomes an on-chain commodity, traded in a job marketplace for tokens. Models: not explicitly – models are provided off-chain by devs, and results are returned; no built-in model tokens (though the protocol could facilitate licensing if parties set it up). Data: no – datasets are handled off-chain between submitter and solver (could be encrypted or protected, but not represented as on-chain assets). The Gensyn vision includes possibly trading algorithms or data like compute, but the core implementation is compute-centric. | Compute: yes – GPU time is tokenized via daily CAI payouts and task bounties. The network treats computing power as a resource that miners “sell” for CAI. Models: partially – the platform integrates models as services; however, models themselves aren’t minted as NFTs. The value of a model is captured in the coordinator’s ability to earn CAI from users using it. Future plans hint at community-owned models, but currently model IP is off-chain (owned by whoever runs the coordinator). Data: no general data tokenization. User inputs/outputs are transient. (Cuckoo partners with apps like Beancount, etc., but data is not represented by tokens on the chain.) |
| Governance | Decentralized, token-holder driven (dTAO): initially had 64 elected validators running root consensus; now governance is open – TAO holders stake to subnets to direct emissions (market-based resource allocation). Protocol upgrades and changes are decided via on-chain proposals (TAO voting, with the Bittensor Foundation/council facilitating). The aim is to be fully community-governed, with the foundation gradually ceding control. | Progressive decentralization: the Gensyn Foundation + an elected council manage early decisions. After the token launch, governance will transition to a DAO where token holders vote on proposals (similar to many DeFi projects). The shared-security environment of Ethereum means major changes involve the community and potentially Layer-1 governance. Governance scope includes economic parameters and contract upgrades (subject to security audits). Not yet live, but outlined in the litepaper for post-mainnet. | Community & foundation mixed: Cuckoo launched with a “fair launch” ethos (no pre-mine for insiders). A community DAO is intended, with CAI voting on key decisions and protocol upgrades. In practice, the core team (Cuckoo Network devs) has led major decisions (like the chain sunset), but they share rationale transparently and position it as evolution for the community’s benefit. On-chain governance features (proposals, voting) are likely to come when the new rollup is in place. Staking also gives governance influence informally through the reputation system (stake-weighted votes for trusted nodes). |
| Incentive Model | Inflationary rewards linked to contribution: ~1 TAO per block distributed to participants based on performance. Quality = more reward. Miners and validators earn continuously (block-by-block), plus delegators earn a cut. TAO is also used by end users to pay for services (creating a demand side for the token). The token economy is designed to encourage long-term participation (staking) and constant improvement of models, akin to Bitcoin’s miners but “mining AI”. Potential issues (stake centralization leading to misaligned rewards) are being addressed via incentive tweaks. | Market-driven, pay-for-results: no ongoing inflationary yield (beyond possible early incentives); solvers get paid only when they successfully do work. Verifiers only get paid upon catching a fraud (jackpot incentive). This creates a direct economy: developers’ spending = providers’ earning. Token value is tied to actual demand for compute. To bootstrap, Gensyn likely rewards testnet users at launch (one-time distribution), but at steady state it is usage-based. This aligns incentives tightly with network utility (if AI jobs increase, token usage increases, benefiting all holders). | Hybrid (moving from inflation to usage fees): initially, mining & staking allocations from the 51% community pool rewarded GPU miners (30% of supply) and stakers (11%) regardless of external usage – this was to kickstart network effects. Over time, and especially after the L1 sunset, the emphasis is on revenue sharing: miners and app devs earn from actual user payments (e.g. splitting fees for an image generation). Stakers’ yield will derive from a portion of real usage or be adjusted to encourage supporting only productive nodes. So the early incentive was “grow the network” (high APY, airdrops) and later it is “the network grows if it’s actually useful” (earnings from customers). This transition is designed to weed out freeloaders and ensure sustainability. |
| Security & Attack Mitigations | Sybil: costly registration (TAO stake) deters sybils. Collusion: median consensus resists collusion up to 50% of stake; dTAO broke up a validator oligarchy by empowering token-holder voting. Dishonesty: validators deviating from consensus lose reward share (incentivizes honest scoring). A 51% attack is possible if stake is highly concentrated – research suggests adding stake caps and performance slashing to mitigate this. Model attacks: poor or malicious model outputs are penalized by low scores. No single point of failure – the network is decentralized globally (TAO miners exist worldwide, pseudo-anonymous). | Sybil: requires economic stake for participation; fake nodes without stake/work gain nothing. Verification: at least one honest verifier is needed – if so, any wrong result is caught and penalized. Uses crypto-economic incentives to make cheating not pay off (the solver loses their deposit, the verifier gains). Collusion: secure as long as not all parties collude – one honest party breaks the scheme by revealing fraud. Trust: doesn’t rely on trust in hardware or companies, only on economic game theory and cryptography. Attacks: hard to censor or DoS as tasks are distributed; an attacker would need to outbid honest nodes or consistently beat the fraud proof (unlikely without majority control). However, subtle model backdoors might evade detection, which is a known challenge (mitigated by user testing and possibly future audits beyond just correct execution). Overall security is analogous to an optimistic rollup for compute. | Sybil: all actors must stake CAI, raising the bar for sybils. Plus, a reputation system (staking + voting) means sybil identities with no reputation won’t get tasks. Node misbehavior: coordinators can drop poor-performing or suspicious miners; stakers can withdraw support. The protocol can slash stake for proven fraud (the L1 had slashing conditions for consensus; similar rules could apply to task fraud). Collusion: partly trust-based – relies on open competition and community oversight to prevent collusion from dominating. Since tasks and payouts are public on-chain, blatant collusion can be identified and punished socially or via governance. User protection: users can switch providers if one is censored or corrupted, ensuring no single point of control. Poisoning/content: by design, miners run provided models as-is; if they alter outputs maliciously, they lose reputation and rewards. The system bets on rational actors: because everyone has staked value and future earning potential, they are disincentivized from attacks that would undermine trust in the network (reinforced by the hard lessons from their L1 experiment about aligning incentives with utility). |

Table: Feature comparison of Bittensor, Gensyn, and Cuckoo AI across architecture, focus, roles, consensus, tokens, asset tokenization, governance, incentives, and security.

MEV Suppression and Fair Transaction Ordering: SUAVE vs. Anoma vs. Skip vs. Flashbots v2

· 84 min read
Dora Noda
Software Engineer

Maximal Extractable Value (MEV) refers to the profit a blockchain “insider” (miner/validator or other privileged actor) can gain by arbitrarily reordering, including, or excluding transactions in a block. Unchecked MEV extraction can lead to unfair transaction ordering, high fees (from priority gas auctions), and centralization of power in block production. A number of protocols have emerged to suppress harmful MEV or enforce fair ordering of transactions. This report compares four prominent approaches: Flashbots v2 (the post-Merge Flashbots MEV-Boost system for Ethereum), SUAVE (Flashbots’ upcoming Single Unifying Auction for Value Expression), Anoma (an intent-centric architecture reimagining how transactions are matched and ordered), and Skip Protocol (a Cosmos-based toolkit for sovereign in-protocol MEV management). We examine each in terms of their transaction queuing/ordering algorithms, MEV mitigation or extraction mechanisms, incentive models, compliance and neutrality features, technical architecture (consensus and cryptography), and development progress. Structured summaries and a comparison table are provided to highlight their strengths and trade-offs in pursuing fairness and reducing the negative externalities of MEV.

Flashbots v2 (MEV-Boost & BuilderNet on Ethereum)

Flashbots v2 denotes the current Flashbots ecosystem on Ethereum post-Proof-of-Stake, centered around MEV-Boost and recent initiatives like BuilderNet. Flashbots v2 builds on the proposer/builder separation (PBS) paradigm to open up block construction to a competitive market of builders while protecting Ethereum users from public mempool MEV attacks.

  • Transaction Ordering (Queuing & Algorithm): Flashbots MEV-Boost introduces an off-chain block-building marketplace. Validators (proposers) outsource block construction to specialized builders via a relay, instead of locally ordering transactions. Multiple builders compete to provide the highest-paying block, and the validator blindly signs the header of the top bid block (a PBS approach). This design effectively replaces the public mempool’s first-come, first-served ordering with a sealed-bid auction for entire blocks. Builders determine transaction ordering internally to maximize total payoffs (including MEV opportunities), typically preferring bundles with profitable arbitrages or liquidations at the top of the block. By using MEV-Boost, Ethereum avoided the chaotic priority gas auctions (PGAs) that previously determined ordering; instead of users and bots bidding via gas fees in real-time (driving up congestion), MEV-Boost centralizes ordering per block to the most competitive builder. Transaction queues are thus privately managed by builders, who can see incoming bundles or transactions and arrange them for optimal profit. One drawback is that this profit-driven ordering does not inherently enforce “fairness” for users – e.g. builders may still include toxic orderflows like sandwich attacks if profitable – but it does optimize efficiency by extracting MEV through a controlled auction rather than ad-hoc gas wars. Recent developments have aimed to make ordering more neutral: for example, Flashbots’ new BuilderNet (launched late 2024) allows multiple collaborating builders to share orderflow and construct blocks collectively in a Trusted Execution Environment, introducing verifiable ordering rules to improve fairness. This moves block ordering away from a single centralized builder towards a decentralized block-building network with rules that can be audited for neutrality.

  • MEV Suppression vs. Extraction Mechanisms: Flashbots v2 primarily facilitates MEV extraction in a more benign form rather than eliminating it. The original Flashbots (v1) system in 2021 allowed searchers to send bundles (preferred transaction sets) directly to miners, which suppressed harmful externalities (no public frontrunning, no failed transactions due to racing) while still extracting MEV. In the MEV-Boost era, MEV is extracted by builders bundling profitable transactions, but negative-sum competition is reduced: searchers no longer spam the mempool with competing transactions and exorbitant gas fees, which mitigates network congestion and excessive fees for users. Flashbots v2 also provides MEV mitigation tools for users: for example, Flashbots Protect RPC allows users to submit transactions privately to a relay, preventing public mempool frontrunning (no one can see or reorder the tx before inclusion). Another initiative, MEV-Share, lets users share just enough info about their transactions to attract MEV bids while capturing a portion of the value for themselves. However, Flashbots v2 does not “prevent” MEV like sandwiches or arbitrage – it channels these activities through an efficient auction that arguably democratizes who can extract the MEV. Recently, BuilderNet’s design has an explicit goal of “neutralizing negative-sum orderflow games” and sharing MEV back with the community via on-chain refund rules. BuilderNet computes refunds paid to transaction orderflow providers (like wallets or DApps) proportional to the MEV their transactions generated, redistributing value that would otherwise be pure profit for builders. In summary, Flashbots v2 maximizes MEV extraction efficiency (ensuring nearly all extractable value in a block is actually captured) while introducing measures to curb the worst externalities and return some value to users. It stops short of enforcing fair ordering (transactions are still ordered by builder profit), but through private submission, multi-party building, and refunds, it suppresses the negative user harm (like front-run slippage and censorship effects) as much as possible within the auction model.

  • Economic Incentive Structure: Flashbots v2 aligns incentives among validators, builders, and searchers through the PBS auction. Validators benefit by outsourcing block production – they simply accept the highest bid and get paid the bid amount (in addition to consensus rewards), which dramatically increased the share of MEV going to validators compared to the era where miners did not have such auctions. Builders are incentivized to out-compete each other by finding the most profitable ordering of transactions (often incorporating searcher strategies), and they keep any MEV profit left after paying the validator’s bid. In practice, competition forces builders to pay most of the MEV to validators (often >90% of profit), keeping only a thin margin. Searchers (now interacting with builders via bundles or direct transactions) still earn by discovering MEV opportunities (arbitrage, liquidation, etc.), but they must bid away most of their profit to win inclusion – effectively, searcher profits get transferred to validators via builder bids. This competitive equilibrium maximizes total network revenue (benefiting validators/stakers) but squeezes individual searcher margins. Flashbots v2 thus discourages exclusive deals: any searcher or builder with a private MEV strategy is incentivized to bid it through the open relay to avoid being undercut, leading to a more open market. The introduction of BuilderNet adds an incentive for orderflow originators (like DEXs or wallets) – by giving them refunds for the MEV their transactions create, it encourages users and apps to send orderflow to the BuilderNet ecosystem. This mechanism aligns users with the system: rather than being adversarial (users vs. MEV extractors), users share in the MEV, so they are more willing to participate in the auction fairly. Overall, Flashbots v2’s economics favor collaboration over competition in block building: validators get maximal revenue risk-free, builders compete on execution quality, and searchers innovate to find MEV but relinquish most profits to win bids, while users gain protection and possibly rebates.
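A small sketch helps show why competition pushes most of a block's value to the proposer: each builder can bid at most its block value minus whatever margin it insists on keeping, and the proposer blindly takes the highest bid. The `Builder` type and the figures below are illustrative assumptions, not measured data.

```go
package main

import "fmt"

// Builder holds the total value it can extract from a block and the
// minimum margin it insists on keeping.
type Builder struct {
	Name       string
	BlockValue uint64 // total fees + MEV the builder can realize
	MinMargin  uint64 // smallest profit the builder will accept
}

// winningBid models a sealed-bid auction: each builder bids up to
// (BlockValue - MinMargin), and the proposer takes the highest bid.
// With several competitive builders, the winning bid approaches the
// full block value, which is why validators end up with most MEV.
func winningBid(builders []Builder) (winner string, bid uint64) {
	for _, b := range builders {
		if b.BlockValue <= b.MinMargin {
			continue
		}
		if offer := b.BlockValue - b.MinMargin; offer > bid {
			winner, bid = b.Name, offer
		}
	}
	return winner, bid
}

func main() {
	builders := []Builder{
		{"builder-1", 1_000_000, 100_000}, // would keep a 10% margin
		{"builder-2", 980_000, 20_000},    // slightly worse block, thinner margin
		{"builder-3", 1_000_000, 30_000},  // same block value, bids 97% of it away
	}
	w, bid := winningBid(builders)
	fmt.Printf("proposer accepts %s's bid of %d (~%d%% of block value)\n",
		w, bid, bid*100/1_000_000)
}
```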

  • Compliance and Censorship Resistance: Regulatory compliance became a contentious issue for Flashbots after the Ethereum Merge. The default Flashbots relay initially implemented OFAC sanctions compliance (censoring certain transactions, such as those touching Tornado Cash) – leading to roughly 80% of Ethereum blocks in late 2022 being “OFAC-compliant” and raising centralization and censorship concerns in the community. Flashbots v2 addressed this by fostering a multi-relay ecosystem where validators can choose non-censoring relays (e.g. UltraSound, Agnostic) or even run their own. Flashbots open-sourced its relay code in mid-2022 to encourage global relay competition and transparency. Additionally, MEV-Boost v1.4 introduced features like a minimum bid setting so proposers could reject low bids from censoring builders and fall back to local blocks, trading some profit for inclusion of all transactions. This feature explicitly gave validators a way to improve Ethereum’s censorship resistance at a small cost. By late 2024, Flashbots took a further step by deprecating its own centralized builder in favor of BuilderNet – a collaborative network intended to be “uncensorable and neutral”. BuilderNet uses TEEs (Intel SGX) to keep transaction orderflow encrypted and verifiably commits to an ordering rule, which can help prevent individual builders from censoring specific transactions. With multiple participants jointly building blocks inside secure enclaves, no single party can easily exclude a transaction without detection. In short, Flashbots v2 has evolved from a single (and initially censoring) relay to a more decentralized infrastructure with open participation and explicit neutrality goals. Compliance is left to the policies of individual relays and builders (which validators can choose among), rather than enforced by the protocol. The trajectory is toward credible neutrality: eliminating any Flashbots-controlled chokepoints that could be pressured by regulators. Flashbots has publicly committed to removing itself as a central operator and to decentralizing all aspects of the MEV supply chain in the long run.
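The min-bid trade-off is easy to express as a decision rule: accept the best relay bid only if it clears a validator-chosen floor, otherwise build a local block from the public mempool. The sketch below is a schematic of that logic only, not mev-boost's actual implementation; the values and relay names are invented.

```go
package main

import "fmt"

// chooseBlock mirrors the proposer-side decision the minimum-bid
// setting enables: take the best relay bid only if it clears a
// validator-configured floor, otherwise fall back to a locally built
// block that includes every valid transaction.
func chooseBlock(relayBids map[string]uint64, minBid, localValue uint64) string {
	bestRelay, bestBid := "", uint64(0)
	for relay, bid := range relayBids {
		if bid > bestBid {
			bestRelay, bestBid = relay, bid
		}
	}
	if bestBid >= minBid {
		return fmt.Sprintf("use %s block (bid %d)", bestRelay, bestBid)
	}
	// Accepting a small loss of profit buys censorship resistance: the
	// local block is built from the public mempool with no filtering.
	return fmt.Sprintf("build locally (value %d, best bid %d below floor %d)",
		localValue, bestBid, minBid)
}

func main() {
	bids := map[string]uint64{"relay-a": 40_000_000, "relay-b": 35_000_000}
	fmt.Println(chooseBlock(bids, 50_000_000, 30_000_000)) // floor not met: local block
	fmt.Println(chooseBlock(bids, 10_000_000, 30_000_000)) // floor met: relay block
}
```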

  • Technical Architecture & Cryptography: Flashbots v2 is a hybrid of off-chain infrastructure and in-protocol integration. The core auction (MEV-Boost) happens off-chain via the builder and relay network, but it plugs directly into Ethereum’s consensus: validators run a sidecar client (mev-boost) that interfaces with relays using the standardized Builder API. Consensus-wise, Ethereum still uses its standard PoS consensus (Gasper, i.e. Casper FFG finality with the LMD-GHOST fork choice) – MEV-Boost doesn’t alter L1 consensus rules; it only changes who assembles the block. Initially, the Flashbots auction required trusting the relay and builder not to steal transactions or censor – there were no cryptographic guarantees (the system relied on the economic incentive that builders must deliver a valid payload matching their bid or they lose the slot). Over time, Flashbots v2 has integrated more security technology. The introduction of Trusted Execution Environments (TEE) via BuilderNet is a notable architectural shift: builders run inside SGX enclaves so that even the builder operator cannot see the raw transaction orderflow (preventing them from leaking or frontrunning it). These enclaves collectively follow a protocol to produce blocks, which can enable verifiable fairness (e.g. proving that transactions were ordered by a committed rule or that no unauthorized entity saw them before inclusion). While SGX is used (a hardware-based approach), Flashbots research is also exploring pure cryptographic primitives – e.g. threshold encryption for mempool privacy and secure multi-party computation – to eventually replace or complement TEEs and further reduce trust. Flashbots v2’s software stack includes custom clients like MEV-geth (now obsolete) and Rust-based builders (e.g. rbuilder), and it adheres to Ethereum’s builder-specs for interoperability. In summary, the architecture is modular: a network of relays, builders, and now enclaves, sitting between users and Ethereum proposers. It prioritizes performance (fast bidding, block delivery) and is gradually adding cryptographic assurances of privacy and fair ordering. No new consensus algorithm is introduced; instead Flashbots v2 works alongside Ethereum’s consensus, evolving the block production pipeline rather than the consensus rules.

  • Development Roadmap & Milestones: Flashbots has progressed through iterative phases. Flashbots v1 (2020–2021) involved the launch of MEV-geth and the first off-chain bundle auctions with miners. By mid-2021 over 80% of Ethereum’s hashrate was running Flashbots’ MEV-geth, confirming the approach’s adoption. Flashbots v2 (2022) was conceived in advance of The Merge: in November 2021 Flashbots published the MEV-Boost architecture for PoS Ethereum. After Ethereum switched to PoS (Sept 15, 2022), MEV-Boost was activated within days and rapidly reached majority uptake by validators. Subsequent milestones included open-sourcing the relay (Aug 2022) and Flashbots’ internal block builder (Nov 2022) to spur competition. In late 2022, Flashbots also added features focusing on censorship-resistance and resilience (e.g. min-bid for proposers) and wrote about the “Cost of Resilience” to encourage validators to sometimes prefer inclusion over profit. Through 2023 and 2024, improving builder decentralization became a key focus: Flashbots released “rbuilder” (a high-performance Rust builder) in July 2024 as a reference implementation to lower the barrier for new builders. Finally, in late 2024, Flashbots launched BuilderNet (alpha) in collaboration with partners (Beaverbuild, Nethermind). By December 2024, Flashbots shut down its centralized builder and migrated all orderflow to BuilderNet – a significant step towards decentralization. In early 2025, BuilderNet v1.2 was released with security and onboarding improvements (including reproducible enclave builds). These milestones mark Flashbots’ transition from an expedient centralized solution to a more open, community-run protocol. Looking forward, Flashbots is converging with its next-generation vision (SUAVE) to fully decentralize the block building layer and incorporate advanced privacy tech. Many lessons from Flashbots v2 (e.g. the need for neutrality, multi-chain scope, and the inclusion of users in MEV rewards) directly inform the SUAVE roadmap.

SUAVE (Flashbots’ Single Unifying Auction for Value Expression)

SUAVE is Flashbots’ ambitious next-step protocol designed as a decentralized, cross-domain MEV marketplace and fair transaction sequencing layer. It aims to unbundle mempools and block building from individual blockchains and provide a unified platform where users express preferences, a decentralized network executes transactions optimally, and block builders produce blocks across many chains in a credibly neutral way. In short, SUAVE seeks to maximize total value extraction while returning value to users and preserving blockchain decentralization. Flashbots introduced SUAVE in late 2022 as “the future of MEV” and has been developing it in the open since.

  • Queuing and Transaction Ordering: From a high level, SUAVE functions as an independent blockchain network that other chains can use as a plug-and-play mempool and block builder. Rather than transactions being queued in each chain’s mempool and ordered by local miners or validators, users can send their transactions (or more generally, preferences) into the SUAVE network’s mempool. SUAVE’s mempool then serves as a global auction pool of preferences from all participating chains. Ordering of transactions is determined through this auction and subsequent execution optimization. Specifically, SUAVE introduces a concept of preferences: a user’s submission isn’t just a raw transaction for one chain, but can encode a goal or conditional trade (possibly spanning multiple chains) and an associated bid the user is willing to pay for fulfillment. The ordering/queuing algorithm in SUAVE has multiple stages: First, users post their preferences to the SUAVE mempool (the “Universal Preference Environment”), which aggregates all orders privately and globally. Next, specialized nodes called executors (analogous to searchers/solvers) monitor this mempool and compete in an Optimal Execution Market to fulfill these preferences. They effectively “queue” transactions by finding matches or optimal execution ordering for them. Finally, SUAVE produces block outputs for each connected chain via a Decentralized Block Building layer: many builders (or SUAVE executors acting as builders) collaborate to construct blocks using the (now optimized) transaction order derived from user preferences. In practical terms, SUAVE’s ordering is flexible and user-driven: a user can specify conditions like “execute my trade only if price < X” or even express an abstract intent (“swap token A for B at the best rate within 1 minute”) instead of a strict transaction. The system queues these intents until an executor finds an optimal ordering or match (possibly batching with others). Because SUAVE is blockchain-agnostic, it can coordinate ordering across chains (preventing scenarios where cross-chain arbitrages are missed due to uncoordinated separate mempools). In essence, SUAVE implements a global MEV auction: all participants share one sequencing layer, which orders transactions based on aggregated preferences and bids rather than simple time or gas price. This has the effect of leveling the playing field – all orderflow goes through one transparent queue (albeit encrypted for privacy, as discussed below) instead of exclusive deals or private mempools. SUAVE’s ordering algorithm is still being refined, but it will likely involve privacy-preserving batch auctions and matching algorithms so that “fair” outcomes (like maximum total surplus or user-optimal prices) can be achieved rather than pure first-come-first-served. Notably, SUAVE intends to prevent any single actor from manipulating ordering: it is Ethereum-native and MEV-aware, with a privacy-first encrypted mempool that protects against any central points of control. In summary, SUAVE’s queue is a unified orderflow pool where ordering is determined by a combination of user bids, executor strategy, and (eventually) cryptographic fairness constraints, rather than by block proposers racing for priority.
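A rough sketch of the "Optimal Execution Market" stage, under the assumption that a preference is a goal plus a bid and that executors submit competing fulfillment candidates for the whole pool. The `Preference` and `Candidate` types are invented for illustration; the real suave-geth data structures are still evolving and look nothing like this.

```go
package main

import "fmt"

// Preference is a schematic SUAVE-style submission: a goal plus a bid,
// not a raw transaction. Field names are illustrative only.
type Preference struct {
	User   string
	Want   string  // e.g. "swap 100 DAI -> ETH"
	MinOut float64 // constraint the executor must satisfy
	BidWei uint64  // paid to whoever fulfills the preference
}

// Candidate is one executor's proposed fulfillment of the pool:
// the outcome each user would receive under its ordering.
type Candidate struct {
	Executor string
	Outcomes map[string]float64 // user -> realized output
}

// pickExecution scores candidates by total user surplus over the
// stated minimums and returns the best one, a toy version of SUAVE's
// optimal execution market.
func pickExecution(pool []Preference, cands []Candidate) (best Candidate, surplus float64) {
	surplus = -1
	for _, c := range cands {
		s, valid := 0.0, true
		for _, p := range pool {
			out, ok := c.Outcomes[p.User]
			if !ok || out < p.MinOut {
				valid = false // constraint violated: candidate disqualified
				break
			}
			s += out - p.MinOut
		}
		if valid && s > surplus {
			best, surplus = c, s
		}
	}
	return best, surplus
}

func main() {
	pool := []Preference{
		{"alice", "swap 100 DAI -> ETH", 1.00, 1_000_000},
		{"bob", "swap 2 ETH -> DAI", 195.0, 800_000},
	}
	cands := []Candidate{
		{"executor-1", map[string]float64{"alice": 1.01, "bob": 196.0}},
		{"executor-2", map[string]float64{"alice": 1.03, "bob": 197.5}},
	}
	best, s := pickExecution(pool, cands)
	fmt.Printf("%s wins with total user surplus %.2f\n", best.Executor, s)
}
```

The winning candidate's ordering is what would be handed to the decentralized block-building layer for the destination chain.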

  • MEV Suppression/Extraction Mechanisms: SUAVE’s philosophy is that MEV can be harnessed for users’ benefit and for network security if done in a cooperative, decentralized manner. Instead of either ignoring MEV or letting it concentrate in a few hands, SUAVE explicitly surfaces MEV opportunities and returns the value to those who create it (users) as much as possible. The primary mechanism is the orderflow auction: whenever a user’s transaction (preference) has MEV – for example, it could be backrun for profit – SUAVE will conduct an auction among executors (searchers) for the right to execute that MEV opportunity. Searchers (executors) bid by promising a portion of the profit back to the user as a payment (this is the user’s “bid” field in their preference, which goes to whoever fulfills it). The result is competitive MEV extraction that pushes revenue to the user rather than the extractor. For instance, if a user’s large DEX trade creates a $100 arbitrage opportunity, searchers on SUAVE might bid the profit down by offering, say, $90 back to the user as a rebate, keeping only $10. This suppresses the negative aspects of MEV like user value extraction, and turns MEV into a user benefit (users effectively get price improvement or rebates). SUAVE’s design also suppresses front-running and other malicious MEV: transactions in the SUAVE mempool can be kept encrypted until a block is being built (using SGX enclaves initially, moving toward threshold cryptography). This means no external actor can see pending transactions to frontrun them; only when enough transactions are collected and a block is finalized are they decrypted and executed, similar in spirit to batch auctions or encrypted mempools that remove the time-priority advantage of bots. Additionally, because executors optimize execution across many preferences, SUAVE can eliminate inefficient competition (like two bots fighting over the same arbitrage by spamming). Instead, SUAVE selects the best executor through the auction and that executor performs the trade once, with the outcome benefiting the user and the network. SUAVE thus acts as a MEV aggregator and “fairy godmother”: it doesn’t eliminate MEV (the profitable opportunities are still taken), but those opportunities are realized under transparent rules and with proceeds largely distributed to users and validators (and not wasted on gas fees or latency wars). By unifying mempools, SUAVE also addresses cross-domain MEV in a user-friendly way – e.g. an arbitrage between Uniswap on Ethereum and a DEX on Arbitrum could be captured by a SUAVE executor and a portion paid to the users on both sides, rather than being missed or requiring a centralized arbitrageur. Importantly, SUAVE suppresses the centralizing forces of MEV: exclusive orderflow deals (where private entities capture MEV) become unnecessary if everyone is using the common auction. SUAVE’s ultimate vision is to reduce harmful MEV extraction (like sandwich attacks causing slippage) by either making them unprofitable or refunding the slippage, and to use “good MEV” (arbitrage, liquidations) to strengthen networks (through revenue sharing and optimal execution). In Flashbots’ own words, SUAVE’s goal is to ensure “users transact with the best execution and minimum fees” while “validators get maximum revenue” – i.e. any MEV present is extracted in the most user-aligned way.
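The backrun auction can be sketched as a sealed-bid contest over how much of the opportunity is rebated to the user; with enough competing searchers, the winning rebate approaches the full opportunity size. The types and dollar figures below are hypothetical, not drawn from MEV-Share or SUAVE code.

```go
package main

import (
	"fmt"
	"sort"
)

// RebateBid is a searcher's sealed offer: "let me backrun this order
// and I will return Rebate of the arbitrage profit to the user."
type RebateBid struct {
	Searcher string
	Rebate   uint64
}

// runOrderflowAuction picks the bid returning the most value to the
// user. With enough competition the rebate is pushed toward the full
// opportunity size, leaving searchers only a thin execution margin.
func runOrderflowAuction(opportunity uint64, bids []RebateBid) RebateBid {
	if len(bids) == 0 {
		return RebateBid{}
	}
	sort.Slice(bids, func(i, j int) bool { return bids[i].Rebate > bids[j].Rebate })
	best := bids[0]
	if best.Rebate > opportunity {
		best.Rebate = opportunity // can't rebate more than the MEV itself
	}
	return best
}

func main() {
	// A trade creates a ~$100 backrun opportunity; searchers bid rebates.
	bids := []RebateBid{
		{"searcher-a", 70},
		{"searcher-b", 90},
		{"searcher-c", 85},
	}
	winner := runOrderflowAuction(100, bids)
	fmt.Printf("%s wins, user receives $%d, searcher keeps $%d\n",
		winner.Searcher, winner.Rebate, 100-winner.Rebate)
}
```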

  • Economic Incentive Structure: SUAVE introduces new roles and incentive flows in the MEV supply chain. The main participants are users, executors, block builders/validators, and the SUAVE network operators (validators of the SUAVE chain). Users set a bid (payment) in their preference, which will be paid out if their conditions are met. This bid is the carrot for executors: an executor who fulfills the user’s intent (e.g. backruns their trade to get them a better price) can claim the bid as a reward. Users are therefore directly paying for execution quality, rather like posting a bounty. Executors (Searchers) are motivated to pick up user preferences from the SUAVE mempool and optimize them because they earn the user’s bid plus any extra arbitrage profit inherent in the transaction. Executors will compete to offer the best outcome to the user because the user can set their bid in a way that they only pay if the executor actually achieves the desired result (the bid can be conditional on on-chain outcomes via oracles). For example, a user might say “I’ll pay 0.5 ETH to whoever executes this transaction such that I get at least X output; if not, no payment.” This aligns executor incentives with user success. SUAVE Validators/Builders: The SUAVE chain itself will likely be a Proof-of-Stake network (design TBD), so validators (who produce blocks on SUAVE) earn transaction fees on SUAVE (which come from users posting bids and other operations). Since SUAVE is an EVM-compatible chain, there may also be a native token or gas fee system for those transactions. These validators also play a role in sequencing cross-domain blocks; however, final block inclusion on each L1 is still done by that L1’s validator. In many cases, SUAVE will produce a partial or full block template that an Ethereum or other chain proposer can adopt. That builder might pay SUAVE (or SUAVE’s executors) some portion of the MEV. Flashbots has mentioned that SUAVE validators are incentivized by normal network fees, while executors are incentivized by bids. Value Distribution: SUAVE’s approach tends to push value to the edges: users capture value (through better prices or direct refunds), and validators capture value (through increased fees/bids). In theory, if SUAVE fulfills its mission, most MEV will be either returned to users or used to compensate validators for securing the network, rather than concentrating with searchers. Flashbots itself has indicated it does not plan to rent-seek from SUAVE and will not take a cut beyond what’s needed to bootstrap – they want to design the marketplace, not monopolize it. Another incentive consideration is cross-chain builders: SUAVE allows block builders to access cross-domain MEV, which means a builder on one chain can earn additional fees by including transactions that complete arbitrage with another chain. This encourages builders/validators of different chains to all participate in SUAVE, because opting out means missing revenue. In essence, SUAVE’s economic design tries to align all participants to join the common auction: users because they get better execution (and maybe MEV rebates), validators because they get maximum revenue, and searchers because that’s where the orderflow is aggregated. By concentrating orderflow, SUAVE also gains an information advantage over any isolated actor (all preferences in one place), which economically pressures everyone to cooperate within SUAVE rather than break away. 
In summary, SUAVE’s incentives promote a virtuous cycle: more orderflow → better combined MEV opportunities → higher bids to users/validators → more orderflow. This stands in contrast to the zero-sum competition and exclusive deals of the past, aiming instead for “coopetition” where MEV is a shared value to grow and distribute.
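As a toy of the conditional-bid idea, the sketch below releases a user's bid only when a stand-in oracle confirms the promised outcome, then splits it between the executor and a SUAVE validator. The `OutcomeOracle` interface and the 90/10 split are assumptions made purely for illustration; nothing here reflects a published SUAVE specification.

```go
package main

import (
	"errors"
	"fmt"
)

// OutcomeOracle is a stand-in for whatever SUAVE ends up using to read
// settlement results from the destination chain (light client, relayer, etc.).
type OutcomeOracle interface {
	RealizedOutput(user string) (float64, error)
}

type fixedOracle map[string]float64

func (o fixedOracle) RealizedOutput(user string) (float64, error) {
	v, ok := o[user]
	if !ok {
		return 0, errors.New("no settlement observed")
	}
	return v, nil
}

// settleBid releases the user's bid only when the oracle confirms the
// promised outcome, then splits it between the executor and the SUAVE
// validator that included the settlement (an illustrative 90/10 split).
func settleBid(user string, minOut float64, bidWei uint64, o OutcomeOracle) (executor, validator uint64, err error) {
	out, err := o.RealizedOutput(user)
	if err != nil || out < minOut {
		return 0, 0, errors.New("condition not met: no payment owed")
	}
	executor = bidWei * 90 / 100
	validator = bidWei - executor
	return executor, validator, nil
}

func main() {
	oracle := fixedOracle{"alice": 1.02}
	if ex, val, err := settleBid("alice", 1.0, 500_000_000, oracle); err == nil {
		fmt.Printf("executor earns %d wei, validator earns %d wei\n", ex, val)
	}
	if _, _, err := settleBid("bob", 2.0, 500_000_000, oracle); err != nil {
		fmt.Println("bob's bid:", err)
	}
}
```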

  • Compliance and Regulatory Considerations: SUAVE is being built with credible neutrality and censorship-resistance as core tenets. By design, SUAVE removes central intermediaries – there is no single mempool or single builder to attack or regulate. Transactions (preferences) in SUAVE can be fully encrypted and private until they are executed, using secure enclaves and eventually cryptographic techniques. This means that censorship at the transaction content level is impractical, since validators/builders cannot even read the transaction details before finalizing the order. SUAVE essentially forces a “don’t trust, verify” approach: participants don’t need to trust one entity not to censor, because the system architecture itself (decentralized network + encryption) ensures everyone’s preferences are included fairly. Moreover, SUAVE is intended to be an open, permissionless network – Flashbots explicitly invites all parties (users, searchers, wallets, other blockchains) to participate. There is no KYC or permission gating in its design. This could raise questions with regulators (e.g. the protocol could facilitate MEV extraction on sanctioned transactions), but because SUAVE is just a decentralized platform, enforcement would be difficult and analogous to trying to regulate a blockchain’s mempool. SUAVE’s focus on privacy (through SGX and later cryptography) also protects user data and orderflow from unwanted monitoring, which is positive for user security but might conflict with regulatory desires for transparency. On the other hand, SUAVE’s approach could be seen as more fair and compliant with the spirit of open markets: by creating a level playing field and returning value to users, it reduces the exploitative aspects of MEV that could draw regulatory ire (like backrunning users without their consent). SUAVE can also help eliminate unregulated dark pools – one reason regulators might be concerned about MEV is exclusive orderflow sales (which resemble insider trading). SUAVE replaces those with a transparent public auction, arguably a more compliant market structure. In terms of explicit compliance features, SUAVE could allow multiple ordering policies: for example, communities or jurisdictions could deploy their own executors with certain filters or preferences. However, the baseline is that SUAVE will try to be maximally neutral: “eliminate any central points of control, including Flashbots” and avoid embedding any policy decisions at the protocol level. Flashbots has stressed that it will not itself control SUAVE’s marketplace as it matures – meaning no central kill-switch or censorship toggle. The governance (if any) of SUAVE is still undefined publicly, but one can expect it to involve the broader community and possibly a token, rather than a company’s fiat. In summary, SUAVE is designed to align with decentralized principles, which by nature resists certain regulatory control (censorship), while potentially alleviating some regulatory concerns by making MEV extraction more equitable and transparent.

  • Technical Architecture (Consensus & Crypto): SUAVE will operate its own blockchain environment – at least initially. It is described as an EVM-compatible chain specialized for preferences and MEV. The architecture has three main components: (1) the Universal Preference Environment (the SUAVE chain + mempool, where preferences are posted and aggregated), (2) the Execution Market (off-chain or on-chain executors who solve/optimize the preferences, akin to a decentralized “order matching engine”), and (3) Decentralized Block Building (a network of SUAVE participants that assemble blocks for various domains). At its core, SUAVE’s consensus will likely be a Proof-of-Stake BFT consensus (similar to Ethereum or Cosmos) to operate the SUAVE chain itself – though whether SUAVE becomes an L1, an Ethereum L2, or a suite of “restaking” contracts is still being decided. One possibility is that SUAVE could start as a layer-2 or sidechain that uses Ethereum for finality, or leverage existing validator sets. The security model is TBD but discussions have included making it an Ethereum L3 or a Cosmos chain. Cryptographically, SUAVE leans heavily on Trusted Hardware and encryption in its early roadmap. The SUAVE Centauri phase implements a “privacy-aware orderflow auction” in which Flashbots (centrally) operates SGX enclaves to keep searcher and user orderflow private. In SUAVE Andromeda, they plan to use SGX-based auctions and block building without trusting Flashbots (the enclaves provide confidentiality so even Flashbots can’t peek). By SUAVE Helios, the aim is to have a SGX-based decentralized building network – meaning many independent parties running enclaves that collectively build blocks, achieving both privacy and decentralization. Long-term, Flashbots is researching custom secure enclaves and cryptographic replacements like threshold decryption and multi-party computation to reduce reliance on Intel’s SGX. For example, they might use a threshold encryption scheme where validators of SUAVE jointly hold a key to decrypt transactions only after ordering is decided (ensuring no one can frontrun). This concept is similar to Anoma’s Ferveo or other “fair ordering via threshold encryption” ideas. Additionally, SUAVE treats user preferences as smart contracts on its chain. A user’s preference might contain a validity predicate and a payment condition – this is essentially a piece of code that says “if X outcome is achieved on chain Y, then pay executor Z this amount”. The SUAVE chain needs to handle oracles and cross-chain verification to know when a preference has been fulfilled (e.g. reading Ethereum state to see if a swap happened). This implies SUAVE’s architecture will involve on-chain light clients or oracle systems for connected chains, as well as potentially atomic cross-chain settlement (to ensure, for instance, that an executor can execute on Ethereum and Arbitrum and atomically claim the bid). SUAVE plans to be highly extensible: because it’s EVM-compatible, arbitrary contracts (SUAVE-native “preferences” or even normal dapps) could run on it, although the intention is to keep it focused on orderflow coordination. Consensus-wise, SUAVE might innovate by being an intent-centric chain rather than a transaction-centric one, but ultimately it must order messages (preferences) and produce blocks like any chain. One can imagine SUAVE adopting a consensus algorithm optimized for throughput and low-latency finality, since it will sit in the critical path of transactions for many chains. 
Perhaps a Tendermint-style instant finality or even a DAG-based consensus could be used to quickly confirm preferences. Regardless, SUAVE’s distinguishing features are on the transaction layer, not the consensus layer: the use of privacy tech (SGX, threshold encryption) for ordering, cross-domain communication, and smart-order routing logic built into the protocol. This makes it a kind of “meta-layer” on top of existing blockchains. Technically, every participating chain will need to trust SUAVE’s outputs to some extent (e.g. an Ethereum proposer would need to accept a SUAVE-built block or include SUAVE suggestions). Flashbots has indicated SUAVE will be introduced gradually and opt-in – domains can choose to adopt SUAVE sequencing for their blocks. If widely adopted, SUAVE could become a de facto MEV-aware transaction routing network for Web3. To sum up, SUAVE’s architecture is a marriage of blockchain and off-chain auction: a specialized chain for coordination, married with off-chain secure computation among executors, all anchored by cryptographic guarantees of fairness and privacy.

  • Development Roadmap & Milestones: Flashbots outlined SUAVE’s roadmap in three major milestones, named after star systems: Centauri, Andromeda, and Helios. Centauri (the first phase, under development in 2023) focuses on building a centralized but privacy-preserving orderflow auction. In this phase, Flashbots runs the auction service (likely in SGX) that allows searchers to bid to backrun user transactions, returning MEV to users privately. It also includes launching a SUAVE devnet for early testing. Indeed, in August 2023 Flashbots open-sourced an early SUAVE client (suave-geth) and launched Toliman, the first public SUAVE testnet. This testnet has been used to experiment with preference expression and basic auction logic. Andromeda (the next phase) will roll out the first SUAVE mainnet. Here, users will be able to express preferences on a live network, and the Execution Market will operate (executors fulfilling intents). Andromeda also introduces SGX-based auctions and block building in a more distributed fashion – removing the need to trust Flashbots as an operator, and making the system truly permissionless for searchers and builders. One deliverable in this phase is using SGX to encrypt orderflow in a way that even block builders can’t peek yet can still build blocks (i.e. “open but private” orderflow). Helios is the ambitious third phase where SUAVE achieves full decentralization and cross-chain functionality. In Helios, a decentralized network of builders in SGX collaboratively produce blocks (no single builder dominance). Also, SUAVE will “onboard a second domain” beyond Ethereum – meaning it will handle MEV for at least two chains, demonstrating cross-chain MEV auctions. Additionally, cross-domain MEV expression and execution will be enabled (users can post truly multi-chain intents and have them executed atomically). Beyond Helios, Flashbots anticipates exploring custom hardware and advanced crypto (like zk-proofs or MPC) to further harden trust guarantees. Key updates and milestones so far: November 2022 – SUAVE announced; August 2023 – first SUAVE code release and testnet (Toliman); ongoing 2024 – Centauri phase orderflow auction running (Flashbots has hinted this is being tested with user transactions in a closed environment). A notable milestone will be the launch of the SUAVE mainnet (Andromeda), which as of mid-2025 is on the horizon. Flashbots has committed to building SUAVE in the open and inviting collaboration from across the ecosystem. This is reflected in the research and forum discussions, such as the “Stargazing” series posts that update on SUAVE’s design evolution. The endgame for SUAVE is that it becomes a community-owned piece of infrastructure – the “decentralized sequencing layer” for all of crypto. Achieving this will mark a major milestone in the fight for fair ordering: if SUAVE succeeds, MEV would no longer be a dark forest but a transparent, shared value source, and no single chain would have to suffer the centralizing effects of MEV on its own.

Anoma (Intent-Centric Architecture for Fair Counterparty Discovery)

Anoma is a radically different approach to enabling fair ordering and MEV mitigation – it is an entire architecture for intent-based blockchain infrastructure. Rather than bolting on an auction to existing chains, Anoma rethinks the transaction paradigm from the ground up. In Anoma, users don’t broadcast concrete transactions; they broadcast intents – declarations of what end state they desire – and the network itself discovers counterparties and forms transactions that fulfill these intents. By integrating counterparty discovery, fair ordering, and privacy at the protocol level, Anoma aims to virtually eliminate certain forms of MEV (like frontrunning) and enable “frontrunner-free” decentralized exchange and settlement. Anoma is more of a framework than a single chain: any blockchain can be a “fractal instance” of Anoma by adopting its intent gossip and matching architecture. For this discussion, we focus on Anoma’s first implementation (sometimes called Anoma L1) and its core protocol features, as they relate to fairness and MEV.

  • Queuing and Transaction Ordering: Anoma discards the conventional mempool of transactions; instead it has a gossip network of intents. Users broadcast an intent, e.g. “I want to swap 100 DAI for at least 1 ETH” or “I want to borrow against collateral at the best rate.” These intents are partial orders – they don’t specify exact execution paths, just the desired outcome and constraints. All intents are gossiped throughout the network and collected. Now, ordering in Anoma works in two stages: (1) Counterparty Discovery/Matching, and (2) Transaction Execution with Fair Ordering. In stage 1, specialized nodes called solvers continuously monitor the pool of intents and try to find sets of intents that complement each other to form a valid transaction. For example, if Alice intends to trade DAI for ETH and Bob intends to trade ETH for DAI, a solver can match them. If multiple intents are compatible (like an order book of bids and asks), solvers can find an optimal matching or clearing price. Importantly, this happens off-chain in the solver network – effectively algorithmic matchmaking. Once a solver (or group of solvers) has constructed a complete transaction (or set of transactions) that fulfills some intents, they submit it to the chain for execution. This is where stage 2 comes in: Anoma’s consensus will then order these solver-submitted transactions into blocks. However, Anoma’s consensus is designed to be order-fair: it uses cryptographic techniques (threshold encryption) to ensure that transactions are ordered without being influenced by their content or precise submission timing. Specifically, Anoma plans to use Ferveo, a threshold encryption scheme, at the mempool level. The way this works is: solvers encrypt the transactions they want to propose using a collective public key of the validators. Validators include these encrypted transactions in blocks without knowing their details. Only after a transaction is finalized in a block do validators collectively decrypt it (by each contributing a share of the decryption key). This ensures that no validator can selectively front-run or reorder based on a transaction’s content – they commit to an ordering blind. The consensus algorithm effectively orders transactions (really, intents) in something closer to a first-seen or batched manner, since all transactions in a given “batch” (block) are encrypted and revealed simultaneously. In practice, Anoma can implement batch auctions for certain applications: e.g. an intent to trade can be gathered over N blocks (kept encrypted), then all decrypted together after N blocks and matched by solvers in one batch. This prevents fast actors from seeing others’ orders and reacting within that batch – a huge advantage for fairness (this technique is inspired by Frequent Batch Auctions and has been proposed to eliminate high-frequency trading advantages). Additionally, Anoma’s validity predicates (application-level smart contracts) can enforce fairness constraints on the ordering outcome. For example, an Anoma DEX application might have a rule: “all trades in a batch get the same clearing price, and solvers cannot insert additional transactions to exploit users”. Because these rules are part of the state validity, any block containing an unfair match (say, a solver tried to sneak in a self-trade at a better price) would be invalid and rejected by validators.
In summary, ordering in Anoma happens as match then encrypt+order: intents are conceptually queued until a solver forms a transaction, and then that transaction is ordered by a fair-order consensus (preventing typical MEV). There’s effectively no mempool race, since user intents are not directly competing on gas price or time priority. Instead, the competition is for solvers to find matches, and then those matches are executed in a way that no one can change the order or intercept them while in flight. This architecture promises to neutralize many MEV vectors – there is no concept of frontrunning an intent because intents aren’t actionable until the solver assembles them, and by then they’re encrypted into the block. It’s a fundamentally different queuing model aimed at eliminating time-based priority exploits.
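A toy solver makes the matching step concrete: it takes two opposite-side intents, checks that their limit rates overlap, and clears them at the midpoint so both users do at least as well as they asked. The `Intent` type is invented for illustration, quantity netting is ignored, and this is not Anoma's actual solver algorithm.

```go
package main

import "fmt"

// Intent is a declarative order: what the user gives, what they want,
// and the worst rate they will accept. No execution path is specified.
type Intent struct {
	Owner   string
	Give    string
	Want    string
	Amount  float64 // amount of Give offered (not netted in this toy)
	MinRate float64 // minimum units of Want per unit of Give
}

// matchPair is a toy solver: it looks for two intents on opposite
// sides of the same pair whose limit rates overlap, and settles them
// at the midpoint rate so both do at least as well as they asked.
func matchPair(a, b Intent) (rate float64, ok bool) {
	if a.Give != b.Want || a.Want != b.Give {
		return 0, false
	}
	// a's minimum sell rate vs. b's maximum acceptable buy rate (1/b.MinRate).
	maxRateForB := 1 / b.MinRate
	if a.MinRate > maxRateForB {
		return 0, false // limits don't overlap, no trade
	}
	return (a.MinRate + maxRateForB) / 2, true
}

func main() {
	alice := Intent{"alice", "DAI", "ETH", 100, 0.010} // wants >= 0.010 ETH per DAI
	bob := Intent{"bob", "ETH", "DAI", 1.2, 95}        // wants >= 95 DAI per ETH
	if rate, ok := matchPair(alice, bob); ok {
		fmt.Printf("cleared at %.4f ETH per DAI (alice asked 0.0100, bob's cap %.4f)\n",
			rate, 1/95.0)
	}
}
```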

  • MEV Suppression/Extraction Mechanisms: Anoma is designed to minimize “bad MEV” by construction. By having trades resolved via batch solving and threshold encryption, typical MEV attacks like sandwiching are impossible – no one sees an intent and can insert their own before it, because intents are not transactions that live in a transparent mempool. Solvers only output final matched transactions after the opportunity for insertion has passed (due to encryption and batching). In an Anoma-based DEX, users wouldn’t be frontrun or backrun in the traditional sense, because all trades in a batch execute together at a uniform price (preventing an attacker from exploiting price change between them). This essentially suppresses predatory MEV like DEX arbitrage or sandwiching; the value that would have been taken by a bot is instead retained by users (they get a fair price). Anoma’s approach to arbitrage is also noteworthy: in many cases, if multiple intents create an arbitrage opportunity, the solver that matches them will incorporate that profit into the match (e.g. match different prices and net out a profit). But since multiple solvers can compete to provide the best match, competition can force solvers to give most of that edge back to users in the form of better fill terms. For example, if one user wants to sell at price A and another wants to buy at price B (B > A implies a gap), a solver could fulfill both at an intermediate price and capture the difference as profit – but if another solver offers users an even closer price to each other (leaving less profit), it will win the intent. Thus, solvers compete away MEV margins to benefit users, akin to how searchers in Flashbots compete via fees. The difference is this happens algorithmically via intent matching rather than gas bidding. There may still be “extracted MEV” in Anoma, but it is likely confined to solvers earning modest fees for their service. Notably, Anoma expects most orderflow to be internalized by the protocol or application logic. In some cases, this means what would be an MEV opportunity becomes just a normal protocol fee. For instance, Anoma’s first fractal instance (Namada) implements an on-chain bonding curve AMM; arbitrage on that AMM is captured by the AMM’s mechanism (like a built-in rebalancer) rather than external arbitrageurs. Another example: a lending intent offering high interest could be matched with a borrowing intent; no third party liquidator is needed if collateral falls, because the intents themselves could handle rebalancing or the protocol could auto-liquidate at a fair price. By cutting out third-party extractors, Anoma reduces the prevalence of off-chain MEV extraction. Additionally, Anoma emphasizes privacy (through the Taiga subsystem of ZK circuits). Users can choose to keep their intents partially or fully shielded (e.g. amounts or asset types hidden). This further suppresses MEV: if the details of a large order are hidden, nobody can target it for value extraction. Only after matching and execution might details emerge, by which time it’s too late to exploit. In summary, Anoma’s mechanism is largely about preventing MEV rather than extracting it: by batching transactions, encrypting the mempool, and baking economic alignment into matching, it tries to ensure there’s little opportunity for malicious arbitrage or frontrunning. The necessary MEV (like arbitrage to equalize prices across markets) is handled by solvers or protocol logic in a trust-minimized way. 
One could say Anoma aims for “MEV minimization”, striving for outcomes as if every user had access to the perfect counterparty instantly with no leakage. Any value extracted in facilitating that (the solver’s reward) is akin to a small service fee, not a windfall from exploiting asymmetry.
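The batch-auction property (everyone in a batch trades at one uniform clearing price, so intra-batch ordering confers no advantage) can be shown in a few lines of code. This is a generic frequent-batch-auction sketch over an already-decrypted batch, not Anoma's actual matching logic.

```go
package main

import (
	"fmt"
	"sort"
)

// Order is a buy or sell order for the same pair, collected over a
// batch window while still encrypted; here we work with the decrypted
// batch. Price is the limit price (quote per base).
type Order struct {
	Owner string
	Buy   bool
	Qty   float64
	Price float64
}

// clearBatch finds a single uniform clearing price that maximizes
// matched volume: every trade in the batch executes at that one price,
// so nothing is gained by being "first" inside the batch.
func clearBatch(orders []Order) (price, volume float64) {
	var prices []float64
	for _, o := range orders {
		prices = append(prices, o.Price)
	}
	sort.Float64s(prices)
	for _, p := range prices { // try each candidate price
		var demand, supply float64
		for _, o := range orders {
			if o.Buy && o.Price >= p {
				demand += o.Qty
			}
			if !o.Buy && o.Price <= p {
				supply += o.Qty
			}
		}
		if v := minFloat(demand, supply); v > volume {
			price, volume = p, v
		}
	}
	return price, volume
}

func minFloat(a, b float64) float64 {
	if a < b {
		return a
	}
	return b
}

func main() {
	batch := []Order{
		{"alice", true, 10, 101}, {"bob", true, 5, 99},
		{"carol", false, 8, 98}, {"dave", false, 6, 100},
	}
	p, v := clearBatch(batch)
	fmt.Printf("uniform clearing price %.0f, matched volume %.0f\n", p, v)
}
```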

  • Economic Incentive Structure: In Anoma, solvers take on a role analogous to both matchmakers and block builders. They incur costs (computation, maybe posting collateral) to find intent matches, and they are rewarded when they successfully propose transactions that get included. Solvers can earn in a few ways: they might charge a fee or spread within the transaction they construct (for example, giving users slightly less favorable terms and keeping the difference, similar to how a DEX aggregator might take a small cut). Or, certain intents might explicitly include a reward for the solver (like “I’m willing to pay up to 0.01 ETH to get this done”). The exact compensation model is flexible, but the key is that solvers compete. If one solver tries to take too high a fee, another can propose a solution with a better user outcome and win inclusion. This competitive dynamic is intended to keep solver profits in check and aligned with providing value. Validators (Block Producers): Anoma validators run the consensus that orders and executes transactions. They are incentivized by block rewards and fees, as in any blockchain. Notably, if intents are matched across multiple users, the resulting transaction could have multiple fee sources (each user might contribute a fee or a portion of assets). It’s possible Anoma’s fee model could allow fee splitting, but typically validators will get the standard gas fees for processing transactions. In future phases, Anoma plans an “on-demand consensus” and a native token. The idea is that many Anoma instances (or shards) could exist, and some could spin up temporarily for specific tasks (“ad-hoc consensus” for particular application needs). The token would likely be used to stake and secure these instances. Incentives here ensure that the network has enough validators to process all the matched transactions reliably and that they behave honestly in the threshold decryption process (perhaps slashing conditions if they try to decrypt early or censor). Users: Users in Anoma potentially save money and earn better outcomes rather than paying MEV implicitly. For example, they might consistently get better trade prices than on a traditional chain, meaning value stays with them. In some cases, users might also pay explicit fees to incentivize solvers, especially for complex intents or when they desire faster matching. But since users can express intents without specifying how to do them, they offload the heavy lifting to solvers and only pay if it’s worth it. There’s also a notion of “intent owners can define their own security/performance trade-offs” – e.g. a user could say “I’ll wait longer for a better price” or “I’ll pay more for immediate execution.” This flexibility lets users themselves decide how much to offer to solvers or validators, aligning economic incentives with their needs. MEV redistribution: If any MEV does occur (such as cross-chain arbitrage), Anoma’s architecture could allow capturing it into the system. For instance, multiple Anoma shards or instances might coordinate to settle an atomic multi-chain arbitrage, and the profit could be shared or burned (depending on design) rather than letting an external arbitrageur keep it all. In general, because Anoma gives applications control over transaction flow, it’s possible to implement protocol-owned MEV strategies (similar to Skip’s philosophy) at the application level.
For example, a DeFi app on Anoma could automatically route all user trades through an in-protocol solver that guarantees best execution and shares any additional profit with users or with liquidity providers. The net effect is that third-party MEV extractors are disintermediated. Economically, this is positive-sum for honest participants (users, LPs, etc.), but it might reduce opportunities for classic searchers. However, new roles like specialized solvers (maybe one focuses on NFT matching, another on FX swaps, etc.) will emerge. These solvers are analogous to today’s MEV searchers, but they operate within the system rules and likely run on much thinner profit margins due to competition and protocol constraints. Lastly, the Anoma Foundation’s vision hints at Anoma being public-good infrastructure. There will be a native token, presumably ANOMA, which might capture value via fees or be required for staking. One can foresee token incentives (inflationary rewards, etc.) for validators and perhaps even for solvers to bootstrap activity. At time of writing, details on token economics are not final, but the roadmap confirms an Anoma token and native on-demand consensus are planned in future phases. To summarize, Anoma’s incentive model encourages cooperative behavior: solvers earn by helping users get what they want, not by exploiting them; validators earn by securing the network and ordering fairly; and users “pay” mainly through giving up some MEV to solvers or fees, but ideally much less than the implicit MEV they’d lose in other systems.

  • Compliance and Neutrality: Anoma, being a framework, not a single network, can be instantiated in various ways – some could be permissioned, but the flagship Anoma L1 and similar instances are meant to be permissionless and privacy-enhanced. By incorporating heavy privacy features (like shielded intents using zero-knowledge proofs in Taiga), Anoma aligns with the view that financial privacy is a right. This could put it at odds with certain regulatory regimes that demand open visibility into transactions. However, Anoma’s design might also avoid certain regulatory pitfalls. For instance, if front-running and unfair order selection are eliminated, market manipulation concerns are mitigated – a regulator might appreciate that users aren’t being systematically exploited by insiders. Additionally, the concept of “user-defined security models” implies that users or communities could opt into different trust assumptions. Potentially, a regulated application could be built on Anoma where, say, the solver or some subset of validators are KYC’d entities ensuring compliance for that particular intent domain. Anoma as a base layer wouldn’t enforce KYC on everyone, but one could implement validity predicates requiring (for example) a proof of eligibility for certain transactions (like a proof of not being a sanctioned address, or a credential check) if an application needed it. The architecture is flexible enough to support compliance at the application level without compromising the base layer neutrality. Regarding censorship: Anoma’s threshold encryption means that even if validators wanted to censor, they cannot target specific intents because they don’t see them in plaintext. The only thing they could do is refuse to include encrypted transactions from certain solvers or users, but that would be obvious (and against the protocol rules if done arbitrarily). The expectation is that consensus rules will discourage censorship – for example, perhaps if a block doesn’t include all available decrypted intents from the last batch, it could be deemed invalid or less preferable. In any case, the decentralization of validators and the encrypted nature of payloads together ensure a high degree of censorship-resistance. On neutrality: Anoma aims to be a general platform not controlled by any single entity. The research and development is spearheaded by Heliax (the team behind Anoma and Namada), but once live, an Anoma network would be community-run. There is likely to be on-chain governance for upgrades, etc., which could raise compliance questions (e.g. could a government subvert governance to change rules?), but that is a general blockchain issue. One interesting compliance-related feature is that Anoma supports multiple parallel instances – meaning one could have an isolated intent pool or shard for certain asset types or jurisdictions. This isn’t explicitly for regulation, but it could allow, for instance, a CBDC intent pool where only authorized banks run solvers, coexisting with a free DeFi pool. The architecture’s modularity provides flexibility to segregate if needed, while still allowing interoperability via intents bridging. Finally, in terms of legal compatibility, Anoma’s entire concept of intents might avoid some classifications that bedevil traditional crypto: since an intent is not a binding transaction until matched, one could argue users maintain more control (it’s like posting an order on an exchange, which has clearer legal precedent, versus directly executing a trade). 
This might help with things like tax treatment (the system could potentially give a unified receipt of a multistep trade rather than many transactions) – though this is speculative. Overall, Anoma prioritizes decentralization, privacy, and user autonomy, which historically can clash with regulatory expectations, but the fairness and transparency gains might win favor. It essentially brings the sophistication of traditional financial matching engines on-chain, but without centralized operators. If regulators come to understand that model, they might see it as a more orderly and fair market structure than the free-for-all of mempools.

  • Technical Architecture (Consensus & Cryptography): Anoma’s architecture is complex, comprising several components: Typhon (network, mempool, consensus, execution) and Taiga (the zero-knowledge privacy layer). The core of Typhon is the intent gossip layer and a novel approach to combined consensus + matching. Anoma’s consensus protocol extends typical BFT consensus with the concept of “Validity Predicates” and “Proof-of-Order-Matching”. Essentially, each application in Anoma can define a validity predicate that must be satisfied for transactions (think of it like smart contract conditions that apply at the block level, not just tx level). This allows enforcing properties like batch auction clearing prices, etc., as described. The consensus algorithm itself is likely building on Tendermint or HotStuff style BFT (since Anoma is in the Cosmos realm and supports IBC). Indeed, Anoma’s initial testnet (Feigenbaum in 2021) and Namada use Tendermint-style consensus with modifications. One major modification is the integration of threshold encryption (Ferveo) in the mempool pipeline. Typically, Tendermint selects a proposer who orders transactions. In Anoma, the proposer would order encrypted intents/transactions. Ferveo likely works by having validators periodically agree on a threshold public key, and each intent submitted by solvers is encrypted to that key. During block proposal, all encrypted transactions are included; after proposing, the validators run a protocol to decrypt them (perhaps the next block contains the decrypted outputs or some scheme like that). This adds a phase to consensus but ensures order fairness. Cryptographically, this uses distributed key generation and threshold decryption (so it relies on assumptions like at least 2/3 of validators being honest to not leak or early-decrypt data). On the privacy side, Taiga provides zkSNARK or zk-STARK proofs that allow intents to remain partially or fully shielded. For example, a user could submit an intent to swap without revealing the asset type or amount; they provide a ZK proof that they have sufficient balance and that the transaction will be valid if matched, without revealing specifics. This is analogous to how shielded transactions in Zcash work, but extended to intents. The use of recursive proofs is mentioned, meaning multiple steps of a transaction (or multiple intents) can be proven in one succinct proof for efficiency. The interplay of Taiga and Typhon means that some solvers and validators might be operating on ciphertext or commitments rather than plaintext values. For instance, a solver might match intents that are expressed in a confidential way, solving an equation of commitments. This is cutting-edge cryptography and beyond what most current blockchains do. Another key piece is IBC integration: Anoma instances can communicate with other chains (especially Cosmos chains) via the Inter-Blockchain Communication protocol. This means an intent on Anoma could potentially trigger an action on another chain (via an IBC message) or consume data from another chain’s state. The Mainnet Phase 1 in Anoma’s roadmap specifically mentions an “adapter” on Ethereum and rollups to allow Anoma intents to tap into EVM liquidity. Likely, an Anoma solver could compose a transaction that, say, uses Uniswap on Ethereum, by crafting an intent that when matched sends a message to Ethereum to execute a swap (perhaps via a relayer or via something like an IBC bridge). 
Consensus has to ensure atomicity: presumably, Anoma’s output might be like a single transaction that spans multiple chains (something like initiating a tx on chain A and expecting an outcome on chain B). Achieving atomic cross-chain settlement is hard; possibly Anoma will start by settling on one chain at a time (Phase 1 focuses on Ethereum ecosystem, probably meaning Anoma intents will settle onto Ethereum L1 or L2s in one go). Later, “Chimera chains” and on-demand consensus might allow custom sidechains to spin up to handle particular cross-chain matches. Performance-wise, Anoma’s approach could be more computationally intensive (solvers doing NP-hard matching problems, validators doing heavy crypto). But the trade-off is vastly improved user experience (no failed transactions, better prices, etc.). The development of Anoma requires building these novel components nearly from scratch: Heliax has been creating Juvix, a new language for writing validity predicates and intents, and lots of research (some references from Anoma’s site talk about these concepts in detail). Major milestones: Anoma’s first public testnet Feigenbaum launched Nov 2021 as a demo of basic intent gossip. Subsequently, Heliax shifted focus to launching Namada (a privacy-focused L1 that can be seen as an instance of Anoma focusing on asset transfers) – Namada went live in 2023 and includes features like shielded transfers and Ferveo threshold encryption for its mempool. This shows the tech in action on a narrower use-case. Meanwhile, Anoma’s full vision testnets have been rolling out in stages (“summer 2023 testnet” mentioned in community as well). The roadmap indicates Phase 1 mainnet will integrate Ethereum, Phase 2 adds more chains and advanced cryptography, and eventually native consensus and token come in. The separation of “consensus and token in future phase” suggests initial Anoma mainnet might rely on Ethereum (e.g. leveraging Ethereum security or existing tokens rather than having its own from day one). Possibly they launch an L2 or sidechain that posts to Ethereum. Then later spin up their own PoS network with a token. This phased approach is interesting – it might be to lower the barrier for adoption (use existing capital on Ethereum rather than launching a new coin initially). In conclusion, Anoma’s architecture is novel and comprehensive: it marries cryptographic fairness (threshold encryption, ZK proofs) with a new transaction paradigm (intent-based matching) and cross-chain capabilities. It’s arguably the most aggressive attempt to eradicate traditional MEV at the protocol level, by doing what no legacy chain does: built-in fair matching engines. The complexity is high, but if successful, an Anoma chain could provide users near CEX-like execution guarantees in a decentralized setting, which is a holy grail in blockchain UX and fairness.
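The encrypt-order-decrypt pipeline can be illustrated with a deliberately simplified toy: an epoch key is split across validators, ciphertexts are committed to in a fixed order, and decryption happens only after the order is final. Note the simplifications: Ferveo uses distributed key generation and t-of-n threshold decryption with proper ciphers, whereas this sketch uses an n-of-n XOR key split and a keystream XOR purely to show the flow.

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// Toy pipeline only: real threshold schemes (Ferveo) use DKG and
// t-of-n decryption; here the epoch key is XOR-split across all
// validators, so every validator must cooperate to reveal anything.

func xorBytes(a, b []byte) []byte {
	out := make([]byte, len(a))
	for i := range a {
		out[i] = a[i] ^ b[i]
	}
	return out
}

// splitKey produces n shares whose XOR equals the key.
func splitKey(key []byte, n int) [][]byte {
	shares := make([][]byte, n)
	acc := append([]byte(nil), key...)
	for i := 0; i < n-1; i++ {
		shares[i] = make([]byte, len(key))
		rand.Read(shares[i])
		acc = xorBytes(acc, shares[i])
	}
	shares[n-1] = acc
	return shares
}

func main() {
	txs := [][]byte{[]byte("swap 100 DAI for ETH"), []byte("repay loan #42")}

	// 1. An epoch key is generated and split across 4 validators.
	key := make([]byte, 32)
	rand.Read(key)
	shares := splitKey(key, 4)

	// 2. Transactions are "encrypted" (keystream XOR, purely illustrative)
	//    and the block commits to their order while they are still opaque.
	var block [][]byte
	for _, tx := range txs {
		block = append(block, xorBytes(tx, key[:len(tx)]))
	}
	fmt.Println("ordered ciphertexts committed:", len(block))

	// 3. Only after the order is final do validators recombine shares
	//    and decrypt, so no one could reorder based on contents.
	recovered := shares[0]
	for _, s := range shares[1:] {
		recovered = xorBytes(recovered, s)
	}
	for i, ct := range block {
		fmt.Printf("position %d: %s\n", i, xorBytes(ct, recovered[:len(ct)]))
	}
}
```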

Skip Protocol (Cosmos Sovereign MEV Control and Fair Ordering Toolkit)

Skip Protocol is a leading MEV solution in the Cosmos ecosystem, focused on giving each blockchain (“app-chain”) the tools to manage transaction ordering and MEV capture on its own terms. Unlike Flashbots or Anoma which propose network-spanning systems, Skip aligns with Cosmos’s philosophy of sovereignty: each chain can integrate Skip’s modules to enforce custom fair ordering rules, run in-protocol blockspace auctions, and capture MEV for the chain’s stakeholders or users. Skip can be thought of as a suite of Cosmos SDK modules and infrastructure that together enable Protocol-Owned Blockbuilding (POB) and flexible transaction sequencing. It has been adopted on chains like Osmosis, Juno, Terra, and others in Cosmos, and is also collaborating with projects like dYdX’s upcoming chain for MEV mitigation. Key elements include an on-chain auction mechanism for priority transactions, consensus-level transaction ordering logic, and in-app mechanisms to recycle MEV (“good MEV”) for the protocol’s benefit.

  • Transaction Queuing & Ordering Algorithms: In a typical Cosmos chain (using Tendermint/BFT consensus), the mempool orders transactions roughly by fee and arrival time, and the block proposer can pick any ordering when creating a block (with no algorithmic constraints beyond including valid transactions). Skip changes this by introducing consensus-enforced ordering rules and multi-lane mempools. Using Cosmos’s new ABCI++ interface (which allows customizing block proposal and processing), Skip’s Protocol-Owned Builder (POB) module can partition the block into distinct lanes with different ordering policies. For example, one lane could be a Top-of-Block auction lane where the highest-bid transactions (perhaps from arbitrage bots or urgent trades) are placed first in the block in a fixed order, another lane could be a Free lane for ordinary user transactions with no fees, and a Default lane for normal transactions with fees. The Skip module’s BlockBuster component allows developers to define these lanes and their ordering logic in a modular way. Crucially, these rules are enforced by all validators: when a proposer constructs a block, the other validators will verify that the block’s transactions adhere to the agreed ordering rules (via the ProcessProposal ABCI checks). If not, they can reject the block. This means even a malicious or profit-seeking proposer cannot deviate (e.g. cannot slip in their own front-run transaction ahead of a winning auction bidder, because that would violate the ordering rule). Some examples of ordering rules Skip enables: (a) Order transactions by descending gas price (fee) – ensuring the highest-fee transactions always get priority. This formalizes a fair “pay-for-priority” scheme instead of random or purely time-based ordering. (b) Must include at least one oracle price update tx before any trades – ensuring data feeds are updated, which prevents scenarios where a proposer could ignore oracle updates to exploit stale prices. (c) Limit the number of special transactions at top-of-block – e.g. only one auction-winning bundle can occupy the very top, to prevent spamming of many small MEV grabs. (d) No transactions that violate a state property – Skip allows stateful ordering rules, like “after building the block, ensure no DEX trade was executed at a price worse than if it were last in the block” (a way to enforce no sandwich attack occurred). One concrete rule described is a “zero frontrunning condition across all DEXs”, which could mean if any transaction would have been affected by later ones in a way that indicates frontrunning, the block is invalid. This is powerful: it’s essentially making fairness part of block validity. Cosmos chains can implement such rules because they control their full stack. Skip’s framework gives a structured way to do it via the AuctionDecorator in the SDK, which can check each tx against configured rules. Additionally, Skip provides mempool enhancements: the node’s mempool can simulate blocks ahead of time, filter out failing transactions, etc., to help proposers follow the rules efficiently. For instance, if a block’s auction lane must contain the highest bids, the mempool can be sorted by bids for that lane. If a block must include only transactions that result in a certain state condition, the proposer’s node can simulate transactions as it picks them to ensure the condition holds. In summary, Skip enables deterministic, chain-defined ordering rather than leaving it entirely to proposer whim or simple gas price priority.
Chains adopt Skip’s builder module to effectively codify their transaction ordering policy into the protocol. This fosters fairness because all validators enforce the same rules, removing the opportunity for a single proposer to do arbitrary reordering for MEV unless it’s within the allowed mechanism (like the auction, where it’s transparent and competitive). Queuing of transactions in Skip’s model might involve separate queues per lane. For example, an auction lane might queue special bid transactions (Skip uses a special MsgAuctionBid type for bidding for top-of-block inclusion). Those bids are collected each block and the highest is selected. Meanwhile, normal transactions are queued in the default mempool. Essentially, Skip introduces a structured queue: one for priority bids, one for free transactions, and so on, each with its own ordering criteria. This modular approach means each chain can customize how it balances fairness and revenue – e.g. Osmosis might say “we want no MEV auction at all, but we enforce order-fairness via threshold encryption” (they did implement threshold encryption with help from Skip and others), whereas another chain might say “we allow auctions for MEV but require some proceeds to be burned”. Skip supports both. This configurability of ordering is Skip’s hallmark. A simplified sketch of how validators could verify such consensus-enforced ordering appears below.
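
To make the lane model concrete, here is a minimal, self-contained Go sketch of the kind of check a ProcessProposal-style hook could run. The types and rules are illustrative stand-ins, not Skip’s actual x/builder API: the point is simply that every validator re-runs the same deterministic check and rejects any block whose ordering breaks the chain’s policy.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for Skip-style lanes; these are not the real
// x/builder types, just enough structure to show the ordering check.
type Lane int

const (
	LaneAuction Lane = iota // top-of-block, ordered by descending bid
	LaneFree                // fee-less user transactions
	LaneDefault             // everything else
)

type Tx struct {
	Lane Lane
	Bid  uint64 // only meaningful for auction-lane transactions
}

// verifyProposal plays the role of a ProcessProposal-style check: every
// validator runs the same deterministic function on a received block and
// rejects the proposal if the chain's ordering policy is violated.
func verifyProposal(block []Tx, maxAuctionTxs int) error {
	seenNonAuction := false
	auctionCount := 0
	var prevBid uint64

	for i, tx := range block {
		if tx.Lane != LaneAuction {
			seenNonAuction = true
			continue
		}
		if seenNonAuction {
			return fmt.Errorf("tx %d: auction tx appears after non-auction txs", i)
		}
		auctionCount++
		if auctionCount > maxAuctionTxs {
			return errors.New("too many top-of-block auction txs")
		}
		if auctionCount > 1 && tx.Bid > prevBid {
			return fmt.Errorf("tx %d: auction txs not sorted by descending bid", i)
		}
		prevBid = tx.Bid
	}
	return nil
}

func main() {
	block := []Tx{
		{Lane: LaneAuction, Bid: 500},
		{Lane: LaneDefault},
		{Lane: LaneFree},
	}
	fmt.Println(verifyProposal(block, 1)) // <nil> — ordering policy satisfied
}
```

In a real chain this logic would live inside the node’s proposal-handling code and operate on decoded SDK transactions rather than toy structs.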

  • MEV Mitigation and Extraction Mechanisms: Skip’s approach to MEV is often described as “protocol-owned MEV” and “multiplicity.” Protocol-owned MEV means the blockchain protocol itself, via its code and governance, captures or redistributes MEV rather than leaving it to individual validators or outsiders. Multiplicity refers to ensuring the “right” (multiple) transactions get included – essentially not excluding legitimate user transactions in favor of only MEV transactions, and perhaps including multiple MEV opportunities in one block if possible (so no single searcher monopolizes). Concretely, Skip provides tools to capture MEV in ways that benefit the network: one is Skip Select, a blockspace auction system for top-of-block inclusion. In Skip Select, searchers (like arbitrage bots) submit bundles with tips to validators, similar to Flashbots bundles, except it’s done natively on-chain via Skip’s modules. The highest-paying bundle (or bundles) is then automatically inserted at the top of the block in the specified order. This guarantees those transactions execute as intended, and the validator (or chain) collects the tip. This mechanism takes what was an off-chain OTC process (in Ethereum) and makes it an open, on-chain auction – improving transparency and access. Another mechanism is ProtoRev (Protocol Revenue module), which Skip developed for Osmosis. ProtoRev is an on-chain arbitrage module that automatically detects and executes cyclic arbitrages (like those involving multiple pools) within the block’s execution and accumulates the profit to the chain’s treasury or community pool. Essentially, Osmosis decided that certain “good MEV” (like arbitrage that keeps prices aligned) should still happen (for market efficiency) but the protocol itself does the arbitrage and captures the profit, then later distributes it (e.g. to stakers or as liquidity mining incentives). This eliminates the need for external arbitrage bots on those opportunities and ensures the value stays in the ecosystem. ProtoRev was the first of its kind on a major chain and demonstrates how deep integration can mitigate MEV’s externalities: users trading on Osmosis face less slippage because if an arbitrage exists after their trade, the protocol will close it and essentially rebate the value back to Osmosis (which could indirectly benefit users via lower fees or token buybacks, etc.). Moreover, Skip empowers chains to implement anti-MEV measures like threshold encryption for the mempool. For example, Osmosis, in collaboration with Skip and others, is implementing mempool encryption where transactions are submitted encrypted and only revealed after a fixed time (similar to Anoma’s idea, but at the chain level). While not a Skip product per se, Skip’s architecture is compatible – Skip’s auction can run on encrypted transactions by doing the auction based on declared bids rather than reading transaction content. In terms of suppressing harmful MEV: Skip’s consensus rules like “no front-running allowed” (enforced by state checks) are a direct measure to stop malicious behavior. If a validator tries to include a sandwich attack, other validators would detect that the state outcome violates the no-frontrunning rule (for instance, they could check that no trade was immediately preceded and followed by another from the same address in a way that exploited it). That block would be rejected. Knowing this, validators won’t even try to include such patterns; thus, users are protected by protocol law.
Skip also encourages burning or redistributing MEV revenue to avoid perverse incentives. For example, a chain could choose to burn all auction proceeds or put them in a community fund rather than give them all to the block proposer. This reduces the incentive for validators to re-order transactions themselves, since they might not personally profit from it (depending on the chain’s choice). In summary, Skip’s toolkit allows each chain to surgically extract MEV where it’s beneficial (e.g. arbitrage to maintain market efficiency, liquidations to keep lending markets healthy) and ensure that value is captured by the protocol or users, while strictly forbidding and preventing malicious MEV (like user-unfriendly frontrunning). It’s a pragmatic mix of extraction and suppression, tailored by governance: rather than one-size-fits-all, Skip empowers communities to decide which MEV is “good” (and automate its capture) and which is “bad” (and outlaw it via consensus rules). The result is a fairer trading environment on Skip-enabled chains and an additional revenue source that can fund public goods or lower costs (one of Skip’s blog posts notes fair MEV capture can be used to “fairly distribute revenue among all network participants”). A toy sketch of how a searcher might package a bid for such a top-of-block auction follows.
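
The following Go sketch shows, in simplified form, how a searcher-side bundle for a Skip-style top-of-block auction might be assembled. The types are illustrative stand-ins (the real MsgAuctionBid is a protobuf-defined Cosmos SDK message); they are here only to show the shape of the data a bid carries: who is bidding, how much, and which signed transactions must execute at the top of the block, in order.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// Illustrative stand-ins for a Skip-style top-of-block bid. Field names here
// are approximations for the sketch only.
type Coin struct {
	Denom  string
	Amount uint64
}

type MsgAuctionBid struct {
	Bidder       string   // the searcher's address
	Bid          Coin     // amount offered for top-of-block placement
	Transactions [][]byte // raw signed txs to execute, in this exact order, if the bid wins
}

// buildBid packages a searcher's bundle, e.g. an arbitrage transaction that
// must land at the very top of the block.
func buildBid(bidder string, bidAmount uint64, rawTxs ...[]byte) MsgAuctionBid {
	return MsgAuctionBid{
		Bidder:       bidder,
		Bid:          Coin{Denom: "uosmo", Amount: bidAmount},
		Transactions: rawTxs,
	}
}

func main() {
	arbTx := []byte("signed-arbitrage-tx-bytes") // placeholder for a real signed transaction
	bid := buildBid("osmo1searcher...", 2_000_000, arbTx)

	fmt.Printf("bidder=%s bid=%d%s bundle=%d tx(s)\n",
		bid.Bidder, bid.Bid.Amount, bid.Bid.Denom, len(bid.Transactions))
	fmt.Println("first tx (base64):", base64.StdEncoding.EncodeToString(bid.Transactions[0]))
}
```

If the bid wins the per-block auction, the bundled transactions are placed at the very top of the block in exactly the listed order.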

  • Economic Incentive Structure: Skip’s introduction fundamentally shifts incentives, especially for validators and chain communities in Cosmos. Traditionally, a validator in Cosmos might extract MEV by privately reordering transactions in their block (since Cosmos lacks an MEV auction by default). With Skip, validators instead agree to a protocol where MEV is captured via auctions or modules and often shared. Validators still benefit: they can receive a cut of the auction proceeds or extra fees from Skip’s mechanisms, but importantly all validators (not just the proposer) can benefit if designed that way. For example, some Skip auctions can be configured such that the revenue is split among all stakers or according to governance decisions, rather than a winner-takes-all payout to the proposer. This aligns validators collectively to run the Skip software, because even non-proposers gain assurance (knowing that if someone proposes an invalid block, it won’t pay off) and possibly revenue. Some chains might still give the proposer most of the MEV auction fee (to maximize the immediate incentive to include it), but even then it’s transparent and competitive, arguably reducing the chance of under-the-table deals. Chain/Community: The concept of protocol-owned MEV means the blockchain and its stakeholders capture MEV. For example, Osmosis directs ProtoRev profits to its community pool, effectively turning MEV into an additional protocol revenue that could fund development or be distributed to OSMO stakers. This makes the community at large an “owner” of that MEV, aligning everyone’s interest in extracting MEV in healthy ways. If users know the MEV is going back to improving the chain or tokenomics, they might be more accepting of it than if it goes to a random bot. Searchers: In Skip’s model, independent searchers/bots may have less to do on-chain because some opportunities are taken by protocol logic (like ProtoRev) and others are channeled through auctions. However, Skip doesn’t eliminate searchers – rather, it channels them into bidding through the proper routes. A searcher can still attempt a complex strategy, but to guarantee inclusion at a particular spot, they should participate in Skip’s auction (Skip Select) by submitting their bundle with a bid. If they don’t, they risk a validator ignoring them in favor of someone who did bid, or the chain’s own mechanism taking the opportunity. So searchers in Cosmos are evolving to work with Skip: e.g. many arbitrageurs on Osmosis now submit their arbs via Skip’s system. They pay a portion to the chain, keeping less profit, but it’s the price to play. Over time, some “searcher” roles might be entirely absorbed (like backrunning arbitrage – ProtoRev handles it, so no external searcher can compete). This might reduce spam and wasted effort in the network (no more multiple bots racing; just one protocol execution). Users: End-users stand to gain because of a fairer environment (no surprise MEV attacks). Also, some Skip configurations explicitly reward users: MEV redistribution to users is possible. For instance, a chain could decide to rebate some MEV auction revenue to the users whose trades created that MEV (similar to Flashbots’ refund idea). Astroport, a DEX on Terra, integrated Skip to share MEV revenue with swappers – meaning if a user’s trade had MEV, part of that value is returned to them by default. This aligns with the ethos that MEV should go to users. While not all chains do this, the option exists via Skip’s infrastructure to implement such schemes.
Skip Protocol itself (the company/team) has a business model where it provides these tools to validators for free (to encourage adoption) and monetizes by partnering with chains (B2B). For example, Skip might take a small fee from the MEV captured or charge chains for advanced features/support. This means Skip has an incentive to maximize the MEV captured by the chain and community (so that the chain is happy and perhaps shares a portion as per agreement). But because governance is involved, any fee Skip takes is usually agreed upon by the community. The economic effect is interesting: it professionalizes MEV extraction as a service provided to chains. In doing so, it disincentivizes rogue behavior – validators don’t need to individually make shady deals; they can just use Skip and get a reliable flow of extra revenue that’s socially accepted. Honest behavior (following the protocol rules) yields nearly as much or more profit than trying to cheat, because if you cheat, your block might be rejected as invalid or you might face social penalties. Governance plays a significant role: adopting Skip’s module or setting the parameters (like the auction cut or the distribution of proceeds) is done via on-chain proposals. This means the economic outcomes (who gets the MEV) are ultimately determined by community vote. For instance, the Cosmos Hub is debating adopting Skip’s builder SDK to possibly redirect MEV to the Hub’s treasury or stakers. This alignment via governance ensures that the use of MEV is seen as legitimate by the community. It turns MEV from a toxic byproduct into a public resource that can be allocated (to security, users, devs, etc.). In summary, Skip reshapes incentives such that validators collectively and users/community benefit, while opportunistic MEV takers are either co-opted into the system (as bidders) or designed out. Everyone is better off in theory: users lose less value to MEV, validators still get compensated (even possibly more in total, due to auctions), and the network as a whole can use MEV to strengthen itself (financially or via a fairer experience). The only losers are those who thrived on zero-sum extraction without returning value. A toy sketch of a governance-parameterized split of auction proceeds appears below.
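
As a small illustration of how governance-set parameters might determine where auction proceeds go, here is a hypothetical Go sketch. The parameter names and the 25/50/25 split are invented for the example; each chain’s governance would choose its own values (or burn everything, or give it all to the proposer).

```go
package main

import "fmt"

// Hypothetical governance parameters for splitting top-of-block auction
// proceeds. Real chains set whatever split (or full burn) their governance
// approves; these names and numbers are illustrative only.
type DistributionParams struct {
	ProposerShare float64 // fraction paid to the block proposer
	StakerShare   float64 // fraction distributed among all stakers
	// anything left over is burned in this toy model
}

func splitProceeds(proceeds uint64, p DistributionParams) (proposer, stakers, burned uint64) {
	proposer = uint64(float64(proceeds) * p.ProposerShare)
	stakers = uint64(float64(proceeds) * p.StakerShare)
	burned = proceeds - proposer - stakers // remainder burned, absorbing rounding
	return
}

func main() {
	params := DistributionParams{ProposerShare: 0.25, StakerShare: 0.50}
	p, s, b := splitProceeds(1_000_000, params)
	fmt.Println("proposer:", p, "stakers:", s, "burned:", b)
	// proposer: 250000 stakers: 500000 burned: 250000
}
```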

  • Compliance and Regulatory Compatibility: Skip’s framework, by empowering chain governance, actually makes it easier for chains to enforce compliance requirements or other specific policies if they desire. Because Skip operates at the protocol level, a chain could choose to enforce certain transaction filtering or ordering rules to comply with regulations. For example, if a chain wanted to block sanctioned addresses, it could integrate an AnteHandler or AuctionDecorator rule in Skip’s module that invalidates blocks containing blacklisted addresses. This is arguably simpler than in Ethereum, where censorship is an off-chain choice by individual validators; in Cosmos with Skip, it could be a chain-wide rule (though this would be controversial and would go against decentralization ideals for many). Alternatively, a chain could enforce something like “fiat on-ramp transactions must appear before others” if mandated by some law. The Skip toolkit doesn’t come with preset compliance rules, but it’s flexible enough to implement them if a community is compelled to (through governance). On the flip side, Skip can bolster censorship-resistance: by distributing MEV revenue and giving equal access, it reduces the advantage of any single validator that might censor for profit. Additionally, if threshold-encrypted mempools (like the one Osmosis is adding) become standard with Skip, that will hide transaction contents, making censorship harder (as in Anoma). Skip is neutral infrastructure – it can be used to either comply or resist, depending on governance. Since Cosmos chains are often jurisdiction-specific (Terra’s community might worry about Korean laws, Kava might worry about US laws, etc.), having the option to configure compliance is valuable. For instance, a permissioned Cosmos chain (like an institutional chain) could still use Skip’s builder module but perhaps require that only whitelisted addresses can bid in auctions, aligning with its regulations. Regulatory compatibility is also about transparency: Skip’s on-chain auctions produce a public record of MEV transactions and who paid what. This could actually satisfy some regulatory concerns about fairness (everyone had a chance to bid, and it’s auditable). It’s more transparent than under-the-table payments to validators. Also, by capturing MEV on-chain, Skip reduces the likelihood of off-chain cartels or dark pools, which regulators fear due to their opaqueness. For example, without Skip, validators might make private deals with searchers (as was seen with relay censorship issues). With Skip, the expectation is that you use the official auction – which is open and logged – to get priority. This fosters an open market accessible to all bots equally, which is arguably fairer and less prone to collusion (collusion is possible, but governance oversight exists). Another compliance angle: since Skip deals with value capture, if MEV revenue goes to a community pool or treasury, that might raise questions (is it a fee, is it taxable, etc.?). But those are similar to how transaction fees are handled, so nothing fundamentally new legally. In Cosmos, chain communities can also decide how to use that fund (burn, distribute, etc.), which they can align with any legal guidance if needed (for example, they might avoid sending it to a foundation if that triggers tax issues and instead burn it). In terms of censorship-resistance, one interesting note: by enforcing block validity rules, Skip prevents validators from censoring certain transactions if doing so would break the rules.
For example, if a chain had a rule “must include at least one oracle update”, a censoring validator couldn’t just omit all oracle transactions (which might come from certain sources) because their block would be invalid. So, ironically, Skip rules can force the inclusion of critical transactions (anti-censorship) just as they could be used to force the exclusion of disallowed ones. It’s all about what the community sets. Neutrality: Skip’s default stance is empowering chains to “protect users from negative MEV and improve user experience”, which implies neutrality and user-friendliness. There isn’t a central Skip network making decisions – each chain is sovereign. Skip as a company might advise or provide defaults (like a recommended auction format), but ultimately the chain’s token holders decide. This decentralization of MEV policy to each chain’s governance can be seen as more compatible with regulatory diversity: e.g. a US-based chain could implement OFAC compliance if legally pressured, without affecting other chains. It’s not one relay censoring across many chains; it’s a per-chain choice. From a regulator’s perspective, Skip doesn’t introduce any additional illicit activity – it just reorganizes how transactions are ordered. If anything, it might reduce volatility (fewer gas wars) and create more predictable execution, which could be a plus. Summing up, Skip’s architecture is highly adaptable to compliance needs while preserving the option for maximal censorship-resistance if communities prioritize that. It keeps MEV in the daylight and under collective control, which likely makes blockchain ecosystems more robust against both malicious actors and regulatory crackdowns, since self-governance can proactively address the worst abuses. A minimal sketch of such a governance-configured inclusion/exclusion policy check follows.
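
To show how a single, governance-configured policy could express both a forced-inclusion rule and an address filter, here is a hypothetical Go sketch. The Policy type, its fields, and the checks are invented for illustration; the point is that the same mechanism can be dialed toward censorship-resistance (require critical transactions) or compliance (exclude blocked senders), depending on what the community votes in.

```go
package main

import "fmt"

// Hypothetical, governance-configured block policy. The field names and rule
// set are invented for illustration; a real chain would encode its policy in
// a Skip-style module checked during proposal verification.
type Policy struct {
	RequireOracleUpdate bool            // force inclusion of a critical tx type
	BlockedSenders      map[string]bool // optional address filter (compliance)
}

type Tx struct {
	Sender   string
	IsOracle bool
}

// checkPolicy returns an error if the proposed block violates the policy.
// The same mechanism can force inclusion (anti-censorship) or force
// exclusion (compliance), depending entirely on what governance sets.
func checkPolicy(block []Tx, p Policy) error {
	sawOracle := false
	for _, tx := range block {
		if p.BlockedSenders[tx.Sender] {
			return fmt.Errorf("block includes tx from blocked sender %s", tx.Sender)
		}
		if tx.IsOracle {
			sawOracle = true
		}
	}
	if p.RequireOracleUpdate && !sawOracle {
		return fmt.Errorf("block is missing a required oracle update")
	}
	return nil
}

func main() {
	policy := Policy{RequireOracleUpdate: true}
	block := []Tx{{Sender: "osmo1oracle...", IsOracle: true}, {Sender: "osmo1trader..."}}
	fmt.Println(checkPolicy(block, policy)) // <nil> — policy satisfied
}
```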

  • Technical Architecture & Implementation: Skip Protocol is built tightly into the Cosmos SDK stack. The core delivery is a set of modules (e.g. x/builder) and modifications like the BlockBuster mempool implementation. Cosmos chains run a BFT consensus engine (Tendermint/CometBFT) that offers ABCI hooks for preparing and processing proposals. Skip leverages the ABCI++ extensions that allow executing code between block proposal and finalization. This is how it enforces ordering: PrepareProposal can reorder the block transactions according to lane rules before broadcasting the proposal, and ProcessProposal on receiving validators can check that the ordering and state validity match expectations. This is a modern feature (Cosmos SDK v0.47+), and Skip’s POB is compatible with recent SDK versions. Under the hood, Skip’s modules maintain data structures for auctions (e.g. an on-chain order book of bids for top-of-block). They also likely use dedicated priority transaction types. The README shows a special MsgAuctionBid and custom logic to handle it. So searchers interact by sending these messages through normal Cosmos transactions, which the module then intercepts and places accordingly. The builder module’s AnteHandler (the AuctionDecorator) can consume auction bids and decide winners in the block assembly phase. Skip doesn’t inherently add new cryptographic requirements (aside from whatever the chain chooses, like threshold cryptography for the mempool, which is separate). It relies on the honesty of >2/3 of validators to enforce the rules and not collude to break them. If a majority did collude, they could technically change the rules via governance or ignore them by making that the new de facto rule. But that’s the case with any chain logic. Skip’s design tries to make it mechanically impossible for a single validator to cheat at small scale. For example, any attempt to deviate from the ordering will be caught by others because the check is objective. So it reduces trust in single proposers. In performance terms, running auctions and extra checks does add overhead. However, Cosmos blocks are relatively small and the time between blocks is often a couple of seconds, which is enough for these operations in most cases. Simulation (pre-executing transactions to ensure none fail and that ordering constraints hold) might be the heaviest part, but validators already execute blocks normally, so this is similar. Multi-lane support implies mempool separation: e.g. a transaction might need to specify which lane it’s targeting (auction vs. free vs. default). The Skip BlockBuster design indeed had separate lanes like lanes/auction, lanes/free, etc., likely with separate mempool queues. That ensures, for instance, that free transactions don’t delay or interfere with auction ones. It’s a bit like having multiple priority classes in scheduling. Another aspect is security and misbehavior: if a proposer tries to game the auction (e.g. including their own transaction but claiming it followed the rules), other validators will reject the block. Cosmos consensus then moves on to the next proposer round, and the offending proposer simply loses its slot (no special slashing is needed for this case). So the chain’s security model handles that – no special slashing by Skip is needed beyond existing consensus. One could extend Skip to slash for malicious ordering, but that is probably unnecessary if the block simply fails. Development and Tooling: Skip’s code has been open-sourced (initially at skip-mev/pob and now likely moved to a new repo after stable releases).
They’ve been through testnets and iterations with partner chains. On the roadmap, we’ve seen: Osmosis Prop 341 (passed in fall 2022) to integrate ProtoRev and auctions with Skip – delivered in early 2023. Terra’s Astroport integrated MEV sharing with Skip in 2023. The Cosmos Hub is evaluating Skip’s “Block SDK”, which would bring similar features to the Hub. Another interesting frontier is cross-chain MEV via the Interchain Scheduler – the Cosmos Hub community is exploring an interchain MEV auction where MEV from many chains could be traded on the Hub, and Skip is involved in those discussions (the Zerocap research noted IBC’s planned interchain scheduler). Skip’s tech could serve as the backbone for such cross-chain auctions because it’s already running auctions on single chains. That would be analogous to SUAVE’s cross-domain goal but within Cosmos. As for key updates: Skip launched around mid-2022. By mid-2023, they had a stable POB release for SDK v0.47+ (which many chains are upgrading to). They raised seed funding as well, indicating active development. Another competitor in Cosmos, Mekatek, offers similar functionality. This has perhaps accelerated Skip’s roadmap to keep ahead. Skip continues to refine features like private transaction lanes (perhaps to keep transactions hidden until they are included) and more complex validity rules as use cases arise. Because it’s modular, a chain like dYdX (which will have an orderbook) might use Skip to ensure fairness in on-chain order matching, so Skip’s tools might adapt to different app logic. Technically, Skip’s solution is simpler than building a whole new chain: it upgrades existing chains’ capabilities. This incremental, opt-in approach has allowed fairly quick adoption – e.g., enabling auctions on Osmosis did not require a new consensus algorithm, just adding a module and coordinating validators to run the updated software (which they did, since it was beneficial and passed by governance). The heavy lifting (like finding arbitrages) can even be done by the chain’s own module (ProtoRev uses Osmosis’ built-in Wasm and Rust code to scan pools). So a lot of MEV handling moves on-chain. This on-chain approach does have to be carefully coded for efficiency and security, but it’s under the community’s scrutiny. If any rule is problematic (too strict, etc.), governance can tweak it. Thus, technically and socially, Skip turns MEV into just another parameter of the chain to be optimized and governed, rather than a wild west. This is a unique stance enabled by Cosmos’s flexibility. A toy sketch of a lane-by-lane block assembly in the spirit of PrepareProposal is shown below.
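
Complementing the verification sketch earlier, the following Go snippet illustrates the proposer’s side: assembling a block lane by lane so that the result will pass the validators’ checks. Again, the lane structure and limits are illustrative stand-ins, not the actual BlockBuster implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// Simplified lane queues as a proposer's node might keep them; real
// BlockBuster lanes live in the node's mempool, these types are illustrative.
type Tx struct {
	ID       string
	Bid      uint64
	GasPrice uint64
}

type LaneQueues struct {
	Auction []Tx // sorted by bid before inclusion
	Free    []Tx // included first-come, up to a per-block quota
	Default []Tx // sorted by gas price
}

// prepareProposal mirrors a PrepareProposal-style hook: assemble the block
// lane by lane so the resulting ordering satisfies the rules every validator
// will later re-check.
func prepareProposal(q LaneQueues, maxAuction, maxFree int) []Tx {
	sort.Slice(q.Auction, func(i, j int) bool { return q.Auction[i].Bid > q.Auction[j].Bid })
	sort.Slice(q.Default, func(i, j int) bool { return q.Default[i].GasPrice > q.Default[j].GasPrice })

	block := []Tx{}
	block = append(block, q.Auction[:minInt(maxAuction, len(q.Auction))]...)
	block = append(block, q.Free[:minInt(maxFree, len(q.Free))]...)
	block = append(block, q.Default...)
	return block
}

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func main() {
	q := LaneQueues{
		Auction: []Tx{{ID: "bidB", Bid: 300}, {ID: "bidA", Bid: 900}},
		Free:    []Tx{{ID: "free1"}},
		Default: []Tx{{ID: "swap", GasPrice: 20}, {ID: "send", GasPrice: 5}},
	}
	for _, tx := range prepareProposal(q, 1, 1) {
		fmt.Println(tx.ID) // bidA, free1, swap, send
	}
}
```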

Comparative Analysis of SUAVE, Anoma, Skip, and Flashbots v2

These four protocols approach the MEV and fair ordering problem from different angles, tailored to their respective ecosystems and design philosophies. Flashbots v2 is an incremental, pragmatic solution for Ethereum’s current architecture: it embraces MEV auctions but tries to democratize and soften their impact (via off-chain coordination, SGX privacy, and sharing mechanisms). SUAVE is Flashbots’ forward-looking plan to create an inter-chain MEV platform that maximizes total value and user benefits – essentially scaling the auction model to a decentralized, privacy-preserving global network. Anoma is a ground-up reimagining of how transactions are formulated and executed, aiming to eliminate the root causes of unfair ordering by using intents, solver-mediated matching, and cryptographic fairness in consensus. Skip is a sovereign chain approach, integrating fairness and MEV capture at the protocol level on a per-chain basis, especially in Cosmos, through configurable rules and auctions.

Each has strengths and trade-offs:

  • Fairness and Ordering Guarantees: Anoma offers the strongest theoretical fairness (no frontrunning by design, encrypted batches), but it requires a new paradigm and complex tech which is still being proven. Skip can enforce concrete fairness rules on existing chains (preventing front-runs or enforcing first-in-first-out within lanes) but is limited by what each community chooses to enforce. SUAVE and Flashbots v2 improve fairness in terms of access (open auctions rather than secret deals, protection from public mempool sniping), but they do not inherently prevent a determined MEV strategy from executing – they just make sure it pays the users or is done neutrally.
  • MEV Redistribution: SUAVE and Flashbots explicitly aim to give MEV “back” to users/validators: SUAVE via user bids/refunds, Flashbots via builder competitions and refunds. Skip can channel MEV to users (as configured, e.g. Astroport’s case) or to community funds. Anoma avoids explicit redistribution because the goal is to avoid extraction in the first place – ideally users just get fair prices, which is equivalent to not losing value to MEV.
  • Scope (Single vs Multi-domain): Flashbots v2 and Skip focus on their own domains (Ethereum and individual Cosmos chains respectively). SUAVE is inherently multi-domain – it sees cross-chain MEV as a major motivation. Anoma also eventually considers multi-chain intents, but in initial phases it might be within one fractal instance at a time, then bridging out via adapters. SUAVE’s cross-chain auction could unlock arbitrage and coordination that others can’t do as easily (except maybe an Interchain Scheduler with Skip’s help in Cosmos).
  • Complexity and Adoption: Flashbots v2 was relatively easy to adopt (a client sidecar) and quickly captured the majority of Ethereum blocks. Skip also leverages existing tech and is seeing adoption in Cosmos with straightforward governance proposals. SUAVE and Anoma are more revolutionary – they require new networks or major changes. SUAVE’s challenge will be getting many chains and users to opt in to a new layer; Anoma’s challenge is creating a new ecosystem and convincing developers to build in an intent-centric model.
  • Compliance and Neutrality: All four offer improvements in transparency. Flashbots v2/SUAVE remove dark forest elements but have had to manage censorship issues – SUAVE is explicitly being built to avoid those central points. Anoma, with privacy by default, maximally protects users (but might concern regulators due to encrypted activity). Skip’s model gives each chain autonomy to strike a compliance trade-off. If a regulator demanded “no MEV auctions” or “no privacy”, Ethereum using Flashbots might face conflict, whereas a Cosmos chain using Skip could simply not implement those features or adjust them. In terms of neutrality: SUAVE and Anoma aim for credible neutrality (everyone accesses one system on equal terms; both are essentially public goods networks). Flashbots v2 is neutral in offering open access, but some centralization exists in the builder market (though mitigated by BuilderNet efforts). Skip’s neutrality depends on governance – ideally it makes MEV not favor any single insider, but one could configure it poorly and harm neutrality (though this is unlikely, as it would require governance consensus to do so).
  • Technical Architecture Differences: Flashbots v2 and SUAVE are off-chain marketplaces layered on chain: they introduce specialized roles (builders, relays, executors) and use hardware or cryptography to secure them. Anoma and Skip integrate directly into the consensus or state machine. Anoma alters the transaction lifecycle and consensus itself (with threshold encryption and unified intents). Skip hooks into Tendermint consensus via ABCI++ but doesn’t change the fundamental algorithm – it’s an application-layer tweak. This difference means SUAVE/Flashbots can theoretically serve many chains without each chain upgrading (they run in parallel to them), whereas Anoma/Skip require each chain or instance to use the new software. SUAVE sits somewhere in the middle: it’s a separate chain, but to use it effectively, other chains need minor adjustments (to accept SUAVE-built blocks or send output to SUAVE). The cryptographic sophistication is highest in Anoma (ZK, MPC, threshold crypto all in one), moderate in SUAVE (threshold crypto and SGX, plus normal crypto for bridging), and relatively low in Flashbots v2 (SGX, standard signatures) and Skip (mainly standard signatures, plus whatever the chain opts into, such as threshold decryption).
  • Development Stage: Flashbots v2 is in production on Ethereum (since Sep 2022). Skip is in production on multiple Cosmos chains (2022–2023 onward). SUAVE is in testnet/devnet phase with parts rolling out (some auction functionality in testing, testnet Toliman live). Anoma is also in testnet phase (a vision paper, partial implementations like Namada mainnet, and likely an Anoma testnet requiring invite codes in 2023). So in terms of real-world data: Flashbots v2 and Skip have demonstrated results (e.g., Flashbots v2 delivered millions to validators and lowered average gas prices during high MEV periods; Skip’s ProtoRev made significant funds for the Osmosis community and prevented many sandwich attacks as threshold encryption began). SUAVE and Anoma are promising but have to prove themselves operationally and economically.

To crystallize these comparisons, the side-by-side breakdown below summarizes key aspects of each protocol:

Flashbots v2 (Ethereum)

  • Transaction Ordering: Off-chain builder auctions decide block ordering (PBS via MEV-Boost). Public mempool transactions are bypassed for private bundles. Ordering is profit-driven (highest-paying bundle first).
  • MEV Mechanism (Suppress vs Extract): Extracts MEV via sealed-bid block auctions, but mitigates harmful side effects (no gas wars, no public frontrunning). Provides private tx submission (Flashbots Protect) to suppress user-visible MEV like direct frontrunning. Censorship resistance is improving through multi-relay and builder decentralization.
  • Economic Incentives (Alignment): Validators maximize revenue by outsourcing blocks (earning the highest bids). Searchers compete away profits to win inclusion (most MEV is paid to validators). Builders earn a margin, if any. Emerging refunds share MEV with users (via BuilderNet). Incentives favor open competition over exclusive deals.
  • Compliance & Neutrality: Initially faced OFAC censorship (central relay) but moved to multiple relays and open-source builders. Now pursuing credible neutrality: BuilderNet’s TEE network ensures no single builder can censor. Overall more transparent than the public mempool, but still reliant on off-chain entities (relays).
  • Architecture & Tech: Off-chain marketplace integrated with Ethereum PoS. Utilizes trusted hardware (SGX) for private orderflow in BuilderNet. No consensus change on L1; uses the standard builder API. Heavy on engineering (sidecar clients, relays) but light on new cryptography.
  • Development Status: Production on Ethereum mainnet (since Sep 2022). >90% of blocks via MEV-Boost. Continual upgrades: open-sourced builder, BuilderNet alpha live (late 2024). Proven stable, with ongoing decentralization efforts.

SUAVE (Flashbots’ next-gen)

  • Transaction Ordering: Unified cross-chain mempool of preferences (user intents + bids). Executors form optimal tx bundles from these. Decentralized sequencing – SUAVE outputs ordered block fragments to domains. Ordering is based on user bids and global welfare (not simple FIFO or gas). Privacy (encryption) prevents order manipulation until execution.
  • MEV Mechanism (Suppress vs Extract): Suppresses “bad MEV” by returning MEV to users: e.g. orderflow auctions pay users for being backrun. Aggregates “good MEV” (like cross-domain arbs) for maximal extraction, but redistributes it to users/validators. Uses an encrypted mempool and collaborative block building to prevent frontrunning and exclusive access.
  • Economic Incentives (Alignment): Users post preferences with payable bids; competing executors earn the bid by fulfilling the user’s goal. Validators of each chain get higher fees due to optimal blocks and cross-chain MEV capture. SUAVE’s own validators earn network fees. The design pushes MEV profit to users and validators, minimizing searcher rent. Flashbots aims to remain just a facilitator.
  • Compliance & Neutrality: Built for credible neutrality: a neutral public platform not controlled by any single actor. Privacy-first (transactions encrypted in SGX or via cryptography) means no entity can censor based on content. Hopes to avoid any Flashbots trust requirement via progressive decentralization. Compliance is not explicitly built in, but neutrality and global reach are prioritized (could face regulatory questions on privacy).
  • Architecture & Tech: Independent chain (EVM-compatible) for preferences and auctions. Uses Intel SGX enclaves extensively (for the private mempool and collaborative block building). Plans to introduce threshold encryption and MPC to eliminate trusted hardware. Essentially a blockchain plus secure compute layer on top of others.
  • Development Status: In development – Centauri testnet phase active (devnet, basic auctions). Open-sourced SUAVE client (Aug 2023); Toliman testnet launched for community testing. Mainnet not yet live (expected in phases: Andromeda, Helios). Ambitious roadmap, unproven at scale yet.

Anoma (Intent-centric protocol)

  • Transaction Ordering: No conventional mempool; users broadcast intents (desired outcomes). Solvers gather intents and produce matched transactions. Uses threshold encryption of transactions so validators order them without seeing content, preventing reactive MEV. Often employs batch processing (e.g. decrypt and match intents in batches every N blocks) for fair pricing. Consensus ensures order commitments before reveal, achieving order-fairness.
  • MEV Mechanism (Suppress vs Extract): Strong MEV mitigation by design: frontrunning is impossible (transactions are revealed only after ordering is final). Batch auctions eliminate priority advantages (e.g. all trades in a batch share a clearing price). Solvers compete to fill intents, which drives prices toward user-optimal outcomes, leaving little MEV. Essentially minimizes extractable value – any necessary arbitrage is done as part of matching, not by outsiders.
  • Economic Incentives (Alignment): Solvers earn fees or spreads for finding matches (akin to DEX aggregators), but competition forces them to deliver the best deals to users. Validators get fees and stake rewards; they also ensure fair execution (no extra MEV via consensus). Users gain via better execution (they only trade at fair prices, not losing value to MEV). Value that would be MEV is retained by users or the protocol (or shared minimally with solvers as a service fee). The architecture aligns incentives for honest participation (solvers and validators are rewarded for facilitating trades, not exploiting them).
  • Compliance & Neutrality: Privacy and fairness are core – intents can be partially or fully shielded (with ZK proofs), protecting user data. Censorship-resistance: validators can’t selectively censor what they can’t see (encrypted transactions) and must follow algorithmic matching rules. Highly neutral – all intents are treated by the same matching logic. Regulatory compliance isn’t baked in (strong privacy might be challenging for KYC), but the intent framework could allow compliant designs at the application layer.
  • Architecture & Tech: New blockchain architecture. Uses BFT consensus with an integrated intent gossip and solver layer. Relies on threshold cryptography (Ferveo) for mempool privacy and ZK SNARKs (Taiga) for data privacy. Execution is guided by validity predicates (application-specific logic enforcing fair outcomes). Interoperable via IBC (multi-chain intents possible in the future). Very advanced cryptographically (encryption, ZK, and MPC concepts combined).
  • Development Status: Testnets and partial launches. Anoma’s first testnet, Feigenbaum (Nov 2021), demonstrated basic intent matching. Many concepts are implemented in stages; e.g. Namada (2023) launched with Anoma’s privacy tech and Ferveo in a single-chain use case. The full Anoma L1 with intents is in testnet (invite-only tests mid-2023). Mainnet Phase 1 (planned) will target Ethereum integration; native token and full consensus later. Still under heavy R&D, not yet battle-tested.

Skip Protocol (Cosmos)

  • Transaction Ordering: In-protocol transaction ordering rules and block lanes configured by each chain’s governance. E.g. auctions determine top-of-block order, then default transactions, etc. Consensus-enforced: validators reject blocks that violate the ordering (like an invalid tx sequence). Allows custom policies (order by gas price, include oracle txs first, disallow certain patterns) – effectively deterministic ordering algorithms chosen by the chain.
  • MEV Mechanism (Suppress vs Extract): Hybrid approach – extracts MEV in controlled ways (via on-chain auctions and protocol-owned arbitrage) while suppressing malicious MEV (via rule enforcement). Frontrunning can be outlawed by chain rules. Backrunning/arbitrage can be internalized: e.g. the chain does its own arbitrage (ProtoRev) and shares the revenue. Blockspace auctions (Skip Select) let searchers bid for priority, so MEV is captured transparently and often redistributed. Overall, negative MEV (sandwiches, etc.) is curtailed, while “positive MEV” (arbs, liquidations) is harnessed for the chain’s benefit.
  • Economic Incentives (Alignment): Validators gain a new revenue stream from auction fees or protocol-captured MEV without breaking consensus rules. The risk of individual rogue MEV is reduced (follow the rules or the block is invalid), aligning validators collectively. The chain/community can direct MEV revenue (e.g. to stakers or a community fund). Searchers must compete via auctions, often giving up part of the profit to the chain/validators. Some MEV roles are subsumed by on-chain modules (so searchers have fewer easy wins). Users benefit from fewer attacks and can even receive MEV rebates (e.g. Astroport shares MEV with traders). Incentives become community-aligned – MEV is treated as public revenue, or not allowed at all if harmful, rather than private profit.
  • Compliance & Neutrality: Sovereign compliance: each chain chooses its policy. This means a chain could enforce strict anti-MEV rules, or include KYC requirements if needed, via module configuration. Skip’s transparency (on-chain bids) and governance control improve legitimacy. It inherently increases censorship-resistance within each chain’s chosen rules – e.g. if a rule says “always include oracle txs”, a censoring validator can’t omit them. But if a chain decided to censor (by rule), Skip could enforce that too. Generally, Skip promotes transparency and fairness as determined by the community. No single entity (like a relay) controls ordering – it’s in-protocol and open source.
  • Architecture & Tech: Cosmos SDK modules (Protocol-Owned Builder) added to node software. Uses ABCI++ hooks for custom block assembly and validation. Implements on-chain auctions (contracts or modules handle bidding and payouts). No specialized cryptography by default (beyond standard Cosmos tech), but compatible with threshold encryption – e.g. Osmosis added an encrypted mempool with Skip in mind. Essentially an extension of Tendermint BFT that adds MEV-aware logic in block proposal. Lightweight for chains to adopt (just module integration, no new consensus protocol).
  • Development Status: Live on multiple chains. Skip’s auction and builder module deployed on Osmosis (2023) – the ProtoRev module yielded protocol revenue, and top-of-block auctions are live. Used on Terra/Astroport, Juno, etc., and being considered by the Cosmos Hub. Code is open-source and evolving (v1 of POB for SDK 0.47+). Proven in production with real MEV captured and distributed. Continues to roll out features (e.g. new lane types) and onboard chains.

Each solution targets the MEV problem from a different layer – Flashbots v2 works around L1 consensus, SUAVE proposes a new L1.5 layer, Anoma redesigns the L1 itself, and Skip leverages modular L1 customization. In practice, these approaches are not mutually exclusive and could even complement each other (for instance, a Cosmos chain could use Skip internally and also send intents to SUAVE for cross-chain MEV, or Ethereum might implement some Anoma-like order-fairness in the future while still using Flashbots for builder markets). The comparison above illustrates their relative properties: Flashbots v2 is already delivering improvements on Ethereum but still extracts MEV (just more equitably and efficiently); SUAVE aims for a maximal synergy outcome where everyone cooperates through one network – its success will depend on broad adoption and technical delivery of promised privacy and decentralization; Anoma offers perhaps the most powerful MEV repression by changing how transactions work entirely, but it faces the steep challenge of bootstrapping a new ecosystem and proving its complex protocol; Skip strikes a pragmatic balance for Cosmos, letting communities actively govern MEV and fairness on their own terms – it’s less radical than Anoma but more embedded than Flashbots, and is already showing tangible results in Cosmos.

Conclusion and Outlook

MEV suppression and fair ordering remain among the “Millennium Prize Problems of crypto”. The four protocols analyzed – Flashbots v2, SUAVE, Anoma, and Skip – represent a spectrum of solutions: from immediate mitigations in existing frameworks to total paradigm shifts in transaction processing. Flashbots v2 has demonstrated the power of open MEV markets to reduce chaos and redistribute value, albeit while navigating trade-offs like censorship, which are being addressed via decentralization. It shows that incremental changes (like PBS auctions and private mempools) can significantly reduce the pain of MEV in the short term. SUAVE, Flashbots’ next step, carries that ethos forward into a unified cross-chain arena – if it succeeds, we might see a future where users routinely get paid for the MEV their trades create and where block production across many networks is collaborative and encrypted for fairness. Anoma points to a more fundamental evolution: by removing the concept of priority transactions and replacing it with an intent-matchmaking system, it could eliminate entire classes of MEV and unlock more expressive financial dApps. Its fair ordering at the consensus layer (via threshold encryption and batch auctions) is a glimpse of how blockchains themselves can provide fairness guarantees, not just off-chain add-ons. Skip Protocol, meanwhile, exemplifies a middle ground in a multi-chain context – it gives individual chains the agency to decide how to balance MEV revenue and user protection. Its early adoption in Cosmos shows that many of MEV’s ill effects can be tackled today with thoughtful protocol engineering and community consent.

Going forward, we can expect cross-pollination of ideas: Ethereum researchers are studying order-fair consensus and threshold encryption (inspired by projects like Anoma and Osmosis’s encrypted mempool) for potential inclusion in L1 or L2 solutions. Flashbots’ SUAVE might interface with Cosmos chains (perhaps even via Skip) as it seeks to be chain-agnostic. Anoma’s intent concept could influence application design even on traditional platforms (e.g., CoW Swap on Ethereum already uses a solver model; one can view it as an “Anoma-like” dApp). Skip’s success may encourage other ecosystems (Polkadot, Solana, etc.) to adopt similar in-protocol MEV controls. A key theme is economic alignment – all these protocols strive to align the incentives of those securing the network with the welfare of users, so that exploiting users is not profitable or not possible. This is crucial for the long-term health of blockchain ecosystems and to avoid centralization.

In summary, SUAVE, Anoma, Skip, and Flashbots v2 each contribute pieces of the puzzle toward fair ordering and MEV mitigation. Flashbots v2 has set a template for MEV auctions that others emulate, Skip has proven that on-chain enforcement is viable, Anoma has expanded the imagination of what’s possible by rebuilding the transaction model, and SUAVE seeks to unify and decentralize the gains of the past years. The ultimate solution may involve elements of all: privacy-preserving global auctions, intent-centric user interfaces, chain-level fairness rules, and collaborative block building. As of 2025, the fight against MEV-induced unfairness is well underway – these protocols are turning MEV from a dark inevitability into a managed, even productive, part of the crypto economy, while inching closer to the ideal of “the best execution for users with the most decentralized infrastructure”.

Web3 DevEx Toolchain Innovation

· 4 min read
Dora Noda
Software Engineer

Here's a consolidated summary of the report on Web3 Developer Experience (DevEx) innovations.

Executive Summary

The Web3 developer experience has significantly advanced in 2024-2025, driven by innovations in programming languages, toolchains, and deployment infrastructure. Developers are reporting higher productivity and satisfaction due to faster tools, safer languages, and streamlined workflows. This summary consolidates findings on five key toolchains (Solidity, Move, Sway, Foundry, and Cairo 1.0) and two major trends: “one-click” rollup deployment and smart contract hot-reloading.


Comparison of Web3 Developer Toolchains

Each toolchain offers distinct advantages, catering to different ecosystems and development philosophies.

  • Solidity (EVM): Remains the most dominant language due to its massive ecosystem, extensive libraries (e.g., OpenZeppelin), and mature frameworks like Hardhat and Foundry. While it lacks native features like macros, its widespread adoption and strong community support make it the default choice for Ethereum and most EVM-compatible L2s.
  • Move (Aptos/Sui): Prioritizes safety and formal verification. Its resource-based model and the Move Prover tool help prevent common bugs like reentrancy by design. This makes it ideal for high-security financial applications, though its ecosystem is smaller and centered on the Aptos and Sui blockchains.
  • Sway (FuelVM): Designed for maximum developer productivity by allowing developers to write contracts, scripts, and tests in a single Rust-like language. It leverages the high-throughput, UTXO-based architecture of the Fuel Virtual Machine, making it a powerful choice for performance-intensive applications on the Fuel network.
  • Foundry (EVM Toolkit): A transformative toolkit for Solidity that has revolutionized EVM development. It offers extremely fast compilation and testing, allowing developers to write tests directly in Solidity. Features like fuzz testing, mainnet forking, and "cheatcodes" have made it the primary choice for over half of Ethereum developers.
  • Cairo 1.0 (Starknet): Represents a major DevEx improvement for the Starknet ecosystem. The transition to a high-level, Rust-inspired syntax and modern tooling (like the Scarb package manager and Starknet Foundry) has made developing for ZK-rollups significantly faster and more intuitive. While some tools like debuggers are still maturing, developer satisfaction has soared.

Key DevEx Innovations

Two major trends are changing how developers build and deploy decentralized applications.

"One-Click" Rollup Deployment

Launching a custom blockchain (L2/appchain) has become radically simpler.

  • Foundation: Frameworks like Optimism’s OP Stack provide a modular, open-source blueprint for building rollups.
  • Platforms: Services like Caldera and Conduit have created Rollup-as-a-Service (RaaS) platforms. They offer web dashboards that allow developers to deploy a customized mainnet or testnet rollup in minutes, with minimal blockchain engineering expertise.
  • Impact: This enables rapid experimentation, lowers the barrier to creating app-specific chains, and simplifies DevOps, allowing teams to focus on their application instead of infrastructure.

Hot-Reloading for Smart Contracts

This innovation brings the instant feedback loop of modern web development to the blockchain space.

  • Concept: Tools like Scaffold-ETH 2 automate the development cycle. When a developer saves a change to a smart contract, the tool automatically recompiles, redeploys to a local network, and updates the front-end to reflect the new logic.
  • Impact: Hot-reloading eliminates repetitive manual steps and dramatically shortens the iteration loop. This makes the development process more engaging, lowers the learning curve for new developers, and encourages frequent testing, leading to higher-quality code.

Conclusion

The Web3 development landscape is maturing at a rapid pace. The convergence of safer languages, faster tooling like Foundry, and simplified infrastructure deployment via RaaS platforms is closing the gap between blockchain and traditional software development. These DevEx improvements are as critical as protocol-level innovations, as they empower developers to build more complex and secure applications faster. This, in turn, fuels the growth and adoption of the entire blockchain ecosystem.

Sources:

  • Solidity Developer Survey 2024 – Soliditylang (2025)
  • Moncayo Labs on Aptos Move vs Solidity (2024)
  • Aptos Move Prover intro – Monethic (2025)
  • Fuel Labs – Fuel & Sway Documentation (2024); Fuel Book (2024)
  • Spearmanrigoberto – Foundry vs Hardhat (2023)
  • Medium (Rosario Borgesi) – Building Dapps with Scaffold-ETH 2 (2024)
  • Starknet/Cairo developer survey – Cairo-lang.org (2024)
  • Starknet Dev Updates – Starknet.io (2024–2025)
  • Solidity forum – Macro preprocessor discussion (2023)
  • Optimism OP Stack overview – CoinDesk (2025)
  • Caldera rollup platform overview – Medium (2024)
  • Conduit platform recap – Conduit Blog (2025)
  • Blockchain DevEx literature review – arXiv (2025)

Chain Abstraction and Intent‑Centric Architecture in Cross-Chain UX

· 44 min read
Dora Noda
Software Engineer

Introduction

The rapid growth of Layer-1 and Layer-2 blockchains has fragmented the Web3 user experience. Users today juggle multiple wallets, networks, and token bridges just to accomplish complex tasks that span chains. Chain abstraction and intent-centric architecture have emerged as key paradigms to simplify this landscape. By abstracting away chain-specific details and allowing users to act on intents (desired outcomes) rather than crafting explicit per-chain transactions, these approaches promise a unified, seamless cross-chain experience. This report delves into the core principles of chain abstraction, the design of intent-focused execution models, real-world implementations (such as Wormhole and Etherspot), technical underpinnings (relayers, smart wallets, etc.), and the UX benefits for developers and end-users. We also summarize insights from EthCC 2025 – where chain abstraction and intents were hot topics – and provide a comparative table of different protocol approaches.

Principles of Chain Abstraction

Chain abstraction refers to any technology or framework that presents multiple blockchains to users and developers as if they were a single unified environment. The motivation is to eliminate the friction caused by chain heterogeneity. In practice, chain abstraction means:

  • Unified Interfaces: Instead of managing separate wallets and RPC endpoints for each blockchain, users interact through one interface that hides network details. Developers can build dApps without deploying separate contracts on every chain or writing custom bridge logic for each network.
  • No Manual Bridging: Moving assets or data between chains happens behind the scenes. Users do not manually execute lock/mint bridge transactions or swap for bridge tokens; the abstraction layer handles it automatically. For example, a user could provide liquidity on a protocol regardless of which chain the liquidity resides on, and the system will route funds appropriately.
  • Gas Fee Abstraction: Users no longer need to hold each chain’s native token to pay for gas on that chain. The abstraction layer can sponsor gas fees or allow gas to be paid in an asset of the user’s choice. This lowers the barrier for entry since one does not have to acquire ETH, MATIC, SOL, etc. separately.
  • Network Agnostic Logic: The application logic becomes chain-agnostic. Smart contracts or off-chain services coordinate to execute user actions on whatever chain(s) necessary, without requiring the user to manually switch networks or sign multiple transactions. In essence, the user’s experience is of one “meta-chain” or a blockchain-agnostic application layer.

The core idea is to let users focus on what they want to achieve, not which chain or how to achieve it. A familiar analogy is web applications abstracting away server location – just as a user doesn’t need to know which server or database their request touches, a Web3 user shouldn’t need to know which chain or bridge is used for an action. By routing transactions through a unified layer, chain abstraction reduces the fragmentation of today’s multi-chain ecosystem.

Motivation: The push for chain abstraction stems from pain points in current cross-chain workflows. Managing separate wallets per chain and performing multi-step cross-chain operations (swap on Chain A, bridge to Chain B, swap again on Chain B, etc.) is tedious and error-prone. Fragmented liquidity and incompatible wallets also limit dApp growth across ecosystems. Chain abstraction tackles these by cohesively bridging ecosystems. Importantly, it treats Ethereum and its many L2s and sidechains as part of one user experience. EthCC 2025 emphasized that this is critical for mainstream adoption – speakers argued that a truly user-centric Web3 future “must abstract away blockchains”, making the multi-chain world feel as easy as a single network.

Intent-Centric Architecture: From Transactions to Intents

Traditional blockchain interactions are transaction-centric: a user explicitly crafts and signs a transaction that executes specific operations (calls a contract function, transfers a token, etc.) on a chosen chain. In a multi-chain context, accomplishing a complex goal might require many such transactions across different networks, each manually initiated by the user in the correct sequence. Intent-centric architecture flips this model. Instead of micromanaging transactions, the user declares an intent – a high-level goal or desired outcome – and lets an automated system figure out the transactions needed to fulfill it.

Under an intent-based design, a user might say: “Swap 100 USDC on Base for 100 USDT on Arbitrum”. This intent encapsulates the what (swap one asset for another on a target chain) without prescribing the how. A specialized agent (often called a solver) then takes on the job of completing it. The solver will determine how to best execute the swap across chains – for example, it might bridge the USDC from Base to Arbitrum using a fast bridge and then perform a swap to USDT, or use a direct cross-chain swap protocol – whatever yields the best result. The user signs one authorization, and the solver handles the complex sequence on the backend, including finding the optimal route, submitting the necessary transactions on each chain, and even fronting any required gas fees or taking on interim risk.
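
As a concrete illustration of this flow, the sketch below models a cross-chain swap intent and the selection of a winning solver quote in Go. The Intent and Quote types and their fields are hypothetical (loosely in the spirit of ERC-7683-style orders, but not that standard’s exact schema); the key idea is that the user only constrains the outcome (a minimum amount of USDT on Arbitrum), while solvers compete on how to deliver it.

```go
package main

import "fmt"

// A hypothetical cross-chain intent: the user specifies the outcome, not the
// route. Field names are illustrative, not a real standard's schema.
type Intent struct {
	User         string
	SourceChain  string
	DestChain    string
	SellToken    string
	SellAmount   uint64
	BuyToken     string
	MinBuyAmount uint64 // the user's only constraint on "how"
}

// A solver's quote: how much of the buy token it commits to deliver on the
// destination chain if it wins the right to fulfill the intent.
type Quote struct {
	Solver    string
	BuyAmount uint64
}

// selectBestQuote picks the quote that maximizes the user's outcome,
// rejecting anything below the user's floor.
func selectBestQuote(intent Intent, quotes []Quote) (Quote, bool) {
	best, found := Quote{}, false
	for _, q := range quotes {
		if q.BuyAmount >= intent.MinBuyAmount && q.BuyAmount > best.BuyAmount {
			best, found = q, true
		}
	}
	return best, found
}

func main() {
	intent := Intent{
		User: "0xUser...", SourceChain: "base", DestChain: "arbitrum",
		SellToken: "USDC", SellAmount: 100_000_000, // 100 USDC (6 decimals)
		BuyToken: "USDT", MinBuyAmount: 99_500_000, // tolerate at most ~0.5% slippage
	}
	quotes := []Quote{{Solver: "solverA", BuyAmount: 99_700_000}, {Solver: "solverB", BuyAmount: 99_900_000}}

	if best, ok := selectBestQuote(intent, quotes); ok {
		fmt.Printf("%s wins and must deliver %d %s on %s\n",
			best.Solver, best.BuyAmount, intent.BuyToken, intent.DestChain)
	}
}
```

The user signs one authorization for the intent; whichever solver is selected then takes responsibility for the bridging, swapping, and gas payments needed to meet the promised outcome.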

How Intents Empower Flexible Execution: By giving the system freedom to decide how to fulfill a request, intent-centric design enables much smarter and more flexible execution layers than fixed user transactions. Some advantages:

  • Optimal Routing: Solvers can optimize for cost, speed, or reliability. For instance, multiple solvers might compete to fulfill a user’s intent, and an on-chain auction can select the one offering the best price (e.g. best exchange rate or lowest fees). This competition drives down costs for the user. Wormhole’s Mayan Swift protocol is an example that embeds an on-chain English auction on Solana for each intent, shifting competition from a “first-come” race to price-based bidding for better user outcomes. The solver that can execute the swap most profitably for the user wins the bid and carries out the plan, ensuring the user gets the most value. This kind of dynamic price discovery is not possible when a user pre-specifies a single path in a regular transaction.
  • Resilience and Flexibility: If one bridge or DEX is unavailable or suboptimal at the moment, a solver can choose an alternative path. The intent remains the same, but the execution layer can adapt to network conditions. Intents thus allow programmable execution strategies – e.g. splitting an order or retrying via another route – all invisible to the end-user who only cares that their goal is achieved.
  • Atomic Multi-Chain Actions: Intents can encompass what would traditionally be multiple transactions on different chains. Execution frameworks strive to make the entire sequence feel atomic or at least failure-managed. For example, the solver might only consider the intent fulfilled when all sub-transactions (bridge, swap, etc.) are confirmed, and roll back or compensate if anything fails. This ensures the user’s high-level action is either completed in full or not at all, improving reliability.
  • Offloading Complexity: Intents dramatically simplify the user’s role. The user doesn’t need to understand which bridges or exchanges to use, how to split liquidity, or how to schedule operations – all that is offloaded to the infrastructure. As one report puts it, “users focus on the what, not the how.” A direct benefit is user experience: interacting with blockchain applications becomes more like using a Web2 app (where a user simply requests a result, and the service handles the process).

In essence, an intent-centric architecture elevates the level of abstraction from low-level transactions to high-level objectives. Ethereum’s community is so keen on this model that the Ethereum Foundation has introduced the Open Intents Framework (OIF), an open standard and reference architecture for building cross-chain intent systems. The OIF defines standard interfaces (like the ERC-7683 intent format) for how intents are created, communicated, and settled across chains, so that many different solutions (bridges, relayers, auction mechanisms) can plug in modularly. This encourages a whole ecosystem of solvers and settlement protocols that can interoperate. The rise of intents is grounded in the need to make Ethereum and its rollups feel “like a single chain” from a UX perspective – fast and frictionless enough that moving across L2s or sidechains happens in seconds without user headache. Early standards like ERC-7683 (for standardized intent format and lifecycle) have even garnered support from leaders like Vitalik Buterin, underscoring the momentum behind intent-centric designs.
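
As a rough illustration of what "standardizing the intent format" means in practice, the sketch below shows an order envelope loosely inspired by the ERC-7683 idea. The field names here are illustrative assumptions; consult the ERC itself for the canonical struct definitions:

```ts
// Loosely inspired by ERC-7683; field names are illustrative, not the canonical struct.
interface CrossChainOrder {
  originSettler: string;     // settlement contract on the origin chain
  user: string;              // the account whose intent this order represents
  nonce: bigint;             // replay protection
  originChainId: number;     // chain where the order is opened
  openDeadline: number;      // latest time at which the order may be opened
  fillDeadline: number;      // latest time at which a solver may fill it on the destination
  orderData: `0x${string}`;  // opaque, application-specific payload (e.g. swap parameters)
}
```

Because the envelope is generic and the payload is opaque, different settlement systems (bridges, relayers, auctions) can fulfill the same order format interchangeably.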

Key Benefits Recap: To summarize, intent-centric architectures bring several key benefits: (1) Simplified UX – users state what they want and the system figures out the rest; (2) Cross-Chain Fluidity – operations that span multiple networks are handled seamlessly, effectively treating many chains as one; (3) Developer Scalability – dApp developers can reach users and liquidity across many chains without reinventing the wheel for each one, because the intent layer provides standardized hooks into cross-chain execution. By decoupling what needs to be done from how/where it gets done, intents act as the bridge between user-friendly innovation and the complex interoperability behind the scenes.

Technical Building Blocks of Cross-Chain Abstraction

Implementing chain abstraction and intent-based execution requires a stack of technical mechanisms working in concert. Key components include:

  • Cross-Chain Messaging Relayers: At the core of any multi-chain system is a messaging layer that can reliably carry data and value between blockchains. Protocols like Wormhole, Hyperlane, Axelar, LayerZero, and others provide this capability by relaying messages (often with proofs or validator attestations) from a source chain to one or more destination chains. These messages might carry commands like “execute this intent” or “mint this asset” on the target chain. A robust relayer network is crucial for unified transaction routing – it serves as the “postal service” between chains. For example, Wormhole’s network of 19 Guardian nodes observes events on connected chains and signs a VAA (verifiable action approval) that can be submitted to any other chain to prove an event happened. This decouples the action from any single chain, enabling chain-agnostic behavior. Modern relayers focus on being chain-agnostic (supporting many chain types) and decentralized for security. Wormhole, for instance, extends beyond EVM-based chains to support Solana, Cosmos chains, etc., making it a versatile choice for cross-chain communication. The messaging layer often also handles ordering, retries, and finality guarantees for cross-chain transactions.

  • Smart Contract Wallets (Account Abstraction): Account abstraction (e.g. Ethereum’s ERC-4337) replaces externally owned accounts with smart contract accounts that can be programmed with custom validation logic and multi-step transaction capabilities. This is a foundation for chain abstraction because a smart wallet can serve as the user’s single meta-account controlling assets on all chains. Projects like Etherspot use smart contract wallets to enable features like transaction batching and session keys across chains. A user’s intent might be packaged as a single user operation (in 4337 terms) which the wallet contract then expands into multiple sub-transactions on different networks. Smart wallets can also integrate paymasters (sponsors) to pay gas fees on the user’s behalf, enabling true gas abstraction (the user might pay in a stablecoin or not at all). Security mechanisms like session keys (temporary keys with limited permissions) allow users to approve intents that involve multiple actions without multiple prompts, while limiting risk. In short, account abstraction provides the programmable execution container that can interpret a high-level intent and orchestrate the necessary steps as a series of transactions (often via the relayers).

  • Intent Orchestration and Solvers: Above the messaging and wallet layer lives the intent solver network – the brains that figure out how to fulfill intents. In some architectures, this logic is on-chain (e.g. an on-chain auction contract that matches intent orders with solvers, as in Wormhole’s Solana auction for Mayan Swift). In others, it’s off-chain agents monitoring an intent mempool or order book (for example, the Open Intents Framework provides a reference TypeScript solver that listens for new intent events and then submits transactions to fulfill them). Solvers typically must handle: finding liquidity routes (across DEXes, bridges), price discovery (ensuring the user gets a fair rate), and sometimes covering interim costs (like posting collateral or taking on finality risk – delivering funds to the user before the cross-chain transfer is fully finalized, thereby speeding up UX at some risk to the solver). A well-designed intent-centric system often involves competition among solvers to ensure the user’s intent is executed optimally. Solvers may be economically incentivized (e.g. they earn a fee or arbitrage profit for fulfilling the intent). Mechanisms like solvers’ auctions or batching can be used to maximize efficiency. For example, if multiple users have similar intents, a solver might batch them to minimize bridge fees per user.

  • Unified Liquidity and Token Abstraction: Moving assets across chains introduces the classic problem of fragmented liquidity and wrapped tokens. Chain abstraction layers often abstract tokens themselves – aiming to give the user the experience of a single asset that can be used on many chains. One approach is omnichain tokens (where a token can exist natively on multiple chains under one total supply, instead of many incompatible wrapped versions). Wormhole introduced Native Token Transfers (NTT) as an evolution of traditional lock-and-mint bridges: instead of infinite “bridged” IOU tokens, the NTT framework treats tokens deployed across chains as one asset with shared mint/burn controls. In practice, bridging an asset under NTT means burning on the source and minting on the destination, maintaining a single circulating supply. This kind of liquidity unification is crucial so that chain abstraction can “teleport” assets without confusing the user with multiple token representations. Other projects use liquidity networks or pools (e.g. Connext or Axelar) where liquidity providers supply capital on each chain to swap assets in and out, so users can effectively trade one asset for its equivalent on another chain in one step. The Securitize SCOPE fund example is illustrative: an institutional fund token was made multichain such that investors can subscribe or redeem on Ethereum or Optimism, and behind the scenes Wormhole’s protocol moves the token and even converts it into yield-bearing forms, removing the need for manual bridges or multiple wallets for the users.

  • Programmable Execution Layers: Finally, certain on-chain innovations empower more complex cross-chain workflows. Atomic multi-call support and transaction scheduling help coordinate multi-step intents. For instance, the Sui blockchain’s Programmable Transaction Blocks (PTBs) allow bundling multiple actions (like swaps, transfers, calls) into one atomic transaction. This can simplify cross-chain intent fulfillment on Sui by ensuring all steps either happen or none do, with one user signature. In Ethereum, proposals like EIP-7702 (smart contract code for EOAs) extend capabilities of user accounts to support things like sponsored gas and multi-step logic even at the base layer. Moreover, specialized execution environments or cross-chain routers can be employed – e.g. some systems route all intents through a particular L2 or hub which coordinates the cross-chain actions (the user might just interact with that hub). Examples include projects like Push Protocol’s L1 (Push Chain) which is being designed as a dedicated settlement layer for chain-agnostic operations, featuring universal smart contracts and sub-second finality to expedite cross-chain interactions. While not universally adopted, these approaches illustrate the spectrum of techniques used to realize chain abstraction: from purely off-chain orchestration to deploying new on-chain infrastructure purpose-built for cross-chain intent execution.
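
Since this blog focuses on Sui, here is a minimal sketch of a Programmable Transaction Block using the @mysten/sui.js TypeScript SDK (shown with the pre-1.0 API; newer releases rename some imports). The package ID, object IDs, and Move target below are placeholders, not real deployments:

```ts
import { TransactionBlock } from '@mysten/sui.js/transactions';

const tx = new TransactionBlock();

// Split 0.1 SUI (100,000,000 MIST) off the gas coin to use as a payment.
const [payment] = tx.splitCoins(tx.gas, [tx.pure(100_000_000)]);

// Call a placeholder Move entry function, spending the split coin.
tx.moveCall({
  target: '0xPACKAGE::marketplace::buy_item',
  arguments: [payment, tx.pure('0xITEM_ID')],
});

// Transfer an owned object to another address in the same block (placeholder IDs).
tx.transferObjects([tx.object('0xOBJECT_ID')], tx.pure('0xRECIPIENT'));

// Everything above executes atomically under one user signature:
// either every step succeeds or the whole block fails.
```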

In summary, chain abstraction is achieved by layering these components: a routing layer (relayers messaging across chains), an account layer (smart wallets that can initiate actions on any chain), and an execution layer (solvers, liquidity and contracts that carry out the intents). Each piece is necessary to ensure that from a user’s perspective, interacting with a dApp across multiple blockchains is as smooth as using a single-chain application.
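
For the account layer specifically, a useful mental model is the ERC-4337 user operation: the signed object a smart wallet hands to a bundler. A v0.6-style shape looks roughly like this (values and encoding details are handled by wallet SDKs and bundlers, so treat this as a sketch rather than a wire format):

```ts
// Sketch of an ERC-4337 (v0.6-style) user operation.
interface UserOperation {
  sender: string;                  // the smart contract wallet address
  nonce: bigint;
  initCode: `0x${string}`;         // wallet deployment code if the account doesn't exist yet
  callData: `0x${string}`;         // encoded call(s) the wallet should execute, e.g. a batch
  callGasLimit: bigint;
  verificationGasLimit: bigint;
  preVerificationGas: bigint;
  maxFeePerGas: bigint;
  maxPriorityFeePerGas: bigint;
  paymasterAndData: `0x${string}`; // non-empty when a paymaster sponsors the gas
  signature: `0x${string}`;        // could be produced by a scoped session key
}
```

A single user operation can batch the sub-steps of an intent and have its gas covered by a paymaster, which is exactly the combination chain-abstraction wallets rely on.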

Case Study 1: Wormhole – Intent-Based, Chain-Agnostic Routing

Wormhole is a leading cross-chain interoperability protocol that has evolved from a token bridge into a comprehensive message-passing network with intent-based functionality. Its approach to chain abstraction is to provide a uniform message routing layer connecting 20+ chains (including EVM chains and non-EVM chains like Solana), and on top of that, build chain-agnostic application protocols. Key elements of Wormhole’s architecture include:

  • Generic Message Layer: At its core, Wormhole is a generic publish/subscribe bridge. Validators (Guardians) observe events on each connected chain and sign a VAA (verifiable action approval) that can be submitted on any other chain to reproduce the event or call a target contract. This generic design means developers can send arbitrary instructions or data cross-chain, not just token transfers. Wormhole ensures messages are delivered and verified consistently, abstracting away whether the source was Ethereum, Solana, or another chain.

  • Chain-Agnostic Token Transfers: Wormhole’s original Token Bridge (Portal) used a lock-and-mint approach. Recently, Wormhole introduced Native Token Transfers (NTT), an improved framework for multichain tokens. With NTT, assets can be issued natively on each chain (avoiding fragmented wrapped tokens), while Wormhole handles the accounting of burns and mints across chains to keep supply in sync. For users, this feels like a token “teleports” across chains – they deposit on one chain and withdraw the same asset on another, with Wormhole managing the mint/burn bookkeeping. This is a form of token abstraction that hides the complexity of different token standards and addresses on each chain.

  • Intent-Based xApp Protocols: Recognizing that bridging tokens is only one piece of cross-chain UX, Wormhole has developed higher-level protocols to fulfill user intents like swaps or transfers with gas fee management. In 2023–2024, Wormhole collaborated with the cross-chain DEX aggregator Mayan to launch two intent-focused protocols, often called xApps (cross-chain apps) in the Wormhole ecosystem: Mayan Swift and Mayan MCTP (Multichain Transfer Protocol).

    • Mayan Swift is described as a “flexible cross-chain intent protocol” that essentially lets a user request a token swap from Chain A to Chain B. The user signs a single transaction on the source chain locking their funds and specifying their desired outcome (e.g. “I want at least X amount of token Y on the destination chain by time T”). This intent (the order) is then picked up by solvers. Uniquely, Mayan Swift uses an on-chain auction on Solana to conduct competitive price discovery for the intent. Solvers monitor a special Solana contract; when a new intent order is created, they bid by committing how much of the output token they can deliver. Over a short auction period (e.g. 3 seconds), bids push the offered price up. The highest bidder (who offers the most favorable rate to the user) wins and is granted the right to fulfill the swap. Wormhole then carries a message to the destination chain authorizing that solver to deliver the tokens to the user, and another message back to release the user’s locked funds to the solver as payment. This design ensures the user’s intent is fulfilled at the best possible price in a decentralized way, while the user only had to interact with their source chain. It also decouples the cross-chain swap into two steps (lock funds, then fulfill on the destination) to minimize risk. The intent-centric design here shows how abstraction enables smart execution: rather than a user picking a particular bridge or DEX, the system finds the optimal path and price automatically.

    • Mayan MCTP focuses on cross-chain asset transfers with gas and fee management. It leverages Circle’s CCTP (Cross-Chain Transfer Protocol) – which allows native USDC to be burned on one chain and minted on another – as the base for value transfer, and uses Wormhole messaging for coordination. In an MCTP transfer, a user’s intent might be simply “move my USDC from Chain A to Chain B (and optionally swap to another token on B)”. The source-chain contract accepts the tokens and a desired destination, then initiates a burn via CCTP and simultaneously publishes a Wormhole message carrying metadata like the user’s destination address, desired token on destination, and even a gas drop (an amount of the bridged funds to convert to native gas on the destination). On the destination chain, once Circle mints the USDC, a Wormhole relayer ensures the intent metadata is delivered and verified. The protocol can then automatically e.g. swap a portion of USDC to the native token to pay for gas, and deliver the rest to the user’s wallet (or to a specified contract). This provides a one-step, gas-included bridge: the user doesn’t have to go acquire gas on the new chain or perform a separate swap for gas. It’s all encoded in the intent and handled by the network. MCTP thus demonstrates how chain abstraction can handle fee abstraction and reliable transfers in one flow. Wormhole’s role is to securely transmit the intent and proof that funds were moved (via CCTP) so that the user’s request is fulfilled end-to-end.

Illustration of Wormhole’s intent-centric swap architecture (Mayan Swift). In this design, the user locks assets on the source chain and defines an outcome (intent). Solvers bid in an on-chain auction for the right to fulfill that intent. The winning solver uses Wormhole messages to coordinate unlocking funds and delivering the outcome on the destination chain, all while ensuring the user receives the best price for their swap.

  • Unified UX and One-Click Flows: Wormhole-based applications are increasingly offering one-click cross-chain actions. For example, Wormhole Connect is a frontend SDK that dApps and wallets integrate to let users bridge assets with a single click – behind the scenes it calls Wormhole token bridging and (optionally) relayers that deposit gas on the target chain. In the Securitize SCOPE fund use-case, an investor on Optimism can purchase fund tokens that originally live on Ethereum, without manually bridging anything; Wormhole’s liquidity layer automatically moves the tokens across and even converts them into a yield-bearing form, so the user just sees a unified investment product. Such examples highlight the chain abstraction ethos: the user performs a high-level action (invest in fund, swap X for Y) and the platform handles cross-chain mechanics silently. Wormhole’s standard message relaying and automatic gas delivery (via services like Wormhole’s Automatic Relayer or Axelar’s Gas Service integrated in some flows) mean the user often signs just one transaction on their origin chain and receives the result on the destination chain with no further intervention. From the developer perspective, Wormhole provides a uniform interface to call contracts across chains, so building cross-chain logic is simpler.

In summary, Wormhole’s approach to chain abstraction is to provide the infrastructure (decentralized relayers + standardized contracts on each chain) that others can build upon to create chain-agnostic experiences. By supporting a wide variety of chains and offering higher-level protocols (like the intent auction and gas-managed transfer), Wormhole enables applications to treat the blockchain ecosystem as a connected whole. Users benefit by no longer needing to worry about what chain they’re on or how to bridge – whether it’s moving liquidity or doing a multi-chain swap, Wormhole’s intent-centric xApps aim to make it as easy as a single-chain interaction. Wormhole’s co-founder Robinson Burkey noted that this kind of infrastructure has reached “institutional-scale maturity”, allowing even regulated asset issuers to operate seamlessly across networks and abstract away chain-specific constraints for their users.

Case Study 2: Etherspot – Account Abstraction Meets Intents

Etherspot approaches the cross-chain UX problem from the perspective of wallets and developer tooling. It provides an Account Abstraction SDK and an intent protocol stack that developers can integrate to give their users a unified multi-chain experience. In effect, Etherspot combines smart contract wallets with chain abstraction logic so that a user’s single smart account can operate across many networks with minimal friction. Key features of Etherspot’s architecture include:

  • Modular Smart Wallet (Account Abstraction): Every user of Etherspot gets a smart contract wallet (ERC-4337 style) that can be deployed on multiple chains. Etherspot contributed to standards like ERC-7579 (minimal modular smart accounts interface) to ensure these wallets are interoperable and upgradeable. The wallet contract acts as the user’s agent and can be customized with modules. For example, one module might enable a unified balance view – the wallet can report the aggregate of a user’s funds across all chains. Another module might enable session keys, so the user can approve a series of actions with one signature. Because the wallet is present on each chain, it can directly initiate transactions locally when needed (with Etherspot’s backend bundlers and relayers orchestrating the cross-chain coordination).

  • Transaction Bundler and Paymasters: Etherspot runs a bundler service (called Skandha) that collects user operations from the smart wallets, and a paymaster service (Arka) that can sponsor gas fees. When a user triggers an intent through Etherspot, they effectively sign a message to their wallet contract. The Etherspot infrastructure (the bundler) then translates that into actual transactions on the relevant chains. Crucially, it can bundle multiple actions – e.g. a DEX swap on one chain and a bridge transfer to another chain – into one meta-transaction that the user’s wallet contract will execute step by step. The paymaster means the user might not need to pay any L1 gas; instead, the dApp or a third party could cover it, or the fee could be taken in another token. This realizes gas abstraction in practice (a big usability win). In fact, Etherspot highlights that with upcoming Ethereum features like EIP-7702, even Externally Owned Accounts could gain gasless capabilities similar to contract wallets – but Etherspot’s smart accounts already allow gasless intents via paymasters today.

  • Intent API and Solvers (Pulse): On top of the account layer, Etherspot provides a high-level Intent API known as Etherspot Pulse. Pulse is Etherspot’s chain abstraction engine that developers can use to enable cross-chain intents in their dApps. In a demo of Etherspot Pulse in late 2024, they showed how a user could perform a token swap from Ethereum to an asset on Base, using a simple React app interface with one click. Under the hood, Pulse handled the multi-chain transaction securely and efficiently. The key features of Pulse include Unified Balances (the user sees all assets as one portfolio regardless of chain), Session Key Security (limited privileges for certain actions to avoid constant approvals), Intent-Based Swaps, and Solver Integration. In other words, the developer just calls an intent like swap(tokenA on Chain1 -> tokenB on Chain2 for user) through the Etherspot SDK, and Pulse figures out how to do it – whether by routing through a liquidity network like Socket or calling a cross-chain DEX. Etherspot has integrated with various bridges and DEX aggregators to find optimal routes (it is likely using some of the Open Intents Framework concepts as well, given Etherspot’s involvement in the Ethereum intents community).

  • Education and Standards: Etherspot has been a vocal proponent of chain abstraction standards. It has released educational content explaining intents and how “users declare their desired outcome, while solvers handle the backend process”, emphasizing simplified UX and cross-chain fluidity. They enumerate benefits like users not needing to worry about bridging or gas, and dApps gaining scalability by easily accessing multiple chains. Etherspot is also actively collaborating with ecosystem projects: for example, it references the Ethereum Foundation’s Open Intents Framework and explores integrating new cross-chain messaging standards (ERC-7786, 7787, etc.) as they emerge. By aligning with common standards, Etherspot ensures its intent format or wallet interface can work in tandem with other solutions (like Hyperlane, Connext, Axelar, etc.) chosen by the developer.

  • Use Cases and Developer UX: For developers, using Etherspot means they can add cross-chain features without reinventing the wheel. A DeFi dApp can let a user deposit funds on whatever chain they have assets on, and Etherspot will abstract the chain differences. A gaming app could let users sign one transaction to claim an NFT on an L2 and have it automatically bridged to Ethereum if needed for trading. Etherspot’s SDK essentially offers chain-agnostic function calls – developers call high-level methods (like a unified transfer() or swap()) and the SDK handles locating user funds, moving them if needed, and updating state across chains. This significantly reduces development time for multi-chain support (the team claims up to 90% reduction in development time when using their chain abstraction platform). Another aspect is RPC Playground and debugging tools Etherspot built for AA flows, which make it easier to test complex user operations that may involve multiple networks. All of this is geared towards making integration of chain abstraction as straightforward as integrating a payments API in Web2.

From the end-user perspective, an Etherspot-powered application can offer a much smoother onboarding and daily experience. New users can sign in with social login or email (if the dApp uses Etherspot’s social account module) and get a smart account automatically – no need to manage seed phrases for each chain. They can receive tokens from any chain to their one address (the smart wallet’s address is the same on all supported chains) and see them in one list. If they want to perform an action (swap, lend, etc.) on a chain where they don’t have the asset or gas, the intent protocol will automatically route their funds and actions to make it happen. For example, a user holding USDC on Polygon who wants to participate in an Ethereum DeFi pool could simply click “Invest in Pool” – the app (via Etherspot) will swap the USDC to the required asset, bridge it to Ethereum, deposit into the pool contract, and even handle gas fees by taking a tiny portion of the USDC, all in one flow. The user is never confronted with “please switch to X network” or “you need ETH for gas” errors – those are handled behind the scenes. This one-click experience is exactly what chain abstraction strives for.

Etherspot’s CEO, Michael Messele, spoke at EthCC 2025 about “advanced chain abstraction” and highlighted that making Web3 truly blockchain-agnostic can empower both users and developers by enhancing interoperability, scalability, and UX. Etherspot’s own contributions, like the Pulse demo of single-intent cross-chain swaps, show that the technology is already here to drastically simplify cross-chain interactions. As Etherspot positions it, intents are the bridge between the innovative possibilities of a multi-chain ecosystem and the usability that end-users expect. With solutions like theirs, dApps can deliver “frictionless” experiences where chain differences disappear into the background, accelerating mainstream adoption of Web3.

User & Developer Experience Improvements

Both chain abstraction and intent-centric architectures are ultimately in service of a better user experience (UX) and developer experience (DX) in a multi-chain world. Some of the notable improvements include:

  • Seamless Onboarding: New users can be onboarded without worrying about what blockchain they’re on. For instance, a user could be given a single smart account that works everywhere, possibly created with a social login. They can receive any token or NFT to this account from any chain without confusion. No longer must a newcomer learn about switching networks in MetaMask or safeguarding multiple seed phrases. This lowers the barrier to entry significantly, as using a dApp feels closer to a Web2 app signup. Projects implementing account abstraction often allow email or OAuth-based wallet creation, with the resulting smart account being chain-agnostic.

  • One-Click Cross-Chain Actions: Perhaps the most visible UX gain is condensing what used to be multi-step, multi-app workflows into one or two clicks. For example, a cross-chain token swap previously might require: swapping Token A for a bridgeable asset on Chain 1, going to a bridge UI to send it to Chain 2, then swapping to Token B on Chain 2 – and managing gas fees on both chains. With intent-centric systems, the user simply requests “Swap A on Chain1 to B on Chain2” and confirms once. All intermediate steps (including acquiring gas on Chain2 if needed) are automated. This not only saves time but also reduces the chances of user error (using the wrong bridge, sending to wrong address, etc.). It’s akin to the convenience of booking a multi-leg flight through one travel site versus manually purchasing each leg separately.

  • No Native Gas Anxiety: Users don’t need to constantly swap for small amounts of ETH, MATIC, AVAX, etc. just to pay for transactions. Gas fee abstraction means either the dApp covers the gas (and maybe charges a fee in the transacted token or via a subscription model), or the system converts a bit of the user’s asset automatically to pay fees. This has a huge psychological impact – it removes a class of confusing prompt (no more “insufficient gas” errors) and lets users focus on the actions they care about. Several EthCC 2025 talks noted gas abstraction as a priority, e.g. Ethereum’s EIP-7702 will even allow EOA accounts to have gas sponsored in the future. In practice today, many intent protocols drop a small amount of the output asset as gas on the destination chain for the user, or utilize paymasters connected to user operations. The result: a user can, say, move USDC from Arbitrum to Polygon without ever touching ETH on either side, and still have their Polygon wallet able to make transactions immediately on arrival.

  • Unified Asset Management: For end-users, having a unified view of assets and activities across chains is a major quality-of-life improvement. Chain abstraction can present a combined portfolio – so your 1 ETH on mainnet and 2 ETH worth of bridged stETH on Optimism might both just show as “ETH balance”. If you have USD stablecoins on five different chains, a chain-agnostic wallet could show your total USD value and allow spending from it without you manually bridging. This feels more like a traditional bank app that shows a single balance (even if funds are spread across accounts behind the scenes). Users can set preferences like “use cheapest network by default” or “maximize yield” and the system might automatically allocate transactions to the appropriate chain. Meanwhile, all their transaction history could be seen in one timeline regardless of chain. Such coherence is important for broader adoption – it hides blockchain complexity under familiar metaphors.

  • Enhanced Developer Productivity: From the developer’s side, chain abstraction platforms mean no more writing chain-specific code for each integration. Instead of integrating five different bridges and six exchanges to ensure coverage of assets and networks, a developer can integrate one intent protocol API that abstracts those. This not only saves development effort but also reduces maintenance – as new chains or bridges come along, the abstraction layer’s maintainers handle integration, and the dApp just benefits from it. The weekly digest from Etherspot highlighted that solutions like Okto’s chain abstraction platform claim to cut multi-chain dApp development time by up to 90% by providing out-of-the-box support for major chains and features like liquidity optimization. In essence, developers can focus on application logic (e.g. a lending product, a game) rather than the intricacies of cross-chain transfers or gas management. This opens the door for more Web2 developers to step into Web3, as they can use higher-level SDKs instead of needing deep blockchain expertise for each chain.

  • New Composable Experiences: With intents and chain abstraction, developers can create experiences that were previously too complex to attempt. For example, cross-chain yield farming strategies can be automated: a user could click “maximize yield on my assets” and an intent protocol could move assets between chains to the best yield farms, even doing this continuously as rates change. Games can have assets and quests that span multiple chains without requiring players to manually bridge items – the game’s backend (using an intent framework) handles item teleportation or state sync. Even governance can benefit: a DAO could allow a user to vote once and have that vote applied on all relevant chains’ governance contracts via cross-chain messages. The overall effect is composability: just as DeFi on a single chain allowed Lego-like composition of protocols, cross-chain intent layers allow protocols on different chains to compose. A user intent might trigger actions on multiple dApps across chains (e.g. unwrap an NFT on one chain and sell it on a marketplace on another), which creates richer workflows than siloed single-chain operations.

  • Safety Nets and Reliability: An often under-appreciated UX aspect is error handling. In early cross-chain interactions, if something went wrong (stuck funds in a bridge, a transaction failing after you sent funds, etc.), users faced a nightmare of troubleshooting across multiple platforms. Intent frameworks can build in retry logic, insurance, or user protection mechanisms. For example, a solver might take on finality risk – delivering the user’s funds on the destination immediately (within seconds) and waiting for the slower source chain finality themselves. This means the user isn’t stuck waiting minutes or hours for confirmation. If an intent fails partially, the system can rollback or refund automatically. Because the entire flow is orchestrated with known steps, there’s more scope to make the user whole if something breaks. Some protocols are exploring escrow and insurance for cross-chain operations as part of the intent execution, which would be impossible if the user was manually jumping through hoops – they’d bear that risk alone. In short, abstraction can make the overall experience not just smoother but also more secure and trustworthy for the average user.

All these improvements point to a single trend: reducing the cognitive load on users and abstracting away blockchain plumbing into the background. When done right, users may not even realize which chains they are using – they just access features and services. Developers, on the other hand, get to build apps that tap liquidity and user bases across many networks from a single codebase. It’s a shift of complexity from the edges (user apps) to the middle (infrastructure protocols), which is a natural progression as technology matures. EthCC 2025’s tone echoed this sentiment, with “seamless, composable infrastructure” cited as a paramount goal for the Ethereum community.

Insights from EthCC 2025

The EthCC 2025 conference (held in July 2025 in Cannes) underscored how central chain abstraction and intent-based design have become in the Ethereum ecosystem. A dedicated block of sessions focused on unifying user experiences across networks. Key takeaways from the event include:

  • Community Alignment on Abstraction: Multiple talks by industry leaders echoed the same message – simplifying the multi-chain experience is critical for the next wave of Web3 adoption. Michael Messele (Etherspot) spoke about moving “towards a blockchain-agnostic future”, Alex Bash (Zerion wallet) discussed “unifying Ethereum’s UX with abstraction and intents”, and others introduced concrete standards like ERC-7811 for stablecoin chain abstraction. The very title of one talk, “There’s No Web3 Future Without Chain Abstraction”, encapsulated the community sentiment. In other words, there is broad agreement that without solving cross-chain usability, Web3 will not reach its full potential. This represents a shift from previous years where scaling L1 or L2 was the main focus – now that many L2s are live, connecting them for users is the new frontier.

  • Ethereum’s Role as a Hub: EthCC panels highlighted that Ethereum is positioning itself not just as one chain among many, but as the foundation of a multi-chain ecosystem. Ethereum’s security and its 4337 account abstraction on mainnet can serve as the common base that underlies activity on various L2s and sidechains. Rather than competing with its rollups, Ethereum (and by extension Ethereum’s community) is investing in protocols that make the whole network of chains feel unified. This is exemplified by the Ethereum Foundation’s support for projects like the Open Intents Framework, which spans many chains and rollups. The vibe at EthCC was that Ethereum’s maturity is shown in embracing an “ecosystem of ecosystems”, where user-centric design (regardless of chain) is paramount.

  • Stablecoins & Real-World Assets as Catalysts: An interesting theme was the intersection of chain abstraction with stablecoins and RWAs (Real-World Assets). Stablecoins were repeatedly noted as a “grounding force” in DeFi, and several talks (e.g. on ERC-7811 stablecoin chain abstraction) looked at making stablecoin usage chain-agnostic. The idea is that an average user shouldn’t need to care on which chain their USDC or DAI resides – it should hold the same value and be usable anywhere seamlessly. We saw this with Securitize’s fund using Wormhole to go multichain, effectively abstracting an institutional product across chains. EthCC discussions suggested that solving cross-chain UX for stablecoins and RWAs is a big step toward broader blockchain-based finance, since these assets demand smooth user experiences for adoption by institutions and mainstream users.

  • Developer Excitement and Tooling: Workshops and side events (like Multichain Day) introduced developers to the new tooling available. Hackathon projects and demos showcased how intent APIs and chain abstraction SDKs (from various teams) could be used to whip up cross-chain dApps in days. There was a palpable excitement that the “Holy Grail” of Web3 UX – using multiple networks without realizing it – is within reach. The Open Intents Framework team did a beginner’s workshop explaining how to build an intent-enabled app, likely using their reference solver and contracts. Developers who had struggled with bridging and multi-chain deployment in the past were keen on these solutions, as evidenced by the Q&A sessions (as reported informally on social media during the conference).

  • Announcements and Collaboration: EthCC 2025 also served as a stage for announcing collaborations between projects in this space. For example, partnerships between wallet providers and intent protocols, and between bridge projects and account abstraction projects, were hinted at. One concrete announcement was Wormhole integrating with the Stacks ecosystem (bringing Bitcoin liquidity into cross-chain flows), which wasn’t directly chain abstraction for Ethereum but exemplified the expanding connectivity across traditionally separate crypto ecosystems. The presence of projects like Zerion (wallet), Safe (smart accounts), Connext, Socket, Axelar, etc., all discussing interoperability, signaled that many pieces of the puzzle are coming together.

Overall, EthCC 2025 painted a picture of a community coalescing around user-centric cross-chain innovation. The phrase “composable infrastructure” was used to describe the goal: all these L1s, L2s, and protocols should form a cohesive fabric that applications can build on without needing to stitch things together ad-hoc. The conference made it clear that chain abstraction and intents are not just buzzwords but active areas of development attracting serious talent and investment. Ethereum’s leadership in this—through funding, setting standards, and providing a robust base layer—was reaffirmed at the event.

Comparison of Approaches to Chain Abstraction and Intents

The table below compares several prominent protocols and frameworks that tackle cross-chain user/developer experience, highlighting their approach and key features:

| Project / Protocol | Approach to Chain Abstraction | Intent-Centric Mechanism | Key Features & Outcomes |
| --- | --- | --- | --- |
| Wormhole (Interop Protocol) | Chain-agnostic message-passing layer connecting 25+ chains (EVM & non-EVM) via Guardian validator network. Abstracts token transfers with Native Token Transfer (NTT) standard (unified supply across chains) and generic cross-chain contract calls. | Intent Fulfillment via xApps: Provides higher-level protocols on top of messaging (e.g. Mayan Swift for cross-chain swaps, Mayan MCTP for transfers with gas). Intents are encoded as orders on source chain; solved by off-chain or on-chain agents (auctions on Solana) with Wormhole relaying proofs between chains. | Universal Interoperability: One integration gives access to many chains.<br>Best-Price Execution: Solvers compete in auctions to maximize user output (reduces costs).<br>Gas & Fee Abstraction: Relayers handle delivering funds and gas on target chain, enabling one-click user flows.<br>Heterogeneous Support: Works across very different chain environments (Ethereum, Solana, Cosmos etc.), making it versatile for developers. |
| Etherspot (AA + ChA SDK) | Account abstraction platform offering smart contract wallets on multiple chains with unified SDK. Abstracts chains by providing a single API to interact with all user’s accounts and balances across networks. Developers integrate its SDK to get multi-chain functionality out-of-the-box. | Intent Protocol (“Pulse”): Collects user-stated goals (e.g. swap X to Y cross-chain) via a high-level API. The backend uses the user’s smart wallet to execute necessary steps: bundling transactions, choosing bridges/swaps (with integrated solver logic or external aggregators), and sponsoring gas via paymasters. | Smart Wallet Unification: One user account controls assets on all chains, enabling features like aggregated balance and one-click multi-chain actions.<br>Developer-Friendly: Pre-built modules (4337 bundler, paymaster) and React TransactionKit, cutting multi-chain dApp dev time significantly.<br>Gasless & Social Login: Supports gas sponsorship and alternative login (improving UX for mainstream users).<br>Single-Intent Swaps Demo: Showcased cross-chain swap in one user op, illustrating how users focus on “what” and let Etherspot handle “how”. |
| Open Intents Framework (Ethereum Foundation & collaborators) | Open standard (ERC-7683) and reference architecture for building intent-based cross-chain applications. Provides a base set of contracts (e.g. a Base7683 intent registry on each chain) that can plug into any bridging/messaging layer. Aims to abstract chains by standardizing how intents are expressed and resolved, independent of any single provider. | Pluggable Solvers & Settlement: OIF doesn’t enforce one solver network; it allows multiple settlement mechanisms (Hyperlane, LayerZero, Connext’s xcall, etc.) to be used interchangeably. Intents are submitted to a contract that solvers monitor; a reference solver implementation is provided (TypeScript bot) that developers can run or modify. Across Protocol’s live intent contracts on mainnet serve as one realization of ERC-7683. | Ecosystem Collaboration: Built by dozens of teams to be a public good, encouraging shared infrastructure (solvers can service intents from any project).<br>Modularity: Developers can choose trust model – e.g. use optimistic verification, a specific bridge, or multi-sig – without changing the intent format.<br>Standardization: With common interfaces, wallets and UIs (like Superbridge) can support intents from any OIF-based protocol, reducing integration effort.<br>Community Support: Vitalik and others endorse the effort, and early adopters (Eco, Uniswap’s Compact, etc.) are building on it. |
| Axelar + Squid (Cross-Chain Network & SDK) | Cosmos-based interoperability network (Axelar) with a decentralized validator set that passes messages and tokens between chains. Abstracts the chain hop by offering a unified cross-chain API (Squid SDK) which developers use to initiate transfers or contract calls across EVM chains, Cosmos chains, etc., through Axelar’s network. Squid focuses on providing easy cross-chain liquidity (swaps) via one interface. | “One-Step” Cross-Chain Ops: Squid interprets intents like “swap TokenA on ChainX to TokenB on ChainY” and automatically splits it into on-chain steps: a swap on ChainX (using a DEX aggregator), a transfer via Axelar’s bridge, and a swap on ChainY. Axelar’s General Message Passing delivers any arbitrary intent data across. Axelar also offers a Gas Service – developers can have users pay gas in the source token and it ensures the destination transaction is paid, achieving gas abstraction for the user. | Developer Simplicity: One SDK call handles multi-chain swaps; no need to manually integrate DEX + bridge + DEX logic.<br>Fast Finality: Axelar ensures finality with its own consensus (seconds) so cross-chain actions complete quickly (often faster than optimistic bridges).<br>Composable with dApps: Many dApps (e.g. decentralized exchanges, yield aggregators) integrate Squid to offer cross-chain features, effectively outsourcing the complexity.<br>Security Model: Relies on Axelar’s proof-of-stake security; users trust Axelar validators to safely bridge assets (a different model from optimistic or light-client bridges). |
| Connext (xCall & Amarok) | Liquidity-network bridge that uses an optimistic assurance model (watchers challenge fraud) for security. Abstracts chains by providing an xcall interface – developers treat cross-chain function calls like normal function calls, and Connext routes the call through routers that provide liquidity and execute the call on the destination. The goal is to make calling a contract on another chain as simple as calling a local one. | Function Call Intents: Connext’s xcall takes an intent like “invoke function F on Contract C on Chain B with data X and send result back” – effectively a cross-chain RPC. Under the hood, liquidity providers lock bond on Chain A and mint representative assets on Chain B (or use native assets if available) to carry out any value transfer. The intent (including any return handling) is fulfilled after a configurable delay (to allow fraud challenges). There isn’t a solver competition; instead any available router can execute, but Connext ensures the cheapest path by using a network of routers. | Trust-Minimized: No external validator set – security comes from on-chain verification plus bonded routers. Users don’t cede custody to a multi-sig.<br>Native Execution: Can trigger arbitrary logic on the destination chain (more general than swap-focused intents). This suits cross-chain dApp composability (e.g. initiate an action in a remote protocol).<br>Router Liquidity Model: Instant liquidity for transfers (like a traditional bridge) without waiting for finality, since routers front liquidity and later reconcile.<br>Integration in Wallets/Bridges: Often used under the hood by wallets for simple bridging due to its simplicity and security posture. Less aimed at end-user UX platforms and more at protocol devs who want custom cross-chain calls. |

(Table legend: AA = Account Abstraction, ChA = Chain Abstraction, AMB = arbitrary messaging bridge)

Each of the above approaches addresses the cross-chain UX challenge from a slightly different angle – some focus on the user’s wallet/account, others on the network messaging, and others on the developer API layer – but all share the goal of making blockchain interactions chain-agnostic and intent-driven. Notably, these solutions are not mutually exclusive; in fact, they often complement each other. For example, an application could use Etherspot’s smart wallet + paymasters, with the Open Intents standard to format the user’s intent, and then use Axelar or Connext under the hood as the execution layer to actually bridge and perform actions. The emerging trend is composability among chain abstraction tools themselves, ultimately building toward an Internet of Blockchains where users navigate freely.

Conclusion

Blockchain technology is undergoing a paradigm shift from siloed networks and manual operations to a unified, intent-driven experience. Chain abstraction and intent-centric architecture are at the heart of this transformation. By abstracting away the complexities of multiple chains, they enable a user-centric Web3 in which people interact with decentralized applications without needing to understand which chain they’re using, how to bridge assets, or how to acquire gas on each network. The infrastructure – relayers, smart accounts, solvers, and bridges – collaboratively handle those details, much like the Internet’s underlying protocols route packets without users knowing the route.

The benefits in user experience are already tangible: smoother onboarding, one-click cross-chain swaps, and truly seamless dApp interactions across ecosystems. Developers, too, are empowered by higher-level SDKs and standards that dramatically simplify building for a multi-chain world. As seen at EthCC 2025, there is a strong community consensus that these developments are not only exciting enhancements but fundamental requirements for the next phase of Web3 growth. Projects like Wormhole and Etherspot demonstrate that it’s possible to retain decentralization and trustlessness while offering Web2-like ease of use.

Looking ahead, we can expect further convergence of these approaches. Standards such as ERC-7683 intents and ERC-4337 account abstraction will likely become widely adopted, ensuring compatibility across platforms. More bridges and networks will integrate with open intent frameworks, increasing liquidity and options for solvers to fulfill user intents. Eventually, the term “cross-chain” might fade away, as interactions won’t be thought of in terms of distinct chains at all – much like users of the web don’t think about which data center their request hit. Instead, users will simply invoke services and manage assets in a unified blockchain ecosystem.

In conclusion, chain abstraction and intent-centric design are turning the multi-chain dream into reality: delivering the benefits of diverse blockchain innovation without the fragmentation. By centering designs on user intents and abstracting the rest, the industry is taking a major step toward making decentralized applications as intuitive and powerful as the centralized services of today, fulfilling the promise of Web3 for a broader audience. The infrastructure is still evolving, but its trajectory is clear – a seamless, intent-driven Web3 experience is on the horizon, and it will redefine how we perceive and interact with blockchains.

Sources: The information in this report was gathered from a range of up-to-date resources, including protocol documentation, developer blog posts, and talks from EthCC 2025. Key references include Wormhole’s official docs on its cross-chain intent protocols, Etherspot’s technical blog series on account and chain abstraction, and the Ethereum Foundation’s Open Intents Framework release notes, among others.

Sui’s Reference Gas Price (RGP) Mechanism

· 8 min read
Dora Noda
Software Engineer

Introduction

Announced for public launch on May 3rd, 2023, after an extensive three-wave testnet, the Sui blockchain introduced an innovative gas pricing system designed to benefit both users and validators. At its heart is the Reference Gas Price (RGP), a network-wide baseline gas fee that validators agree upon at the start of each epoch (approximately 24 hours).

This system aims to create a mutually beneficial ecosystem for SUI token holders, validators, and end-users by providing low, predictable transaction costs while simultaneously rewarding validators for performant and reliable behavior. This report provides a deep dive into how the RGP is determined, the calculations validators perform, its impact on the network economy, its evolution through governance, and how it compares to other blockchain gas models.

The Reference Gas Price (RGP) Mechanism

Sui’s RGP is not a static value but is re-established each epoch through a dynamic, validator-driven process.

  • The Gas Price Survey: At the beginning of each epoch, every validator submits their "reservation price"—the minimum gas price they are willing to accept for processing transactions. The protocol then orders these submissions by stake and sets the RGP for that epoch at the stake-weighted 2/3 percentile (a short calculation sketch appears at the end of this section). This design ensures that validators representing a supermajority (at least two-thirds) of the total stake are willing to process transactions at this price, guaranteeing a reliable level of service.

  • Update Cadence and Requirements: While the RGP is set each epoch, validators are required to actively manage their quotes. According to official guidance, validators must update their gas price quote at least once a week. Furthermore, if there is a significant change in the value of the SUI token, such as a fluctuation of 20% or more, validators must update their quote immediately to ensure the RGP accurately reflects current market conditions.

  • The Tallying Rule and Reward Distribution: To ensure validators honor the agreed-upon RGP, Sui employs a "tallying rule." Throughout an epoch, validators monitor each other’s performance, tracking whether their peers are promptly processing RGP-priced transactions. This monitoring results in a performance score for each validator. At the end of the epoch, these scores are used to calculate a reward multiplier that adjusts each validator's share of the stake rewards.

    • Validators who performed well receive a multiplier of ≥1, boosting their rewards.
    • Validators who stalled, delayed, or failed to process transactions at the RGP receive a multiplier of <1, effectively slashing a portion of their earnings.

This two-part system creates a powerful incentive structure. It discourages validators from quoting an unrealistically low price they can't support, as the financial penalty for underperformance would be severe. Instead, validators are motivated to submit the lowest price they can sustainably and efficiently handle.
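
To make the survey step concrete, here is a minimal sketch of deriving the RGP as the stake-weighted 2/3 percentile of the submitted quotes. This is an assumed illustration in TypeScript, not Sui's actual validator code:

```ts
// Assumed illustration: order validator quotes by price, then walk up the cumulative
// stake until roughly two-thirds of total stake is willing to accept the price.
interface ValidatorQuote {
  name: string;
  stake: bigint;      // voting power
  gasPrice: bigint;   // reservation price in MIST
}

function referenceGasPrice(quotes: ValidatorQuote[]): bigint {
  const totalStake = quotes.reduce((sum, q) => sum + q.stake, 0n);
  const threshold = (totalStake * 2n) / 3n;

  // Sort ascending by quoted price, then accumulate stake.
  const sorted = [...quotes].sort((a, b) =>
    a.gasPrice < b.gasPrice ? -1 : a.gasPrice > b.gasPrice ? 1 : 0
  );

  let cumulative = 0n;
  for (const q of sorted) {
    cumulative += q.stake;
    // Validators quoting at or below this price now hold ~2/3 of the stake,
    // so they are all willing to process transactions at this RGP.
    if (cumulative >= threshold) return q.gasPrice;
  }
  return sorted[sorted.length - 1].gasPrice; // fallback; not reached with valid input
}
```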


Validator Operations: Calculating the Gas Price Quote

From a validator's perspective, setting the RGP quote is a critical operational task that directly impacts profitability. It requires building data pipelines and automation layers to process a number of inputs from both on-chain and off-chain sources. Key inputs include:

  • Gas units executed per epoch
  • Staking rewards and subsidies per epoch
  • Storage fund contributions
  • The market price of the SUI token
  • Operational expenses (hardware, cloud hosting, maintenance)

The goal is to calculate a quote that ensures net rewards are positive. The process involves several key formulas:

  1. Calculate Total Operational Cost: This determines the validator's expenses in fiat currency for a given epoch.

    $$\text{Cost}_{\text{epoch}} = (\text{Total Gas Units Executed}_{\text{epoch}}) \times (\text{Cost in \$ per Gas Unit}_{\text{epoch}})$$
  2. Calculate Total Rewards: This determines the validator's total revenue in fiat currency, sourced from both protocol subsidies and transaction fees.

    $$\text{\$Rewards}_{\text{epoch}} = (\text{Total Stake Rewards in SUI}_{\text{epoch}}) \times (\text{SUI Token Price})$$

    Where Total Stake Rewards is the sum of any protocol-provided Stake Subsidies and the Gas Fees collected from transactions.

  3. Calculate Net Rewards: This is the ultimate measure of profitability for a validator.

    $$\text{\$Net Rewards}_{\text{epoch}} = \text{\$Rewards}_{\text{epoch}} - \text{\$Cost}_{\text{epoch}}$$

    By modeling their expected costs and rewards at different RGP levels, validators can determine an optimal quote to submit to the Gas Price Survey.
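
The profitability model above can be expressed as a small calculation. The sketch below uses illustrative inputs; a real validator would feed in measured epoch data from its own pipelines:

```ts
// A minimal sketch of the net-rewards model described above, with assumed inputs.
interface EpochInputs {
  gasUnitsExecuted: number;    // total gas units the validator processed this epoch
  costPerGasUnitUsd: number;   // amortized infrastructure cost per gas unit, in USD
  stakeRewardsSui: number;     // stake subsidies + gas fees earned this epoch, in SUI
  suiPriceUsd: number;         // current SUI market price
}

function netRewardsUsd(e: EpochInputs): number {
  const costUsd = e.gasUnitsExecuted * e.costPerGasUnitUsd;   // Cost_epoch
  const rewardsUsd = e.stakeRewardsSui * e.suiPriceUsd;       // $Rewards_epoch
  return rewardsUsd - costUsd;                                // $Net Rewards_epoch
}

// A validator can evaluate several candidate RGP quotes by re-estimating
// stakeRewardsSui for each quote and keeping the lowest quote that stays profitable.
```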

Upon mainnet launch, Sui set the initial RGP to a fixed 1,000 MIST (1 SUI = 10⁹ MIST) for the first one to two weeks. This provided a stable operating period for validators to gather sufficient network activity data and establish their calculation processes before the dynamic survey mechanism took full effect.


Impact on the Sui Ecosystem

The RGP mechanism profoundly shapes the economics and user experience of the entire network.

  • For Users: Predictable and Stable Fees: The RGP acts as a credible anchor for users. The gas fee for a transaction follows a simple formula: User Gas Price = RGP + Tip (a short numeric sketch follows this list). In normal conditions, no tip is needed. During network congestion, users can add a tip to gain priority, creating a fee market without altering the stable base price within the epoch. This model provides significantly more fee stability than systems where the base fee changes with every block.

  • For Validators: A Race to Efficiency: The system fosters healthy competition. Validators are incentivized to lower their operating costs (through hardware and software optimization) to be able to quote a lower RGP profitably. This "race to efficiency" benefits the entire network by driving down transaction costs. The mechanism also pushes validators toward balanced profit margins; quoting too high risks being priced out of the RGP calculation, while quoting too low leads to operational losses and performance penalties.

  • For the Network: Decentralization and Sustainability: The RGP mechanism helps secure the network's long-term health. The "threat of entry" from new, more efficient validators prevents existing validators from colluding to keep prices high. Furthermore, by adjusting their quotes based on the SUI token's market price, validators collectively ensure their operations remain sustainable in real-world terms, insulating the network's fee economy from token price volatility.
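
As a small numeric sketch of the user-side formula above: the computation-unit count below is an assumption for illustration, and actual Sui fees also include a storage component not shown here.

```ts
// Units: MIST (1 SUI = 1e9 MIST). Figures are illustrative assumptions.
const rgp = 1_000n;                    // reference gas price for the epoch, in MIST
const tip = 0n;                        // optional priority tip (0 in normal conditions)
const userGasPrice = rgp + tip;        // price per computation unit the user pays

const computationUnits = 1_000n;       // assumed cost of a simple transaction
const computationFee = computationUnits * userGasPrice;
// computationFee = 1,000,000 MIST = 0.001 SUI (storage fees/rebates not shown)
```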


Governance and System Evolution: SIP-45

Sui's gas mechanism is not static and evolves through governance. A prominent example is SIP-45 (Prioritized Transaction Submission), which was proposed to refine fee-based prioritization.

  • Issue Addressed: Analysis showed that simply paying a high gas price did not always guarantee faster transaction inclusion.
  • Proposed Changes: The proposal included increasing the maximum allowable gas price and introducing an "amplified broadcast" for transactions paying significantly above the RGP (e.g., ≥5x RGP), ensuring they are rapidly disseminated across the network for priority inclusion.

This demonstrates a commitment to iterating on the gas model based on empirical data to improve its effectiveness.
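To make the proposed prioritization rule concrete, here is a minimal sketch of the kind of eligibility check SIP-45 describes: a transaction paying at least a multiple of the RGP (the proposal cites ≥5x) would qualify for amplified broadcast. The function and constant names are illustrative, not actual Sui node code.

```typescript
// Illustrative only: the eligibility check implied by SIP-45's
// "amplified broadcast" idea — not actual Sui full-node code.

const AMPLIFIED_BROADCAST_MULTIPLIER = 5; // proposal example: ≥ 5x RGP

function qualifiesForAmplifiedBroadcast(txGasPriceMist: number, rgpMist: number): boolean {
  return txGasPriceMist >= AMPLIFIED_BROADCAST_MULTIPLIER * rgpMist;
}

console.log(qualifiesForAmplifiedBroadcast(6_000, 1_000)); // true  → broadcast widely for priority inclusion
console.log(qualifiesForAmplifiedBroadcast(1_200, 1_000)); // false → normal submission path
```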


Comparison with Other Blockchain Gas Models

Sui's RGP model is unique, especially when contrasted with Ethereum's EIP-1559.

| Aspect | Sui (Reference Gas Price) | Ethereum (EIP-1559) |
| --- | --- | --- |
| Base Fee Determination | Validator survey each epoch (market-driven). | Algorithmic each block (protocol-driven). |
| Frequency of Update | Once per epoch (~24 hours). | Every block (~12 seconds). |
| Fee Destination | All fees (RGP + tip) go to validators. | Base fee is burned; only the tip goes to validators. |
| Price Stability | High. Predictable day-over-day. | Medium. Can spike rapidly with demand. |
| Validator Incentives | Compete on efficiency to set a low, profitable RGP. | Maximize tips; no control over the base fee. |

Potential Criticisms and Challenges

Despite its innovative design, the RGP mechanism faces potential challenges:

  • Complexity: The system of surveys, tallying rules, and off-chain calculations is intricate and may present a learning curve for new validators.
  • Slow Reaction to Spikes: The RGP is fixed for an epoch and cannot react to sudden, mid-epoch demand surges, which could lead to temporary congestion until users begin adding tips.
  • Potential for Collusion: In theory, validators could collude to set a high RGP. This risk is primarily mitigated by the competitive nature of the permissionless validator set.
  • No Fee Burn: Unlike Ethereum, Sui recycles all gas fees to validators and the storage fund. This rewards network operators but does not create deflationary pressure on the SUI token, a feature some token holders value.

Frequently Asked Questions (FAQ)

Why stake SUI? Staking SUI secures the network and earns rewards. Initially, these rewards are heavily subsidized by the Sui Foundation to compensate for low network activity. These subsidies decrease by 10% every 90 days, with the expectation that rewards from transaction fees will grow to become the primary source of yield. Staked SUI also grants voting rights in on-chain governance.
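For a rough sense of that decay schedule, the remaining subsidy after k ninety-day periods is just the compounding arithmetic implied above (an illustration, not an official emission figure):

$$\text{Subsidy}_{k} = \text{Subsidy}_{0} \times 0.9^{k}, \qquad 0.9^{4} \approx 0.656 \;\text{(about 66\% of the original level after one year)}$$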

Can my staked SUI be slashed? Your staking rewards can be, though your principal stake is not directly slashed under the current rules. While parameters are still being finalized, "Tally Rule Slashing" applies: a validator that receives a zero performance score from 2/3 of its peers (due to low performance, malicious behavior, etc.) will have its rewards slashed by a to-be-determined amount. Stakers can also miss out on rewards if their chosen validator has downtime or quotes a suboptimal RGP.

Are staking rewards automatically compounded? Yes, staking rewards on Sui are automatically distributed and re-staked (compounded) every epoch. To access rewards, you must explicitly unstake them.

What is the Sui unbonding period? Initially, stakers can unbond their tokens immediately. An unbonding period where tokens are locked for a set time after unstaking is expected to be implemented and will be subject to governance.

Do I maintain custody of my SUI tokens when staking? Yes. When you stake SUI, you delegate your stake but remain in full control of your tokens. You never transfer custody to the validator.

Verifiable AI in Motion: How Lagrange Labs’ Dynamic zk-SNARKs Enable Continuous Trust

· 7 min read
Dora Noda
Software Engineer

In the rapidly converging worlds of artificial intelligence and blockchain, the demand for trust and transparency has never been higher. How can we be certain that an AI model's output is accurate and untampered with? How can we perform complex computations on vast on-chain datasets without compromising security or scalability? Lagrange Labs is tackling these questions head-on with its suite of zero-knowledge (ZK) infrastructure, aiming to build a future of "AI You Can Prove." This post provides an objective overview of their mission, technology, and recent breakthroughs, culminating in their latest paper on Dynamic zk-SNARKs.

1. The Team and Its Mission

Lagrange Labs is building the foundational infrastructure to generate cryptographic proofs for any AI inference or on-chain application. Their goal is to make computation verifiable, bringing a new layer of trust to the digital world. Their ecosystem is built on three core product lines:

  • ZK Prover Network: A decentralized network of over 85 proving nodes that supplies the computational power needed for a wide range of proving tasks, from AI and rollups to decentralized applications (dApps).
  • DeepProve (zkML): A specialized system for generating ZK proofs of neural network inferences. Lagrange claims it is up to 158 times faster than competing solutions, making verifiable AI a practical reality.
  • ZK Coprocessor 1.0: The first SQL-based ZK Coprocessor, allowing developers to run custom queries on massive on-chain datasets and receive verifiably accurate results.

2. A Roadmap to Verifiable AI

Lagrange has been methodically executing a roadmap designed to solve the challenges of AI verifiability one step at a time.

  • Q3 2024: ZK Coprocessor 1.0 Launch: This release introduced hyper-parallel recursive circuits, which delivered an average speed increase of approximately 2x. Projects like Azuki and Gearbox are already leveraging the coprocessor for their on-chain data needs.
  • Q1 2025: DeepProve Unveiled: Lagrange announced DeepProve, its solution for Zero-Knowledge Machine Learning (zkML). It supports popular neural network architectures like Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs). The system achieves significant, order-of-magnitude acceleration across all three critical stages: one-time setup, proof generation, and verification, with speed-ups reaching as high as 158x.
  • Q2 2025: The Dynamic zk-SNARKs Paper (Latest Milestone): This paper introduces a groundbreaking "update" algorithm. Instead of re-generating a proof from scratch every time the underlying data or computation changes, this method can patch an old proof (π) into a new proof (π'). This update can be performed with a complexity of just O(√n log³n), a dramatic improvement over full re-computation. This innovation is particularly suited for dynamic systems like continuously learning AI models, real-time game logic, and evolving smart contracts.
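To get an intuitive feel for the claimed O(√n log³n) update cost, the toy calculation below shows how that expression scales as the circuit size n grows, with all constants ignored. It is purely illustrative arithmetic, not a benchmark or a figure from the paper.

```typescript
// Toy illustration of the claimed O(√n · log³ n) update complexity, constants ignored.
// Purely illustrative — not a benchmark and not data from the Lagrange paper.

function updateCostUnits(n: number): number {
  return Math.sqrt(n) * Math.log2(n) ** 3;
}

for (const n of [1e6, 1e8, 1e10]) {
  console.log(`n = ${n.toExponential(0)}  →  ~${updateCostUnits(n).toExponential(2)} cost units`);
}
// Growing n by 100x (1e6 → 1e8) increases this modeled cost by only ~24x:
// updates scale sublinearly in n, unlike re-proving the whole circuit from scratch.
```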

3. Why Dynamic zk-SNARKs Matter

The introduction of updatable proofs represents a fundamental shift in the cost model of zero-knowledge technology.

  • A New Cost Paradigm: The industry moves from a model of "full-recomputation for every proof" to "incremental proofing based on the size of the change." This dramatically lowers the computational and financial cost for applications that undergo frequent, minor updates.

  • Implications for AI:

    • Continuous Fine-Tuning: When fine-tuning less than 1% of a model's parameters, the proof generation time grows almost linearly with the number of changed parameters (Δ parameters), rather than with the overall size of the model.
    • Streaming Inference: This enables the generation of proofs concurrently with the inference process itself. This drastically reduces the latency between an AI making a decision and that decision being settled and verified on-chain, unlocking use cases like on-chain AI services and compressed proofs for rollups.
  • Implications for On-Chain Applications:

    • Dynamic zk-SNARKs offer massive gas and time optimizations for applications characterized by frequent, small-state changes. This includes decentralized exchange (DEX) order books, evolving game states, and ledger updates involving frequent additions or deletions.

4. A Glimpse into the Tech Stack

Lagrange's powerful infrastructure is built on a sophisticated and integrated technology stack:

  • Circuit Design: The system is flexible, supporting the embedding of ONNX (Open Neural Network Exchange) models, SQL parsers, and custom operators directly into its circuits.
  • Recursion & Parallelism: The ZK Prover Network facilitates distributed recursive proofs, while the ZK Coprocessor leverages the sharding of "micro-circuits" to execute tasks in parallel, maximizing efficiency.
  • Economic Incentives: Lagrange is planning to launch a native token, LA, which will be integrated into a Double-Auction-for-Recursive-Auction (DARA) system. This will create a robust marketplace for bidding on prover computation, complete with incentives and penalties to ensure network integrity.

5. Ecosystem and Real-World Adoption

Lagrange is not just building in a vacuum; its technology is already being integrated by a growing number of projects across different sectors:

  • AI & ML: Projects like 0G Labs and Story Protocol are using DeepProve to verify the outputs of their AI models, ensuring provenance and trust.
  • Rollups & Infrastructure: Key players like EigenLayer, Base, and Arbitrum are participating in the ZK Prover Network as validation nodes or integration partners, contributing to its security and computational power.
  • NFT & DeFi Applications: Brands like Azuki and DeFi protocols like Gearbox are using the ZK Coprocessor to enhance the credibility of their data queries and reward distribution mechanisms.

6. Challenges and the Road Ahead

Despite its impressive progress, Lagrange Labs and the broader ZK field face several hurdles:

  • Hardware Bottlenecks: Even with a distributed network, updatable SNARKs still demand high bandwidth and rely on GPU-friendly cryptographic curves to perform efficiently.
  • Lack of Standardization: The process of mapping AI frameworks like ONNX and PyTorch to ZK circuits still lacks a universal, standardized interface, creating friction for developers.
  • A Competitive Landscape: The race to build zkVMs and generalized zkCompute platforms is heating up. Competitors like RISC Zero and Succinct are also making significant strides. The ultimate winner may be the one who can first commercialize a developer-friendly, community-driven toolchain.

7. Conclusion

Lagrange Labs is methodically reshaping the intersection of AI and blockchain through the lens of verifiability. Their approach provides a comprehensive solution:

  • DeepProve addresses the challenge of trusted inference.
  • The ZK Coprocessor solves the problem of trusted data.
  • Dynamic zk-SNARKs incorporate the real-world need for continuous updates directly into the proving system.

If Lagrange can maintain its performance edge, solve the critical challenge of standardization, and continue to grow its robust network, it is well-positioned to become a cornerstone player in the emerging "AI + ZK Infrastructure" sector.

The Copy-Paste Crime: How a Simple Habit is Draining Millions from Crypto Wallets

· 5 min read
Dora Noda
Software Engineer

When you send crypto, what’s your routine? For most of us, it involves copying the recipient's address from our transaction history. After all, nobody can memorize a 40-character string like 0x1A2b...8f9E. It's a convenient shortcut we all use.

But what if that convenience is a carefully laid trap?

A devastatingly effective scam called Blockchain Address Poisoning is exploiting this exact habit. Recent research from Carnegie Mellon University has uncovered the shocking scale of this threat. In just two years, on the Ethereum and Binance Smart Chain (BSC) networks alone, scammers have made over 270 million attack attempts, targeting 17 million victims and successfully stealing at least $83.8 million.

This isn't a niche threat; it's one of the largest and most successful crypto phishing schemes operating today. Here’s how it works and what you can do to protect yourself.


How the Deception Works 🤔

Address poisoning is a game of visual trickery. The attacker’s strategy is simple but brilliant:

  1. Generate a Lookalike Address: The attacker identifies a frequent address you send funds to. They then use powerful computers to generate a new crypto address that has the exact same starting and ending characters. Since most wallets and block explorers shorten addresses for display (e.g., 0x1A2b...8f9E), their fraudulent address looks identical to the real one at a glance.

  2. "Poison" Your Transaction History: Next, the attacker needs to get their lookalike address into your wallet's history. They do this by sending a "poison" transaction. This can be:

    • A Tiny Transfer: They send you a minuscule amount of crypto (like $0.001) from their lookalike address. It now appears in your list of recent transactions.
    • A Zero-Value Transfer: In a more cunning move, they exploit a feature in many token contracts to create a fake, zero-dollar transfer that looks like it came from you to their lookalike address. This makes the fake address seem even more legitimate, as it appears you've sent funds there before.
    • A Counterfeit Token Transfer: They create a worthless, fake token (e.g., "USDTT" instead of USDT) and fake a transaction to their lookalike address, often mimicking the amount of a previous real transaction you made.
  3. Wait for the Mistake: The trap is now set. The next time you go to pay a legitimate contact, you scan your transaction history, see what you believe is the correct address, copy it, and hit send. By the time you realize your mistake, the funds are gone. And thanks to the irreversible nature of blockchain, there's no bank to call and no way to get them back.


A Glimpse into a Criminal Enterprise 🕵️‍♂️

This isn't the work of lone hackers. The research reveals that these attacks are carried out by large, organized, and highly profitable criminal groups.

Who They Target

Attackers don't waste their time on small accounts. They systematically target users who are:

  • Wealthy: Holding significant balances in stablecoins.
  • Active: Conducting frequent transactions.
  • High-Value Transactors: Moving large sums of money.

A Hardware Arms Race

Generating a lookalike address is a brute-force computational task. The more characters you want to match, the exponentially harder it gets. Researchers found that while most attackers use standard CPUs to create moderately convincing fakes, the most sophisticated criminal group has taken it to another level.

This top-tier group has managed to generate addresses that match up to 20 characters of a target's address. This feat is nearly impossible with standard computers, leading researchers to conclude they are using massive GPU farms—the same kind of powerful hardware used for high-end gaming or AI research. This shows a significant financial investment, which they easily recoup from their victims. These organized groups are running a business, and business is unfortunately booming.
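The arithmetic behind that arms race is simple: each additional hex character the attacker wants to match multiplies the expected brute-force work by 16. A quick sketch of that calculation (a rough model for illustration; real attack tooling is more sophisticated):

```typescript
// Rough model: expected number of random candidate addresses needed to match
// k fixed hex characters of a target address (each hex char has 16 values).

function expectedAttempts(matchedHexChars: number): number {
  return 16 ** matchedHexChars;
}

console.log(expectedAttempts(8).toExponential(2));  // ~4.29e+9  — feasible on ordinary CPUs
console.log(expectedAttempts(20).toExponential(2)); // ~1.21e+24 — the scale that points to large GPU farms
```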


How to Protect Your Funds 🛡️

While the threat is sophisticated, the defenses are straightforward. It all comes down to breaking bad habits and adopting a more vigilant mindset.

  1. For Every User (This is the most important part):

    • VERIFY THE FULL ADDRESS. Before you click "Confirm," take five extra seconds to manually check the entire address, character by character. Do not just glance at the first and last few digits.
    • USE AN ADDRESS BOOK. Save trusted, verified addresses to your wallet's address book or contact list. When sending funds, always select the recipient from this saved list, not from your dynamic transaction history.
    • SEND A TEST TRANSACTION. For large or important payments, send a tiny amount first. Confirm with the recipient that they have received it before sending the full sum.
  2. A Call for Better Wallets:

    • Wallet developers can help by improving user interfaces. This includes displaying more of the address by default or adding strong, explicit warnings when a user is about to send funds to an address they've only interacted with via a tiny or zero-value transfer. A minimal sketch of such a lookalike check follows this list.
  3. The Long-Term Fix:

    • Systems like the Ethereum Name Service (ENS), which allow you to map a human-readable name like yourname.eth to your address, can eliminate this problem entirely. Broader adoption is key.
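As a minimal sketch of the kind of wallet-side check suggested above, the function below flags a candidate recipient that visually resembles a saved contact (same leading and trailing characters) but is not actually in the user's address book. The function name, prefix/suffix lengths, and sample addresses are all hypothetical.

```typescript
// Illustrative only: flag recipients that look like a saved contact
// (same first/last characters) but are not actually in the address book.

function looksLikePoisonedAddress(
  candidate: string,
  addressBook: string[],
  prefixLen = 6,
  suffixLen = 4,
): boolean {
  const c = candidate.toLowerCase();
  // An exact match with a trusted contact is safe.
  if (addressBook.some((a) => a.toLowerCase() === c)) return false;
  // Warn if the candidate shares the visible prefix and suffix of a trusted address.
  return addressBook.some((a) => {
    const t = a.toLowerCase();
    return t.startsWith(c.slice(0, prefixLen)) && t.endsWith(c.slice(-suffixLen));
  });
}

const saved = ["0x1a2b" + "0".repeat(32) + "8f9e"];       // trusted contact (hypothetical)
const lookalike = "0x1a2b" + "1".repeat(32) + "8f9e";      // same ends, different middle

console.log(looksLikePoisonedAddress(lookalike, saved)); // true  → warn the user before sending
console.log(looksLikePoisonedAddress(saved[0], saved));  // false → exact match with a trusted contact
```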

In the decentralized world, you are your own bank, which also means you are your own head of security. Address poisoning is a silent but powerful threat that preys on convenience and inattention. By being deliberate and double-checking your work, you can ensure your hard-earned assets don't end up in a scammer's trap.