
GameFi Industry Overview: A PM's Guide to Web3 Gaming in 2025

· 32 min read
Dora Noda
Software Engineer

The GameFi market reached $18-19 billion in 2024 with projections to hit $95-200 billion by 2034, yet faces a brutal reality check: 93% of projects fail and 60% of users abandon games within 30 days. This paradox defines the current state—massive growth potential colliding with fundamental sustainability challenges. The industry is pivoting from speculative "play-to-earn" models that attracted mercenary users toward "play-and-earn" experiences prioritizing entertainment value with blockchain benefits as secondary. Success in 2025 requires understanding five distinct user personas, designing for multiple "jobs to be done" beyond just earning, implementing sustainable tokenomics that don't rely on infinite user growth, and learning from both the successes of Axie Infinity's $4+ billion in NFT sales and the failures of its 95% user collapse. The winners will be products that abstract blockchain complexity, deliver AAA-quality gameplay, and build genuine communities rather than speculation farms.

Target user personas: Who's actually playing GameFi

The GameFi audience spans from Filipino pedicab drivers earning rent money to wealthy crypto investors treating games as asset portfolios. Understanding these personas is critical for product-market fit.

The Income Seeker represents 35-40% of users

This persona dominates Southeast Asia—particularly the Philippines, Vietnam, and Indonesia—where 40% of Axie Infinity's peak users originated. These are 20-35 year olds from below-minimum-wage households who view GameFi as legitimate employment, not entertainment. They invest 6-10 hours daily treating gameplay as a full-time job, often entering through scholarship programs where guilds provide NFTs in exchange for 30-75% of earnings. During Axie's peak, Filipino players earned $400-1,200 monthly compared to $200 minimum wage, enabling life-changing outcomes like paying university fees and buying groceries. However, this persona is extremely vulnerable to token volatility—when SLP crashed 99% from peak, earnings fell below minimum wage and retention collapsed. Their pain points center on high entry costs ($400-1,000+ for starter NFTs at peak), complex crypto-to-fiat conversion, and unsustainable tokenomics. For product managers, this persona requires free-to-play or scholarship models, mobile-first design, local language support, and transparent earning projections. The scholarship model pioneered by Yield Guild Games (30,000+ scholarships) democratizes access but raises exploitation concerns given the 10-30% commission structure.

The Gamer-Investor accounts for 25-30% of users

These are 25-40 year old professionals from developed markets—US, South Korea, Japan—with middle to upper-middle class incomes and college education. They're experienced core gamers seeking both entertainment value and financial returns, comfortable navigating DeFi ecosystems and active across an average of 3.8 Layer 1 and 3.6 Layer 2 chains. Unlike Income Seekers, they directly purchase premium NFTs ($1,000-10,000+ single investments) and diversify portfolios across 3-5 games. They invest 2-4 hours daily and often act as guild owners rather than scholars, managing others' gameplay. Their primary frustration is poor gameplay quality in most GameFi titles—they want AAA production values matching traditional games, not "spreadsheets with graphics." This persona is critical for sustainability because they provide capital inflows and longer-term engagement. Product managers should focus on compelling gameplay mechanics, high production values, sophisticated tokenomics transparency, and governance participation through DAOs. They're willing to pay premium prices but demand quality and won't tolerate pay-to-win dynamics, which ranks as the top reason players quit traditional games.

The Casual Dabbler makes up 20-25% of users

Global and primarily mobile-first, these 18-35 year old students and young professionals are motivated by curiosity, FOMO, and the "why not earn while playing?" value proposition. They invest only 30 minutes to 2 hours daily with inconsistent engagement patterns. This persona increasingly discovers GameFi through Telegram mini-apps like Hamster Kombat (239 million users in 3 months) and Notcoin ($1.6 billion market cap), which offer zero-friction onboarding without wallet setup. However, they exhibit the highest churn rate—60%+ abandon within 30 days—because poor UX/UI (cited by 53% as the biggest challenge), complex wallet setup (deters 11%), and repetitive gameplay drive them away. The discovery method matters: 60% learn about GameFi from friends and family, making viral mechanics essential. For product managers, this persona demands simplified onboarding (hosted wallets, no crypto knowledge required), social features for friend recruitment, and genuinely entertaining gameplay that works as a standalone experience. The trap is designing purely for token farming, which attracts this persona temporarily but fails to retain them beyond airdrops—Hamster Kombat lost 86% of users post-airdrop (300M to 41M).

The Crypto Native comprises 10-15% of users

These 22-45 year old crypto professionals, developers, and traders from global crypto hubs possess expert-level blockchain knowledge and variable gaming backgrounds. They view GameFi as an asset class and technological experiment rather than primary entertainment, seeking alpha opportunities, early adoption status, and governance participation. This persona trades high-frequency, provides liquidity, stakes governance tokens, and participates in DAOs (25% actively engage in governance). They're sophisticated enough to analyze smart contract code and tokenomics sustainability, making them the harshest critics of unsustainable models. Their investment approach focuses on high-value NFTs, land sales, and governance tokens rather than grinding for small rewards. Product managers should engage this persona for credibility and capital but recognize they're often early exiters—flipping positions before mainstream adoption. They value innovative tokenomics, transparent on-chain data, and utility beyond speculation. Major pain points include unsustainable token emissions, regulatory uncertainty, bot manipulation, and rug pulls. This persona is essential for initial liquidity and word-of-mouth but represents too small an audience (4.5 million crypto gamers vs 3 billion total gamers) to build a mass-market product around exclusively.

The Community Builder represents 5-10% of users

Guild owners, scholarship managers, content creators, and influencers—these 25-40 year olds with middle incomes invest 4-8 hours daily managing operations rather than playing directly. They built the infrastructure enabling Income Seekers to participate, managing anywhere from 10 to 1,000+ players and earning through 10-30% commissions on scholar earnings. At Axie's 2021 peak, successful guild leaders earned $20,000+ monthly. They create educational content, strategy guides, and market analysis while using rudimentary tools (often Google Sheets for scholar management). This persona is critical for user acquisition and education—Yield Guild Games managed 5,000+ scholars with 60,000 on waitlist—but faces sustainability challenges as token prices affect entire guild economics. Their pain points include lack of guild CRM tools, performance tracking difficulty, regulatory uncertainty around taxation, and the sustainability concerns of the scholar economy model (criticized as digital-age "gold farming"). Product managers should build tools specifically for this persona—guild dashboards, automated payouts, performance analytics—and recognize they serve as distribution channels, onboarding infrastructure, and community evangelists.

Jobs to be done: What users hire GameFi products for

GameFi products are hired to do multiple jobs simultaneously across functional, emotional, and social dimensions. Understanding these layered motivations explains why users adopt, engage with, and ultimately abandon these products.

Functional jobs: Practical problems being solved

The primary functional job for Southeast Asian users is generating income when traditional employment is unavailable or insufficient. During COVID-19 lockdowns, Axie Infinity players in the Philippines earned $155-$600 monthly compared to $200 minimum wage, with earnings enabling concrete outcomes like paying for mothers' medication and children's school fees. One 26-year-old line cook made $29 weekly playing, and professional players bought houses. This represents a genuine economic opportunity in markets with 60%+ unbanked populations and minimum daily wages of $7-25 USD. However, the job extends beyond primary income to supplementary earnings—content moderators playing 2 hours daily earned $155-$195 monthly (nearly half their salary) for grocery money and electricity bills. For developed market users, the functional job shifts to investment and wealth accumulation through asset appreciation. Early Axie adopters bought teams for $5 in 2020; by 2021 prices reached $50,000+ for starter teams. Virtual land in Decentraland and The Sandbox sold for substantial amounts, and the guild model emerged where "managers" own multiple teams and rent to "scholars" for 10-30% commission. The portfolio diversification job involves gaining crypto asset exposure through engaging activity rather than pure speculation, accessing DeFi features (staking, yield farming) embedded in gameplay. GameFi competes with traditional employment (offering flexible hours, work-from-home, no commute), traditional gaming (offering real money earnings), cryptocurrency trading (offering more engaging skill-based earnings), and gig economy work (offering more enjoyable activity for comparable pay).

Emotional jobs: Feelings and experiences being sought

Achievement and mastery drive engagement as users seek to feel accomplished through challenging gameplay and visible progress. Academic research shows "advancement" and "achievement" as top gaming motivations, satisfied through breeding optimal Axies, winning battles, climbing leaderboards, and progression systems creating dopamine-driven engagement. One study found 72.1% of players experienced mood uplift during play. However, the grinding nature creates tension—players describe initial happiness followed by "sleepiness and stress of the game." Escapism and stress relief became particularly important during COVID lockdowns, with one player noting being "protected from virus, play cute game, earn money." Academic research confirms escapism as a major motivation, though studies show gamers with escapism motivation had higher psychological issue risk when external problems persisted. The excitement and entertainment job represents the 2024 industry shift from pure "play-to-earn" to "play-and-earn," with criticism that early GameFi projects prioritized "blockchain gimmicks over genuine gameplay quality." AAA titles launching in 2024-2025 (Shrapnel, Off The Grid) focus on compelling narratives and graphics, recognizing players want fun first. Perhaps most importantly, GameFi provides hope and optimism about financial futures. Players express being "relentlessly optimistic" about achieving goals, with GameFi offering a bottom-up voluntary alternative to Universal Basic Income. The sense of autonomy and control over financial destiny—rather than dependence on employers or government—emerges through player ownership of assets via NFTs (versus traditional games where developers control everything) and decentralized governance through DAO voting rights.

Social jobs: Identity and social needs being met

Community belonging proves as important as financial returns. Discord servers reach 100,000+ members, guild systems like Yield Guild Games manage 8,000 scholars with a 60,000-player waitlist, and scholarship models create mentor-mentee relationships. The social element drives viral growth—Telegram mini-apps leveraging existing social graphs achieved 35 million (Notcoin) and 239 million (Hamster Kombat) users. Community-driven development is expected in 50%+ of GameFi projects by 2024. Early adopter and innovator status attracts participants wanting to be seen as tech-savvy and ahead of mainstream trends. Web3 gaming attracts "tech enthusiasts" and "crypto natives" beyond traditional gamers, with first-mover advantage in token accumulation creating status hierarchies. The wealth display and "flex culture" job manifests through rare NFT Axies with "limited-edition body parts that will never be released again" serving as status symbols, X-integrated leaderboards letting "players flex their rank to mainstream audience," and virtual real estate ownership demonstrating wealth. Stories of buying houses and land shared virally reinforce this job. For Income Seekers, the provider and family support role proves especially powerful—an 18-year-old breadwinner supporting family after father's COVID death, players paying children's school fees and buying parents' medication. One quote captures it: "It's food on the table." The helper and mentor status job emerges through scholarship models where successful players provide Axie NFTs to those who can't afford entry, with community managers organizing and training new players. Finally, GameFi enables gamer identity reinforcement by bridging traditional gaming culture with financial responsibility, legitimizing gaming as a career path and reducing stigma of gaming as "waste of time."

Progress users are trying to make in their lives

Users aren't hiring "blockchain games"—they're hiring solutions to make specific life progress. Financial progress involves moving from "barely surviving paycheck to paycheck" to "building savings and supporting family comfortably," from "dependent on unstable job market" to "multiple income streams with more control," and from "unable to afford children's education" to "paying school fees and buying digital devices." Social progress means shifting from "gaming seen as waste of time" to "gaming as legitimate income source and career," from "isolated during pandemic" to "connected to global community with shared interests," and from "consumer in gaming ecosystem" to "stakeholder with ownership and governance rights." Emotional progress involves transforming from "hopeless about financial future" to "optimistic about wealth accumulation possibilities," from "time spent gaming feels guilty" to "productive use of gaming skills," and from "passive entertainment consumer" to "active creator and earner in digital economy." Identity progress encompasses moving from "just a player" to "investor, community leader, entrepreneur," from "late to crypto" to "early adopter in emerging technology," and from "separated from family (migrant worker)" to "at home while earning comparable income." Understanding these progress paths—rather than just product features—is essential for product-market fit.

Monetization models: How GameFi companies make money

GameFi monetization has evolved significantly from the unsustainable 2021 boom toward diversified revenue streams and balanced tokenomics. Successful projects in 2024-2025 demonstrate multiple revenue sources rather than relying solely on token speculation.

Play-to-earn mechanics have transformed toward sustainability

The original play-to-earn model rewarded players with cryptocurrency tokens for achievements, which could be traded for fiat currency. Axie Infinity pioneered the dual-token system with AXS (governance, capped supply) and SLP (utility, inflationary), where players earned SLP through battles and quests then burned it for breeding. At peak in 2021, players earned $400-1,200+ monthly, but the model collapsed as SLP crashed 99% due to hyperinflation and unsustainable token emissions requiring constant new player influx. The 2024 resurgence shows how sustainability is achieved: Axie now generates $3.2M+ annually in treasury revenue (averaging $330K monthly) with 162,828 monthly active users through diversified sources—4.25% marketplace fees on all NFT transactions, breeding fees paid in AXS/SLP, and Part Evolution fees (75,477 AXS earned). Critically, the SLP Stability Fund created 0.57% annualized deflation in 2024, with more tokens burned than minted for the first time. STEPN's move-to-earn model with GST (unlimited supply, in-game rewards) and GMT (6 billion fixed supply, governance) demonstrated the failure mode—GST reached $8-9 at peak but collapsed due to hyperinflation from oversupply and Chinese market restrictions. The 2023-2024 evolution emphasizes "play-and-own" over "play-to-earn," stake-to-play models where players stake tokens to access features, and fun-first design where games must be enjoyable independent of earning potential. Balanced token sinks—requiring spending for upgrades, breeding, repairs, crafting—prove essential for sustainability.

NFT sales generate revenue through primary and secondary markets

Primary NFT sales include public launches, thematic partnerships, and land drops. The Sandbox's primary LAND sales drove 17.3% quarter-over-quarter growth in Q3 2024, with LAND buyer activity surging 94.11% quarter-over-quarter in Q4 2024. The platform's market cap reached $2.27 billion at December 2024 peak, with only 166,464 LAND parcels ever existing (creating scarcity). The Sandbox's Beta launch generated $1.3M+ in transactions in one day. Axie Infinity's Wings of Nightmare collection in November 2024 drove $4M treasury growth, while breeding mechanics create deflationary pressure (116,079 Axies released for materials, net reduction of 28.5K Axies in 2024). Secondary market royalties provide ongoing revenue through automated smart contracts using the ERC-2981 standard. The Sandbox implements a 5% total fee on secondary sales, split 2.5% to the platform and 2.5% to the original NFT creator, providing continuous creator income. However, marketplace dynamics shifted in 2024 as major platforms (Magic Eden, LooksRare, X2Y2) made royalties optional, reducing creator income significantly from 2022-2024 peaks. OpenSea maintains enforced royalties for new collections using filter registry, while Blur honors 0.5% minimum fees on immutable collections. The lands segment holds over 25% of NFT market revenue (2024's dominant category), with total NFT segments accounting for 77.1% of GameFi usage. This marketplace fragmentation around royalty enforcement creates strategic considerations for which platforms to prioritize.
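
To make the royalty mechanics concrete, here is a minimal sketch of how a marketplace or analytics tool might query ERC-2981 royalties off-chain with ethers.js. The RPC endpoint and collection address are placeholders, and collections that never implemented the standard will simply revert on this call.

```typescript
import { ethers } from "ethers";

// ERC-2981 exposes a single view function for royalty quotes.
const ROYALTY_ABI = [
  "function royaltyInfo(uint256 tokenId, uint256 salePrice) view returns (address receiver, uint256 royaltyAmount)",
];

// Placeholder endpoint and collection address.
const provider = new ethers.JsonRpcProvider("https://eth.example.com");
const nft = new ethers.Contract("0xCollectionAddress", ROYALTY_ABI, provider);

async function quoteRoyalty(tokenId: bigint, salePriceWei: bigint) {
  const [receiver, royalty] = await nft.royaltyInfo(tokenId, salePriceWei);
  // e.g. a 5% royalty on a 1 ETH sale returns 0.05 ETH owed to the receiver
  return { receiver, royaltyEth: ethers.formatEther(royalty) };
}

quoteRoyalty(1n, ethers.parseEther("1")).then(console.log);
```

Because `royaltyInfo` is only a quote, enforcement still depends on the marketplace honoring it, which is exactly the fragmentation described above.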

In-game token economics balance emissions with sinks

Dual-token models dominate successful projects. Axie Infinity's AXS (governance) has fixed supply, staking rewards, governance voting rights, and requirements for breeding/upgrades, while SLP (utility) has unlimited supply earned through gameplay but is burned for breeding and activities, managed by SLP Stability Fund to control inflation. AXS joined Coinbase 50 Index in 2024 as a top gaming token. The Sandbox uses a single-token model (3 billion SAND capped supply, full dilution expected 2026) with multiple utilities: purchasing LAND and assets, staking for passive yields, governance voting, transaction medium, and premium content access. The platform implements 5% fees on all transactions split between platform and creators, with 50% distribution to Foundation (staking rewards, creator funds, P2E prizes) and 50% to Company. Token sinks are critical for sustainability, with effective burn mechanisms including repairs and maintenance (sneaker durability in STEPN), leveling and upgrades (Part Evolution in Axie burned 75,477 AXS), breeding/minting NFT creation costs (StarSharks burns 90% of utility tokens from blind box sales), crafting and combining (Gem/Catalyst systems in The Sandbox), land development (staking DEC in Splinterlands for upgrades), and continuous marketplace fee burns. Splinterlands' 2024 innovation requiring DEC staking for land upgrades creates strong demand. Best practices emerging for 2024-2025 include ensuring token sinks exceed faucets (emissions), time-locked rewards (Illuvium's sILV prevents immediate dumping), seasonal mechanics forcing regular purchases, NFT durability limiting earning potential, and negative-sum PvP where players willingly consume tokens for entertainment.
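
To illustrate why sinks must at least match faucets, here is a toy supply model; the player counts and per-player rates are invented for illustration and do not reflect any project's actual parameters.

```typescript
// A toy supply model, not any project's real tokenomics: daily emissions
// (faucets) versus burns (sinks) over a year of steady activity.
interface EconomyDay {
  activePlayers: number;
  rewardPerPlayer: number; // tokens minted per player per day (faucet)
  burnPerPlayer: number;   // tokens burned per player per day (sinks)
}

function finalSupply(days: EconomyDay[], startingSupply: number): number {
  return days.reduce((supply, d) => {
    const minted = d.activePlayers * d.rewardPerPlayer;
    const burned = d.activePlayers * d.burnPerPlayer;
    return supply + minted - burned;
  }, startingSupply);
}

// Sinks below faucets: supply inflates steadily, pressuring token price.
const inflationary = finalSupply(
  Array(365).fill({ activePlayers: 100_000, rewardPerPlayer: 10, burnPerPlayer: 6 }),
  1_000_000_000
);

// Sinks slightly above faucets: mild deflation, the pattern Axie's 2024
// SLP Stability Fund figures describe.
const deflationary = finalSupply(
  Array(365).fill({ activePlayers: 100_000, rewardPerPlayer: 10, burnPerPlayer: 11 }),
  1_000_000_000
);

console.log({ inflationary, deflationary });
```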

Transaction fees and marketplace commissions provide predictable revenue

Platform fees vary by game. Axie Infinity charges 4.25% on all in-game purchases (land, NFT trading, breeding) as Sky Mavis's primary monetization source, plus variable breeding costs requiring both AXS and SLP tokens. The Sandbox implements 5% on all marketplace transactions, split 50-50 between platform (2.5%) and NFT creators (2.5%), plus premium NFT sales, subscriptions, and services. Gas fee mitigation became essential as 80% of GameFi platforms incorporated Layer 2 solutions by 2024. Ronin Network (Axie's custom sidechain) provides minimal gas fees through 27 validator nodes, while Polygon integration (The Sandbox) reduced fees significantly. TON blockchain enables minimal fees for Telegram mini-apps (Hamster Kombat, Notcoin), though the trade-off matters—Manta Pacific's Celestia integration reduced gas fees but decreased revenue by 70.2% quarter-over-quarter in Q3 2024 (lower fees increase user activity but reduce protocol revenue). Smart contract fees automate royalty payments (ERC-2981 standard), breeding contract fees, staking/unstaking fees, and land upgrade fees. Marketplace commissions vary: OpenSea charges 2.5% platform fee plus creator royalties (if enforced), Blur charges 0.5% minimum on immutable collections using aggressive zero-fee trading for user acquisition, Magic Eden evolved from enforced to optional royalties with 25% of protocol fees distributed to creators as compromise, while The Sandbox's internal marketplace maintains 5% with 2.5% automatic creator royalty.
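
As a concrete illustration of the fee accounting described above, a short sketch of a Sandbox-style 5% marketplace fee split evenly between platform and creator; the basis-point convention is an assumption of the sketch, not a documented implementation detail.

```typescript
// Split a secondary-sale price into platform fee, creator royalty, and
// seller proceeds. 500 bps = 5% total, halved per the described model.
function splitSaleFees(salePrice: number, totalFeeBps = 500) {
  const fee = (salePrice * totalFeeBps) / 10_000;
  return {
    platform: fee / 2,       // 2.5%
    creator: fee / 2,        // 2.5%
    seller: salePrice - fee, // 95% to the seller
  };
}

console.log(splitSaleFees(1000)); // { platform: 25, creator: 25, seller: 950 }
```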

Diversified revenue streams reduce reliance on speculation

Land sales dominate with over 25% of NFT market revenue in 2024, representing the fastest-growing digital asset class. The Sandbox's 166,464 capped LAND parcels create scarcity, with developed land enabling creators to earn 95% of SAND revenue while maintaining 2.5% on secondary sales. Corporate interest from JPMorgan, Samsung, Gucci, and Nike established virtual presence, with high-traffic zones commanding premium prices and prime locations generating $5,000+/month in rental income. Breeding fees create token sinks while balancing new NFT supply—Axie's breeding requires AXS + SLP with costs increasing each generation, while Part Evolution requires Axie sacrifices generating 75,477 AXS in treasury revenue. Battle passes and seasonal content drive engagement and revenue. Axie's Bounty Board system (April 2024) and Coinbase Learn and Earn partnership (June 2024) drove 691% increase in Monthly Active Accounts and 80% increase in Origins DAU, while competitive seasons offer AXS prize pools (Season 9: 24,300 AXS total). The Sandbox's Alpha Season 4 in Q4 2024 reached 580,778 unique players, 49 million quests completed, and 1.4 million hours of gameplay, distributing 600,000 SAND to 404 unique creators and running Builders' Challenge with 1.5M SAND prize pool. Sponsorships and partnerships generate significant revenue—The Sandbox has 800+ brand partnerships including Atari, Adidas, Gucci, and Ralph Lauren, with virtual fashion shows and corporate metaverse lounges. Revenue models include licensing fees, sponsored events, and virtual advertising billboards in high-traffic zones.

The scholarship guild model represents a unique revenue stream where guilds own NFTs and lend to players unable to afford entry. Yield Guild Games provided 30,000+ scholarships with standard revenue-sharing of 70% scholar, 20% manager, 10% guild (though some guilds use 50-50 splits). MetaGaming Guild expanded Pixels scholarship from 100 to 1,500 slots using a 70-30 model (70% to scholars hitting 2,000 BERRY daily quota), while GuildFi aggregates scholarships from multiple sources. Guild monetization includes passive income from NFT lending, token appreciation from guild tokens (YGG, GF, etc.), management fees (10-30% of player earnings), and investment returns from early game backing. At 2021 peak, guild leaders earned $20,000+ monthly, enabling life-changing impact in developing nations where scholarship players earn $20/day versus previous $5/day in traditional work.
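
A minimal sketch of the guild revenue-share math, assuming the 70/20/10 split cited above; actual splits vary by guild, and some use 50-50.

```typescript
// Fractions must sum to 1; the 70/20/10 default mirrors YGG's cited standard.
type Split = { scholar: number; manager: number; guild: number };

function distributeEarnings(monthlyTokens: number, split: Split) {
  return {
    scholar: monthlyTokens * split.scholar,
    manager: monthlyTokens * split.manager,
    guild: monthlyTokens * split.guild,
  };
}

// Example: 4,500 tokens of monthly scholar earnings under the standard split.
console.log(distributeEarnings(4500, { scholar: 0.7, manager: 0.2, guild: 0.1 }));
// -> { scholar: 3150, manager: 900, guild: 450 }
```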

Major players: Leading projects, platforms, and infrastructure

The GameFi ecosystem consolidated around proven platforms and experienced significant evolution from speculative 2021 peaks toward quality-focused 2024-2025 landscape.

Top games span casual to AAA experiences

Lumiterra leads with 300,000+ daily active unique wallets on Ronin (July 2025), ranking #1 by onchain activity through MMORPG mechanics and MegaDrop campaign. Axie Infinity stabilized around 100,000 daily active unique wallets after pioneering play-to-earn, generating $4+ billion cumulative NFT sales despite losing 95% of users from peak. The dual-token AXS/SLP model and scholarship program defined the industry, though unsustainable tokenomics caused the collapse before 2024 resurgence with improved sustainability. Alien Worlds maintains ~100,000 daily active unique wallets on WAX blockchain through mining-focused metaverse with strong retention, while Boxing Star X by Delabs reaches ~100,000 daily active unique wallets through Telegram Mini-App integration on TON/Kaia chains showing strong growth since April 2025. MapleStory N by Nexon represents traditional gaming entering Web3 with 50,000-80,000 daily active unique wallets on Avalanche's Henesys chain as the biggest 2025 blockchain launch bringing AAA IP credibility. Pixels peaked at 260,000+ daily users at launch with $731M market cap and $1.4B trading volume in February 2024, utilizing dual tokens (PIXEL + BERRY) after migrating from Polygon to Ronin and bringing 87K addresses to the platform. The Sandbox built 5+ million user wallets and 800+ brand partnerships (Atari, Snoop Dogg, Gucci) using SAND token as the leading metaverse platform for user-generated content and virtual real estate. Guild of Guardians on Immutable reached 1+ million pre-registrations and top 10 on iOS/Android stores, driving Immutable's 274% daily unique active wallets increase in May 2024.

The Telegram phenomenon disrupted traditional onboarding with Hamster Kombat reaching 239 million users in 3 months through tap-to-earn mechanics on TON blockchain, though losing 86% post-airdrop (300M to 41M) highlights retention challenges. Notcoin achieved a $1.6+ billion market cap as the #2 gaming token with zero crypto onboarding friction, while Catizen built a multi-million user base with a successful token airdrop. Other notable games include Illuvium (AAA RPG, highly anticipated), Gala Games (multi-game platform), Decentraland (metaverse pioneer with MANA token), Gods Unchained (leading trading card game on Immutable), Off The Grid (console/PC shooter on Gunz chain), Splinterlands (established TCG with 6-year track record on Hive), and Heroes of Mavia (2.6+ million users with 3-token system on Ronin).

Blockchain platforms compete on speed, cost, and developer tools

Ronin Network by Sky Mavis holds #1 gaming blockchain position in 2024 with 836K daily unique active wallets peak, hosting Axie Infinity, Pixels, Lumiterra, and Heroes of Mavia. Purpose-built for gaming with sub-second transactions, low fees, and proven scale, Ronin serves as a migration magnet. Immutable (X + zkEVM) achieved fastest growth at 71% year-over-year, surpassing Ronin in late 2024 with 250,000+ monthly active users, 5.5 million Passport signups, $40M total value locked, 250+ games (most in industry), 181 new games in 2024, and 1.1 million daily transactions (414% quarter-over-quarter growth). The dual solution—Immutable X on StarkWare and zkEVM on Polygon—offers zero gas fees for NFTs, EVM compatibility, best developer tools, and major partnerships (Ubisoft, NetMarble). Polygon Network maintains 550K daily unique active wallets, 220M+ addresses, and 2.48B transactions with Ethereum security, massive ecosystem, corporate partnerships, and multiple scaling solutions providing strong metaverse presence. Solana captures approximately 50% of GameFi application fees in Q1 2025 through highest throughput, lowest costs, fast finality, and trading-focused ecosystem. BNB Chain (+ opBNB) replaced Ethereum as volume leader, with opBNB providing $0.0001 gas fees (lowest) and 97 TPS average (highest), offering cost-effectiveness and strong Asian market presence. TON (The Open Network) integrated with Telegram's 700M+ users enabling Hamster Kombat, Notcoin, and Catizen with zero-friction onboarding, social integration, and viral growth potential. Other platforms include Ethereum (20-30% trading share, Layer 2 foundation), Avalanche (customizable subnets, Henesys chain), NEAR (human-readable accounts), and Gunz (Off The Grid dedicated chain).

Traditional gaming giants and VCs shape the future

Animoca Brands dominates as #1 most active investor with portfolio of 400+ companies, $880M raised over 22 rounds (latest $110M from Temasek, Boyu, GGV), key investments in Axie, Sandbox, OpenSea, Dapper Labs, and Yield Guild Games, plus Animoca Ventures $800M-$1B fund with 38+ investments in 2024 (most active in space). GameFi Ventures based in Hong Kong manages portfolio of 21 companies focusing on seed rounds and co-investing with Animoca, while Andreessen Horowitz (a16z) deployed $40M to CCP Games from multi-billion crypto fund. Other major VCs include Bitkraft (gaming/esports focus), Hashed (South Korea, Asian market), NGC Ventures ($100M Fund III, 246 portfolio companies), Paradigm (infrastructure focus), Infinity Ventures Crypto ($70M fund), Makers Fund, and Kingsway Capital.

Ubisoft leads traditional gaming entry with Champions Tactics: Grimoria Chronicles (October 2024 on Oasys) and Might & Magic: Fates (2025 on Immutable), featuring partnerships with Immutable, Animoca, Oasys, and Starknet. The studio sold 10K Warlords and 75K Champions NFTs (sold out) with potential to leverage 138 million players. Square Enix launched Symbiogenesis (Arbitrum/Polygon, 1,500 NFTs) and Final Fantasy VII NFTs, pursuing "blockchain entertainment/Web3" strategy through Animoca Brands Japan partnership. Nexon delivered MapleStory N as major 2025 launch with 50K-80K daily users, while Epic Games shifted policy to welcome P2E games in late 2024, hosting Gods Unchained and Striker Manager 3. CCP Games (EVE Online) raised $40M (a16z lead) for new AAA EVE Web3 game. Additional activity includes Konami (Project Zircon, Castlevania), NetMarble (Immutable partnership, MARBLEX), Sony PlayStation (exploring Web3), Sega, Bandai Namco (research phase), and The Pokémon Company (exploring). Industry data shows 29 of 40 largest gaming companies exploring Web3.

Infrastructure providers enable ecosystem growth

Immutable Passport leads with 5.5 million signups (industry leading), providing seamless Web3 onboarding and game integration, while MetaMask serves 100M+ users as most popular Ethereum wallet with new Stablecoin Earn feature. Others include Trust Wallet, Coinbase Wallet, Phantom (Solana), and WalletConnect. Enjin SDK provides dedicated NFT blockchain with Unity integration, ENJ token (36.2% staking APY), and comprehensive tools (Wallet, Platform, Marketplace, Beam) plus Efinity Matrixchain for cross-chain functionality. ChainSafe Gaming (web3.unity) offers open-source Unity SDK with C#, C++, Blueprints support as premier Unity-blockchain tool with AAA studio adoption. Venly provides multi-chain wallet API and Unity/Unreal plugins with cross-platform toolkit. Others include Moralis Unity SDK, Stardust (API), Halliday, GameSwift (complete platform), Alchemy (infrastructure), and Thirdweb (smart contracts). Game engines include Unity (most popular for Web3 with SDKs from Enjin, ChainSafe, Moralis, Venly), Unreal Engine (AAA graphics, Epic Games now accepts Web3, Web3.js integration), and Godot (open-source, flexible blockchain integration).

DappRadar serves as industry standard tracking 35+ blockchains, 2,000+ games with real-time rankings as primary discovery platform. Footprint Analytics indexes 20+ blockchains, 2,000+ games with deep on-chain analysis and bot detection (developing), used by CoinMarketCap and DeGame. Nansen provides on-chain intelligence with wallet profiling and regular GameFi reports. DeGame covers 3,106 projects across 55+ blockchains with player-focused discovery. Others include Messari, CryptoSlam, and GameFi.org. Middleware and launchpads include EnjinStarter (80+ successful IDOs, $6 minimum stake, multi-chain support), GameFi.org Launchpad (IDO platform with KYC integrated), and Polygon Studios/Immutable Platform (complete development suites).

Market dynamics and strategic considerations

The GameFi market in 2024-2025 represents a critical inflection point, transitioning from speculative hype toward sustainable product-market fit with clear opportunities and severe challenges requiring strategic navigation.

The shift toward quality and sustainability defines success

The pure play-to-earn model collapsed spectacularly—Axie Infinity's 95% user decline, SLP's 99% crash, and the industry's 93% project failure rate proved that attracting mercenary users seeking quick profits creates unsustainable token economies with hyperinflation and Ponzi-scheme dynamics. The 2024-2025 evolution prioritizes "play-and-earn" and "play-to-own" models where gameplay quality comes first with earning as secondary benefit, entertainment value matters over financial speculation, and long-term engagement trumps extraction mechanics. This shift responds to data showing the top reason players quit is games becoming "too pay-to-win" and that 53% cite poor UX/UI as the biggest barrier. The emerging "Web2.5 mullet" strategy—mainstream free-to-play mechanics and UX on surface with blockchain features abstracted away or hidden, listed in traditional app stores (Apple, Google now allowing certain Web3 games), and onboarding requiring zero crypto knowledge—enables mainstream adoption. AAA quality games with 2-5 year development cycles, indie games with compelling gameplay loops, and traditional gaming studios entering space (Ubisoft, Epic Games, Animoca) represent the maturation of production values to compete with traditional gaming's 3.09 billion players worldwide versus only 4.5 million daily active Web3 gamers.

Massive opportunities exist in underserved segments

True Web2 gamers represent the biggest opportunity—3.09B gamers worldwide versus 4.5M daily active Web3 gamers, with 52% not knowing what blockchain games are and 32% having heard of them but never played. The strategy requires abstracting blockchain away completely, marketing as normal games, and onboarding without requiring crypto knowledge or wallets initially. Mobile-first markets offer untapped potential with 73% of global gaming audience on mobile, Southeast Asia and Latin America being smartphone-first with lower entry barriers, and lower-cost blockchains (Solana, Polygon, opBNB) enabling mobile accessibility. The content creator economy remains underutilized—creator-owned economies with fair royalties, NFT-based asset creation and trading, user-generated content with blockchain ownership, and platforms that enforce creator royalties unlike OpenSea controversies. Subscription and hybrid monetization models address over-reliance on token mints and marketplace fees, with subscription models (à la Coinsub) providing predictable revenue, blending free-to-play + in-app purchases + blockchain rewards, and targeting "whale economy" with staking and premium memberships. Emerging niches include fully on-chain games (all logic and state on blockchain enabled by account abstraction wallets and better infrastructure like Dojo on Starknet and MUD on OP Stack with backing from a16z and Jump Crypto), AI-powered GameFi (50% of new projects expected to leverage AI for personalized experiences, dynamic NPCs, procedural content generation), and genre-specific opportunities in RPGs (best suited for Web3 due to character progression, economies, item ownership) and strategy games (complex economies benefit from blockchain transparency).

Retention crisis and tokenomics failures demand solutions

The 60-90% churn within 30 days defines the existential crisis, with 99% drop-off threshold marking failure per CoinGecko and Hamster Kombat's 86% loss (300M to 41M users) after airdrop exemplifying the problem. Root causes include lack of long-term incentives beyond token speculation, poor gameplay mechanics, unsustainable tokenomics with inflation eroding value, bots and mercenary behavior, and airdrop farming without genuine engagement. Solution pathways require dynamic loot distribution, staking-based rewards, skill-based progression, player-controlled economies via DAOs, and immersive storytelling with compelling game loops. Common tokenomics pitfalls include hyperinflation (excessive token minting crashes value), death spirals (declining players → lower demand → price crash → more players leave), pay-to-win concerns (top reason players quit traditional games), Ponzi dynamics (early adopters profit, late entrants lose), and unsustainable supply (DeFi Kingdoms' JEWEL supply expanded 500% to 500M by mid-2024). Best practices emphasize single-token economies (not dual tokens), fixed supply with deflationary mechanisms, token sinks exceeding token faucets (incentivize keeping assets in-game), tying tokens to narratives/characters/utility not just speculation, and controlling inflation through burning, staking, and crafting requirements.

UX complexity and security vulnerabilities create barriers

Barriers identified in the 2024 Blockchain Game Alliance survey show 53% cite poor UX/UI as the biggest challenge, 33% cite poor gameplay experiences, and 11% are deterred by wallet setup complexity. Technical literacy requirements include wallets, private keys, gas fees, and DEX navigation. Solutions demand hosted/custodial wallets managed by the game (users don't see private keys initially), gasless transactions through Layer 2 solutions, fiat onramps, Web2-style login (email/social), and progressive disclosure of Web3 features. Security risks include smart contract vulnerabilities (immutable code means bugs can't be easily fixed), phishing attacks and private key theft, bridge exploits (Ronin Network $600M hack in 2022), and rug pulls with fraud (decentralized means less oversight). Mitigation requires comprehensive smart contract audits (Beosin, CertiK), bug bounty programs, insurance protocols, user education on wallet security, and multi-sig requirements for treasury. The regulatory landscape remains unclear—CyberKongz litigation classified ERC-20 tokens as securities, China bans GameFi entirely, South Korea bans converting game currency to cash (2004 law), Japan has restrictions, US has bipartisan proposals with legislation long expected, and at least 20 countries were predicted to have GameFi frameworks by end 2024. Implications require extensive disclosure and KYC, may restrict US participation, necessitate legal teams from day one, demand token design considering securities law, and navigate gambling regulations in some jurisdictions.

Product managers must prioritize execution and community

Web3 product management demands a 95/5 execution-over-vision split (versus Web2's 70/30) because the market moves too fast for long-term strategic planning, vision lives in whitepapers (done by technical architects), speed of iteration matters most, and market conditions change weekly. This means quick specs over Telegram with developers, launch/measure/iterate rapidly, build hype on Twitter/Discord in real-time, QA carefully but ship fast, and remember smart contract audits are critical (can't patch easily). Product managers must wear many hats with ultra-versatile skill sets including user research (Discord, Twitter listening), data analysis (Dune Analytics, on-chain metrics), UX/UI design (sketch flows, tokenomics), partnership/BD (protocol integrations, guilds), marketing (blogs, Twitter, memes), community management (AMAs, Discord moderation), growth hacking (airdrops, quests, referrals), tokenomics design, and understanding the regulatory landscape. Teams are small, with roles not unbundled as in Web2.

Community-first mindset proves essential—success equals thriving community not just revenue metrics, community owns and governs (DAOs), direct interaction expected (Twitter, Discord), transparency paramount (all on-chain), with the maxim "if community fails, you're NGMI (not gonna make it)." Tactics include regular AMAs and town halls, user-generated content programs, creator support (tools, royalties), guild partnerships, governance tokens and voting, plus memes and viral content. Prioritizing fun gameplay is non-negotiable—players must enjoy the game intrinsically, earning is secondary to entertainment, compelling narrative/characters/worlds matter, tight game loops (not tedious grinding), and polish/quality (compete with Web2 AAA). Avoid games that are "spreadsheets with graphics," pure economic simulators, pay-to-win dynamics, and repetitive boring tasks for token rewards. Understanding tokenomics deeply requires critical knowledge of supply/demand dynamics, inflation/deflation mechanisms, token sinks versus faucets, staking/burning/vesting schedules, liquidity pool management, and secondary market dynamics. Security is paramount because smart contracts are immutable (bugs can't be easily fixed), hacks result in permanent loss, every transaction involves funds (wallets don't separate game from finance), and exploits can drain entire treasury—requiring multiple audits, bug bounties, conservative permissions, multi-sig wallets, incident response plans, and user education.

Winning strategies for 2025 and beyond

Successful GameFi products in 2025 will balance gameplay quality above all else (fun over financialization), community engagement and trust (build loyal authentic fan base), sustainable tokenomics (single token, deflationary, utility-driven), abstract blockchain complexity (Web2.5 approach for onboarding), security first (audits, testing, conservative permissions), hybrid monetization (free-to-play + in-app purchases + blockchain rewards), traditional distribution (app stores not just DApp browsers), data discipline (track retention and lifetime value not vanity metrics), speed of execution (ship/learn/iterate faster than competition), and regulatory compliance (legal from day one). Common pitfalls to avoid include tokenomics over gameplay (building DeFi protocol with game graphics), dual/triple token complexity (confusing, hard to balance, inflation-prone), pay-to-win dynamics (top reason players quit), pure play-to-earn model (attracts mercenaries not genuine players), DAO-led development (bureaucracy kills creativity), ignoring Web2 gamers (targeting only 4.5M crypto natives versus 3B gamers), NFT speculation focus (pre-sales without product), poor onboarding (requiring wallet setup and crypto knowledge upfront), insufficient smart contract audits (hacks destroy projects permanently), neglecting security ("approve all" permissions, weak key management), ignoring regulations (legal issues can shut down project), no go-to-market strategy ("build it and they will come" doesn't work), vanity metrics (volume ≠ success; focus on retention/DAU/lifetime value), poor community management (ghosting Discord, ignoring feedback), launching too early (unfinished game kills reputation), fighting platform incumbents (Apple/Google bans isolate you), ignoring fraud/bots (airdrop farmers and Sybil attacks distort metrics), no token sinks (all faucets, no utility equals hyperinflation), and copying Axie Infinity (that model failed; learn from it).

The path forward requires building incredible games first (not financial instruments), using blockchain strategically not dogmatically, making onboarding invisible (Web2.5 approach), designing sustainable economics (single token, deflationary), prioritizing community and trust, moving fast and iterating constantly, securing everything meticulously, and staying compliant with evolving regulations. The $95-200 billion market size projections are achievable—but only if the industry collectively shifts from speculation to substance. The next 18 months will separate genuine innovation from hype, with product managers who combine Web2 gaming expertise with Web3 technical knowledge, execute ruthlessly, and keep players at the center building the defining products of this era. The future of gaming may indeed be decentralized, but it will succeed by being first and foremost fun.

Balaji's Vision for Cryptoidentity: From Keys to Network States

· 10 min read
Dora Noda
Software Engineer

1) What Balaji means by “cryptoidentity”

In Balaji’s vocabulary, cryptoidentity is identity that is rooted in cryptography—specifically public–private keypairs—and then extended with on‑chain names, verifiable credentials/attestations, and interfaces to legacy (“fiat”) identity. In his words and work:

  • Keys as identity. The bedrock is the idea that, in Bitcoin and web3, your keypair is your identity; authentication and authorization flow from control of private keys rather than from accounts in a corporate database. (balajis.com)
  • Names and reputation on-chain. Naming systems like ENS/SNS anchor human‑readable identities to addresses; credentials (NFTs, “soulbound” tokens, on‑chain “cryptocredentials”) and attestations layer reputation and history onto those identities.
  • On‑chain, auditable “census.” For societies and network states, identity participates in a cryptographically auditable census (proof‑of‑human/unique person, proof‑of‑income, proof‑of‑real‑estate) to demonstrate real population and economic activity.
  • Bridging legacy ID ↔ crypto ID. He explicitly argues we need a “fiat identity ↔ crypto identity exchange”—akin to fiat↔crypto exchanges—so “digital passports follow digital currency.” He highlights “crypto passports” as the next interface after stablecoins. (Circle)
  • Identity for a “web3 of trust” in the AI era. To counter deepfakes and bots, he promotes content signed by on‑chain identities (e.g., ENS) so provenance and authorship are cryptographically verifiable across the open web. (Chainlink Today)
  • Civic protection. In his shorthand: “Cryptocurrency partially protects you from debanking. Cryptoidentity partially protects you from denaturalization.” (X (formerly Twitter))

2) How his view evolved (a short chronology)

  • 2019–2020 – cryptographic identity & pseudonymity. Balaji’s writings emphasize public‑key cryptography as identity (keys-as-ID) and forecast decentralized identity + reputation growing through the 2020s. At the same time, his “pseudonymous economy” talk argues for persistent, reputation‑bearing pseudonyms to protect speech and experiment with new kinds of work and organization. (balajis.com)
  • 2022 – The Network State. He formalizes identity’s job in a network state: on‑chain census; ENS‑style identity; cryptographic proofs (of personhood/income/real‑estate); and crypto‑credentials/soulbounds. Identity is infrastructural—what the society counts and what the world can verify.
  • 2022–2024 – bridges to legacy systems. In public interviews and his podcast, he calls for fiat↔crypto identity bridges (e.g., Palau’s RNS.ID digital residency) and stresses moving “paper” records to code. (Circle)
  • 2023–present – identity as defense against AI fakes. He frames cryptoidentity as the backbone of a “web3 of trust”: signed content, on‑chain provenance, and economic friction (staking, payments) to separate humans from bots. (Chainlink Today)

3) The technical stack Balaji gestures toward

Root primitive: keys & wallets

  • Control of a private key = control of an identity; rotate/partition keys for different personas and risk profiles. (balajis.com)

Resolution & login

  • ENS/SNS map human‑readable names to addresses; Sign‑In with Ethereum (EIP‑4361) turns those addresses into a standard way to authenticate to off‑chain apps.
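
As a concrete illustration, a hedged sketch of forward and reverse ENS resolution with ethers.js, the building block SIWE-style login sits on; the RPC URL and `alice.eth` are placeholders.

```typescript
import { ethers } from "ethers";

// Placeholder endpoint; any Ethereum mainnet provider works.
const provider = new ethers.JsonRpcProvider("https://eth.example.com");

async function resolvePersona(name: string) {
  // Forward resolution: ENS name -> address
  const address = await provider.resolveName(name);
  if (!address) throw new Error(`No address set for ${name}`);

  // Reverse resolution: confirm the address points back to the same name,
  // which is what prevents someone claiming an ENS name they don't control.
  const reverse = await provider.lookupAddress(address);
  return { name, address, reverseVerified: reverse === name };
}

resolvePersona("alice.eth").then(console.log);
```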

Credentials & attestations (reputation layer)

  • W3C Verifiable Credentials (VC 2.0) define an interoperable way to issue/hold/verify claims (e.g., KYC checks, diplomas).
  • Ethereum Attestation Service (EAS) provides a public good layer for on‑ or off‑chain attestations to build identity, reputation, and registries that applications can verify. (W3C)

Proof‑of‑personhood & uniqueness

  • In The Network State, Balaji sketches “proof‑of‑human” techniques for the on‑chain census; outside his work, approaches like World ID try to verify humanness/uniqueness, which has also raised data‑protection concerns—illustrating the trade‑offs of biometric PoP.

Bridges to legacy identity

  • Palau RNS.ID is a prominent example of a sovereign issuing legal ID with on‑chain components; acceptance is uneven across platforms, underscoring the “bridge” problem Balaji highlights. (Biometric Update)

Provenance & anti‑deepfake

  • He advocates signing content from ENS‑linked addresses so every image/post/video can be traced to a cryptographic identity in a “web3 of trust.” (Chainlink Today)
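
A minimal sketch of that signed-content pattern using plain EIP-191 message signatures via ethers.js; the environment-variable name is a placeholder, and a production scheme would likely also timestamp the digest and check the signer against the ENS name's current resolution.

```typescript
import { ethers } from "ethers";

// Key behind e.g. author.eth; placeholder env var, never hard-code keys.
const wallet = new ethers.Wallet(process.env.SIGNING_KEY!);

async function signArtifact(content: string) {
  const digest = ethers.hashMessage(content); // EIP-191 personal-message hash
  const signature = await wallet.signMessage(content);
  return { digest, signature, signer: wallet.address };
}

function verifyArtifact(content: string, signature: string, expected: string) {
  // Recover the signing address from the signature; a full verifier would
  // also confirm `expected` is the address the ENS name resolves to today.
  return ethers.verifyMessage(content, signature) === expected;
}

const { signature, signer } = await signArtifact("post body");
console.log(verifyArtifact("post body", signature, signer)); // true
```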

4) Why it matters (Balaji’s strategic claims)

  1. Censorship & deplatforming resistance: Keys and decentralized naming reduce reliance on centralized ID providers. (Keys are bearer‑style identities.) (balajis.com)
  2. Auditability for societies: Network states require verifiable population/income/footprint; auditability is impossible without identity that can be proven on‑chain.
  3. AI resilience: A cryptographic identity layer (plus signatures/attestations) underpins authenticity online, reversing AI‑driven fakery. (Chainlink Today)
  4. Interoperability & composability: Standards (ENS, SIWE, VC/EAS) make identity portable across apps and jurisdictions.

5) How it connects to The Network State

Balaji’s book repeatedly pairs identity with a real‑time, on‑chain census—including proof‑of‑human, proof‑of‑income, and proof‑of‑real‑estate—and highlights naming (ENS) and crypto‑credentials as core primitives. He also describes “ENS‑login‑to‑physical‑world” patterns (digital keys to doors/services) embedded in a social smart contract, pointing to cryptoidentity as the access layer for both digital and (eventually) physical governance.


6) Implementation blueprint (a practical path you can execute today)

A. Establish the base identities

  1. Generate separate keypairs for: (i) legal/“real name”, (ii) work/professional pseudonym, (iii) public‑speech pseudonym. Store each in a different wallet configuration (hardware, MPC, or smart accounts with guardians). (balajis.com)
  2. Register ENS names for each persona; publish minimal public profile metadata.
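
A minimal sketch of step 1, assuming ethers.js: each persona gets an independently generated key, so the personas cannot be linked through a shared seed phrase.

```typescript
import { ethers } from "ethers";

const personas = ["legal", "professional", "public-speech"] as const;

const wallets = Object.fromEntries(
  personas.map((p) => {
    const w = ethers.Wallet.createRandom(); // fresh entropy per persona
    return [p, { address: w.address, mnemonic: w.mnemonic?.phrase }];
  })
);

// In practice the mnemonics go straight into hardware/MPC custody and are
// never logged; printed here only to show the shape of the output.
console.log(wallets);
```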

B. Add authentication & content provenance

  3. Enable SIWE (EIP‑4361) for app logins; phase out passwords/social logins. (Ethereum Improvement Proposals)
  4. Sign public artifacts (posts, images, code releases) from your ENS‑linked address; publish a simple “signed‑content” feed others can verify. (Chainlink Today)
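
A hedged sketch of the SIWE login round-trip using the siwe npm package with ethers.js; the domain and URI are placeholders, and a real deployment would issue and track nonces server-side to prevent replay.

```typescript
import { SiweMessage, generateNonce } from "siwe";
import { ethers } from "ethers";

// Stands in for the user's wallet; in a browser this would be the injected signer.
const wallet = ethers.Wallet.createRandom();

// Client builds and signs a structured EIP-4361 message.
const message = new SiweMessage({
  domain: "app.example.com",   // placeholder domain
  address: wallet.address,
  statement: "Sign in to Example App",
  uri: "https://app.example.com",
  version: "1",
  chainId: 1,
  nonce: generateNonce(),      // server-issued in production
});

const signature = await wallet.signMessage(message.prepareMessage());

// Server-side verification checks the signature against the message fields.
const { success } = await message.verify({ signature });
console.log("login ok:", success);
```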

C. Layer credentials and attestations

  5. Issue/collect VCs for legal facts (company role, licenses) and EAS attestations for soft signals (reputation, verified contributions, attendance). Keep sensitive claims off‑chain with only hashes/receipts on‑chain. (W3C)
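
A sketch of issuing one such soft-signal attestation, assuming the official eas-sdk; the schema and its UID (elided as 0x...) are hypothetical, and the contract address should be checked against the EAS docs for your chain.

```typescript
import { EAS, SchemaEncoder } from "@ethereum-attestation-service/eas-sdk";
import { ethers } from "ethers";

// Ethereum mainnet EAS contract per EAS docs; verify for your chain.
const eas = new EAS("0xA1207F3BBa224E2c9c3c6D5aF63D0eb1582Ce587");
const signer = new ethers.Wallet(
  process.env.ATTESTER_KEY!, // placeholder env var for the attester's key
  new ethers.JsonRpcProvider("https://eth.example.com")
);
eas.connect(signer);

// Hypothetical minimal schema: what was contributed, and a coarse score.
const encoder = new SchemaEncoder("string contribution, uint8 score");
const data = encoder.encodeData([
  { name: "contribution", value: "governance-call-2025-06", type: "string" },
  { name: "score", value: 5, type: "uint8" },
]);

const tx = await eas.attest({
  schema: "0x...", // schema UID registered beforehand (elided)
  data: {
    recipient: "0xRecipientAddress",
    expirationTime: 0n, // no expiry
    revocable: true,
    data,
  },
});
console.log("attestation UID:", await tx.wait());
```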

D. Bridge to legacy identity when needed

  6. Where lawful and useful, link a sovereign/enterprise ID (e.g., Palau RNS.ID) to your cryptoidentity for KYC‑gated venues. Expect heterogeneous acceptance and maintain alternates. (Biometric Update)

E. Deploy for groups/societies

  7. For a startup society or DAO:

  • Gate membership with ENS + a proof‑of‑human method you deem acceptable.
  • Maintain a public, auditable census (counts of members/income/holdings) using oracles plus signed attestations, not raw PII.

7) Risks, critiques, and open questions

  • Privacy/pseudonymity erosion. Blockchain analysis can cluster wallets; Balaji’s own pseudonymity framing warns how a handful of data “bits” can re‑identify you. Use mixers/privacy tech carefully and lawfully—but recognize limits. (blog.blockstack.org)
  • Proof‑of‑personhood trade‑offs. Biometric PoP (e.g., iris) invites significant data‑protection scrutiny; alternative PoP methods reduce risk but may increase Sybil vulnerability. (law.kuleuven.be)
  • Bridge brittleness. Palau‑style IDs are not a universal KYC pass; acceptance varies by platform and jurisdiction and can change. Build for graceful degradation. (Malakouti Law)
  • Key loss & coercion. Keys can be stolen/coerced; use multi‑sig/guardians and incident‑response policies. (Balaji’s model assumes cryptography + consent, which must be engineered socially.) (balajis.com)
  • Name/registry centralization. ENS or any naming authority becomes a policy chokepoint; mitigate via multi‑persona design and exportable proofs.

8) How Balaji’s cryptoidentity maps to standards (and where it differs)

  • Alignment:

    • DIDs + VCs (W3C) = portable, interoperable identity/claims; SIWE = wallet‑native authentication; EAS = attestations for reputation/registries. These are the components he points to—even if he uses plain language (ENS, credentials) rather than standards acronyms. (W3C)
  • Differences/emphasis:

    • He elevates societal auditability (on‑chain census) and AI‑era provenance (signed content) more than many DID/VC discussions, and he explicitly pushes fiat↔crypto identity bridges and crypto passports as a near‑term priority.

9) If you’re building: a minimal viable “cryptoidentity” rollout (90 days)

  1. Week 1–2: Keys, ENS, SIWE enabled; publish your signing policy and start signing public posts/releases. (Ethereum Improvement Proposals)
  2. Week 3–6: Integrate VCs/EAS for role/membership/participation; build a public “trust page” that verifies these programmatically. (W3C)
  3. Week 7–10: Stand up a basic census dashboard (aggregate member count, on‑chain treasury/income proofs) with clear privacy posture; see the sketch after this list.
  4. Week 11–13: Pilot a legacy bridge (e.g., RNS.ID where appropriate) for one compliance‑intensive flow; publish results (what worked/failed). (Biometric Update)
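
As a sketch of the census dashboard in step 3, assuming ethers.js and a hypothetical member-registry contract with a `memberCount()` view; the point is publishing block-stamped aggregates, never raw PII.

```typescript
import { ethers } from "ethers";

const provider = new ethers.JsonRpcProvider("https://eth.example.com");

// Hypothetical registry contract and assumed interface.
const registry = new ethers.Contract(
  "0xMemberRegistry",
  ["function memberCount() view returns (uint256)"],
  provider
);

async function census() {
  const [members, treasuryWei] = await Promise.all([
    registry.memberCount(),
    provider.getBalance("0xTreasuryAddress"), // hypothetical treasury
  ]);
  return {
    members: members.toString(),
    treasuryEth: ethers.formatEther(treasuryWei),
    asOfBlock: await provider.getBlockNumber(), // makes the snapshot auditable
  };
}

census().then(console.log);
```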

Selected sources (primary and load‑bearing)

  • The Network State (on‑chain census; ENS/identity; crypto‑credentials) and “ENS‑login‑to‑physical‑world” examples.
  • Public‑Key Cryptography (keys as identity). (balajis.com)
  • Circle – The Money Movement (Ep. 74) (fiat↔crypto identity bridge; “crypto passports”). (Circle)
  • The Network State podcast, Ep. 10 (fiat‑identity→crypto‑identity exchange; Palau RNS.ID). (thenetworkstate.com)
  • Chainlink Today (signed content/ENS to fight deepfakes; “web3 of trust”). (Chainlink Today)
  • Balaji on X (“Cryptoidentity…denaturalization”). (X (formerly Twitter))
  • Standards: W3C DID Core, VC 2.0; EIP‑4361 (SIWE); EAS docs. (W3C)
  • RNS.ID / Palau (real‑world bridge; mixed acceptance). (Biometric Update)
  • Pseudonymous Economy (identity & 33‑bits re‑identification intuition). (blog.blockstack.org)

Bottom line

For Balaji, cryptoidentity is not just “DID tech.” It’s a civilizational primitive: keys and signatures at the base; names and credentials on top; bridges to legacy identity; and a verifiable public record that scales from individuals to network societies. It’s how you get authentic people and authentic records in an AI‑flooded internet—and how a startup society can prove it’s real without asking the world to trust its word. (Chainlink Today)


MCP in the Web3 Ecosystem: A Comprehensive Review

· 49 min read
Dora Noda
Software Engineer

1. Definition and Origin of MCP in Web3 Context

The Model Context Protocol (MCP) is an open standard that connects AI assistants (like large language models) to external data sources, tools, and environments. Often described as a "USB-C port for AI" due to its universal plug-and-play nature, MCP was developed by Anthropic and first introduced in late November 2024. It emerged as a solution to break AI models out of isolation by securely bridging them with the “systems where data lives” – from databases and APIs to development environments and blockchains.

Originally an experimental side project at Anthropic, MCP quickly gained traction. Open-source reference implementations shipped alongside the specification, and by early 2025 it had become the de facto standard for agentic AI integration, with leading AI labs (OpenAI, Google DeepMind, Meta AI) adopting it natively. This rapid uptake was especially notable in the Web3 community. Blockchain developers saw MCP as a way to infuse AI capabilities into decentralized applications, leading to a proliferation of community-built MCP connectors for on-chain data and services. In fact, some analysts argue MCP may fulfill Web3’s original vision of a decentralized, user-centric internet in a more practical way than blockchain alone, by using natural language interfaces to empower users.

In summary, MCP is not a blockchain or token, but an open protocol born in the AI world that has rapidly been embraced within the Web3 ecosystem as a bridge between AI agents and decentralized data sources. Anthropic open-sourced the standard (with an initial GitHub spec and SDKs) and cultivated an open community around it. This community-driven approach set the stage for MCP’s integration into Web3, where it is now viewed as foundational infrastructure for AI-enabled decentralized applications.

2. Technical Architecture and Core Protocols

MCP operates on a lightweight client–server architecture with three principal roles:

  • MCP Host: The AI application or agent itself, which orchestrates requests. This could be a chatbot (Claude, ChatGPT) or an AI-powered app that needs external data. The host initiates interactions, asking for tools or information via MCP.
  • MCP Client: A connector component that the host uses to communicate with servers. The client maintains the connection, manages request/response messaging, and can handle multiple servers in parallel. For example, a developer tool like Cursor or VS Code’s agent mode can act as an MCP client bridging the local AI environment with various MCP servers.
  • MCP Server: A service that exposes some contextual data or functionality to the AI. Servers provide tools, resources, or prompts that the AI can use. In practice, an MCP server could interface with a database, a cloud app, or a blockchain node, and present a standardized set of operations to the AI. Each client-server pair communicates over its own channel, so an AI agent can tap multiple servers concurrently for different needs.
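To make the three roles concrete, below is a minimal sketch of an MCP server, assuming the public TypeScript SDK (`@modelcontextprotocol/sdk`). Import paths and handler signatures follow the SDK's documented patterns but may shift between versions, so treat this as illustrative rather than definitive:

```ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Declare the server and the capability classes it exposes to hosts.
const server = new Server(
  { name: "demo-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise one tool; clients discover it via a tools/list request.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "ping",
      description: "Health check that returns 'pong'.",
      inputSchema: { type: "object", properties: {} },
    },
  ],
}));

// Execute the tool when a client issues tools/call.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "ping") {
    return { content: [{ type: "text", text: "pong" }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// A local host (e.g. Claude Desktop) launches this process and talks over stdio.
await server.connect(new StdioServerTransport());
```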

Core Primitives: MCP defines a set of standard message types and primitives that structure the AI-tool interaction. The three fundamental primitives are:

  • Tools: Discrete operations or functions the AI can invoke on a server. For instance, a “searchDocuments” tool or an “eth_call” tool. Tools encapsulate actions like querying an API, performing a calculation, or calling a smart contract function. The MCP client can request a list of available tools from a server and call them as needed.
  • Resources: Data endpoints that the AI can read from (or sometimes write to) via the server. These could be files, database entries, blockchain state (blocks, transactions), or any contextual data. The AI can list resources and retrieve their content through standard MCP messages (e.g. ListResources and ReadResource requests).
  • Prompts: Structured prompt templates or instructions that servers can provide to guide the AI’s reasoning. For example, a server might supply a formatting template or a pre-defined query prompt. The AI can request a list of prompt templates and use them to maintain consistency in how it interacts with that server.

Under the hood, MCP communications are JSON-RPC 2.0 messages exchanged in a request-response pattern. The protocol's specification defines messages like InitializeRequest, ListTools, CallTool, ListResources, etc., which ensure that any MCP-compliant client can talk to any MCP server in a uniform way. This standardization is what allows an AI agent to discover what it can do: upon connecting to a new server, it can ask "what tools and data do you offer?" and then dynamically decide how to use them. The exchange sketched below illustrates the wire format.
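The method names (tools/list, tools/call) and the result envelope follow the published spec; the tool name and its arguments are hypothetical:

```ts
// Discovery: ask the server what it offers.
const listTools = { jsonrpc: "2.0", id: 1, method: "tools/list" };

// Invocation: call one of the advertised tools.
const callTool = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "eth_getBalance",                        // hypothetical tool
    arguments: { address: "0x...", block: "latest" },
  },
};

// A conforming server replies with a result envelope such as:
// { jsonrpc: "2.0", id: 2,
//   result: { content: [{ type: "text", text: "12.3 ETH" }] } }
```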

Security and Execution Model: MCP was designed with secure, controlled interactions in mind. The AI model itself doesn’t execute arbitrary code; it sends high-level intents (via the client) to the server, which then performs the actual operation (e.g., fetching data or calling an API) and returns results. This separation means sensitive actions (like blockchain transactions or database writes) can be sandboxed or require explicit user approval. For example, there are messages like Ping (to keep connections alive) and even a CreateMessageRequest which allows an MCP server to ask the client’s AI to generate a sub-response, typically gated by user confirmation. Features like authentication, access control, and audit logging are being actively developed to ensure MCP can be used safely in enterprise and decentralized environments (more on this in the Roadmap section).
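As a sketch of how that separation can be enforced in practice (the tool name, policy, and approval hook below are all hypothetical):

```ts
// A guarded tool handler: the model only expresses an intent; the server
// enforces policy and can demand explicit user approval before acting.
type Approval = (summary: string) => Promise<boolean>; // e.g. a wallet pop-up

async function handleSendTransaction(
  args: { to: string; valueWei: bigint },
  approve: Approval
) {
  const ok = await approve(`Send ${args.valueWei} wei to ${args.to}?`);
  if (!ok) {
    // MCP tool results can flag failures with isError rather than throwing.
    return {
      content: [{ type: "text", text: "User rejected the transaction." }],
      isError: true,
    };
  }
  // ...sign and broadcast via a key held server-side, never shown to the model...
  return { content: [{ type: "text", text: "Transaction submitted." }] };
}
```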

In summary, MCP’s architecture relies on a standardized message protocol (with JSON-RPC style calls) that connects AI agents (hosts) to a flexible array of servers providing tools, data, and actions. This open architecture is model-agnostic and platform-agnostic – any AI agent can use MCP to talk to any resource, and any developer can create a new MCP server for a data source without needing to modify the AI’s core code. This plug-and-play extensibility is what makes MCP powerful in Web3: one can build servers for blockchain nodes, smart contracts, wallets, or oracles and have AI agents seamlessly integrate those capabilities alongside web2 APIs.

3. Use Cases and Applications of MCP in Web3

MCP unlocks a wide range of use cases by enabling AI-driven applications to access blockchain data and execute on-chain or off-chain actions in a secure, high-level way. Here are some key applications and problems it helps solve in the Web3 domain:

  • On-Chain Data Analysis and Querying: AI agents can query live blockchain state in real time to provide insights or trigger actions. For example, an MCP server connected to an Ethereum node allows an AI to fetch account balances, read smart contract storage, trace transactions, or retrieve event logs on demand. This turns a chatbot or coding assistant into a blockchain explorer. Developers can ask an AI assistant questions like "What's the current liquidity in Uniswap pool X?" or "Simulate this Ethereum transaction's gas cost," and the AI will use MCP tools to call an RPC node and get the answer from the live chain (a minimal sketch of such a tool follows this list). This is far more powerful than relying on the AI's training data or static snapshots.
  • Automated DeFi Portfolio Management: By combining data access and action tools, AI agents can manage crypto portfolios or DeFi positions. For instance, an “AI Vault Optimizer” could monitor a user’s positions across yield farms and automatically suggest or execute rebalancing strategies based on real-time market conditions. Similarly, an AI could act as a DeFi portfolio manager, adjusting allocations between protocols when risk or rates change. MCP provides the standard interface for the AI to read on-chain metrics (prices, liquidity, collateral ratios) and then invoke tools to execute transactions (like moving funds or swapping assets) if permitted. This can help users maximize yield or manage risk 24/7 in a way that would be hard to do manually.
  • AI-Powered User Agents for Transactions: Think of a personal AI assistant that can handle blockchain interactions for a user. With MCP, such an agent can integrate with wallets and DApps to perform tasks via natural language commands. For example, a user could say, "AI, send 0.5 ETH from my wallet to Alice" or "Stake my tokens in the highest-APY pool." The AI, through MCP, would use a secure wallet server (holding the user's private key) to create and sign the transaction, and a blockchain MCP server to broadcast it. This scenario turns complex command-line or MetaMask interactions into a conversational experience. Secure wallet MCP servers are essential here, enforcing permissions and confirmations, but the end result is on-chain transactions streamlined through AI assistance.
  • Developer Assistants and Smart Contract Debugging: Web3 developers can leverage MCP-based AI assistants that are context-aware of blockchain infrastructure. For example, Chainstack’s MCP servers for EVM and Solana give AI coding copilots deep visibility into the developer’s blockchain environment. A smart contract engineer using an AI assistant (in VS Code or an IDE) can have the AI fetch the current state of a contract on a testnet, run a simulation of a transaction, or check logs – all via MCP calls to local blockchain nodes. This helps in debugging and testing contracts. The AI is no longer coding “blindly”; it can actually verify how code behaves on-chain in real time. This use case solves a major pain point by allowing AI to continuously ingest up-to-date docs (via a documentation MCP server) and to query the blockchain directly, reducing hallucinations and making suggestions far more accurate.
  • Cross-Protocol Coordination: Because MCP is a unified interface, a single AI agent can coordinate across multiple protocols and services simultaneously – something extremely powerful in Web3's interconnected landscape. Imagine an autonomous trading agent that monitors various DeFi platforms for arbitrage. Through MCP, one agent could concurrently interface with Aave's lending markets, a LayerZero cross-chain bridge, and an MEV (Maximal Extractable Value) analytics service, all through a coherent interface. The AI could, in one "thought process," gather liquidity data from Ethereum (via an MCP server on an Ethereum node), get price info or oracle data (via another server), and even invoke bridging or swapping operations. Previously, such multi-platform coordination would require complex custom-coded bots, but MCP gives a generalizable way for an AI to navigate the entire Web3 ecosystem as if it were one big data/resource pool. This could enable advanced use cases like cross-chain yield optimization or automated liquidation protection, where an AI moves assets or collateral across chains proactively.
  • AI Advisory and Support Bots: Another category is user-facing advisors in crypto applications. For instance, a DeFi help chatbot integrated into a platform like Uniswap or Compound could use MCP to pull in real-time info for the user. If a user asks, “What’s the best way to hedge my position?”, the AI can fetch current rates, volatility data, and the user’s portfolio details via MCP, then give a context-aware answer. Platforms are exploring AI-powered assistants embedded in wallets or dApps that can guide users through complex transactions, explain risks, and even execute sequences of steps with approval. These AI agents effectively sit on top of multiple Web3 services (DEXes, lending pools, insurance protocols), using MCP to query and command them as needed, thereby simplifying the user experience.
  • Beyond Web3 – Multi-Domain Workflows: Although our focus is Web3, it's worth noting MCP’s use cases extend to any domain where AI needs external data. It’s already being used to connect AI to things like Google Drive, Slack, GitHub, Figma, and more. In practice, a single AI agent could straddle Web3 and Web2: e.g., analyzing an Excel financial model from Google Drive, then suggesting on-chain trades based on that analysis, all in one workflow. MCP’s flexibility allows cross-domain automation (e.g., "schedule my meeting if my DAO vote passes, and email the results") that blends blockchain actions with everyday tools.
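To ground the on-chain data use case above, here is a minimal sketch of a handler body for the hypothetical "eth_getBalance" tool, assuming ethers v6 for the RPC call; the endpoint URL and response shape are illustrative:

```ts
// Wiring a live Ethereum query into an MCP tool handler with ethers v6.
import { JsonRpcProvider, formatEther } from "ethers";

// Any standard JSON-RPC endpoint works; this URL is a placeholder.
const provider = new JsonRpcProvider("https://eth.example-rpc.com");

async function getBalanceTool(args: { address: string }) {
  const wei = await provider.getBalance(args.address); // live chain state
  return {
    content: [{ type: "text", text: `${formatEther(wei)} ETH` }],
  };
}
```

The same pattern extends to contract reads, event logs, or transaction simulation; each becomes one more tool the server advertises.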

Problems Solved: The overarching problem MCP addresses is the lack of a unified interface for AI to interact with live data and services. Before MCP, if you wanted an AI to use a new service, you had to hand-code a plugin or integration for that specific service’s API, often in an ad-hoc way. In Web3 this was especially cumbersome – every blockchain or protocol has its own interfaces, and no AI could hope to support them all. MCP solves this by standardizing how the AI describes what it wants (natural language mapped to tool calls) and how services describe what they offer. This drastically reduces integration work. For example, instead of writing a custom plugin for each DeFi protocol, a developer can write one MCP server for that protocol (essentially annotating its functions in natural language). Any MCP-enabled AI (whether Claude, ChatGPT, or open-source models) can then immediately utilize it. This makes AI extensible in a plug-and-play fashion, much like how adding a new device via a universal port is easier than installing a new interface card.

In sum, MCP in Web3 enables AI agents to become first-class citizens of the blockchain world – querying, analyzing, and even transacting across decentralized systems, all through safe, standardized channels. This opens the door to more autonomous dApps, smarter user agents, and seamless integration of on-chain and off-chain intelligence.

4. Tokenomics and Governance Model

Unlike typical Web3 protocols, MCP does not have a native token or cryptocurrency. It is not a blockchain or a decentralized network on its own, but rather an open protocol specification (more akin to HTTP or JSON-RPC in spirit). Thus, there is no built-in tokenomics – no token issuance, staking, or fee model inherent to using MCP. AI applications and servers communicate via MCP without any cryptocurrency involved; for instance, an AI calling a blockchain via MCP might pay gas fees for the blockchain transaction, but MCP itself adds no extra token fee. This design reflects MCP’s origin in the AI community: it was introduced as a technical standard to improve AI-tool interactions, not as a tokenized project.

Governance of MCP is carried out in an open-source, community-driven fashion. After releasing MCP as an open standard, Anthropic signaled a commitment to collaborative development. A broad steering committee and working groups have formed to shepherd the protocol’s evolution. Notably, by mid-2025, major stakeholders like Microsoft and GitHub joined the MCP steering committee alongside Anthropic. This was announced at Microsoft Build 2025, indicating a coalition of industry players guiding MCP’s roadmap and standards decisions. The committee and maintainers work via an open governance process: proposals to change or extend MCP are typically discussed publicly (e.g. via GitHub issues and “SEP” – Standard Enhancement Proposal – guidelines). There is also an MCP Registry working group (with maintainers from companies like Block, PulseMCP, GitHub, and Anthropic) which exemplifies the multi-party governance. In early 2025, contributors from at least 9 different organizations collaborated to build a unified MCP server registry for discovery, demonstrating how development is decentralized across community members rather than controlled by one entity.

Since there is no token, governance incentives rely on the common interests of stakeholders (AI companies, cloud providers, blockchain developers, etc.) to improve the protocol for all. This is somewhat analogous to how W3C or IETF standards are governed, but with a faster-moving GitHub-centric process. For example, Microsoft and Anthropic worked together to design an improved authorization spec for MCP (integrating things like OAuth and single sign-on), and GitHub collaborated on the official MCP Registry service for listing available servers. These enhancements were contributed back to the MCP spec for everyone’s benefit.

It’s worth noting that while MCP itself is not tokenized, there are forward-looking ideas about layering economic incentives and decentralization on top of MCP. Some researchers and thought leaders in Web3 foresee the emergence of “MCP networks” – essentially decentralized networks of MCP servers and agents that use blockchain-like mechanisms for discovery, trust, and rewards. In such a scenario, one could imagine a token being used to reward those who run high-quality MCP servers (similar to how miners or node operators are incentivized). Capabilities like reputation ratings, verifiable computation, and node discovery could be facilitated by smart contracts or a blockchain, with a token driving honest behavior. This is still conceptual, but projects like MIT’s Namda (discussed later) are experimenting with token-based incentive mechanisms for networks of AI agents using MCP. If these ideas mature, MCP might intersect with on-chain tokenomics more directly, but as of 2025 the core MCP standard remains token-free.

In summary, MCP’s “governance model” is that of an open technology standard: collaboratively maintained by a community and a steering committee of experts, with no on-chain governance token. Decisions are guided by technical merit and broad consensus rather than coin-weighted voting. This distinguishes MCP from many Web3 protocols – it aims to fulfill Web3’s ideals (decentralization, interoperability, user empowerment) through open software and standards, not through a proprietary blockchain or token. In the words of one analysis, “the promise of Web3... can finally be realized not through blockchain and cryptocurrency, but through natural language and AI agents”, positioning MCP as a key enabler of that vision. That said, as MCP networks grow, we may see hybrid models where blockchain-based governance or incentive mechanisms augment the ecosystem – a space to watch closely.

5. Community and Ecosystem

The MCP ecosystem has grown explosively in a short time, spanning AI developers, open-source contributors, Web3 engineers, and major tech companies. It’s a vibrant community effort, with key contributors and partnerships including:

  • Anthropic: As the creator, Anthropic seeded the ecosystem by open-sourcing the MCP spec and several reference servers (for Google Drive, Slack, GitHub, etc.). Anthropic continues to lead development (for example, staff like Theodora Chu serve as MCP product managers, and Anthropic’s team contributes heavily to spec updates and community support). Anthropic’s openness attracted others to build on MCP rather than see it as a single-company tool.

  • Early Adopters (Block, Apollo, Zed, Replit, Codeium, Sourcegraph): In the first months after release, a wave of early adopters implemented MCP in their products. Block (formerly Square) integrated MCP to explore AI agentic systems in fintech – Block’s CTO praised MCP as an open bridge connecting AI to real-world applications. Apollo (likely Apollo GraphQL) also integrated MCP to allow AI access to internal data. Developer tool companies like Zed (code editor), Replit (cloud IDE), Codeium (AI coding assistant), and Sourcegraph (code search) each worked to add MCP support. For instance, Sourcegraph uses MCP so an AI coding assistant can retrieve relevant code from a repository in response to a question, and Replit’s IDE agents can pull in project-specific context. These early adopters gave MCP credibility and visibility.

  • Big Tech Endorsement – OpenAI, Microsoft, Google: In a notable turn, companies that are otherwise competitors aligned on MCP. OpenAI's CEO Sam Altman publicly announced in March 2025 that OpenAI would add MCP support across its products (including ChatGPT's desktop app), saying "People love MCP and we are excited to add support across our products". This meant OpenAI's Agent API and ChatGPT plugins would speak MCP, ensuring interoperability. Just weeks later, Google DeepMind's CEO Demis Hassabis revealed that Google's upcoming Gemini models and tools would support MCP, calling it a good protocol and an open standard for the "AI agentic era". Microsoft not only joined the steering committee but partnered with Anthropic to build an official C# SDK for MCP to serve the enterprise developer community. Microsoft's GitHub unit integrated MCP into GitHub Copilot's agent mode in VS Code, enabling Copilot to use MCP servers for things like repository searching and running test cases. Additionally, Microsoft announced Windows 11 would expose certain OS functions (like file system access) as MCP servers so AI agents can interact with the operating system securely. The collaboration among OpenAI, Microsoft, Google, and Anthropic – all rallying around MCP – is extraordinary and underscores the community-over-competition ethos of this standard.

  • Web3 Developer Community: A number of blockchain developers and startups have embraced MCP. Several community-driven MCP servers have been created to serve blockchain use cases:

    • The team at Alchemy (a leading blockchain infrastructure provider) built an Alchemy MCP Server that offers on-demand blockchain analytics tools via MCP. This likely lets an AI get blockchain stats (like historical transactions, address activity) through Alchemy’s APIs using natural language.
    • Contributors developed a Bitcoin & Lightning Network MCP Server to interact with Bitcoin nodes and the Lightning payment network, enabling AI agents to read Bitcoin block data or even create Lightning invoices via standard tools.
    • The crypto media and education group Bankless created an Onchain MCP Server focused on Web3 financial interactions, possibly providing an interface to DeFi protocols (sending transactions, querying DeFi positions, etc.) for AI assistants.
    • Projects like Rollup.codes (a knowledge base for Ethereum Layer 2s) made an MCP server for rollup ecosystem info, so an AI can answer technical questions about rollups by querying this server.
    • Chainstack, a blockchain node provider, launched a suite of MCP servers (covered earlier) for documentation, EVM chain data, and Solana, explicitly marketing it as “putting your AI on blockchain steroids” for Web3 builders.

    Additionally, Web3-focused communities have sprung up around MCP. For example, PulseMCP and Goose are community initiatives referenced as helping build the MCP registry. We’re also seeing cross-pollination with AI agent frameworks: the LangChain community integrated adapters so that all MCP servers can be used as tools in LangChain-powered agents, and open-source AI platforms like Hugging Face TGI (text-generation-inference) are exploring MCP compatibility. The result is a rich ecosystem where new MCP servers are announced almost daily, serving everything from databases to IoT devices.

  • Scale of Adoption: The traction can be quantified to some extent. By February 2025 – barely three months after launch – over 1,000 MCP servers/connectors had been built by the community. This number has only grown, indicating thousands of integrations across industries. Mike Krieger (Anthropic’s Chief Product Officer) noted by spring 2025 that MCP had become a “thriving open standard with thousands of integrations and growing”. The official MCP Registry (launched in preview in Sept 2025) is cataloging publicly available servers, making it easier to discover tools; the registry’s open API allows anyone to search for, say, “Ethereum” or “Notion” and find relevant MCP connectors. This lowers the barrier for new entrants and further fuels growth.

  • Partnerships: We’ve touched on many implicit partnerships (Anthropic with Microsoft, etc.). To highlight a few more:

    • Anthropic & Slack: Anthropic partnered with Slack to integrate Claude with Slack’s data via MCP (Slack has an official MCP server, enabling AI to retrieve Slack messages or post alerts).
    • Cloud Providers: Amazon (AWS) and Google Cloud have worked with Anthropic to host Claude, and it's likely they support MCP in those environments (e.g., AWS Bedrock might allow MCP connectors for enterprise data). Though less publicly documented than the others, these cloud partnerships are important for enterprise adoption.
    • Academic collaborations: The MIT and IBM research project Namda (discussed next) represents a partnership between academia and industry to push MCP’s limits in decentralized settings.
    • GitHub & VS Code: Partnership to enhance developer experience – e.g., VS Code’s team actively contributed to MCP (one of the registry maintainers is from VS Code team).
    • Numerous startups: Many AI startups (agent startups, workflow automation startups) are building on MCP instead of reinventing the wheel. This includes emerging Web3 AI startups looking to offer “AI as a DAO” or autonomous economic agents.

Overall, the MCP community is diverse and rapidly expanding. It includes core tech companies (for standards and base tooling), Web3 specialists (bringing blockchain knowledge and use cases), and independent developers (who often contribute connectors for their favorite apps or protocols). The ethos is collaborative. For example, security concerns about third-party MCP servers have prompted community discussions and contributions of best practices (e.g., Stacklok contributors working on security tooling for MCP servers). The community’s ability to iterate quickly (MCP saw several spec upgrades within months, adding features like streaming responses and better auth) is a testament to broad engagement.

In the Web3 ecosystem specifically, MCP has fostered a mini-ecosystem of “AI + Web3” projects. It’s not just a protocol to use; it’s catalyzing new ideas like AI-driven DAOs, on-chain governance aided by AI analysis, and cross-domain automation (like linking on-chain events to off-chain actions through AI). The presence of key Web3 figures – e.g., Zhivko Todorov of LimeChain stating “MCP represents the inevitable integration of AI and blockchain” – shows that blockchain veterans are actively championing it. Partnerships between AI and blockchain companies (such as the one between Anthropic and Block, or Microsoft’s Azure cloud making MCP easy to deploy alongside its blockchain services) hint at a future where AI agents and smart contracts work hand-in-hand.

One could say MCP has ignited the first genuine convergence of the AI developer community with the Web3 developer community. Hackathons and meetups now feature MCP tracks. As a concrete measure of ecosystem adoption: by mid-2025, OpenAI, Google, and Anthropic – collectively representing the majority of advanced AI models – all support MCP, and on the other side, leading blockchain infrastructure providers (Alchemy, Chainstack), crypto companies (Block, etc.), and decentralized projects are building MCP hooks. This two-sided network effect bodes well for MCP becoming a lasting standard.

6. Roadmap and Development Milestones

MCP’s development has been fast-paced. Here we outline the major milestones so far and the roadmap ahead as gleaned from official sources and community updates:

  • Late 2024 – Initial Release: On Nov 25, 2024, Anthropic officially announced MCP and open-sourced the specification and initial SDKs. Alongside the spec, they released a handful of MCP server implementations for common tools (Google Drive, Slack, GitHub, etc.) and added support in the Claude AI assistant (Claude Desktop app) to connect to local MCP servers. This marked the 1.0 launch of MCP. Early proof-of-concept integrations at Anthropic showed how Claude could use MCP to read files or query a SQL database in natural language, validating the concept.
  • Q1 2025 – Rapid Adoption and Iteration: In the first few months of 2025, MCP saw widespread industry adoption. By March 2025, OpenAI and other AI providers announced support (as described above). This period also saw spec evolution: Anthropic updated MCP to include streaming capabilities (allowing large results or continuous data streams to be sent incrementally). This update was noted in April 2025 with the C# SDK news, indicating MCP now supported features like chunked responses or real-time feed integration. The community also built reference implementations in various languages (Python, JavaScript, etc.) beyond Anthropic’s SDK, ensuring polyglot support.
  • Q2 2025 – Ecosystem Tooling and Governance: In May 2025, with Microsoft and GitHub joining the effort, there was a push for formalizing governance and enhancing security. At Build 2025, Microsoft unveiled plans for Windows 11 MCP integration and detailed a collaboration to improve authorization flows in MCP. Around the same time, the idea of an MCP Registry was introduced to index available servers (the initial brainstorming started in March 2025 according to the registry blog). The “standards track” process (SEP – Standard Enhancement Proposals) was established on GitHub, similar to Ethereum’s EIPs or Python’s PEPs, to manage contributions in an orderly way. Community calls and working groups (for security, registry, SDKs) started convening.
  • Mid 2025 – Feature Expansion: By mid-2025, the roadmap prioritized several key improvements:
    • Asynchronous and Long-Running Task Support: Plans to allow MCP to handle long operations without blocking the connection. For example, if an AI triggers a cloud job that takes minutes, the MCP protocol would support async responses or reconnection to fetch results.
    • Authentication & Fine-Grained Security: Developing fine-grained authorization mechanisms for sensitive actions. This includes possibly integrating OAuth flows, API keys, and enterprise SSO into MCP servers so that AI access can be safely managed. By mid-2025, guides and best practices for MCP security were in progress, given the security risks of allowing AI to invoke powerful tools. The goal is that, for instance, if an AI is to access a user’s private database via MCP, it should follow a secure authorization flow (with user consent) rather than just an open endpoint.
    • Validation and Compliance Testing: Recognizing the need for reliability, the community prioritized building compliance test suites and reference implementations. By ensuring all MCP clients/servers adhere to the spec (through automated testing), they aimed to prevent fragmentation. A reference server (likely an example with best practices for remote deployment and auth) was on the roadmap, as was a reference client application demonstrating full MCP usage with an AI.
    • Multimodality Support: Extending MCP beyond text to support modalities like image, audio, video data in the context. For example, an AI might request an image from an MCP server (say, a design asset or a diagram) or output an image. The spec discussion included adding support for streaming and chunked messages to handle large multimedia content interactively. Early work on “MCP Streaming” was already underway (to support things like live audio feeds or continuous sensor data to AI).
    • Central Registry & Discovery: The plan to implement a central MCP Registry service for server discovery was executed in mid-2025. By September 2025, the official MCP Registry was launched in preview. This registry provides a single source of truth for publicly available MCP servers, allowing clients to find servers by name, category, or capabilities. It’s essentially like an app store (but open) for AI tools. The design allows for public registries (a global index) and private ones (enterprise-specific), all interoperable via a shared API. The Registry also introduced a moderation mechanism to flag or delist malicious servers, with a community moderation model to maintain quality.
  • Late 2025 and Beyond – Toward Decentralized MCP Networks: While not “official” roadmap items yet, the trajectory points toward more decentralization and Web3 synergy:
    • Researchers are actively exploring how to add decentralized discovery, reputation, and incentive layers to MCP. The concept of an MCP Network (or “marketplace of MCP endpoints”) is being incubated. This might involve smart contract-based registries (so no single point of failure for server listings), reputation systems where servers/clients have on-chain identities and stake for good behavior, and possibly token rewards for running reliable MCP nodes.
    • Project Namda at MIT, which started in 2024, is a concrete step in this direction. By 2025, Namda had built a prototype distributed agent framework on MCP’s foundations, including features like dynamic node discovery, load balancing across agent clusters, and a decentralized registry using blockchain techniques. They even have experimental token-based incentives and provenance tracking for multi-agent collaborations. Milestones from Namda show that it’s feasible to have a network of MCP agents running across many machines with trustless coordination. If Namda’s concepts are adopted, we might see MCP evolve to incorporate some of these ideas (possibly through optional extensions or separate protocols layered on top).
    • Enterprise Hardening: On the enterprise side, by late 2025 we expect MCP to be integrated into major enterprise software offerings (Microsoft’s inclusion in Windows and Azure is one example). The roadmap includes enterprise-friendly features like SSO integration for MCP servers and robust access controls. The general availability of the MCP Registry and toolkits for deploying MCP at scale (e.g., within a corporate network) is likely by end of 2025.

To recap some key development milestones so far (timeline format for clarity):

  • Nov 2024: MCP 1.0 released (Anthropic).
  • Dec 2024 – Jan 2025: Community builds first wave of MCP servers; Anthropic releases Claude Desktop with MCP support; small-scale pilots by Block, Apollo, etc.
  • Feb 2025: 1000+ community MCP connectors achieved; Anthropic hosts workshops (e.g., at an AI summit, driving education).
  • Mar 2025: OpenAI announces support (ChatGPT Agents SDK).
  • Apr 2025: Google DeepMind announces support (Gemini will support MCP); Microsoft releases preview of C# SDK.
  • May 2025: Steering Committee expanded (Microsoft/GitHub); Build 2025 demos (Windows MCP integration).
  • Jun 2025: Chainstack launches Web3 MCP servers (EVM/Solana) for public use.
  • Jul 2025: MCP spec version updates (streaming, authentication improvements); official Roadmap published on MCP site.
  • Sep 2025: MCP Registry (preview) launched; MCP likely reaches general availability in more products (Claude for Work, etc.).
  • Late 2025 (projected): Registry v1.0 live; security best-practice guides released; possibly initial experiments with decentralized discovery (Namda results).

The vision forward is that MCP becomes as ubiquitous and invisible as HTTP or JSON – a common layer that many apps use under the hood. For Web3, the roadmap suggests deeper fusion: where not only will AI agents use Web3 (blockchains) as sources or sinks of information, but Web3 infrastructure itself might start to incorporate AI agents (via MCP) as part of its operation (for example, a DAO might run an MCP-compatible AI to manage certain tasks, or oracles might publish data via MCP endpoints). The roadmap’s emphasis on things like verifiability and authentication hints that down the line, trust-minimized MCP interactions could be a reality – imagine AI outputs that come with cryptographic proofs, or an on-chain log of what tools an AI invoked for audit purposes. These possibilities blur the line between AI and blockchain networks, and MCP is at the heart of that convergence.

In conclusion, MCP’s development is highly dynamic. It has hit major early milestones (broad adoption and standardization within a year of launch) and continues to evolve rapidly with a clear roadmap emphasizing security, scalability, and discovery. The milestones achieved and planned ensure MCP will remain robust as it scales: addressing challenges like long-running tasks, secure permissions, and the sheer discoverability of thousands of tools. This forward momentum indicates that MCP is not a static spec but a growing standard, likely to incorporate more Web3-flavored features (decentralized governance of servers, incentive alignment) as those needs arise. The community is poised to adapt MCP to new use cases (multimodal AI, IoT, etc.), all while keeping an eye on the core promise: making AI more connected, context-aware, and user-empowering in the Web3 era.

7. Comparison with Similar Web3 Projects or Protocols

MCP’s unique blend of AI and connectivity means there aren’t many direct apples-to-apples equivalents, but it’s illuminating to compare it with other projects at the intersection of Web3 and AI or with analogous goals:

  • SingularityNET (AGIX) – Decentralized AI Marketplace: SingularityNET, launched in 2017 by Dr. Ben Goertzel and others, is a blockchain-based marketplace for AI services. It allows developers to monetize AI algorithms as services and users to consume those services, all facilitated by a token (AGIX) which is used for payments and governance. In essence, SingularityNET is trying to decentralize the supply of AI models by hosting them on a network where anyone can call an AI service in exchange for tokens. This differs from MCP fundamentally. MCP does not host or monetize AI models; instead, it provides a standard interface for AI (wherever it's running) to access data/tools. One could imagine using MCP to connect an AI to services listed on SingularityNET, but SingularityNET itself focuses on the economic layer (who provides an AI service and how they get paid). Another key difference: Governance – SingularityNET has on-chain governance (via SingularityNET Enhancement Proposals (SNEPs) and AGIX token voting) to evolve its platform. MCP's governance, by contrast, is off-chain and collaborative without a token. In summary, SingularityNET and MCP both strive for a more open AI ecosystem, but SingularityNET is about a tokenized network of AI algorithms, whereas MCP is about a protocol standard for AI-tool interoperability. They could complement: for example, an AI on SingularityNET could use MCP to fetch external data it needs. But SingularityNET doesn't attempt to standardize tool use; it uses blockchain to coordinate AI services, while MCP uses software standards to let AI work with any service.
  • Fetch.ai (FET) – Agent-Based Decentralized Platform: Fetch.ai is another project blending AI and blockchain. It launched its own proof-of-stake blockchain and framework for building autonomous agents that perform tasks and interact on a decentralized network. In Fetch's vision, millions of "software agents" (representing people, devices, or organizations) can negotiate and exchange value, using FET tokens for transactions. Fetch.ai provides an agent framework (uAgents) and infrastructure for discovery and communication between agents on its ledger. For example, a Fetch agent might help optimize traffic in a city by interacting with other agents for parking and transport, or manage a supply chain workflow autonomously. How does this compare to MCP? Both deal with the concept of agents, but Fetch.ai's agents are strongly tied to its blockchain and token economy – they live on the Fetch network and use on-chain logic. MCP agents (AI hosts) are model-driven (like an LLM) and not tied to any single network; MCP is content to operate over the internet or within a cloud setup, without requiring a blockchain. Fetch.ai tries to build a new decentralized AI economy from the ground up (with its own ledger for trust and transactions), whereas MCP is layer-agnostic – it piggybacks on existing networks (could be used over HTTPS, or even on top of a blockchain if needed) to enable AI interactions. One might say Fetch is more about autonomous economic agents and MCP about smart tool-using agents. Interestingly, these could intersect: an autonomous agent on Fetch.ai might use MCP to interface with off-chain resources or other blockchains. Conversely, one could use MCP to build multi-agent systems that leverage different blockchains (not just one). In practice, MCP has seen faster adoption because it didn't require its own network – it works with Ethereum, Solana, Web2 APIs, etc., out of the box. Fetch.ai's approach is more heavyweight, creating an entire ecosystem that participants must join (and acquire tokens) to use. In sum, Fetch.ai vs MCP: Fetch is a platform with its own token/blockchain for AI agents, focusing on interoperability and economic exchanges between agents, while MCP is a protocol that AI agents (in any environment) can use to plug into tools and data. Their goals overlap in enabling AI-driven automation, but they tackle different layers of the stack and have very different architectural philosophies (closed ecosystem vs open standard).
  • Chainlink and Decentralized Oracles – Connecting Blockchains to Off-Chain Data: Chainlink is not an AI project, but it's highly relevant as a Web3 protocol solving a complementary problem: how to connect blockchains with external data and computation. Chainlink is a decentralized network of nodes (oracles) that fetch, verify, and deliver off-chain data to smart contracts in a trust-minimized way. For example, Chainlink oracles provide price feeds to DeFi protocols or call external APIs on behalf of smart contracts via Chainlink Functions. Comparatively, MCP connects AI models to external data/tools (some of which might be blockchains). One could say Chainlink brings data into blockchains, while MCP brings data into AI. There is a conceptual parallel: both establish a bridge between otherwise siloed systems. Chainlink focuses on reliability, decentralization, and security of data fed on-chain (solving the "oracle problem" of single point of failure). MCP focuses on flexibility and standardization of how AI can access data (solving the "integration problem" for AI agents). They operate in different domains (smart contracts vs AI assistants), but one might compare MCP servers to oracles: an MCP server for price data might call the same APIs a Chainlink node does. The difference is the consumer – in MCP's case, the consumer is an AI or user-facing assistant, not a deterministic smart contract. Also, MCP does not inherently provide the trust guarantees that Chainlink does (MCP servers can be centralized or community-run, with trust managed at the application level). However, as mentioned earlier, ideas to decentralize MCP networks could borrow from oracle networks – e.g., multiple MCP servers could be queried and results cross-checked to ensure an AI isn't fed bad data, similar to how multiple Chainlink nodes aggregate a price. In short, Chainlink vs MCP: Chainlink is Web3 middleware for blockchains to consume external data, MCP is AI middleware for models to consume external data (which could include blockchain data). They address analogous needs in different realms and could even complement: an AI using MCP might fetch a Chainlink-provided data feed as a reliable resource, and conversely, an AI could serve as a source of analysis that a Chainlink oracle brings on-chain (though that latter scenario would raise questions of verifiability).
  • ChatGPT Plugins / OpenAI Functions vs MCP – AI Tool Integration Approaches: While not Web3 projects, a quick comparison is warranted because ChatGPT plugins and OpenAI's function calling feature also connect AI to external tools. ChatGPT plugins use an OpenAPI specification provided by a service, and the model can then call those APIs following the spec. The limitations are that it's a closed ecosystem (OpenAI-approved plugins running on OpenAI's servers) and each plugin is a siloed integration. OpenAI's newer "Agents" SDK is closer to MCP in concept, letting developers define tools/functions that an AI can use, but initially it was specific to OpenAI's ecosystem. LangChain similarly provided a framework to give LLMs tools in code. MCP differs by offering an open, model-agnostic standard for this. As one analysis put it, LangChain created a developer-facing standard (a Python interface) for tools, whereas MCP creates a model-facing standard – an AI agent can discover and use any MCP-defined tool at runtime without custom code. In practical terms, MCP's ecosystem of servers grew larger and more diverse than the ChatGPT plugin store within months. And rather than each model having its own plugin format (OpenAI had theirs, others had different ones), many are coalescing around MCP. OpenAI itself signaled support for MCP, essentially aligning their function approach with the broader standard. So, comparing OpenAI Plugins to MCP: plugins are a curated, centralized approach, while MCP is a decentralized, community-driven approach. In a Web3 mindset, MCP is more "open source and permissionless" whereas proprietary plugin ecosystems are more closed. This makes MCP analogous to the ethos of Web3 even though it's not a blockchain – it enables interoperability and user control (you could run your own MCP server for your data, instead of giving it all to one AI provider). This comparison shows why many consider MCP as having more long-term potential: it's not locked to one vendor or one model.
  • Project Namda and Decentralized Agent Frameworks: Namda deserves a separate note because it explicitly combines MCP with Web3 concepts. As described earlier, Namda (Networked Agent Modular Distributed Architecture) is an MIT/IBM initiative started in 2024 to build a scalable, distributed network of AI agents using MCP as the communication layer. It treats MCP as the messaging backbone (since MCP uses standard JSON-RPC-like messages, it fit well for inter-agent comms), and then adds layers for dynamic discovery, fault tolerance, and verifiable identities using blockchain-inspired techniques. Namda’s agents can be anywhere (cloud, edge devices, etc.), but a decentralized registry (somewhat like a DHT or blockchain) keeps track of them and their capabilities in a tamper-proof way. They even explore giving agents tokens to incentivize cooperation or resource sharing. In essence, Namda is an experiment in what a “Web3 version of MCP” might look like. It’s not a widely deployed project yet, but it’s one of the closest “similar protocols” in spirit. If we view Namda vs MCP: Namda uses MCP (so it’s not competing standards), but extends it with a protocol for networking and coordinating multiple agents in a trust-minimized manner. One could compare Namda to frameworks like Autonolas or Multi-Agent Systems (MAS) that the crypto community has seen, but those often lacked a powerful AI component or a common protocol. Namda + MCP together showcase how a decentralized agent network could function, with blockchain providing identity, reputation, and possibly token incentives, and MCP providing the agent communication and tool-use.

In summary, MCP stands apart from most prior Web3 projects: it did not start as a crypto project at all, yet it rapidly intersects with Web3 because it solves complementary problems. Projects like SingularityNET and Fetch.ai aimed to decentralize AI compute or services using blockchain; MCP instead standardizes AI integration with services, which can enhance decentralization by avoiding platform lock-in. Oracle networks like Chainlink solved data delivery to blockchain; MCP solves data delivery to AI (including blockchain data). If Web3’s core ideals are decentralization, interoperability, and user empowerment, MCP is attacking the interoperability piece in the AI realm. It’s even influencing those older projects – for instance, there is nothing stopping SingularityNET from making its AI services available via MCP servers, or Fetch agents from using MCP to talk to external systems. We might well see a convergence where token-driven AI networks use MCP as their lingua franca, marrying the incentive structure of Web3 with the flexibility of MCP.

Finally, if we consider market perception: MCP is often touted as doing for AI what Web3 hoped to do for the internet – break silos and empower users. This has led some to nickname MCP informally as “Web3 for AI” (even when no blockchain is involved). However, it’s important to recognize MCP is a protocol standard, whereas most Web3 projects are full-stack platforms with economic layers. In comparisons, MCP usually comes out as a more lightweight, universal solution, while blockchain projects are heavier, specialized solutions. Depending on use case, they can complement rather than strictly compete. As the ecosystem matures, we might see MCP integrated into many Web3 projects as a module (much like how HTTP or JSON are ubiquitous), rather than as a rival project.

8. Public Perception, Market Traction, and Media Coverage

Public sentiment toward MCP has been overwhelmingly positive in both the AI and Web3 communities, often bordering on enthusiastic. Many see it as a game-changer that arrived quietly but then took the industry by storm. Let’s break down the perception, traction, and notable media narratives:

Market Traction and Adoption Metrics: By mid-2025, MCP achieved a level of adoption rare for a new protocol. It’s backed by virtually all major AI model providers (Anthropic, OpenAI, Google, Meta) and supported by big tech infrastructure (Microsoft, GitHub, AWS etc.), as detailed earlier. This alone signals to the market that MCP is likely here to stay (akin to how broad backing propelled TCP/IP or HTTP in early internet days). On the Web3 side, the traction is evident in developer behavior: hackathons started featuring MCP projects, and many blockchain dev tools now mention MCP integration as a selling point. The stat of “1000+ connectors in a few months” and Mike Krieger’s “thousands of integrations” quote are often cited to illustrate how rapidly MCP caught on. This suggests strong network effects – the more tools available via MCP, the more useful it is, prompting more adoption (a positive feedback loop). VCs and analysts have noted that MCP achieved in under a year what earlier “AI interoperability” attempts failed to do over several years, largely due to timing (riding the wave of interest in AI agents) and being open-source. In Web3 media, traction is sometimes measured in terms of developer mindshare and integration into projects, and MCP scores high on both now.

Public Perception in AI and Web3 Communities: Initially, MCP flew under the radar when first announced (late 2024). But by early 2025, as success stories emerged, perception shifted to excitement. AI practitioners saw MCP as the “missing puzzle piece” for making AI agents truly useful beyond toy examples. Web3 builders, on the other hand, saw it as a bridge to finally incorporate AI into dApps without throwing away decentralization – an AI can use on-chain data without needing a centralized oracle, for instance. Thought leaders have been singing praises: for example, Jesus Rodriguez (a prominent Web3 AI writer) wrote in CoinDesk that MCP may be “one of the most transformative protocols for the AI era and a great fit for Web3 architectures”. Rares Crisan in a Notable Capital blog argued that MCP could deliver on Web3’s promise where blockchain alone struggled, by making the internet more user-centric and natural to interact with. These narratives frame MCP as revolutionary yet practical – not just hype.

To be fair, not all commentary is uncritical. Some AI developers on forums like Reddit have pointed out that MCP “doesn’t do everything” – it’s a communication protocol, not an out-of-the-box agent or reasoning engine. For instance, one Reddit discussion titled “MCP is a Dead-End Trap” argued that MCP by itself doesn’t manage agent cognition or guarantee quality; it still requires good agent design and safety controls. This view suggests MCP could be overhyped as a silver bullet. However, these criticisms are more about tempering expectations than rejecting MCP’s usefulness. They emphasize that MCP solves tool connectivity but one must still build robust agent logic (i.e., MCP doesn’t magically create an intelligent agent, it equips one with tools). The consensus though is that MCP is a big step forward, even among cautious voices. Hugging Face’s community blog noted that while MCP isn’t a solve-it-all, it is a major enabler for integrated, context-aware AI, and developers are rallying around it for that reason.

Media Coverage: MCP has received significant coverage across both mainstream tech media and niche blockchain media:

  • TechCrunch has run multiple stories. They covered the initial concept (“Anthropic proposes a new way to connect data to AI chatbots”) around launch in 2024. In 2025, TechCrunch highlighted each big adoption moment: OpenAI’s support, Google’s embrace, Microsoft/GitHub’s involvement. These articles often emphasize the industry unity around MCP. For example, TechCrunch quoted Sam Altman’s endorsement and noted the rapid shift from rival standards to MCP. In doing so, they portrayed MCP as the emerging standard similar to how no one wanted to be left out of the internet protocols in the 90s. Such coverage in a prominent outlet signaled to the broader tech world that MCP is important and real, not just a fringe open-source project.
  • CoinDesk and other crypto publications latched onto the Web3 angle. CoinDesk’s opinion piece by Rodriguez (July 2025) is often cited; it painted a futuristic picture where every blockchain could be an MCP server and new MCP networks might run on blockchains. It connected MCP to concepts like decentralized identity, authentication, and verifiability – speaking the language of the blockchain audience and suggesting MCP could be the protocol that truly melds AI with decentralized frameworks. Cointelegraph, Bankless, and others have also discussed MCP in context of “AI agents & DeFi” and similar topics, usually optimistic about the possibilities (e.g., Bankless had a piece on using MCP to let an AI manage on-chain trades, and included a how-to for their own MCP server).
  • Notable VC Blogs / Analyst Reports: The Notable Capital blog post (July 2025) is an example of venture analysis drawing parallels between MCP and the evolution of web protocols. It essentially argues MCP could do for Web3 what HTTP did for Web1 – providing a new interface layer (natural language interface) that doesn’t replace underlying infrastructure but makes it usable. This kind of narrative is compelling and has been echoed in panels and podcasts. It positions MCP not as competing with blockchain, but as the next layer of abstraction that finally allows normal users (via AI) to harness blockchain and web services easily.
  • Developer Community Buzz: Outside formal articles, MCP’s rise can be gauged by its presence in developer discourse – conference talks, YouTube channels, newsletters. For instance, there have been popular blog posts like “MCP: The missing link for agentic AI?” on sites like Runtime.news, and newsletters (e.g., one by AI researcher Nathan Lambert) discussing practical experiments with MCP and how it compares to other tool-use frameworks. The general tone is curiosity and excitement: developers share demos of hooking up AI to their home automation or crypto wallet with just a few lines using MCP servers, something that felt sci-fi not long ago. This grassroots excitement is important because it shows MCP has mindshare beyond just corporate endorsements.
  • Enterprise Perspective: Media and analysts focusing on enterprise AI also note MCP as a key development. For example, The New Stack covered how Anthropic added support for remote MCP servers in Claude for enterprise use. The angle here is that enterprises can use MCP to connect their internal knowledge bases and systems to AI safely. This matters for Web3 too as many blockchain companies are enterprises themselves and can leverage MCP internally (for instance, a crypto exchange could use MCP to let an AI analyze internal transaction logs for fraud detection).

Notable Quotes and Reactions: A few are worth highlighting as encapsulating public perception:

  • “Much like HTTP revolutionized web communications, MCP provides a universal framework... replacing fragmented integrations with a single protocol.” – CoinDesk. This comparison to HTTP is powerful; it frames MCP as infrastructure-level innovation.
  • “MCP has [become a] thriving open standard with thousands of integrations and growing. LLMs are most useful when connecting to the data you already have...” – Mike Krieger (Anthropic). This is an official confirmation of both traction and the core value proposition, which has been widely shared on social media.
  • “The promise of Web3... can finally be realized... through natural language and AI agents. ...MCP is the closest thing we've seen to a real Web3 for the masses.” – Notable Capital. This bold statement resonates with those frustrated by the slow UX improvements in crypto; it suggests AI might crack the code of mainstream adoption by abstracting complexity.

Challenges and Skepticism: While enthusiasm is high, the media has also discussed challenges:

  • Security Concerns: Outlets like The New Stack or security blogs have raised that allowing AI to execute tools can be dangerous if not sandboxed. What if a malicious MCP server tried to get an AI to perform a harmful action? The LimeChain blog explicitly warns of “significant security risks” with community-developed MCP servers (e.g., a server that handles private keys must be extremely secure). These concerns have been echoed in discussions: essentially, MCP expands AI’s capabilities, but with power comes risk. The community’s response (guides, auth mechanisms) has been covered as well, generally reassuring that mitigations are being built. Still, any high-profile misuse of MCP (say an AI triggered an unintended crypto transfer) would affect perception, so media is watchful on this front.
  • Performance and Cost: Some analysts note that using AI agents with tools could be slower or more costly than directly calling an API (because the AI might need multiple back-and-forth steps to get what it needs). In high-frequency trading or on-chain execution contexts, that latency could be problematic. For now, these are seen as technical hurdles to optimize (through better agent design or streaming), rather than deal-breakers.
  • Hype management: As with any trending tech, there’s a bit of hype. A few voices caution not to declare MCP the solution to everything. For instance, the Hugging Face article asks “Is MCP a silver bullet?” and answers no – developers still need to handle context management, and MCP works best in combination with good prompting and memory strategies. Such balanced takes are healthy in the discourse.

Overall Media Sentiment: The narrative that emerges is largely hopeful and forward-looking:

  • MCP is seen as a practical tool delivering real improvements now (so not vaporware), which media underscore by citing working examples: Claude reading files, Copilot using MCP in VS Code, an AI completing a Solana transaction in a demo, and so on.
  • It’s also portrayed as a strategic linchpin for the future of both AI and Web3. Media often conclude that MCP or things like it will be essential for “decentralized AI” or “Web4” or whatever term one uses for the next-gen web. There’s a sense that MCP opened a door, and now innovation is flowing through – whether it's Namda’s decentralized agents or enterprises connecting legacy systems to AI, many future storylines trace back to MCP’s introduction.

In the market, one could gauge traction by the formation of startups and funding around the MCP ecosystem. Indeed, there are rumors/reports of startups focusing on “MCP marketplaces” or managed MCP platforms getting funding (Notable Capital writing about it suggests VC interest). We can expect media to start covering those tangentially – e.g., “Startup X uses MCP to let your AI manage your crypto portfolio – raises $Y million”.

Conclusion of Perception: By late 2025, MCP enjoys a reputation as a breakthrough enabling technology. It has strong advocacy from influential figures in both AI and crypto. The public narrative has evolved from “here’s a neat tool” to “this could be foundational for the next web”. Meanwhile, practical coverage confirms it’s working and being adopted, lending credibility. Provided the community continues addressing challenges (security, governance at scale) and no major disasters occur, MCP’s public image is likely to remain positive or even become iconic as “the protocol that made AI and Web3 play nice together.”

Media will likely keep a close eye on:

  • Success stories (e.g., if a major DAO implements an AI treasurer via MCP, or a government uses MCP for open data AI systems).
  • Any security incidents (to evaluate risk).
  • The evolution of MCP networks and whether any token or blockchain component officially enters the picture (which would be big news bridging AI and crypto even more tightly).

As of now, however, the coverage can be summed up by a line from CoinDesk: “The combination of Web3 and MCP might just be a new foundation for decentralized AI.” – a sentiment that captures both the promise and the excitement surrounding MCP in the public eye.

References:

  • Anthropic News: "Introducing the Model Context Protocol," Nov 2024
  • LimeChain Blog: "What is MCP and How Does It Apply to Blockchains?" May 2025
  • Chainstack Blog: "MCP for Web3 Builders: Solana, EVM and Documentation," June 2025
  • CoinDesk Op-Ed: "The Protocol of Agents: Web3’s MCP Potential," Jul 2025
  • Notable Capital: "Why MCP Represents the Real Web3 Opportunity," Jul 2025
  • TechCrunch: "OpenAI adopts Anthropic’s standard…", Mar 26, 2025
  • TechCrunch: "Google to embrace Anthropic’s standard…", Apr 9, 2025
  • TechCrunch: "GitHub, Microsoft embrace… (MCP steering committee)", May 19, 2025
  • Microsoft Dev Blog: "Official C# SDK for MCP," Apr 2025
  • Hugging Face Blog: "#14: What Is MCP, and Why Is Everyone Talking About It?" Mar 2025
  • Messari Research: "Fetch.ai Profile," 2023
  • Medium (Nu FinTimes): "Unveiling SingularityNET," Mar 2024

Google’s Agent Payments Protocol (AP2)

· 34 min read
Dora Noda
Software Engineer

Google’s Agent Payments Protocol (AP2) is a newly announced open standard designed to enable secure, trustworthy transactions initiated by AI agents on behalf of users. Developed in collaboration with over 60 payments and technology organizations (including major payment networks, banks, fintechs, and Web3 companies), AP2 establishes a common language for “agentic” payments – i.e. purchases and financial transactions that an autonomous agent (such as an AI assistant or LLM-based agent) can carry out for a user. AP2’s creation is driven by a fundamental shift: traditionally, online payment systems assumed a human is directly clicking “buy,” but the rise of AI agents acting on user instructions breaks this assumption. AP2 addresses the resulting challenges of authorization, authenticity, and accountability in AI-driven commerce, while remaining compatible with existing payment infrastructure. This report examines AP2’s technical architecture, purpose and use cases, integrations with AI agents and payment providers, security and compliance considerations, comparisons to existing protocols, implications for Web3/decentralized systems, and the industry adoption/roadmap.

Technical Architecture: How AP2 Works

At its core, AP2 introduces a cryptographically secure transaction framework built on verifiable digital credentials (VDCs) – essentially tamper-proof, signed data objects that serve as digital “contracts” of what the user has authorized. In AP2 terminology these contracts are called Mandates, and they form an auditable chain of evidence for each transaction. There are three primary types of mandates in the AP2 architecture:

  • Intent Mandate: Captures the user’s initial instructions or conditions for a purchase, especially for “human-not-present” scenarios (where the agent will act later without the user online). It defines the scope of authority the user gives the agent – for example, “Buy concert tickets if they drop below $200, up to 2 tickets”. This mandate is cryptographically signed upfront by the user and serves as verifiable proof of consent within specific limits.
  • Cart Mandate: Represents the final transaction details that the user has approved, used in “human-present” scenarios or at the moment of checkout. It includes the exact items or services, their price, and other particulars of the purchase. When the agent is ready to complete the transaction (e.g. after filling a shopping cart), the merchant first cryptographically signs the cart contents (guaranteeing the order details and price), and then the user (via their device or agent interface) signs off to create a Cart Mandate. This ensures what-you-see-is-what-you-pay, locking in the final order exactly as presented to the user.
  • Payment Mandate: A separate credential that is sent to the payment network (e.g. card network or bank) to signal that an AI agent is involved in the transaction. The Payment Mandate includes metadata such as whether the user was present or not during authorization and serves as a flag for risk management systems. By providing the acquiring and issuing banks with cryptographically verifiable evidence of user intent, this mandate helps them assess the context (for example, distinguishing an agent-initiated purchase from typical fraud) and manage compliance or liability accordingly.

All mandates are implemented as verifiable credentials signed by the relevant party’s keys (user, merchant, etc.), yielding a non-repudiable audit trail for every agent-led transaction. In practice, AP2 uses a role-based architecture to protect sensitive information – for instance, an agent might handle an Intent Mandate without ever seeing raw payment details, which are only revealed in a controlled way when needed, preserving privacy. The cryptographic chain of user intent → merchant commitment → payment authorization establishes trust among all parties that the transaction reflects the user’s true instructions and that both the agent and merchant adhered to those instructions.
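To make this concrete, here is a rough sketch of how the three mandates might look as TypeScript types. The field names are illustrative assumptions drawn from the descriptions above – the normative schema lives in the AP2 specification on GitHub:

// Illustrative mandate shapes only -- the authoritative schema is defined
// in the AP2 specification, not here.
interface SignedMandate<T> {
  payload: T;
  signer: string;    // key identifier of the signing party
  signature: string; // signature over the canonicalized payload
}

interface IntentMandate {
  userId: string;
  scope: string;         // e.g. "concert tickets"
  maxPrice: number;      // upper bound the agent may spend
  maxQuantity: number;
  expiresAt: number;     // validity window (ms since epoch)
  humanPresent: boolean; // false for delegated, human-not-present flows
}

interface CartMandate {
  merchantId: string;
  items: { sku: string; quantity: number; unitPrice: number }[];
  total: number;
  merchantSignature: string; // merchant signs the cart before the user does
}

interface PaymentMandate {
  paymentMethod: string; // card network, bank rail, stablecoin, ...
  humanPresent: boolean; // risk signal surfaced to issuer/acquirer
  intentRef: string;     // links back to the originating Intent Mandate
}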

Transaction Flow: To illustrate how AP2 works end-to-end, consider a simple purchase scenario with a human in the loop:

  1. User Request: The user asks their AI agent to purchase a particular item or service (e.g. “Order this pair of shoes in my size”).
  2. Cart Construction: The agent communicates with the merchant’s systems (using standard APIs or via an agent-to-agent interaction) to assemble a shopping cart for the specified item at a given price.
  3. Merchant Guarantee: Before presenting the cart to the user, the merchant’s side cryptographically signs the cart details (item, quantity, price, etc.). This step creates a merchant-signed offer that guarantees the exact terms (preventing any hidden changes or price manipulation).
  4. User Approval: The agent shows the user the finalized cart. The user confirms the purchase, and this approval triggers two cryptographic signatures from the user’s side: one on the Cart Mandate (to accept the merchant’s cart as-is) and one on the Payment Mandate (to authorize payment through the chosen payment provider). These signed mandates are then shared with the merchant and the payment network respectively.
  5. Execution: Armed with the Cart Mandate and Payment Mandate, the merchant and payment provider proceed to execute the transaction securely. For example, the merchant submits the payment request along with the proof of user approval to the payment network (card network, bank, etc.), which can verify the Payment Mandate. The result is a completed purchase transaction with a cryptographic audit trail linking the user’s intent to the final payment.

This flow demonstrates how AP2 builds trust into each step of an AI-driven purchase. The merchant has cryptographic proof of exactly what the user agreed to buy at what price, and the issuer/bank has proof that the user authorized that payment, even though an AI agent facilitated the process. In case of disputes or errors, the signed mandates act as clear evidence, helping determine accountability (e.g. if the agent deviated from instructions or if a charge was not what the user approved). In essence, AP2’s architecture ensures that verifiable user intent – rather than trust in the agent’s behavior – is the basis of the transaction, greatly reducing ambiguity.
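The same flow can be sketched in code. Everything here is hypothetical scaffolding (buildCart, signAsMerchant, signAsUser, and submitPayment are stand-ins for a real agent framework and payment APIs); the point is the order of operations – the merchant signs the cart before the user, and user approval yields two distinct signatures:

// Hypothetical helpers standing in for a real agent framework and payment APIs.
declare function buildCart(request: string): Promise<any>;
declare function signAsMerchant(cart: any): Promise<any>;
declare function signAsUser(payload: any): Promise<any>;
declare function submitPayment(cartMandate: any, paymentMandate: any): Promise<any>;

// Sketch of the human-present flow: note who signs what, and in which order.
async function agentCheckout(userRequest: string) {
  // Step 2: the agent assembles the cart with the merchant.
  const cart = await buildCart(userRequest);

  // Step 3: the merchant locks in the terms by signing first.
  const merchantSignedCart = await signAsMerchant(cart);

  // Step 4: user approval produces two signatures -- the Cart Mandate
  // (accepting the merchant's cart as-is) and the Payment Mandate.
  const cartMandate = await signAsUser(merchantSignedCart);
  const paymentMandate = await signAsUser({
    paymentMethod: 'card',
    humanPresent: true,
    cartRef: cartMandate.id,
  });

  // Step 5: merchant and payment network verify both mandates and execute.
  return submitPayment(cartMandate, paymentMandate);
}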

Purpose and Use Cases for AP2

Why AP2 is Needed: The primary purpose of AP2 is to solve emerging trust and security issues that arise when AI agents can spend money on behalf of users. Google and its partners identified several key questions that today’s payment infrastructure cannot adequately answer when an autonomous agent is in the loop:

  • Authorization: How to prove that a user actually gave the agent permission to make a specific purchase? (In other words, ensuring the agent isn’t buying things without the user’s informed consent.)
  • Authenticity: How can a merchant know that an agent’s purchase request is genuine and reflects the user’s true intent, rather than a mistake or AI hallucination?
  • Accountability: If a fraudulent or incorrect transaction occurs via an agent, who is responsible – the user, the merchant, the payment provider, or the creator of the AI agent?

Without a solution, these uncertainties create a “crisis of trust” around agent-led commerce. AP2’s mission is to provide that solution by establishing a uniform protocol for secure agent transactions. By introducing standardized mandates and proofs of intent, AP2 prevents a fragmented ecosystem of each company inventing its own ad-hoc agent payment methods. Instead, any compliant AI agent can interact with any compliant merchant/payment provider under a common set of rules and verifications. This consistency not only avoids user and merchant confusion, but also gives financial institutions a clear way to manage risk for agent-initiated payments, rather than dealing with a patchwork of proprietary approaches. In short, AP2’s purpose is to be a foundational trust layer that lets the “agent economy” grow without breaking the payments ecosystem.

Intended Use Cases: By solving the above issues, AP2 opens the door to new commerce experiences and use cases that go beyond what’s possible with a human manually clicking through purchases. Some examples of agent-enabled commerce that AP2 supports include:

  • Smarter Shopping: A customer can instruct their agent, “I want this winter jacket in green, and I’m willing to pay up to 20% above the current price for it”. Armed with an Intent Mandate encoding these conditions, the agent will continuously monitor retailer websites or databases. The moment the jacket becomes available in green (and within the price threshold), the agent automatically executes a purchase with a secure, signed transaction – capturing a sale that otherwise would have been missed. The entire interaction, from the user’s initial request to the automated checkout, is governed by AP2 mandates ensuring the agent only buys exactly what was authorized.
  • Personalized Offers: A user tells their agent they’re looking for a specific product (say, a new bicycle) from a particular merchant for an upcoming trip. The agent can share this interest (within the bounds of an Intent Mandate) with the merchant’s own AI agent, including relevant context like the trip date. The merchant agent, knowing the user’s intent and context, could respond with a custom bundle or discount – for example, “bicycle + helmet + travel rack at 15% off, available for the next 48 hours.” Using AP2, the user’s agent can accept and complete this tailored offer securely, turning a simple query into a more valuable sale for the merchant.
  • Coordinated Tasks: A user planning a complex task (e.g. a weekend trip) delegates it entirely: “Book me a flight and hotel for these dates with a total budget of $700.” The agent can interact with multiple service providers’ agents – airlines, hotels, travel platforms – to find a combination that fits the budget. Once a suitable flight-hotel package is identified, the agent uses AP2 to execute multiple bookings in one go, each cryptographically signed (for example, issuing separate Cart Mandates for the airline and the hotel, both authorized under the user’s Intent Mandate). AP2 ensures all parts of this coordinated transaction occur as approved, and even allows simultaneous execution so that tickets and reservations are booked together without risk of one part failing mid-way.

These scenarios illustrate just a few of AP2’s intended use cases. More broadly, AP2’s flexible design supports both conventional e-commerce flows and entirely new models of commerce. For instance, AP2 can facilitate subscription-like services (an agent keeps you stocked on essentials by purchasing when conditions are met), event-driven purchases (buying tickets or items the instant a trigger event occurs), group agent negotiations (multiple users’ agents pooling mandates to bargain for a group deal), and many other emerging patterns. In every case, the common thread is that AP2 provides the trust framework – clear user authorization and cryptographic auditability – that allows these agent-driven transactions to happen safely. By handling the trust and verification layer, AP2 lets developers and businesses focus on innovating new AI commerce experiences without re-inventing payment security from scratch.
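For the winter-jacket scenario above, the user's standing instruction might boil down to an Intent Mandate like the following – again, the field names and values are illustrative, not the normative AP2 schema:

// Hypothetical Intent Mandate for "the jacket in green, up to 20% above
// the current price" -- illustrative fields, signed by the user's key.
const currentPrice = 120; // assumed current listing price (USD)

const jacketIntent = {
  userId: '0xUSER_ADDRESS',                         // the delegating user
  item: { description: 'winter jacket', color: 'green' },
  priceCeiling: currentPrice * 1.2,                 // up to 20% above today's price
  maxQuantity: 1,
  humanPresent: false,                              // agent may act while the user is offline
  expiresAt: Date.now() + 30 * 24 * 60 * 60 * 1000, // standing order valid for 30 days
};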

Integration with Agents, LLMs, and Payment Providers

AP2 is explicitly designed to integrate seamlessly with AI agent frameworks and with existing payment systems, acting as a bridge between the two. Google has positioned AP2 as an extension of its Agent2Agent (A2A) protocol and Model Context Protocol (MCP) standards. In other words, if A2A provides a generic language for agents to communicate tasks and MCP standardizes how AI models incorporate context/tools, then AP2 adds a transactions layer on top for commerce. The protocols are complementary: A2A handles agent-to-agent communication (allowing, say, a shopping agent to talk to a merchant’s agent), while AP2 handles agent-to-merchant payment authorization within those interactions. Because AP2 is open and non-proprietary, it’s meant to be framework-agnostic: developers can use it with Google’s own Agent Development Kit (ADK) or any AI agent library, and likewise it can work with various AI models including LLMs. An LLM-based agent, for example, could use AP2 by generating and exchanging the required mandate payloads (guided by the AP2 spec) instead of just free-form text. By enforcing a structured protocol, AP2 helps transform an AI agent’s high-level intent (which might come from an LLM’s reasoning) into concrete, secure transactions.

On the payments side, AP2 was built in concert with traditional payment providers and standards, rather than as a rip-and-replace system. The protocol is payment-method-agnostic, meaning it can support a variety of payment rails – from credit/debit card networks to bank transfers and digital wallets – as the underlying method for moving funds. In its initial version, AP2 emphasizes compatibility with card payments, since those are most common in online commerce. The AP2 Payment Mandate is designed to plug into the existing card processing flow: it provides additional data to the payment network (e.g. Visa, Mastercard, Amex) and issuing bank that an AI agent is involved and whether the user was present, thereby complementing existing fraud detection and authorization checks. Essentially, AP2 doesn’t process the payment itself; it augments the payment request with cryptographic proof of user intent. This allows payment providers to treat agent-initiated transactions with appropriate caution or speed (for example, an issuer might approve an unusual-looking purchase if it sees a valid AP2 mandate proving the user pre-approved it). Notably, Google and partners plan to evolve AP2 to support “push” payment methods as well – such as real-time bank transfers (like India’s UPI or Brazil’s PIX systems) – and other emerging digital payment types. This indicates AP2’s integration will expand beyond cards, aligning with modern payment trends worldwide.

For merchants and payment processors, integrating AP2 would mean supporting the additional protocol messages (mandates) and verifying signatures. Many large payment platforms are already involved in shaping AP2, so we can expect they will build support for it. For example, companies like Adyen, Worldpay, PayPal, and Stripe (the last not explicitly named in the blog, but a likely adopter) could incorporate AP2 into their checkout APIs or SDKs, allowing an agent to initiate a payment in a standardized way. Because AP2 is an open specification on GitHub with reference implementations, payment providers and tech platforms can start experimenting with it immediately. Google has also mentioned an AI Agent Marketplace where third-party agents can be listed – these agents are expected to support AP2 for any transactional capabilities. In practice, an enterprise that builds an AI sales assistant or procurement agent could list it on this marketplace, and thanks to AP2, that agent can carry out purchases or orders reliably.

Finally, AP2’s integration story benefits from its broad industry backing. By co-developing the protocol with major financial institutions and tech firms, Google ensured AP2 aligns with existing industry rules and compliance requirements. The collaboration with payment networks (e.g. Mastercard, UnionPay), issuers (e.g. American Express), fintechs (e.g. Revolut, PayPal), e-commerce players (e.g. Etsy), and even identity/security providers (e.g. Okta, Cloudflare) suggests AP2 is being designed to slot into real-world systems with minimal friction. These stakeholders bring expertise in areas like KYC (Know Your Customer regulations), fraud prevention, and data privacy, helping AP2 address those needs out of the box. In summary, AP2 is built to be agent-friendly and payment-provider-friendly: it extends existing AI agent protocols to handle transactions, and it layers on top of existing payment networks to utilize their infrastructure while adding necessary trust guarantees.

Security, Compliance, and Interoperability Considerations

Security and trust are at the heart of AP2’s design. The protocol’s use of cryptography (digital signatures on mandates) ensures that every critical action in an agentic transaction is verifiable and traceable. This non-repudiation is crucial: neither the user nor merchant can later deny what was authorized and agreed upon, since the mandates serve as secure records. A direct benefit is in fraud prevention and dispute resolution – with AP2, if a malicious or buggy agent attempts an unauthorized purchase, the lack of a valid user-signed mandate would be evident, and the transaction can be declined or reversed. Conversely, if a user claims “I never approved this purchase,” but a Cart Mandate exists with their cryptographic signature, the merchant and issuer have strong evidence to support the charge. This clarity of accountability answers a major compliance concern for the payments industry.

Authorization & Privacy: AP2 enforces an explicit authorization step (or steps) from the user for agent-led transactions, which aligns with regulatory trends like strong customer authentication. The User Control principle baked into AP2 means an agent cannot spend funds unless the user (or someone delegated by the user) has provided a verifiable instruction to do so. Even in fully autonomous scenarios, the user predefines the rules via an Intent Mandate. This approach can be seen as analogous to giving a power-of-attorney to the agent for specific transactions, but in a digitally signed, fine-grained manner. From a privacy perspective, AP2 is mindful about data sharing: the protocol uses a role-based data architecture to ensure that sensitive info (like payment credentials or personal details) is only shared with parties that absolutely need it. For example, an agent might send a Cart Mandate to a merchant containing item and price info, but the user’s actual card number might only be shared through the Payment Mandate with the payment processor, not with the agent or merchant. This minimizes unnecessary exposure of data, aiding compliance with privacy laws and PCI-DSS rules for handling payment data.

Compliance & Standards: Because AP2 was developed with input from established financial entities, it has been designed to meet or complement existing compliance standards in payments. The protocol doesn’t bypass the usual payment authorization flows – instead, it augments them with additional evidence and flags. This means AP2 transactions can still leverage fraud detection systems, 3-D Secure checks, or any regulatory checks required, with AP2’s mandates acting as extra authentication factors or context cues. For instance, a bank could treat a Payment Mandate akin to a customer’s digital signature on a transaction, potentially streamlining compliance with requirements for user consent. Additionally, AP2’s designers explicitly mention working “in concert with industry rules and standards”. We can infer that as AP2 evolves, it may be brought to formal standards bodies (such as the W3C, EMVCo, or ISO) to ensure it aligns with global financial standards. Google has stated commitment to an open, collaborative evolution of AP2 possibly through standards organizations. This open process will help iron out any regulatory concerns and achieve broad acceptance, similar to how previous payment standards (EMV chip cards, 3-D Secure, etc.) underwent industry-wide collaboration.

Interoperability: Avoiding fragmentation is a key goal of AP2. To that end, the protocol is openly published and made available for anyone to implement or integrate. It is not tied to Google Cloud services – in fact, AP2 is open source (Apache 2.0-licensed) and the specification plus reference code is on a public GitHub repository. This encourages interoperability because multiple vendors can adopt AP2 and still have their systems work together. Already, the interoperability principle is highlighted: AP2 is an extension of existing open protocols (A2A, MCP) and is non-proprietary, meaning it fosters a competitive ecosystem of implementations rather than a single-vendor solution. In practical terms, an AI agent built by Company A could initiate a transaction with a merchant system from Company B if both follow AP2 – neither side is locked into one platform.

One possible concern is ensuring consistent adoption: if some major players chose a different protocol or closed approach, fragmentation could still occur. However, given the broad coalition behind AP2, it appears poised to become a de facto standard. The inclusion of many identity- and security-focused firms (for example, Okta, Cloudflare, Ping Identity) in the AP2 ecosystem suggests that interoperability and security are being jointly addressed. (Figure: a partial list of the 60+ companies across finance, tech, and crypto collaborating on AP2.) These partners can help integrate AP2 into identity verification workflows and fraud prevention tools, ensuring that an AP2 transaction can be trusted across systems.

From a technology standpoint, AP2’s use of widely accepted cryptographic techniques (likely JSON-LD or JWT-based verifiable credentials, public-key signatures, etc.) makes it compatible with existing security infrastructure. Organizations can use their existing PKI (Public Key Infrastructure) to manage the keys that sign mandates. AP2 also seems to anticipate integration with decentralized identity systems: Google mentions that AP2 creates opportunities to innovate in areas like decentralized identity for agent authorization. This means that in the future, AP2 could leverage DID (Decentralized Identifier) standards to identify agents and users in a verifiable, trusted way. Such an approach would further enhance interoperability by not relying on any single identity provider. In summary, AP2 emphasizes security through cryptography and clear accountability, aims to be compliance-ready by design, and promotes interoperability through its open-standard nature and broad industry support.
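Because mandates are ultimately signed payloads, they can be produced and verified with standard tooling. Here is a minimal sketch using Node's built-in crypto module with Ed25519 keys – a real deployment would use the credential format the AP2 spec defines (e.g., W3C Verifiable Credentials) and a canonical serialization rather than plain JSON.stringify:

import { generateKeyPairSync, sign, verify } from 'node:crypto';

// Each party (user, merchant) manages its own keypair, e.g. via existing PKI.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Sign the mandate payload. Real systems would canonicalize the JSON first
// so signatures are reproducible across implementations.
const mandate = Buffer.from(JSON.stringify({ scope: 'buy jacket', maxPrice: 144 }));
const signature = sign(null, mandate, privateKey); // Ed25519 implies the algorithm

// Any verifier (merchant, issuer) checks the signature against the public key.
const ok = verify(null, mandate, publicKey, signature);
console.log('mandate signature valid:', ok); // true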

Comparison with Existing Protocols

AP2 is a novel protocol addressing a gap that existing payment and agent frameworks have not covered: enabling autonomous agents to perform payments in a secure, standardized manner. In terms of agent communication protocols, AP2 builds on prior work like the Agent2Agent (A2A) protocol. A2A (open-sourced earlier in 2025) allows different AI agents to talk to each other regardless of their underlying frameworks. However, A2A by itself doesn’t define how agents should conduct transactions or payments – it’s more about task negotiation and data exchange. AP2 extends this landscape by adding a transaction layer that any agent can use when a conversation leads to a purchase. In essence, AP2 can be seen as complementary to A2A and MCP, rather than overlapping: A2A covers the communication and collaboration aspects, MCP covers using external tools/APIs, and AP2 covers payments and commerce. Together, they form a stack of standards for a future “agent economy.” This modular approach is somewhat analogous to internet protocols: for example, HTTP for data communication and SSL/TLS for security – here A2A might be like the HTTP of agents, and AP2 the secure transactional layer on top for commerce.

When comparing AP2 to traditional payment protocols and standards, there are both parallels and differences. Traditional online payments (credit card checkouts, PayPal transactions, etc.) typically involve protocols like HTTPS for secure transmission, and standards like PCI DSS for handling card data, plus possibly 3-D Secure for additional user authentication. These assume a user-driven flow (user clicks and perhaps enters a one-time code). AP2, by contrast, introduces a way for a third-party (the agent) to participate in the flow without undermining security. One could compare AP2’s mandate concept to an extension of OAuth-style delegated authority, but applied to payments. In OAuth, a user can grant an application limited access to an account via tokens; similarly in AP2, a user grants an agent authority to spend under certain conditions via mandates. The key difference is that AP2’s “tokens” (mandates) are specific, signed instructions for financial transactions, which is more fine-grained than existing payment authorizations.

Another point of comparison is how AP2 relates to existing e-commerce checkout flows. For instance, many e-commerce sites use protocols like the W3C Payment Request API or platform-specific SDKs to streamline payments. Those mainly standardize how browsers or apps collect payment info from a user, whereas AP2 standardizes how an agent would prove user intent to a merchant and payment processor. AP2’s focus on verifiable intent and non-repudiation sets it apart from simpler payment APIs. It’s adding an additional layer of trust on top of the payment networks. One could say AP2 is not replacing the payment networks (Visa, ACH, blockchain, etc.), but rather augmenting them. The protocol explicitly supports all types of payment methods (even crypto), so it is more about standardizing the agent’s interaction with these systems, not creating a new payment rail from scratch.

In the realm of security and authentication protocols, AP2 shares some spirit with things like digital signatures in EMV chip cards or the notarization in digital contracts. For example, EMV chip card transactions generate cryptograms to prove the card was present; AP2 generates cryptographic proof that the user’s agent was authorized. Both aim to prevent fraud, but AP2’s scope is the agent-user relationship and agent-merchant messaging, which no existing payment standard addresses. Another emerging comparison is with account abstraction in crypto (e.g. ERC-4337) where users can authorize pre-programmed wallet actions. Crypto wallets can be set to allow certain automated transactions (like auto-paying a subscription via a smart contract), but those are typically confined to one blockchain environment. AP2, on the other hand, aims to be cross-platform – it can leverage blockchain for some payments (through its extensions) but also works with traditional banks.

There isn’t a direct “competitor” protocol to AP2 in the mainstream payments industry yet – it appears to be the first concerted effort at an open standard for AI-agent payments. Proprietary attempts may arise (or may already be in progress within individual companies), but AP2’s broad support gives it an edge in becoming the standard. It’s worth noting that IBM and others have an Agent Communication Protocol (ACP) and similar initiatives for agent interoperability, but those don’t encompass the payment aspect in the comprehensive way AP2 does. If anything, AP2 might integrate with or leverage those efforts (for example, IBM’s agent frameworks could implement AP2 for any commerce tasks).

In summary, AP2 distinguishes itself by targeting the unique intersection of AI and payments: where older payment protocols assumed a human user, AP2 assumes an AI intermediary and fills the trust gap that results. It extends, rather than conflicts with, existing payment processes, and complements existing agent protocols like A2A. Going forward, one might see AP2 being used alongside established standards – for instance, an AP2 Cart Mandate might work in tandem with a traditional payment gateway API call, or an AP2 Payment Mandate might be attached to an ISO 8583 message in banking. The open nature of AP2 also means that if any alternative approaches emerge, AP2 could potentially absorb or align with them through community collaboration. At this stage, AP2 is setting a baseline that did not exist before, effectively pioneering a new layer of protocol in the AI and payments stack.

Implications for Web3 and Decentralized Systems

From the outset, AP2 has been designed to be inclusive of Web3 and cryptocurrency-based payments. The protocol recognizes that future commerce will span both traditional fiat channels and decentralized blockchain networks. As noted earlier, AP2 supports payment types ranging from credit cards and bank transfers to stablecoins and cryptocurrencies. In fact, alongside AP2’s launch, Google announced a specific extension for crypto payments called A2A x402. This extension, developed in collaboration with crypto-industry players like Coinbase, the Ethereum Foundation, and MetaMask, is a “production-ready solution for agent-based crypto payments”. The name “x402” is an homage to the HTTP 402 “Payment Required” status code, which was never widely used on the Web – AP2’s crypto extension effectively revives the spirit of HTTP 402 for decentralized agents that want to charge or pay each other on-chain. In practical terms, the x402 extension adapts AP2’s mandate concept to blockchain transactions. For example, an agent could hold a signed Intent Mandate from a user and then execute an on-chain payment (say, send a stablecoin) once conditions are met, attaching proof of the mandate to that on-chain transaction. This marries the AP2 off-chain trust framework with the trustless nature of blockchain, giving the best of both worlds: an on-chain payment that off-chain parties (users, merchants) can trust was authorized by the user.

The synergy between AP2 and Web3 is evident in the list of collaborators. Crypto exchanges (Coinbase), blockchain foundations (Ethereum Foundation), crypto wallets (MetaMask), and Web3 startups (e.g. Mysten Labs of Sui, Lightspark for Lightning Network) are involved in AP2’s development. Their participation suggests AP2 is viewed as complementary to decentralized finance rather than competitive. By creating a standard way for AI agents to interact with crypto payments, AP2 could drive more usage of crypto in AI-driven applications. For instance, an AI agent might use AP2 to seamlessly swap between paying with a credit card or paying with a stablecoin, depending on user preference or merchant acceptance. The A2A x402 extension specifically allows agents to monetize or pay for services through on-chain means, which could be crucial in decentralized marketplaces of the future. It hints at agents possibly running as autonomous economic actors on blockchain (a concept some refer to as DACs or DAOs) being able to handle payments required for services (like paying a small fee to another agent for information). AP2 could provide the lingua franca for such transactions, ensuring even on a decentralized network, the agent has a provable mandate for what it’s doing.

In terms of competition, one could ask: do purely decentralized solutions make AP2 unnecessary, or vice-versa? It’s likely that AP2 will coexist with Web3 solutions in a layered approach. Decentralized finance offers trustless execution (smart contracts, etc.), but it doesn’t inherently solve the problem of “Did an AI have permission from a human to do this?”. AP2 addresses that very human-to-AI trust link, which remains important even if the payment itself is on-chain. Rather than competing with blockchain protocols, AP2 can be seen as bridging them with the off-chain world. For example, a smart contract might accept a certain transaction only if it includes a reference to a valid AP2 mandate signature – something that could be implemented to combine off-chain intent proof with on-chain enforcement. Conversely, if there are crypto-native agent frameworks (some blockchain projects explore autonomous agents that operate with crypto funds), they might develop their own methods for authorization. AP2’s broad industry support, however, might steer even those projects to adopt or integrate with AP2 for consistency.

Another angle is decentralized identity and credentials. AP2’s use of verifiable credentials is very much in line with Web3’s approach to identity (e.g. DIDs and VCs as standardized by W3C). This means AP2 could plug into decentralized identity systems – for instance, a user’s DID could be used to sign an AP2 mandate, which a merchant could verify against a blockchain or identity hub. The mention of exploring decentralized identity for agent authorization reinforces that AP2 may leverage Web3 identity innovations for verifying agent and user identities in a decentralized way, rather than relying only on centralized authorities. This is a point of synergy, as both AP2 and Web3 aim to give users more control and cryptographic proof of their actions.

Potential conflicts might arise only if one envisions a fully decentralized commerce ecosystem with no role for large intermediaries – in that scenario, could AP2 (initially pushed by Google and partners) be too centralized or governed by traditional players? It’s important to note AP2 is open source and intended to be standardizable, so it’s not proprietary to Google. This makes it more palatable to the Web3 community, which values open protocols. If AP2 becomes widely adopted, it might reduce the need for separate Web3-specific payment protocols for agents, thereby unifying efforts. On the other hand, some blockchain projects might prefer purely on-chain authorization mechanisms (like multi-signature wallets or on-chain escrow logic) for agent transactions, especially in trustless environments without any centralized authorities. Those could be seen as alternative approaches, but they likely would remain niche unless they can interact with off-chain systems. AP2, by covering both worlds, might actually accelerate Web3 adoption by making crypto just another payment method an AI agent can use seamlessly. Indeed, one partner noted that “stablecoins provide an obvious solution to scaling challenges [for] agentic systems with legacy infrastructure”, highlighting that crypto can complement AP2 in handling scale or cross-border scenarios. Meanwhile, Coinbase’s engineering lead remarked that bringing the x402 crypto extension into AP2 “made sense – it’s a natural playground for agents... exciting to see agents paying each other resonate with the AI community”. This implies a vision where AI agents transacting via crypto networks is not just a theoretical idea but an expected outcome, with AP2 acting as a catalyst.

In summary, AP2 is highly relevant to Web3: it incorporates crypto payments as a first-class citizen and is aligning with decentralized identity and credential standards. Rather than competing head-on with decentralized payment protocols, AP2 likely interoperates with them – providing the authorization layer while the decentralized systems handle the value transfer. As the line between traditional finance and crypto blurs (with stablecoins, CBDCs, etc.), a unified protocol like AP2 could serve as a universal adapter between AI agents and any form of money, centralized or decentralized.

Industry Adoption, Partnerships, and Roadmap

One of AP2’s greatest strengths is the extensive industry backing behind it, even at this early stage. Google Cloud announced that it is “collaborating with a diverse group of more than 60 organizations” on AP2. These include major credit card networks (e.g. Mastercard, American Express, JCB, UnionPay), leading fintechs and payment processors (PayPal, Worldpay, Adyen, Checkout.com – notably, major competitors of Stripe), e-commerce and online marketplaces (Etsy, Lazada, Zalora, and possibly Shopify via partners), enterprise tech companies (Salesforce, ServiceNow, Dell, Red Hat, and possibly Oracle via partners), identity and security firms (Okta, Ping Identity, Cloudflare), consulting firms (Deloitte, Accenture), and crypto/Web3 organizations (Coinbase, Ethereum Foundation, MetaMask, Mysten Labs, Lightspark), among others. Such a wide array of participants is a strong indicator of industry interest and likely adoption. Many of these partners have publicly voiced support. For example, Adyen’s Co-CEO highlighted the need for a “common rulebook” for agentic commerce and sees AP2 as a natural extension of their mission to support merchants with new payment building blocks. American Express’s EVP stated that AP2 is important for “the next generation of digital payments” where trust and accountability are paramount. Coinbase’s team, as noted, is excited about integrating crypto payments into AP2. This chorus of support shows that many in the industry view AP2 as the likely standard for AI-driven payments, and they are keen to shape it to ensure it meets their requirements.

From an adoption standpoint, AP2 is currently at the specification and early implementation stage (announced in September 2025). The complete technical spec, documentation, and some reference implementations (in languages like Python) are available on the project’s GitHub for developers to experiment with. Google has also indicated that AP2 will be incorporated into its products and services for agents. A notable example is the AI Agent Marketplace mentioned earlier: this is a platform where third-party AI agents can be offered to users (likely part of Google’s generative AI ecosystem). Google says many partners building agents will make them available in the marketplace with “new, transactable experiences enabled by AP2”. This implies that as the marketplace launches or grows, AP2 will be the backbone for any agent that needs to perform a transaction, whether it’s buying software from the Google Cloud Marketplace autonomously or an agent purchasing goods/services for a user. Enterprise use cases like autonomous procurement (one agent buying from another on behalf of a company) and automatic license scaling have been specifically mentioned as areas AP2 could facilitate soon.

In terms of a roadmap, the AP2 documentation and Google’s announcement give some clear indications:

  • Near-term: Continue open development of the protocol with community input. The GitHub repo will be updated with additional reference implementations and improvements as real-world testing happens. We can expect libraries/SDKs to emerge, making it easier to integrate AP2 into agent applications. Also, initial pilot programs or proofs-of-concept might be conducted by the partner companies. Given that many large payment companies are involved, they might trial AP2 in controlled environments (e.g., an AP2-enabled checkout option in a small user beta).
  • Standards and Governance: Google has expressed a commitment to move AP2 into an open governance model, possibly via standards bodies. This could mean submitting AP2 to organizations like the Linux Foundation (as was done with the A2A protocol) or forming a consortium to maintain it. The Linux Foundation, W3C, or even bodies like ISO/TC68 (financial services) might be in the cards for formalizing AP2. An open governance would reassure the industry that AP2 is not under single-company control and will remain neutral and inclusive.
  • Feature Expansion: Technically, the roadmap includes expanding support to more payment types and use cases. As noted in the spec, after cards, the focus will shift to “push” payments like bank wires and local real-time payment schemes, and digital currencies. This means AP2 will outline how an Intent/Cart/Payment Mandate works for, say, a direct bank transfer or a crypto wallet transfer, where the flow is a bit different than card pulls. The A2A x402 extension is one such expansion for crypto; similarly, we might see an extension for open banking APIs or one for B2B invoicing scenarios.
  • Security & Compliance Enhancements: As real transactions start flowing through AP2, there will be scrutiny from regulators and security researchers. The open process will likely iterate on making mandates even more robust (e.g., ensuring mandate formats are standardized, possibly using W3C Verifiable Credentials format, etc.). Integration with identity solutions (perhaps leveraging biometrics for user signing of mandates, or linking mandates to digital identity wallets) could be part of the roadmap to enhance trust.
  • Ecosystem Tools: An emerging ecosystem is likely. Already, startups are noticing gaps – for instance, the Vellum.ai analysis mentions a startup called Autumn building “billing infrastructure for AI,” essentially tooling on top of Stripe to handle complex pricing for AI services. As AP2 gains traction, we can expect more tools like agent-focused payment gateways, mandate management dashboards, agent identity verification services, etc., to appear. Google’s involvement means AP2 could also be integrated into its Cloud products – imagine AP2 support in Dialogflow or Vertex AI Agents tooling, making it one-click to enable an agent to handle transactions (with all the necessary keys and certificates managed in Google Cloud).

Overall, the trajectory of AP2 is reminiscent of other major industry standards: an initial launch with a strong sponsor (Google), broad industry coalition, open-source reference code, followed by iterative improvement and gradual adoption in real products. The fact that AP2 is inviting all players “to build this future with us” underscores that the roadmap is about collaboration. If the momentum continues, AP2 could become as commonplace in a few years as protocols like OAuth or OpenID Connect are today in their domains – an unseen but critical layer enabling functionality across services.

Conclusion

AP2 (the Agent Payments Protocol) represents a significant step toward a future where AI agents can transact as reliably and securely as humans. Technically, it introduces a clever mechanism of verifiable mandates and credentials that instill trust in agent-led transactions, ensuring user intent is explicit and enforceable. Its open, extensible architecture allows it to integrate both with the burgeoning AI agent frameworks and the established financial infrastructure. By addressing core concerns of authorization, authenticity, and accountability, AP2 lays the groundwork for AI-driven commerce to flourish without sacrificing security or user control.

The introduction of AP2 can be seen as laying a new foundation – much like early internet protocols enabled the web – for what some call the “agent economy.” It paves the way for countless innovations: personal shopper agents, automatic deal-finding bots, autonomous supply chain agents, and more, all operating under a common trust framework. Importantly, AP2’s inclusive design (embracing everything from credit cards to crypto) positions it at the intersection of traditional finance and Web3, potentially bridging these worlds through a common agent-mediated protocol.

Industry response so far has been very positive, with a broad coalition signaling that AP2 is likely to become a widely adopted standard. The success of AP2 will depend on continued collaboration and real-world testing, but its prospects are strong given the clear need it addresses. In a broader sense, AP2 exemplifies how technology evolves: a new capability (AI agents) emerged that broke old assumptions, and the solution was to develop a new open standard to accommodate that capability. By investing in an open, security-first protocol now, Google and its partners are effectively building the trust architecture required for the next era of commerce. As the saying goes, “the best way to predict the future is to build it” – AP2 is a bet on a future where AI agents seamlessly handle transactions for us, and it is actively constructing the trust and rules needed to make that future viable.

Sources:

  • Google Cloud Blog – “Powering AI commerce with the new Agent Payments Protocol (AP2)” (Sept 16, 2025)
  • AP2 GitHub Documentation – “Agent Payments Protocol Specification and Overview”
  • Vellum AI Blog – “Google’s AP2: A new protocol for AI agent payments” (Analysis)
  • Medium Article – “Google Agent Payments Protocol (AP2)” (Summary by Tahir, Sept 2025)
  • Partner Quotes on AP2 (Google Cloud Blog)
  • A2A x402 Extension (AP2 crypto payments extension) – GitHub README

Building Decentralized Encryption with @mysten/seal: A Developer's Tutorial

· 13 min read
Dora Noda
Software Engineer

Privacy is becoming public infrastructure. In 2025, developers need tools that make encryption as easy as storing data. Mysten Labs' Seal provides exactly that—decentralized secrets management with onchain access control. This tutorial will teach you how to build secure Web3 applications using identity-based encryption, threshold security, and programmable access policies.


Introduction: Why Seal Matters for Web3

Traditional cloud applications rely on centralized key management systems where a single provider controls access to encrypted data. While convenient, this creates dangerous single points of failure. If the provider is compromised, goes offline, or decides to restrict access, your data becomes inaccessible or vulnerable.

Seal changes this paradigm entirely. Built by Mysten Labs for the Sui blockchain, Seal is a decentralized secrets management (DSM) service that enables:

  • Identity-based encryption where content is protected before it leaves your environment
  • Threshold encryption that distributes key access across multiple independent nodes
  • Onchain access control with time locks, token-gating, and custom authorization logic
  • Storage agnostic design that works with Walrus, IPFS, or any storage solution

Whether you're building secure messaging apps, gated content platforms, or time-locked asset transfers, Seal provides the cryptographic primitives and access control infrastructure you need.


Getting Started

Prerequisites

Before diving in, ensure you have:

  • Node.js 18+ installed
  • Basic familiarity with TypeScript/JavaScript
  • A Sui wallet for testing (like Sui Wallet)
  • Understanding of blockchain concepts

Installation

Install the Seal SDK via npm:

npm install @mysten/seal

You'll also want the Sui SDK for blockchain interactions:

npm install @mysten/sui

Project Setup

Create a new project and initialize it:

mkdir seal-tutorial
cd seal-tutorial
npm init -y
npm install @mysten/seal @mysten/sui typescript @types/node

Create a simple TypeScript configuration:

// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  }
}

Core Concepts: How Seal Works

Before writing code, let's understand Seal's architecture:

1. Identity-Based Encryption (IBE)

Unlike traditional encryption where you encrypt to a public key, IBE lets you encrypt to an identity (like an email address or Sui address). The recipient can only decrypt if they can prove they control that identity.

2. Threshold Encryption

Instead of trusting a single key server, Seal uses t-of-n threshold schemes. You might configure 3-of-5 key servers, meaning any 3 servers can cooperate to provide decryption keys, but 2 or fewer cannot.

3. Onchain Access Control

Access policies are enforced by Sui smart contracts. Before a key server provides decryption keys, it verifies that the requestor meets the onchain policy requirements (token ownership, time constraints, etc.).

4. Key Server Network

Distributed key servers validate access policies and generate decryption keys. These servers are operated by different parties to ensure no single point of control.


Basic Implementation: Your First Seal Application

Let's build a simple application that encrypts sensitive data and controls access through Sui blockchain policies.

Step 1: Initialize the Seal Client

// src/seal-client.ts
import { SealClient } from '@mysten/seal';
import { SuiClient } from '@mysten/sui/client';

export async function createSealClient() {
  // Initialize Sui client for testnet
  const suiClient = new SuiClient({
    url: 'https://fullnode.testnet.sui.io'
  });

  // Configure Seal client with testnet key servers
  // (the endpoints shown here are illustrative placeholders)
  const sealClient = new SealClient({
    suiClient,
    keyServers: [
      'https://keyserver1.seal-testnet.com',
      'https://keyserver2.seal-testnet.com',
      'https://keyserver3.seal-testnet.com'
    ],
    threshold: 2, // 2-of-3 threshold
    network: 'testnet'
  });

  return { sealClient, suiClient };
}

Step 2: Simple Encryption/Decryption

// src/basic-encryption.ts
import { createSealClient } from './seal-client';

async function basicExample() {
  const { sealClient } = await createSealClient();

  // Data to encrypt
  const sensitiveData = "This is my secret message!";
  const recipientAddress = "0x742d35cc6d4c0c08c0f9bf3c9b2b6c64b3b4f5c6d7e8f9a0b1c2d3e4f5a6b7c8";

  try {
    // Encrypt data for a specific Sui address
    const encryptedData = await sealClient.encrypt({
      data: Buffer.from(sensitiveData, 'utf-8'),
      recipientId: recipientAddress,
      // Optional: add metadata
      metadata: {
        contentType: 'text/plain',
        timestamp: Date.now()
      }
    });

    console.log('Encrypted data:', {
      ciphertext: encryptedData.ciphertext.toString('base64'),
      encryptionId: encryptedData.encryptionId
    });

    // Later, decrypt the data (requires proper authorization)
    const decryptedData = await sealClient.decrypt({
      ciphertext: encryptedData.ciphertext,
      encryptionId: encryptedData.encryptionId,
      recipientId: recipientAddress
    });

    console.log('Decrypted data:', decryptedData.toString('utf-8'));

  } catch (error) {
    console.error('Encryption/decryption failed:', error);
  }
}

basicExample();

Access Control with Sui Smart Contracts

The real power of Seal comes from programmable access control. Let's create a time-locked encryption example where data can only be decrypted after a specific time.

Step 1: Deploy Access Control Contract

First, we need a Move smart contract that defines our access policy:

// contracts/time_lock.move
module time_lock::policy {
    use sui::clock::{Self, Clock};
    use sui::object::{Self, UID};
    use sui::tx_context::{Self, TxContext};

    public struct TimeLockPolicy has key, store {
        id: UID,
        unlock_time: u64,
        authorized_user: address,
    }

    public fun create_time_lock(
        unlock_time: u64,
        authorized_user: address,
        ctx: &mut TxContext
    ): TimeLockPolicy {
        TimeLockPolicy {
            id: object::new(ctx),
            unlock_time,
            authorized_user,
        }
    }

    public fun can_decrypt(
        policy: &TimeLockPolicy,
        user: address,
        clock: &Clock
    ): bool {
        let current_time = clock::timestamp_ms(clock);
        policy.authorized_user == user && current_time >= policy.unlock_time
    }
}
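Before the TypeScript client can call this module, the package must be built and published; the resulting on-chain package ID then replaces the `time_lock` placeholder in the moveCall target used in the next snippet. With the Sui CLI, the flow typically looks like this (the gas budget is illustrative):

sui move build
sui client publish --gas-budget 100000000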

Step 2: Integrate with Seal

// src/time-locked-encryption.ts
import { createSealClient } from './seal-client';
import { TransactionBlock } from '@mysten/sui/transactions';

async function createTimeLocked() {
  const { sealClient, suiClient } = await createSealClient();

  // Create access policy on Sui
  const txb = new TransactionBlock();

  const unlockTime = Date.now() + 60000; // Unlock in 1 minute
  const authorizedUser = "0x742d35cc6d4c0c08c0f9bf3c9b2b6c64b3b4f5c6d7e8f9a0b1c2d3e4f5a6b7c8";

  // Replace `time_lock` with the package ID printed when you publish the
  // Move package (see the CLI step above).
  const [policy] = txb.moveCall({
    target: 'time_lock::policy::create_time_lock',
    arguments: [
      txb.pure(unlockTime),
      txb.pure(authorizedUser)
    ]
  });

  // The Move function returns the policy object by value, so the transaction
  // must consume it -- here we hand it to the authorized user.
  txb.transferObjects([policy], txb.pure(authorizedUser));

  // Execute transaction to create the policy
  const result = await suiClient.signAndExecuteTransactionBlock({
    transactionBlock: txb,
    signer: yourKeypair, // Your Sui keypair (placeholder -- supply your own)
    options: { showObjectChanges: true } // needed for result.objectChanges below
  });

  const policyId = result.objectChanges?.find(
    change => change.type === 'created'
  )?.objectId;

  // Now encrypt with this policy
  const sensitiveData = "This will unlock in 1 minute!";

  const encryptedData = await sealClient.encrypt({
    data: Buffer.from(sensitiveData, 'utf-8'),
    recipientId: authorizedUser,
    accessPolicy: {
      policyId,
      policyType: 'time_lock'
    }
  });

  console.log('Time-locked data created. Try decrypting after 1 minute.');

  return {
    encryptedData,
    policyId,
    unlockTime
  };
}

Practical Examples

Example 1: Secure Messaging Application

// src/secure-messaging.ts
import { createSealClient } from './seal-client';

export class SecureMessenger {
  private sealClient: any;

  constructor(sealClient: any) {
    this.sealClient = sealClient;
  }

  async sendMessage(
    message: string,
    recipientAddress: string,
    senderKeypair: any
  ) {
    const messageData = {
      content: message,
      timestamp: Date.now(),
      sender: senderKeypair.toSuiAddress(),
      messageId: crypto.randomUUID()
    };

    const encryptedMessage = await this.sealClient.encrypt({
      data: Buffer.from(JSON.stringify(messageData), 'utf-8'),
      recipientId: recipientAddress,
      metadata: {
        type: 'secure_message',
        sender: senderKeypair.toSuiAddress()
      }
    });

    // Store encrypted message on decentralized storage (Walrus)
    return this.storeOnWalrus(encryptedMessage);
  }

  async readMessage(encryptionId: string, recipientKeypair: any) {
    // Retrieve from storage
    const encryptedData = await this.retrieveFromWalrus(encryptionId);

    // Decrypt with Seal
    const decryptedData = await this.sealClient.decrypt({
      ciphertext: encryptedData.ciphertext,
      encryptionId: encryptedData.encryptionId,
      recipientId: recipientKeypair.toSuiAddress()
    });

    return JSON.parse(decryptedData.toString('utf-8'));
  }

  private async storeOnWalrus(data: any) {
    // Integration with Walrus storage
    // This would upload the encrypted data to Walrus
    // and return the blob ID for retrieval
  }

  private async retrieveFromWalrus(blobId: string) {
    // Retrieve encrypted data from Walrus using blob ID
  }
}

Example 2: Token-Gated Content Platform

// src/gated-content.ts
import { createSealClient } from './seal-client';

class ContentGating {
  private sealClient: any;
  private suiClient: any;

  constructor(sealClient: any, suiClient: any) {
    this.sealClient = sealClient;
    this.suiClient = suiClient;
  }

  async createGatedContent(
    content: string,
    requiredNftCollection: string,
    creatorKeypair: any
  ) {
    // Create NFT ownership policy
    const accessPolicy = await this.createNftPolicy(
      requiredNftCollection,
      creatorKeypair
    );

    // Encrypt content with NFT access requirement
    const encryptedContent = await this.sealClient.encrypt({
      data: Buffer.from(content, 'utf-8'),
      recipientId: 'nft_holders', // Special recipient for NFT holders
      accessPolicy: {
        policyId: accessPolicy.policyId,
        policyType: 'nft_ownership'
      }
    });

    return {
      contentId: encryptedContent.encryptionId,
      accessPolicy: accessPolicy.policyId
    };
  }

  async accessGatedContent(
    contentId: string,
    userAddress: string,
    userKeypair: any
  ) {
    // Verify NFT ownership first
    const hasAccess = await this.verifyNftOwnership(
      userAddress,
      contentId
    );

    if (!hasAccess) {
      throw new Error('Access denied: Required NFT not found');
    }

    // Decrypt content
    const decryptedContent = await this.sealClient.decrypt({
      encryptionId: contentId,
      recipientId: userAddress
    });

    return decryptedContent.toString('utf-8');
  }

  private async createNftPolicy(collection: string, creator: any): Promise<{ policyId: string }> {
    // Deploy/configure a Move policy that checks NFT ownership and
    // return its object ID (left as an exercise).
    throw new Error('createNftPolicy not implemented');
  }

  private async verifyNftOwnership(user: string, contentId: string): Promise<boolean> {
    // Query Sui to check whether the user owns the required NFT (left as an exercise).
    throw new Error('verifyNftOwnership not implemented');
  }
}

Example 3: Time-Locked Asset Transfer

// src/time-locked-transfer.ts
import { createSealClient } from './seal-client';

async function createTimeLockTransfer(
  assetData: any,
  recipientAddress: string,
  unlockTimestamp: number,
  senderKeypair: any
) {
  const { sealClient, suiClient } = await createSealClient();

  // Create time-lock policy on Sui (createTimeLockPolicy is a helper that
  // wraps the on-chain policy creation shown in the previous section)
  const timeLockPolicy = await createTimeLockPolicy(
    unlockTimestamp,
    recipientAddress,
    senderKeypair,
    suiClient
  );

  // Encrypt asset transfer data
  const transferData = {
    asset: assetData,
    recipient: recipientAddress,
    unlockTime: unlockTimestamp,
    transferId: crypto.randomUUID()
  };

  const encryptedTransfer = await sealClient.encrypt({
    data: Buffer.from(JSON.stringify(transferData), 'utf-8'),
    recipientId: recipientAddress,
    accessPolicy: {
      policyId: timeLockPolicy.policyId,
      policyType: 'time_lock'
    }
  });

  console.log(`Asset locked until ${new Date(unlockTimestamp)}`);

  return {
    transferId: encryptedTransfer.encryptionId,
    unlockTime: unlockTimestamp,
    policyId: timeLockPolicy.policyId
  };
}

async function claimTimeLockTransfer(
  transferId: string,
  recipientKeypair: any
) {
  const { sealClient } = await createSealClient();

  try {
    const decryptedData = await sealClient.decrypt({
      encryptionId: transferId,
      recipientId: recipientKeypair.toSuiAddress()
    });

    const transferData = JSON.parse(decryptedData.toString('utf-8'));

    // Process the asset transfer
    console.log('Asset transfer unlocked:', transferData);

    return transferData;
  } catch (error) {
    console.error('Transfer not yet unlocked or access denied:', error);
    throw error;
  }
}

Integration with Walrus Decentralized Storage

Seal works seamlessly with Walrus, Sui's decentralized storage solution. Here's how to integrate both:

// src/walrus-integration.ts
import { createSealClient } from './seal-client';

// Schematic Walrus client shape used below; swap in a real SDK such as
// @mysten/walrus (whose constructor and method names differ slightly).
interface WalrusLikeClient {
  store(data: any): Promise<string>;
  retrieve(blobId: string): Promise<any>;
}
// Placeholder binding for whichever Walrus client implementation you use
declare const WalrusClient: new (url: string) => WalrusLikeClient;

class SealWalrusIntegration {
  private sealClient: any;
  private walrusClient: WalrusLikeClient;

  constructor(sealClient: any, walrusClient: WalrusLikeClient) {
    this.sealClient = sealClient;
    this.walrusClient = walrusClient;
  }

  async storeEncryptedData(
    data: Buffer,
    recipientAddress: string,
    accessPolicy?: any
  ) {
    // Encrypt with Seal
    const encryptedData = await this.sealClient.encrypt({
      data,
      recipientId: recipientAddress,
      accessPolicy
    });

    // Store encrypted data on Walrus
    const blobId = await this.walrusClient.store(
      encryptedData.ciphertext
    );

    // Return reference that includes both Seal and Walrus info
    return {
      blobId,
      encryptionId: encryptedData.encryptionId,
      accessPolicy: encryptedData.accessPolicy
    };
  }

  async retrieveAndDecrypt(
    blobId: string,
    encryptionId: string,
    userKeypair: any
  ) {
    // Retrieve from Walrus
    const encryptedData = await this.walrusClient.retrieve(blobId);

    // Decrypt with Seal
    const decryptedData = await this.sealClient.decrypt({
      ciphertext: encryptedData,
      encryptionId,
      recipientId: userKeypair.toSuiAddress()
    });

    return decryptedData;
  }
}

// Usage example
async function walrusExample() {
  const { sealClient } = await createSealClient();
  const walrusClient = new WalrusClient('https://walrus-testnet.sui.io'); // placeholder endpoint

  const integration = new SealWalrusIntegration(sealClient, walrusClient);

  const fileData = Buffer.from('Important document content');
  const recipientAddress = '0x...'; // the recipient's Sui address
  const recipientKeypair: any = null; // placeholder: supply the recipient's keypair

  // Store encrypted
  const result = await integration.storeEncryptedData(
    fileData,
    recipientAddress
  );

  console.log('Stored with Blob ID:', result.blobId);

  // Later, retrieve and decrypt
  const decrypted = await integration.retrieveAndDecrypt(
    result.blobId,
    result.encryptionId,
    recipientKeypair
  );

  console.log('Retrieved data:', decrypted.toString());
}

Threshold Encryption Advanced Configuration

For production applications, you'll want to configure custom threshold encryption with multiple key servers:

// src/advanced-threshold.ts
import { SealClient } from '@mysten/seal';

async function setupProductionSeal() {
  // Configure with multiple independent key servers
  const keyServers = [
    'https://keyserver-1.your-org.com',
    'https://keyserver-2.partner-org.com',
    'https://keyserver-3.third-party.com',
    'https://keyserver-4.backup-provider.com',
    'https://keyserver-5.fallback.com'
  ];

  const sealClient = new SealClient({
    keyServers,
    threshold: 3, // 3-of-5: tolerates two unavailable servers
    network: 'mainnet',
    // Advanced options
    retryAttempts: 3,
    timeoutMs: 10000,
    backupKeyServers: [
      'https://backup-1.emergency.com',
      'https://backup-2.emergency.com'
    ]
  });

  return sealClient;
}

async function robustEncryption() {
  const sealClient = await setupProductionSeal();

  const criticalData = "Mission critical encrypted data";

  // Encrypt with high security guarantees
  const encrypted = await sealClient.encrypt({
    data: Buffer.from(criticalData, 'utf-8'),
    recipientId: '0x...',
    // Require all 5 servers: maximum confidentiality, but any single
    // server outage blocks decryption (weigh this liveness trade-off)
    customThreshold: 5,
    // Add redundancy
    redundancy: 2,
    accessPolicy: {
      // Multi-factor requirements
      requirements: ['nft_ownership', 'time_lock', 'multisig_approval']
    }
  });

  return encrypted;
}

Security Best Practices

1. Key Management

// src/security-practices.ts

// GOOD: Generate keys with the official SDK
import { Ed25519Keypair } from '@mysten/sui/keypairs/ed25519';

const generatedKeypair = new Ed25519Keypair(); // fresh random keypair

// GOOD: Load keys from secure storage (example with environment variables;
// expects a `suiprivkey...` bech32 string)
const envKeypair = Ed25519Keypair.fromSecretKey(
  process.env.PRIVATE_KEY!
);

// BAD: Never hardcode keys
const badKeypair = Ed25519Keypair.fromSecretKey(
  "hardcoded-secret-key-12345" // Don't do this!
);

2. Access Policy Validation

// Always validate access policies before encryption
import { isValidSuiAddress } from '@mysten/sui/utils';

async function secureEncrypt(data: Buffer, recipient: string, policyId: string) {
  const { sealClient } = await createSealClient();

  // Validate recipient address
  if (!isValidSuiAddress(recipient)) {
    throw new Error('Invalid recipient address');
  }

  // Check policy exists and is valid (validateAccessPolicy is an
  // application-level helper you would implement)
  const policy = await validateAccessPolicy(policyId);
  if (!policy.isValid) {
    throw new Error('Invalid access policy');
  }

  return sealClient.encrypt({
    data,
    recipientId: recipient,
    accessPolicy: policy
  });
}

3. Error Handling and Fallbacks

// Robust error handling
async function resilientDecrypt(encryptionId: string, userKeypair: any) {
  const { sealClient } = await createSealClient();

  try {
    return await sealClient.decrypt({
      encryptionId,
      recipientId: userKeypair.toSuiAddress()
    });
  } catch (error: any) {
    if (error.code === 'ACCESS_DENIED') {
      throw new Error('Access denied: Check your permissions');
    } else if (error.code === 'KEY_SERVER_UNAVAILABLE') {
      // Retry with a backup configuration (application-level helper)
      return await retryWithBackupServers(encryptionId, userKeypair);
    } else if (error.code === 'THRESHOLD_NOT_MET') {
      throw new Error('Insufficient key servers available');
    } else {
      throw new Error(`Decryption failed: ${error.message}`);
    }
  }
}

4. Data Validation

// Validate data before encryption
function validateDataForEncryption(data: Buffer): boolean {
  // Check size limits
  if (data.length > 1024 * 1024) { // 1MB limit
    throw new Error('Data too large for encryption');
  }

  // Check for sensitive patterns (optional; containsSensitivePatterns is an
  // application-level helper you would implement)
  const dataStr = data.toString();
  if (containsSensitivePatterns(dataStr)) {
    console.warn('Warning: Data contains potentially sensitive patterns');
  }

  return true;
}

Performance Optimization

1. Batching Operations

// Batch multiple encryptions for efficiency
async function batchEncrypt(dataItems: Buffer[], recipients: string[]) {
  const { sealClient } = await createSealClient();

  const promises = dataItems.map((data, index) =>
    sealClient.encrypt({
      data,
      recipientId: recipients[index]
    })
  );

  return Promise.all(promises);
}

2. Caching Key Server Responses

// Cache key server sessions to reduce latency
class OptimizedSealClient {
  private sessionCache = new Map();

  async encryptWithCaching(data: Buffer, recipient: string) {
    let session = this.sessionCache.get(recipient);

    if (!session || this.isSessionExpired(session)) {
      session = await this.createNewSession(recipient);
      this.sessionCache.set(recipient, session);
    }

    return this.encryptWithSession(data, session);
  }

  // Schematic helpers wrapping your Seal client's session-key handling
  private isSessionExpired(session: any): boolean { /* ... */ return false; }
  private async createNewSession(recipient: string): Promise<any> { /* ... */ }
  private async encryptWithSession(data: Buffer, session: any): Promise<any> { /* ... */ }
}

Testing Your Seal Integration

Unit Testing

// tests/seal-integration.test.ts
import { describe, it, expect } from '@jest/globals';
import { createSealClient } from '../src/seal-client';

describe('Seal Integration', () => {
  it('should encrypt and decrypt data successfully', async () => {
    const { sealClient } = await createSealClient();
    const testData = Buffer.from('test message');
    const recipient = '0x742d35cc6d4c0c08c0f9bf3c9b2b6c64b3b4f5c6d7e8f9a0b1c2d3e4f5a6b7c8';

    const encrypted = await sealClient.encrypt({
      data: testData,
      recipientId: recipient
    });

    expect(encrypted.encryptionId).toBeDefined();
    expect(encrypted.ciphertext).toBeDefined();

    const decrypted = await sealClient.decrypt({
      ciphertext: encrypted.ciphertext,
      encryptionId: encrypted.encryptionId,
      recipientId: recipient
    });

    expect(decrypted.toString()).toBe('test message');
  });

  it('should enforce access control policies', async () => {
    // Test that unauthorized users cannot decrypt
    const { sealClient } = await createSealClient();

    const encrypted = await sealClient.encrypt({
      data: Buffer.from('secret'),
      recipientId: 'authorized-user'
    });

    await expect(
      sealClient.decrypt({
        ciphertext: encrypted.ciphertext,
        encryptionId: encrypted.encryptionId,
        recipientId: 'unauthorized-user'
      })
    ).rejects.toThrow('Access denied');
  });
});

Deployment to Production

Environment Configuration

// config/production.ts
export const productionConfig = {
  keyServers: [
    process.env.KEY_SERVER_1,
    process.env.KEY_SERVER_2,
    process.env.KEY_SERVER_3,
    process.env.KEY_SERVER_4,
    process.env.KEY_SERVER_5
  ],
  threshold: 3,
  network: 'mainnet',
  suiRpc: process.env.SUI_RPC_URL,
  walrusGateway: process.env.WALRUS_GATEWAY,
  // Security settings
  maxDataSize: 1024 * 1024, // 1MB
  sessionTimeout: 3600000, // 1 hour
  retryAttempts: 3
};

Monitoring and Logging

// utils/monitoring.ts
export class SealMonitoring {
  static logEncryption(encryptionId: string, recipient: string) {
    console.log(`[SEAL] Encrypted data ${encryptionId} for ${recipient}`);
    // Send to your monitoring service
  }

  static logDecryption(encryptionId: string, success: boolean) {
    console.log(`[SEAL] Decryption ${encryptionId}: ${success ? 'SUCCESS' : 'FAILED'}`);
  }

  static logKeyServerHealth(serverUrl: string, status: string) {
    console.log(`[SEAL] Key server ${serverUrl}: ${status}`);
  }
}

Resources and Next Steps

Official Documentation

  • Seal source code and reference docs: the MystenLabs/seal repository on GitHub
  • TypeScript SDK: the @mysten/seal package on npm

Community and Support

  • Sui Discord: Join the #seal channel for community support
  • GitHub Issues: Report bugs and request features
  • Developer Forums: Sui community forums for discussions

Advanced Topics to Explore

  1. Custom Access Policies: Build complex authorization logic with Move contracts
  2. Cross-Chain Integration: Use Seal with other blockchain networks
  3. Enterprise Key Management: Set up your own key server infrastructure
  4. Audit and Compliance: Implement logging and monitoring for regulated environments

Sample Applications

  • Secure Chat App: End-to-end encrypted messaging with Seal
  • Document Management: Enterprise document sharing with access controls
  • Digital Rights Management: Content distribution with usage policies
  • Privacy-Preserving Analytics: Encrypted data processing workflows

Conclusion

Seal represents a fundamental shift toward making privacy and encryption infrastructure-level concerns in Web3. By combining identity-based encryption, threshold security, and programmable access control, it provides developers with powerful tools to build truly secure and decentralized applications.

The key advantages of building with Seal include:

  • No Single Point of Failure: Distributed key servers eliminate central authorities
  • Programmable Security: Smart contract-based access policies provide flexible authorization
  • Developer-Friendly: TypeScript SDK integrates seamlessly with existing Web3 tooling
  • Storage Agnostic: Works with Walrus, IPFS, or any storage solution
  • Production Ready: Built by Mysten Labs with enterprise security standards

Whether you're securing user data, implementing subscription models, or building complex multi-party applications, Seal provides the cryptographic primitives and access control infrastructure you need to build with confidence.

Start building today, and join the growing ecosystem of developers making privacy a fundamental part of public infrastructure.


Ready to start building? Install @mysten/seal and begin experimenting with the examples in this tutorial. The decentralized web is waiting for applications that put privacy and security first.

The Web3 Legal Playbook: 50 FAQs Every Builder Should Master

· 5 min read
Dora Noda
Software Engineer

Launching a protocol or scaling an on-chain product is no longer just a technical exercise. Regulators are scrutinizing everything from token launches to wallet privacy, while users expect consumer-grade protections. To keep shipping with confidence, every founding team needs a structured way to translate dense legal memos into product decisions. Drawing from 50 of the most common questions web3 lawyers hear, this playbook breaks the conversation into builder-ready moves.

1. Formation & Governance: Separate the Devco, the Foundation, and the Community

  • Pick the right wrapper. Standard C-corps or LLCs still handle payroll, IP, and investor diligence best. If you plan to steward a protocol or grant program, a separate non-profit or foundation keeps incentives clean and governance transparent.
  • Paper every relationship. Use IP assignments, confidentiality agreements, and vesting schedules with clear cliffs, lockups, and bad-actor clawbacks. Document board approvals and keep token cap tables as tight as your equity ledgers.
  • Draw bright lines between entities. A development company can build under license, but budget, treasury policy, and decision rights should sit with a foundation or DAO that has its own charter and constitution. Where a DAO needs legal personality, wrap it in an LLC or equivalent.

2. Tokens & Securities: Design for Utility, Document the Rationale

  • Assume regulators look past labels. “Governance” or “utility” tags only matter if users actually interact with a live network, buy for consumption, and are not pitched profit upside. Lockups can reduce speculation but should be justified as stability or anti-sybil safeguards.
  • Differentiate access from investment. Access tokens should read like product passes—pricing, docs, and marketing must reinforce entitlement to services, not future profits. Stablecoins trigger their own payments or e-money regimes depending on reserves and redemption rights.
  • Treat staking and yields like financial products. Any promise of APRs, pooling, or reliance on the team’s efforts raises securities risk. Keep marketing plain, share risk factors, and map a compliant SAFT-to-mainnet plan if you raise with future tokens.
  • Remember NFTs can be securities. Fractionalized ownership, revenue shares, or profit language tips them into investment territory. Lean, consumptive NFTs with explicit licenses are safer.

3. Fundraising & Sales: Market the Network, Not the Moonshot

  • Disclose like a grown-up. Purpose, functionality, vesting, allocations, transfer limits, dependencies, and use of proceeds belong in every sale memo. Keep marketing copy aligned with those docs—no “guaranteed yield” tweets.
  • Respect jurisdictional lines. If you cannot comply with U.S. or other high-friction regimes, layer geofencing with eligibility checks, contractual restrictions, and post-sale monitoring. KYC/AML is standard for sales and increasingly for airdrops.
  • Manage promotion risk. Influencer campaigns need clear sponsorship disclosures and compliant scripts. Exchange listings or market-making deals demand written agreements, conflict checks, and honest communications to venues.

4. AML, Tax, and IP: Build Controls Into the Product

  • Know your regulatory role. Non-custodial software faces lighter AML obligations, but once you touch fiat ramps, custody, or intermediated exchange, money-transmitter or VASP rules apply. Prepare sanctions screening, escalation paths, and travel-rule readiness where relevant.
  • Treat tokens like cash for accounting. Token inflows are typically income at fair market value; sales later trigger gains or losses. Compensation grants often create taxable income at vesting—use written grants, track basis, and prepare for volatility.
  • Respect IP boundaries. Pair NFTs and on-chain content with explicit licenses, honor third-party open-source terms, and register trademarks. If you are training AI models, confirm dataset rights and scrub sensitive data.

5. Privacy & Data: Limit Collection, Plan for Deletion

  • Assume wallet addresses are personal data. Combine them with IPs, device IDs, or emails and you have personally identifiable information. Collect only what you need, store off-chain when possible, and hash or tokenize identifiers.
  • Engineer for erasure. Immutable ledgers do not excuse you from privacy laws—keep PII off-chain, remove references when users request deletion, and sever links that could re-identify hashed data.
  • Be transparent about telemetry. Cookie banners, analytics disclosures, and opt-outs are table stakes. Document an incident response plan that covers severity levels, notification timelines, and contact points.

6. Operations & Risk: Audit Early, Communicate Often

  • Audit and disclose. Independent smart-contract audits, formal verification where warranted, and an ongoing bug bounty signal maturity. Publish reports and explain residual risks plainly.
  • Set clear Terms of Service. Spell out custody status, eligibility, prohibited uses, dispute resolution, and how you handle forks. Align ToS, privacy policy, and in-product behavior.
  • Plan for forks, insurance, and cross-border growth. Reserve rights to choose supported chains, snapshot dates, and migration paths. Explore cyber, crime, D&O, and tech E&O coverage. When operating globally, localize terms, vet export controls, and use EOR/PEO partners to avoid misclassification.
  • Prepare for disputes. Decide in advance whether arbitration or class-action waivers fit your user base. Log law-enforcement requests, verify legal process, and explain technical limits like the absence of key custody.

7. The Builder’s Action Checklist

  • Map your operational role: software vendor, custodian, broker-like service, or payments intermediary.
  • Keep marketing factual and functionality-focused; avoid language that implies speculative returns.
  • Minimize custody and personal data collection; document any unavoidable touchpoints.
  • Maintain living docs for token allocation, governance design, audit status, and risk decisions.
  • Budget for legal counsel, compliance tooling, audits, bug bounties, and tax expertise from day one.

Regulation will not slow down for builders. What changes outcomes is embedding legal considerations into backlog grooming, treasury management, and user communications. Make counsel part of sprint reviews, rehearse incident response, and iterate on disclosures the same way you iterate on UX. Do that, and the 50 FAQs above stop being a blocker and start becoming a competitive moat for your protocol.

From Passwords to Portable Proofs: A 2025 Builder’s Guide to Web3 Identity

· 10 min read
Dora Noda
Software Engineer

Most apps still anchor identity to usernames, passwords, and centralized databases. That model is fragile (breaches), leaky (data resale), and clunky (endless KYC repeats). The new stack emerging around decentralized identifiers (DIDs), verifiable credentials (VCs), and attestations points to a different future: users carry cryptographic proof of facts about themselves and reveal only what’s needed—no more, no less.

This post distills the landscape and offers a practical blueprint you can ship with today.


The Shift: From Accounts to Credentials

The core of this new identity stack is built on two foundational W3C standards that fundamentally change how we handle user data.

  • Decentralized Identifiers (DIDs): These are user-controlled identifiers that don’t require a central registry like a domain name system. Think of a DID as a permanent, self-owned address for identity. A DID resolves to a signed “DID document” containing public keys and service endpoints, allowing for secure, decentralized authentication. The v1.0 standard became an official W3C Recommendation on July 19, 2022, marking a major milestone for the ecosystem.
  • Verifiable Credentials (VCs): This is a tamper-evident, digital format for expressing claims, like "age is over 18," "holds a diploma from University X," or "has passed a KYC check." The VC Data Model 2.0 became a W3C Recommendation on May 15, 2025, locking in a modern foundation for how these credentials are issued and verified.

What changes for developers? The shift is profound. Instead of storing sensitive personally identifiable information (PII) in your database, you verify cryptographic proofs supplied by the user’s wallet. You can request only the specific piece of information you need (e.g., residency in a specific country) without seeing the underlying document, thanks to powerful primitives like selective disclosure.


Where It Meets the Logins You Already Use

This new world doesn't require abandoning familiar login experiences. Instead, it complements them.

  • Passkeys / WebAuthn: This is your go-to for phishing-resistant authentication. Passkeys are FIDO credentials bound to a device or biometric (like Face ID or a fingerprint), and they are now widely supported across all major browsers and operating systems. They offer a seamless, passwordless login experience for your app or wallet.
  • Sign-In with Ethereum (SIWE / EIP-4361): This standard lets a user prove control of a blockchain address and link it to an application session. It works via a simple, signed, nonce-based message, creating a clean bridge between traditional Web2 sessions and Web3 control.

The best practice is to use them together: implement passkeys for mainstream, everyday sign-in and offer SIWE for wallet-linked flows where a user needs to authorize a crypto-native action.
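
As a minimal sketch of the SIWE leg, here is what the message and verification flow can look like with the siwe npm package; the domain, URI, and session wiring are illustrative placeholders, not a prescribed setup.

// Sign-In with Ethereum (EIP-4361) sketch using the `siwe` package.
// `domain`, `uri`, and the session handling are placeholders.
import { SiweMessage, generateNonce } from 'siwe';

// Server: issue a nonce and bind it to the pending session
const nonce = generateNonce();

// Client: build the EIP-4361 message and have the wallet sign the prepared text
const message = new SiweMessage({
  domain: 'app.example.com',
  address: '0x0000000000000000000000000000000000000000', // the user's address
  statement: 'Sign in to Example App',
  uri: 'https://app.example.com',
  version: '1',
  chainId: 1,
  nonce,
});
const textToSign = message.prepareMessage(); // the wallet signs this string

// Server: verify the signature and nonce before opening an app session
async function verifySiweSignature(signature: string) {
  const { data } = await message.verify({ signature, nonce });
  return data.address; // the verified address to bind to the session
}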


The Rails for Issuing and Checking Credentials

For credentials to be useful, we need standardized ways to issue them to users and for users to present them to apps. The OpenID Foundation provides the two key protocols for this.

  • Issuance: OpenID for Verifiable Credential Issuance (OID4VCI) defines an OAuth-protected API for getting credentials from issuers (like a government agency or a KYC provider) into a user's digital wallet. It’s designed to be flexible, supporting multiple credential formats.
  • Presentation: OpenID for Verifiable Presentations (OID4VP) standardizes how your application makes a "proof request" and how a user's wallet responds to it. This can happen over classic OAuth redirects or through modern browser APIs.
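
To make a proof request concrete: OID4VP requests commonly carry a DIF Presentation Exchange presentation_definition describing exactly which claims the verifier needs. The sketch below asks only for an "age over 18" boolean; the identifiers and field paths are illustrative placeholders.

// OID4VP-style proof request payload (DIF Presentation Exchange format).
// All ids and credential field paths are illustrative placeholders.
const presentationDefinition = {
  id: 'age-check-v1',
  input_descriptors: [
    {
      id: 'proof-of-age',
      purpose: 'Confirm you are over 18; nothing else is disclosed.',
      constraints: {
        limit_disclosure: 'required', // request selective disclosure from the wallet
        fields: [
          {
            path: ['$.credentialSubject.age_over_18'],
            filter: { type: 'boolean', const: true },
          },
        ],
      },
    },
  ],
};
// This definition is embedded in the OID4VP authorization request that the
// wallet receives (e.g. via a redirect or a browser API invocation).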

When building, you’ll encounter a few key credential formats designed for different ecosystems and use cases:

  • W3C VC with Data Integrity Suites (JSON-LD): Often paired with BBS+ cryptography to enable powerful selective disclosure.
  • VC-JOSE-COSE and SD-JWT VC (IETF): These formats are built for JWT and CBOR-based ecosystems, also featuring strong selective disclosure capabilities.

Fortunately, interoperability is improving rapidly. Profiles like OpenID4VC High Assurance are helping to narrow the technical options, making cross-vendor integrations much saner for developers.


DID Methods: Picking the Right Address Scheme

A DID is just an identifier; a "DID method" specifies how it's anchored to a root of trust. You’ll want to support a couple of common ones.

  • did:web: This method backs a DID with a domain you control. It’s incredibly easy to deploy and is a fantastic choice for enterprises, issuers, and organizations who want to leverage their existing web infrastructure as a trust anchor.
  • did:pkh: This method derives a DID directly from a blockchain address (e.g., an Ethereum address). This is highly useful when your user base already has crypto wallets and you want to link their identity to on-chain assets.

Rule of thumb: Support at least two methods—did:web for organizations and did:pkh for individual users. Use a standard DID resolver library to handle the lookup, and consult official registries to evaluate the security, persistence, and governance of any new method you consider adding.
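
A combined resolver for both methods is only a few lines. The sketch below uses the did-resolver library with the community web-did-resolver and pkh-did-resolver driver packages; verify the package names and versions against current npm releases before relying on them.

// Resolve did:web and did:pkh through a single Resolver instance.
import { Resolver } from 'did-resolver';
import { getResolver as webDidResolver } from 'web-did-resolver';
import { getResolver as pkhDidResolver } from 'pkh-did-resolver';

const resolver = new Resolver({
  ...webDidResolver(), // handles did:web:example.com
  ...pkhDidResolver(), // handles did:pkh:eip155:1:0xabc...
});

async function resolveDid(did: string) {
  const { didDocument, didResolutionMetadata } = await resolver.resolve(did);
  if (didResolutionMetadata.error) {
    throw new Error(`Resolution failed: ${didResolutionMetadata.error}`);
  }
  return didDocument; // verification methods and service endpoints live here
}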


Useful Building Blocks You Can Plug In

Beyond the core standards, several tools can enhance your identity stack.

  • ENS (Ethereum Name Service): Provides human-readable names (yourname.eth) that can map to blockchain addresses and DIDs. This is an invaluable tool for improving user experience, reducing errors, and providing a simple profile layer.
  • Attestations: These are flexible, verifiable "facts about anything" that can be recorded on-chain or off-chain. The Ethereum Attestation Service (EAS), for example, provides a robust substrate for building reputation and trust graphs without ever storing PII on a public ledger.
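
As a hedged sketch of that attestation pattern: recording a "KYC passed" signal with the EAS SDK looks roughly like the following. The contract address, schema UID, and signer wiring are placeholders, and the SDK surface should be checked against the current @ethereum-attestation-service/eas-sdk release.

// Mint a "KYC passed" attestation: a boolean and a timestamp, no PII on chain.
import { EAS, SchemaEncoder } from '@ethereum-attestation-service/eas-sdk';

async function attestKycPassed(recipient: string, signer: any) {
  const eas = new EAS('0xEASContractAddressForYourChain'); // placeholder address
  eas.connect(signer); // an ethers signer for the attester

  const encoder = new SchemaEncoder('bool kycPassed, uint64 verifiedAt');
  const encoded = encoder.encodeData([
    { name: 'kycPassed', value: true, type: 'bool' },
    { name: 'verifiedAt', value: BigInt(Math.floor(Date.now() / 1000)), type: 'uint64' },
  ]);

  const tx = await eas.attest({
    schema: '0xYourSchemaUid', // registered beforehand via the SchemaRegistry
    data: {
      recipient,
      expirationTime: 0n, // never expires
      revocable: true,    // revocable if the verification is later withdrawn
      data: encoded,
    },
  });
  return tx.wait(); // resolves to the new attestation UID
}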

Compliance Tailwinds You Should Track

Regulation is often seen as a hurdle, but in this space, it’s a massive accelerator. The EU Digital Identity Framework (eIDAS 2.0), adopted as Regulation (EU) 2024/1183 and in force since May 20, 2024, is the most significant development. It mandates that all EU Member States offer citizens a free EU Digital Identity Wallet (EUDI). With implementing regulations published on May 7, 2025, this is a powerful signal for the adoption of wallet-based credentials across both public and private services in Europe.

Even if you don't operate in the EU, expect the EUDI Wallet and its underlying protocols to shape user expectations and drive wallet adoption globally.


Design Patterns That Work in Production (2025)

  • Passwordless First, Wallets Optional: Default to passkeys for sign-in. It's secure, simple, and familiar. Only introduce SIWE when users need to perform a crypto-linked action like minting an NFT or receiving a payout.
  • Ask for Proofs, Not Documents: Replace clunky document uploads with a crisp VC proof request using OID4VP. Instead of asking for a driver's license, ask for a proof of "age over 18" or "country of residence is X." Accept credentials that support selective disclosure, like those using BBS+ or SD-JWT.
  • Keep PII Off Your Servers: When a user proves something, record an attestation or a short-lived verification result, not the raw credential itself. On-chain attestations are a powerful way to create an auditable record—"User Y passed KYC with Issuer Z on date D"—without storing any personal data.
  • Let Orgs Be Issuers with did:web: Businesses, universities, and other organizations already control their domains. Let them sign credentials as issuers using did:web, allowing them to manage their cryptographic keys under their existing web governance models.
  • Use ENS for Names, Not Identity: Treat ENS as a user-friendly handle and profile pointer. It's great for UX, but keep the authoritative identity claims within credentials and attestations.

A Starter Architecture

Here’s a blueprint for a modern, credential-based identity system:

  • Authentication
    • Default Login: Passkeys (FIDO/WebAuthn).
    • Crypto-Linked Sessions: Sign-In with Ethereum (SIWE) for wallet-based actions.
  • Credentials
    • Issuance: Integrate with OID4VCI endpoints from your chosen issuers (e.g., a KYC provider, a university).
    • Presentation: Trigger OID4VP proof requests from your web or native app. Be prepared to accept both W3C VCs (with BBS+) and SD-JWT VCs.
  • Resolution & Trust
    • DID Resolver: Use a library that supports at least did:web and did:pkh. Maintain an allowlist of trusted issuer DIDs to prevent spoofing.
  • Attestations & Reputation
    • Durable Records: When you need an auditable signal of a verification, mint an attestation containing a hash, the issuer's DID, and a timestamp, rather than storing the claim itself.
  • Storage & Privacy
    • Minimalism: Drastically minimize the PII you store server-side. Encrypt everything at rest and set strict time-to-live (TTL) policies. Prefer ephemeral proofs and lean heavily on zero-knowledge or selective disclosure.

UX Lessons Learned

  • Start Invisible: For most users, the best wallet is the one they don’t have to think about. Use passkeys to handle sign-in seamlessly and only surface wallet interactions contextually when they are absolutely necessary.
  • Progressive Disclosure: Don't ask for everything at once. Request the smallest possible proof that unblocks the user's immediate goal. With selective disclosure, you don't need their full document to verify one fact.
  • Key Recovery Matters: A credential bound to a single device key is a liability. Plan for re-issuance and cross-device portability from day one. This is a key reason modern profiles are adopting formats like SD-JWT VC and claims-based holder binding.
  • Human-Readable Handles Help: An ENS name is far less intimidating than a long hexadecimal address. It reduces user error and adds a layer of recognizable context, even if the true authority lives in the underlying credentials.

What to Ship Next Quarter: A Pragmatic Roadmap

  • Weeks 1–2:
    • Add passkeys for your primary sign-in flow.
    • Gate all crypto-native actions behind a SIWE check.
  • Weeks 3–6:
    • Pilot a simple age or region gate using an OID4VP request.
    • Accept VC 2.0 credentials with selective disclosure (BBS+ or SD-JWT VC).
    • Start creating attestations for "verification passed" events instead of logging PII.
  • Weeks 7–10:
    • Onboard a partner issuer (e.g., your KYC provider) using did:web and implement a DID allowlist.
    • Offer ENS name linking in user profiles to improve address UX.
  • Weeks 11–12:
    • Threat-model your presentation and revocation flows. Add telemetry for common failure modes (expired credential, untrusted issuer).
    • Publish a clear privacy posture explaining exactly what you ask for, why, how long you retain it, and how users can audit it.

What’s Changing Fast (Keep an Eye on This)

  • EU EUDI Wallet Rollouts: The implementation and conformance testing of these wallets will massively shape capabilities and verification UX across the globe.
  • OpenID4VC Profiles: Interoperability between issuers, wallets, and verifiers is constantly improving thanks to new profiles and test suites.
  • Selective Disclosure Cryptosuites: Libraries and developer guidance for both BBS+ and SD-JWT VC are rapidly maturing, making them easier to implement.

Principles to Build By

  • Prove, Don’t Store: Default to verifying claims over storing raw PII.
  • Interoperate by Default: Support multiple credential formats and DID methods from day one to future-proof your stack.
  • Minimize & Disclose: Ask for the smallest possible claim. Be transparent with users about what you are checking and why.
  • Make Recovery Boring: Plan for device loss and issuer rotation. Avoid brittle key-binding that strands users.

If you’re building fintech, social, or creator platforms, credential-first identity isn’t a future bet anymore—it’s the shortest path to lower risk, smoother onboarding, and global interoperability.

Seal on Sui: A Programmable Secrets Layer for On-Chain Access Control

· 4 min read
Dora Noda
Software Engineer

Public blockchains give every participant a synchronized, auditable ledger—but they also expose every piece of data by default. Seal, now live on Sui Mainnet as of September 3, 2025, addresses this by pairing on-chain policy logic with decentralized key management so that Web3 builders can decide exactly who gets to decrypt which payloads.

TL;DR

  • What it is: Seal is a secrets-management network that lets Sui smart contracts enforce decryption policies on-chain while clients encrypt data with identity-based encryption (IBE) and rely on threshold key servers for key derivation.
  • Why it matters: Instead of custom backends or opaque off-chain scripts, privacy and access control become first-class Move primitives. Builders can store ciphertexts anywhere—Walrus is the natural companion—but still gate who can read.
  • Who benefits: Teams shipping token-gated media, time-locked reveals, private messaging, or policy-aware AI agents can plug into Seal’s SDK and focus on product logic, not bespoke crypto plumbing.

Policy Logic Lives in Move

Seal packages come with seal_approve* Move functions that define who can request keys for a given identity string and under which conditions. Policies can mix NFT ownership, allowlists, time locks, or custom role systems. When a user or agent asks to decrypt, key servers evaluate these policies via Sui full-node state and only approve if the chain agrees.

Because the access rules are part of your on-chain package, they are transparent, auditable, and versionable alongside the rest of your smart contract code. Governance updates can be rolled out like any other Move upgrade, with community review and on-chain history.

Threshold Cryptography Handles the Keys

Seal encrypts data to application-defined identities. A committee of independent key servers—chosen by the developer—shares the IBE master secret. When a policy check passes, each server derives a key share for the requested identity. Once a quorum of t servers responds, the client combines the shares into a usable decryption key.

You get to set the trade-off between liveness and confidentiality by picking committee members (Ruby Nodes, NodeInfra, Overclock, Studio Mirai, H2O Nodes, Triton One, or Mysten’s Enoki service) and selecting the threshold. Need stronger availability? Choose a larger committee with a lower threshold. Want higher privacy assurances? Tighten the quorum and lean on permissioned providers.

Developer Experience: SDKs and Session Keys

Seal ships a TypeScript SDK (npm i @mysten/seal) that handles encrypt/decrypt flows, identity formatting, and batching. It also issues session keys so wallets are not constantly spammed with prompts when an app needs repeated access. For advanced workflows, Move contracts can request on-chain decryption via specialized modes, allowing logic like escrow reveals or MEV-resistant auctions to run directly in smart contract code.

Because Seal is storage-agnostic, teams can pair it with Walrus for verifiable blob storage, with IPFS, or even with centralized stores when that fits operational realities. The encryption boundary—and its policy enforcement—travels with the data regardless of where the ciphertext lives.

Designing with Seal: Best Practices

  • Model availability risk: Thresholds such as 2-of-3 or 3-of-5 map directly to uptime guarantees. Production deployments should mix providers, monitor telemetry, and negotiate SLAs before entrusting critical workflows.
  • Be mindful of state variance: Policy evaluation depends on full nodes performing dry_run calls. Avoid rules that hinge on rapidly changing counters or intra-checkpoint ordering to prevent inconsistent approvals across servers.
  • Plan for key hygiene: Derived keys live on the client. Instrument logging, rotate session keys, and consider envelope encryption—use Seal to protect a symmetric key that encrypts the larger payload—to limit blast radius if a device is compromised (see the envelope-encryption sketch after this list).
  • Architect for rotation: A ciphertext’s committee is fixed at encryption time. Build upgrade paths that re-encrypt data through new committees when you need to swap providers or adjust trust assumptions.
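
To make the envelope-encryption suggestion concrete, here is a minimal sketch: the payload is encrypted locally with WebCrypto AES-GCM, and only the 32-byte content key passes through Seal. The sealEncrypt parameter stands in for your actual Seal SDK call.

// Envelope encryption sketch. `sealEncrypt` is a placeholder for the Seal
// SDK call that protects the content key under your on-chain policy.
async function envelopeEncrypt(
  payload: Uint8Array,
  sealEncrypt: (key: Uint8Array) => Promise<Uint8Array>
) {
  const contentKey = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 },
    true, // extractable so it can be wrapped by Seal
    ['encrypt', 'decrypt']
  );
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    contentKey,
    payload
  );
  const rawKey = new Uint8Array(await crypto.subtle.exportKey('raw', contentKey));
  const sealedKey = await sealEncrypt(rawKey); // only the key touches Seal

  // Store ciphertext + iv anywhere (e.g. Walrus) and keep sealedKey alongside.
  return { ciphertext: new Uint8Array(ciphertext), iv, sealedKey };
}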

What Comes Next

Seal’s roadmap points toward validator-operated MPC servers, DRM-style client tooling, and post-quantum KEM options. For builders exploring AI agents, premium content, or regulated data flows, today’s release already provides a clear blueprint: encode your policy in Move, compose a diverse key committee, and deliver encrypted experiences that respect user privacy without leaving Sui’s trust boundary.

If you are considering Seal for your next launch, start by prototyping a simple NFT-gated policy with a 2-of-3 open committee, then iterate toward the provider mix and operational controls that match your app’s risk profile.

Chain Abstraction Is How Enterprises Will Finally Use Web3 (Without Thinking About Chains)

· 8 min read
Dora Noda
Software Engineer

TL;DR

Cross-chain abstraction turns a maze of chains, bridges, and wallets into a single, coherent platform experience for both developers and end users. The ecosystem has quietly matured: intent standards, account abstraction, native stablecoin mobility, and network-level initiatives like the OP Superchain and Polygon's AggLayer make a "many chains, one experience" future realistic in 2025. For enterprises, the win is pragmatic: simpler integrations, enforceable risk controls, deterministic operations, and compliance-ready auditability—without betting the farm on any single chain.


The Problem Enterprises Actually Have (and Why Bridges Alone Didn’t Fix It)

Most enterprise teams don’t want to “pick a chain.” They want outcomes: settle a payment, issue an asset, clear a trade, or update a record—reliably, auditably, and at a predictable cost. The trouble is that production Web3 today is unavoidably multichain. Hundreds of rollups, appchains, and L2s have launched over the past 18 months alone, each with its own fees, finality times, tooling, and trust assumptions.

Traditional cross-chain approaches solved transport—moving tokens or messages from A to B—but not the experience. Teams are still forced to manage wallets per network, provision gas per chain, pick a bridge per route, and shoulder security differences they can’t easily quantify. That friction is the real adoption tax.

Cross-chain abstraction removes that tax by hiding chain selection and transport behind declarative APIs, intent-driven user experiences, and unified identity and gas. In other words, users and applications express what they want; the platform determines how and where it happens, safely. Chain abstraction makes blockchain technology invisible to end users while preserving its core benefits.

Why 2025 is Different: The Building Blocks Finally Clicked

The vision of a seamless multi-chain world isn't new, but the foundational technology is finally ready for production. Several key components have matured and converged, making robust chain abstraction possible.

  • Network-Level Unification: Projects are now building frameworks to make separate chains feel like a single, unified network. The OP Superchain aims to standardize OP-Stack L2s with shared tooling and communication layers. Polygon's AggLayer aggregates many ZK-secured chains with "pessimistic proofs" for chain-level accounting, preventing one chain’s issues from contaminating others. Meanwhile, IBC v2 is expanding standardized interoperability beyond the Cosmos ecosystem, pushing toward "IBC everywhere."

  • Mature Interop Rails: The middleware for cross-chain communication is now battle-tested and widely available. Chainlink CCIP offers enterprise-grade token and data transfer across a growing number of chains. LayerZero v2 provides omnichain messaging and standardized OFT tokens with a unified supply. Axelar delivers General Message Passing (GMP) for complex contract calls across ecosystems, connecting EVM and Cosmos chains. Platforms like Hyperlane enable permissionless deployments, allowing new chains to join the network without gatekeepers, while Wormhole offers a generalized messaging layer used across more than 40 chains.

  • Intent & Account Abstraction: The user experience has been transformed by two critical standards. ERC-7683 standardizes cross-chain intents, allowing apps to declare goals and let a shared solver network execute them efficiently across chains. Concurrently, EIP-4337 smart accounts, combined with Paymasters, enable gas abstraction. This allows an application to sponsor transaction fees or let users pay in stablecoins, which is essential for any flow that might touch multiple networks.

  • Native Stablecoin Mobility: Circle’s Cross-Chain Transfer Protocol (CCTP) moves native USDC across chains via a secure burn-and-mint process, reducing wrapped-asset risk and unifying liquidity. The latest version, CCTP v2, further cuts latency and simplifies developer workflows, making stablecoin settlement a seamless part of the abstracted experience.
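
As one concrete illustration of the burn-and-mint flow just described, a CCTP transfer starts with a single depositForBurn call on the source chain. The sketch below uses ethers with placeholder addresses and RPC URL; the deployed TokenMessenger and USDC addresses, plus destination domain IDs, come from Circle's documentation.

// Burn USDC on the source chain for a native mint on the destination (CCTP).
import { Contract, JsonRpcProvider, Wallet, parseUnits, zeroPadValue } from 'ethers';

async function burnForCctpTransfer(recipient: string) {
  const provider = new JsonRpcProvider('https://rpc.source-chain.example'); // placeholder
  const signer = new Wallet(process.env.PRIVATE_KEY!, provider);

  const tokenMessenger = new Contract(
    '0xTokenMessengerOnSourceChain', // placeholder
    ['function depositForBurn(uint256 amount, uint32 destinationDomain, bytes32 mintRecipient, address burnToken) returns (uint64)'],
    signer
  );

  // Assumes USDC.approve(tokenMessenger, amount) has already been executed
  const tx = await tokenMessenger.depositForBurn(
    parseUnits('100', 6),        // 100 USDC (6 decimals)
    3,                           // destination domain ID (per Circle's docs)
    zeroPadValue(recipient, 32), // recipient left-padded to bytes32
    '0xUSDCOnSourceChain'        // placeholder USDC token address
  );
  await tx.wait();
  // Off-chain: fetch Circle's attestation, then call receiveMessage on the
  // destination chain to mint native USDC to the recipient.
}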

What “Cross-Chain Abstraction” Looks Like in an Enterprise Stack

Think of it as a layered capability you can add to existing systems. The goal is to have a single endpoint to express an intent and a single policy plane to govern how it executes across any number of chains.

  1. Unified Identity & Policy: At the top layer are smart accounts (EIP-4337) with role-based access controls, social recovery, and modern custody options like passkeys or MPC. This is governed by a central policy engine that defines who can do what, where, using allow- and deny-lists for specific chains, assets, and bridges.

  2. Gas & Fee Abstraction: Paymasters remove the "I need native gas on chain X" headache. Users or services can pay fees in stablecoins, or the application can sponsor them entirely, subject to predefined policies and budgets.

  3. Intent-Driven Execution: Users express outcomes, not transactions. For example, "swap USDC for wETH and deliver it to our supplier’s wallet on chain Y before 5 p.m." The ERC-7683 standard defines the format for these orders, allowing shared solver networks to compete to execute them safely and cheaply (a minimal sketch of such an order follows this list).

  4. Programmable Settlement & Messaging: Under the hood, the system uses a consistent API to select the right rail for each route. It might use CCIP for a token transfer where enterprise support is key, Axelar GMP for a cross-ecosystem contract call, or IBC where native light-client security fits the risk model.

  5. Observability & Compliance by Default: The entire workflow is traceable, from the initial intent to the final settlement. This produces clear audit trails and allows data to be exported to existing SIEMs. Risk frameworks can be programmed to enforce allowlists or trigger emergency brakes, for instance, by pausing routes if a bridge’s security posture degrades.
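
Referenced from item 3 above, here is a sketch of what an ERC-7683-style order looks like from the integrating application's side. The field names mirror the standard's OnchainCrossChainOrder struct; the orderData encoding is settler-specific and shown only as an opaque placeholder.

// ERC-7683-style order type as seen by an integrator.
interface OnchainCrossChainOrder {
  fillDeadline: number;         // uint32 timestamp by which the order must be filled
  orderDataType: `0x${string}`; // typehash identifying the orderData layout
  orderData: `0x${string}`;     // ABI-encoded, settler-specific payload
}

// The app hands the order to an origin settler contract; a shared solver
// network then competes to fill it on the destination chain. `openOrder`
// is a placeholder wrapping the settler's open(...) call.
async function submitIntent(
  openOrder: (order: OnchainCrossChainOrder) => Promise<string>,
  order: OnchainCrossChainOrder
) {
  // Policy checks (allowlisted settler, budgets, deadline sanity) go here
  if (order.fillDeadline * 1000 < Date.now()) {
    throw new Error('fillDeadline already passed');
  }
  return openOrder(order); // returns an order id / tx hash to track settlement
}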

A Reference Architecture

From the top down, a chain-abstracted system is composed of clear layers:

  • Experience Layer: Application surfaces that collect user intents and completely hide chain details, paired with SSO-style smart account wallet flows.
  • Control Plane: A policy engine for managing permissions, quotas, and budgets. This plane integrates with KMS/HSM systems and maintains allowlists for chains, assets, and bridges. It also ingests risk feeds to circuit-break vulnerable routes automatically (an illustrative config follows this list).
  • Execution Layer: An intent router that selects the best interop rail (CCIP, LayerZero, Axelar, etc.) based on policy, price, and latency requirements. A Paymaster handles fees, drawing from a treasury of pooled gas and stablecoin budgets.
  • Settlement & State: Canonical on-chain contracts for core functions like custody and issuance. A unified indexer tracks cross-chain events and proofs, exporting data to a warehouse or SIEM for analysis and compliance.
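
To ground the control plane, here is an illustrative policy config of the kind described above; every name, chain, rail, and threshold is an invented example, not a recommendation.

// Illustrative control-plane routing policy: allowlists, budgets, breakers.
const routingPolicy = {
  chains: { allow: ['ethereum', 'base', 'arbitrum'], deny: ['unaudited-l2'] },
  assets: { allow: ['USDC', 'WETH'] },
  rails: {
    tokenTransfer: ['cctp', 'ccip'],              // preferred order for token moves
    contractCall: ['axelar-gmp', 'layerzero-v2'], // preferred order for GMP calls
  },
  budgets: {
    maxGasSponsorshipPerDayUsd: 5_000, // paymaster spend ceiling
    maxSingleTransferUsd: 250_000,
  },
  circuitBreakers: [
    // Pause a route automatically when risk feeds degrade
    { trigger: 'bridgeRiskScoreBelow', threshold: 0.8, action: 'pause' },
    { trigger: 'finalityDelayOverSeconds', threshold: 1800, action: 'pause' },
  ],
} as const;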

Build vs. Buy: How to Evaluate Providers of Chain Abstraction

When selecting a partner to provide chain abstraction capabilities, enterprises should ask several key questions:

  • Security & Trust Model: What are the underlying verification assumptions? Does the system rely on oracles, guardian sets, light clients, or validator networks? What can be slashed or vetoed?
  • Coverage & Neutrality: Which chains and assets are supported today? How quickly can new ones be added? Is the process permissionless or gated by the provider?
  • Standards Alignment: Does the platform support key standards like ERC-7683, EIP-4337, OFT, IBC, and CCIP?
  • Operations: What are the provider’s SLAs? How transparent are they about incidents? Do they offer replayable proofs, deterministic retries, and structured audit logs?
  • Governance & Portability: Can you switch interop rails per route without rewriting your application? Vendor-neutral abstractions are critical for long-term flexibility.
  • Compliance: What controls are available for data retention and residency? What is their SOC2/ISO posture? Can you bring your own KMS/HSM?

A Pragmatic 90-Day Enterprise Rollout

  • Days 0–15: Baseline & Policy: Inventory all chains, assets, bridges, and wallets currently in use. Define an initial allowlist and establish circuit-break rules based on a clear risk framework.
  • Days 16–45: Prototype: Convert a single user journey, such as a cross-chain payout, to use an intent-based flow with account abstraction and a paymaster. Measure the impact on user drop-off, latency, and support load.
  • Days 46–75: Expand Rails: Add a second interoperability rail to the system and route transactions dynamically based on policy. Integrate CCTP for native USDC mobility if stablecoins are part of the workflow.
  • Days 76–90: Harden: Wire the platform’s observability data to your SIEM, run chaos tests on route failures, and document all operating procedures, including emergency pause protocols.

Common Pitfalls (and How to Avoid Them)

  • Routing by "Gas Price Only": Latency, finality, and security assumptions matter as much as fees. Price alone is not a complete risk model.
  • Ignoring Gas: If your experience touches multiple chains, gas abstraction isn't optional—it's table stakes for a usable product.
  • Treating Bridges as Interchangeable: They aren’t. Their security assumptions differ significantly. Codify allowlists and implement circuit breakers to manage this risk.
  • Wrapped-Asset Sprawl: Whenever possible, prefer native asset mobility (like USDC via CCTP) to minimize liquidity fragmentation and reduce counterparty risk.

The Enterprise Upside

When chain abstraction is done well, blockchain stops being a collection of idiosyncratic networks and becomes an execution fabric your teams can program against. It offers policies, SLAs, and audit trails that match the standards you already operate under. Thanks to mature intent standards, account abstraction, robust interop rails, and native stablecoin transport, you can finally deliver Web3 outcomes without forcing users—or your own developers—to care about which chain did the work.