304 posts tagged with "AI"

Artificial intelligence and machine learning applications

Tokenized Identity and AI Companions Converge as Web3's Next Frontier

· 28 min read
Dora Noda
Software Engineer

"The real bottleneck isn't compute speed—it's identity." This insight from Matthew Graham, Managing Partner at Ryze Labs, captures the fundamental shift happening at the intersection of AI companions and blockchain identity systems. As the AI companion market explodes toward $140.75 billion by 2030 and decentralized identity scales from $4.89 billion today to $41.73 billion by decade's end, these technologies are converging to enable a new paradigm: truly owned, portable, privacy-preserving AI relationships. Graham's firm has deployed concrete capital—incubating Amiko's personal AI platform, backing the $420,000 Eliza humanoid robot, investing in EdgeX Labs' edge-computing network of 10,000+ live nodes, and launching a $5 million AI Combinator fund—positioning Ryze at the vanguard of what Graham calls "the most important wave of innovation since DeFi summer."

This convergence matters because AI companions currently exist in walled gardens, unable to move between platforms, with users possessing no true ownership of their AI relationships or data. Simultaneously, blockchain-based identity systems have matured from theoretical frameworks to production infrastructure managing $2+ billion in AI agent market capitalization. When combined, tokenized identity provides the ownership layer AI companions lack, while AI agents solve blockchain's user experience problem. The result: digital companions you genuinely own, can take anywhere, and interact with privately through cryptographic proofs rather than corporate surveillance.

Matthew Graham's vision: identity infrastructure as the foundational layer

Graham's journey tracks the industry's evolution: Bitcoin enthusiast in 2013, crypto VC managing 51 portfolio companies, and now AI companion advocate after a "stop-everything moment" with Terminal of Truths in 2024. His recent pivot, though, represents something more fundamental: recognition that identity infrastructure, not computational power or model sophistication, determines whether autonomous AI agents can operate at scale.

In January 2025, Graham commented "waifu infrastructure layer" on Amiko's declaration that "the real challenge is not speed. It is identity." This marked the culmination of his thinking—a shift from focusing on AI capabilities to recognizing that without standardized, decentralized identity systems, AI agents cannot verify themselves, transact securely, or persist across platforms. Through Ryze Labs' portfolio strategy, Graham is systematically building this infrastructure stack: hardware-level privacy through EdgeX Labs' distributed computing, identity-aware AI platforms through Amiko, physical manifestation through Eliza Wakes Up, and ecosystem development through AI Combinator's 10-12 investments.

His investment thesis centers on three convergent beliefs. First, AI agents require blockchain rails for autonomous operation—"they are going to have to be making transactions, microtransactions, whatever it is… this is very naturally a crypto rail situation." Second, the future of AI lives locally on user-owned devices rather than in corporate clouds, necessitating decentralized infrastructure that's "not only decentralized, but also physically distributed and able to run locally." Third, companionship represents "one of the most untapped psychological needs in the world today," positioning AI companions as social infrastructure rather than mere entertainment. Graham has named his planned digital twin "Marty" and envisions a world where everyone has a deeply personal AI that knows them intimately: "Marty, you know everything about me... Marty, what does mom like? Go order some Christmas gifts for mom."

Graham's geographic strategy adds another dimension—focusing on emerging markets like Lagos and Bangalore where "the next wave of users and builders will come from." This positions Ryze to capture AI companion adoption in regions potentially leapfrogging developed markets, similar to mobile payments in Africa. His emphasis on "lore" and cultural phenomena suggests understanding that AI companion adoption follows social dynamics rather than pure technological merit: drawing "parallels to cultural phenomena like internet memes and lore... internet lore and culture can synergize movements across time and space."

At Token 2049 appearances spanning Singapore 2023 and beyond, Graham articulated this vision to global audiences. His Bloomberg interview positioned AI as "crypto's third act" after stablecoins, while his participation in The Scoop podcast explored "how crypto, AI and robotics are converging into the future economy." The common thread: AI agents need identity systems for trusted interactions, ownership mechanisms for autonomous operation, and transaction rails for economic activity—precisely what blockchain technology provides.

Decentralized identity reaches production scale with major protocols operational

Tokenized identity has evolved from whitepaper concept to production infrastructure managing billions in value. The technology stack comprises three foundational layers: Decentralized Identifiers (DIDs) as W3C-standardized, globally unique identifiers requiring no centralized authority; Verifiable Credentials (VCs) as cryptographically-secured, instantly verifiable credentials forming a trust triangle between issuer, holder, and verifier; and Soulbound Tokens (SBTs) as non-transferable NFTs representing reputation, achievements, and affiliations—proposed by Vitalik Buterin in May 2022 and now deployed in systems like Binance's Account Bound token and Optimism's Citizens' House governance.
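
To make the issuer-holder-verifier triangle concrete, here is a minimal TypeScript sketch of a W3C-style verifiable credential. The field names follow the public VC data model, but the DIDs, signature suite, and proof value are illustrative placeholders rather than a specific production schema.

```typescript
// Minimal W3C-style data structures for the issuer/holder/verifier trust triangle.
// Shapes follow the W3C VC data model; the DIDs and proof values are placeholders.

interface VerifiableCredential {
  "@context": string[];
  type: string[];
  issuer: string;                 // DID of the issuing party
  issuanceDate: string;           // ISO 8601 timestamp
  credentialSubject: {
    id: string;                   // DID of the holder
    [claim: string]: unknown;     // arbitrary attested claims
  };
  proof: {
    type: string;                 // signature suite identifier
    verificationMethod: string;   // key reference in the issuer's DID document
    proofValue: string;           // the signature itself
  };
}

// An issuer attests that a holder is over 18 without embedding a birthdate.
const credential: VerifiableCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "AgeCredential"],
  issuer: "did:example:government-registry",        // hypothetical issuer DID
  issuanceDate: new Date().toISOString(),
  credentialSubject: { id: "did:example:holder123", over18: true },
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:example:government-registry#key-1",
    proofValue: "<issuer-signature>",               // placeholder
  },
};

// A verifier checks the issuer's signature offline, never contacting the issuer.
console.log(credential.credentialSubject.over18);   // true
```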

Major protocols have achieved significant scale by October 2025. Ethereum Name Service (ENS) leads with 2 million+ .eth domains registered, a $667-885 million market cap, and an imminent migration to the "Namechain" L2 expected to cut gas fees 80-90%. Lens Protocol has built 650,000+ user profiles with 28 million social connections on its decentralized social graph, recently securing $46 million in funding and transitioning to Lens v3 on the zkSync-based Lens Network. Worldcoin (rebranded "World") has verified 12-16 million users across 25+ countries through iris-scanning Orbs, though it faces regulatory challenges, including bans in Spain and Portugal and a cease-and-desist order in the Philippines. Polygon ID deployed the first ZK-powered identity solution mid-2022, with October 2025's Release 6 introducing dynamic credentials and private proof of uniqueness. Civic provides compliance-focused blockchain identity verification, generating $4.8 million annual revenue through its Civic Pass system enabling KYC/liveness checks for dApps.

The technical architecture enables privacy-preserving verification through multiple cryptographic approaches. Zero-knowledge proofs allow proving attributes (age, nationality, account balance thresholds) without revealing underlying data. Selective disclosure lets users share only necessary information for each interaction rather than full credentials. Off-chain storage keeps sensitive personal data off public blockchains, recording only hashes and attestations on-chain. This design addresses the apparent contradiction between blockchain transparency and identity privacy—a critical challenge Graham's portfolio companies like Amiko explicitly tackle through local processing rather than cloud dependency.
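
A minimal sketch of that pattern, assuming a Node.js environment: the full credential lives off-chain, only a digest is anchored on-chain, and the holder discloses a single salted field that the verifier can recheck against the anchor. Real systems use Merkle trees or ZK circuits rather than this toy per-field commitment scheme.

```typescript
import { createHash, randomBytes } from "node:crypto";

// Toy selective-disclosure scheme: each field is committed as
// H(name | value | salt); only a digest over all commitments goes on-chain.

const sha256 = (data: string) => createHash("sha256").update(data).digest("hex");

const fields: Record<string, string> = {
  name: "Alice Example",
  nationality: "PT",
  birthYear: "1990",
};

// Commit every field with a fresh salt; salts stay off-chain with the holder.
const salts = Object.fromEntries(
  Object.keys(fields).map((k) => [k, randomBytes(16).toString("hex")])
);
const commitments = Object.fromEntries(
  Object.entries(fields).map(([k, v]) => [k, sha256(`${k}|${v}|${salts[k]}`)])
);

// Only this digest is anchored on-chain.
const onChainAnchor = sha256(Object.values(commitments).sort().join(""));

// Selective disclosure: reveal one field, its salt, and the sibling commitments.
const disclosure = {
  field: "nationality",
  value: fields.nationality,
  salt: salts.nationality,
  otherCommitments: Object.entries(commitments)
    .filter(([k]) => k !== "nationality")
    .map(([, c]) => c),
};

// Verifier recomputes the commitment and the anchor without ever seeing
// the holder's name or birth year.
const recomputed = sha256(`${disclosure.field}|${disclosure.value}|${disclosure.salt}`);
const recheckedAnchor = sha256(
  [...disclosure.otherCommitments, recomputed].sort().join("")
);
console.log(recheckedAnchor === onChainAnchor); // true
```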

Current implementations span diverse sectors demonstrating real-world utility. Financial services use reusable KYC credentials cutting onboarding costs 60%, with Uniswap v4 and Aave integrating Polygon ID for verified liquidity providers and undercollateralized lending based on SBT credit history. Healthcare applications enable portable medical records and HIPAA-compliant prescription verification. Education credentials as verifiable diplomas allow instant employer verification. Government services include mobile driver's licenses (mDLs) accepted for TSA domestic air travel and EU's mandatory EUDI Wallet rollout by 2026 for all member states. DAO governance uses SBTs for one-person-one-vote mechanisms and Sybil resistance—Optimism's Citizens' House pioneered this approach.

The regulatory landscape is crystallizing faster than expected. Europe's eIDAS 2.0 (Regulation EU 2024/1183), passed April 11, 2024, mandates that all EU member states offer EUDI Wallets by 2026, with cross-sector acceptance required by 2027, creating the first comprehensive legal framework recognizing decentralized identity. The ISO 18013 standard aligns US mobile driver's licenses with EU systems, enabling cross-continental interoperability. GDPR concerns about blockchain immutability are addressed through off-chain storage and user-controlled data minimization. The United States has seen Biden's Cybersecurity Executive Order funding mDL adoption, TSA approval for domestic air travel, and state-level implementations spreading from Louisiana's pioneering deployment.

Economic models around tokenized identity reveal multiple value capture mechanisms. ENS governance tokens grant voting rights on protocol changes. Civic's CVC utility tokens purchase identity verification services. Worldcoin's WLD aims for universal basic income distribution to verified humans. The broader Web3 identity market sits at $21 billion (2023) projecting to $77 billion by 2032—14-16% CAGR—while overall Web3 markets grew from $2.18 billion (2023) to $49.18 billion (2025), representing explosive 44.9% compound annual growth. Investment highlights include Lens Protocol's $46 million raise, Worldcoin's $250 million from Andreessen Horowitz, and $814 million flowing to 108 Web3 companies in Q1 2023 alone.

AI companions reach 220 million downloads as market dynamics shift toward monetization

The AI companion sector has achieved mainstream consumer scale, with 337 active revenue-generating apps earning $221 million in cumulative consumer spending by July 2025. The market reached $28.19 billion in 2024 and projects to $140.75 billion by 2030—a 30.8% CAGR driven by emotional support demand, mental health applications, and entertainment use cases. This growth trajectory positions AI companions as one of the fastest-expanding AI segments, with downloads surging 88% year-over-year to 60 million in H1 2025 alone.
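
The growth rates quoted throughout follow the standard compound-annual-growth formula; a quick TypeScript sketch checking the figures above:

```typescript
// CAGR = (end / start)^(1 / years) - 1
const cagr = (start: number, end: number, years: number) =>
  Math.pow(end / start, 1 / years) - 1;

// $28.19B (2024) -> $140.75B (2030): six years of growth
console.log((cagr(28.19, 140.75, 6) * 100).toFixed(1)); // ~30.7, matching the ~30.8% cited
```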

Platform leaders have established dominant positions through differentiated approaches. Character.AI commands 20-28 million monthly active users with 18 million+ user-created chatbots, achieving 2-hour average daily usage and 10 billion messages monthly—48% higher retention than traditional social media. The platform's strength lies in role-playing and character interaction, attracting a young demographic (53% aged 18-24) with nearly equal gender split. Following Google's $2.7 billion investment, Character.AI reached $10 billion valuation despite generating only $32.2 million revenue in 2024, reflecting investor confidence in long-term monetization potential. Replika focuses on personalized emotional support with 10+ million users, offering 3D avatar customization, voice/AR interactions, and relationship modes (friend/romantic/mentor) priced at $19.99 monthly or $69.99 annually. Pi from Inflection AI emphasizes empathetic conversation across multiple platforms (iOS, web, messaging apps) without visual character representation, remaining free while building several million users. Friend represents the hardware frontier—a $99-129 wearable AI necklace providing always-listening companionship powered by Claude 3.5, generating controversy over constant audio monitoring but pioneering physical AI companion devices.

Technical capabilities have advanced significantly yet remain bounded by fundamental limitations. Current systems excel at natural language processing with context retention across conversations, personalization through learning user preferences over time, multimodal integration combining text/voice/image/video, and platform connectivity with IoT devices and productivity tools. Advanced emotional intelligence enables sentiment analysis and empathetic responses, while memory systems create continuity across interactions. However, critical limitations persist: no true consciousness or genuine emotional understanding (simulated rather than felt empathy), tendency toward hallucinations and fabricated information, dependence on internet connectivity for advanced features, difficulty with complex reasoning and nuanced social situations, and biases inherited from training data.

Use cases span personal, professional, healthcare, and educational applications with distinct value propositions. Personal/consumer applications dominate with 43.4% market share, addressing the loneliness epidemic (61% of young US adults report serious loneliness) through 24/7 emotional support, role-playing entertainment (51% of interactions in fantasy/sci-fi), and virtual romantic relationships (17% of apps explicitly market as "AI girlfriend"). Over 65% of Gen Z users report emotional connection with AI characters. Professional applications include workplace productivity (Zoom AI Companion 2.0), customer service automation (80% of interactions AI-handleable), and sales/marketing personalization like Amazon's Rufus shopping companion. Healthcare implementations provide medication reminders, symptom checking, elderly companionship reducing depression in isolated seniors, and accessible mental health support between therapy sessions. Education applications offer personalized tutoring, language learning practice, and Google's "Learn About" AI learning companion.

Business model evolution reflects maturation from experimentation toward sustainable monetization. Freemium/subscription models currently dominate, with Character.AI Plus at $9.99 monthly and Replika Pro at $19.99 monthly offering priority access, faster responses, voice calls, and advanced customization. Revenue per download increased 127% from $0.52 (2024) to $1.18 (2025), signaling improved conversion. Consumption-based pricing is emerging as the sustainable model—pay per interaction, token, or message rather than flat subscriptions—better aligning costs with usage. Advertising integration represents the projected future as AI inference costs decline; ARK Invest predicts revenue per hour will increase from current $0.03 to $0.16 (similar to social media), potentially generating $70-150 billion by 2030 in their base and bull cases. Virtual goods and microtransactions for avatar customization, premium character access, and special experiences are expected to reach monetization parity with gaming services.

Ethical concerns have triggered regulatory action following documented harms. Character.AI faces a 2024 lawsuit filed after a teen suicide linked to chatbot interactions, while Disney issued cease-and-desist orders over copyrighted character usage. The FTC launched an inquiry in September 2025 ordering seven companies to report child safety measures. California Senator Steve Padilla introduced legislation requiring safeguards, while Assembly member Rebecca Bauer-Kahan proposed banning AI companions for under-16s. Primary ethical issues include emotional dependency risks particularly concerning for vulnerable populations (teens, elderly, isolated individuals), authenticity and deception as AI simulates but doesn't genuinely feel emotions, privacy and surveillance through extensive personal data collection with unclear retention policies, safety and harmful advice given AI's tendency to hallucinate, and "social deskilling" where over-reliance erodes human social capabilities.

Expert predictions converge on continued rapid advancement with divergent views on societal impact. Sam Altman projects AGI within 5 years, with GPT-5 (launched August 2025) achieving "PhD-level" reasoning. Elon Musk expects AI smarter than the smartest human by 2026, with Optimus robots in commercial production at $20,000-30,000 price points. Dario Amodei suggests singularity by 2026. The near-term trajectory (2025-2027) emphasizes agentic AI systems shifting from chatbots to autonomous task-completing agents, enhanced reasoning and memory with longer context windows, multimodal evolution with mainstream video generation, and hardware integration through wearables and physical robotics. The consensus: AI companions are here to stay with massive growth ahead, though social impact remains hotly debated between proponents emphasizing accessible mental health support and critics warning that the technology is not ready for emotional-support roles and lacks adequate safeguards.

Technical convergence enables owned, portable, private AI companions through blockchain infrastructure

The intersection of tokenized identity and AI companions solves fundamental problems plaguing both technologies—AI companions lack true ownership and portability while blockchain suffers from poor user experience and limited utility. When combined through cryptographic identity systems, users can genuinely own their AI relationships as digital assets, port companion memories and personalities across platforms, and interact privately through zero-knowledge proofs rather than corporate surveillance.

The technical architecture rests on several breakthrough innovations deployed in 2024-2025. ERC-7857, proposed by 0G Labs, provides the first NFT standard specifically for AI agents with private metadata. This enables neural networks, memory, and character traits to be stored encrypted on-chain, with secure transfer protocols using oracles and cryptographic systems that re-encrypt during ownership changes. The transfer process generates metadata hashes as authenticity proofs, decrypts in Trusted Execution Environment (TEE), re-encrypts with new owner's key, and requires signature verification before smart contract execution. Traditional NFT standards (ERC-721/1155) failed for AI because they have static, public metadata with no secure transfer mechanisms or support for dynamic learning—ERC-7857 solves these limitations.
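
The sketch below mocks that transfer flow in TypeScript under heavy simplification: ERC-7857 itself is a smart-contract standard, real implementations run the re-encryption inside a TEE with oracle attestation, and the keys would be asymmetric. Every name here is hypothetical; only the sequence (hash, decrypt, re-encrypt, verify) mirrors the description above.

```typescript
import { createCipheriv, createDecipheriv, createHash, randomBytes } from "node:crypto";

// Hypothetical mock of the ERC-7857 transfer flow. In production the
// decrypt/re-encrypt step runs inside a TEE and an oracle attests on-chain.

type EncryptedMetadata = { iv: Buffer; ciphertext: Buffer; tag: Buffer };

function encrypt(key: Buffer, plaintext: Buffer): EncryptedMetadata {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decrypt(key: Buffer, m: EncryptedMetadata): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, m.iv);
  decipher.setAuthTag(m.tag);
  return Buffer.concat([decipher.update(m.ciphertext), decipher.final()]);
}

// "TEE" step: decrypt with the seller's key, hash for the authenticity proof,
// re-encrypt with the buyer's key. The hash is what the contract would check.
function teeReencrypt(
  sellerKey: Buffer,
  buyerKey: Buffer,
  m: EncryptedMetadata
): { reencrypted: EncryptedMetadata; metadataHash: string } {
  const plaintext = decrypt(sellerKey, m);                 // inside enclave only
  const metadataHash = createHash("sha256").update(plaintext).digest("hex");
  return { reencrypted: encrypt(buyerKey, plaintext), metadataHash };
}

// Agent metadata (weights, memories, traits) never appears in plaintext on-chain.
const sellerKey = randomBytes(32);
const buyerKey = randomBytes(32);
const agentState = Buffer.from(JSON.stringify({ persona: "companion-v1", memory: [] }));

const listed = encrypt(sellerKey, agentState);
const { reencrypted, metadataHash } = teeReencrypt(sellerKey, buyerKey, listed);

// Buyer can now decrypt; the contract would verify metadataHash and signatures.
console.log(decrypt(buyerKey, reencrypted).equals(agentState), metadataHash.length); // true 64
```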

Phala Network has deployed the largest TEE infrastructure globally with 30,000+ devices providing hardware-level security for AI computations. TEEs enable secure isolation where computations are protected from external threats with remote attestation providing cryptographic proof of non-interference. This represents the only way to achieve true exclusive ownership for digital assets executing sensitive operations. Phala processed 849,000 off-chain queries in 2023 (versus Ethereum's 1.1 million on-chain), demonstrating production scale. Their AI Agent Contracts allow TypeScript/JavaScript execution in TEEs for applications like Agent Wars—a live game with tokenized agents using staking-based DAO governance where "keys" function as shares granting usage rights and voting power.

Privacy-preserving architecture layers multiple cryptographic approaches for comprehensive protection. Fully Homomorphic Encryption (FHE) enables processing data while keeping it fully encrypted—AI agents never access plaintext, providing post-quantum security through NIST-approved lattice cryptography (2024). Use cases include private DeFi portfolio advice without exposing holdings, healthcare analysis of encrypted medical records without revealing data, and prediction markets aggregating encrypted inputs. MindNetwork and Fhenix are building FHE-native platforms for encrypted Web3 and digital sovereignty. Zero-knowledge proofs complement TEEs and FHE by enabling private authentication (proving age without revealing birthdate), confidential smart contracts executing logic without exposing data, verifiable AI operations proving task completion without revealing inputs, and cross-chain privacy for secure interoperability. ZK Zyra + Ispolink demonstrate production zero-knowledge proofs for AI-powered Web3 gaming.
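
Full FHE is too heavy to demonstrate in a few lines, but the additive homomorphism at its core can be shown with Paillier, an older partially homomorphic scheme: an aggregator can sum encrypted inputs, as in the prediction-market example, without ever seeing them. A toy sketch with deliberately insecure parameters:

```typescript
// Paillier is only *partially* homomorphic (addition), not FHE, but it shows
// the core idea: computing on ciphertexts without decrypting. Toy primes only.

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

function egcd(a: bigint, b: bigint): [bigint, bigint, bigint] {
  if (b === 0n) return [a, 1n, 0n];
  const [g, x, y] = egcd(b, a % b);
  return [g, y, x - (a / b) * y];
}

function modInv(a: bigint, m: bigint): bigint {
  const [g, x] = egcd(((a % m) + m) % m, m);
  if (g !== 1n) throw new Error("not invertible");
  return ((x % m) + m) % m;
}

const gcd = (a: bigint, b: bigint): bigint => (b === 0n ? a : gcd(b, a % b));

// Key setup: n = p*q, lambda = lcm(p-1, q-1), generator g = n+1.
const p = 61n, q = 53n;
const n = p * q;                                   // 3233
const n2 = n * n;
const lambda = ((p - 1n) * (q - 1n)) / gcd(p - 1n, q - 1n);
const mu = modInv(lambda, n);

// Enc(m, r) = (1 + m*n) * r^n mod n^2, for random r coprime with n.
const encrypt = (m: bigint, r: bigint): bigint =>
  (((1n + m * n) % n2) * modPow(r, n, n2)) % n2;

// Dec(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) / n.
const decrypt = (c: bigint): bigint =>
  (((modPow(c, lambda, n2) - 1n) / n) * mu) % n;

// Two private inputs (e.g. prediction-market stakes) summed as ciphertexts.
const c1 = encrypt(1500n, 17n);
const c2 = encrypt(1000n, 19n);
const cSum = (c1 * c2) % n2;                       // homomorphic addition
console.log(decrypt(cSum));                        // 2500n
```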

Ownership models using blockchain tokens have reached significant market scale. Virtuals Protocol leads with $700+ million market cap managing $2+ billion in AI agent market capitalization, representing 85% of marketplace activity and generating $60 million protocol revenue by December 2024. Users purchase tokens representing agent stakes, enabling co-ownership with full trading, transfer, and revenue-sharing rights. SentrAI focuses on tradable AI personas as programmable on-chain assets partnering with Stability World AI for visual capabilities, creating a social-to-AI economy with cross-platform monetizable experiences. Grok Ani Companion demonstrates mainstream adoption with ANI token at $0.03 ($30 million market cap) generating $27-36 million daily trading volume through smart contracts securing interactions and on-chain metadata storage.

NFT-based ownership provides alternative models emphasizing uniqueness over fungibility. FURO on Ethereum offers 3D AI companions that learn, remember, and evolve for $10 NFT plus $FURO tokens, with personalization adapting to user style and reflecting emotions—planning physical toy integration. AXYC (AxyCoin) integrates AI with GameFi and EdTech using AR token collection, NFT marketplace, and educational modules where AI pets function as tutors for languages, STEM, and cognitive training with milestone rewards incentivizing long-term development.

Data portability and interoperability remain works in progress with important caveats. Working implementations include Gitcoin Passport's cross-platform identity with "stamps" from multiple authenticators, Civic Pass on-chain identity management across dApps/DeFi/NFTs, and T3id (Trident3) aggregating 1,000+ identity technologies. On-chain metadata stores preferences, memories, and milestones immutably, while blockchain attestations through Ceramic and KILT Protocol link AI model states to identities. However, current limitations include no universal SSI agreement yet, portability limited to specific ecosystems, evolving regulatory frameworks (GDPR, DMA, Data Act), and requirement for ecosystem-wide adoption before seamless cross-platform migration becomes reality. The 103+ experimental DID methods create fragmentation, with Gartner predicting 70% of SSI adoption depends on achieving cross-platform compatibility by 2027.

Monetization opportunities at the intersection enable entirely new economic models. Usage-based pricing charges per API call, token, task, or compute time—Hugging Face, whose Inference Endpoints monetize this way, reached a $4.5 billion valuation (2023). Subscription models provide predictable revenue, with Cognigy deriving 60% of $28 million ARR from subscriptions. Outcome-based pricing aligns payment with results (leads generated, tickets resolved, hours saved) as demonstrated by Zendesk, Intercom, and Chargeflow. Agent-as-a-Service positions AI as "digital employees" with monthly fees—Harvey, 11x, and Vivun pioneer enterprise-grade AI workforces. Transaction fees take a percentage of agent-facilitated commerce, emerging in agentic platforms requiring high volume for viability.

Blockchain-specific revenue models create token economics where value appreciates with ecosystem growth, staking rewards compensate service providers, governance rights provide premium features for holders, and NFT royalties generate secondary market earnings. Agent-to-agent economy enables autonomous payments where AI agents compensate each other using USDC through Circle's Programmable Wallets, marketplace platforms taking percentage of inter-agent transactions, and smart contracts automating payments based on verified completed work. The AI agent market projects from $5.3 billion (2024) to $47.1 billion (2030) at 44.8% CAGR, potentially reaching $216 billion by 2035, with Web3 AI attracting $213 million from crypto VCs in Q3 2024 alone.

Investment landscape shows convergence thesis gaining institutional validation

Capital deployment across tokenized identity and AI companions accelerated dramatically in 2024-2025 as institutional investors recognized the convergence opportunity. AI captured $100+ billion in venture funding during 2024—representing 33% of all global VC—with 80% increase from 2023's $55.6 billion. Generative AI specifically attracted $45 billion, nearly doubling from $24 billion in 2023, while late-stage GenAI deals averaged $327 million compared to $48 million in 2023. This capital concentration reflects investor conviction that AI represents a secular technology shift rather than cyclical hype.

Web3 and decentralized identity funding followed parallel trajectory. The Web3 market grew from $2.18 billion (2023) to $49.18 billion (2025)—44.9% compound annual growth rate—with 85% of deals at seed or Series A stages signaling infrastructure-building phase. Tokenized Real-World Assets reached $24 billion (2025), up 308% over three years, with projections to $412 billion globally. Decentralized identity specifically scaled from $156.8 million (2021) toward projected $77.8 billion by 2031—87.9% CAGR. Private credit tokenization drove 58% of tokenized RWA flows in H1 2025, while tokenized treasury and money market funds reached $7.4 billion with 80% year-over-year increase.

Matthew Graham's Ryze Labs exemplifies the convergence investment thesis through systematic portfolio construction. The firm incubated Amiko, a personal AI platform combining portable hardware (Kick device), a home-based hub (Brain), local inference, structured memory, coordinated agents, and emotionally-aware AI including the Eliza character. Amiko's positioning emphasizes "high-fidelity digital twins that capture behavior, not just words" with privacy-first local processing—directly addressing Graham's identity infrastructure thesis. Ryze also incubated Eliza Wakes Up, bringing AI agents to life through humanoid robotics powered by ElizaOS, taking $420,000 pre-orders for a 5'10" humanoid with a silicone animatronic face, emotional intelligence, and the ability to perform physical tasks and blockchain transactions. Graham advises the project, calling it "the most advanced humanoid robot ever seen outside a lab" and "the most ambitious since Sophia the Robot."

Strategic infrastructure investment came through EdgeX Labs backing in April 2025—decentralized edge computing with 10,000+ live nodes deployed globally providing the substrate for multi-agent coordination and local inference. The AI Combinator program launched 2024/2025 with $5 million funding 10-12 projects at AI/crypto intersection in partnership with Shaw (Eliza Labs) and a16z. Graham described it as targeting "the Cambrian explosion of AI agent innovation" as "the most important development in the industry since DeFi." Technical partners include Polyhedra Network (verifiable computing) and Phala Network (trustless computing), with ecosystem partners like TON Ventures bringing AI agents to multiple Layer 1 blockchains.

Major VCs have published explicit crypto+AI investment theses. Coinbase Ventures articulated that "crypto and blockchain-based systems are a natural complement to generative AI" with these "two secular technologies going to interweave like a DNA double-helix to make the scaffolding for our digital lives." Portfolio companies include Skyfire and Payman. a16z, Paradigm, Delphi Ventures, and Dragonfly Capital (raising $500 million fund in 2025) actively invest in agent infrastructure. New dedicated funds emerged: Gate Ventures + Movement Labs ($20 million Web3 fund), Gate Ventures + UAE ($100 million fund), Avalanche + Aethir ($100 million with AI agents focus), and aelf Ventures ($50 million dedicated fund).

Institutional adoption validates the tokenization narrative with traditional finance giants deploying production systems. BlackRock's BUIDL became the largest tokenized private fund at $2.5 billion AUM, while CEO Larry Fink declared "every asset can be tokenized... it will revolutionize investing." Franklin Templeton's FOBXX reached $708 million AUM, Circle/Hashnote's USYC $488 million. Goldman Sachs has operated its DAP end-to-end tokenized asset infrastructure for over a year. J.P. Morgan's Kinexys platform integrates digital identity in Web3 with blockchain identity verification. HSBC launched its Orion tokenized bond issuance platform. Bank of America plans stablecoin market entry pending approval, with $3.26 trillion in assets positioned for digital payment innovation.

Regional dynamics show Middle East emerging as Web3 capital hub. Gate Ventures launched $100 million UAE fund while Abu Dhabi invested $2 billion in Binance. Conferences reflect industry maturation—TOKEN2049 Singapore drew 25,000 attendees from 160+ countries (70% C-suite), while ETHDenver 2025 attracted 25,000 under theme "From Hype to Impact: Web3 Goes Value-Driven." Investment strategy shifted from "aggressive funding and rapid scaling" toward "disciplined and strategic approaches" emphasizing profitability and sustainable growth, signaling transition from speculation to operational focus.

Challenges persist but technical solutions emerge across privacy, scalability, and interoperability

Despite impressive progress, significant technical and adoption challenges must be resolved before tokenized identity and AI companions achieve mainstream integration. These obstacles shape development timelines and determine which projects succeed in building sustainable user bases.

The privacy versus transparency tradeoff represents the fundamental tension—blockchain transparency conflicts with AI privacy needs for processing sensitive personal data and intimate conversations. Solutions have emerged through multi-layered cryptographic approaches: TEE isolation provides hardware-level privacy (Phala's 30,000+ devices operational), FHE computation enables encrypted processing eliminating plaintext exposure with post-quantum security, ZKP verification proves correctness without revealing data, and hybrid architectures combine on-chain governance with off-chain private computation. These technologies are production-ready but require ecosystem-wide adoption.

Computational scalability challenges arise from AI inference expense combined with blockchain's limited throughput. Layer-2 scaling solutions address this through zkSync, StarkNet, and Arbitrum handling off-chain compute with on-chain verification. Modular architecture using Polkadot's XCM enables cross-chain coordination without mainnet congestion. Off-chain computation pioneered by Phala allows agents executing off-chain while settling on-chain. Purpose-built chains optimize specifically for AI operations rather than general computation. Current average public chain throughput of 17,000 TPS creates bottlenecks, making L2 migration essential for consumer-scale applications.

Data ownership and licensing complexity stems from unclear intellectual property rights across base models, fine-tuning data, and AI outputs. Smart contract licensing embeds usage conditions directly in tokens with automated enforcement. Provenance tracking through Ceramic and KILT Protocol links model states to identities creating audit trails. NFT ownership via ERC-7857 provides clear transfer mechanisms and custody rules. Automated royalty distribution through smart contracts ensures proper value capture. However, legal frameworks lag technology with regulatory uncertainty deterring institutional adoption—who bears liability when decentralized credentials fail? Can global interoperability standards emerge or will regionalization prevail?

Interoperability fragmentation, with 103+ DID methods and divergent ecosystems, identity standards, and AI frameworks, creates walled gardens. Cross-chain messaging protocols like Polkadot XCM and Cosmos IBC are under development. Universal standards through W3C DIDs and DIF specifications progress slowly, requiring consensus-building. Multi-chain wallets like Safe smart accounts with programmable permissions enable some portability. Abstraction layers such as MIT's NANDA project building agentic web indexes attempt ecosystem bridging. Gartner predicts 70% of SSI adoption depends on achieving cross-platform compatibility by 2027, making interoperability the critical path dependency.

User experience complexity remains the primary adoption barrier. Wallet setup sees 68% user abandonment during seed-phrase generation. Key management creates existential risk—lost private keys mean permanently lost identity with no recovery mechanism. The balance between security and recoverability proves elusive; social recovery systems add complexity while maintaining self-custody principles. Cognitive load from understanding blockchain concepts, wallets, gas fees, and DIDs overwhelms non-technical users. This explains why institutional B2B adoption progresses faster than consumer B2C—enterprises can absorb complexity costs while consumers demand seamless experiences.

Economic sustainability challenges arise from high infrastructure costs (GPUs, storage, compute) required for AI operations. Decentralized compute networks distribute costs across multiple providers competing on price. DePIN (Decentralized Physical Infrastructure Networks) with 1,170+ projects spread resource provisioning burden. Usage-based models align costs with value delivered. Staking economics provide token incentives for resource provision. However, VC-backed growth strategies often subsidize user acquisition with unsustainable unit economics—the shift toward profitability in 2025 investment strategy reflects recognition that business model validation matters more than raw user growth.

Trust and verification issues center on ensuring AI agents act as intended without manipulation or drift. Remote attestation from TEEs issues cryptographic proofs of execution integrity. On-chain audit trails create transparent records of all actions. Cryptographic proofs via ZKPs verify computation correctness. DAO governance enables community oversight through token-weighted voting. Yet verification of AI decision-making processes remains challenging given LLM opacity—even with cryptographic proofs of correct execution, understanding why an AI agent made specific choices proves difficult.

The regulatory landscape presents both opportunities and risks. Europe's eIDAS 2.0 mandatory digital wallets by 2026 create massive distribution channel, while US pro-crypto policy shift in 2025 removes friction. However, Worldcoin bans in multiple jurisdictions demonstrate government concerns about biometric data collection and centralization risks. GDPR "right to erasure" conflicts with blockchain immutability despite off-chain storage workarounds. AI agent legal personhood and liability frameworks remain undefined—can AI agents own property, sign contracts, or bear responsibility for harms? These questions lack clear answers as of October 2025.

Looking ahead: near-term infrastructure buildout enables medium-term consumer adoption

Timeline projections from industry experts, market analysts, and technical assessment converge around a multi-phase rollout. Near-term (2025-2026) brings regulatory clarity from US pro-crypto policies, major institutions entering RWA tokenization at scale, universal identity standards emerging through W3C and DIF convergence, and multiple projects moving from testnet to mainnet. Sahara AI mainnet launches Q2-Q3 2025, ENS Namechain migration completes Q4 2025 with 80-90% gas reduction, Lens v3 on zkSync deploys, and Ronin AI agent SDK reaches public release. Investment activity remains focused 85% on early-stage (seed/Series A) infrastructure plays, with $213 million flowing from crypto VCs to AI projects in Q3 2024 alone signaling sustained capital commitment.

Medium-term (2027-2030) expects AI agent market reaching $47.1 billion by 2030 from $5.3 billion (2024)—44.8% CAGR. Cross-chain AI agents become standard as interoperability protocols mature. Agent-to-agent economy generates measurable GDP contribution as autonomous transactions scale. Comprehensive global regulations establish legal frameworks for AI agent operations and liability. Decentralized identity reaches $41.73 billion (2030) from $4.89 billion (2025)—53.48% CAGR—with mainstream adoption in finance, healthcare, and government services. User experience improvements through abstraction layers make blockchain complexity invisible to end users.

Long-term (2030-2035) could see market reaching $216 billion by 2035 for AI agents with true cross-platform AI companion migration enabling users taking their AI relationships anywhere. Potential AGI integration transforms capabilities beyond current narrow AI applications. AI agents might become primary digital economy interface replacing apps and websites as interaction layer. Decentralized identity market hits $77.8 billion (2031) becoming default for digital interactions. However, these projections carry substantial uncertainty—they assume continued technological progress, favorable regulatory evolution, and successful resolution of UX challenges.

What separates realistic from speculative visions? Currently operational and production-ready: Phala's 30,000+ TEE devices processing real workloads, ERC-7857 standard formally proposed with implementations underway, Virtuals Protocol managing $2+ billion AI agent market cap, multiple AI agent marketplaces operational (Virtuals, Holoworld), DeFi AI agents actively trading (Fetch.ai, AIXBT), working products like Agent Wars game, FURO/AXYC NFT companions, Grok Ani with $27-36 million daily trading volume, and proven technologies (TEE, ZKP, FHE, smart contract automation).

Still speculative and not yet realized: universal AI companion portability across ALL platforms, fully autonomous agents managing significant wealth unsupervised, agent-to-agent economy as major percentage of global GDP, complete regulatory framework for AI agent rights, AGI integration with decentralized identity, seamless Web2-Web3 identity bridging at scale, quantum-resistant implementations deployed broadly, and AI agents as primary internet interface for masses. Market projections ($47 billion by 2030, $216 billion by 2035) extrapolate current trends but depend on assumptions about regulatory clarity, technological breakthroughs, and mainstream adoption rates that remain uncertain.

Matthew Graham's positioning reflects this nuanced view—deploying capital in production infrastructure today (EdgeX Labs, Phala Network partnerships) while incubating consumer applications (Amiko, Eliza Wakes Up) that will mature as underlying infrastructure scales. His emphasis on emerging markets (Lagos, Bangalore) suggests patience for developed market regulatory clarity while capturing growth in regions with lighter regulatory burdens. The "waifu infrastructure layer" comment positions identity as foundational requirement rather than nice-to-have feature, implying multi-year buildout before consumer-scale AI companion portability becomes reality.

Industry consensus centers on technical feasibility being high (7-8/10)—TEE, FHE, ZKP technologies proven and deployed, multiple working implementations exist, scalability addressed through Layer-2s, and standards actively progressing. Economic feasibility rates medium-high (6-7/10) with clear monetization models emerging, consistent VC funding flow, decreasing infrastructure costs, and validated market demand. Regulatory feasibility remains medium (5-6/10) as US shifts pro-crypto but EU develops frameworks slowly, privacy regulations need adaptation, and AI agent IP rights remain unclear. Adoption feasibility sits at medium (5/10)—early adopters engaged, but UX challenges persist, limited current interoperability, and significant education/trust-building needed.

The convergence of tokenized identity and AI companions represents not speculative fiction but an actively developing sector with real infrastructure, operational marketplaces, proven technologies, and significant capital investment. Production reality shows $2+ billion in managed assets, 30,000+ deployed TEE devices, $60 million protocol revenue from Virtuals alone, and daily trading volumes in tens of millions. Development status includes proposed standards (ERC-7857), deployed technologies (TEE/FHE/ZKP), and operational frameworks (Virtuals, Phala, Fetch.ai).

The convergence works because blockchain solves AI's ownership problem—who owns the agent, its memories, its economic value?—while AI solves blockchain's UX problem of how users interact with complex cryptographic systems. Privacy tech (TEE/FHE/ZKP) enables this convergence without sacrificing user sovereignty. This is an emerging but real market with clear technical paths, proven economic models, and growing ecosystem adoption. Success hinges on UX improvements, regulatory clarity, interoperability standards, and continued infrastructure development—all actively progressing through 2025 and beyond. Matthew Graham's systematic infrastructure investments position Ryze Labs to capture value as the "most important wave of innovation since DeFi summer" moves from technical buildout toward consumer adoption at scale.

Frax's Stablecoin Singularity: Sam Kazemian's Vision Beyond GENIUS

· 28 min read
Dora Noda
Software Engineer

The "Stablecoin Singularity" represents Sam Kazemian's audacious plan to transform Frax Finance from a stablecoin protocol into the "decentralized central bank of crypto." GENIUS is not a Frax technical system but rather landmark U.S. federal legislation (Guiding and Establishing National Innovation for U.S. Stablecoins Act) signed into law July 18, 2025, requiring 100% reserve backing and comprehensive consumer protections for stablecoins. Kazemian's involvement in drafting this legislation positions Frax as the primary beneficiary, with FXS surging over 100% following the bill's passage. What comes "after GENIUS" is Frax's transformation into a vertically integrated financial infrastructure combining frxUSD (compliant stablecoin), FraxNet (banking interface), Fraxtal (evolving to L1), and revolutionary AIVM technology using Proof of Inference consensus—the world's first AI-powered blockchain validation mechanism. This vision targets $100 billion TVL by 2026, positioning Frax as the issuer of "the 21st century's most important assets" through an ambitious roadmap merging regulatory compliance, institutional partnerships (BlackRock, Securitize), and cutting-edge AI-blockchain convergence.

Understanding the Stablecoin Singularity concept

The "Stablecoin Singularity" emerged in March 2024 as Frax Finance's comprehensive strategic roadmap unifying all protocol aspects into a singular vision. Announced through FIP-341 and approved by community vote in April 2024, this represents a convergence point where Frax transitions from experimental stablecoin protocol to comprehensive DeFi infrastructure provider.

The Singularity encompasses five core components working in concert. First, achieving 100% collateralization for FRAX marked the "post-Singularity era," where Frax generated $45 million to reach full backing after years of fractional-algorithmic experimentation. Second, Fraxtal L2 blockchain launched as "the substrate that enables the Frax ecosystem"—described as the "operating system of Frax" providing sovereign infrastructure. Third, FXS Singularity Tokenomics unified all value capture, with Sam Kazemian declaring "all roads lead to FXS and it is the ultimate beneficiary of the Frax ecosystem," implementing 50% revenue to veFXS holders and 50% to the FXS Liquidity Engine for buybacks. Fourth, the FPIS token merger into FXS simplified governance structure, ensuring "the entire Frax community is singularly aligned behind FXS." Fifth, fractal scaling roadmap targeting 23 Layer 3 chains within one year, creating sub-communities "like fractals" within the broader Frax Network State.

The strategic goal is staggering: $100 billion TVL on Fraxtal by end of 2026, up from $13.2 million at launch. As Kazemian stated: "Rather than pondering theoretical new markets and writing whitepapers, Frax has been and always will be shipping live products and seizing markets before others know they even exist. This speed and safety will be enabled by the foundation that we've built to date. The Singularity phase of Frax begins now."

This vision extends beyond mere protocol growth. Fraxtal represents "the home of Frax Nation & the Fraxtal Network State"—conceptualizing the blockchain as providing "sovereign home, culture, and digital space" for the community. The L3 chains function as "sub-communities that have their own distinct identity & culture but part of the overall Frax Network State," introducing network state philosophy to DeFi infrastructure.

GENIUS Act context and Frax's strategic positioning

GENIUS is not a Frax protocol feature but federal stablecoin legislation that became law on July 18, 2025. The Guiding and Establishing National Innovation for U.S. Stablecoins Act establishes the first comprehensive federal regulatory framework for payment stablecoins, passing the Senate 68-30 on May 20 and the House 308-122 on July 17.

The legislation mandates 100% reserve backing using permitted assets (U.S. dollars, Treasury bills, repurchase agreements, money market funds, central bank reserves). It requires monthly public reserve disclosures and audited annual statements for issuers exceeding $50 billion. A dual federal/state regulatory structure gives the OCC oversight of nonbank issuers above $10 billion, while state regulators handle smaller issuers. Consumer protections prioritize stablecoin holders over all other creditors in insolvency. Critically, issuers must possess technical capabilities to seize, freeze, or burn payment stablecoins when legally required, and cannot pay interest to holders or make misleading claims about government backing.

Sam Kazemian's involvement proves strategically significant. Multiple sources indicate he was "deeply involved in the discussion and drafting of the GENIUS Act as an industry insider," frequently photographed with crypto-friendly legislators including Senator Cynthia Lummis in Washington D.C. This insider position provided advance knowledge of regulatory requirements, allowing Frax to build compliance infrastructure before the law's enactment. Market recognition came swiftly—FXS briefly surged above 4.4 USDT following Senate passage, with over 100% gains that month. As one analysis noted: "As a drafter and participant of the bill, Sam naturally has a deeper understanding of the 'GENIUS Act' and can more easily align his project with the requirements."

Frax's strategic positioning for GENIUS Act compliance began well before the legislation's passage. The protocol transformed from hybrid algorithmic stablecoin FRAX to fully collateralized frxUSD using fiat currency as collateral, abandoning "algorithmic stability" after the Luna UST collapse demonstrated systemic risks. By February 2025—five months before GENIUS became law—Frax launched frxUSD as a fiat-redeemable, fully-collateralized stablecoin designed from inception to comply with anticipated regulatory requirements.

This regulatory foresight creates significant competitive advantages. As market analysis concluded: "The entire roadmap aimed at becoming the first licensed fiat-backed stablecoin." Frax built a vertically integrated ecosystem positioning it uniquely: frxUSD as the compliant stablecoin pegged 1:1 to USD, FraxNet as the bank interface connecting TradFi with DeFi, and Fraxtal as the L2 execution layer potentially transitioning to L1. This full-stack approach enables regulatory compliance while maintaining decentralized governance and technical innovation—a combination competitors struggle to replicate.

Sam Kazemian's philosophical framework: stablecoin maximalism

Sam Kazemian articulated his central thesis at ETHDenver 2024 in a presentation titled "Why It's Stablecoins All The Way Down," declaring: "Everything in DeFi, whether they know it or not, will become a stablecoin or will become stablecoin-like in structure." This "stablecoin maximalism" represents the fundamental worldview held by the Frax core team—that most crypto protocols will converge to become stablecoin issuers in the long-term, or stablecoins become central to their existence.

The framework rests on identifying a universal structure underlying all successful stablecoins. Kazemian argues that at scale, all stablecoins converge to two essential components: a Risk-Free Yield (RFY) mechanism generating revenue from backing assets in the lowest risk venue within the system, and a Swap Facility where stablecoins can be redeemed for their reference peg with high liquidity. He demonstrated this across diverse examples: USDC combines Treasury bills (RFY) with cash (swap facility); stETH uses PoS validators (RFY) with the Curve stETH-ETH pool via LDO incentives (swap facility); Frax's frxETH implements a two-token system where frxETH serves as the ETH-pegged stablecoin while sfrxETH earns native staking yields, with 9.5% of circulation used in various protocols without earning yield—creating crucial "monetary premium."
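
The two-token design has a simple arithmetic consequence worth spelling out: all backing ETH earns validator yield, but only sfrxETH depositors receive it, so the share of frxETH held idle (the monetary premium) directly amplifies sfrxETH returns. A sketch using the 9.5% figure above and an assumed base staking rate:

```typescript
// frxETH two-token arithmetic: every frxETH is backed by staked ETH earning
// validator yield, but only sfrxETH depositors receive it. frxETH held idle
// in DeFi (the "monetary premium" share) amplifies sfrxETH APR.

function sfrxEthApr(baseStakingApr: number, idleShare: number): number {
  // Yield on 100% of the backing flows to the (1 - idleShare) staked as sfrxETH.
  return baseStakingApr / (1 - idleShare);
}

const base = 0.035;       // assumed 3.5% native ETH staking yield (illustrative)
const idle = 0.095;       // 9.5% of frxETH circulating without earning yield
console.log((sfrxEthApr(base, idle) * 100).toFixed(2)); // ~3.87% for sfrxETH holders
```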

This concept of monetary premium represents what Kazemian considers "the strongest tangible measurement" of stablecoin success—surpassing even brand name and reputation. Monetary premium measures "demand for an issuer's stablecoin to be held purely for its usefulness without expectation of any interest rate, payment of incentives, or other utility from the issuer." Kazemian boldly predicts that stablecoins failing to adopt this two-prong structure "will be unable to scale into the trillions" and will lose market share over time.

The philosophy extends beyond traditional stablecoins. Kazemian provocatively argues that "all bridges are stablecoin issuers"—if sustained monetary premium exists for bridged assets like Wrapped DAI on non-Ethereum networks, bridge operators will naturally seek to deposit underlying assets in yield-bearing mechanisms like the DAI Savings Rate module. Even WBTC functions essentially as a "BTC-backed stablecoin." This expansive definition reveals stablecoins not as a product category but as the fundamental convergence point for all of DeFi.

Kazemian's long-term conviction dates to 2019, well before DeFi summer: "I've been telling people about algorithmic stablecoins since early 2019... For years now I have been telling friends and colleagues that algorithmic stablecoins could become one of the biggest things in crypto and now everyone seems to believe it." His most ambitious claim positions Frax against Ethereum itself: "I think that the best chance any protocol has at becoming larger than the native asset of a blockchain is an algorithmic stablecoin protocol. So I believe that if there is anything on ETH that has a shot at becoming more valuable than ETH itself it's the combined market caps of FRAX+FXS."

Philosophically, this represents pragmatic evolution over ideological purity. As one analysis noted: "The willingness to evolve from fractional to full collateralization proved that ideology should never override practicality in building financial infrastructure." Yet Kazemian maintains decentralization principles: "The whole idea with these algorithmic stablecoins—Frax being the biggest one—is that we can build something as decentralized and useful as Bitcoin, but with the stability of the US dollar."

What comes after GENIUS: Frax's 2025 vision and beyond

What comes "after GENIUS" represents Frax's transformation from stablecoin protocol to comprehensive financial infrastructure positioned for mainstream adoption. The December 2024 "Future of DeFi" roadmap outlines this post-regulatory landscape vision, with Sam Kazemian declaring: "Frax is not just keeping pace with the future of finance—it's shaping it."

The centerpiece innovation is AIVM (Artificial Intelligence Virtual Machine)—a revolutionary parallelized blockchain within Fraxtal using Proof of Inference consensus, described as a "world-first" mechanism. Developed with IQ's Agent Tokenization Platform, AIVM uses AI and machine learning models to validate blockchain transactions rather than traditional consensus mechanisms. This enables fully autonomous AI agents with no single point of control, owned by token holders and capable of independent operation. As IQ's CTO stated: "Launching tokenized AI agents with IQ ATP on Fraxtal's AIVM will be unlike any other launch platform... Sovereign, on-chain agents that are owned by token holders is a 0 to 1 moment for crypto and AI." This positions Frax at the intersection of the "two most eye-catching industries globally right now"—artificial intelligence and stablecoins.

The North Star Hard Fork fundamentally restructures Frax's token economics. FXS becomes FRAX—the gas token for Fraxtal as it evolves toward L1 status, while the original FRAX stablecoin becomes frxUSD. The governance token transitions from veFXS to veFRAX, preserving revenue-sharing and voting rights while clarifying the ecosystem's value capture. This rebrand implements a tail emission schedule starting at 8% annual inflation, decreasing 1% yearly to a 3% floor, allocated to community initiatives, ecosystem growth, team, and DAO treasury. Simultaneously, the Frax Burn Engine (FBE) permanently destroys FRAX through FNS Registrar and Fraxtal EIP1559 base fees, creating deflationary pressure balancing inflationary emissions.
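
The stated tail-emission parameters imply a simple declining schedule, sketched below (rates only; the text does not specify the exact supply base each year):

```typescript
// North Star tail emissions: start at 8%/yr, decline 1 point per year, floor at 3%.
const emissionRate = (yearsSinceFork: number): number =>
  Math.max(8 - yearsSinceFork, 3) / 100;

for (let y = 0; y <= 6; y++) {
  console.log(`year ${y}: ${(emissionRate(y) * 100).toFixed(0)}%`);
}
// year 0: 8% ... year 5: 3%, year 6: 3% (floor reached)
```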

frxUSD launched in January 2025 with institutional-grade backing, representing the maturation of Frax's regulatory strategy. By partnering with Securitize to access BlackRock's USD Institutional Digital Liquidity Fund (BUIDL), Kazemian stated they're "setting a new standard for stablecoins." The stablecoin uses a hybrid model with governance-approved custodians including BlackRock, Superstate (USTB, USCC), FinresPBC, and WisdomTree (WTGXX). Reserve composition includes cash, U.S. Treasury bills, repurchase agreements, and money market funds—precisely matching GENIUS Act requirements. Critically, frxUSD offers direct fiat redemption capabilities through these custodians at 1:1 parity, bridging TradFi and DeFi seamlessly.

FraxNet provides the banking interface layer connecting traditional financial systems with decentralized infrastructure. Users can mint and redeem frxUSD, earn stable yields, and access programmable accounts with yield streaming functionality. This positions Frax as providing complete financial infrastructure: frxUSD (money layer), FraxNet (banking interface), and Fraxtal (execution layer)—what Kazemian calls the "stablecoin operating system."

The Fraxtal evolution extends the L2 roadmap toward potential L1 transition. The platform implements real-time blocks for ultra-fast processing comparable to Sei and Monad, positioning it for high-throughput applications. The fractal scaling strategy targets 23 Layer 3 chains within one year, creating customizable app-chains via partnerships with Ankr and Asphere. Each L3 functions as a distinct sub-community within the Fraxtal Network State—echoing Kazemian's vision of digital sovereignty.

The Crypto Strategic Reserve (CSR) positions Frax as the "MicroStrategy of DeFi"—building an on-chain reserve denominated in BTC and ETH that will become "one of the largest balance sheets in DeFi." This reserve resides on Fraxtal, contributing to TVL growth while governed by veFRAX stakers, creating alignment between protocol treasury management and token holder interests.

The Frax Universal Interface (FUI) redesign simplifies DeFi access for mainstream adoption. Global fiat onramping via Halliday reduces friction for new users, while optimized routing through Odos integration enables efficient cross-chain asset movement. Mobile wallet development and AI-driven enhancements prepare the platform for the "next billion users entering crypto."

Looking beyond 2025, Kazemian envisions Frax expanding to issue frx-prefixed versions of major blockchain assets—frxBTC, frxNEAR, frxTIA, frxPOL, frxMETIS—becoming "the largest issuer of the most important assets in the 21st century." Each asset applies Frax's proven liquid staking derivative model to new ecosystems, generating revenue while providing enhanced utility. The frxBTC ambition particularly stands out: creating "the biggest issuer" of Bitcoin in DeFi, completely decentralized unlike WBTC, using multi-computational threshold redemption systems.

Revenue generation scales proportionally. As of March 2024, Frax generated $40+ million annual revenue according to DeFiLlama, excluding Fraxtal chain fees and Fraxlend AMO. The fee switch activation increased veFXS yield 15-fold (from 0.20-0.80% to 3-12% APR), with 50% of protocol yield distributed to veFXS holders and 50% to the FXS Liquidity Engine for buybacks. This creates sustainable value accrual independent of token emissions.

The ultimate vision positions Frax as "the U.S. digital dollar"—the world's most innovative decentralized stablecoin infrastructure. Kazemian's aspiration extends to Federal Reserve Master Accounts, enabling Frax to deploy Treasury bills and reverse repurchase agreements as the risk-free yield component matching his stablecoin maximalism framework. This would complete the convergence: a decentralized protocol with institutional-grade collateral, regulatory compliance, and Fed-level financial infrastructure access.

Technical innovations powering the vision

Frax's technical roadmap demonstrates remarkable innovation velocity, implementing novel mechanisms that influence broader DeFi design patterns. The FLOX (Fraxtal Blockspace Incentives) system represents the first mechanism where users spending gas and developers deploying contracts simultaneously earn rewards. Unlike traditional airdrops with set snapshot times, FLOX uses random sampling of data availability to prevent negative farming behaviors. Every epoch (initially seven days), the Flox Algorithm distributes FXTL points based on gas usage and contract interactions, tracking full transaction traces to reward all contracts involved—routers, pools, token contracts. Users can earn more than gas spent while developers earn from their dApp's usage, aligning incentives across the ecosystem.
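A toy model of one Flox epoch illustrates the incentive design (a sketch under stated assumptions: the real algorithm samples data availability randomly and weights full transaction traces, and the 50/50 user/contract split below is an assumption, not a published parameter):

```python
import random

def flox_epoch_rewards(traces: list[dict], fxtl_budget: float) -> dict[str, float]:
    """Toy Flox-style allocation for one epoch: sample transactions at
    random (deterring snapshot-timed farming), then credit FXTL points to
    the gas spender and to every contract touched in the sampled traces,
    pro rata by gas used. Assumes each trace touches at least one contract."""
    if not traces:
        return {}
    sampled = random.sample(traces, k=max(1, len(traces) // 2))
    total_gas = sum(t["gas_used"] for t in sampled)
    points: dict[str, float] = {}
    for t in sampled:
        share = fxtl_budget * t["gas_used"] / total_gas
        points[t["sender"]] = points.get(t["sender"], 0.0) + share / 2
        touched = t["contracts_touched"]  # routers, pools, token contracts...
        for contract in touched:
            points[contract] = points.get(contract, 0.0) + share / (2 * len(touched))
    return points
```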

The AIVM architecture marks a paradigm shift in blockchain consensus. Using Proof of Inference, AI and machine learning models validate transactions rather than traditional PoW/PoS mechanisms. This enables autonomous AI agents to operate as blockchain validators and transaction processors—creating the infrastructure for an AI-driven economy where agents hold tokenized ownership and execute strategies independently. The partnership with IQ's Agent Tokenization Platform provides the tooling for deploying sovereign, on-chain AI agents, positioning Fraxtal as the premier platform for AI-blockchain convergence.

FrxETH v2 transforms liquid staking derivatives into dynamic lending markets for validators. Rather than the core team running all nodes, the system implements a Fraxlend-style lending market where users deposit ETH into lending contracts and node operators borrow it to fund their validators. This removes operational centralization while potentially achieving higher APRs approaching or surpassing liquid restaking tokens (LRTs). Integration with EigenLayer enables direct restaking pods and EigenLayer deposits, making sfrxETH function as both an LSD and LRT. The Fraxtal AVS (Actively Validated Service) uses both FXS and sfrxETH restaking, creating additional security layers and yield opportunities.
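A minimal sketch of the Fraxlend-style rate curve such a validator lending market would use (the base rate, slopes, and kink below are placeholder assumptions):

```python
def borrow_rate(utilization: float, base: float = 0.01, slope: float = 0.05,
                kink: float = 0.85, jump_slope: float = 1.0) -> float:
    """Utilization-based interest curve: node operators borrowing ETH pay
    more as the share of deposited ETH lent out rises, and the rate spikes
    past the kink so lenders always have an incentive to stay liquid."""
    if utilization <= kink:
        return base + slope * utilization
    return base + slope * kink + jump_slope * (utilization - kink)

# At 50% utilization borrowers pay ~3.5%; at 95% the rate jumps to ~15.3%.
print(f"{borrow_rate(0.50):.2%}  {borrow_rate(0.95):.2%}")
```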

BAMM (Bond Automated Market Maker) combines AMM and lending functionality into a novel protocol with no direct competitors. Kazemian described it enthusiastically: "Everyone will just launch BAMM pairs for their project or for their meme coin or whatever they want to do instead of Uniswap pairs and then trying to build liquidity on centralized exchanges, trying to get a Chainlink oracle, trying to pass Aave or compound governance vote." BAMM pairs eliminate external oracle requirements and maintain automatic solvency protection during high volatility. Native integration into Fraxtal positions it to have "the largest impact on FRAX liquidity and usage."

Algorithmic Market Operations (AMOs) represent Frax's most influential innovation, copied across DeFi protocols. AMOs are smart contracts managing collateral and generating revenue through autonomous monetary policy operations. Examples include the Curve AMO managing $1.3B+ in FRAX3CRV pools (99.9% protocol-owned), generating $75M+ profits since October 2021, and the Collateral Investor AMO deploying idle USDC to Aave, Compound, and Yearn, generating $63.4M profits. These create what Messari described as "DeFi 2.0 stablecoin theory"—targeting exchange rates in open markets rather than passive collateral deposit/mint models. This shift from renting liquidity via emissions to owning liquidity via AMOs fundamentally transformed DeFi sustainability models, influencing Olympus DAO, Tokemak, and numerous other protocols.
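In pseudocode, the core AMO discipline looks something like this (a minimal sketch: the class, thresholds, and actions are hypothetical illustrations of the pattern, not Frax's actual contracts):

```python
class CurveStyleAMO:
    """Toy AMO loop: deploy idle collateral while the collateral ratio (CR)
    sits safely above target, unwind when it slips below -- autonomous
    market operations that never compromise the peg's backing."""

    def __init__(self, target_cr: float = 1.00, buffer: float = 0.02):
        self.target_cr = target_cr
        self.buffer = buffer

    def rebalance(self, current_cr: float, idle_collateral: float) -> str:
        if current_cr > self.target_cr + self.buffer:
            return f"deploy {idle_collateral:,.0f} USDC into the Curve pool (earn fees)"
        if current_cr < self.target_cr:
            return "withdraw LP position and recollateralize"
        return "hold"

amo = CurveStyleAMO()
print(amo.rebalance(current_cr=1.05, idle_collateral=1_000_000))
```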

Fraxtal's modular L2 architecture uses the Optimism stack for the execution environment while incorporating flexibility for data availability, settlement, and consensus layer choices. The strategic incorporation of zero-knowledge technology enables aggregating validity proofs across multiple chains, with Kazemian envisioning Fraxtal as a "central point of reference for the state of connected chains, enabling applications built on any participating chain to function atomically across the entire universe." This interoperability vision extends beyond Ethereum to Cosmos, Solana, Celestia, and Near—positioning Fraxtal as a universal settlement layer rather than siloed app-chain.

FrxGov (Frax Governance 2.0) deployed in 2024 implements a dual-governor contract system: Governor Alpha (GovAlpha) with high quorum for primary control, and Governor Omega (GovOmega) with lower quorum for quicker decisions. This enhanced decentralization by transitioning governance decisions fully on-chain while maintaining flexibility for urgent protocol adjustments. All major decisions flow through veFRAX (formerly veFXS) holders who control Gnosis Safes through Compound/OpenZeppelin Governor contracts.
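Schematically, proposal routing under the dual-governor design works like this (a sketch: the quorum shares and timelocks are placeholders, not the on-chain parameters):

```python
from dataclasses import dataclass

@dataclass
class Governor:
    name: str
    quorum: float       # share of veFRAX that must vote (placeholder)
    timelock_days: int  # delay before execution (placeholder)

GOV_ALPHA = Governor("GovAlpha", quorum=0.40, timelock_days=7)
GOV_OMEGA = Governor("GovOmega", quorum=0.10, timelock_days=2)

def route_proposal(is_critical: bool) -> Governor:
    """Critical changes (contract upgrades, treasury moves) require
    GovAlpha's high quorum; routine or urgent parameter tweaks clear
    faster through GovOmega."""
    return GOV_ALPHA if is_critical else GOV_OMEGA

print(route_proposal(is_critical=True).name)  # GovAlpha
```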

These technical innovations solve distinct problems: AIVM enables autonomous AI agents; frxETH v2 removes validator centralization while maximizing yields; BAMM eliminates oracle dependency and provides automatic risk management; AMOs achieve capital efficiency without sacrificing stability; Fraxtal provides sovereign infrastructure; FrxGov ensures decentralized control. Collectively, they demonstrate Frax's philosophy: "Rather than pondering theoretical new markets and writing whitepapers, Frax has been and always will be shipping live products and seizing markets before others know they even exist."

Ecosystem fit and broader DeFi implications

Frax occupies a unique position in the $252 billion stablecoin landscape, representing the third paradigm alongside centralized fiat-backed (USDC, USDT at ~80% dominance) and decentralized crypto-collateralized (DAI at 71% of decentralized market share). The fractional-algorithmic hybrid approach—now evolved to 100% collateralization with retained AMO infrastructure—demonstrates that stablecoins need not choose between extremes but can create dynamic systems adapting to market conditions.

Third-party analysis validates Frax's innovation. Messari's February 2022 report stated: "Frax is the first stablecoin protocol to implement design principles from both fully collateralized and fully algorithmic stablecoins to create new scalable, trustless, stable on-chain money." Coinmonks noted in September 2025: "Through its revolutionary AMO system, Frax created autonomous monetary policy tools that perform complex market operations while maintaining the peg... The protocol demonstrated that sometimes the best solution isn't choosing between extremes but creating dynamic systems that can adapt." Bankless described Frax's approach as quickly attracting "significant attention in the DeFi space and inspiring many related projects."

The DeFi Trinity concept positions Frax as the only protocol with complete vertical integration across essential financial primitives. Kazemian argues successful DeFi ecosystems require three components: stablecoins (liquid unit of account), AMMs/exchanges (liquidity provision), and lending markets (debt origination). MakerDAO has lending plus stablecoin but lacks a native AMM; Aave launched GHO stablecoin and will eventually need an AMM; Curve launched crvUSD and requires lending infrastructure. Frax alone possesses all three pieces through FRAX/frxUSD (stablecoin), Fraxswap (AMM with Time-Weighted Average Market Maker), and Fraxlend (permissionless lending), plus additional layers with frxETH (liquid staking), Fraxtal (L2 blockchain), and FXB (bonds). This completeness led to the description: "Frax is strategically adding new subprotocols and Frax assets but all the necessary building blocks are now in place."

Frax's positioning relative to industry trends reveals both alignment and strategic divergence. Major trends include regulatory clarity (GENIUS Act framework), institutional adoption (90% of financial institutions taking stablecoin action), real-world asset integration ($16T+ tokenization opportunity), yield-bearing stablecoins (PYUSD, sFRAX offering passive income), multi-chain future, and AI-crypto convergence. Frax aligns strongly on regulatory preparation (100% collateralization pre-GENIUS), institutional infrastructure building (BlackRock partnership), multi-chain strategy (Fraxtal plus cross-chain deployments), and AI integration (AIVM). However, it diverges on complexity versus simplicity trends, maintaining sophisticated AMO systems and governance mechanisms that create barriers for average users.

Critical perspectives identify genuine challenges. USDC dependency remains problematic—92% backing creates single-point-of-failure risk, as demonstrated during the March 2023 SVB crisis when Circle's $3.3B stuck in Silicon Valley Bank caused USDC depegging to trigger FRAX falling to $0.885. Governance concentration shows one wallet holding 33%+ of FXS supply in late 2024, creating centralization concerns despite DAO structure. Complexity barriers limit accessibility—understanding AMOs, dynamic collateralization ratios, and multi-token systems proves difficult for average users compared to straightforward USDC or even DAI. Competitive pressure intensifies as Aave, Curve, and traditional finance players enter stablecoin markets with significant resources and established user bases.

Comparative analysis reveals Frax's niche. Against USDC: USDC offers regulatory clarity, liquidity, simplicity, and institutional backing, but Frax provides superior capital efficiency, value accrual to token holders, innovation, and decentralized governance. Against DAI: DAI maximizes decentralization and censorship resistance with the longest track record, but Frax achieves higher capital efficiency through AMOs versus DAI's 160% overcollateralization, generates revenue through AMOs, and provides integrated DeFi stack. Against failed TerraUST: UST's pure algorithmic design with no collateral floor created death spiral vulnerability, while Frax's hybrid approach with collateral backing, dynamic collateralization ratio, and conservative evolution proved resilient during the LUNA collapse.

The philosophical implications extend beyond Frax. The protocol demonstrates decentralized finance requires pragmatic evolution over ideological purity—the willingness to shift from fractional to full collateralization when market conditions demanded it, while retaining sophisticated AMO infrastructure for capital efficiency. This "intelligent bridging" of traditional finance and DeFi challenges the false dichotomy that crypto must completely replace or completely integrate with TradFi. The concept of programmable money that automatically adjusts backing, deploys capital productively, maintains stability through market operations, and distributes value to stakeholders represents a fundamentally new financial primitive.

Frax's influence appears throughout DeFi's evolution. The AMO model inspired protocol-owned liquidity strategies across ecosystems. The recognition that stablecoins naturally converge on risk-free yield plus swap facility structures influenced how protocols design stability mechanisms. The demonstration that algorithmic and collateralized approaches could hybridize successfully showed binary choices weren't necessary. As Coinmonks concluded: "Frax's innovations—particularly AMOs and programmable monetary policy—extend beyond the protocol itself, influencing how the industry thinks about decentralized finance infrastructure and serving as a blueprint for future protocols seeking to balance efficiency, stability, and decentralization."

Sam Kazemian's recent public engagement

Sam Kazemian maintained exceptional visibility throughout 2024-2025 through diverse media channels, with appearances revealing evolution from technical protocol founder to policy influencer and industry thought leader. His most recent Bankless podcast "Ethereum's Biggest Mistake (and How to Fix It)" (early October 2025) demonstrated expanded focus beyond Frax, arguing Ethereum decoupled ETH the asset from Ethereum the technology, eroding ETH's valuation against Bitcoin. He contends that following EIP-1559 and Proof of Stake, ETH shifted from "digital commodity" to "discounted cash flow" asset based on burn revenues, making it function like equity rather than sovereign store of value. His proposed solution: rebuild internal social consensus around ETH as commodity-like asset with strong scarcity narrative (similar to Bitcoin's 21M cap) while maintaining Ethereum's open technical ethos.

The January 2025 Defiant podcast focused specifically on frxUSD and stablecoin futures, explaining redeemability through BlackRock and Superstate custodians, competitive yields through diversified strategies, and Frax's broader vision of building a digital economy anchored by the flagship stablecoin and Fraxtal. Chapter topics included founding story differentiation, decentralized stablecoin vision, frxUSD's "best of both worlds" design, future of stablecoins, yield strategies, real-world and on-chain usage, stablecoins as crypto gateway, and Frax's roadmap.

The Rollup podcast dialogue with Aave founder Stani Kulechov (mid-2025) provided comprehensive GENIUS Act discussion, with Kazemian stating: "I have actually been working hard to control my excitement, and the current situation makes me feel incredibly thrilled. I never expected the development of stablecoins to reach such heights today; the two most eye-catching industries globally right now are artificial intelligence and stablecoins." He explained how GENIUS Act breaks banking monopoly: "In the past, the issuance of the dollar has been monopolized by banks, and only chartered banks could issue dollars... However, through the Genius Act, although regulation has increased, it has actually broken this monopoly, extending the right [to issue stablecoins]."

Flywheel DeFi's extensive coverage captured multiple dimensions of Kazemian's thinking. In "Sam Kazemian Reveals Frax Plans for 2024 and Beyond" from the December 2023 third anniversary Twitter Spaces, he articulated: "The Frax vision is essentially to become the largest issuer of the most important assets in the 21st century." On PayPal's PYUSD: "Once they flip the switch, where payments denominated in dollars are actually PYUSD, moving between account to account, then I think people will wake up and really know that stablecoins have become a household name." The "7 New Things We Learned About Fraxtal" article revealed frxBTC plans aiming to be "biggest issuer—most widely used Bitcoin in DeFi," completely decentralized unlike WBTC using multi-computational threshold redemption systems.

The ETHDenver presentation "Why It's Stablecoins All The Way Down," delivered to a packed house with an overflow crowd, articulated stablecoin maximalism comprehensively. Kazemian demonstrated how USDC, stETH, frxETH, and even bridge-wrapped assets all converge on the same structure: a risk-free yield mechanism plus a swap facility with high liquidity. He boldly predicted that stablecoins failing to adopt this structure "will be unable to scale into the trillions" and will lose market share. The presentation positioned monetary premium—demand to hold stablecoins purely for usefulness without interest expectations—as the strongest measurement of success beyond brand or reputation.

Written interviews provided personal context. The Countere Magazine profile revealed Sam as Iranian-American UCLA graduate and former powerlifter (455lb squat, 385lb bench, 550lb deadlift) who started Frax mid-2019 with Travis Moore and Kedar Iyer. The founding story traces inspiration to Robert Sams' 2014 Seigniorage Shares whitepaper and Tether's partial backing revelation demonstrating stablecoins possessed monetary premium without 100% backing—leading to Frax's revolutionary fractional-algorithmic mechanism transparently measuring this premium. The Cointelegraph regulatory interview captured his philosophy: "You can't apply securities laws created in the 1930s, when our grandparents were children, to the era of decentralized finance and automated market makers."

Conference appearances included TOKEN2049 Singapore (October 1, 2025, 15-minute keynote on TON Stage), RESTAKING 2049 side-event (September 16, 2024, private invite-only event with EigenLayer, Curve, Puffer, Pendle, Lido), unStable Summit 2024 at ETHDenver (February 28, 2024, full-day technical conference alongside Coinbase Institutional, Centrifuge, Nic Carter), and ETHDenver proper (February 29-March 3, 2024, featured speaker).

Twitter Spaces like The Optimist's "Fraxtal Masterclass" (February 23, 2024) explored composability challenges in the modular world, advanced technologies including zk-Rollups, Flox mechanism launching March 13, 2024, and universal interoperability vision where "Fraxtal becomes a central point of reference for the state of connected chains, enabling applications built on any participating chain to function atomically across the entire 'universe.'"

Evolution of thinking across these appearances reveals distinct phases: 2020-2021 focused on algorithmic mechanisms and fractional collateralization innovation; 2022 post-UST collapse emphasized resilience and proper collateralization; 2023 shifted to 100% backing and frxETH expansion; 2024 centered on Fraxtal launch and regulatory compliance focus; 2025 emphasized GENIUS Act positioning, FraxNet banking interface, and L1 transition. Throughout, recurring themes persist: the DeFi Trinity concept (stablecoin + AMM + lending market), central bank analogies for Frax operations, stablecoin maximalism philosophy, regulatory pragmatism evolving from resistance to active policy shaping, and long-term vision of becoming "issuer of the 21st century's most important assets."

Strategic implications and future outlook

Sam Kazemian's vision for Frax Finance represents one of the most comprehensive and philosophically coherent projects in decentralized finance, evolving from algorithmic experimentation to potential creation of the first licensed DeFi stablecoin. The strategic transformation demonstrates pragmatic adaptation to regulatory reality while maintaining decentralized principles—a balance competitors struggle to achieve.

The post-GENIUS trajectory positions Frax across multiple competitive dimensions. Regulatory preparation through deep GENIUS Act drafting involvement creates first-mover advantages in compliance, enabling frxUSD to potentially secure licensed status ahead of competitors. Vertical integration—the only protocol combining stablecoin, liquid staking derivative, L2 blockchain, lending market, and DEX—provides sustainable competitive moats through network effects across products. Revenue generation of $40M+ annually flowing to veFXS holders creates tangible value accrual independent of speculative token dynamics. Technical innovation through FLOX mechanisms, BAMM, frxETH v2, and particularly AIVM positions Frax at the cutting edge of blockchain development. Real-world integration via BlackRock and Superstate custodianship for frxUSD bridges institutional finance with decentralized infrastructure more effectively than pure crypto-native or pure TradFi approaches.

Critical challenges remain substantial. USDC dependency at 92% backing creates systemic risk, as SVB crisis demonstrated when FRAX fell to $0.885 following USDC depeg. Diversifying collateral across multiple custodians (BlackRock, Superstate, WisdomTree, FinresPBC) mitigates but doesn't eliminate concentration risk. Complexity barriers limit mainstream adoption—understanding AMOs, dynamic collateralization, and multi-token systems proves difficult compared to straightforward USDC, potentially constraining Frax to sophisticated DeFi users rather than mass market. Governance concentration with 33%+ FXS in single wallet creates centralization concerns contradicting decentralization messaging. Competitive pressure intensifies as Aave launches GHO, Curve deploys crvUSD, and traditional finance players like PayPal (PYUSD) and potential bank-issued stablecoins enter the market with massive resources and regulatory clarity.

The $100 billion TVL target for Fraxtal by end of 2026 requires approximately 7,500x growth from the $13.2M launch TVL—an extraordinarily ambitious goal even in crypto's high-growth environment. Achieving this demands sustained traction across multiple dimensions: Fraxtal must attract significant dApp deployment beyond Frax's own products, L3 ecosystem must materialize with genuine usage rather than vanity metrics, frxUSD must gain substantial market share against USDT/USDC dominance, and institutional partnerships must convert from pilots to scaled deployment. While the technical infrastructure and regulatory positioning support this trajectory, execution risks remain high.

The AI integration through AIVM represents genuinely novel territory. Proof of Inference consensus using AI model validation of blockchain transactions has no precedent at scale. If successful, this positions Frax at the convergence of AI and crypto before competitors recognize the opportunity—consistent with Kazemian's philosophy of "seizing markets before others know they even exist." However, technical challenges around AI determinism, model bias in consensus, and security vulnerabilities in AI-powered validation require resolution before production deployment. The partnership with IQ's Agent Tokenization Platform provides expertise, but the concept remains unproven.

Philosophical contribution extends beyond Frax's success or failure. The demonstration that algorithmic and collateralized approaches can hybridize successfully influenced industry design patterns—AMOs appear across DeFi protocols, protocol-owned liquidity strategies dominate over mercenary liquidity mining, and recognition that stablecoins converge on risk-free yield plus swap facility structures shapes new protocol designs. The willingness to evolve from fractional to full collateralization when market conditions demanded established pragmatism over ideology as necessary for financial infrastructure—a lesson the Terra ecosystem catastrophically failed to learn.

Most likely outcome: Frax becomes the leading sophisticated DeFi stablecoin infrastructure provider, serving a valuable but niche market segment of advanced users prioritizing capital efficiency, decentralization, and innovation over simplicity. Total volumes unlikely to challenge USDT/USDC dominance (which benefits from network effects, regulatory clarity, and institutional backing), but Frax maintains technological leadership and influence on industry design patterns. The protocol's value derives less from market share than from infrastructure provision—becoming the rails on which other protocols build, similar to how Chainlink provides oracle infrastructure across ecosystems regardless of native LINK adoption.

The "Stablecoin Singularity" vision—unifying stablecoin, infrastructure, AI, and governance into comprehensive financial operating system—charts an ambitious but coherent path. Success depends on execution across multiple complex dimensions: regulatory navigation, technical delivery (especially AIVM), institutional partnership conversion, user experience simplification, and sustained innovation velocity. Frax possesses the technical foundation, regulatory positioning, and philosophical clarity to achieve meaningful portions of this vision. Whether it scales to $100B TVL and becomes the "decentralized central bank of crypto" or instead establishes a sustainable $10-20B ecosystem serving sophisticated DeFi users remains to be seen. Either outcome represents significant achievement in an industry where most stablecoin experiments failed catastrophically.

The ultimate insight: Sam Kazemian's vision demonstrates that decentralized finance's future lies not in replacing traditional finance but intelligently bridging both worlds—combining institutional-grade collateral and regulatory compliance with on-chain transparency, decentralized governance, and novel mechanisms like autonomous monetary policy through AMOs and AI-powered consensus through AIVM. This synthesis, rather than binary opposition, represents the pragmatic path toward sustainable decentralized financial infrastructure for mainstream adoption.

MCP in the Web3 Ecosystem: A Comprehensive Review

· 49 min read
Dora Noda
Software Engineer

1. Definition and Origin of MCP in Web3 Context

The Model Context Protocol (MCP) is an open standard that connects AI assistants (like large language models) to external data sources, tools, and environments. Often described as a "USB-C port for AI" due to its universal plug-and-play nature, MCP was developed by Anthropic and first introduced in late November 2024. It emerged as a solution to break AI models out of isolation by securely bridging them with the “systems where data lives” – from databases and APIs to development environments and blockchains.

Originally an experimental side project at Anthropic, MCP quickly gained traction. Within weeks of its November 2024 release, open-source reference implementations appeared, and by mid-2025 it had become the de facto standard for agentic AI integration, with leading AI labs (OpenAI, Google DeepMind, Meta AI) adopting it natively. This rapid uptake was especially notable in the Web3 community. Blockchain developers saw MCP as a way to infuse AI capabilities into decentralized applications, leading to a proliferation of community-built MCP connectors for on-chain data and services. In fact, some analysts argue MCP may fulfill Web3’s original vision of a decentralized, user-centric internet in a more practical way than blockchain alone, by using natural language interfaces to empower users.

In summary, MCP is not a blockchain or token, but an open protocol born in the AI world that has rapidly been embraced within the Web3 ecosystem as a bridge between AI agents and decentralized data sources. Anthropic open-sourced the standard (with an initial GitHub spec and SDKs) and cultivated an open community around it. This community-driven approach set the stage for MCP’s integration into Web3, where it is now viewed as foundational infrastructure for AI-enabled decentralized applications.

2. Technical Architecture and Core Protocols

MCP operates on a lightweight client–server architecture with three principal roles:

  • MCP Host: The AI application or agent itself, which orchestrates requests. This could be a chatbot (Claude, ChatGPT) or an AI-powered app that needs external data. The host initiates interactions, asking for tools or information via MCP.
  • MCP Client: A connector component that the host uses to communicate with servers. The client maintains the connection, manages request/response messaging, and can handle multiple servers in parallel. For example, a developer tool like Cursor or VS Code’s agent mode can act as an MCP client bridging the local AI environment with various MCP servers.
  • MCP Server: A service that exposes some contextual data or functionality to the AI. Servers provide tools, resources, or prompts that the AI can use. In practice, an MCP server could interface with a database, a cloud app, or a blockchain node, and present a standardized set of operations to the AI. Each client-server pair communicates over its own channel, so an AI agent can tap multiple servers concurrently for different needs.

Core Primitives: MCP defines a set of standard message types and primitives that structure the AI-tool interaction. The three fundamental primitives are:

  • Tools: Discrete operations or functions the AI can invoke on a server. For instance, a “searchDocuments” tool or an “eth_call” tool. Tools encapsulate actions like querying an API, performing a calculation, or calling a smart contract function. The MCP client can request a list of available tools from a server and call them as needed.
  • Resources: Data endpoints that the AI can read from (or sometimes write to) via the server. These could be files, database entries, blockchain state (blocks, transactions), or any contextual data. The AI can list resources and retrieve their content through standard MCP messages (e.g. ListResources and ReadResource requests).
  • Prompts: Structured prompt templates or instructions that servers can provide to guide the AI’s reasoning. For example, a server might supply a formatting template or a pre-defined query prompt. The AI can request a list of prompt templates and use them to maintain consistency in how it interacts with that server.
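To make these three primitives concrete, here is a minimal MCP server sketch using the official Python SDK's FastMCP helper (assuming the `mcp` package is installed; the blockchain lookups are stubbed, and `query_node` is a hypothetical helper):

```python
# pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("eth-demo")

def query_node(method: str, *params) -> str:
    """Hypothetical helper -- wire this to a real JSON-RPC endpoint."""
    return f"(stubbed result of {method})"

@mcp.tool()
def eth_get_balance(address: str) -> str:
    """Tool: a discrete operation the AI can invoke on demand."""
    return query_node("eth_getBalance", address)

@mcp.resource("chain://ethereum/latest-block")
def latest_block() -> str:
    """Resource: contextual data the AI can read."""
    return query_node("eth_blockNumber")

@mcp.prompt()
def explain_tx(tx_hash: str) -> str:
    """Prompt: a reusable template guiding the AI's reasoning."""
    return f"Explain transaction {tx_hash} to a non-technical user."

if __name__ == "__main__":
    mcp.run()  # serves the host/client over stdio by default
```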

Under the hood, MCP communications are typically JSON-based and follow a request-response pattern similar to RPC (Remote Procedure Call). The protocol’s specification defines messages like InitializeRequest, ListTools, CallTool, ListResources, etc., which ensure that any MCP-compliant client can talk to any MCP server in a uniform way. This standardization is what allows an AI agent to discover what it can do: upon connecting to a new server, it can inquire “what tools and data do you offer?” and then dynamically decide how to use them.
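On the wire, a tool invocation is an ordinary JSON-RPC 2.0 exchange. The shapes below are simplified from the spec (real responses carry richer content blocks and error handling):

```python
# A client invoking a tool (JSON-RPC 2.0, shown as Python dicts):
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",  # the spec method behind "CallTool"
    "params": {
        "name": "eth_get_balance",
        "arguments": {"address": "0xAbC..."},  # illustrative address
    },
}

# ...and the shape of the server's reply:
call_tool_response = {
    "jsonrpc": "2.0",
    "id": 42,
    "result": {"content": [{"type": "text", "text": "12.4 ETH"}]},
}
```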

Security and Execution Model: MCP was designed with secure, controlled interactions in mind. The AI model itself doesn’t execute arbitrary code; it sends high-level intents (via the client) to the server, which then performs the actual operation (e.g., fetching data or calling an API) and returns results. This separation means sensitive actions (like blockchain transactions or database writes) can be sandboxed or require explicit user approval. For example, there are messages like Ping (to keep connections alive) and even a CreateMessageRequest which allows an MCP server to ask the client’s AI to generate a sub-response, typically gated by user confirmation. Features like authentication, access control, and audit logging are being actively developed to ensure MCP can be used safely in enterprise and decentralized environments (more on this in the Roadmap section).
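That separation makes a human-in-the-loop gate straightforward to implement on the client side, as in this sketch (the tool names and policy are hypothetical; real clients surface this as a UI confirmation dialog):

```python
STATE_CHANGING = {"send_transaction", "write_record", "sign_message"}  # hypothetical policy

def guarded_call(tool_name: str, args: dict, execute):
    """Client-side safety gate: read-only tools pass straight through, but
    anything that mutates state (a transaction, a database write) requires
    explicit user approval before the MCP server is asked to perform it."""
    if tool_name in STATE_CHANGING:
        answer = input(f"Allow the AI to run {tool_name}({args})? [y/N] ")
        if answer.strip().lower() != "y":
            return "denied by user"
    return execute(tool_name, args)
```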

In summary, MCP’s architecture relies on a standardized message protocol (with JSON-RPC style calls) that connects AI agents (hosts) to a flexible array of servers providing tools, data, and actions. This open architecture is model-agnostic and platform-agnostic – any AI agent can use MCP to talk to any resource, and any developer can create a new MCP server for a data source without needing to modify the AI’s core code. This plug-and-play extensibility is what makes MCP powerful in Web3: one can build servers for blockchain nodes, smart contracts, wallets, or oracles and have AI agents seamlessly integrate those capabilities alongside web2 APIs.

3. Use Cases and Applications of MCP in Web3

MCP unlocks a wide range of use cases by enabling AI-driven applications to access blockchain data and execute on-chain or off-chain actions in a secure, high-level way. Here are some key applications and problems it helps solve in the Web3 domain:

  • On-Chain Data Analysis and Querying: AI agents can query live blockchain state in real-time to provide insights or trigger actions. For example, an MCP server connected to an Ethereum node allows an AI to fetch account balances, read smart contract storage, trace transactions, or retrieve event logs on demand. This turns a chatbot or coding assistant into a blockchain explorer. Developers can ask an AI assistant questions like “What’s the current liquidity in Uniswap pool X?” or “Simulate this Ethereum transaction’s gas cost,” and the AI will use MCP tools to call an RPC node and get the answer from the live chain. This is far more powerful than relying on the AI’s training data or static snapshots. (A minimal server along these lines is sketched just after this list.)
  • Automated DeFi Portfolio Management: By combining data access and action tools, AI agents can manage crypto portfolios or DeFi positions. For instance, an “AI Vault Optimizer” could monitor a user’s positions across yield farms and automatically suggest or execute rebalancing strategies based on real-time market conditions. Similarly, an AI could act as a DeFi portfolio manager, adjusting allocations between protocols when risk or rates change. MCP provides the standard interface for the AI to read on-chain metrics (prices, liquidity, collateral ratios) and then invoke tools to execute transactions (like moving funds or swapping assets) if permitted. This can help users maximize yield or manage risk 24/7 in a way that would be hard to do manually.
  • AI-Powered User Agents for Transactions: Think of a personal AI assistant that can handle blockchain interactions for a user. With MCP, such an agent can integrate with wallets and DApps to perform tasks via natural language commands. For example, a user could say, "AI, send 0.5 ETH from my wallet to Alice" or "Stake my tokens in the highest-APY pool." The AI, through MCP, would use a secure wallet server (holding the user’s private key) to create and sign the transaction, and a blockchain MCP server to broadcast it. This scenario turns complex command-line or Metamask interactions into a conversational experience. It’s crucial that secure wallet MCP servers are used here, enforcing permissions and confirmations, but the end result is streamlining on-chain transactions through AI assistance.
  • Developer Assistants and Smart Contract Debugging: Web3 developers can leverage MCP-based AI assistants that are context-aware of blockchain infrastructure. For example, Chainstack’s MCP servers for EVM and Solana give AI coding copilots deep visibility into the developer’s blockchain environment. A smart contract engineer using an AI assistant (in VS Code or an IDE) can have the AI fetch the current state of a contract on a testnet, run a simulation of a transaction, or check logs – all via MCP calls to local blockchain nodes. This helps in debugging and testing contracts. The AI is no longer coding “blindly”; it can actually verify how code behaves on-chain in real time. This use case solves a major pain point by allowing AI to continuously ingest up-to-date docs (via a documentation MCP server) and to query the blockchain directly, reducing hallucinations and making suggestions far more accurate.
  • Cross-Protocol Coordination: Because MCP is a unified interface, a single AI agent can coordinate across multiple protocols and services simultaneously – something extremely powerful in Web3’s interconnected landscape. Imagine an autonomous trading agent that monitors various DeFi platforms for arbitrage. Through MCP, one agent could concurrently interface with Aave’s lending markets, a LayerZero cross-chain bridge, and an MEV (Maximal Extractable Value) analytics service, all through the same coherent interface. The AI could, in one “thought process,” gather liquidity data from Ethereum (via an MCP server on an Ethereum node), get price info or oracle data (via another server), and even invoke bridging or swapping operations. Previously, such multi-platform coordination would require complex custom-coded bots, but MCP gives a generalizable way for an AI to navigate the entire Web3 ecosystem as if it were one big data/resource pool. This could enable advanced use cases like cross-chain yield optimization or automated liquidation protection, where an AI moves assets or collateral across chains proactively.
  • AI Advisory and Support Bots: Another category is user-facing advisors in crypto applications. For instance, a DeFi help chatbot integrated into a platform like Uniswap or Compound could use MCP to pull in real-time info for the user. If a user asks, “What’s the best way to hedge my position?”, the AI can fetch current rates, volatility data, and the user’s portfolio details via MCP, then give a context-aware answer. Platforms are exploring AI-powered assistants embedded in wallets or dApps that can guide users through complex transactions, explain risks, and even execute sequences of steps with approval. These AI agents effectively sit on top of multiple Web3 services (DEXes, lending pools, insurance protocols), using MCP to query and command them as needed, thereby simplifying the user experience.
  • Beyond Web3 – Multi-Domain Workflows: Although our focus is Web3, it's worth noting MCP’s use cases extend to any domain where AI needs external data. It’s already being used to connect AI to things like Google Drive, Slack, GitHub, Figma, and more. In practice, a single AI agent could straddle Web3 and Web2: e.g., analyzing an Excel financial model from Google Drive, then suggesting on-chain trades based on that analysis, all in one workflow. MCP’s flexibility allows cross-domain automation (e.g., "schedule my meeting if my DAO vote passes, and email the results") that blends blockchain actions with everyday tools.
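To ground the first use case above, here is a minimal sketch of a blockchain MCP server exposing live-chain lookups with web3.py (assuming the `mcp` and `web3` packages; the RPC URL is a placeholder):

```python
# pip install mcp web3
from mcp.server.fastmcp import FastMCP
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://eth.example-rpc.com"))  # placeholder URL
mcp = FastMCP("ethereum-explorer")

@mcp.tool()
def get_balance(address: str) -> str:
    """Live ETH balance lookup, so an AI assistant can answer questions
    like 'how much ETH does this wallet hold right now?'"""
    wei = w3.eth.get_balance(Web3.to_checksum_address(address))
    return f"{w3.from_wei(wei, 'ether')} ETH"

@mcp.tool()
def get_gas_price() -> str:
    """Current gas price in gwei, for fee estimation before a transaction."""
    return f"{w3.from_wei(w3.eth.gas_price, 'gwei')} gwei"

if __name__ == "__main__":
    mcp.run()
```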

Problems Solved: The overarching problem MCP addresses is the lack of a unified interface for AI to interact with live data and services. Before MCP, if you wanted an AI to use a new service, you had to hand-code a plugin or integration for that specific service’s API, often in an ad-hoc way. In Web3 this was especially cumbersome – every blockchain or protocol has its own interfaces, and no AI could hope to support them all. MCP solves this by standardizing how the AI describes what it wants (natural language mapped to tool calls) and how services describe what they offer. This drastically reduces integration work. For example, instead of writing a custom plugin for each DeFi protocol, a developer can write one MCP server for that protocol (essentially annotating its functions in natural language). Any MCP-enabled AI (whether Claude, ChatGPT, or open-source models) can then immediately utilize it. This makes AI extensible in a plug-and-play fashion, much like how adding a new device via a universal port is easier than installing a new interface card.

In sum, MCP in Web3 enables AI agents to become first-class citizens of the blockchain world – querying, analyzing, and even transacting across decentralized systems, all through safe, standardized channels. This opens the door to more autonomous dApps, smarter user agents, and seamless integration of on-chain and off-chain intelligence.

4. Tokenomics and Governance Model

Unlike typical Web3 protocols, MCP does not have a native token or cryptocurrency. It is not a blockchain or a decentralized network on its own, but rather an open protocol specification (more akin to HTTP or JSON-RPC in spirit). Thus, there is no built-in tokenomics – no token issuance, staking, or fee model inherent to using MCP. AI applications and servers communicate via MCP without any cryptocurrency involved; for instance, an AI calling a blockchain via MCP might pay gas fees for the blockchain transaction, but MCP itself adds no extra token fee. This design reflects MCP’s origin in the AI community: it was introduced as a technical standard to improve AI-tool interactions, not as a tokenized project.

Governance of MCP is carried out in an open-source, community-driven fashion. After releasing MCP as an open standard, Anthropic signaled a commitment to collaborative development. A broad steering committee and working groups have formed to shepherd the protocol’s evolution. Notably, by mid-2025, major stakeholders like Microsoft and GitHub joined the MCP steering committee alongside Anthropic. This was announced at Microsoft Build 2025, indicating a coalition of industry players guiding MCP’s roadmap and standards decisions. The committee and maintainers work via an open governance process: proposals to change or extend MCP are typically discussed publicly (e.g. via GitHub issues and “SEP” – Standard Enhancement Proposal – guidelines). There is also an MCP Registry working group (with maintainers from companies like Block, PulseMCP, GitHub, and Anthropic) which exemplifies the multi-party governance. In early 2025, contributors from at least 9 different organizations collaborated to build a unified MCP server registry for discovery, demonstrating how development is decentralized across community members rather than controlled by one entity.

Since there is no token, governance incentives rely on the common interests of stakeholders (AI companies, cloud providers, blockchain developers, etc.) to improve the protocol for all. This is somewhat analogous to how W3C or IETF standards are governed, but with a faster-moving GitHub-centric process. For example, Microsoft and Anthropic worked together to design an improved authorization spec for MCP (integrating things like OAuth and single sign-on), and GitHub collaborated on the official MCP Registry service for listing available servers. These enhancements were contributed back to the MCP spec for everyone’s benefit.

It’s worth noting that while MCP itself is not tokenized, there are forward-looking ideas about layering economic incentives and decentralization on top of MCP. Some researchers and thought leaders in Web3 foresee the emergence of “MCP networks” – essentially decentralized networks of MCP servers and agents that use blockchain-like mechanisms for discovery, trust, and rewards. In such a scenario, one could imagine a token being used to reward those who run high-quality MCP servers (similar to how miners or node operators are incentivized). Capabilities like reputation ratings, verifiable computation, and node discovery could be facilitated by smart contracts or a blockchain, with a token driving honest behavior. This is still conceptual, but projects like MIT’s Namda (discussed later) are experimenting with token-based incentive mechanisms for networks of AI agents using MCP. If these ideas mature, MCP might intersect with on-chain tokenomics more directly, but as of 2025 the core MCP standard remains token-free.

In summary, MCP’s “governance model” is that of an open technology standard: collaboratively maintained by a community and a steering committee of experts, with no on-chain governance token. Decisions are guided by technical merit and broad consensus rather than coin-weighted voting. This distinguishes MCP from many Web3 protocols – it aims to fulfill Web3’s ideals (decentralization, interoperability, user empowerment) through open software and standards, not through a proprietary blockchain or token. In the words of one analysis, “the promise of Web3... can finally be realized not through blockchain and cryptocurrency, but through natural language and AI agents”, positioning MCP as a key enabler of that vision. That said, as MCP networks grow, we may see hybrid models where blockchain-based governance or incentive mechanisms augment the ecosystem – a space to watch closely.

5. Community and Ecosystem

The MCP ecosystem has grown explosively in a short time, spanning AI developers, open-source contributors, Web3 engineers, and major tech companies. It’s a vibrant community effort, with key contributors and partnerships including:

  • Anthropic: As the creator, Anthropic seeded the ecosystem by open-sourcing the MCP spec and several reference servers (for Google Drive, Slack, GitHub, etc.). Anthropic continues to lead development (for example, staff like Theodora Chu serve as MCP product managers, and Anthropic’s team contributes heavily to spec updates and community support). Anthropic’s openness attracted others to build on MCP rather than see it as a single-company tool.

  • Early Adopters (Block, Apollo, Zed, Replit, Codeium, Sourcegraph): In the first months after release, a wave of early adopters implemented MCP in their products. Block (formerly Square) integrated MCP to explore AI agentic systems in fintech – Block’s CTO praised MCP as an open bridge connecting AI to real-world applications. Apollo (likely Apollo GraphQL) also integrated MCP to allow AI access to internal data. Developer tool companies like Zed (code editor), Replit (cloud IDE), Codeium (AI coding assistant), and Sourcegraph (code search) each worked to add MCP support. For instance, Sourcegraph uses MCP so an AI coding assistant can retrieve relevant code from a repository in response to a question, and Replit’s IDE agents can pull in project-specific context. These early adopters gave MCP credibility and visibility.

  • Big Tech Endorsement – OpenAI, Microsoft, Google: In a notable turn, companies that are otherwise competitors aligned on MCP. OpenAI’s CEO Sam Altman publicly announced in March 2025 that OpenAI would add MCP support across its products (including ChatGPT’s desktop app), saying “People love MCP and we are excited to add support across our products”. This meant OpenAI’s Agent API and ChatGPT plugins would speak MCP, ensuring interoperability. Just weeks later, Google DeepMind’s CEO Demis Hassabis revealed that Google’s upcoming Gemini models and tools would support MCP, calling it a good protocol and an open standard for the “AI agentic era”. Microsoft not only joined the steering committee but partnered with Anthropic to build an official C# SDK for MCP to serve the enterprise developer community. Microsoft’s GitHub unit integrated MCP into GitHub Copilot (VS Code’s ‘Copilot Labs/Agents’ mode), enabling Copilot to use MCP servers for things like repository searching and running test cases. Additionally, Microsoft announced Windows 11 would expose certain OS functions (like file system access) as MCP servers so AI agents can interact with the operating system securely. The collaboration among OpenAI, Microsoft, Google, and Anthropic – all rallying around MCP – is extraordinary and underscores the community-over-competition ethos of this standard.

  • Web3 Developer Community: A number of blockchain developers and startups have embraced MCP. Several community-driven MCP servers have been created to serve blockchain use cases:

    • The team at Alchemy (a leading blockchain infrastructure provider) built an Alchemy MCP Server that offers on-demand blockchain analytics tools via MCP. This likely lets an AI get blockchain stats (like historical transactions, address activity) through Alchemy’s APIs using natural language.
    • Contributors developed a Bitcoin & Lightning Network MCP Server to interact with Bitcoin nodes and the Lightning payment network, enabling AI agents to read Bitcoin block data or even create Lightning invoices via standard tools.
    • The crypto media and education group Bankless created an Onchain MCP Server focused on Web3 financial interactions, possibly providing an interface to DeFi protocols (sending transactions, querying DeFi positions, etc.) for AI assistants.
    • Projects like Rollup.codes (a knowledge base for Ethereum Layer 2s) made an MCP server for rollup ecosystem info, so an AI can answer technical questions about rollups by querying this server.
    • Chainstack, a blockchain node provider, launched a suite of MCP servers (covered earlier) for documentation, EVM chain data, and Solana, explicitly marketing it as “putting your AI on blockchain steroids” for Web3 builders.

    Additionally, Web3-focused communities have sprung up around MCP. For example, PulseMCP and Goose are community initiatives referenced as helping build the MCP registry. We’re also seeing cross-pollination with AI agent frameworks: the LangChain community integrated adapters so that all MCP servers can be used as tools in LangChain-powered agents, and open-source AI platforms like Hugging Face TGI (text-generation-inference) are exploring MCP compatibility. The result is a rich ecosystem where new MCP servers are announced almost daily, serving everything from databases to IoT devices.

  • Scale of Adoption: The traction can be quantified to some extent. By February 2025 – barely three months after launch – over 1,000 MCP servers/connectors had been built by the community. This number has only grown, indicating thousands of integrations across industries. Mike Krieger (Anthropic’s Chief Product Officer) noted by spring 2025 that MCP had become a “thriving open standard with thousands of integrations and growing”. The official MCP Registry (launched in preview in Sept 2025) is cataloging publicly available servers, making it easier to discover tools; the registry’s open API allows anyone to search for, say, “Ethereum” or “Notion” and find relevant MCP connectors. This lowers the barrier for new entrants and further fuels growth.

  • Partnerships: We’ve touched on many implicit partnerships (Anthropic with Microsoft, etc.). To highlight a few more:

    • Anthropic & Slack: Anthropic partnered with Slack to integrate Claude with Slack’s data via MCP (Slack has an official MCP server, enabling AI to retrieve Slack messages or post alerts).
    • Cloud Providers: Amazon (AWS) and Google Cloud have worked with Anthropic to host Claude, and it’s likely they support MCP in those environments (e.g., AWS Bedrock might allow MCP connectors for enterprise data). While not explicitly in citations, these cloud partnerships are important for enterprise adoption.
    • Academic collaborations: The MIT and IBM research project Namda (discussed next) represents a partnership between academia and industry to push MCP’s limits in decentralized settings.
    • GitHub & VS Code: Partnership to enhance developer experience – e.g., VS Code’s team actively contributed to MCP (one of the registry maintainers is from VS Code team).
    • Numerous startups: Many AI startups (agent startups, workflow automation startups) are building on MCP instead of reinventing the wheel. This includes emerging Web3 AI startups looking to offer “AI as a DAO” or autonomous economic agents.

Overall, the MCP community is diverse and rapidly expanding. It includes core tech companies (for standards and base tooling), Web3 specialists (bringing blockchain knowledge and use cases), and independent developers (who often contribute connectors for their favorite apps or protocols). The ethos is collaborative. For example, security concerns about third-party MCP servers have prompted community discussions and contributions of best practices (e.g., Stacklok contributors working on security tooling for MCP servers). The community’s ability to iterate quickly (MCP saw several spec upgrades within months, adding features like streaming responses and better auth) is a testament to broad engagement.

In the Web3 ecosystem specifically, MCP has fostered a mini-ecosystem of “AI + Web3” projects. It’s not just a protocol to use; it’s catalyzing new ideas like AI-driven DAOs, on-chain governance aided by AI analysis, and cross-domain automation (like linking on-chain events to off-chain actions through AI). The presence of key Web3 figures – e.g., Zhivko Todorov of LimeChain stating “MCP represents the inevitable integration of AI and blockchain” – shows that blockchain veterans are actively championing it. Partnerships between AI and blockchain companies (such as the one between Anthropic and Block, or Microsoft’s Azure cloud making MCP easy to deploy alongside its blockchain services) hint at a future where AI agents and smart contracts work hand-in-hand.

One could say MCP has ignited the first genuine convergence of the AI developer community with the Web3 developer community. Hackathons and meetups now feature MCP tracks. As a concrete measure of ecosystem adoption: by mid-2025, OpenAI, Google, and Anthropic – collectively representing the majority of advanced AI models – all support MCP, and on the other side, leading blockchain infrastructure providers (Alchemy, Chainstack), crypto companies (Block, etc.), and decentralized projects are building MCP hooks. This two-sided network effect bodes well for MCP becoming a lasting standard.

6. Roadmap and Development Milestones

MCP’s development has been fast-paced. Here we outline the major milestones so far and the roadmap ahead as gleaned from official sources and community updates:

  • Late 2024 – Initial Release: On Nov 25, 2024, Anthropic officially announced MCP and open-sourced the specification and initial SDKs. Alongside the spec, they released a handful of MCP server implementations for common tools (Google Drive, Slack, GitHub, etc.) and added support in the Claude AI assistant (Claude Desktop app) to connect to local MCP servers. This marked the 1.0 launch of MCP. Early proof-of-concept integrations at Anthropic showed how Claude could use MCP to read files or query a SQL database in natural language, validating the concept.
  • Q1 2025 – Rapid Adoption and Iteration: In the first few months of 2025, MCP saw widespread industry adoption. By March 2025, OpenAI and other AI providers announced support (as described above). This period also saw spec evolution: Anthropic updated MCP to include streaming capabilities (allowing large results or continuous data streams to be sent incrementally). This update was noted in April 2025 with the C# SDK news, indicating MCP now supported features like chunked responses or real-time feed integration. The community also built reference implementations in various languages (Python, JavaScript, etc.) beyond Anthropic’s SDK, ensuring polyglot support.
  • Q2 2025 – Ecosystem Tooling and Governance: In May 2025, with Microsoft and GitHub joining the effort, there was a push for formalizing governance and enhancing security. At Build 2025, Microsoft unveiled plans for Windows 11 MCP integration and detailed a collaboration to improve authorization flows in MCP. Around the same time, the idea of an MCP Registry was introduced to index available servers (the initial brainstorming started in March 2025 according to the registry blog). The “standards track” process (SEP – Standard Enhancement Proposals) was established on GitHub, similar to Ethereum’s EIPs or Python’s PEPs, to manage contributions in an orderly way. Community calls and working groups (for security, registry, SDKs) started convening.
  • Mid 2025 – Feature Expansion: By mid-2025, the roadmap prioritized several key improvements:
    • Asynchronous and Long-Running Task Support: Plans to allow MCP to handle long operations without blocking the connection. For example, if an AI triggers a cloud job that takes minutes, the MCP protocol would support async responses or reconnection to fetch results.
    • Authentication & Fine-Grained Security: Developing fine-grained authorization mechanisms for sensitive actions. This includes possibly integrating OAuth flows, API keys, and enterprise SSO into MCP servers so that AI access can be safely managed. By mid-2025, guides and best practices for MCP security were in progress, given the security risks of allowing AI to invoke powerful tools. The goal is that, for instance, if an AI is to access a user’s private database via MCP, it should follow a secure authorization flow (with user consent) rather than just an open endpoint.
    • Validation and Compliance Testing: Recognizing the need for reliability, the community prioritized building compliance test suites and reference implementations. By ensuring all MCP clients/servers adhere to the spec (through automated testing), they aimed to prevent fragmentation. A reference server (likely an example with best practices for remote deployment and auth) was on the roadmap, as was a reference client application demonstrating full MCP usage with an AI.
    • Multimodality Support: Extending MCP beyond text to support modalities like image, audio, video data in the context. For example, an AI might request an image from an MCP server (say, a design asset or a diagram) or output an image. The spec discussion included adding support for streaming and chunked messages to handle large multimedia content interactively. Early work on “MCP Streaming” was already underway (to support things like live audio feeds or continuous sensor data to AI).
    • Central Registry & Discovery: The plan to implement a central MCP Registry service for server discovery was executed in mid-2025. By September 2025, the official MCP Registry was launched in preview. This registry provides a single source of truth for publicly available MCP servers, allowing clients to find servers by name, category, or capabilities. It’s essentially like an app store (but open) for AI tools. The design allows for public registries (a global index) and private ones (enterprise-specific), all interoperable via a shared API. The Registry also introduced a moderation mechanism to flag or delist malicious servers, with a community moderation model to maintain quality.
  • Late 2025 and Beyond – Toward Decentralized MCP Networks: While not “official” roadmap items yet, the trajectory points toward more decentralization and Web3 synergy:
    • Researchers are actively exploring how to add decentralized discovery, reputation, and incentive layers to MCP. The concept of an MCP Network (or “marketplace of MCP endpoints”) is being incubated. This might involve smart contract-based registries (so no single point of failure for server listings), reputation systems where servers/clients have on-chain identities and stake for good behavior, and possibly token rewards for running reliable MCP nodes.
    • Project Namda at MIT, which started in 2024, is a concrete step in this direction. By 2025, Namda had built a prototype distributed agent framework on MCP’s foundations, including features like dynamic node discovery, load balancing across agent clusters, and a decentralized registry using blockchain techniques. They even have experimental token-based incentives and provenance tracking for multi-agent collaborations. Milestones from Namda show that it’s feasible to have a network of MCP agents running across many machines with trustless coordination. If Namda’s concepts are adopted, we might see MCP evolve to incorporate some of these ideas (possibly through optional extensions or separate protocols layered on top).
    • Enterprise Hardening: On the enterprise side, by late 2025 we expect MCP to be integrated into major enterprise software offerings (Microsoft’s inclusion in Windows and Azure is one example). The roadmap includes enterprise-friendly features like SSO integration for MCP servers and robust access controls. The general availability of the MCP Registry and toolkits for deploying MCP at scale (e.g., within a corporate network) is likely by end of 2025.
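
To ground the security items above, here is a minimal sketch of an MCP server built with the official Python SDK (the `mcp` package and its FastMCP helper). The wallet-flavored tool, the static token, and the environment variable are illustrative assumptions only – production servers would use the OAuth-based authorization flows the roadmap describes rather than a shared secret.

```python
import os
from mcp.server.fastmcp import FastMCP

# Minimal MCP server exposing one tool (pip install "mcp[cli]").
mcp = FastMCP("wallet-demo")

# Hypothetical shared secret, for illustration only; the roadmap favors
# proper OAuth flows over static tokens for sensitive actions.
API_TOKEN = os.environ.get("DEMO_MCP_TOKEN", "")

@mcp.tool()
def get_balance(address: str, token: str) -> str:
    """Return a (mocked) balance for a blockchain address."""
    if not API_TOKEN or token != API_TOKEN:
        # Fine-grained authorization: reject calls that lack proof of consent.
        raise ValueError("unauthorized: missing or invalid token")
    # A real server would query a node or an indexer here.
    return f"Balance of {address}: 1.23 ETH (mock data)"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```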

To recap some key development milestones so far (timeline format for clarity):

  • Nov 2024: MCP 1.0 released (Anthropic).
  • Dec 2024 – Jan 2025: Community builds first wave of MCP servers; Anthropic releases Claude Desktop with MCP support; small-scale pilots by Block, Apollo, etc.
  • Feb 2025: 1000+ community MCP connectors achieved; Anthropic hosts workshops (e.g., at AI summits) to drive developer education.
  • Mar 2025: OpenAI announces support (ChatGPT Agents SDK).
  • Apr 2025: Google DeepMind announces support (Gemini will support MCP); Microsoft releases preview of C# SDK.
  • May 2025: Steering Committee expanded (Microsoft/GitHub); Build 2025 demos (Windows MCP integration).
  • Jun 2025: Chainstack launches Web3 MCP servers (EVM/Solana) for public use.
  • Jul 2025: MCP spec version updates (streaming, authentication improvements); official Roadmap published on MCP site.
  • Sep 2025: MCP Registry (preview) launched; likely MCP hits general availability in more products (Claude for Work, etc.).
  • Late 2025 (projected): Registry v1.0 live; security best-practice guides released; possibly initial experiments with decentralized discovery (Namda results).

The vision forward is that MCP becomes as ubiquitous and invisible as HTTP or JSON – a common layer that many apps use under the hood. For Web3, the roadmap suggests deeper fusion: where not only will AI agents use Web3 (blockchains) as sources or sinks of information, but Web3 infrastructure itself might start to incorporate AI agents (via MCP) as part of its operation (for example, a DAO might run an MCP-compatible AI to manage certain tasks, or oracles might publish data via MCP endpoints). The roadmap’s emphasis on things like verifiability and authentication hints that down the line, trust-minimized MCP interactions could be a reality – imagine AI outputs that come with cryptographic proofs, or an on-chain log of what tools an AI invoked for audit purposes. These possibilities blur the line between AI and blockchain networks, and MCP is at the heart of that convergence.
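
The audit idea in that last sentence is easy to prototype: a hash-chained log of tool invocations whose head digest could be anchored on-chain. This is a speculative sketch of the concept, not an existing MCP feature, and the field names are made up.

```python
import hashlib
import json
import time

def log_invocation(prev_hash: str, tool: str, args: dict) -> dict:
    """Append one tool call to a hash-chained, tamper-evident audit log."""
    entry = {"ts": time.time(), "tool": tool, "args": args, "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Each entry commits to everything before it, so publishing only the
# latest digest (e.g., to a blockchain) makes the whole log auditable.
genesis = "0" * 64
e1 = log_invocation(genesis, "get_balance", {"address": "0xabc..."})
e2 = log_invocation(e1["hash"], "send_tx", {"to": "0xdef...", "amount": 0.1})
print(e2["hash"])  # the digest to anchor on-chain
```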

In conclusion, MCP’s development is highly dynamic. It has hit major early milestones (broad adoption and standardization within a year of launch) and continues to evolve rapidly with a clear roadmap emphasizing security, scalability, and discovery. The milestones achieved and planned ensure MCP will remain robust as it scales: addressing challenges like long-running tasks, secure permissions, and the sheer discoverability of thousands of tools. This forward momentum indicates that MCP is not a static spec but a growing standard, likely to incorporate more Web3-flavored features (decentralized governance of servers, incentive alignment) as those needs arise. The community is poised to adapt MCP to new use cases (multimodal AI, IoT, etc.), all while keeping an eye on the core promise: making AI more connected, context-aware, and user-empowering in the Web3 era.

7. Comparison with Similar Web3 Projects or Protocols

MCP’s unique blend of AI and connectivity means there aren’t many direct apples-to-apples equivalents, but it’s illuminating to compare it with other projects at the intersection of Web3 and AI or with analogous goals:

  • SingularityNET (AGIX) – Decentralized AI Marketplace: SingularityNET, launched in 2017 by Dr. Ben Goertzel and others, is a blockchain-based marketplace for AI services. It allows developers to monetize AI algorithms as services and users to consume those services, all facilitated by a token (AGIX) used for payments and governance. In essence, SingularityNET tries to decentralize the supply of AI models by hosting them on a network where anyone can call an AI service in exchange for tokens. This differs fundamentally from MCP. MCP does not host or monetize AI models; instead, it provides a standard interface for AI (wherever it’s running) to access data and tools. One could imagine using MCP to connect an AI to services listed on SingularityNET, but SingularityNET itself focuses on the economic layer (who provides an AI service and how they get paid). Another key difference is governance: SingularityNET has on-chain governance (via SingularityNET Enhancement Proposals (SNEPs) and AGIX token voting) to evolve its platform, whereas MCP’s governance is off-chain and collaborative, without a token. In summary, SingularityNET and MCP both strive for a more open AI ecosystem, but SingularityNET is about a tokenized network of AI algorithms, whereas MCP is about a protocol standard for AI-tool interoperability. They could complement each other: an AI on SingularityNET could use MCP to fetch external data it needs. But SingularityNET doesn’t attempt to standardize tool use; it uses blockchain to coordinate AI services, while MCP uses software standards to let AI work with any service.
  • Fetch.ai (FET) – Agent-Based Decentralized Platform: Fetch.ai is another project blending AI and blockchain. It launched its own proof-of-stake blockchain and a framework for building autonomous agents that perform tasks and interact on a decentralized network. In Fetch’s vision, millions of “software agents” (representing people, devices, or organizations) can negotiate and exchange value, using FET tokens for transactions. Fetch.ai provides an agent framework (uAgents) and infrastructure for discovery and communication between agents on its ledger. For example, a Fetch agent might help optimize traffic in a city by interacting with other agents for parking and transport, or manage a supply-chain workflow autonomously. How does this compare to MCP? Both deal with the concept of agents, but Fetch.ai’s agents are strongly tied to its blockchain and token economy – they live on the Fetch network and use on-chain logic. MCP agents (AI hosts) are model-driven (like an LLM) and not tied to any single network; MCP is content to operate over the internet or within a cloud setup, without requiring a blockchain. Fetch.ai tries to build a new decentralized AI economy from the ground up (with its own ledger for trust and transactions), whereas MCP is layer-agnostic – it piggybacks on existing networks (it can run over HTTPS, or even on top of a blockchain if needed) to enable AI interactions. One might say Fetch is about autonomous economic agents and MCP about smart tool-using agents. Interestingly, the two could intersect: an autonomous agent on Fetch.ai might use MCP to interface with off-chain resources or other blockchains, and conversely, one could use MCP to build multi-agent systems that span multiple blockchains (not just one). In practice, MCP has seen faster adoption because it didn’t require its own network – it works with Ethereum, Solana, Web2 APIs, etc., out of the box. Fetch.ai’s approach is more heavyweight, creating an entire ecosystem that participants must join (and acquire tokens) to use. In sum: Fetch is a platform with its own token and blockchain for AI agents, focusing on interoperability and economic exchange between agents, while MCP is a protocol that AI agents (in any environment) can use to plug into tools and data. Their goals overlap in enabling AI-driven automation, but they tackle different layers of the stack and have very different architectural philosophies (closed ecosystem vs. open standard).
  • Chainlink and Decentralized Oracles – Connecting Blockchains to Off-Chain Data: Chainlink is not an AI project, but it’s highly relevant as a Web3 protocol solving a complementary problem: how to connect blockchains with external data and computation. Chainlink is a decentralized network of nodes (oracles) that fetch, verify, and deliver off-chain data to smart contracts in a trust-minimized way. For example, Chainlink oracles provide price feeds to DeFi protocols or call external APIs on behalf of smart contracts via Chainlink Functions. Comparatively, MCP connects AI models to external data and tools (some of which might be blockchains). One could say Chainlink brings data into blockchains, while MCP brings data into AI. There is a conceptual parallel: both establish a bridge between otherwise siloed systems. Chainlink focuses on the reliability, decentralization, and security of data fed on-chain (solving the “oracle problem” of a single point of failure). MCP focuses on the flexibility and standardization of how AI can access data (solving the “integration problem” for AI agents). They operate in different domains (smart contracts vs. AI assistants), but one might compare MCP servers to oracles: an MCP server for price data might call the same APIs a Chainlink node does. The difference is the consumer – in MCP’s case, the consumer is an AI or user-facing assistant, not a deterministic smart contract. Also, MCP does not inherently provide the trust guarantees that Chainlink does (MCP servers can be centralized or community-run, with trust managed at the application level). However, as mentioned earlier, ideas to decentralize MCP networks could borrow from oracle networks – e.g., multiple MCP servers could be queried and their results cross-checked to ensure an AI isn’t fed bad data, similar to how multiple Chainlink nodes aggregate a price. In short, Chainlink is Web3 middleware for blockchains to consume external data, and MCP is AI middleware for models to consume external data (which can include blockchain data). They address analogous needs in different realms and could even complement each other: an AI using MCP might fetch a Chainlink-provided data feed as a reliable resource, and conversely an AI could serve as a source of analysis that a Chainlink oracle brings on-chain (though that latter scenario raises questions of verifiability).
  • ChatGPT Plugins / OpenAI Functions vs. MCP – AI Tool Integration Approaches: While not Web3 projects, a quick comparison is warranted because ChatGPT plugins and OpenAI’s function-calling feature also connect AI to external tools. ChatGPT plugins use an OpenAPI specification provided by a service, and the model can then call those APIs following the spec. The limitations are that it’s a closed ecosystem (OpenAI-approved plugins running on OpenAI’s servers) and each plugin is a siloed integration. OpenAI’s newer Agents SDK is closer to MCP in concept, letting developers define tools/functions an AI can use, but it was initially specific to OpenAI’s ecosystem. LangChain similarly provided a framework to give LLMs tools in code. MCP differs by offering an open, model-agnostic standard for this. As one analysis put it, LangChain created a developer-facing standard (a Python interface) for tools, whereas MCP creates a model-facing standard – an AI agent can discover and use any MCP-defined tool at runtime without custom code. (A minimal client-side sketch of this runtime discovery follows this list.) In practical terms, MCP’s ecosystem of servers grew larger and more diverse than the ChatGPT plugin store within months. And rather than each model having its own plugin format (OpenAI had theirs, others had different ones), many are coalescing around MCP. OpenAI itself signaled support for MCP, essentially aligning its function approach with the broader standard. So, comparing OpenAI plugins to MCP: plugins are a curated, centralized approach, while MCP is a decentralized, community-driven approach. In a Web3 mindset, MCP is more “open source and permissionless,” whereas proprietary plugin ecosystems are more closed. This makes MCP analogous to the ethos of Web3 even though it’s not a blockchain – it enables interoperability and user control (you could run your own MCP server for your data instead of giving it all to one AI provider). This comparison shows why many consider MCP to have more long-term potential: it’s not locked to one vendor or one model.
  • Project Namda and Decentralized Agent Frameworks: Namda deserves a separate note because it explicitly combines MCP with Web3 concepts. As described earlier, Namda (Networked Agent Modular Distributed Architecture) is an MIT/IBM initiative started in 2024 to build a scalable, distributed network of AI agents using MCP as the communication layer. It treats MCP as the messaging backbone (since MCP uses standard JSON-RPC-like messages, it fit well for inter-agent comms), and then adds layers for dynamic discovery, fault tolerance, and verifiable identities using blockchain-inspired techniques. Namda’s agents can be anywhere (cloud, edge devices, etc.), but a decentralized registry (somewhat like a DHT or blockchain) keeps track of them and their capabilities in a tamper-proof way. They even explore giving agents tokens to incentivize cooperation or resource sharing. In essence, Namda is an experiment in what a “Web3 version of MCP” might look like. It’s not a widely deployed project yet, but it’s one of the closest “similar protocols” in spirit. If we view Namda vs MCP: Namda uses MCP (so it’s not competing standards), but extends it with a protocol for networking and coordinating multiple agents in a trust-minimized manner. One could compare Namda to frameworks like Autonolas or Multi-Agent Systems (MAS) that the crypto community has seen, but those often lacked a powerful AI component or a common protocol. Namda + MCP together showcase how a decentralized agent network could function, with blockchain providing identity, reputation, and possibly token incentives, and MCP providing the agent communication and tool-use.
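
Here is what that runtime discovery looks like with the official MCP Python SDK. The client imports and method names below are real SDK APIs; the server command and the tool name are assumptions carried over from the earlier server sketch.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch any MCP server as a subprocess; "server.py" stands in for
# whichever server the agent should use.
params = StdioServerParameters(command="python", args=["server.py"])

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Runtime discovery: the client learns the available tools
            # from the server itself, with no per-tool integration code.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)
            # Invoke a discovered tool by name (assumes the server from
            # the earlier sketch is the one running).
            result = await session.call_tool(
                "get_balance", {"address": "0xabc...", "token": "secret"}
            )
            print(result.content)

asyncio.run(main())
```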

In summary, MCP stands apart from most prior Web3 projects: it did not start as a crypto project at all, yet it rapidly intersects with Web3 because it solves complementary problems. Projects like SingularityNET and Fetch.ai aimed to decentralize AI compute or services using blockchain; MCP instead standardizes AI integration with services, which can enhance decentralization by avoiding platform lock-in. Oracle networks like Chainlink solved data delivery to blockchain; MCP solves data delivery to AI (including blockchain data). If Web3’s core ideals are decentralization, interoperability, and user empowerment, MCP is attacking the interoperability piece in the AI realm. It’s even influencing those older projects – for instance, there is nothing stopping SingularityNET from making its AI services available via MCP servers, or Fetch agents from using MCP to talk to external systems. We might well see a convergence where token-driven AI networks use MCP as their lingua franca, marrying the incentive structure of Web3 with the flexibility of MCP.

Finally, if we consider market perception: MCP is often touted as doing for AI what Web3 hoped to do for the internet – break silos and empower users. This has led some to nickname MCP informally as “Web3 for AI” (even when no blockchain is involved). However, it’s important to recognize MCP is a protocol standard, whereas most Web3 projects are full-stack platforms with economic layers. In comparisons, MCP usually comes out as a more lightweight, universal solution, while blockchain projects are heavier, specialized solutions. Depending on use case, they can complement rather than strictly compete. As the ecosystem matures, we might see MCP integrated into many Web3 projects as a module (much like how HTTP or JSON are ubiquitous), rather than as a rival project.

8. Public Perception, Market Traction, and Media Coverage

Public sentiment toward MCP has been overwhelmingly positive in both the AI and Web3 communities, at times bordering on euphoric. Many see it as a game-changer that arrived quietly but then took the industry by storm. Let’s break down the perception, traction, and notable media narratives:

Market Traction and Adoption Metrics: By mid-2025, MCP achieved a level of adoption rare for a new protocol. It’s backed by virtually all major AI model providers (Anthropic, OpenAI, Google, Meta) and supported by big tech infrastructure (Microsoft, GitHub, AWS, etc.), as detailed earlier. This alone signals to the market that MCP is likely here to stay (akin to how broad backing propelled TCP/IP and HTTP in the early internet days). On the Web3 side, the traction is evident in developer behavior: hackathons started featuring MCP projects, and many blockchain dev tools now cite MCP integration as a selling point. The stat of “1000+ connectors in a few months” and Mike Krieger’s “thousands of integrations” quote are often cited to illustrate how rapidly MCP caught on. This suggests strong network effects – the more tools available via MCP, the more useful it is, prompting further adoption (a positive feedback loop). VCs and analysts have noted that MCP achieved in under a year what earlier “AI interoperability” attempts failed to do over several years, largely thanks to timing (riding the wave of interest in AI agents) and being open source. In Web3 media, traction is often measured in developer mindshare and integration into projects, and MCP now scores high on both.

Public Perception in AI and Web3 Communities: MCP flew under the radar when first announced in late 2024. But by early 2025, as success stories emerged, perception shifted to excitement. AI practitioners saw MCP as the “missing puzzle piece” for making AI agents truly useful beyond toy examples. Web3 builders, for their part, saw it as a bridge to finally incorporate AI into dApps without throwing away decentralization – an AI can use on-chain data without needing a centralized oracle, for instance. Thought leaders have been singing its praises: Jesus Rodriguez (a prominent Web3 AI writer) wrote in CoinDesk that MCP may be “one of the most transformative protocols for the AI era and a great fit for Web3 architectures”. Rares Crisan, in a Notable Capital blog, argued that MCP could deliver on Web3’s promise where blockchain alone struggled, by making the internet more user-centric and natural to interact with. These narratives frame MCP as revolutionary yet practical – not just hype.

To be fair, not all commentary is glowing. Some AI developers on forums like Reddit have pointed out that MCP “doesn’t do everything” – it’s a communication protocol, not an out-of-the-box agent or reasoning engine. For instance, one Reddit discussion titled “MCP is a Dead-End Trap” argued that MCP by itself doesn’t manage agent cognition or guarantee quality; it still requires good agent design and safety controls. This view suggests MCP could be overhyped as a silver bullet. However, such criticisms are more about tempering expectations than rejecting MCP’s usefulness: they emphasize that MCP solves tool connectivity, but one must still build robust agent logic (MCP doesn’t magically create an intelligent agent; it equips one with tools). The consensus, even among cautious voices, is that MCP is a big step forward. Hugging Face’s community blog noted that while MCP isn’t a solve-it-all, it is a major enabler for integrated, context-aware AI, and developers are rallying around it for that reason.

Media Coverage: MCP has received significant coverage across both mainstream tech media and niche blockchain media:

  • TechCrunch has run multiple stories. It covered the initial concept (“Anthropic proposes a new way to connect data to AI chatbots”) around launch in 2024. In 2025, TechCrunch highlighted each big adoption moment: OpenAI’s support, Google’s embrace, Microsoft and GitHub’s involvement. These articles often emphasize the industry unity around MCP. For example, TechCrunch quoted Sam Altman’s endorsement and noted the rapid shift from rival standards to MCP. In doing so, it portrayed MCP as the emerging standard, much as no one wanted to be left out of the internet protocols in the ’90s. Such coverage in a prominent outlet signaled to the broader tech world that MCP is important and real, not just a fringe open-source project.
  • CoinDesk and other crypto publications latched onto the Web3 angle. CoinDesk’s opinion piece by Rodriguez (July 2025) is often cited; it painted a futuristic picture where every blockchain could be an MCP server and new MCP networks might run on blockchains. It connected MCP to concepts like decentralized identity, authentication, and verifiability – speaking the language of the blockchain audience and suggesting MCP could be the protocol that truly melds AI with decentralized frameworks. Cointelegraph, Bankless, and others have also discussed MCP in context of “AI agents & DeFi” and similar topics, usually optimistic about the possibilities (e.g., Bankless had a piece on using MCP to let an AI manage on-chain trades, and included a how-to for their own MCP server).
  • Notable VC Blogs / Analyst Reports: The Notable Capital blog post (July 2025) is an example of venture analysis drawing parallels between MCP and the evolution of web protocols. It essentially argues MCP could do for Web3 what HTTP did for Web1 – providing a new interface layer (natural language interface) that doesn’t replace underlying infrastructure but makes it usable. This kind of narrative is compelling and has been echoed in panels and podcasts. It positions MCP not as competing with blockchain, but as the next layer of abstraction that finally allows normal users (via AI) to harness blockchain and web services easily.
  • Developer Community Buzz: Outside formal articles, MCP’s rise can be gauged by its presence in developer discourse – conference talks, YouTube channels, newsletters. For instance, there have been popular blog posts like “MCP: The missing link for agentic AI?” on sites like Runtime.news, and newsletters (e.g., one by AI researcher Nathan Lambert) discussing practical experiments with MCP and how it compares to other tool-use frameworks. The general tone is curiosity and excitement: developers share demos of hooking up AI to their home automation or crypto wallet with just a few lines using MCP servers, something that felt sci-fi not long ago. This grassroots excitement is important because it shows MCP has mindshare beyond just corporate endorsements.
  • Enterprise Perspective: Media and analysts focusing on enterprise AI also note MCP as a key development. For example, The New Stack covered how Anthropic added support for remote MCP servers in Claude for enterprise use. The angle here is that enterprises can use MCP to connect their internal knowledge bases and systems to AI safely. This matters for Web3 too as many blockchain companies are enterprises themselves and can leverage MCP internally (for instance, a crypto exchange could use MCP to let an AI analyze internal transaction logs for fraud detection).

Notable Quotes and Reactions: A few are worth highlighting as encapsulating public perception:

  • “Much like HTTP revolutionized web communications, MCP provides a universal framework... replacing fragmented integrations with a single protocol.” – CoinDesk. This comparison to HTTP is powerful; it frames MCP as infrastructure-level innovation.
  • “MCP has [become a] thriving open standard with thousands of integrations and growing. LLMs are most useful when connecting to the data you already have...” – Mike Krieger (Anthropic). This is an official confirmation of both traction and the core value proposition, which has been widely shared on social media.
  • “The promise of Web3... can finally be realized... through natural language and AI agents. ...MCP is the closest thing we've seen to a real Web3 for the masses.” – Notable Capital. This bold statement resonates with those frustrated by the slow UX improvements in crypto; it suggests AI might crack the code of mainstream adoption by abstracting complexity.

Challenges and Skepticism: While enthusiasm is high, the media has also discussed challenges:

  • Security Concerns: Outlets like The New Stack and various security blogs have raised concerns that allowing AI to execute tools can be dangerous if not sandboxed. What if a malicious MCP server tried to get an AI to perform a harmful action? The LimeChain blog explicitly warns of “significant security risks” with community-developed MCP servers (e.g., a server that handles private keys must be extremely secure). These concerns have been echoed in discussions: essentially, MCP expands AI’s capabilities, but with power comes risk. The community’s response (guides, auth mechanisms) has been covered as well, generally reassuring readers that mitigations are being built. Still, any high-profile misuse of MCP (say, an AI triggering an unintended crypto transfer) would affect perception, so media remain watchful on this front.
  • Performance and Cost: Some analysts note that using AI agents with tools could be slower or more costly than directly calling an API (because the AI might need multiple back-and-forth steps to get what it needs). In high-frequency trading or on-chain execution contexts, that latency could be problematic. For now, these are seen as technical hurdles to optimize (through better agent design or streaming), rather than deal-breakers.
  • Hype management: As with any trending tech, there’s a bit of hype. A few voices caution not to declare MCP the solution to everything. For instance, the Hugging Face article asks “Is MCP a silver bullet?” and answers no – developers still need to handle context management, and MCP works best in combination with good prompting and memory strategies. Such balanced takes are healthy in the discourse.

Overall Media Sentiment: The narrative that emerges is largely hopeful and forward-looking:

  • MCP is seen as a practical tool delivering real improvements now (so not vaporware), which media underscore by citing working examples: Claude reading files, Copilot using MCP in VS Code, an AI completing a Solana transaction in a demo, etc.
  • It’s also portrayed as a strategic linchpin for the future of both AI and Web3. Media often conclude that MCP or things like it will be essential for “decentralized AI” or “Web4” or whatever term one uses for the next-gen web. There’s a sense that MCP opened a door, and now innovation is flowing through – whether it's Namda’s decentralized agents or enterprises connecting legacy systems to AI, many future storylines trace back to MCP’s introduction.

In the market, one could gauge traction by the formation of startups and funding around the MCP ecosystem. Indeed, there are reports of startups focused on “MCP marketplaces” or managed MCP platforms raising funding (Notable Capital’s coverage itself signals VC interest). We can expect media to start covering those as well – e.g., “Startup X uses MCP to let your AI manage your crypto portfolio – raises $Y million”.

Conclusion of Perception: By late 2025, MCP enjoys a reputation as a breakthrough enabling technology. It has strong advocacy from influential figures in both AI and crypto. The public narrative has evolved from “here’s a neat tool” to “this could be foundational for the next web”. Meanwhile, practical coverage confirms it’s working and being adopted, lending credibility. Provided the community continues addressing challenges (security, governance at scale) and no major disasters occur, MCP’s public image is likely to remain positive or even become iconic as “the protocol that made AI and Web3 play nice together.”

Media will likely keep a close eye on:

  • Success stories (e.g., if a major DAO implements an AI treasurer via MCP, or a government uses MCP for open data AI systems).
  • Any security incidents (to evaluate risk).
  • The evolution of MCP networks and whether any token or blockchain component officially enters the picture (which would be big news bridging AI and crypto even more tightly).

As of now, however, the coverage can be summed up by a line from CoinDesk: “The combination of Web3 and MCP might just be a new foundation for decentralized AI.” – a sentiment that captures both the promise and the excitement surrounding MCP in the public eye.

References:

  • Anthropic News: "Introducing the Model Context Protocol," Nov 2024
  • LimeChain Blog: "What is MCP and How Does It Apply to Blockchains?" May 2025
  • Chainstack Blog: "MCP for Web3 Builders: Solana, EVM and Documentation," June 2025
  • CoinDesk Op-Ed: "The Protocol of Agents: Web3’s MCP Potential," Jul 2025
  • Notable Capital: "Why MCP Represents the Real Web3 Opportunity," Jul 2025
  • TechCrunch: "OpenAI adopts Anthropic’s standard…," Mar 26, 2025
  • TechCrunch: "Google to embrace Anthropic’s standard…," Apr 9, 2025
  • TechCrunch: "GitHub, Microsoft embrace… (MCP steering committee)," May 19, 2025
  • Microsoft Dev Blog: "Official C# SDK for MCP," Apr 2025
  • Hugging Face Blog: "#14: What Is MCP, and Why Is Everyone Talking About It?" Mar 2025
  • Messari Research: "Fetch.ai Profile," 2023
  • Medium (Nu FinTimes): "Unveiling SingularityNET," Mar 2024

Google’s Agent Payments Protocol (AP2)

· 34 min read
Dora Noda
Software Engineer

Google’s Agent Payments Protocol (AP2) is a newly announced open standard designed to enable secure, trustworthy transactions initiated by AI agents on behalf of users. Developed in collaboration with over 60 payments and technology organizations (including major payment networks, banks, fintechs, and Web3 companies), AP2 establishes a common language for “agentic” payments – i.e. purchases and financial transactions that an autonomous agent (such as an AI assistant or LLM-based agent) can carry out for a user. AP2’s creation is driven by a fundamental shift: traditionally, online payment systems assumed a human is directly clicking “buy,” but the rise of AI agents acting on user instructions breaks this assumption. AP2 addresses the resulting challenges of authorization, authenticity, and accountability in AI-driven commerce, while remaining compatible with existing payment infrastructure. This report examines AP2’s technical architecture, purpose and use cases, integrations with AI agents and payment providers, security and compliance considerations, comparisons to existing protocols, implications for Web3/decentralized systems, and the industry adoption/roadmap.

Technical Architecture: How AP2 Works

At its core, AP2 introduces a cryptographically secure transaction framework built on verifiable digital credentials (VDCs) – essentially tamper-proof, signed data objects that serve as digital “contracts” of what the user has authorized. In AP2 terminology these contracts are called Mandates, and they form an auditable chain of evidence for each transaction. There are three primary types of mandates in the AP2 architecture:

  • Intent Mandate: Captures the user’s initial instructions or conditions for a purchase, especially for “human-not-present” scenarios (where the agent will act later without the user online). It defines the scope of authority the user gives the agent – for example, “Buy concert tickets if they drop below $200, up to 2 tickets”. This mandate is cryptographically signed upfront by the user and serves as verifiable proof of consent within specific limits.
  • Cart Mandate: Represents the final transaction details that the user has approved, used in “human-present” scenarios or at the moment of checkout. It includes the exact items or services, their price, and other particulars of the purchase. When the agent is ready to complete the transaction (e.g. after filling a shopping cart), the merchant first cryptographically signs the cart contents (guaranteeing the order details and price), and then the user (via their device or agent interface) signs off to create a Cart Mandate. This ensures what-you-see-is-what-you-pay, locking in the final order exactly as presented to the user.
  • Payment Mandate: A separate credential that is sent to the payment network (e.g. card network or bank) to signal that an AI agent is involved in the transaction. The Payment Mandate includes metadata such as whether the user was present or not during authorization and serves as a flag for risk management systems. By providing the acquiring and issuing banks with cryptographically verifiable evidence of user intent, this mandate helps them assess the context (for example, distinguishing an agent-initiated purchase from typical fraud) and manage compliance or liability accordingly.

All mandates are implemented as verifiable credentials signed by the relevant party’s keys (user, merchant, etc.), yielding a non-repudiable audit trail for every agent-led transaction. In practice, AP2 uses a role-based architecture to protect sensitive information – for instance, an agent might handle an Intent Mandate without ever seeing raw payment details, which are only revealed in a controlled way when needed, preserving privacy. The cryptographic chain of user intent → merchant commitment → payment authorization establishes trust among all parties that the transaction reflects the user’s true instructions and that both the agent and merchant adhered to those instructions.
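
To make the mandate concept tangible, here is a minimal sketch of signing and verifying an Intent Mandate with Ed25519 keys via Python’s `cryptography` library. The field names and DID-style identifier are illustrative assumptions – AP2’s actual schema lives in its spec and uses verifiable-credential formats – but the signature mechanics are the same in spirit.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative Intent Mandate payload (hypothetical field names).
intent_mandate = {
    "type": "IntentMandate",
    "user": "did:example:alice",
    "instruction": "Buy concert tickets if price < $200",
    "max_quantity": 2,
    "expires": "2025-12-31T00:00:00Z",
}

# The user's device signs the mandate, producing verifiable consent.
user_key = Ed25519PrivateKey.generate()
payload = json.dumps(intent_mandate, sort_keys=True).encode()
signature = user_key.sign(payload)

# Anyone holding the user's public key can check the mandate later;
# verify() raises InvalidSignature if a single byte was altered.
user_key.public_key().verify(signature, payload)
print("mandate verified: non-repudiable proof of user intent")
```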

Transaction Flow: To illustrate how AP2 works end-to-end, consider a simple purchase scenario with a human in the loop:

  1. User Request: The user asks their AI agent to purchase a particular item or service (e.g. “Order this pair of shoes in my size”).
  2. Cart Construction: The agent communicates with the merchant’s systems (using standard APIs or via an agent-to-agent interaction) to assemble a shopping cart for the specified item at a given price.
  3. Merchant Guarantee: Before presenting the cart to the user, the merchant’s side cryptographically signs the cart details (item, quantity, price, etc.). This step creates a merchant-signed offer that guarantees the exact terms (preventing any hidden changes or price manipulation).
  4. User Approval: The agent shows the user the finalized cart. The user confirms the purchase, and this approval triggers two cryptographic signatures from the user’s side: one on the Cart Mandate (to accept the merchant’s cart as-is) and one on the Payment Mandate (to authorize payment through the chosen payment provider). These signed mandates are then shared with the merchant and the payment network respectively.
  5. Execution: Armed with the Cart Mandate and Payment Mandate, the merchant and payment provider proceed to execute the transaction securely. For example, the merchant submits the payment request along with the proof of user approval to the payment network (card network, bank, etc.), which can verify the Payment Mandate. The result is a completed purchase transaction with a cryptographic audit trail linking the user’s intent to the final payment.

This flow demonstrates how AP2 builds trust into each step of an AI-driven purchase. The merchant has cryptographic proof of exactly what the user agreed to buy at what price, and the issuer/bank has proof that the user authorized that payment, even though an AI agent facilitated the process. In case of disputes or errors, the signed mandates act as clear evidence, helping determine accountability (e.g. if the agent deviated from instructions or if a charge was not what the user approved). In essence, AP2’s architecture ensures that verifiable user intent – rather than trust in the agent’s behavior – is the basis of the transaction, greatly reducing ambiguity.
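
The chain of evidence in steps 3–5 amounts to two nested signatures: the merchant signs the cart, and the user signs the merchant-signed cart. A compact sketch, under the same illustrative assumptions as the previous example:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

merchant_key = Ed25519PrivateKey.generate()
user_key = Ed25519PrivateKey.generate()

# Step 3: the merchant guarantees the exact cart terms.
cart = {"item": "shoes", "size": 10, "price_usd": 120.00}
cart_bytes = json.dumps(cart, sort_keys=True).encode()
merchant_sig = merchant_key.sign(cart_bytes)

# Step 4: the user approves that exact merchant-signed offer,
# producing the Cart Mandate.
cart_mandate = json.dumps(
    {"cart": cart, "merchant_sig": merchant_sig.hex()}, sort_keys=True
).encode()
user_sig = user_key.sign(cart_mandate)

# Step 5: the payment side verifies both links before executing,
# making "what you see is what you pay" enforceable.
merchant_key.public_key().verify(merchant_sig, cart_bytes)
user_key.public_key().verify(user_sig, cart_mandate)
print("cart terms and user approval verified end to end")
```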

Purpose and Use Cases for AP2

Why AP2 is Needed: The primary purpose of AP2 is to solve emerging trust and security issues that arise when AI agents can spend money on behalf of users. Google and its partners identified several key questions that today’s payment infrastructure cannot adequately answer when an autonomous agent is in the loop:

  • Authorization: How to prove that a user actually gave the agent permission to make a specific purchase? (In other words, ensuring the agent isn’t buying things without the user’s informed consent.)
  • Authenticity: How can a merchant know that an agent’s purchase request is genuine and reflects the user’s true intent, rather than a mistake or AI hallucination?
  • Accountability: If a fraudulent or incorrect transaction occurs via an agent, who is responsible – the user, the merchant, the payment provider, or the creator of the AI agent?

Without a solution, these uncertainties create a “crisis of trust” around agent-led commerce. AP2’s mission is to provide that solution by establishing a uniform protocol for secure agent transactions. By introducing standardized mandates and proofs of intent, AP2 prevents a fragmented ecosystem of each company inventing its own ad-hoc agent payment methods. Instead, any compliant AI agent can interact with any compliant merchant/payment provider under a common set of rules and verifications. This consistency not only avoids user and merchant confusion, but also gives financial institutions a clear way to manage risk for agent-initiated payments, rather than dealing with a patchwork of proprietary approaches. In short, AP2’s purpose is to be a foundational trust layer that lets the “agent economy” grow without breaking the payments ecosystem.

Intended Use Cases: By solving the above issues, AP2 opens the door to new commerce experiences and use cases that go beyond what’s possible with a human manually clicking through purchases. Some examples of agent-enabled commerce that AP2 supports include:

  • Smarter Shopping: A customer can instruct their agent, “I want this winter jacket in green, and I’m willing to pay up to 20% above the current price for it”. Armed with an Intent Mandate encoding these conditions, the agent will continuously monitor retailer websites or databases. The moment the jacket becomes available in green (and within the price threshold), the agent automatically executes a purchase with a secure, signed transaction – capturing a sale that otherwise would have been missed. The entire interaction, from the user’s initial request to the automated checkout, is governed by AP2 mandates ensuring the agent only buys exactly what was authorized.
  • Personalized Offers: A user tells their agent they’re looking for a specific product (say, a new bicycle) from a particular merchant for an upcoming trip. The agent can share this interest (within the bounds of an Intent Mandate) with the merchant’s own AI agent, including relevant context like the trip date. The merchant agent, knowing the user’s intent and context, could respond with a custom bundle or discount – for example, “bicycle + helmet + travel rack at 15% off, available for the next 48 hours.” Using AP2, the user’s agent can accept and complete this tailored offer securely, turning a simple query into a more valuable sale for the merchant.
  • Coordinated Tasks: A user planning a complex task (e.g. a weekend trip) delegates it entirely: “Book me a flight and hotel for these dates with a total budget of $700.” The agent can interact with multiple service providers’ agents – airlines, hotels, travel platforms – to find a combination that fits the budget. Once a suitable flight-hotel package is identified, the agent uses AP2 to execute multiple bookings in one go, each cryptographically signed (for example, issuing separate Cart Mandates for the airline and the hotel, both authorized under the user’s Intent Mandate). AP2 ensures all parts of this coordinated transaction occur as approved, and even allows simultaneous execution so that tickets and reservations are booked together without risk of one part failing mid-way.

These scenarios illustrate just a few of AP2’s intended use cases. More broadly, AP2’s flexible design supports both conventional e-commerce flows and entirely new models of commerce. For instance, AP2 can facilitate subscription-like services (an agent keeps you stocked on essentials by purchasing when conditions are met), event-driven purchases (buying tickets or items the instant a trigger event occurs), group agent negotiations (multiple users’ agents pooling mandates to bargain for a group deal), and many other emerging patterns. In every case, the common thread is that AP2 provides the trust framework – clear user authorization and cryptographic auditability – that allows these agent-driven transactions to happen safely. By handling the trust and verification layer, AP2 lets developers and businesses focus on innovating new AI commerce experiences without re-inventing payment security from scratch.
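
As a rough illustration of the “Smarter Shopping” pattern above, the loop below checks a price feed against an Intent Mandate’s limits before triggering a purchase. Everything here – the mandate fields, `fetch_price`, and `execute_purchase` – is a hypothetical placeholder for the real AP2 flow:

```python
import random
import time

# Illustrative Intent Mandate limits (hypothetical field names).
mandate = {"item_id": "jacket-green", "max_price_usd": 96.00, "max_qty": 1}

def fetch_price(item_id: str) -> float:
    """Stand-in for a real merchant or price-feed query."""
    return round(random.uniform(80.0, 130.0), 2)

def execute_purchase(item_id: str, qty: int) -> None:
    """Placeholder for the real checkout: assemble the cart, obtain the
    merchant-signed cart, collect the user's Cart Mandate, then pay."""
    print(f"purchased {qty} x {item_id}")

while True:
    price = fetch_price(mandate["item_id"])
    # The agent may only act within the signed Intent Mandate's bounds.
    if price <= mandate["max_price_usd"]:
        execute_purchase(mandate["item_id"], mandate["max_qty"])
        break
    time.sleep(1)  # poll until the condition is met (shortened for the demo)
```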

Integration with Agents, LLMs, and Payment Providers

AP2 is explicitly designed to integrate seamlessly with AI agent frameworks and with existing payment systems, acting as a bridge between the two. Google has positioned AP2 as an extension of its Agent2Agent (A2A) protocol and Model Context Protocol (MCP) standards. In other words, if A2A provides a generic language for agents to communicate tasks and MCP standardizes how AI models incorporate context/tools, then AP2 adds a transactions layer on top for commerce. The protocols are complementary: A2A handles agent-to-agent communication (allowing, say, a shopping agent to talk to a merchant’s agent), while AP2 handles agent-to-merchant payment authorization within those interactions. Because AP2 is open and non-proprietary, it’s meant to be framework-agnostic: developers can use it with Google’s own Agent Development Kit (ADK) or any AI agent library, and likewise it can work with various AI models including LLMs. An LLM-based agent, for example, could use AP2 by generating and exchanging the required mandate payloads (guided by the AP2 spec) instead of just free-form text. By enforcing a structured protocol, AP2 helps transform an AI agent’s high-level intent (which might come from an LLM’s reasoning) into concrete, secure transactions.

On the payments side, AP2 was built in concert with traditional payment providers and standards, rather than as a rip-and-replace system. The protocol is payment-method-agnostic, meaning it can support a variety of payment rails – from credit/debit card networks to bank transfers and digital wallets – as the underlying method for moving funds. In its initial version, AP2 emphasizes compatibility with card payments, since those are most common in online commerce. The AP2 Payment Mandate is designed to plug into the existing card processing flow: it provides additional data to the payment network (e.g. Visa, Mastercard, Amex) and issuing bank that an AI agent is involved and whether the user was present, thereby complementing existing fraud detection and authorization checks. Essentially, AP2 doesn’t process the payment itself; it augments the payment request with cryptographic proof of user intent. This allows payment providers to treat agent-initiated transactions with appropriate caution or speed (for example, an issuer might approve an unusual-looking purchase if it sees a valid AP2 mandate proving the user pre-approved it). Notably, Google and partners plan to evolve AP2 to support “push” payment methods as well – such as real-time bank transfers (like India’s UPI or Brazil’s PIX systems) – and other emerging digital payment types. This indicates AP2’s integration will expand beyond cards, aligning with modern payment trends worldwide.
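
As a concrete (though hypothetical) picture of what this extra context could look like, here is the kind of metadata a Payment Mandate might carry to an issuer’s risk engine alongside a normal authorization request; none of these field names come from the published spec:

```python
# Hypothetical Payment Mandate metadata for a risk engine (assumed fields).
payment_mandate = {
    "type": "PaymentMandate",
    "agent_involved": True,
    "user_present": False,             # human-not-present scenario
    "intent_mandate_ref": "sha256:…",  # link back to the signed intent
    "payment_method": "card",
}
print(payment_mandate)
```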

For merchants and payment processors, integrating AP2 would mean supporting the additional protocol messages (mandates) and verifying signatures. Many large payment platforms are already involved in shaping AP2, so we can expect them to build support for it. For example, companies like Adyen, Worldpay, and PayPal – and likely Stripe, though it isn’t explicitly named in the blog – could incorporate AP2 into their checkout APIs or SDKs, allowing an agent to initiate a payment in a standardized way. Because AP2 is an open specification on GitHub with reference implementations, payment providers and tech platforms can start experimenting with it immediately. Google has also mentioned an AI Agent Marketplace where third-party agents can be listed – these agents are expected to support AP2 for any transactional capabilities. In practice, an enterprise that builds an AI sales assistant or procurement agent could list it on this marketplace, and thanks to AP2, that agent can carry out purchases or orders reliably.

Finally, AP2’s integration story benefits from its broad industry backing. By co-developing the protocol with major financial institutions and tech firms, Google ensured AP2 aligns with existing industry rules and compliance requirements. The collaboration with payment networks (e.g. Mastercard, UnionPay), issuers (e.g. American Express), fintechs (e.g. Revolut, Paypal), e-commerce players (e.g. Etsy), and even identity/security providers (e.g. Okta, Cloudflare) suggests AP2 is being designed to slot into real-world systems with minimal friction. These stakeholders bring expertise in areas like KYC (Know Your Customer regulations), fraud prevention, and data privacy, helping AP2 address those needs out of the box. In summary, AP2 is built to be agent-friendly and payment-provider-friendly: it extends existing AI agent protocols to handle transactions, and it layers on top of existing payment networks to utilize their infrastructure while adding necessary trust guarantees.

Security, Compliance, and Interoperability Considerations

Security and trust are at the heart of AP2’s design. The protocol’s use of cryptography (digital signatures on mandates) ensures that every critical action in an agentic transaction is verifiable and traceable. This non-repudiation is crucial: neither the user nor merchant can later deny what was authorized and agreed upon, since the mandates serve as secure records. A direct benefit is in fraud prevention and dispute resolution – with AP2, if a malicious or buggy agent attempts an unauthorized purchase, the lack of a valid user-signed mandate would be evident, and the transaction can be declined or reversed. Conversely, if a user claims “I never approved this purchase,” but a Cart Mandate exists with their cryptographic signature, the merchant and issuer have strong evidence to support the charge. This clarity of accountability answers a major compliance concern for the payments industry.

Authorization & Privacy: AP2 enforces an explicit authorization step (or steps) from the user for agent-led transactions, which aligns with regulatory trends like strong customer authentication. The User Control principle baked into AP2 means an agent cannot spend funds unless the user (or someone delegated by the user) has provided a verifiable instruction to do so. Even in fully autonomous scenarios, the user predefines the rules via an Intent Mandate. This approach can be seen as analogous to giving a power-of-attorney to the agent for specific transactions, but in a digitally signed, fine-grained manner. From a privacy perspective, AP2 is mindful about data sharing: the protocol uses a role-based data architecture to ensure that sensitive info (like payment credentials or personal details) is only shared with parties that absolutely need it. For example, an agent might send a Cart Mandate to a merchant containing item and price info, but the user’s actual card number might only be shared through the Payment Mandate with the payment processor, not with the agent or merchant. This minimizes unnecessary exposure of data, aiding compliance with privacy laws and PCI-DSS rules for handling payment data.

Compliance & Standards: Because AP2 was developed with input from established financial entities, it has been designed to meet or complement existing compliance standards in payments. The protocol doesn’t bypass the usual payment authorization flows – instead, it augments them with additional evidence and flags. This means AP2 transactions can still leverage fraud detection systems, 3-D Secure checks, or any required regulatory checks, with AP2’s mandates acting as extra authentication factors or context cues. For instance, a bank could treat a Payment Mandate akin to a customer’s digital signature on a transaction, potentially streamlining compliance with requirements for user consent. Additionally, AP2’s designers explicitly mention working “in concert with industry rules and standards”. We can infer that as AP2 evolves, it may be brought to formal standards bodies (such as the W3C, EMVCo, or ISO) to ensure it aligns with global financial standards. Google has stated a commitment to an open, collaborative evolution of AP2, possibly through standards organizations. This open process will help iron out any regulatory concerns and achieve broad acceptance, similar to how previous payment standards (EMV chip cards, 3-D Secure, etc.) underwent industry-wide collaboration.

Interoperability: Avoiding fragmentation is a key goal of AP2. To that end, the protocol is openly published and available for anyone to implement or integrate. It is not tied to Google Cloud services – in fact, AP2 is open source (Apache 2.0 licensed), and the specification plus reference code lives in a public GitHub repository. This encourages interoperability because multiple vendors can adopt AP2 and still have their systems work together. The interoperability principle is already highlighted: AP2 is an extension of existing open protocols (A2A, MCP) and is non-proprietary, meaning it fosters a competitive ecosystem of implementations rather than a single-vendor solution. In practical terms, an AI agent built by Company A could initiate a transaction with a merchant system from Company B if both follow AP2 – neither side is locked into one platform.

One possible concern is ensuring consistent adoption: if some major players chose a different protocol or closed approach, fragmentation could still occur. However, given the broad coalition behind AP2 – over 60 companies across finance, tech, and crypto – it appears poised to become a de facto standard. The inclusion of many identity and security-focused firms (for example, Okta, Cloudflare, Ping Identity) in the AP2 ecosystem suggests that interoperability and security are being jointly addressed. These partners can help integrate AP2 into identity verification workflows and fraud prevention tools, ensuring that an AP2 transaction can be trusted across systems.

From a technology standpoint, AP2’s use of widely accepted cryptographic techniques (likely JSON-LD or JWT-based verifiable credentials, public-key signatures, etc.) makes it compatible with existing security infrastructure. Organizations can use their existing PKI (Public Key Infrastructure) to manage the keys that sign mandates. AP2 also seems to anticipate integration with decentralized identity systems: Google mentions that AP2 creates opportunities to innovate in areas like decentralized identity for agent authorization. This means AP2 could in the future leverage DID (Decentralized Identifier) standards to identify agents and users in a trusted way. Such an approach would further enhance interoperability by not relying on any single identity provider. In summary, AP2 emphasizes security through cryptography and clear accountability, aims to be compliance-ready by design, and promotes interoperability through its open-standard nature and broad industry support.

Comparison with Existing Protocols

AP2 is a novel protocol addressing a gap that existing payment and agent frameworks have not covered: enabling autonomous agents to perform payments in a secure, standardized manner. In terms of agent communication protocols, AP2 builds on prior work like the Agent2Agent (A2A) protocol. A2A (open-sourced earlier in 2025) allows different AI agents to talk to each other regardless of their underlying frameworks. However, A2A by itself doesn’t define how agents should conduct transactions or payments – it’s more about task negotiation and data exchange. AP2 extends this landscape by adding a transaction layer that any agent can use when a conversation leads to a purchase. In essence, AP2 can be seen as complementary to A2A and MCP, rather than overlapping: A2A covers the communication and collaboration aspects, MCP covers using external tools/APIs, and AP2 covers payments and commerce. Together, they form a stack of standards for a future “agent economy.” This modular approach is somewhat analogous to internet protocols: for example, HTTP for data communication and SSL/TLS for security – here A2A might be like the HTTP of agents, and AP2 the secure transactional layer on top for commerce.

When comparing AP2 to traditional payment protocols and standards, there are both parallels and differences. Traditional online payments (credit card checkouts, PayPal transactions, etc.) typically involve protocols like HTTPS for secure transmission, and standards like PCI DSS for handling card data, plus possibly 3-D Secure for additional user authentication. These assume a user-driven flow (user clicks and perhaps enters a one-time code). AP2, by contrast, introduces a way for a third-party (the agent) to participate in the flow without undermining security. One could compare AP2’s mandate concept to an extension of OAuth-style delegated authority, but applied to payments. In OAuth, a user can grant an application limited access to an account via tokens; similarly in AP2, a user grants an agent authority to spend under certain conditions via mandates. The key difference is that AP2’s “tokens” (mandates) are specific, signed instructions for financial transactions, which is more fine-grained than existing payment authorizations.

Another point of comparison is how AP2 relates to existing e-commerce checkout flows. For instance, many e-commerce sites use protocols like the W3C Payment Request API or platform-specific SDKs to streamline payments. Those mainly standardize how browsers or apps collect payment info from a user, whereas AP2 standardizes how an agent would prove user intent to a merchant and payment processor. AP2’s focus on verifiable intent and non-repudiation sets it apart from simpler payment APIs. It’s adding an additional layer of trust on top of the payment networks. One could say AP2 is not replacing the payment networks (Visa, ACH, blockchain, etc.), but rather augmenting them. The protocol explicitly supports all types of payment methods (even crypto), so it is more about standardizing the agent’s interaction with these systems, not creating a new payment rail from scratch.

In the realm of security and authentication protocols, AP2 shares some spirit with things like the digital signatures in EMV chip cards or notarization in digital contracts. For example, EMV chip card transactions generate cryptograms to prove the card was present; AP2 generates cryptographic proof that the user’s agent was authorized. Both aim to prevent fraud, but AP2’s scope is the agent-user relationship and agent-merchant messaging, which no existing payment standard addresses. Another emerging comparison is with account abstraction in crypto (e.g., ERC-4337), where users can authorize pre-programmed wallet actions. Crypto wallets can be set up to allow certain automated transactions (like auto-paying a subscription via a smart contract), but those are typically confined to one blockchain environment. AP2, on the other hand, aims to be cross-platform – it can leverage blockchain for some payments (through its extensions) but also works with traditional banks.

There isn’t a direct “competitor” protocol to AP2 in the mainstream payments industry yet – it appears to be the first concerted effort at an open standard for AI-agent payments. Proprietary attempts may arise (or may already be in progress within individual companies), but AP2’s broad support gives it an edge in becoming the standard. It’s worth noting that IBM and others have an Agent Communication Protocol (ACP) and similar initiatives for agent interoperability, but those don’t encompass the payment aspect in the comprehensive way AP2 does. If anything, AP2 might integrate with or leverage those efforts (for example, IBM’s agent frameworks could implement AP2 for any commerce tasks).

In summary, AP2 distinguishes itself by targeting the unique intersection of AI and payments: where older payment protocols assumed a human user, AP2 assumes an AI intermediary and fills the trust gap that results. It extends, rather than conflicts with, existing payment processes, and it complements existing agent protocols like A2A. Going forward, one might see AP2 used alongside established standards – for instance, an AP2 Cart Mandate might work in tandem with a traditional payment gateway API call, or an AP2 Payment Mandate might be attached to an ISO 8583 message in banking. The open nature of AP2 also means that if alternative approaches emerge, AP2 could absorb or align with them through community collaboration. At this stage, AP2 is setting a baseline that did not exist before, effectively pioneering a new protocol layer in the AI and payments stack.

Implications for Web3 and Decentralized Systems

From the outset, AP2 has been designed to be inclusive of Web3 and cryptocurrency-based payments. The protocol recognizes that future commerce will span both traditional fiat channels and decentralized blockchain networks. As noted earlier, AP2 supports payment types ranging from credit cards and bank transfers to stablecoins and cryptocurrencies. In fact, alongside AP2’s launch, Google announced a specific extension for crypto payments called A2A x402. This extension, developed in collaboration with crypto-industry players like Coinbase, the Ethereum Foundation, and MetaMask, is a “production-ready solution for agent-based crypto payments”. The name “x402” is an homage to the HTTP 402 “Payment Required” status code, which was never widely used on the Web – AP2’s crypto extension effectively revives the spirit of HTTP 402 for decentralized agents that want to charge or pay each other on-chain. In practical terms, the x402 extension adapts AP2’s mandate concept to blockchain transactions. For example, an agent could hold a signed Intent Mandate from a user and then execute an on-chain payment (say, send a stablecoin) once conditions are met, attaching proof of the mandate to that on-chain transaction. This marries the AP2 off-chain trust framework with the trustless nature of blockchain, giving the best of both worlds: an on-chain payment that off-chain parties (users, merchants) can trust was authorized by the user.
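
As a rough sketch of how that pairing could work, an agent might hash the signed mandate and embed the digest in the on-chain transfer, so the payment can later be linked back to the user’s authorization. The function and field names here are hypothetical; the real x402 extension defines its own message formats.

```python
# Conceptual sketch: `anchor_mandate_to_payment` and its field names are
# hypothetical stand-ins, not the real x402 message format.
import hashlib
import json

def anchor_mandate_to_payment(mandate: dict, signature: bytes) -> dict:
    """Build an on-chain payment payload that references an off-chain mandate."""
    digest = hashlib.sha256(
        json.dumps(mandate, sort_keys=True).encode() + signature
    ).hexdigest()
    return {
        "to": mandate["merchant_address"],
        "amount": mandate["amount"],
        "asset": "USDC",
        # The memo carries the mandate digest, tying the stablecoin transfer
        # back to the user's signed authorization for off-chain verification.
        "memo": f"ap2-mandate:{digest}",
    }

payment = anchor_mandate_to_payment(
    {"merchant_address": "0xMerchant", "amount": "150.00"},
    signature=b"placeholder-signature",  # the user's real mandate signature
)
print(payment["memo"])
```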

The synergy between AP2 and Web3 is evident in the list of collaborators. Crypto exchanges (Coinbase), blockchain foundations (Ethereum Foundation), crypto wallets (MetaMask), and Web3 startups (e.g. Mysten Labs of Sui, Lightspark for Lightning Network) are involved in AP2’s development. Their participation suggests AP2 is viewed as complementary to decentralized finance rather than competitive. By creating a standard way for AI agents to interact with crypto payments, AP2 could drive more usage of crypto in AI-driven applications. For instance, an AI agent might use AP2 to seamlessly swap between paying with a credit card or paying with a stablecoin, depending on user preference or merchant acceptance. The A2A x402 extension specifically allows agents to monetize or pay for services through on-chain means, which could be crucial in decentralized marketplaces of the future. It hints at agents possibly running as autonomous economic actors on blockchain (a concept some refer to as DACs or DAOs) being able to handle payments required for services (like paying a small fee to another agent for information). AP2 could provide the lingua franca for such transactions, ensuring even on a decentralized network, the agent has a provable mandate for what it’s doing.

In terms of competition, one could ask: do purely decentralized solutions make AP2 unnecessary, or vice versa? It’s likely that AP2 will coexist with Web3 solutions in a layered approach. Decentralized finance offers trustless execution (smart contracts, etc.), but it doesn’t inherently answer the question “Did an AI have permission from a human to do this?” AP2 addresses that very human-to-AI trust link, which remains important even if the payment itself is on-chain. Rather than competing with blockchain protocols, AP2 can be seen as bridging them with the off-chain world. For example, a smart contract might accept a certain transaction only if it includes a reference to a valid AP2 mandate signature – something that could be implemented to combine off-chain intent proof with on-chain enforcement. Conversely, if crypto-native agent frameworks emerge (some blockchain projects explore autonomous agents that operate with crypto funds), they might develop their own authorization methods. AP2’s broad industry support, however, might steer even those projects to adopt or integrate with AP2 for consistency.

Another angle is decentralized identity and credentials. AP2’s use of verifiable credentials is very much in line with Web3’s approach to identity (e.g. DIDs and VCs as standardized by W3C). This means AP2 could plug into decentralized identity systems – for instance, a user’s DID could be used to sign an AP2 mandate, which a merchant could verify against a blockchain or identity hub. The mention of exploring decentralized identity for agent authorization reinforces that AP2 may leverage Web3 identity innovations for verifying agent and user identities in a decentralized way, rather than relying only on centralized authorities. This is a point of synergy, as both AP2 and Web3 aim to give users more control and cryptographic proof of their actions.

Potential conflicts might arise only if one envisions a fully decentralized commerce ecosystem with no role for large intermediaries – in that scenario, could AP2 (initially pushed by Google and partners) be too centralized or governed by traditional players? It’s important to note AP2 is open source and intended to be standardizable, so it’s not proprietary to Google. This makes it more palatable to the Web3 community, which values open protocols. If AP2 becomes widely adopted, it might reduce the need for separate Web3-specific payment protocols for agents, thereby unifying efforts. On the other hand, some blockchain projects might prefer purely on-chain authorization mechanisms (like multi-signature wallets or on-chain escrow logic) for agent transactions, especially in trustless environments without any centralized authorities. Those could be seen as alternative approaches, but they likely would remain niche unless they can interact with off-chain systems. AP2, by covering both worlds, might actually accelerate Web3 adoption by making crypto just another payment method an AI agent can use seamlessly. Indeed, one partner noted that “stablecoins provide an obvious solution to scaling challenges [for] agentic systems with legacy infrastructure”, highlighting that crypto can complement AP2 in handling scale or cross-border scenarios. Meanwhile, Coinbase’s engineering lead remarked that bringing the x402 crypto extension into AP2 “made sense – it’s a natural playground for agents... exciting to see agents paying each other resonate with the AI community”. This implies a vision where AI agents transacting via crypto networks is not just a theoretical idea but an expected outcome, with AP2 acting as a catalyst.

In summary, AP2 is highly relevant to Web3: it incorporates crypto payments as a first-class citizen and is aligning with decentralized identity and credential standards. Rather than competing head-on with decentralized payment protocols, AP2 likely interoperates with them – providing the authorization layer while the decentralized systems handle the value transfer. As the line between traditional finance and crypto blurs (with stablecoins, CBDCs, etc.), a unified protocol like AP2 could serve as a universal adapter between AI agents and any form of money, centralized or decentralized.

Industry Adoption, Partnerships, and Roadmap

One of AP2’s greatest strengths is the extensive industry backing behind it, even at this early stage. Google Cloud announced that it is “collaborating with a diverse group of more than 60 organizations” on AP2. These include major credit card networks (e.g. Mastercard, American Express, JCB, UnionPay); leading fintechs and payment processors (PayPal, Worldpay, Adyen, Checkout.com, and other Stripe competitors); e-commerce and online marketplaces (Etsy, Lazada, Zalora, and Shopify through partners); enterprise tech companies (Salesforce, ServiceNow, Dell, Red Hat, and possibly Oracle via partners); identity and security firms (Okta, Ping Identity, Cloudflare); consulting firms (Deloitte, Accenture); and crypto/Web3 organizations (Coinbase, Ethereum Foundation, MetaMask, Mysten Labs, Lightspark), among others. Such a wide array of participants is a strong indicator of industry interest and likely adoption. Many of these partners have publicly voiced support. For example, Adyen’s Co-CEO highlighted the need for a “common rulebook” for agentic commerce and sees AP2 as a natural extension of their mission to support merchants with new payment building blocks. American Express’s EVP stated that AP2 is important for “the next generation of digital payments,” where trust and accountability are paramount. Coinbase’s team, as noted, is excited about integrating crypto payments into AP2. This chorus of support shows that many in the industry view AP2 as the likely standard for AI-driven payments, and that they are keen to shape it to ensure it meets their requirements.

From an adoption standpoint, AP2 is currently at the specification and early implementation stage (announced in September 2025). The complete technical spec, documentation, and some reference implementations (in languages like Python) are available on the project’s GitHub for developers to experiment with. Google has also indicated that AP2 will be incorporated into its products and services for agents. A notable example is the AI Agent Marketplace mentioned earlier: this is a platform where third-party AI agents can be offered to users (likely part of Google’s generative AI ecosystem). Google says many partners building agents will make them available in the marketplace with “new, transactable experiences enabled by AP2”. This implies that as the marketplace launches or grows, AP2 will be the backbone for any agent that needs to perform a transaction, whether it’s buying software from the Google Cloud Marketplace autonomously or an agent purchasing goods/services for a user. Enterprise use cases like autonomous procurement (one agent buying from another on behalf of a company) and automatic license scaling have been specifically mentioned as areas AP2 could facilitate soon.

In terms of a roadmap, the AP2 documentation and Google’s announcement give some clear indications:

  • Near-term: Continue open development of the protocol with community input. The GitHub repo will be updated with additional reference implementations and improvements as real-world testing happens. We can expect libraries/SDKs to emerge, making it easier to integrate AP2 into agent applications. Also, initial pilot programs or proofs-of-concept might be conducted by the partner companies. Given that many large payment companies are involved, they might trial AP2 in controlled environments (e.g., an AP2-enabled checkout option in a small user beta).
  • Standards and Governance: Google has expressed a commitment to move AP2 into an open governance model, possibly via standards bodies. This could mean submitting AP2 to an organization like the Linux Foundation (as was done with the A2A protocol) or forming a consortium to maintain it; the W3C or even bodies like ISO/TC68 (financial services) are also candidates for formalizing AP2. Open governance would reassure the industry that AP2 is not under single-company control and will remain neutral and inclusive.
  • Feature Expansion: Technically, the roadmap includes expanding support to more payment types and use cases. As noted in the spec, after cards, the focus will shift to “push” payments like bank wires and local real-time payment schemes, and digital currencies. This means AP2 will outline how an Intent/Cart/Payment Mandate works for, say, a direct bank transfer or a crypto wallet transfer, where the flow is a bit different than card pulls. The A2A x402 extension is one such expansion for crypto; similarly, we might see an extension for open banking APIs or one for B2B invoicing scenarios.
  • Security & Compliance Enhancements: As real transactions start flowing through AP2, there will be scrutiny from regulators and security researchers. The open process will likely iterate on making mandates even more robust (e.g., ensuring mandate formats are standardized, possibly using W3C Verifiable Credentials format, etc.). Integration with identity solutions (perhaps leveraging biometrics for user signing of mandates, or linking mandates to digital identity wallets) could be part of the roadmap to enhance trust.
  • Ecosystem Tools: An emerging ecosystem is likely. Already, startups are noticing gaps – for instance, the Vellum.ai analysis mentions a startup called Autumn building “billing infrastructure for AI,” essentially tooling on top of Stripe to handle complex pricing for AI services. As AP2 gains traction, we can expect more tools like agent-focused payment gateways, mandate management dashboards, agent identity verification services, etc., to appear. Google’s involvement means AP2 could also be integrated into its Cloud products – imagine AP2 support in Dialogflow or Vertex AI Agents tooling, making it one-click to enable an agent to handle transactions (with all the necessary keys and certificates managed in Google Cloud).

Overall, the trajectory of AP2 is reminiscent of other major industry standards: an initial launch with a strong sponsor (Google), broad industry coalition, open-source reference code, followed by iterative improvement and gradual adoption in real products. The fact that AP2 is inviting all players “to build this future with us” underscores that the roadmap is about collaboration. If the momentum continues, AP2 could become as commonplace in a few years as protocols like OAuth or OpenID Connect are today in their domains – an unseen but critical layer enabling functionality across services.

Conclusion

AP2, the Agent Payments Protocol, represents a significant step toward a future where AI agents can transact as reliably and securely as humans. Technically, it introduces a clever mechanism of verifiable mandates and credentials that instill trust in agent-led transactions, ensuring user intent is explicit and enforceable. Its open, extensible architecture allows it to integrate both with the burgeoning AI agent frameworks and the established financial infrastructure. By addressing core concerns of authorization, authenticity, and accountability, AP2 lays the groundwork for AI-driven commerce to flourish without sacrificing security or user control.

The introduction of AP2 can be seen as laying a new foundation – much like early internet protocols enabled the web – for what some call the “agent economy.” It paves the way for countless innovations: personal shopper agents, automatic deal-finding bots, autonomous supply chain agents, and more, all operating under a common trust framework. Importantly, AP2’s inclusive design (embracing everything from credit cards to crypto) positions it at the intersection of traditional finance and Web3, potentially bridging these worlds through a common agent-mediated protocol.

Industry response so far has been very positive, with a broad coalition signaling that AP2 is likely to become a widely adopted standard. The success of AP2 will depend on continued collaboration and real-world testing, but its prospects are strong given the clear need it addresses. In a broader sense, AP2 exemplifies how technology evolves: a new capability (AI agents) emerged that broke old assumptions, and the solution was to develop a new open standard to accommodate that capability. By investing in an open, security-first protocol now, Google and its partners are effectively building the trust architecture required for the next era of commerce. As the saying goes, “the best way to predict the future is to build it” – AP2 is a bet on a future where AI agents seamlessly handle transactions for us, and it is actively constructing the trust and rules needed to make that future viable.

Sources:

  • Google Cloud Blog – “Powering AI commerce with the new Agent Payments Protocol (AP2)” (Sept 16, 2025)
  • AP2 GitHub Documentation – “Agent Payments Protocol Specification and Overview”
  • Vellum AI Blog – “Google’s AP2: A new protocol for AI agent payments” (Analysis)
  • Medium Article – “Google Agent Payments Protocol (AP2)” (Summary by Tahir, Sept 2025)
  • Partner Quotes on AP2 (Google Cloud Blog)
  • A2A x402 Extension (AP2 crypto payments extension) – GitHub README

The Rise of AI Agents in DeFi: Transforming Multi-Chain Strategies

· 9 min read
Dora Noda
Software Engineer

Most DeFi users still open five browser tabs to complete a single yield strategy — checking rates on Aave, bridging assets on Stargate, depositing on Curve, and hoping they don't miss a gas spike. But a quiet revolution is underway. Autonomous AI agents are now doing all of that silently, across multiple blockchains simultaneously, while you sleep.

In 2025, AI agent activity on blockchains surged 86%. Fetch.ai agents alone manage over $1 billion in Hyperliquid derivatives, executing 100x leveraged trades autonomously. Yearn's AI-driven vaults optimize $5 billion across yield pools without human input. And platforms like XION and Particle Network are building the abstraction layers that make all of this invisible to end users. The question is no longer whether AI agents can orchestrate multi-chain DeFi — it's how fast the infrastructure will mature, and what it means for everyone from retail users to institutional desks.

Base Captures 60% of Ethereum L2 Revenue: How Coinbase Is Building Web3's AWS

· 9 min read
Dora Noda
Software Engineer

When Amazon launched AWS in 2006, nobody thought an online bookstore's internal server infrastructure would become the backbone of the internet. Nearly two decades later, a similar story may be unfolding in crypto: Coinbase's Base network captured 62% of all Ethereum Layer 2 revenue in 2025, commanding 46% of L2 DeFi TVL and processing the majority of all L2 stablecoin transfers — all without a native token. The question isn't whether Base is winning the L2 wars. It's whether Coinbase is quietly becoming the AWS of the onchain economy.

Bittensor's DeepSeek Moment: Can TAO Become the Second Pole of Global AI?

· 9 min read
Dora Noda
Software Engineer

When 70 strangers scattered across the world — armed with consumer GPUs and home internet connections — collectively trained a 72-billion-parameter language model that outperformed Meta's LLaMA-2-70B, something shifted in the AI narrative. No corporate whitelist. No $100 million data center. No centralized lab pulling the strings. Just Bittensor's Subnet 3, a cryptoeconomic incentive system, and a technical trick called SparseLoCo that made it all possible.

The AI world spent early 2026 obsessing over DeepSeek's proof that frontier-quality models don't require OpenAI-scale budgets. Bittensor's community calls what happened on March 10, 2026 their own "DeepSeek moment" — evidence that large language models can now emerge entirely outside centralized institutions. The question worth asking: is Bittensor genuinely building the second pole of global AI infrastructure, or is it a compelling story wrapped around an elegant but fragile experiment?

Data Markets Meet AI Training: How Blockchain Solves the $23 Billion Data Pricing Crisis

· 12 min read
Dora Noda
Software Engineer

The AI industry faces a paradox: global data production explodes from 33 zettabytes to 175 zettabytes by 2025, yet AI model quality stagnates. The problem isn't data scarcity—it's that data providers have no way to capture value from their contributions. Enter blockchain-based data markets like Ocean Protocol, LazAI, and ZENi, which are transforming AI training data from a free resource into a monetizable asset class worth $23.18 billion by 2034.

The $23 Billion Data Pricing Problem

AI training costs surged 89% from 2023 to 2025, with data acquisition and annotation consuming up to 80% of machine learning project budgets. Yet data creators—individuals generating search queries, social media interactions, and behavioral patterns—receive nothing while tech giants harvest billions in value.

The AI training dataset market reveals this disconnect. Valued at $3.59 billion in 2025, the market is projected to hit $23.18 billion by 2034 at a 22.9% CAGR. Another forecast pegs 2026 at $7.48 billion, reaching $52.41 billion by 2035 with 24.16% annual growth.

But who captures this value? Currently, centralized platforms extract profit while data creators get zero compensation. Label noise, inconsistent tagging, and missing context drive costs, yet contributors lack incentives to improve quality. Data privacy concerns impact 28% of companies, limiting dataset accessibility precisely when AI needs diverse, high-quality inputs.

Ocean Protocol: Tokenizing the $100 Million Data Economy

Ocean Protocol addresses ownership by allowing data providers to tokenize datasets and make them available for AI training without relinquishing control. Since launching Ocean Nodes in August 2024, the network has grown to over 1.4 million nodes across 70+ countries, onboarded 35,000+ datasets, and facilitated more than $100 million in AI-related data transactions.

The 2025 product roadmap includes three critical components:

Inference Pipelines enable end-to-end AI model training and deployment directly on Ocean's infrastructure. Data providers tokenize proprietary datasets, set pricing, and earn revenue every time an AI model consumes their data for training or inference.

Ocean Enterprise Onboarding moves ecosystem businesses from pilot to production. Ocean Enterprise v1, launching Q3 2025, delivers a compliant, production-ready data platform targeting institutional clients who need auditable, privacy-preserving data exchanges.

Node Analytics introduces dashboards tracking performance, usage, and ROI. Partners like NetMind contribute 2,000 GPUs while Aethir helps scale Ocean Nodes to support large AI workloads, creating a decentralized compute layer for AI training.

Ocean's revenue-sharing mechanism works through smart contracts: data providers set access terms, AI developers pay per usage, and blockchain automatically distributes payments to all contributors. This transforms data from a one-time sale into a continuous revenue stream tied to model performance.

LazAI: Verifiable AI Interaction Data on Metis

LazAI introduces a fundamentally different approach—monetizing AI interaction data, not just static datasets. Every conversation with LazAI's flagship agents (Lazbubu, SoulTarot) generates Data Anchoring Tokens (DATs), which function as traceable, verifiable records of AI-generated output.

The Alpha Mainnet launched in December 2025 on enterprise-grade infrastructure using QBFT consensus and $METIS-based settlement. DATs tokenize and monetize AI datasets and models as verifiable assets with transparent ownership and revenue attribution.

Why does this matter? Traditional AI training uses static datasets frozen at collection time. LazAI captures dynamic interaction data—user queries, model responses, refinement loops—creating training datasets that reflect real-world usage patterns. This data is far more valuable for fine-tuning models because it contains human feedback signals embedded in the conversation flow.

The system includes three key innovations:

Proof-of-Stake Validator Staking secures AI data pipelines. Validators stake tokens to verify data integrity, earning rewards for accurate validation and facing penalties for approving fraudulent data.

DAT Minting with Revenue Sharing allows users who generate valuable interaction data to mint DATs representing their contributions. When AI companies purchase these datasets for model training, revenue flows automatically to all DAT holders based on their proportional contribution.

iDAO Governance establishes decentralized AI collectives where data contributors collectively govern dataset curation, pricing strategies, and quality standards through on-chain voting.

The 2026 roadmap adds ZK-based privacy (users can monetize interaction data without exposing personal information), decentralized computing markets (training happens on distributed infrastructure rather than centralized clouds), and multimodal data evaluation (video, audio, image interactions beyond text).

ZENi: The Intelligence Data Layer for AI Agents

ZENi operates at the intersection of Web3 and AI by powering the "InfoFi Economy"—a decentralized network bridging traditional and blockchain-based commerce through AI-powered intelligence. The company raised $1.5 million in seed funding led by Waterdrip Capital and Mindfulness Capital.

At its core sits the InfoFi Data Layer, a high-throughput behavioral-intelligence engine processing 1 million+ daily signals across X/Twitter, Telegram, Discord, and on-chain activity. ZENi identifies patterns in user behavior, sentiment shifts, and community engagement—data that's critical for training AI agents but difficult to collect at scale.

The platform operates as a three-part system:

AI Data Analytic Agent identifies high-intent audiences and influence clusters by analyzing social graphs, on-chain transactions, and engagement metrics. This creates behavioral datasets showing not just what users do but why they make decisions.

AIGC (AI-Generated Content) Agent crafts personalized campaigns using insights from the data layer. By understanding user preferences and community dynamics, the agent generates content optimized for specific audience segments.

AI Execution Agent activates outreach through the ZENi dApp, closing the loop from data collection to monetization. Users receive compensation when their behavioral data contributes to successful campaigns.

ZENi already serves partners in e-commerce, gaming, and Web3, with 480,000 registered users and 80,000 daily active users. The business model monetizes behavioral intelligence: companies pay to access ZENi's AI-processed datasets, and revenue flows to users whose data powered those insights.

Blockchain's Competitive Advantage in Data Markets

Why does blockchain matter for data monetization? Three technical capabilities make decentralized data markets superior to centralized alternatives:

Granular Revenue Attribution

Smart contracts enable sophisticated revenue-sharing where multiple contributors to an AI model automatically receive proportional compensation based on usage. A single training dataset might aggregate inputs from 10,000 users—blockchain tracks each contribution and distributes micropayments per model inference.

Traditional systems can't handle this complexity. Payment processors charge fixed fees (2-3%) unsuitable for micropayments, and centralized platforms lack transparency about who contributed what. Blockchain solves both: near-zero transaction costs via Layer 2 solutions and immutable attribution via on-chain provenance.

Verifiable Data Provenance

LazAI's Data Anchoring Tokens prove data origin without exposing underlying content. AI companies training models can verify they're using licensed, high-quality data rather than scraped web content of questionable legality.

This addresses a critical risk: data privacy regulations impact 28% of companies, limiting dataset accessibility. Blockchain-based data markets implement privacy-preserving verification—proving data quality and licensing without revealing personal information.

Decentralized AI Training

Ocean Protocol's node network demonstrates how distributed infrastructure reduces costs. Rather than paying cloud providers $2-5 per GPU hour, decentralized networks match unused compute capacity (gaming PCs, data centers with spare capacity) with AI training demand at 50-85% cost reduction.

Blockchain coordinates this complexity through smart contracts governing job allocation, payment distribution, and quality verification. Contributors stake tokens to participate, earning rewards for honest computation and facing slashing penalties for delivering incorrect results.
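
The stake-reward-slash loop described above can be sketched in a few lines; the reward amount and penalty rate below are arbitrary illustrative values, not parameters of any specific network:

```python
# Toy model of staking with rewards and slashing. The reward amount and
# slash fraction are arbitrary illustrative values.
stakes = {"node-a": 1000.0, "node-b": 1000.0}

REWARD = 5.0          # paid out for a correctly verified computation
SLASH_FRACTION = 0.2  # share of stake forfeited for a bad result

def settle_job(node: str, result_correct: bool) -> None:
    """Adjust a contributor's stake after its computation is verified."""
    if result_correct:
        stakes[node] += REWARD
    else:
        stakes[node] -= stakes[node] * SLASH_FRACTION

settle_job("node-a", result_correct=True)   # honest work earns rewards
settle_job("node-b", result_correct=False)  # incorrect results get slashed
print(stakes)  # {'node-a': 1005.0, 'node-b': 800.0}
```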

The Path to $52 Billion: Market Forces Driving Adoption

Three converging trends accelerate blockchain data market growth toward the $52.41 billion 2035 projection:

AI Model Diversification

The era of massive foundation models (GPT-4, Claude, Gemini) trained on all internet text is ending. Specialized models for healthcare, finance, legal services, and vertical applications require domain-specific datasets that centralized platforms don't curate.

Blockchain data markets excel at niche datasets. A medical imaging provider can tokenize radiology scans with diagnostic annotations, set usage terms requiring patient consent, and earn revenue from every AI model trained on their data. This is impossible to implement on centralized platforms that lack granular access control and attribution.

Regulatory Pressure

Data privacy regulations (GDPR, CCPA, China's Personal Information Protection Law) mandate consent-based data collection. Blockchain-based markets implement consent as programmable logic—users cryptographically sign permissions, data can only be accessed under specified terms, and smart contracts enforce compliance automatically.

Ocean Enterprise v1's focus on compliance addresses this directly. Financial institutions and healthcare providers need auditable data lineage proving every dataset used for model training had proper licensing. Blockchain provides immutable audit trails satisfying regulatory requirements.

Quality Over Quantity

Recent research shows AI doesn't need endless training data when systems better resemble biological brains. This shifts incentives from collecting maximum data to curating highest-quality inputs.

Decentralized data markets align incentives properly: data creators earn more for high-quality contributions because models pay premium prices for datasets improving performance. LazAI's interaction data captures human feedback signals (which queries get refined, which responses satisfy users) that static datasets miss—making it inherently more valuable per byte.

Challenges: Privacy, Pricing, and Protocol Wars

Despite momentum, blockchain data markets face structural challenges:

Privacy Paradox

Training AI requires data transparency (models need access to actual content), but privacy regulations demand data minimization. Current privacy-preserving approaches, such as federated learning and training on encrypted data, can increase costs 3-5x compared to centralized training.

Zero-knowledge proofs offer a path forward—proving data quality without exposing content—but add computational overhead. LazAI's 2026 ZK roadmap addresses this, though production-ready implementations remain 12-18 months away.

Price Discovery

What's a social media interaction worth? A medical image with diagnostic annotation? Blockchain markets lack established pricing mechanisms for novel data types.

Ocean Protocol's approach—letting providers set prices and market dynamics determine value—works for commoditized datasets but struggles with one-of-a-kind proprietary data. Prediction markets or AI-driven dynamic pricing may solve this, though both introduce oracle dependencies (external price feeds) that undermine decentralization.

Interoperability Fragmentation

Ocean Protocol runs on Ethereum, LazAI runs on Metis, and ZENi integrates with multiple chains. Data tokenized on one platform can't easily transfer to another, fragmenting liquidity.

Cross-chain bridges and universal data standards (like decentralized identifiers for datasets) could solve this, but the ecosystem remains early. With the blockchain AI market valued at $680.89 million in 2025 and projected to reach $4.338 billion by 2034, consolidation around winning protocols appears to be years away.

What This Means for Developers

For teams building AI applications, blockchain data markets offer three immediate advantages:

Access to Proprietary Datasets

Ocean Protocol's 35,000+ datasets include proprietary training data unavailable through traditional channels. Medical imaging, financial transactions, behavioral analytics from Web3 applications—specialized datasets that centralized platforms don't curate.

Compliance-Ready Infrastructure

Ocean Enterprise v1's built-in licensing, consent management, and audit trails solve regulatory headaches. Rather than building custom data governance systems, developers inherit compliance by design through smart contracts enforcing data usage terms.

Cost Reduction

Decentralized compute networks undercut cloud providers by 50-85% for batch training workloads. Ocean's partnership with NetMind (2,000 GPUs) and Aethir demonstrates how tokenized GPU marketplaces match supply with demand at lower cost than AWS/GCP/Azure.

BlockEden.xyz provides enterprise-grade RPC infrastructure for blockchain-based AI applications. Whether you're building on Ethereum (Ocean Protocol), Metis (LazAI), or multi-chain platforms, our reliable node services ensure your AI data pipelines remain online and performant. Explore our API marketplace to connect your AI systems with blockchain networks built for scale.

The 2026 Inflection Point

Three catalysts position 2026 as the inflection year for blockchain data markets:

Ocean Enterprise v1 Production Launch (Q3 2025)

The first compliant, institutional-grade data marketplace goes live. If Ocean captures even 5% of the $7.48 billion 2026 AI training dataset market, that's $374 million in data transactions flowing through blockchain-based infrastructure.

LazAI ZK Privacy Implementation (2026)

Zero-knowledge proofs enable users to monetize interaction data without privacy compromise. This unlocks consumer-scale adoption—hundreds of millions of social media users, search engine queries, and e-commerce sessions becoming monetizable through DATs.

Federated Learning Integration

Federated learning allows model training without centralizing data. Blockchain adds value attribution: rather than Google training models on Android user data without compensation, federated systems running on blockchain distribute revenue to all data contributors.

The convergence means AI training shifts from "collect all data, train centrally, pay nothing" to "train on distributed data, compensate contributors, verify provenance." Blockchain doesn't just enable this transition—it's the only technology stack capable of coordinating millions of data providers with automatic revenue distribution and cryptographic verification.

Conclusion: Data Becomes Programmable

The AI training data market's growth from $3.59 billion in 2025 to $23-52 billion by 2034 represents more than market expansion. It's a fundamental shift in how we value information.

Ocean Protocol proves data can be tokenized, priced, and traded like financial assets while preserving provider control. LazAI demonstrates AI interaction data—previously discarded as ephemeral—becomes valuable training inputs when properly captured and verified. ZENi shows behavioral intelligence can be extracted, processed by AI, and monetized through decentralized markets.

Together, these platforms transform data from raw material extracted by tech giants into a programmable asset class where creators capture value. The global data explosion from 33 to 175 zettabytes matters only if quality beats quantity—and blockchain-based markets align incentives to reward quality contributions.

When data creators earn revenue proportional to their contributions, when AI companies pay fair prices for quality inputs, and when smart contracts automate attribution across millions of participants, we don't just fix the data pricing problem. We build an economy where information has intrinsic value, provenance is verifiable, and contributors finally capture the wealth their data generates.

That's not a market trend. It's a paradigm shift—and it's already live on-chain.

The Rise of Pragmatic Privacy: Balancing Compliance and Confidentiality in Blockchain

· 16 min read
Dora Noda
Software Engineer

The blockchain industry stands at a crossroads where privacy is no longer a binary choice. Throughout crypto's early years, the narrative was clear: absolute privacy at all costs, transparency only when necessary, and resistance to any form of surveillance. But in 2026, a profound shift is underway. The rise of Decentralized Pragmatic AI (DePAI) infrastructure signals a new era where compliance-friendly privacy tools are not just accepted—they're becoming the standard.

This isn't a retreat from privacy principles. It's an evolution toward a more sophisticated understanding: privacy and regulatory compliance can coexist, and in fact, must coexist if blockchain and AI are to achieve institutional adoption at scale.

The End of "Privacy at All Costs"

For years, privacy maximalism dominated blockchain discourse. Projects like Monero and early versions of privacy-focused protocols championed absolute anonymity. The philosophy was straightforward: users deserve complete financial privacy, and any compromise represented a betrayal of crypto's founding principles.

But this absolutist stance created a critical problem. While privacy is essential for protecting honest users from surveillance and front-running, it also became a shield for illicit activity. Regulators worldwide began treating privacy coins with suspicion, leading to delistings from major exchanges and outright bans in several jurisdictions.

As Cointelegraph reports, 2026 is the year pragmatic privacy takes off, with new projects tackling compliant forms of privacy for institutions and growing interest in existing privacy coins like Zcash. The key insight: privacy isn't binary. Neither full transparency nor absolute privacy is workable in the real world, because while privacy is essential for honest users, it can also be used by criminals to evade law enforcement.

People are starting to accept making tradeoffs that curtail privacy in limited contexts to make protocols more threat-resistant. This represents a fundamental shift in the blockchain community's approach to privacy.

Defining Pragmatic Privacy

So what exactly is pragmatic privacy? According to Anaptyss, pragmatic privacy refers to the strategic implementation of privacy measures that protect user and business data without breaching regulatory requirements, ensuring that financial operations are both secure and compliant.

This approach recognizes that different participants in the blockchain ecosystem have different privacy needs:

  • Retail users need protection from mass surveillance and data harvesting
  • Institutional investors require confidentiality to prevent front-running of their trading strategies
  • Enterprises must satisfy strict AML/KYC mandates while protecting sensitive business information
  • AI agents need verifiable computation without exposing proprietary algorithms or training data

The solution lies not in choosing between privacy and compliance, but in building infrastructure that enables both simultaneously.

zkKYC: Privacy-Preserving Identity Verification

One of the most promising developments in pragmatic privacy is the emergence of zero-knowledge Know Your Customer (zkKYC) solutions. Traditional KYC processes require users to repeatedly submit sensitive personal documents to multiple platforms, creating numerous honeypots of personal data vulnerable to breaches.

zkKYC flips this model. As zkMe explains, their zkKYC service combines Zero-Knowledge Proof (ZKP) technology with full FATF compliance. A regulated KYC provider verifies the user off-chain following standard AML and identity verification procedures, but protocols do not collect identity data. Instead, they verify compliance cryptographically.

The mechanism is elegant: smart contracts automatically check a zero-knowledge proof before allowing access to certain services or processing large transactions. Users prove they meet compliance requirements—age, residency, non-sanctioned status—without revealing any actual identity data to the protocol or other users.
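
A control-flow sketch of such a gate is shown below, with a stub standing in for the real zero-knowledge verifier; nothing here reflects zkMe's actual API, and the claim names are assumptions:

```python
# Control-flow sketch of a zkKYC gate. StubVerifier stands in for a real
# zero-knowledge verifier (e.g. a zk-SNARK verifier); the claim names are
# illustrative assumptions, not any provider's actual schema.
from dataclasses import dataclass

@dataclass
class Proof:
    claims: dict   # public statements, e.g. {"age_over_18": True}
    blob: bytes    # the opaque zero-knowledge proof itself

class StubVerifier:
    """Placeholder: a real verifier checks `blob` cryptographically."""
    def verify(self, proof: Proof, required_claims: dict) -> bool:
        return all(proof.claims.get(k) == v for k, v in required_claims.items())

def gate_large_transaction(proof: Proof, verifier: StubVerifier) -> None:
    # The service learns only that the claims hold -- never the identity data.
    required = {"age_over_18": True, "sanctioned": False}
    if not verifier.verify(proof, required):
        raise PermissionError("compliance proof failed")
    print("access granted: compliant, identity undisclosed")

gate_large_transaction(
    Proof(claims={"age_over_18": True, "sanctioned": False}, blob=b"..."),
    StubVerifier(),
)
```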

According to Studio AM, this is already happening in some blockchain ecosystems: users prove age or residency with a ZKP before accessing certain decentralized finance (DeFi) services. Major financial institutions are taking notice. Deutsche Bank and Privado ID have conducted proofs of concept demonstrating blockchain-based identity verification using zero-knowledge credentials.

Perhaps most significantly, in July 2025, Google open-sourced its zero-knowledge proof libraries following work with Germany's Sparkasse group, signaling growing institutional investment in privacy-preserving identity infrastructure.

zkTLS: Making the Web Verifiable

While zkKYC addresses identity verification, another technology is solving an equally critical problem: how to bring verifiable Web2 data into blockchain systems without compromising privacy or security. Enter zkTLS (Zero-Knowledge Transport Layer Security).

Traditional TLS—the encryption that secures every HTTPS connection—has a critical limitation: it provides confidentiality but not verifiability. In other words, while TLS ensures that information is encrypted during transmission, it does not create a proof that the encrypted interaction happened in a way that can be independently verified.

zkTLS solves this by integrating Zero-Knowledge Proofs with the TLS encryption system. Using MPC-TLS and zero-knowledge techniques, zkTLS allows a client to produce cryptographically verifiable proofs and attestations of real HTTPS sessions.

As zkPass describes it, zkTLS generates a zero-knowledge proof (e.g., zk-SNARK) confirming that data was fetched from a specific server (identified by its public key and domain) via a legitimate TLS session, without exposing the session key or plaintext data.

The implications are profound. Traditional APIs can be easily disabled or censored, whereas zkTLS ensures that as long as users have an HTTPS connection, they can continue to access their data. This allows virtually any Web2 data to be used on a blockchain in a verifiable and permissionless way.

Recent implementations demonstrate the technology's maturity. Brevis's zkTLS Coprocessor, when fetching data from a web source, proves that the content was retrieved through a genuine TLS session from the authentic domain and that the data hasn't been tampered with.

At FOSDEM 2026, the TLSNotary project presented on liberating user data with zkTLS, demonstrating how users can prove facts about their private data—bank balances, credit scores, transaction histories—without exposing the underlying information.
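
To illustrate the relying-party side, here is a simplified check of a notary-signed attestation in the spirit of TLSNotary. Real zkTLS systems constrain or replace the notary with MPC and zero-knowledge proofs; the structure below is an assumption for illustration only:

```python
# Simplified relying-party check of a notary-signed attestation, in the
# spirit of TLSNotary. Real zkTLS constrains or replaces the notary with
# MPC and zero-knowledge proofs; this structure is assumed for illustration.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

notary_key = Ed25519PrivateKey.generate()  # the attestor's key pair
notary_pub = notary_key.public_key()

# The attestation binds a claim about private data to a specific HTTPS origin
# without revealing the session plaintext (e.g. the actual balance).
attestation = {
    "origin": "https://bank.example.com",
    "claim": "balance_over_1000_usd",
    "timestamp": "2026-02-01T12:00:00Z",
}
payload = json.dumps(attestation, sort_keys=True).encode()
signature = notary_key.sign(payload)

def accept_attestation(att: dict, sig: bytes) -> bool:
    """Verify the expected origin and the attestor's signature."""
    if att["origin"] != "https://bank.example.com":  # pin the expected domain
        return False
    notary_pub.verify(sig, json.dumps(att, sort_keys=True).encode())  # raises if forged
    return True

print(accept_attestation(attestation, signature))
```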

Verifiable AI Computation: The Missing Piece for Institutional Adoption

Privacy-preserving identity and data verification set the stage, but the most transformative element of DePAI infrastructure is verifiable AI computation. As AI agents become economically active participants in blockchain ecosystems, the question shifts from "Can AI do this?" to "Can you prove the AI did this correctly?"

This verification requirement isn't academic. According to DecentralGPT, as AI becomes part of finance, automation, and agent workflows, performance alone isn't enough. In Web3, the question is also: Can you prove what happened? In late December 2025, Cysic and Inference Labs partnered to build scalable infrastructure for verifiable AI applications, combining decentralized compute with verification frameworks designed for real-world uses.

The institutional imperative for verifiable computation is clear. As noted in analysis by Alexis M. Adams, the transition to deterministic AI infrastructure is the only viable pathway for organizations to meet the multi-jurisdictional demands of the EU AI Act, US state-level frontier laws, and the rising expectations of the cyber insurance market.

The global AI governance market reflects this urgency: valued at approximately $429.8 million in 2026, it's projected to reach $4.2 billion by 2033, according to the same analysis.

But verification faces a critical gap. As Keyrus identifies, AI deployment requires trusting digital identities, but enterprises cannot validate who—or what—is actually operating AI systems. When organizations cannot reliably distinguish legitimate AI agents from adversary-controlled imposters, they cannot confidently grant AI systems access to sensitive data or decision authority.

This is where the convergence of zkKYC, zkTLS, and verifiable computation creates a complete solution. AI agents can prove their identity (zkKYC), prove they retrieved data correctly from authorized sources (zkTLS), and prove they computed results correctly (verifiable computation)—all without exposing sensitive business logic or training data.

The Institutional Push Toward Compliance

These technologies aren't emerging in a vacuum. Institutional demand for compliant privacy infrastructure is accelerating, driven by regulatory pressures and business necessity.

Large financial institutions recognize that without privacy, their blockchain strategies will stall. According to WEEX Crypto News, institutional investors require confidentiality to prevent front-running of their strategies, yet they must satisfy strict AML/KYC mandates. Zero-Knowledge Proofs are gaining traction as a solution, allowing institutions to prove compliance without revealing sensitive underlying data to the public blockchain.

The regulatory landscape of 2026 leaves no room for ambiguity. The EU AI Act reaches general application in 2026, and regulators across jurisdictions expect documented governance programs, not just policies, according to SecurePrivacy.ai. Full enforcement applies to high-risk AI systems used in critical infrastructure, education, employment, essential services, and law enforcement.

In the United States, by the end of 2025, 19 states enforced comprehensive privacy laws, with several new statutes taking effect in 2026, complicating multi-state privacy compliance obligations. Colorado and California have added "neural data" (and Colorado also added "biological data") to "sensitive" data definitions, as reported by Nixon Peabody.

This regulatory convergence creates a powerful incentive: organizations that build on compliant, verifiable infrastructure gain competitive advantage, while those clinging to privacy maximalism find themselves shut out of institutional markets.

Data Integrity as the Operating System for AI

Beyond compliance, verifiable computation enables something more fundamental: data integrity as the operating system for responsible AI.

As Precisely notes, in 2026, governance won't be something organizations layer on after deployment—it will be built into how data is structured, interpreted, and monitored from the start. Data integrity will serve as the operating system for responsible AI. From semantic clarity and explainability to compliance, auditability, and control over AI-generated data, integrity will determine whether AI can scale safely and deliver lasting value.

This shift has profound implications for how AI agents operate on blockchain networks. Rather than opaque black boxes, AI systems become auditable, verifiable, and governable by design. Smart contracts can enforce constraints on AI behavior, verify computational correctness, and create immutable audit trails—all while preserving the privacy of proprietary algorithms and training data.

The MIT Sloan Management Review identifies this as one of five key trends in AI and data science for 2026, noting that trustworthy AI requires verifiable provenance and explainable decision-making processes.

Decentralized Identity: The Foundation Layer

Underlying these technologies is a broader shift toward decentralized identity and Verifiable Credentials. As Indicio explains, decentralized identity changes the equation—instead of verifying personal data in a central location, individuals hold their data and share it with consent that can be independently verified using cryptography.

This model inverts traditional identity systems. Rather than creating numerous copies of identity documents scattered across databases, users maintain a single verifiable credential and selectively disclose only the specific attributes required for each interaction.

For AI agents, this model extends beyond human identity. Agents can possess verifiable credentials attesting to their training provenance, operational parameters, audit history, and authorization scope. This creates a trust framework where agents can interact autonomously while remaining accountable.

From Experimentation to Deployment

The key transformation in 2026 is the transition from theoretical frameworks to production deployments. According to XT Exchange's analysis, by 2026, decentralized AI is moving beyond experimentation and into practical deployment. However, key constraints remain, including scaling AI workloads, preserving data privacy, and governing open AI systems.

These constraints are precisely what DePAI infrastructure addresses. By combining zkKYC for identity, zkTLS for data verification, and verifiable computation for AI operations, the infrastructure creates a complete stack for deploying AI agents that are simultaneously:

  • Privacy-preserving for users and businesses
  • Compliant with regulatory requirements
  • Verifiable and auditable by design
  • Scalable for institutional workloads

The Road Ahead: Building Composable Privacy

The final piece of the DePAI puzzle is composability. As Blockmanity reports, 2026 marks the moment when blockchain becomes "just the plumbing" for AI agents and global finance. The infrastructure must be modular, interoperable, and invisible to end users.

Pragmatic privacy tools excel at composability. An AI agent can:

  1. Authenticate using zkKYC credentials
  2. Fetch verified external data via zkTLS
  3. Perform computations with verifiable inference
  4. Submit results on-chain with zero-knowledge proofs of correctness
  5. Maintain audit trails without exposing sensitive logic

Each layer operates independently, allowing developers to mix and match privacy-preserving technologies based on specific requirements. A DeFi protocol might require zkKYC for user onboarding, zkTLS for fetching price feeds, and verifiable computation for complex financial calculations—all working seamlessly together.
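
Strung together, the flow might look like the following sketch, where every function is a hypothetical placeholder for the corresponding infrastructure layer:

```python
# End-to-end sketch of the composable privacy pipeline described above.
# Every function is a hypothetical placeholder for the layer it names.

def authenticate(agent_credential: bytes) -> bool:          # 1. zkKYC layer
    return True  # stub: verify a zero-knowledge KYC credential

def fetch_verified(url: str) -> dict:                       # 2. zkTLS layer
    return {"price": 101.5, "attestation": b"..."}  # stub: provably fetched data

def infer_with_proof(inputs: dict) -> tuple[dict, bytes]:   # 3. verifiable compute
    return {"action": "rebalance"}, b"proof"  # stub: result plus correctness proof

def submit_onchain(result: dict, proof: bytes) -> str:      # 4-5. settlement + audit trail
    return "0xtxhash"  # stub: post result with its zero-knowledge proof

if authenticate(b"agent-credential"):
    data = fetch_verified("https://feeds.example.com/eth-usd")
    result, proof = infer_with_proof(data)
    print("submitted:", submit_onchain(result, proof))
```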

This composability extends across chains. Privacy infrastructure built with interoperability standards can function across Ethereum, Solana, Sui, Aptos, and other blockchain networks, creating a universal layer for compliant, private, verifiable computation.

Why This Matters for Builders

For developers building the next generation of blockchain applications, DePAI infrastructure represents both an opportunity and a requirement.

The opportunity: First-mover advantage in building applications that institutions actually want to use. Financial institutions, healthcare providers, government agencies, and enterprises all need blockchain solutions, but they cannot compromise on compliance or privacy. Applications built on pragmatic privacy infrastructure can serve these markets.

The requirement: Regulatory environments are converging on mandates for verifiable, governable AI systems. Applications that cannot demonstrate compliance, auditability, and user privacy protection will find themselves excluded from regulated markets.

The technical capabilities are maturing rapidly. zkKYC solutions are production-ready with major financial institutions conducting pilots. zkTLS implementations are processing real-world data. Verifiable computation frameworks are scaling to handle institutional workloads.

What's needed now is developer adoption. The transition from experimental privacy tools to production infrastructure requires builders to integrate these technologies into applications, test them in real-world scenarios, and provide feedback to infrastructure teams.

BlockEden.xyz provides enterprise-grade RPC infrastructure for blockchain networks implementing privacy-preserving technologies. Explore our services to build on foundations designed for the DePAI era.

Conclusion: Privacy's Pragmatic Future

The DePAI explosion in 2026 represents more than technological progress. It signals a maturation of blockchain's relationship with privacy, compliance, and institutional adoption.

The industry is moving beyond ideological battles between privacy maximalists and transparency absolutists. Pragmatic privacy acknowledges that different contexts demand different privacy guarantees, and that regulatory compliance and user privacy can coexist through thoughtful cryptographic design.

zkKYC proves identity without exposing it. zkTLS verifies data without trusting intermediaries. Verifiable computation proves correctness without revealing algorithms. Together, these technologies create an infrastructure layer where AI agents can operate autonomously, enterprises can adopt blockchain confidently, and users retain control over their data.

This isn't a compromise on privacy principles. It's a recognition that privacy, to be meaningful, must be sustainable within the regulatory and business realities of global finance. Absolute privacy that gets banned, delisted, and excluded from institutional use doesn't protect anyone. Pragmatic privacy that enables both confidentiality and compliance actually delivers on blockchain's promise.

The builders who recognize this shift and build on DePAI infrastructure today will define the next era of decentralized applications. The tools are ready. The institutional demand is clear. The regulatory environment is crystallizing. 2026 is the year pragmatic privacy goes from theory to deployment—and the blockchain industry will be stronger for it.


Sources