Decentralized Storage Services: Arweave, Pinata, and Comparative Analysis
Decentralized storage networks aim to address issues of data impermanence, censorship, and centralization by distributing data across peer-to-peer networks. Traditional web content is surprisingly ephemeral – for example, studies indicate over 98% of the internet becomes inaccessible after 20 years, highlighting the need for resilient long-term storage. Vendors like Arweave and Pinata (built on IPFS) have emerged to offer permanent or distributed storage solutions, alongside others such as Filecoin, Storj, Sia, Ceramic, and the underlying IPFS protocol. This report analyzes these services in terms of: (1) technical architecture and capabilities, (2) pricing models, (3) developer experience, (4) user adoption, (5) ecosystem maturity, and (6) key use cases (e.g. NFT metadata hosting, dApp backends, archival data, content delivery). Comparative tables and examples are provided to illustrate differences. All sources are linked to official documentation or authoritative analyses.
1. Technical Capabilities and Architecture
Arweave: Arweave is a blockchain-like permanent storage network built on a novel Blockweave data structure. Unlike traditional blockchains that link blocks linearly, Arweave’s blockweave links each block to its immediate predecessor and a random earlier block, creating a web-like structure. This design (coupled with a Succinct Proof of Random Access (SPoRA) consensus) means miners must verify random old data to mine new blocks, incentivizing them to store as much of the archive as possible. The result is high redundancy – in fact, there are currently approximately 200 replicas of the entire Arweave dataset distributed globally. Data uploaded to Arweave becomes part of this “Permaweb” and is immutable and permanent. To improve performance and scalability, Arweave uses Bundling (combining many small files into one transaction) to handle large data throughput (e.g. an Arweave bundler once stored 47 GB of data in a single tx). A Wildfire mechanism ranks nodes by responsiveness to encourage fast data propagation across the network. Overall, Arweave acts as a decentralized hard drive – storing data permanently on-chain, with the expectation that storage costs will keep dropping so that miners can be paid forever from an upfront endowment.
IPFS and Pinata: The InterPlanetary File System (IPFS) provides a content-addressed, peer-to-peer file system for distributed data storage and sharing. Data on IPFS is identified by a content hash (CID) and retrieved via a global distributed hash table (DHT). By design, IPFS itself is file-sharing infrastructure – it does not guarantee persistence of data unless nodes explicitly continue to host (“pin”) the content. Services like Pinata build on IPFS by providing pinning and bandwidth: Pinata runs IPFS nodes that pin your data to keep it available, and offers a fast HTTP gateway with CDN integration for quick retrieval (often called “hot storage” for frequently accessed data). Technically, Pinata’s architecture is centralized cloud infrastructure backing the decentralized IPFS network – your files are distributed via IPFS (content-addressed, and retrievable by any IPFS peer), but Pinata ensures high availability by keeping copies on their servers and caching through dedicated gateways. Pinata also offers private IPFS networks (for isolated use), an IPFS-backed key-value data store, and other developer tools, all of which leverage IPFS under the hood. In summary, IPFS+Pinata provides a decentralized storage protocol (IPFS) with a managed service layer (Pinata) to handle reliability and performance.
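To make content addressing concrete, here is a minimal sketch of how an IPFS-style CID can be derived from content rather than from a location. It assumes the `multiformats` JavaScript package with its JSON codec and SHA-256 hasher; the object being hashed is purely illustrative.

```typescript
// Deriving a CIDv1 for a small JSON value: the identifier is a hash of the
// content itself, so any node holding these exact bytes can serve them.
import { CID } from 'multiformats/cid';
import * as json from 'multiformats/codecs/json';
import { sha256 } from 'multiformats/hashes/sha2';

async function cidForJson(value: unknown): Promise<CID> {
  const bytes = json.encode(value);          // serialize the value to bytes
  const digest = await sha256.digest(bytes); // hash the bytes
  return CID.create(1, json.code, digest);   // CIDv1 = version + codec + multihash
}

// Changing a single character of the content yields a completely different CID,
// which is why pinning services and NFT metadata can reference data immutably.
cidForJson({ name: 'artwork.png', pinned: true }).then((cid) => console.log(cid.toString()));
```

Pinning, whether on Pinata or on a self-hosted node, is then simply a commitment to keep the bytes behind that CID available.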
Filecoin: Filecoin is often considered the incentive layer for IPFS. It’s a blockchain-powered decentralized storage network where storage providers (miners) rent out disk space in an open market. Filecoin uses a novel Proof-of-Replication (PoRep) to ensure a miner has saved unique copies of client data, and Proof-of-Spacetime (PoSt) to continuously verify that the data remains stored over time. These proofs, built on zero-knowledge proofs, are recorded on Filecoin’s blockchain, giving a cryptoeconomic guarantee that data is being stored as agreed. The Filecoin network is built on IPFS technology for content addressing and data transfer, but adds smart contracts (“storage deals”) enforced on-chain. In a storage deal, a user pays a miner in Filecoin (FIL) to store data for a specified duration. Miners put up collateral that can be slashed if they fail to prove storage, ensuring reliability. Filecoin does not automatically make data public; users typically combine it with IPFS or other retrieval networks for content delivery. It is scalable and flexible – large files can be split and stored with multiple miners, and clients can choose redundancy level by making deals with multiple providers for the same data to guard against node failure. This design favors bulk storage: miners optimize for large datasets, and retrieval speed might involve separate “retrieval miners” or use of IPFS caches. In essence, Filecoin is like a decentralized Amazon S3 + Glacier: a storage marketplace with verifiable durability and user-defined redundancy.
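To illustrate what a storage deal pins down, here is a purely conceptual TypeScript sketch, not Filecoin's actual on-chain types or the Lotus API; the field names and the fault window are illustrative assumptions.

```typescript
// Conceptual sketch only: illustrative names, not Filecoin's real data structures.
// A deal binds a client and a provider to a piece of data for a fixed term,
// with provider collateral at stake if proofs of storage stop arriving.
interface StorageDeal {
  pieceCid: string;            // content identifier of the stored piece
  client: string;              // address paying for storage in FIL
  provider: string;            // storage provider accepting the deal
  startEpoch: number;          // when the deal becomes active
  endEpoch: number;            // deals have fixed terms and must be renewed
  pricePerEpoch: bigint;       // escrowed payment released as storage is proven
  providerCollateral: bigint;  // slashed if the provider fails its proofs
  verifiedDeal: boolean;       // Filecoin Plus deals carry extra incentives
}

// The chain expects ongoing Proof-of-Spacetime submissions; a provider that
// misses its proving window is faulted and eventually loses collateral.
function isFaulted(lastProvenEpoch: number, currentEpoch: number, provingWindow = 60): boolean {
  return currentEpoch - lastProvenEpoch > provingWindow; // window value is made up
}
```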
Storj: Storj is a distributed cloud object storage network that does not use a blockchain for consensus but instead coordinates storage through a decentralized network of nodes and a satellite metadata service. When a file is uploaded to Storj (via their service called Storj DCS – Decentralized Cloud Storage), it is first client-side encrypted and then erasure-coded into 80 pieces (by default) such that only a subset (e.g. 29 of 80 pieces) is needed to reconstruct the file. These encrypted pieces are distributed to diverse Storage Nodes all over the world (each node only holds random fragments, not useful data by itself). This gives Storj extremely high durability (claimed 11×9s durability – 99.999999999% data survival) and also parallelism in downloads – a user fetching a file can retrieve pieces from dozens of nodes simultaneously, often improving throughput. Storj uses a proof-of-retrievability concept (storage nodes periodically audit that they still have their pieces). The network operates on a zero-trust model with end-to-end encryption: only the file owner (who holds the decryption key) can read the data. The architecture has no central data center – instead it taps into existing excess disk capacity provided by node operators, which improves sustainability and global distribution (Storj notes this yields CDN-like performance and much lower carbon footprint). Coordination (file metadata, payment) is handled by “satellites” run by Storj Labs. In summary, Storj’s technical approach is encrypted, sharded, and distributed object storage, delivering high redundancy and download speeds comparable to or better than traditional CDNs, without a blockchain consensus but with cryptographic audits of storage.
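The durability and overhead numbers follow from the erasure-coding parameters quoted above (29 of 80 pieces needed). The sketch below is a rough illustration, not Storj's actual durability model (which also accounts for audits and continuous repair): it computes the storage expansion factor and the chance a file becomes unrecoverable if each piece were independently lost with some probability.

```typescript
// Rough illustration of a 29-of-80 erasure-coding scheme.
// Parameters come from the text; independent piece loss is a simplifying assumption.
const k = 29;              // pieces required to reconstruct a file
const n = 80;              // pieces stored across distinct nodes
const expansion = n / k;   // storage overhead vs. one plain copy (~2.76x)

// P(file lost) = P(at least n - k + 1 pieces are lost) under independent loss.
function binomialTail(trials: number, minLosses: number, p: number): number {
  let prob = 0;
  let coeff = 1; // C(trials, 0)
  for (let i = 0; i <= trials; i++) {
    if (i >= minLosses) prob += coeff * Math.pow(p, i) * Math.pow(1 - p, trials - i);
    coeff = (coeff * (trials - i)) / (i + 1); // C(trials, i) -> C(trials, i + 1)
  }
  return prob;
}

const pieceLossProb = 0.1; // hypothetical: 10% of pieces gone before repair runs
const fileLossProb = binomialTail(n, n - k + 1, pieceLossProb); // need 52+ losses
console.log({ expansion, fileLossProb }); // fileLossProb is vanishingly small
```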
Sia: Sia is another decentralized cloud storage platform, utilizing its own blockchain and cryptocurrency (Siacoin) to form storage contracts. Sia splits files into 30 encrypted shards using Reed–Solomon erasure coding, and requires any 10 of those shards to recover the file (providing built-in 3x redundancy). Those shards are stored on independent hosts across the network. Sia’s blockchain is Proof-of-Work and is used to enforce smart contracts between renters and hosts. In a Sia storage contract, the renter locks up Siacoin for a period and the host puts up collateral; the host must periodically submit storage proofs (similar in spirit to Filecoin’s proofs) that they are storing the data, or they lose their collateral. At contract end, hosts are paid from the escrowed funds (and a small portion goes to Siafund holders as a protocol fee). This mechanism ensures hosts have economic incentive and penalties to reliably store data. Sia’s design emphasizes privacy (all data is end-to-end encrypted; hosts cannot see user files) and censorship-resistance (no central server). Like Storj, Sia enables parallel downloads of file pieces from multiple hosts, improving speed and uptime. However, Sia does require users to renew contracts periodically (default contracts last 3 months) to maintain storage, meaning data is not “permanent” unless the user continually pays. Sia also earlier introduced a web-centric layer called Skynet: Skynet provided content addressing (via “skylinks”) and web portals for easy retrieval of Sia-hosted content, effectively acting as a decentralized CDN for Sia files. In summary, Sia’s architecture is blockchain-secured cloud storage with strong redundancy and privacy, suitable for “hot” data (fast retrieval) in a decentralized manner.
Ceramic: Ceramic is a bit different – it is a decentralized network for mutable data streams rather than bulk file storage. It targets use-cases like dynamic JSON documents, user profiles, identities (DIDs), social content, etc. that need to be stored in a decentralized way but also updated frequently. Ceramic’s protocol uses cryptographically signed events (updates) that are anchored to a blockchain for ordering. In practice, data on Ceramic is stored as “streams” or smart documents – each piece of content lives in a stream that can be updated by its owner (with verifiable history of versions). Under the hood, Ceramic uses IPFS for content storage of each update, and an event log is maintained so that all nodes can agree on the latest state of a document. The consensus comes from anchoring stream updates onto an underlying blockchain (originally Ethereum) to get an immutable timestamp and ordering. There is no native token; nodes simply replicate data for the dapps using Ceramic. Technical features include DID (decentralized identity) integration for update authentication and global schemas (data models) to ensure interoperable formats. Ceramic is designed to be scalable (each stream’s state is maintained independently, so there’s no global “ledger” of all data, avoiding bottlenecks). In summary, Ceramic provides decentralized databases and mutable storage for Web3 applications – it’s complementary to the file storage networks, focusing on structured data and content management (whereas networks like Arweave/Filecoin/Storj focus on static file blobs).
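To clarify the stream model, here is a purely conceptual sketch, not Ceramic's real types or ComposeDB API: a mutable document is represented as an append-only log of signed commits whose ordering is periodically anchored to a blockchain.

```typescript
// Conceptual sketch only: illustrative shapes, not Ceramic's actual data structures.
interface Commit {
  prev: string | null;             // CID of the previous commit (null for genesis)
  data: Record<string, unknown>;   // the new or patched content for this version
  controller: string;              // DID allowed to update this stream
  signature: string;               // signature over the commit by the controller
}

interface AnchorProof {
  chain: string;                   // e.g. an Ethereum chain ID
  txHash: string;                  // transaction committing a batch of commits
  timestamp: number;               // block time, giving verifiable ordering
}

interface Stream {
  id: string;                      // derived from the genesis commit
  log: Commit[];                   // full, verifiable version history
  anchors: AnchorProof[];          // periodic on-chain checkpoints
}

// The current document state is just the log replayed in order
// (signature and controller checks are omitted in this sketch).
function currentState(stream: Stream): Record<string, unknown> {
  return stream.log.reduce((state, commit) => ({ ...state, ...commit.data }), {});
}
```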
Summary of Architectures: The table below compares key technical aspects of these systems:
Project | Architecture & Mechanism | Data Persistence | Redundancy | Performance |
---|---|---|---|---|
Arweave | Blockchain “Blockweave”; Proof of Access (SPoRA) consensus. All data on-chain (permaweb). | Permanent (one-time on-chain storage). | Very high – essentially 200+ full replicas across network (miners store old blocks to mine new ones). | Write: moderate (on-chain tx, bundling helps throughput); Read: via gateways (decentralized web, slightly slower than CDN). |
IPFS (protocol) | P2P content-addressed file system; DHT for locating content. No built-in consensus or payments. | Ephemeral (content persists only if pinned on some node). | Configurable – depends on how many nodes pin the data. (No default replication). | Write: immediate add on local node; Read: potentially fast if content is nearby, otherwise needs DHT discovery (can be slow without a pinning service). |
Pinata (service) | Managed IPFS pinning cluster + HTTP gateways. Centralized cloud ensures files stay online, built on IPFS protocol. | As long as Pinata (or user’s nodes) pins the data (subscription-based persistence). | Pinata likely stores multiple copies across their infrastructure for reliability (details proprietary). | Write: fast uploads via API/SDK; Read: fast CDN-backed gateway (suitable for hot content). |
Filecoin | Blockchain with Proof-of-Replication + Proof-of-Spacetime. Content addressed (IPFS), deals via smart contracts. | User-defined duration (e.g. 6-month or 2-year deals, extendable). Not permanent unless continuously renewed. | User can choose number of copies (deals with multiple miners) – e.g. NFT.Storage uses 6× redundancy for each NFT file. Network capacity is huge (EB scale). | Write: batched into sectors, higher latency for initial storage; Read: not instantaneous unless data cached – often served via IPFS gateways or emerging retrieval nodes (Filecoin is improving here). |
Storj | Distributed cloud with erasure coding (80 pieces per file) and audits (proofs of retrievability). Central coordination via Satellites (not blockchain). | As long as the user pays for service (data automatically repaired if nodes drop). Providers are paid in STORJ tokens or USD. | Very high – 80 shards spread globally; file tolerates ~50/80 node failures. Network auto-heals by replicating shards if a node quits. | Write: high throughput (uploads are parallelized to many nodes); Read: very fast – downloads pull from up to 80 nodes, and automatically skip slow nodes (“long-tail elimination” for performance). |
Sia | Blockchain with smart contracts for storage. Erasure-coded 10-of-30 scheme (any 10 of 30 shards recover a file); Proof-of-Work chain for contract enforcement. | Time-bound contracts (typically 3 months); users renew to maintain storage. Not perpetual by default. | ~3× redundancy (30 shards stored, 10 needed). Hosts may geographically diversify; network also replicates shards to new hosts if one goes offline. | Write: moderate (uploads require forming contracts and splitting data); subsequent updates need renewing contracts. Read: fast parallel fetch from 10+ hosts; Skynet HTTP portals enabled CDN-like retrieval for public data. |
Ceramic | Event stream network on top of IPFS; data updates anchored periodically to a blockchain for ordering. No mining – nodes replicate streams of interest. | Data exists as long as at least one node (often developer-run or community) stores the stream. No token incentives (uses a community-run model). | Depending on adoption – popular data models likely on many nodes. Generally not for large files, but for pieces of data in many apps (which encourages widespread replication of shared streams). | Write: near-real-time for updates (just needs to propagate to a few nodes + anchor, which is efficient); Read: fast, queryable via indexing nodes (some use GraphQL). Ceramic is optimized for many small transactions (social posts, profile edits) at web scale. |
2. Pricing Models
Despite similar goals of decentralized storage, these services use different pricing and economic models:
- Arweave Pricing: Arweave requires a one-time upfront payment in AR tokens to store data *forever*. Users pay for at least 200 years of storage for the data, and the protocol places ~86% of that fee into an endowment fund. The endowment’s accrual (through interest and appreciating value of AR) is designed to pay storage miners indefinitely, under the assumption that hardware costs decline over time (historically ~30% cheaper per year). In practical terms, the price fluctuates with AR’s market price, but as of 2023 it was around $3,500 per 1 TB one-time (note: this buys permanent storage, whereas traditional cloud is a recurring cost). Arweave’s model shifts burden upfront: users pay more initially, but then nothing thereafter. This can be costly for large data, but it guarantees permanence without needing to trust a provider in the future.
- Pinata (IPFS) Pricing: Pinata uses a subscription model (fiat pricing) common in Web2 SaaS. It offers a Free tier (up to 1 GB storage, 10 GB/month bandwidth, 500 files) and paid plans. The popular “Pinata *Picnic*” plan is $20/month which includes 1 TB of pinned storage and 500 GB bandwidth, with overage rates of ~$0.07 per GB for storage and $0.10/GB for bandwidth. A higher “Fiesta” plan at $100/month raises this to 5 TB storage and 2.5 TB bandwidth, with even cheaper overages. All paid tiers include features like custom gateways, increased API request limits, and collaboration (multi-user workspaces) at additional cost. There is also an enterprise tier with custom pricing. Pinata’s costs are thus predictable monthly fees, similar to cloud storage providers, and not token-based – it abstracts IPFS into a familiar pricing structure (storage + bandwidth, with free CDN caching in gateways).
- Filecoin Pricing: Filecoin operates as an open market, so prices are determined by supply and demand of storage miners, typically denominated in the native FIL token. In practice, due to abundant supply, Filecoin storage has been extremely cheap. As of mid-2023, storing data on Filecoin costs on the order of $2.33 for 1 TB per year – significantly cheaper than centralized alternatives (AWS S3 is ~$250/TB/yr for frequently accessed storage) and even other decentralized options. However, this rate is not fixed – clients post bids and miners offer asks; the market price can vary. Filecoin storage deals also have a specified duration (e.g. 1 year); if you want to keep data beyond the term, you must renew (pay again) or make long duration deals up front. There is also a concept of Filecoin Plus (FIL+), an incentive program that gives “verified” clients (storing useful public data) a bonus to attract miners at lower effective cost. In addition to storage fees, users may pay small FIL for retrieval on a per-request basis, though retrieval markets are still developing (many rely on free retrieval via IPFS for now). Importantly, Filecoin’s tokenomics (block rewards) heavily subsidize miners – block rewards in FIL supplement the fees paid by users. This means today’s low prices are partly due to inflationary rewards; over time, as block rewards taper, storage fees may adjust upward. In summary, Filecoin’s pricing is dynamic and token-based, generally very low cost per byte, but users must manage renewals and FIL currency risk.
- Storj Pricing: Storj is priced in traditional currency terms (though payments can be made in fiat or STORJ token). It follows a usage-based cloud pricing model: currently $4.00 per TB-month for storage, and $7.00 per TB of egress bandwidth. In granular terms, that is $0.004 per GB-month for data stored, and $0.007 per GB downloaded. There is also a tiny charge per object (segment) stored to account for metadata overhead (about $0.0000088 per segment per month), which only matters if you store millions of very small files. Notably, ingress (uploads) is free, and Storj has a policy of waiving egress fees if you decide to migrate out (to avoid vendor lock-in). Storj’s pricing is transparent and fixed (no bidding markets), and substantially undercuts traditional cloud (they advertise ~80% savings vs AWS, due to no need for regional replication or large data center overhead). End-users don’t have to interact with tokens if they don’t want to – you can simply pay your usage bill in USD. Storj Labs then compensates node operators with STORJ tokens (the token supply is fixed and operators bear some price volatility). This model makes Storj developer-friendly in pricing while still leveraging a token for the decentralized payouts under the hood.
- Sia Pricing: Sia’s storage market is also algorithmic and token-denominated, using Siacoin (SC). Like Filecoin, renters and hosts agree on prices via the network’s market, and historically Sia has been known for extremely low costs. In early years, Sia advertised storage at ~$2 per TB per month, though actual prices depend on host offerings. One Reddit community calculation in 2020 found the true cost around $1-3/TB-month for renters, excluding redundancy overhead (with redundancy, effective cost might be a few times higher, e.g. $7/TB-month when accounting for the 3x redundancy) – still very cheap. As of Q3 2024, storage prices on Sia rose ~22% QoQ due to increased demand and SC token fluctuations, but remain far below centralized cloud prices. Renters on Sia also need to allocate some SC for bandwidth (upload/download) and collateral. The economics are such that hosts compete to offer low prices (since they want to attract contracts and earn SC), and renters benefit from that competition. However, because using Sia requires operating a wallet with Siacoin and dealing with contract setup, it’s a bit less user-friendly to calculate costs than, say, Storj or Pinata. In short, Sia’s costs are token-market-driven and very low per TB, but the user must continually pay (with SC) to extend contracts. There is no upfront lump-sum for perpetuity – it’s a pay-as-you-go in crypto form. Many users obtain SC through an exchange and then can lock in contracts for months of storage at predetermined rates.
- Ceramic Pricing: Ceramic does not charge for usage at a protocol level; there is no native token or fee to create/update streams beyond the minor gas cost of anchoring updates on the Ethereum blockchain (which is typically handled by Ceramic’s infrastructure and is negligible per update when batched). Running a Ceramic node is an open activity – anyone can run one to index and serve data. 3Box Labs (the team behind Ceramic) did offer a hosted service for developers (Ceramic Cloud), which might introduce enterprise pricing for convenience, but the network itself is free to use aside from the effort of running a node. Thus, the “price” of Ceramic is mainly the operational cost developers incur if they self-host nodes or the trust cost if using a third-party node. In essence, Ceramic’s model is more akin to a decentralized database or blockchain RPC service – monetization (if any) is through value-added services, not micropayments for data. This makes it attractive for developers to experiment with dynamic data storage without needing a token, but it also means ensuring long-term node support (since altruistic or grant-based nodes are providing the storage).
Pricing Summary: The table below summarizes the pricing and payment models:
Service | Pricing Model | Cost Example | Payment Medium | Notes |
---|---|---|---|---|
Arweave | One-time upfront fee for perpetual storage. | ~$3,500 per TB once (for indefinite storage). Smaller files cost proportionally (e.g. ~$0.035 per MB). | AR token (crypto). | 86% of fee to endowment for future miner incentives. No recurring fees; user shoulders cost upfront. |
Pinata | Subscription tiers + usage overages. | Free: 1 GB; $20/mo: 1 TB storage + 0.5 TB bandwidth included; $100/mo: 5 TB + 2.5 TB BW. Overages: ~$0.07/GB storage, $0.08-0.10/GB egress. | USD (credit card) – no crypto required. | Simple Web2-style pricing. Billed monthly. “Unlimited files” (count) on paid plans, just limited by total GB. Enterprise plans available. |
Filecoin | Open market bidding with prices in FIL. Block rewards subsidize storage (low user cost). | ~$2.33 per TB/year (market rate mid-2023). Prices vary; some miners even offer near-zero cost for verified data (earning mainly block rewards). | FIL cryptocurrency. Some services (e.g. NFT.storage) abstract this and offer “free” storage backed by Filecoin deals. | Renewal needed at contract end (e.g. 1 year). Users must maintain FIL balance. Network has huge supply, keeping prices low. Retrieval deals (if any) also in FIL. |
Storj | Fixed utility pricing (usage-based). | $4.00 per TB-month storage, $7.00 per TB egress. Free ingress, free repair, minimal per-file metadata fee. | USD (can pay by credit card or STORJ token; payouts to node operators in STORJ). | Post-pay billing (with credits for free tier/trial). Clear predictable costs and significantly cheaper than AWS/Google Cloud. |
Sia | Decentralized market in Siacoin. | ~$1–3 per TB/month historically (excluding redundancy overhead). With 3× redundancy, effective ~$3–7/TB/month to the user. | Siacoin (SC) cryptocurrency. Users must acquire SC to form contracts. | No set price – user software auto-chooses hosts by price. Very cheap, but requires ongoing payments (e.g. fund an allowance for N months). Hosts may also charge for bandwidth in SC. |
Ceramic | No direct fees for data – open network. | N/A (No cost per stream or per update; you mainly pay indirectly for any Ethereum tx fees for anchoring, often cents). | N/A (Protocol has no token; some nodes might charge for hosting data on behalf of users, but core is free). | Ceramic is run by community and the developing company’s nodes. Pricing is not an obstacle – monetization could come from SaaS offerings around Ceramic (if using a hosted API endpoint, e.g. Infura-style). |
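To make the differences concrete, the sketch below runs a back-of-the-envelope comparison using the approximate figures quoted above. The hard-coded prices are assumptions taken from this section (roughly 2023 levels), and bandwidth, redundancy choices, and token volatility are deliberately ignored.

```typescript
// Approximate cost of keeping 1 TB stored for `years`, using prices quoted in
// this section as fixed assumptions (not live quotes). Egress is excluded.
function costOfOneTB(years: number) {
  return {
    arweave: 3500,              // one-time payment; permanent thereafter
    pinata: 20 * 12 * years,    // $20/mo plan includes 1 TB of pinned storage
    filecoin: 2.33 * years,     // ~$2.33 per TB-year at mid-2023 market rates
    storj: 4 * 12 * years,      // $4 per TB-month; egress billed separately
    sia: 2 * 12 * years,        // ~$1-3 per TB-month historically; midpoint used
  };
}

console.log(costOfOneTB(1));    // after one year, Arweave's upfront fee dominates
console.log(costOfOneTB(20));   // over decades, recurring fees keep accumulating
                                // while Arweave's one-time fee stays fixed
```

The crossover point depends entirely on how long the data must live, which is why archival use cases gravitate toward Arweave while actively served data tends toward the subscription and market-priced models.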
3. Developer Experience
A key factor for adoption is how easily developers can integrate these storage solutions – via APIs, SDKs, documentation, and tooling:
- Arweave Developer Experience: Arweave provides a GraphQL API endpoint (at `arweave.net/graphql`) which allows querying the permaweb for transactions and data – developers can search stored content by tags, wallet addresses, etc. There are official SDKs like Arweave.js for browser and Node.js that simplify uploading files and posting transactions to the network. For example, a developer can use the Arweave SDK to bundle and upload a file with just a few lines of code (a minimal upload sketch with arweave-js appears after this list). Because each upload is an on-chain transaction, the UX for large-scale uploads was historically challenging, but the introduction of the Bundlr Network (since rebranded to Irys) has greatly improved throughput. Bundlr/Irys is essentially a network of bundling nodes that lets developers pay once and upload many files off-chain, then periodically commit them to Arweave in bulk. This allows dApps (especially NFT platforms) to upload thousands of files quickly without spamming the chain, while still getting eventual permanence. Arweave’s tooling ecosystem also includes the Arweave Deploy CLI and ArDrive (a user-friendly app for file management on Arweave). The Permaweb concept extends to hosting web apps – developers can deploy HTML/JS to Arweave via community deploy tools such as arkb, and have it available at a permanent URL. Documentation for Arweave is extensive, covering how to price uploads (there’s even a calculator), how to retrieve data (via gateways or running a lightweight node), and community-made “cookbooks” for common tasks. One learning curve is handling the wallet key for signing transactions; Arweave uses RSA-based keys that developers manage (though web wallets and cloud key management solutions exist). Overall, dev experience is improving as Arweave matures, with reliable SDKs, a straightforward query interface (GraphQL), and community tooling. One noteworthy aspect: since users pay in AR, developers must integrate a crypto payment flow – some solve this by pre-paying for users or using third-party services that accept credit cards and convert to AR.
- Pinata Developer Experience (IPFS): Pinata is built with developers in mind – its slogan is “Add IPFS file uploads and retrieval in minutes” and it provides a simple REST API and a robust JavaScript SDK. For instance, using Node.js, a dev can `npm install @pinata/sdk` and then call `pinata.pinFileToIPFS(file)` or the newer `pinata.upload` methods to store files on IPFS via Pinata’s service (see the pinning sketch after this list). The SDK handles authentication (Pinata uses API keys or JWTs) and abstracts away running any IPFS node. Pinata’s documentation is clear, with examples for uploading files, pinning by CID (if the content is already on IPFS), and managing pins (unpinning, pin status, etc.). It also supports a content gateway: developers can use a custom subdomain (e.g. `myapp.mypinata.cloud`) to serve content over HTTP, with built-in CDN and even image optimization. This means devs can treat IPFS-stored images almost like they would with Cloudinary or Imgix (Pinata’s image optimizer can resize/crop on the fly via URL parameters). Pinata recently introduced features like “Pinata KV” (key-value storage for JSON or metadata, useful alongside file storage) and Access Controls (to set content as public or restricted). These higher-level features make it easier to build full applications. Additionally, since Pinata is just interfacing with IPFS, developers retain the flexibility to leave – they can always take a CID pinned via Pinata and pin it elsewhere (or on their own node) since IPFS is interoperable. Pinata’s support (guides, community) is well-regarded, and they even partner with Protocol Labs on initiatives like NFT.Storage migration (providing guides to help users move data between services). For those wanting not to touch crypto at all, Pinata is ideal – no blockchain to integrate, just simple API calls and a credit card. The flip side is less decentralization for the integration itself, since you rely on Pinata’s availability and service quality (though your content is still hash-addressed and replicable on IPFS). In summary, Pinata offers excellent DX: easy setup, comprehensive docs, SDKs, and features (gateway, CDN, analytics) that abstract the complexities of IPFS.
- Filecoin Developer Experience: Using Filecoin directly can be complex – it traditionally required running a Filecoin node (e.g. Lotus) and dealing with concepts like sectors, deals, miners, etc. However, the ecosystem has created many developer-facing services and libraries to simplify it. Notably, web3.storage and NFT.storage (by Protocol Labs) allow developers to store data on IPFS with Filecoin backup without needing to handle any FIL tokens or deal mechanics. These services provide a simple API (similar to Pinata’s) – e.g. an NFT project can call NFT.storage’s API to upload an image and metadata; NFT.storage will pin it on IPFS and make Filecoin deals with multiple miners to store it long-term, all free of charge (subsidized by Protocol Labs). This has been a game-changer for dev adoption in the NFT space. Beyond that, there are tools like Estuary, Powergate (from Textile), and Glacier that offer developer-friendly gateways to Filecoin storage. There’s also a growing ecosystem around the Filecoin Virtual Machine (FVM), which launched in 2023, enabling smart contracts on Filecoin – developers can now write programs that run on the Filecoin blockchain, opening up possibilities for data-centric dApps (like auto-renewing storage deals or incentivizing retrieval). For basic storage and retrieval, most devs will use either an IPFS layer on top (thus treating Filecoin as “cold storage” backup) or a hosted solution. It’s worth noting that because Filecoin is an open network, many third-party services exist: e.g. Lighthouse.storage offers a “pay once, store forever” service built on Filecoin (it charges an upfront fee and uses an endowment concept much like Arweave’s, but implemented via Filecoin deals). For developers who want more control, the Filecoin documentation provides libraries (in Go, JavaScript, etc.) to interact with the network, and there are frameworks like Slate (for building user-facing storage apps) and Space (Fleek’s Filecoin+IPFS user storage SDK).
The learning curve is higher than for Pinata or Storj, especially if going low-level – devs must understand content addressing (CIDs), deal lifecycle, and possibly run an IPFS node for fast retrieval. The IPFS docs emphasize that IPFS and Filecoin are complementary; indeed, a dev using Filecoin will nearly always pair it with IPFS for actual data access in their app. So effectively, a Filecoin developer experience often becomes an IPFS developer experience with additional steps for persistence. The ecosystem is large: as of 2022 there were 330+ projects built on Filecoin/IPFS, spanning NFTs, Web3 gaming, metaverse storage, video, and more. This means abundant community examples and support. In summary, Filecoin’s DX ranges from turnkey (NFT.storage) to highly customizable (Lotus and FVM) – it is powerful but can be complex, though the availability of free IPFS+Filecoin storage services has eased adoption for many common use cases.
- Storj Developer Experience: Storj DCS positions itself as a drop-in replacement for traditional object storage. It offers an S3-compatible API – meaning developers can use familiar AWS S3 SDKs or tools (boto3, etc.) by simply pointing the endpoint to Storj’s gateway. This drastically lowers the barrier to entry, as virtually any software that works with S3 (backup tools, file browsers, etc.) can work with Storj with minimal config changes. For those who prefer using Storj’s native interfaces, they provide libraries (in Go, Node, Python, etc.) and a CLI called uplink. The documentation on storj.io and storj.dev is thorough, including example code for common tasks (upload, download, sharing, setting access grants). One unique feature is Storj’s access grant tokens – a security mechanism that encapsulates encryption keys and permissions, enabling client-side trust: a dev can create a limited permission token (say read-only access to a certain bucket) to embed in an app, without exposing root keys. This is developer-friendly for creating sharable links or client-side uploads directly to the network. Storj’s dashboard helps monitor usage, and their support resources (community forum, Slack/Discord) are active with both devs and node operators. Integration guides with third-party services exist – for example, FileZilla (the FTP client) integrated Storj so users can drag-and-drop files to Storj like any server. Rclone, a popular command-line sync tool, also supports Storj out-of-the-box, making it easy for developers to incorporate Storj into data pipelines. Because Storj handles encryption automatically, developers don’t need to implement that themselves – but it also means if they lose their keys, Storj can’t recover the data (a trade-off for zero-trust security). Performance-wise, devs might notice that uploading many tiny files has overhead (due to the segment fee and erasure coding), so best practice is to pack small files together or use multipart upload (similar to how one would use any cloud storage). The learning curve is quite small for anyone familiar with cloud storage concepts, and many are: Storj intentionally mirrors the AWS developer experience where possible (SDKs, docs) but offers the decentralized backend. In essence, Storj provides a familiar DX (S3 API, well-documented SDKs) with the benefits of encryption and decentralization – making it one of the smoother onboarding experiences among decentralized storage options.
- Sia Developer Experience: Sia historically required running a Sia client (daemon) on your machine, which exposes a local API for uploads and downloads. This was manageable but not as convenient as cloud APIs – developers had to incorporate a Sia node in their stack. The Sia team and community have worked on improving usability: for instance, Sia-UI is a desktop app for manual file uploading, and libraries like sia.js exist for interacting with a local node. However, the more significant DX improvement came with Skynet, introduced in 2020. Skynet allowed developers to use public web portals (like siasky.net, skyportal.xyz, etc.) to upload data without running a node; these portals handle the Sia interaction and give back a Skylink (a content hash/ID) that can be used to retrieve the file from any portal. This made using Sia storage as easy as an HTTP API – one could curl a file to a Skynet portal and get a link. Additionally, Skynet enabled hosting web apps (similar to Arweave’s permaweb) – developers built dApps like SkyID (decentralized identity), SkyFeed (social feed), and even entire app marketplaces on Skynet. From a developer standpoint, Skynet’s introduction meant you didn’t have to worry about Siacoin, contracts, or running nodes; you could rely on community-run portals (some free, some commercial) to handle the heavy lifting. There were also SDKs (SkyNet JS, etc.) for integrating this into web apps. The challenge, however, is that the primary backer of Skynet (Skynet Labs) shut down in 2022 due to funding issues, and the community and Sia Foundation have been working to keep the concept alive (open-sourcing portal code, etc.). As of 2025, Sia’s developer experience is bifurcated: if you want maximum decentralization, you run a Sia node and deal with SC and contracts – powerful but relatively low-level. If you want ease of use, you might use a gateway service like Filebase or Skynet portals (if available) to abstract that. Filebase, for instance, is a service that provides an S3-compatible API but actually stores data on Sia (and now other networks too); so a developer could use Filebase like they would Storj or AWS, and under the hood it handles Sia’s mechanics. In terms of docs, Sia has improved its documentation and has an active community channel. They also offer a Host ranking (HostScore) and network stats (SiaStats/SiaGraph) so developers can gauge network health. Another new initiative in Sia is the S5 project, which aims to present Sia storage in a content-addressed way akin to IPFS (with compatibility for S3 too) – this suggests ongoing efforts to streamline developer interaction. Overall, Sia’s DX has historically lagged some others due to the need to handle a blockchain and currency, but with Skynet and third-party integrations, it’s become easier. Developers valuing privacy and control can use Sia with some effort, while others can leverage services on top of Sia for a smoother experience.
- Ceramic Developer Experience: Ceramic targets web3 dApp developers, especially those building social features, identities, or dynamic content. Developers interact with Ceramic by running a Ceramic node or using a hosted node (offered by 3Box Labs or community providers). The key concept is “ComposeDB”, a semantic data layer for Ceramic: devs can define a data model (schema) for their application’s data (e.g. a profile model with name, avatar, etc.), and then use GraphQL queries to store and retrieve that data from Ceramic. Essentially, Ceramic feels like using a database that’s global and decentralized. The Ceramic team provides a CLI and SDK to help bootstrap applications – for example, glaze/JS to manage data models and self.id (an identity SDK) for authenticating users with their crypto wallets/DIDs to control their data. Because it’s relatively new, the tooling is still evolving, but there’s solid documentation and a growing set of example apps (for social networks, blog platforms, credential storage, etc.). One important part of Ceramic DX is DID (Decentralized ID) integration: every update to data is signed by a DID, often using IDX (Identity Index) which 3Box Labs built to manage user identity data across streams. For developers, this means you often incorporate a library like did-js to authenticate users (commonly via their Ethereum wallet, which gives a DID using Ceramic’s did:3 method). Once authenticated, you can read/write that user’s data in Ceramic streams as if it were any database. The learning curve here is understanding decentralized identity and the concept of streams vs tables. However, those familiar with web development will find that ComposeDB’s GraphQL abstractions make it quite natural – you can query Ceramic for all posts in a blog app, for instance, using a GraphQL query that the Ceramic node resolves by looking at the relevant streams. Ceramic’s documentation covers “How it Works”, and emphasizes that it’s not for large files – rather, you store references to IPFS or Arweave for large media, and use Ceramic for metadata, indexes, and user-generated content. In practice, a dApp might use Ceramic for things like user profiles or comments (so they can be updated and shared across platforms), and use Filecoin/IPFS for the big files like images or videos. The community around Ceramic is active, with hackathons and grants, and tools like Orbis (a decentralized Twitter-like protocol built on Ceramic) provide higher-level SDKs for social features. In summary, Ceramic offers a high-level, Web3-native DX: developers work with DIDs, models, and GraphQL, which is quite different from low-level storage management – it’s more akin to building on a decentralized Firebase or MongoDB. For those use cases that need mutable, interoperable data, the developer experience is cutting-edge (if a bit bleeding-edge), and for others it may be unnecessary complexity.
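As referenced in the Arweave item above, a minimal upload with the arweave-js SDK might look like the following sketch. It assumes a funded wallet keyfile on disk; the gateway host, file names, and tag values are illustrative.

```typescript
// Sketch of posting a data transaction with arweave-js (assumes a funded wallet).
import Arweave from 'arweave';
import { readFileSync } from 'node:fs';

const arweave = Arweave.init({ host: 'arweave.net', port: 443, protocol: 'https' });

async function uploadPermanent(path: string, wallet: any): Promise<string> {
  const data = readFileSync(path);
  const tx = await arweave.createTransaction({ data }, wallet);
  tx.addTag('Content-Type', 'image/png');   // tags make the tx queryable via GraphQL
  tx.addTag('App-Name', 'example-app');     // hypothetical tag for this sketch
  await arweave.transactions.sign(tx, wallet);
  await arweave.transactions.post(tx);
  return tx.id;                             // content served at arweave.net/<tx id>
}

const wallet = JSON.parse(readFileSync('wallet.json', 'utf8')); // JWK keyfile contents
uploadPermanent('artwork.png', wallet).then((id) => console.log(`stored in tx ${id}`));
```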
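And here is the corresponding Pinata pinning sketch with the `@pinata/sdk` package referenced above; the API key/secret, file name, and metadata values are placeholders.

```typescript
// Sketch of pinning a local file through Pinata's Node SDK.
import pinataSDK from '@pinata/sdk';
import { createReadStream } from 'node:fs';

const pinata = new pinataSDK('PINATA_API_KEY', 'PINATA_API_SECRET'); // placeholder credentials

async function pinImage(path: string): Promise<string> {
  const result = await pinata.pinFileToIPFS(createReadStream(path), {
    pinataMetadata: { name: 'example-artwork' }, // label shown in the Pinata dashboard
  });
  // The returned CID is portable: the same content can later be pinned
  // on any other IPFS node or service.
  return result.IpfsHash;
}

pinImage('artwork.png').then((cid) => console.log(`https://gateway.pinata.cloud/ipfs/${cid}`));
```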
4. User Adoption and Usage Metrics
Assessing adoption of decentralized storage is multifaceted: we consider data stored, number of users/developers, notable use cases or partners, and market share. Below we compile key adoption metrics and examples for each:
- Arweave Adoption: Arweave’s network, launched in 2018, stores a smaller total volume of data compared to Filecoin but has carved out a critical niche in permanent storage. As of early 2023, approximately 140 TB of data was stored on the Arweave permaweb. While that is orders of magnitude less than Filecoin, Arweave emphasizes that this data is fully paid-up and permanently preserved. The rate of growth has been steady – developers and archival projects contribute data ranging from web pages (e.g. Arweave is used to archive webpages via the “archivist” community, akin to a decentralized Wayback Machine) to blockchain history (the Solana blockchain, for instance, uses Arweave to offload its historical data). A significant adoption milestone: Meta (Facebook) integrated Arweave in 2022 to permanently store Instagram’s NFT digital collectible media, signaling trust from a Web2 giant in Arweave’s permanence. (Though Meta later halted the NFT initiative, the fact remains they chose Arweave for immutable storage.) In the blockchain world, Solana’s NFT platform Metaplex uses Arweave to store NFT metadata and assets – Solana’s popular Candy Machine standard automatically uploads media to Arweave for permanence. This has resulted in millions of NFTs referencing Arweave URIs (often via `arweave.net`). Another example: KYVE, a Web3 archiving project, launched its mainnet on Arweave and by late 2023 had uploaded over 2,000 TB (2 PB) of data to Arweave – a notably large contribution that includes snapshots of other blockchains and datasets. Arweave’s ecosystem counts hundreds of developers; the official website Ar.io notes an endowment of 44,000+ AR accumulated by Jan 2023 to sustain storage. On social metrics, Arweave’s community is strong among NFT creators and archival enthusiasts – the term “permaweb” has become synonymous with preserving NFT artwork, web content (e.g. mirror.xyz uses Arweave to store decentralized blog posts permanently), and even permaweb-based applications (email, forums). Arweave has received backing from major crypto VCs and its founder Sam Williams is a prominent figure advocating for data permanence. While not as large in raw bytes, Arweave’s adoption is high-impact: it’s used wherever guaranteed permanence is required. It is also integrated into many Web3 stacks indirectly (for example, Ledger’s hardware wallet uses Arweave to store some NFT provenance data, and The Graph indexing protocol can use Arweave for storing subgraph data). In summary, Arweave’s adoption is strong in the NFT and blockchain metadata space and in permanent web archives, with growing enterprise interest for long-term records. The current network utilization (140+ TB) may seem small, but each byte is intended to last forever, and usage has been accelerating.
- Pinata and IPFS Adoption: IPFS is arguably the most widely adopted decentralized storage technology by sheer numbers, as it’s free and open for anyone to use. It’s hard to measure IPFS “storage” since anyone can run a node and add content – but it’s pervasive in the Web3 world. Pinata, as one of the leading IPFS pinning services, offers a window into IPFS usage by developers. Pinata’s website touts “Trusted by 600,000+ developers” – a huge number reflecting its popularity, likely boosted by the 2021 NFT boom when many projects used Pinata to host NFT assets. From indie artists using Pinata’s free tier to major NFT marketplaces integrating Pinata for content delivery, the service has become an industry standard. The NFT.Storage team noted in 2023 that “Pinata has been a trusted name in the IPFS community since 2018, powering many top projects and marketplaces.” This includes well-known NFT platforms, game developers, and even some DeFi projects that needed to serve frontend assets over IPFS. For example, OpenSea (the largest NFT marketplace) uses IPFS for many stored assets and has at times recommended pinning services like Pinata to NFT creators for ensuring their content’s availability. Many profile-picture NFT collections (from CryptoPunks derivatives to countless generative art sets on Ethereum) use IPFS CIDs for images, and it’s common to find Pinata’s gateway URLs in token metadata. Pinata hasn’t publicly released stats on total data pinned, but anecdotally it is responsible for pinning petabytes of NFT data. Another dimension: IPFS is integrated into web browsers (Brave, Opera) and has a global peer network; Pinata’s role in that is as a reliable backbone for content hosting. Because IPFS is free to use self-hosted, Pinata’s large user count indicates that many developers prefer the convenience and performance it adds. Pinata also has enterprise users in media and entertainment (for example, some music NFT platforms used Pinata to manage audio content).
It’s worth noting IPFS adoption extends beyond Pinata: competitors like Infura’s IPFS service, Cloudflare’s IPFS gateway, and others (Temporal, Crust, etc.) also contribute, but Pinata is among the most prominent. Summarily, IPFS is ubiquitous in Web3, and Pinata’s adoption reflects that ubiquity – it’s a backbone for NFT and dApp content, with hundreds of thousands of users and integration in production apps worldwide.
- Filecoin Adoption: Filecoin has seen the largest uptake in terms of raw storage capacity. It reportedly has 22 exabytes (22,000,000+ TB) of available storage in its network, of which about 3% (660+ PB) was utilized by mid-2023. (By comparison, that used storage is three orders of magnitude above Arweave’s, showing Filecoin’s focus on big data.) Much of this capacity comes from large-scale miners; however, useful stored data has also grown significantly thanks to programs like Filecoin Plus. By early 2022, 45 PiB (~45,000 TB) of real data was stored, and it has likely grown much more since then as large archives onboard data. In terms of users, Filecoin’s adoption is bolstered by ecosystem projects: for instance, NFT.storage (which uses Filecoin under the hood) has over 150 million NFT assets uploaded as of 2023. Many NFT marketplaces rely on NFT.storage or similar services, indirectly making Filecoin a backend for those NFTs. Web3.storage (general IPFS/Filecoin storage for apps) has tens of thousands of users and stores data for applications like Web3 games and metaverse content. Notably, Filecoin has attracted enterprise and institutional partnerships: it partnered with University of California Berkeley to store research data, with the NYC government to preserve open data sets, and with companies like Seagate (a hard drive manufacturer exploring Filecoin for enterprise backup solutions) and Ernst & Young (EY) for decentralized storage in business use cases. OpenSea also became a Filecoin client, using it to back up NFT data. These high-profile clients show confidence in Filecoin’s model. Moreover, by number of projects: over 600 projects and dApps were built on Filecoin/IPFS by late 2022, including everything from video platforms (e.g. VideoCoin, Huddle01) to DeFi oracle data archives, to scientific data repositories (Shoah Foundation’s Holocaust archives via Starling project). Filecoin’s blockchain has a wide community of over 3,900 storage providers globally, making it one of the most decentralized infrastructures by geography. However, Filecoin’s user adoption is sometimes tempered by complexity; many users interact through the easier IPFS layer. Still, with the advent of FVM and a push toward Filecoin as a full cloud platform (storage + compute), developer and enterprise interest is accelerating. In summary, Filecoin leads in capacity and enterprise engagement: it is the decentralized storage network in terms of scale, and while much of that capacity is underutilized, initiatives are in place to fill it with valuable content (open science data, Web2 archives, Web3 app data). Its proven ability to handle exabyte-scale makes it a strong contender to disrupt traditional cloud storage if demand catches up.
- Storj Adoption: Storj has grown steadily by targeting web2/web3 hybrid use cases (especially media). The network consists of around 13,000+ storage nodes (individual operators running the Storj software at home or data centers) across more than 100 countries – providing strong decentralization. On the customer side, Storj has landed enterprise partnerships in media and IT: for example, Videon’s LivePeer (video streaming) uses Storj to distribute live video chunks globally, Fastly’s Compute@Edge partnered with Storj for storing assets, and as seen on their website, Storj is trusted by organizations like Cloudwave, Caltech, TrueNAS, Vivint, and several media production houses. The presence of Caltech (a leading research university) suggests use in scientific data storage, while Vivint (a smart home company) implies IoT or camera footage storage – diverse real-world applications. Storj has won industry recognition, such as Product of the Year 2025 at NAB (National Association of Broadcasters), for its solution in media workflows. They highlight case studies: e.g. Inovo streaming video to millions of users cost-effectively, Treatment Studios using Storj for global video collaboration, and Ammo Content streaming 30+ million hours of content via Storj’s network. These examples indicate Storj is capable of handling high-bandwidth, high-volume content delivery – a critical proof point. Developer adoption is also significant: over 20,000+ developers had accounts on Storj DCS by 2022 (from a Storj stat report). The open-source community has embraced Storj in integrations (as mentioned, FileZilla, ownCloud, Zenko, etc.). Node operator interest is high because Storj pays out in tokens; at times there have been waitlists to become a node due to demand. In terms of data stored, Storj hasn’t publicly announced total PB stored lately, but it’s known to be in the multiple petabytes and rapidly growing especially with recent pushes into the Web3 space. It might not rival Filecoin’s raw numbers (because Storj focuses on active data, not just capacity), but it’s likely the largest encrypted cloud storage network by data count. Storj’s multi-region, CDN-like performance has attracted Web2 users purely for cost-performance benefits (some don’t even care about it being decentralized, they just enjoy 80% cost savings). This “Trojan horse” into traditional industries means adoption can grow outside the typical crypto circles. Overall, Storj’s adoption is strong in media streaming, backup, and developer tooling. It demonstrates that a decentralized service can meet enterprise SLAs (reflected by their 11 9’s durability and partnerships with firms like Evergreen for backup solutions). With its pivot to also offer decentralized cloud GPUs, Storj is positioning as a broader decentralized cloud provider, which could further drive adoption.
- Sia Adoption: Sia is one of the oldest projects here (launching in 2015), but its adoption trajectory has been more modest. As of Q3 2024, Sia’s network stored 2,310 TB (2.31 PB) of data, which was a ~17% quarterly increase, indicating usage is growing steadily, albeit from a smaller base. Sia’s utilization rate relative to capacity also improved, suggesting more hosts are getting business. The Sia network historically had many individual users using it for personal backups due to its low cost – imagine tech-savvy users storing their photo collections or running Sia as a cheaper “Backblaze alternative”. On the enterprise side, Sia hasn’t seen the same level of public partnerships as Filecoin or Storj. Partly this is due to the early-stage UX and the fact that Sia’s parent company Nebulous pivoted to Skynet (which targeted Web3 dApps and content hosting). Skynet adoption was promising in 2020–2021: it powered a Web3 social media ecosystem (e.g. SkyFeed had thousands of users), and even some NFT projects used Skynet for hosting artwork (Skylinks appear in some NFT metadata as an alternative to IPFS). Audius, the decentralized music platform, experimented with Skynet for some content delivery. However, the shutdown of Skynet’s main portal has put some of that momentum into community hands. The Sia Foundation (established in 2021) is now driving development, and they introduced Sia v2 (a hardfork in 2025) with improvements to performance and perhaps economics, which could spur future adoption. The ecosystem is smaller: Sia’s stats show 32 projects built on Sia (not counting user-facing apps), and a total of $3.2M in grants allocated by 2025 to foster growth. This includes projects like Filebase (which uses Sia as one backend), SiaStream (for media streaming storage on Sia), and community tools like HostScore and SiaFS. Sia’s community, while smaller, is passionate – for instance, there was a notable user-run operation storing the Library of Congress’s public data on Sia. The number of hosts on Sia is in the hundreds (not thousands like Storj), and many provide enterprise-grade setups (data center nodes) because profitability as a host is thin unless you have very cheap storage to offer. In summary, Sia’s adoption is niche but steady: it’s used by a core community for low-cost cloud storage and by some Web3 projects for hosting decentralized web content. Its usage (2+ PB stored) is non-trivial but lags far behind Filecoin; however, Sia distinguishes itself by being non-profit and community-driven, which resonates with those who prioritize decentralization ethos. The ongoing improvements (Sia v2) and focus on being “the world’s safest cloud” may yet attract more users concerned with self-sovereign data.
- Ceramic Adoption: Ceramic being a specialized network for data/composable content, its adoption is measured by developers and applications rather than sheer storage volume. According to the Ceramic website (2025), over 400 apps and services are built on Ceramic, managing around 10 million streams of content. This indicates a growing interest in decentralized data among Web3 app developers. Some notable projects using Ceramic include Orbis (decentralized social networking protocol, akin to Twitter on Ceramic), CyberConnect (social graph protocol initially built on Ceramic DIDs), Gitcoin (which explored Ceramic for decentralized user profiles), and Self.ID (an identity hub for users to manage profiles across dApps). Additionally, DID adoption via Ceramic’s 3ID has been significant – for example, many Ethereum-based applications leveraged Ceramic to store user profiles (so your profile could port between say Uniswap and Boardroom and Snapshot for DAOs). There have been partnerships like NEAR Protocol integrating Ceramic for cross-chain identity, showing that Layer-1 blockchains see Ceramic as a solution for off-chain user data. Another domain is DeSci (decentralized science): projects use Ceramic to store research metadata, lab notes, etc., where data needs to be shared and verifiable but not immutable (updates needed). The fact that 3Box Labs (Ceramic’s founding team) recently joined with Textile (a team known for IPFS/Filecoin tooling) is also telling – it suggests an effort to combine forces and perhaps will expand Ceramic’s reach in the data infrastructure domain. The number of active Ceramic nodes is not public, but many apps run their own or use the community nodes. In the big picture, Ceramic is newer and its concept of a “dataverse” is still catching on; it doesn’t have household name enterprise users yet, but it’s seeing grassroots Web3 adoption in areas that existing storage networks don’t serve well (like social media content and cross-app data interoperability). As a benchmark, if we consider each stream as a piece of data, 10 million streams is substantial, though many streams are tiny (like a user’s profile doc or a single post). The metric to watch is how many end-users those 400 apps bring – potentially in the hundreds of thousands, if apps like decentralized social networks scale up. In summary, Ceramic’s adoption is promising in the Web3 dev community (hundreds of apps, integration in various Web3 ecosystems), but it’s inherently limited to specific use cases and is not competing on storage size or throughput with the likes of Filecoin/Arweave.
To visualize adoption, the table below highlights some metrics and notable adopters:
Network | Data Stored / Capacity | User Base & Developers | Notable Usage Examples / Partners |
---|---|---|---|
Arweave | ~140 TB stored (2023) (fully permanent). | Thousands of users; strong NFT and archival dev community. | Solana NFT metadata via Metaplex Candy Machine; Meta/Instagram NFT media storage; KYVE (2 PB of blockchain data); Permanent web archives (e.g. web pages, documents) by Internet Archive enthusiasts. |
Pinata/IPFS | Difficult to measure (IPFS global network in PBs). Pinata pins probably many PB of NFT data. | 600k+ developers on Pinata; IPFS used by millions via browsers and apps. | Top NFT projects & marketplaces (Ethereum and others) rely on IPFS+Pinata; Browser integrations (Brave uses IPFS for content); Cloudflare and Infura run public IPFS gateways serving billions of requests. |
Filecoin | ~22 EB capacity, ~0.66 EB (660 PB) used (2023). Used storage growing fast (45 PB in early 2022; now much higher with FIL+). | Thousands of clients (direct or via services); 3,900+ miners globally; 600+ ecosystem projects. | OpenSea (backing up NFT data); UC Berkeley (research data); NYC Open Data; Shoah Foundation archives; Seagate & EY partnerships for enterprise storage; NFT.storage & Web3.storage (over 150M NFT files). |
Storj | Several PB stored (exact not public; growing via media use). Network: ~13k nodes in 100+ countries. | 20k+ developers; mix of Web3 and Web2 customers. Node operator community worldwide. | Video/Media platforms (e.g. 30M+ hours streamed via Storj for one client); Telecom/Smart Home (Vivint); Academia (Caltech); ownCloud integration for enterprise file sharing; FileZilla integration for backups; recognized by Forrester as a top disruptor. |
Sia | ~2.3 PB used (Q3 2024); capacity somewhat higher (a lot of free space still on hosts). | Hundreds of active hosts; user count not published (likely in thousands). Developer count relatively small (32 projects listed). | Personal & small business backups (via Filebase, Sia-UI); Skynet dApps (decentralized social media, web hosting – e.g. SkyFeed had thousands of users at peak); VPN/Proxy services using Sia for logs (privacy-sensitive storage); Library of Congress data (community-driven archival on Sia). |
Ceramic | ~10 million streams (pieces of content) across network (data size is small per stream). | 400+ apps built on it; user reach in tens of thousands (through those apps). Growing dev community via grants and hackathons. | Decentralized Social (Orbis for Twitter-like feeds); Cross-app profiles (e.g. used in multiple Ethereum dApps for unified profiles); DAO tools (governance forums storing proposals/comments via Ceramic); Identity (DID wallets, verifiable credentials in DeFi KYC); Near Protocol using Ceramic for profiles. |