Verifiable On-Chain AI with zkML and Cryptographic Proofs

Dora Noda · Software Engineer · 36 min read

Introduction: The Need for Verifiable AI on Blockchain

As AI systems grow in influence, ensuring their outputs are trustworthy becomes critical. Traditional methods rely on institutional assurances (essentially “just trust us”), which offer no cryptographic guarantees. This is especially problematic in decentralized contexts like blockchains, where a smart contract or user must trust an AI-derived result without being able to re-run a heavy model on-chain. Zero-knowledge machine learning (zkML) addresses this by allowing cryptographic verification of ML computations. In essence, zkML enables a prover to generate a succinct proof that “the output $Y$ came from running model $M$ on input $X$” without revealing $X$ or the internal details of $M$. These zero-knowledge proofs (ZKPs) can be verified by anyone (or any contract) efficiently, shifting AI trust from “policy to proof”.

On-chain verifiability of AI means a blockchain can incorporate advanced computations (like neural network inferences) by verifying a proof of correct execution instead of performing the compute itself. This has broad implications: smart contracts can make decisions based on AI predictions, decentralized autonomous agents can prove they followed their algorithms, and cross-chain or off-chain compute services can provide verifiable outputs rather than unverifiable oracles. Ultimately, zkML offers a path to trustless and privacy-preserving AI – for example, proving an AI model’s decisions are correct and authorized without exposing private data or proprietary model weights. This is key for applications ranging from secure healthcare analytics to blockchain gaming and DeFi oracles.

How zkML Works: Compressing ML Inference into Succinct Proofs

At a high level, zkML combines cryptographic proof systems with ML inference so that a complex model evaluation can be “compressed” into a small proof. Internally, the ML model (e.g. a neural network) is represented as a circuit or program consisting of many arithmetic operations (matrix multiplications, activation functions, etc.). Rather than revealing all intermediate values, a prover performs the full computation off-chain and then uses a zero-knowledge proof protocol to attest that every step was done correctly. The verifier, given only the proof and some public data (like the final output and an identifier for the model), can be cryptographically convinced of the correctness without re-executing the model.

To achieve this, zkML frameworks typically transform the model computation into a format amenable to ZKPs:

  • Circuit Compilation: In SNARK-based approaches, the computation graph of the model is compiled into an arithmetic circuit or set of polynomial constraints. Each layer of the neural network (convolutions, matrix multiplies, nonlinear activations) becomes a sub-circuit with constraints ensuring the outputs are correct given the inputs. Because neural nets involve non-linear operations (ReLUs, Sigmoids, etc.) not naturally suited to polynomials, techniques like lookup tables are used to handle these efficiently. For example, a ReLU (output = max(0, input)) can be enforced by a custom constraint or lookup that verifies output equals input if input≥0 else zero. The end result is a set of cryptographic constraints that the prover must satisfy, which implicitly proves the model ran correctly.
  • Execution Trace & Virtual Machines: An alternative is to treat the model inference as a program trace, as done in zkVM approaches. For instance, the JOLT zkVM targets the RISC-V instruction set; one can compile the ML model (or the code that computes it) to RISC-V and then prove each CPU instruction executed properly. JOLT introduces a “lookup singularity” technique, replacing expensive arithmetic constraints with fast table lookups for each valid CPU operation. Every operation (add, multiply, bitwise op, etc.) is checked via a lookup in a giant table of pre-computed valid outcomes, using a specialized argument (Lasso/SHOUT) to keep this efficient. This drastically reduces the prover workload: even complex 64-bit operations become a single table lookup in the proof instead of many arithmetic constraints.
  • Interactive Protocols (GKR Sum-Check): A third approach uses interactive proofs like GKR (Goldwasser–Kalai–Rothblum) to verify a layered computation. Here the model’s computation is viewed as a layered arithmetic circuit (each neural network layer is one layer of the circuit graph). The prover runs the model normally but then engages in a sum-check protocol to prove that each layer’s outputs are correct given its inputs. In Lagrange’s approach (DeepProve, detailed next), the prover and verifier perform an interactive polynomial protocol (made non-interactive via Fiat-Shamir) that checks consistency of each layer’s computations without re-doing them. This sum-check method avoids generating a monolithic static circuit; instead it verifies the consistency of computations in a step-by-step manner with minimal cryptographic operations (mostly hashing or polynomial evaluations).
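
The lookup-table idea used for non-linear operations in these approaches can be made concrete with a toy example. The sketch below is plain Python with no cryptography: it shows only the logical check a real lookup argument enforces for a ReLU activation, namely that every claimed input–output pair appears in a precomputed table of valid pairs.

```python
# Toy illustration of the lookup-table idea for non-linear layers.
# A real circuit uses a cryptographic lookup argument; here we only show
# the logical check the constraint system enforces: every claimed
# (input, output) pair of an activation must appear in a valid table.

RANGE = range(-8, 8)  # tiny 4-bit signed domain for illustration

relu_table = {(x, max(0, x)) for x in RANGE}

def check_relu_trace(pairs):
    """Accept a trace only if every (x, y) pair is a valid ReLU entry."""
    return all(p in relu_table for p in pairs)

honest = [(-3, 0), (5, 5), (0, 0)]
cheating = [(-3, 0), (5, 4)]        # 5 -> 4 is not ReLU

assert check_relu_trace(honest)
assert not check_relu_trace(cheating)
```

In production systems the table membership is proven succinctly (e.g. via Halo2 lookups or Lasso) rather than checked pair by pair, but the statement being proven is exactly this one.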

Regardless of approach, the outcome is a succinct proof (typically a few kilobytes to a few tens of kilobytes) that attests to the correctness of the entire inference. The proof is zero-knowledge, meaning any secret inputs (private data or model parameters) can be kept hidden – they influence the proof but are not revealed to verifiers. Only the intended public outputs or assertions are revealed. This allows scenarios like “prove that model $M$ when applied to patient data $X$ yields diagnosis $Y$, without revealing $X$ or the model’s weights.”

Enabling on-chain verification: Once a proof is generated, it can be posted to a blockchain. Smart contracts can include verification logic to check the proof, often using precompiled cryptographic primitives. For example, Ethereum has precompiles for elliptic-curve pairing operations (BN254, with BLS12-381 added by EIP-2537) used in many zk-SNARK verifiers, making on-chain verification of SNARK proofs efficient. STARKs (hash-based proofs) are larger, but can still be verified on-chain with careful optimization (StarkWare’s L2, for instance, verifies STARK proofs on Ethereum via an on-chain verifier contract, albeit at higher gas cost than SNARKs). The key is that the chain does not need to execute the ML model – it only runs a verification that is much cheaper than the original compute. In summary, zkML compresses expensive AI inference into a small proof that blockchains (or any verifier) can check in milliseconds to seconds.

Lagrange DeepProve: Architecture and Performance of a zkML Breakthrough

DeepProve by Lagrange Labs is a state-of-the-art zkML inference framework focusing on speed and scalability. Launched in 2025, DeepProve introduced a new proving system that is dramatically faster than prior solutions like Ezkl. Its design centers on the GKR interactive proof protocol with sum-check and specialized optimizations for neural network circuits. Here’s how DeepProve works and achieves its performance:

  • One-Time Preprocessing: Developers start with a trained neural network (currently supported types include multilayer perceptrons and popular CNN architectures). The model is exported to ONNX format, a standard graph representation. DeepProve’s tool then parses the ONNX model and quantizes it (converts weights to fixed-point/integer form) for efficient field arithmetic. In this phase, it also generates the proving and verification keys for the cryptographic protocol. This setup is done once per model and does not need to be repeated per inference. DeepProve emphasizes ease of integration: “Export your model to ONNX → one-time setup → generate proofs → verify anywhere”.

  • Proving (Inference + Proof Generation): After setup, a prover (which could be run by a user, a service, or Lagrange’s decentralized prover network) takes a new input $X$ and runs the model $M$ on it, obtaining output $Y$. During this execution, DeepProve records an execution trace of each layer’s computations. Instead of translating every multiplication into a static circuit upfront (as SNARK approaches do), DeepProve uses the linear-time GKR protocol to verify each layer on the fly. For each network layer, the prover commits to the layer’s inputs and outputs (e.g., via cryptographic hashes or polynomial commitments) and then engages in a sum-check argument to prove that the outputs indeed result from the inputs as per the layer’s function. The sum-check protocol iteratively convinces the verifier of the correctness of a sum of evaluations of a polynomial that encodes the layer’s computation, without revealing the actual values. Non-linear operations (like ReLU, softmax) are handled efficiently through lookup arguments in DeepProve – if an activation’s output was computed, DeepProve can prove that each output corresponds to a valid input-output pair from a precomputed table for that function. Layer by layer, proofs are generated and then aggregated into one succinct proof covering the whole model’s forward pass. The heavy lifting of cryptography is minimized – DeepProve’s prover mostly performs normal numeric computations (the actual inference) plus some light cryptographic commitments, rather than solving a giant system of constraints.

  • Verification: The verifier uses the final succinct proof along with a few public values – typically the model’s committed identifier (a cryptographic commitment to $M$’s weights), the input $X$ (if not private), and the claimed output $Y$ – to check correctness. Verification in DeepProve’s system involves verifying the sum-check protocol’s transcript and the final polynomial or hash commitments. This is more involved than verifying a classic SNARK (which might be a few pairings), but it’s vastly cheaper than re-running the model. In Lagrange’s benchmarks, verifying a DeepProve proof for a medium CNN takes on the order of 0.5 seconds in software. That is ~0.5s to confirm, for example, that a convolutional network with hundreds of thousands of parameters ran correctly – over 500× faster than naively re-computing that CNN on a GPU for verification. (In fact, DeepProve measured up to 521× faster verification for CNNs and 671× for MLPs compared to re-execution.) The proof size is small enough to transmit on-chain (tens of KB), and verification could be performed in a smart contract if needed, although 0.5s of computation might require careful gas optimization or layer-2 execution.
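
The core primitive in this pipeline is the sum-check protocol. Below is a minimal, self-contained sketch of the textbook protocol over a toy prime field (not DeepProve’s actual code): the prover convinces the verifier that a claimed sum of a multivariate polynomial over the boolean hypercube is correct, with the verifier doing only one cheap check per round plus a single final evaluation.

```python
# Minimal sum-check protocol over a small prime field, the primitive
# GKR-style provers apply per layer. The prover convinces the verifier
# that H = sum of g(x) over all x in {0,1}^n without the verifier
# summing 2^n terms itself. Brute-force prover, toy parameters.

import random

P = 2**61 - 1  # a Mersenne prime as a toy field modulus

def g(x):  # example 3-variate polynomial: g = 2*x0*x1 + x2 + x0
    return (2 * x[0] * x[1] + x[2] + x[0]) % P

N = 3

def partial_sum(fixed, var_val):
    """Sum g over all boolean settings of the variables after the
    current one, with prefix `fixed` and current variable = var_val."""
    rest = N - len(fixed) - 1
    total = 0
    for bits in range(2 ** rest):
        tail = [(bits >> i) & 1 for i in range(rest)]
        total += g(fixed + [var_val] + tail)
    return total % P

def sumcheck():
    claimed = sum(g([(m >> i) & 1 for i in range(N)])
                  for m in range(2 ** N)) % P
    fixed, expected = [], claimed
    for _ in range(N):
        # Prover sends the round polynomial s(t) via its evaluations at
        # t = 0, 1, 2 (degree <= 2 per variable here, so 3 points suffice).
        evals = [partial_sum(fixed, t) for t in (0, 1, 2)]
        # Verifier checks s(0) + s(1) equals the running claim.
        if (evals[0] + evals[1]) % P != expected:
            return False
        r = random.randrange(P)  # verifier's random challenge
        # Lagrange-interpolate s at r from the points 0, 1, 2.
        s_r = (evals[0] * ((r - 1) * (r - 2) * pow(2, -1, P))
               - evals[1] * (r * (r - 2))
               + evals[2] * (r * (r - 1) * pow(2, -1, P))) % P
        fixed.append(r)
        expected = s_r
    # Final round: verifier evaluates g itself at the random point.
    return g(fixed) == expected

assert sumcheck()
```

In a real GKR system the final evaluation of g is itself reduced to a commitment opening rather than computed directly, and Fiat-Shamir replaces the verifier’s random challenges, but the round structure is exactly this.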

Architecture and Tooling: DeepProve is implemented in Rust and provides a toolkit (the zkml library) for developers. It natively supports ONNX model graphs, making it compatible with models from PyTorch or TensorFlow (after exporting). The proving process currently targets models up to a few million parameters (tests include a 4M-parameter dense network). DeepProve leverages a combination of cryptographic components: a multilinear polynomial commitment (to commit to layer outputs), the sum-check protocol for verifying computations, and lookup arguments for non-linear ops. Notably, Lagrange’s open-source repository acknowledges it builds on prior work (the sum-check and GKR implementation from Scroll’s Ceno project), indicating an intersection of zkML with zero-knowledge rollup research.
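
The “committed model identifier” that verifiers reference can be illustrated with a plain hash tree. The sketch below is a simplified stand-in for the multilinear polynomial commitments DeepProve actually uses: it quantizes weights to fixed-point integers (as described in the preprocessing step) and binds them all under a single Merkle root.

```python
# Sketch of a "committed model identifier": quantize weights to
# fixed-point integers and commit to them with a Merkle root, so a
# verifier can reference model M by one short hash. Real systems use
# polynomial commitments; a hash tree conveys the same binding property.

import hashlib

SCALE = 1 << 16  # 16 fractional bits of fixed-point precision

def quantize(weights):
    return [int(round(w * SCALE)) for w in weights]

def merkle_root(leaves):
    level = [hashlib.sha256(str(v).encode()).digest() for v in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

weights = [0.25, -1.5, 0.125, 3.0]
model_id = merkle_root(quantize(weights))

# Any change to a single weight changes the identifier.
tampered = merkle_root(quantize([0.25, -1.5, 0.125, 3.0001]))
assert model_id != tampered
```

Binding is the property that matters: a proof tied to `model_id` cannot be reused for a silently modified model, which is what lets a verifier know “some valid execution of exactly model $M$” occurred.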

To achieve real-time scalability, Lagrange pairs DeepProve with its Prover Network – a decentralized network of specialized ZK provers. Heavy proof generation can be offloaded to this network: when an application needs an inference proved, it sends the job to Lagrange’s network, where many operators (staked on EigenLayer for security) compute proofs and return the result. This network economically incentivizes reliable proof generation (malicious or failed jobs get the operator slashed). By distributing work across provers (and potentially leveraging GPUs or ASICs), the Lagrange Prover Network hides the complexity and cost from end-users. The result is a fast, scalable, and decentralized zkML service: “verifiable AI inferences fast and affordable”.

Performance Milestones: DeepProve’s claims are backed by benchmarks against the prior state-of-the-art, Ezkl. For a CNN with ~264k parameters (CIFAR-10 scale model), DeepProve’s proving time was ~1.24 seconds versus ~196 seconds for Ezkl – about 158× faster. For a larger dense network with 4 million parameters, DeepProve proved an inference in ~2.3 seconds vs ~126.8 seconds for Ezkl (~54× faster). Verification times also dropped: DeepProve verified the 264k CNN proof in ~0.6s, whereas verifying the Ezkl proof (Halo2-based) took over 5 minutes on CPU in that test. The speedups come from DeepProve’s near-linear complexity: its prover scales roughly O(n) with the number of operations, whereas circuit-based SNARK provers often have superlinear overhead (FFT and polynomial commitments scaling). In fact, DeepProve’s prover throughput can be within an order of magnitude of plain inference runtime – recent GKR systems can be <10× slower than raw execution for large matrix multiplications, an impressive achievement in ZK. This makes real-time or on-demand proofs more feasible, paving the way for verifiable AI in interactive applications.

Use Cases: Lagrange is already collaborating with Web3 and AI projects to apply zkML. Example use cases include: verifiable NFT traits (proving an AI-generated evolution of a game character or collectible is computed by the authorized model), provenance of AI content (proving an image or text was generated by a specific model, to combat deepfakes), DeFi risk models (proving a model’s output that assesses financial risk without revealing proprietary data), and private AI inference in healthcare or finance (where a hospital can get AI predictions with a proof, ensuring correctness without exposing patient data). By making AI outputs verifiable and privacy-preserving, DeepProve opens the door to “AI you can trust” in decentralized systems – moving from an era of “blind trust in black-box models” to one of “objective guarantees”.

SNARK-Based zkML: Ezkl and the Halo2 Approach

The traditional approach to zkML uses zk-SNARKs (Succinct Non-interactive ARguments of Knowledge) to prove neural network inference. Ezkl (by ZKonduit) is a leading example of this approach. It builds on the Halo2 proving system (a PLONK-style SNARK with polynomial commitments over a pairing-friendly curve). Ezkl provides a tooling chain where a developer can take a PyTorch or TensorFlow model, export it to ONNX, and have Ezkl compile it into a custom arithmetic circuit automatically.

How it works: Each layer of the neural network is converted into constraints:

  • Linear layers (dense or convolution) become collections of multiplication-add constraints that enforce the dot-products between inputs, weights, and outputs.
  • Non-linear layers (like ReLU, sigmoid, etc.) are handled via lookups or piecewise constraints because such functions are not polynomial. For instance, a ReLU can be implemented by a boolean selector $b$ with constraints ensuring $y = x \cdot b$ and $0 \le b \le 1$ and $b=1$ if $x>0$ (one way to do it), or more efficiently by a lookup table mapping $x \mapsto \max(0,x)$ for a range of $x$ values. Halo2’s lookup arguments allow mapping 16-bit (or smaller) chunks of values, so large domains (like all 32-bit values) are usually “chunked” into several smaller lookups. This chunking increases the number of constraints.
  • Big integer ops or divisions (if any) are similarly broken into small pieces. The result is a large set of R1CS/PLONK constraints tailored to the specific model architecture.
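
The selector-bit encoding of ReLU described above can be checked directly. This toy Python sketch evaluates the constraints as plain equations; note that a real circuit enforces the sign condition via range checks rather than a comparison operator, since `>` is not itself a polynomial relation.

```python
# The selector-bit encoding of ReLU, checked as plain equations:
# a witness (x, y, b) must satisfy b*(b-1) = 0 (b is boolean),
# y = x*b, and b consistent with the sign of x. A constraint system
# enforces these algebraically; here we just evaluate them.

def relu_constraints_hold(x, y, b):
    boolean = b * (b - 1) == 0           # b is 0 or 1
    product = y == x * b                 # y = x * b
    sign_ok = (b == 1) == (x > 0)        # selector matches sign of x
    return boolean and product and sign_ok

assert relu_constraints_hold(7, 7, 1)        # positive input passes through
assert relu_constraints_hold(-4, 0, 0)       # negative input clamps to 0
assert not relu_constraints_hold(-4, -4, 1)  # cheating witness rejected
```

A dishonest prover who claims `y = x` for a negative `x` must set `b = 1`, which violates the sign constraint, so the proof fails.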

Ezkl then uses Halo2 to generate a proof that these constraints hold, given the secret inputs (model weights, private inputs) and public outputs.

Tooling and integration: One advantage of the SNARK approach is that it leverages well-known primitives. Halo2-style proving systems are battle-tested in production (Zcash, several zkEVM rollups), and on-chain verifiers are readily available. Ezkl’s proofs use a pairing-friendly curve that Ethereum can verify via precompiles, making it straightforward to check an Ezkl proof in a smart contract. The team has also provided user-friendly APIs; for example, data scientists can work with their models in Python and use Ezkl’s CLI to produce proofs, without deep knowledge of circuits.

Strengths: Ezkl’s approach benefits from the generality and ecosystem of SNARKs. It supports reasonably complex models and has already seen “practical integrations (from DeFi risk models to gaming AI)”, proving real-world ML tasks. Because it operates at the level of the model’s computation graph, it can apply ML-specific optimizations: e.g. pruning insignificant weights or quantizing parameters to reduce circuit size. It also means model confidentiality is natural – the weights can be treated as private witness data, so the verifier only sees that some valid model produced the output, or at best a commitment to the model. The verification of SNARK proofs is extremely fast (typically a few milliseconds or less on-chain), and proof sizes are small (a few kilobytes), which is ideal for blockchain usage.

Weaknesses: Performance is the Achilles’ heel. Circuit-based proving imposes large overheads, especially as models grow. Historically, SNARK circuits could require on the order of a million times more work for the prover than simply running the model. Halo2 and Ezkl optimize this, but operations like large matrix multiplications still generate enormous numbers of constraints. If a model has millions of parameters, the prover must handle correspondingly many constraints, performing heavy FFTs and multi-exponentiations in the process. This leads to high proving times (often minutes or hours for non-trivial models) and high memory usage. For example, proving even a relatively small CNN (a few hundred thousand parameters) can take tens of minutes with Ezkl on a single machine. The team behind DeepProve cited cases where Ezkl took hours for model proofs that DeepProve completes in minutes. Large models might not even fit in memory or may require splitting into multiple proofs (which then need recursive aggregation). While Halo2 is moderately optimized, any need to “chunk” lookups or handle wide-bit operations translates to extra overhead. In summary, scalability is limited – Ezkl works well for small-to-medium models (and indeed outperformed some earlier alternatives like naive STARK-based VMs in benchmarks), but struggles as model size grows beyond a point.

Despite these challenges, Ezkl and similar SNARK-based zkML libraries are important stepping stones. They proved that verified ML inference is possible on-chain and have active usage. Notably, projects like Modulus Labs demonstrated verifying an 18-million-parameter model on-chain using SNARKs (with heavy optimization). The cost was non-trivial, but it shows the trajectory. Moreover, the Mina Protocol has its own zkML toolkit that uses SNARKs to allow smart contracts on Mina (which are Snark-based) to verify ML model execution. This indicates a growing multi-platform support for SNARK-based zkML.

STARK-Based Approaches: Transparent and Programmable ZK for ML

zk-STARKs (Scalable Transparent ARguments of Knowledge) offer another route to zkML. STARKs use hash-based cryptography (such as FRI for polynomial commitments) and avoid any trusted setup. They often operate by simulating a CPU or VM and proving that the execution trace is correct. In the context of ML, one can either build a custom STARK for the neural network or use a general-purpose STARK VM to run the model code.

General STARK VMs (RISC Zero, Cairo): A straightforward approach is to write inference code and run it in a STARK VM. For example, Risc0 provides a RISC-V environment where any code (e.g., C++ or Rust implementation of a neural network) can be executed and proven via a STARK. Similarly, StarkWare’s Cairo language can express arbitrary computations (like an LSTM or CNN inference) which are then proved by the StarkNet STARK prover. The advantage is flexibility – you don’t need to design custom circuits for each model. However, early benchmarks showed that naive STARK VMs were slower compared to optimized SNARK circuits for ML. In one test, a Halo2-based proof (Ezkl) was about 3× faster than a STARK-based approach on Cairo, and even 66× faster than a RISC-V STARK VM on a certain benchmark in 2024. This gap is due to the overhead of simulating every low-level instruction in a STARK and the larger constants in STARK proofs (hashing is fast but you need a lot of it; STARK proof sizes are bigger, etc.). However, STARK VMs are improving and have the benefit of transparent setup (no trusted setup) and post-quantum security. As STARK-friendly hardware and protocols advance, proving speeds will improve.

DeepProve’s approach vs STARK: Interestingly, DeepProve’s use of GKR and sum-check yields a proof more akin to a STARK in spirit – it’s an interactive, hash-based proof made non-interactive, with no need for a structured reference string. The trade-off is that its proofs are larger and verification is heavier than a SNARK’s. Yet DeepProve shows that careful protocol design (specialized to ML’s layered structure) can vastly outperform both generic STARK VMs and SNARK circuits in proving time. We can consider DeepProve a bespoke STARK-style zkML prover (Lagrange describes it as a zkSNARK for succinctness, but it lacks a traditional SNARK’s constant-size, millisecond verification – ~0.5 s to verify is far larger than a typical SNARK verify). Traditional STARK proofs (like StarkNet’s) often involve tens of thousands of field operations to verify, whereas SNARKs verify in maybe a few dozen. Thus, one trade-off is evident: SNARKs yield smaller proofs and faster verifiers, while STARKs (or GKR) offer easier scaling and no trusted setup at the cost of proof size and verification speed.

Emerging improvements: The JOLT zkVM (discussed earlier) actually outputs SNARKs (using PLONKish commitments), but it embodies ideas that could apply in a STARK context too (Lasso-style lookups could in principle be combined with FRI commitments). StarkWare and others are researching ways to speed up proving of common operations (such as custom gates or hints in Cairo for big-integer ops). There is also circomlib-ml by Privacy & Scaling Explorations (PSE), which provides Circom templates for CNN layers and similar components – SNARK-oriented, but conceptually similar templates could be written for STARK languages.

In practice, non-Ethereum ecosystems leveraging STARKs include StarkNet (which could allow on-chain verification of ML if someone writes a verifier, though cost is high) and Risc0’s Bonsai service (which is an off-chain proving service that emits STARK proofs which can be verified on various chains). As of 2025, most zkML demos on blockchain have favored SNARKs (due to verifier efficiency), but STARK approaches remain attractive for their transparency and potential in high-security or quantum-resistant settings. For example, a decentralized compute network might use STARKs to let anyone verify work without a trusted setup, useful for longevity. Also, some specialized ML tasks might exploit STARK-friendly structures: e.g. computations heavily using XOR/bit operations could be faster in STARKs (since those are cheap in boolean algebra and hashing) than in SNARK field arithmetic.

Summary of SNARK vs STARK for ML:

  • Performance: SNARKs (like Halo2) have huge proving overhead per gate but benefit from powerful optimizations and small constants for verify; STARKs (generic) have larger constant overhead but scale more linearly and avoid expensive crypto like pairings. DeepProve shows that customizing the approach (sum-check) yields near-linear proving time (fast) but with a STARK-like proof. JOLT shows that even a general VM can be made faster with heavy use of lookups. Empirically, for models up to millions of operations: a well-optimized SNARK (Ezkl) can handle it but might take tens of minutes, whereas DeepProve (GKR) can do it in seconds. STARK VMs in 2024 were likely in between or worse than SNARKs unless specialized (Risc0 was slower in tests, Cairo was slower without custom hints).
  • Verification: SNARK proofs verify quickest (milliseconds, and minimal data on-chain ~ a few hundred bytes to a few KB). STARK proofs are larger (dozens of KB) and take longer (tens of ms to seconds) to verify due to many hashing steps. In blockchain terms, a SNARK verify might cost e.g. ~200k gas, whereas a STARK verify could cost millions of gas – often too high for L1, acceptable on L2 or with succinct verification schemes.
  • Setup and Security: SNARKs like Groth16 require a trusted setup per circuit (unfriendly for arbitrary models), but universal SNARKs (PLONK, Halo2) have a one-time setup that can be reused for any circuit up to a certain size. STARKs need no setup and rely only on hash assumptions (plus information-theoretic arguments), and are plausibly post-quantum secure. This makes STARKs appealing for longevity – proofs remain secure even if quantum computers emerge, whereas current pairing-based SNARKs would be broken by quantum attacks on elliptic-curve discrete logarithms.

We will consolidate these differences in a comparison table shortly.

FHE for ML (FHE-of-ML): Private Computation vs. Verifiable Computation

Fully Homomorphic Encryption (FHE) is a cryptographic technique that allows computations to be performed directly on encrypted data. In the context of ML, FHE can enable a form of privacy-preserving inference: for example, a client can send encrypted input to a model host, the host runs the neural network on the ciphertext without decrypting it, and sends back an encrypted result which the client can decrypt. This ensures data confidentiality – the model owner learns nothing about the input (and potentially the client learns only the output, not the model’s internals if they only get output). However, FHE by itself does not produce a proof of correctness in the same way ZKPs do. The client must trust that the model owner actually performed the computation honestly (the ciphertext could have been manipulated). Usually, if the client has the model or expects a certain distribution of outputs, blatant cheating can be detected, but subtle errors or use of a wrong model version would not be evident just from the encrypted output.
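
The flavor of homomorphic inference is easy to demonstrate with an additively homomorphic scheme. The sketch below uses textbook Paillier with toy parameters – a teaching stand-in, not a fully homomorphic scheme, since Paillier supports only additions and plaintext-scalar multiplications. That is still enough to evaluate one linear layer on encrypted inputs: the server computes a weighted sum of ciphertexts without ever decrypting.

```python
# Toy Paillier cryptosystem (additively homomorphic) showing how a
# server can evaluate a linear model on encrypted inputs. Teaching
# sketch only: tiny primes, one linear layer, no non-linearities.

import math, random

p, q = 104729, 104723          # small primes; real keys are ~2048-bit
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)           # valid because we take g = n + 1

def encrypt(m):
    r = random.randrange(1, n)
    return pow(n + 1, m % n, n2) * pow(r, n, n2) % n2

def decrypt(c):
    m = (pow(c, lam, n2) - 1) // n * mu % n
    return m if m <= n // 2 else m - n   # recover signed result

def enc_dot(cts, weights):
    """Server-side: weighted sum of ciphertexts, never decrypting."""
    acc = encrypt(0)
    for c, w in zip(cts, weights):
        acc = acc * pow(c, w % n, n2) % n2   # c^w adds w*m under the hood
    return acc

features = [3, -2, 7]          # client's private input
weights = [4, 5, -1]           # server's linear model
cts = [encrypt(x) for x in features]
score = decrypt(enc_dot(cts, weights))
assert score == sum(x * w for x, w in zip(features, weights))  # = -5
```

Note what is missing: the client learns the correct score only if the server honestly ran `enc_dot` with the agreed weights – nothing in the ciphertexts proves that, which is exactly the gap zkML fills.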

Trade-offs in performance: FHE is notoriously heavy in computation. Running deep learning inference under FHE incurs orders-of-magnitude slowdown. Early experiments (e.g., CryptoNets in 2016) took tens of seconds to evaluate a tiny CNN on encrypted data. By 2024, improvements like CKKS (for approximate arithmetic) and better libraries (Microsoft SEAL, Zama’s Concrete) have reduced this overhead, but it remains large. For example, a user reported that using Zama’s Concrete-ML to run a CIFAR-10 classifier took 25–30 minutes per inference on their hardware. After optimizations, Zama’s team achieved ~40 seconds for that inference on a 192-core server. Even 40s is extremely slow compared to a plaintext inference (which might be 0.01s), showing a ~$10^3$–$10^4\times$ overhead. Larger models or higher precision increase the cost further. Additionally, FHE operations consume a lot of memory and require occasional bootstrapping (a noise-reduction step) which is computationally expensive. In summary, scalability is a major issue – state-of-the-art FHE might handle a small CNN or simple logistic regression, but scaling to large CNNs or Transformers is beyond current practical limits.

Privacy advantages: FHE’s big appeal is data privacy. The input can remain completely encrypted throughout the process. This means an untrusted server can compute on a client’s private data without learning anything about it. Conversely, if the model is sensitive (proprietary), one could envisage encrypting the model parameters and having the client perform FHE inference on their side – but this is less common because if the client has to do the heavy FHE compute, it negates the idea of offloading to a powerful server. Typically, the model is public or held by server in the clear, and the data is encrypted by the client’s key. Model privacy in that scenario is not provided by default (the server knows the model; the client learns outputs but not weights). There are more exotic setups (like secure two-party computation or multi-key FHE) where both model and data can be kept private from each other, but those incur even more complexity. In contrast, zkML via ZKPs can ensure model privacy and data privacy at once – the prover can have both the model and data as secret witness, only revealing what’s needed to the verifier.

No on-chain verification needed (and none possible): With FHE, the result comes out encrypted to the client. The client then decrypts it to obtain the actual prediction. If we want to use that result on-chain, the client (or whoever holds the decryption key) would have to publish the plaintext result and convince others it’s correct. But at that point, trust is back in the loop – unless combined with a ZKP. In principle, one could combine FHE and ZKP: e.g., use FHE to keep data private during compute, and then generate a ZK-proof that the plaintext result corresponds to a correct computation. However, combining them means you pay the performance penalty of FHE and ZKP – extremely impractical with today’s tech. So, in practice FHE-of-ML and zkML serve different use cases:

  • FHE-of-ML: Ideal when the goal is confidentiality between two parties (client and server). For instance, a cloud service can host an ML model and users can query it with their sensitive data without revealing the data to the cloud (and if the model is sensitive, perhaps deploy it via FHE-friendly encodings). This is great for privacy-preserving ML services (medical predictions, etc.). The user still has to trust the service to faithfully run the model (since no proof), but at least any data leakage is prevented. Some projects like Zama are even exploring an “FHE-enabled EVM (fhEVM)” where smart contracts could operate on encrypted inputs, but verifying those computations on-chain would require the contract to somehow enforce correct computation – an open challenge likely requiring ZK proofs or specialized secure hardware.
  • zkML (ZKPs): Ideal when the goal is verifiability and public auditability. If you want anyone (or any contract) to be sure that “Model $M$ was evaluated correctly on $X$ and produced $Y$”, ZKPs are the solution. They also provide privacy as a bonus (you can hide $X$ or $Y$ or $M$ if needed by treating them as private inputs to the proof), but their primary feature is the proof of correct execution.

A complementary relationship: It’s worth noting that ZKPs protect the verifier (they learn nothing about secrets, only that the computation was correctly done), whereas FHE protects the prover’s data from the computing party. In some scenarios, these could be combined – for example, a network of untrusted nodes could use FHE to compute on users’ private data and then provide ZK proofs to the users (or blockchain) that the computations were done according to the protocol. This would cover both privacy and correctness, but the performance cost is enormous with today’s algorithms. More feasible in the near term are hybrids like Trusted Execution Environments (TEE) plus ZKP or Functional Encryption plus ZKP – these are beyond our scope, but they aim to provide something similar (TEEs keep data/model secret during compute, then a ZKP can attest the TEE did the right thing).

In summary, FHE-of-ML prioritizes confidentiality of inputs/outputs, while zkML prioritizes verifiable correctness (with possible privacy). Table 1 below contrasts the key properties:

zk-SNARK (Halo2, Groth16, PLONK, etc.)

  • Prover performance (inference & proof): Heavy prover overhead (up to 10^6× normal runtime without optimizations; in practice 10^3–10^5×). Optimized for a specific model/circuit; proving time is minutes for medium models and hours for large ones. Recent zkML SNARKs (DeepProve with GKR) vastly improve this, achieving near-linear overhead — e.g. seconds instead of minutes for million-parameter models.
  • Proof size & verification: Very small proofs (often < 100 KB, sometimes a few KB). Verification is fast: a few pairings or polynomial evaluations (typically < 50 ms on-chain). DeepProve's GKR-based proofs are larger (tens to hundreds of KB) and verify in ~0.5 s — still much faster than re-running the model.
  • Privacy features: Data confidentiality: yes – inputs can be kept private in the proof (not revealed). Model privacy: yes – the prover can commit to model weights without revealing them. Output hiding: optional – the proof can attest to a statement without revealing the output (e.g. "output has property P"), though an output needed on-chain typically becomes public. Overall, SNARKs offer full zero-knowledge flexibility: hide whichever parts you want.
  • Trusted setup: Depends on the scheme. Groth16/EZKL require a trusted setup per circuit; PLONK/Halo2 use a one-time universal setup. DeepProve's sum-check GKR is transparent (no setup) – a bonus of that design.
  • Post-quantum: Classical SNARKs (BLS12-381 curves) are not PQ-safe (vulnerable to quantum attacks on elliptic-curve discrete log). Some newer SNARKs use PQ-safe commitments, but Halo2/PLONK as used in EZKL are not. GKR (DeepProve) uses hash commitments (e.g. Poseidon/Merkle), which are conjectured PQ-safe, relying on hash preimage resistance.

zk-STARK (FRI, hash-based proofs)

  • Prover performance: High overhead but more linear scaling — typically 10^2–10^4× slower than native for large tasks, with room to parallelize. General STARK VMs (Risc0, Cairo) showed slower performance than SNARKs for ML in 2024 (e.g. 3×–66× slower than Halo2 in some cases). Specialized STARKs (or GKR) can approach linear overhead and outperform SNARKs for large circuits.
  • Proof size & verification: Proofs are larger — often tens of KB, growing with circuit size/log(n). The verifier performs multiple hash and FFT checks; verification time is ~O(n^ε) for small ε (roughly 50–500 ms depending on proof size). On-chain this is costlier (StarkWare's L1 verifier can take millions of gas per proof). Some STARKs support recursive proofs to compress size, at the cost of prover time.
  • Privacy features: A STARK can be made zero-knowledge by randomizing trace data (adding blinding to polynomial evaluations), so it can hide private inputs and models much like a SNARK. Many STARK implementations focus on integrity, but zk-STARK variants do allow privacy. Output hiding is likewise possible in theory (the prover simply doesn't declare the output as public), but rarely used, since the output is usually what we want to reveal and verify.
  • Trusted setup: None. Transparency is a hallmark of STARKs – they require only a common random string (which Fiat-Shamir can derive). This makes them attractive for open-ended use: any model, any time, no per-model ceremony.
  • Post-quantum: Yes. STARKs rely on hashes and information-theoretic assumptions (the random oracle model and the difficulty of certain codeword-decoding problems in FRI), which are believed secure against quantum adversaries. STARK proofs are thus PQ-resistant, an advantage for future-proofing verifiable AI.

FHE for ML (fully homomorphic encryption applied to inference)

  • Prover performance: The "prover" is the party computing on encrypted data, and computation time is extremely high — 10^3–10^5× slower than plaintext inference is common. High-end hardware (many-core servers, FPGAs, etc.) can mitigate this, and optimizations such as low-precision inference and leveled FHE parameters reduce overhead, but there is a fundamental performance hit. FHE is currently practical for small models or simple linear models; deep networks remain challenging beyond toy sizes.
  • Proof size & verification: No proof is generated — the result is an encrypted output. FHE alone provides no correctness check; one trusts the computing party not to cheat. (Combined with secure hardware one might get an attestation; otherwise a malicious server could return an incorrect encrypted result that the client decrypts to a wrong output without knowing the difference.)
  • Privacy features: Data confidentiality: yes – the input is encrypted, so the computing party learns nothing about it. Model privacy: if the model owner computes on encrypted input, the model is in plaintext on their side (not protected); if roles are reversed (the client holds the model encrypted and the server computes), the model can stay encrypted, but this scenario is less common. Techniques like secure two-party ML combine FHE/MPC to protect both, but these go beyond plain FHE. Output hiding: by default the output is encrypted, decryptable only by the holder of the secret key (usually the input owner), so it is hidden from the computing server; if the output should be public, the client can decrypt and reveal it.
  • Trusted setup: None. Each user generates their own key pair for encryption; trust relies on keys remaining secret.
  • Post-quantum: The security of FHE schemes (e.g. BFV, CKKS, TFHE) rests on lattice problems (Learning With Errors), which are believed resistant to quantum attacks (no efficient quantum algorithm is known). FHE is therefore generally considered post-quantum secure.

Table 1: Comparison of zk-SNARK, zk-STARK, and FHE approaches for machine learning inference (performance and privacy trade-offs).

Use Cases and Implications for Web3 Applications

The convergence of AI and blockchain via zkML unlocks powerful new application patterns in Web3:

  • Decentralized Autonomous Agents & On-Chain Decision-Making: Smart contracts or DAOs can incorporate AI-driven decisions with guarantees of correctness. For example, imagine a DAO that uses a neural network to analyze market conditions before executing trades. With zkML, the DAO’s smart contract can require a zkSNARK proof that the authorized ML model (with a known hash commitment) was run on the latest data and produced the recommended action, before the action is accepted. This prevents malicious actors from injecting a fake prediction – the chain verifies the AI’s computation. Over time, one could even have fully on-chain autonomous agents (contracts that query off-chain AI or contain simplified models) making decisions in DeFi or games, with all their moves proven correct and policy-compliant via zk proofs. This raises the trust in autonomous agents, since their “thinking” is transparent and verifiable rather than a black-box.

  • Verifiable Compute Markets: Projects like Lagrange are effectively creating verifiable computation marketplaces – developers can outsource heavy ML inference to a network of provers and get back a proof with the result. This is analogous to decentralized cloud computing, but with built-in trust: you don’t need to trust the server, only the proof. It’s a paradigm shift for oracles and off-chain computation. Protocols like Ethereum’s upcoming DSC (decentralized sequencing layer) or oracle networks could use this to provide data feeds or analytic feeds with cryptographic guarantees. For instance, an oracle could supply “the result of model X on input Y” and anyone can verify the attached proof on-chain, rather than trusting the oracle’s word. This could enable verifiable AI-as-a-service on blockchain: any contract can request a computation (like “score these credit risks with my private model”) and accept the answer only with a valid proof. Projects such as Gensyn are exploring decentralized training and inference marketplaces using these verification techniques.

  • NFTs and Gaming – Provenance and Evolution: In blockchain games or NFT collectibles, zkML can prove traits or game moves were generated by legitimate AI models. For example, a game might allow an AI to evolve an NFT pet’s attributes. Without ZK, a clever user might modify the AI or the outcome to get a superior pet. With zkML, the game can require a proof that “pet’s new stats were computed by the official evolution model on the pet’s old stats”, preventing cheating. Similarly for generative art NFTs: an artist could release a generative model as a commitment; later, when minting NFTs, prove each image was produced by that model given some seed, guaranteeing authenticity (and even doing so without revealing the exact model to the public, preserving the artist’s IP). This provenance verification ensures authenticity in a manner akin to verifiable randomness – except here it’s verifiable creativity.

  • Privacy-Preserving AI in Sensitive Domains: zkML allows confirmation of outcomes without exposing inputs. In healthcare, a patient’s data could be run through an AI diagnostic model by a cloud provider; the hospital receives a diagnosis and a proof that the model (which could be privately held by a pharmaceutical company) was run correctly on the patient data. The patient data remains private (only an encrypted or committed form was used in the proof), and the model weights remain proprietary – yet the result is trusted. Regulators or insurance could also verify that only approved models were used. In finance, a company could prove to an auditor or regulator that its risk model was applied to its internal data and produced certain metrics without revealing the underlying sensitive financial data. This enables compliance and oversight with cryptographic assurances rather than manual trust.

  • Cross-Chain and Off-Chain Interoperability: Because zero-knowledge proofs are fundamentally portable, zkML can facilitate cross-chain AI results. One chain might have an AI-intensive application running off-chain; it can post a proof of the result to a different blockchain, which will trustlessly accept it. For instance, consider a multi-chain DAO using an AI to aggregate sentiment across social media (off-chain data). The AI analysis (complex NLP on large data) is done off-chain by a service that then posts a proof to a small blockchain (or multiple chains) that “analysis was done correctly and output sentiment score = 0.85”. All chains can verify and use that result in their governance logic, without each needing to rerun the analysis. This kind of interoperable verifiable compute is what Lagrange’s network aims to support, by serving multiple rollups or L1s simultaneously. It removes the need for trusted bridges or oracle assumptions when moving results between chains.

  • AI Alignment and Governance: On a more forward-looking note, zkML has been highlighted as a tool for AI governance and safety. Lagrange’s vision statements, for example, argue that as AI systems become more powerful (even superintelligent), cryptographic verification will be essential to ensure they follow agreed rules. By requiring AI models to produce proofs of their reasoning or constraints, humans retain a degree of control – “you cannot trust what you cannot verify”. While this is speculative and involves social as much as technical aspects, the technology could enforce that an AI agent running autonomously still proves it is using an approved model and hasn’t been tampered with. Decentralized AI networks might use on-chain proofs to verify contributions (e.g., a network of nodes collaboratively training a model can prove each update was computed faithfully). Thus zkML could play a role in ensuring AI systems remain accountable to human-defined protocols even in decentralized or uncontrolled environments.
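The first pattern above — a contract accepting an AI-driven action only when accompanied by a proof bound to an approved model — can be sketched in a few lines. This is a minimal illustration, not any real zkML library's API: `verify_snark` is a stub standing in for a pairing-based verifier, and `APPROVED_MODEL_HASH` is a hypothetical on-chain commitment to the authorized model's weights.

```python
import hashlib

# Hypothetical commitment to the approved model's weights, stored on-chain.
APPROVED_MODEL_HASH = hashlib.sha256(b"model-weights-v1").hexdigest()

def verify_snark(proof: bytes, public_inputs: dict) -> bool:
    """Stub standing in for a real SNARK verifier (e.g. a pairing precompile)."""
    return proof == b"valid-proof-for:" + repr(sorted(public_inputs.items())).encode()

def accept_ai_action(action: str, proof: bytes, public_inputs: dict) -> bool:
    # 1. The proof must bind to the committed (approved) model.
    if public_inputs.get("model_hash") != APPROVED_MODEL_HASH:
        return False
    # 2. The proven output must match the action the contract is asked to take.
    if public_inputs.get("output") != action:
        return False
    # 3. Only then check the zero-knowledge proof itself.
    return verify_snark(proof, public_inputs)
```

The point of the ordering is that the model commitment and the claimed output are part of the proof's public inputs, so a valid proof for a different model or a different output is rejected before any action executes.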

In conclusion, zkML and verifiable on-chain AI represent a convergence of advanced cryptography and machine learning that stands to enhance trust, transparency, and privacy in AI applications. By comparing the major approaches – zk-SNARKs, zk-STARKs, and FHE – we see a spectrum of trade-offs between performance and privacy, each suitable for different scenarios. SNARK-based frameworks like Ezkl and innovations like Lagrange’s DeepProve have made it feasible to prove substantial neural network inferences with practical effort, opening the door to real-world deployments of verifiable AI. STARK-based and VM-based approaches promise greater flexibility and post-quantum security, which will become important as the field matures. FHE, while not a solution for verifiability, addresses the complementary need of confidential ML computation, and in combination with ZKPs or in specific private contexts it can empower users to leverage AI without sacrificing data privacy.

The implications for Web3 are significant: we can foresee smart contracts reacting to AI predictions, knowing they are correct; markets for compute where results are trustlessly sold; digital identities (like Worldcoin’s proof-of-personhood via iris AI) protected by zkML to confirm someone is human without leaking their biometric image; and generally a new class of “provable intelligence” that enriches blockchain applications. Many challenges remain – performance for very large models, developer ergonomics, and the need for specialized hardware – but the trajectory is clear. As one report noted, “today’s ZKPs can support small models, but moderate to large models break the paradigm”; however, rapid advances (50×–150× speedups with DeepProve over prior art) are pushing that boundary outward. With ongoing research (e.g., on hardware acceleration and distributed proving), we can expect progressively larger and more complex AI models to become provable. zkML might soon evolve from niche demos to an essential component of trusted AI infrastructure, ensuring that as AI becomes ubiquitous, it does so in a way that is auditable, decentralized, and aligned with user privacy and security.

Ethereum's Anonymity Myth: How Researchers Unmasked 15% of Validators

· 6 min read
Dora Noda
Software Engineer

One of the core promises of blockchain technology like Ethereum is a degree of anonymity. Participants, known as validators, are supposed to operate behind a veil of cryptographic pseudonyms, protecting their real-world identity and, by extension, their security.

However, a recent research paper titled "Deanonymizing Ethereum Validators: The P2P Network Has a Privacy Issue" from researchers at ETH Zurich and other institutions reveals a critical flaw in this assumption. They demonstrate a simple, low-cost method to link a validator's public identifier directly to the IP address of the machine it's running on.

In short, Ethereum validators are not nearly as anonymous as many believe. The findings were significant enough to earn the researchers a bug bounty from the Ethereum Foundation, acknowledging the severity of the privacy leak.

How the Vulnerability Works: A Flaw in the Gossip

To understand the vulnerability, we first need a basic picture of how Ethereum validators communicate. The network consists of over a million validators who constantly "vote" on the state of the chain. These votes are called attestations, and they are broadcast across a peer-to-peer (P2P) network to all other nodes.

With so many validators, having everyone broadcast every vote to everyone else would instantly overwhelm the network. To solve this, Ethereum’s designers implemented a clever scaling solution: the network is divided into 64 distinct communication channels, known as subnets.

  • By default, each node (the computer running the validator software) subscribes to only two of these 64 subnets. Its primary job is to diligently relay all messages it sees on those two channels.
  • When a validator needs to cast a vote, its attestation is randomly assigned to one of the 64 subnets for broadcast.

This is where the vulnerability lies. Imagine a node whose job is to manage traffic for channels 12 and 13. All day, it faithfully forwards messages from just those two channels. But then, it suddenly sends you a message that belongs to channel 45.

This is a powerful clue. Why would a node handle a message from a channel it's not responsible for? The most logical conclusion is that the node itself generated that message. This implies that the validator who created the attestation for channel 45 is running on that very machine.

The researchers exploited this exact principle. By setting up their own listening nodes, they monitored the subnets from which their peers sent attestations. When a peer sent a message from a subnet it wasn't officially subscribed to, they could infer with high confidence that the peer hosted the originating validator.
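The inference rule the researchers exploited fits in a few lines. This is a minimal sketch of the heuristic only — the real study also tracked each peer's advertised subscriptions over time and accounted for noise, which is omitted here; the subnet numbers are illustrative.

```python
NUM_SUBNETS = 64  # Ethereum's attestation subnets

def likely_hosts_validator(subscribed: set, observed_subnet: int) -> bool:
    """Flag a peer as the probable origin of an attestation it relays.

    A node faithfully relays only its subscribed subnets, so a message from
    any other subnet was most likely generated by a validator on that node.
    """
    return observed_subnet not in subscribed

peer_subnets = {12, 13}  # the two subnets this peer advertises by default
# likely_hosts_validator(peer_subnets, 12) -> normal relaying, no signal
# likely_hosts_validator(peer_subnets, 45) -> strong deanonymization clue
```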

The method proved shockingly effective. Using just four nodes over three days, the team successfully located the IP addresses of over 161,000 validators, representing more than 15% of the entire Ethereum network.

Why This Matters: The Risks of Deanonymization

Exposing a validator's IP address is not a trivial matter. It opens the door for targeted attacks that threaten individual operators and the health of the Ethereum network as a whole.

1. Targeted Attacks and Reward Theft

Ethereum announces which validator is scheduled to propose the next block a few minutes in advance. An attacker who knows this validator's IP address can launch a Distributed Denial-of-Service (DDoS) attack, flooding it with traffic and knocking it offline. If the validator misses its four-second window to propose the block, the opportunity passes to the next validator in line. If the attacker is that next validator, they can then claim the block rewards and valuable transaction fees (MEV) that should have gone to the victim.

2. Threats to Network Liveness and Safety

A well-resourced attacker could perform these "sniping" attacks repeatedly, causing the entire blockchain to slow down or halt (a liveness attack). In a more severe scenario, an attacker could use this information to launch sophisticated network-partitioning attacks, potentially causing different parts of the network to disagree on the chain's history, thus compromising its integrity (a safety attack).

3. Revealing a Centralized Reality

The research also shed light on some uncomfortable truths about the network's decentralization:

  • Extreme Concentration: The team found peers hosting a staggering number of validators, including one IP address running over 19,000. The failure of a single machine could have an outsized impact on the network.
  • Dependence on Cloud Services: Roughly 90% of located validators run on cloud providers like AWS and Hetzner, not on the computers of solo home stakers. This represents a significant point of centralization.
  • Hidden Dependencies: Many large staking pools claim their operators are independent. However, the research found instances where validators from different, competing pools were running on the same physical machine, creating hidden systemic risks.

Mitigations: How Can Validators Protect Themselves?

Fortunately, there are ways to defend against this deanonymization technique. The researchers proposed several mitigations:

  • Create More Noise: A validator can choose to subscribe to more than two subnets—or even all 64. This makes it much harder for an observer to distinguish between relayed messages and self-generated ones.
  • Use Multiple Nodes: An operator can separate validator duties across different machines with different IPs. For example, one node could handle attestations while a separate, private node is used only for proposing high-value blocks.
  • Private Peering: Validators can establish trusted, private connections with other nodes to relay their messages, obscuring their true origin within a small, trusted group.
  • Anonymous Broadcasting Protocols: More advanced solutions like Dandelion, which obfuscates a message's origin by passing it along a random "stem" before broadcasting it widely, could be implemented.
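The effect of the first mitigation can be shown directly: the observer's signal is an out-of-subscription message, and once a node subscribes to every subnet that signal no longer exists. A minimal sketch, with illustrative subnet numbers:

```python
NUM_SUBNETS = 64

def observer_can_flag(subscribed: set, attestation_subnet: int) -> bool:
    """The observer's only clue: a message outside the peer's subscriptions."""
    return attestation_subnet not in subscribed

default_node = {12, 13}               # default client: two subnets
noisy_node = set(range(NUM_SUBNETS))  # mitigation: subscribe to all 64

# A self-generated attestation on subnet 45 exposes the default node,
# but is indistinguishable from relayed traffic on the noisy node.
```

The trade-off, of course, is bandwidth: subscribing to more subnets means relaying far more traffic.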

Conclusion

This research powerfully illustrates the inherent trade-off between performance and privacy in distributed systems. In its effort to scale, Ethereum's P2P network adopted a design that compromised the anonymity of its most critical participants.

By bringing this vulnerability to light, the researchers have given the Ethereum community the knowledge and tools needed to address it. Their work is a crucial step toward building a more robust, secure, and truly decentralized network for the future.

Expanding Our Horizons: BlockEden.xyz Adds Base, Berachain, and Blast to API Marketplace

· 4 min read

We're thrilled to announce a significant expansion to BlockEden.xyz's API Marketplace with the addition of three cutting-edge blockchain networks: Base, Berachain, and Blast. These new offerings reflect our commitment to providing developers with comprehensive access to the most innovative blockchain infrastructures, enabling seamless development across multiple ecosystems.

API Marketplace Expansion

Base: Coinbase's Ethereum L2 Solution

Base is an Ethereum Layer 2 (L2) solution developed by Coinbase, designed to bring millions of users into the onchain ecosystem. As a secure, low-cost, developer-friendly Ethereum L2, Base combines the robust security of Ethereum with the scalability benefits of optimistic rollups.

Our new Base API endpoint lets developers:

  • Access Base's infrastructure without managing their own nodes
  • Leverage high-performance RPC connections with 99.9% uptime
  • Build applications that benefit from Ethereum's security with lower fees
  • Seamlessly interact with Base's expanding ecosystem of applications

Base is particularly appealing for developers looking to create consumer-facing applications that require Ethereum's security but at a fraction of the cost.

Berachain: Performance Meets EVM Compatibility

Berachain brings a unique approach to blockchain infrastructure, combining high performance with complete Ethereum Virtual Machine (EVM) compatibility. As an emerging network gaining significant attention from developers, Berachain offers:

  • EVM compatibility with enhanced throughput
  • Advanced smart contract capabilities
  • A growing ecosystem of innovative DeFi applications
  • Unique consensus mechanisms optimized for transaction speed

Our Berachain API provides developers with immediate access to this promising network, allowing teams to build and test applications without the complexity of managing infrastructure.

Blast: The First Native Yield L2

Blast stands out as the first Ethereum L2 with native yield for ETH and stablecoins. This innovative approach to yield generation makes Blast particularly interesting for DeFi developers and applications focused on capital efficiency.

Key benefits of our Blast API include:

  • Direct access to Blast's native yield mechanisms
  • Support for building yield-optimized applications
  • Simplified integration with Blast's unique features
  • High-performance RPC connections for seamless interactions

Blast's focus on native yield represents an exciting direction for Ethereum L2 solutions, potentially setting new standards for capital efficiency in the ecosystem.

Seamless Integration Process

Getting started with these new networks is straightforward with BlockEden.xyz:

  1. Visit our API Marketplace and select your desired network
  2. Create an API key through your BlockEden.xyz dashboard
  3. Integrate the endpoint into your development environment using our comprehensive documentation
  4. Start building with confidence, backed by our 99.9% uptime guarantee
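As a concrete illustration of step 3, here is how a standard Ethereum JSON-RPC request for any of these EVM-compatible networks might be constructed. The endpoint URL pattern and the `<your-api-key>` placeholder in the comment are assumptions for illustration only — consult the BlockEden.xyz documentation for the exact form; the payload itself is standard Ethereum JSON-RPC.

```python
import json

def rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Build a standard Ethereum JSON-RPC 2.0 request body."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

# Example: fetch the latest block number on Base, Berachain, or Blast.
payload = rpc_request("eth_blockNumber", [])
# POST `payload` to your network endpoint, e.g. (hypothetical URL pattern):
#   https://api.blockeden.xyz/<network>/<your-api-key>
```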

Why Choose BlockEden.xyz for These Networks?

BlockEden.xyz continues to distinguish itself through several core offerings:

  • High Availability: Our infrastructure maintains 99.9% uptime across all supported networks
  • Developer-First Approach: Comprehensive documentation and support for seamless integration
  • Unified Experience: Access multiple blockchain networks through a single, consistent interface
  • Competitive Pricing: Our compute unit credit (CUC) system ensures cost-effective scaling

Looking Forward

The addition of Base, Berachain, and Blast to our API Marketplace represents our ongoing commitment to supporting the diverse and evolving blockchain ecosystem. As these networks continue to mature and attract developers, BlockEden.xyz will be there to provide the reliable infrastructure needed to build the next generation of decentralized applications.

We invite developers to explore these new offerings and provide feedback as we continue to enhance our services. Your input is invaluable in helping us refine and expand our API marketplace to meet your evolving needs.

Ready to start building on Base, Berachain, or Blast? Visit BlockEden.xyz API Marketplace today and create your access key to begin your journey!

For the latest updates and announcements, connect with us on Twitter or join our community on Discord.

Sony's Soneium: Bringing Blockchain to the Entertainment World

· 6 min read

In the rapidly evolving landscape of blockchain technology, a familiar name has stepped into the arena with a bold vision. Sony, the entertainment and technology giant, has launched Soneium—an Ethereum Layer-2 blockchain designed to bridge the gap between cutting-edge Web3 innovations and mainstream internet services. But what exactly is Soneium, and why should you care? Let's dive in.

What is Soneium?

Soneium is a Layer-2 blockchain built on top of Ethereum, developed by Sony Block Solutions Labs—a joint venture between Sony Group and Startale Labs. Launched in January 2025 after a successful testnet phase, Soneium aims to "realize the open internet that transcends boundaries" by making blockchain technology accessible, scalable, and practical for everyday use.

Think of it as Sony's attempt to make blockchain as user-friendly as its PlayStations and Walkmans once made gaming and music.

The Tech Behind Soneium

For the tech-curious among us, Soneium is built on Optimism's OP Stack, which means it uses the same optimistic rollup framework as other popular Layer-2 solutions. In plain English? It processes transactions off-chain and only periodically posts compressed data back to Ethereum, making transactions faster and cheaper while maintaining security.
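The rollup flow just described can be illustrated with a toy batch builder: execute transactions off-chain, then post only compressed call data plus a commitment to the resulting state. Everything here is a simplified stand-in — not Soneium's or the OP Stack's actual batch format.

```python
import hashlib
import json
import zlib

def build_batch(transactions: list) -> dict:
    """Bundle off-chain transactions into a compressed batch for L1 posting."""
    calldata = json.dumps(transactions, separators=(",", ":")).encode()
    return {
        "compressed_calldata": zlib.compress(calldata, level=9),
        "state_root": hashlib.sha256(calldata).hexdigest(),  # toy commitment
        "tx_count": len(transactions),
    }

txs = [{"from": "0xabc", "to": "0xdef", "value": i} for i in range(100)]
batch = build_batch(txs)
# Only the compressed batch and the commitment go to Ethereum — far smaller
# (and cheaper) than posting every raw transaction on L1.
```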

Soneium is fully compatible with the Ethereum Virtual Machine (EVM), so developers familiar with Ethereum can easily deploy their applications on the platform. It also joins Optimism's "Superchain" ecosystem, allowing it to communicate easily with other Layer-2 networks like Coinbase's Base.

What Makes Soneium Special?

While there are already several Layer-2 solutions on the market, Soneium stands out for its focus on entertainment, creative content, and fan engagement—areas where Sony has decades of experience and vast resources.

Imagine buying a movie ticket and receiving an exclusive digital collectible that grants access to bonus content. Or attending a virtual concert where your NFT ticket becomes a memento with special perks. These are the kinds of experiences Sony envisions building on Soneium.

The platform is designed to support:

  • Gaming experiences with faster transactions for in-game assets
  • NFT marketplaces for digital collectibles
  • Fan engagement apps where communities can interact with creators
  • Financial tools for creators and fans
  • Enterprise blockchain solutions

Sony's Partnerships Power Soneium

Sony isn't going it alone. The company has forged strategic partnerships to bolster Soneium's development and adoption:

  • Startale Labs, a Singapore-based blockchain startup led by Sota Watanabe (co-founder of Astar Network), is Sony's key technical partner
  • Optimism Foundation provides the underlying technology
  • Circle ensures that USD Coin (USDC) serves as a primary currency on the network
  • Samsung has made a strategic investment through its venture arm
  • Alchemy, Chainlink, Pyth Network, and The Graph provide essential infrastructure services

Sony is also leveraging its internal divisions—including Sony Pictures, Sony Music Entertainment, and Sony Music Publishing—to pilot Web3 fan engagement projects on Soneium. For example, the platform has already hosted NFT campaigns for the "Ghost in the Shell" franchise and various music artists under Sony's label.

Early Signs of Success

Despite being just a few months old, Soneium has shown promising traction:

  • Its testnet phase saw over 15 million active wallets and processed over 47 million transactions
  • Within the first month of mainnet launch, Soneium attracted over 248,000 on-chain accounts and about 1.8 million addresses interacting with the network
  • The platform has successfully launched several NFT drops, including a collaboration with Web3 music label Coop Records

To fuel growth, Sony and Astar Network launched a 100-day incentive campaign with a 100 million token reward pool, encouraging users to try out apps, supply liquidity, and be active on the platform.

Security and Scalability: A Balancing Act

Security is paramount for Sony, especially as it carries its trusted brand into the blockchain space. Soneium inherits Ethereum's security while adding its own protective measures.

Interestingly, Sony has taken a somewhat controversial approach by blacklisting certain smart contracts and tokens deemed to infringe on intellectual property. While this has raised questions about decentralization, Sony argues that some curation is necessary to protect creators and build trust with mainstream users.

On the scalability front, Soneium's very purpose is to enhance Ethereum's throughput. By processing transactions off-chain, it can handle a much higher volume of transactions at much lower costs—crucial for mass adoption of applications like games or large NFT drops.

The Road Ahead

Sony has outlined a multi-phase roadmap for Soneium:

  1. First year: Onboarding Web3 enthusiasts and early adopters
  2. Within two years: Integrating Sony products like Sony Bank, Sony Music, and Sony Pictures
  3. Within three years: Expanding to enterprises and general applications beyond Sony's ecosystem

The company is gradually rolling out its NFT-driven Fan Marketing Platform, which will allow brands and artists to easily issue NFTs to fans, offering perks like exclusive content and event access.

While Soneium currently relies on ETH for gas fees and uses ASTR (Astar Network's token) for incentives, there's speculation about a potential Soneium native token in the future.

How Soneium Compares to Other Layer-2 Networks

In the crowded Layer-2 market, Soneium faces competition from established players like Arbitrum, Optimism, and Polygon. However, Sony is carving a unique position by leveraging its entertainment empire and focusing on creative use cases.

Unlike purely community-driven Layer-2 networks, Soneium benefits from Sony's brand trust, access to content IP, and a potentially huge user base from existing Sony services.

The trade-off is less decentralization (at least initially) compared to networks like Optimism and Arbitrum, which have issued tokens and implemented community governance.

The Big Picture

Sony's Soneium represents a significant step toward blockchain mass adoption. By focusing on content and fan engagement—areas where Sony excels—the company is positioning Soneium as a bridge between Web3 enthusiasts and everyday consumers.

If Sony can successfully convert even a fraction of its millions of customers into Web3 participants, Soneium could become one of the first truly mainstream blockchain platforms.

The experiment has just begun, but the potential is enormous. As the lines between entertainment, technology, and blockchain continue to blur, Soneium may well be at the forefront of this convergence, bringing blockchain technology to the masses one gaming avatar or music NFT at a time.

MegaETH: The 100,000 TPS Layer-2 Aiming to Supercharge Ethereum

· 9 min read

The Speed Revolution Ethereum Has Been Waiting For?

In the high-stakes world of blockchain scaling solutions, a new contender has emerged that's generating both excitement and controversy. MegaETH is positioning itself as Ethereum's answer to ultra-fast chains like Solana—promising sub-millisecond latency and an astonishing 100,000 transactions per second (TPS).

MegaETH

But these claims come with significant trade-offs. MegaETH is making calculated sacrifices to "Make Ethereum Great Again," raising important questions about the balance between performance, security, and decentralization.

As infrastructure providers who've seen many promising solutions come and go, we at BlockEden.xyz have conducted this analysis to help developers and builders understand what makes MegaETH unique—and what risks to consider before building on it.

What Makes MegaETH Different?

MegaETH is an Ethereum Layer-2 solution that has reimagined blockchain architecture with a singular focus: real-time performance.

While most L2 solutions improve on Ethereum's ~15 TPS by a factor of 10-100x, MegaETH aims for 1,000-10,000x improvement—speeds that would put it in a category of its own.

Revolutionary Technical Approach

MegaETH achieves its extraordinary speed through radical engineering decisions:

  1. Single Sequencer Architecture: Unlike most L2s that use multiple sequencers or plan to decentralize, MegaETH uses a single sequencer for ordering transactions, deliberately choosing performance over decentralization.

  2. Optimized State Trie: A completely redesigned state storage system that can handle terabyte-level state data efficiently, even on nodes with limited RAM.

  3. JIT Bytecode Compilation: Just-in-time compilation of Ethereum smart contract bytecode, bringing execution closer to "bare-metal" speed.

  4. Parallel Execution Pipeline: A multi-core approach that processes transactions in parallel streams to maximize throughput.

  5. Micro Blocks: Targeting ~1ms block times through continuous "streaming" block production rather than batch processing.

  6. EigenDA Integration: Using EigenLayer's data availability solution instead of posting all data to Ethereum L1, reducing costs while maintaining security through Ethereum-aligned validation.

This architecture delivers performance metrics that seem almost impossible for a blockchain:

  • Ultra-low latency (~10 ms today, with ~1 ms micro blocks as the end goal)
  • 100,000+ TPS throughput
  • EVM compatibility for easy application porting
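The contrast between conventional batch block production and MegaETH's streaming micro blocks can be sketched in a few lines. This is purely our illustration of the two modes — `batchBlocks` and `microBlocks` are hypothetical names, not MegaETH APIs:

```typescript
// Toy contrast between batch-style and streaming ("micro block") production.
interface Block {
  height: number;
  txs: string[];
}

// Batch style: wait until `batchSize` transactions accumulate, then seal a block.
function batchBlocks(txs: string[], batchSize: number): Block[] {
  const blocks: Block[] = [];
  for (let i = 0; i < txs.length; i += batchSize) {
    blocks.push({ height: blocks.length, txs: txs.slice(i, i + batchSize) });
  }
  return blocks;
}

// Streaming style: seal a tiny block for whatever arrived in each ~1 ms tick,
// so users see inclusion almost immediately instead of waiting for a batch.
function microBlocks(ticks: string[][]): Block[] {
  return ticks
    .filter((tick) => tick.length > 0)
    .map((txs, i) => ({ height: i, txs }));
}
```

The streaming variant trades larger, less frequent commitments for near-continuous inclusion, which is the essence of MegaETH's "micro block" design.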

Testing the Claims: MegaETH's Current Status

As of March 2025, MegaETH's public testnet is live. The initial deployment began on March 6th with a phased rollout, starting with infrastructure partners and dApp teams before opening to broader user onboarding.

Early testnet metrics show:

  • ~1.68 Giga-gas per second throughput
  • ~15ms block times (significantly faster than other L2s)
  • Support for parallel execution that will eventually push performance even higher

The team has indicated that the testnet is running in a somewhat throttled mode, with plans to enable additional parallelization that could double gas throughput to around 3.36 Ggas/sec, moving toward their ultimate target of 10 Ggas/sec (10 billion gas per second).

The Security and Trust Model

MegaETH's approach to security represents a significant departure from blockchain orthodoxy. Unlike Ethereum's trust-minimized design with thousands of validating nodes, MegaETH embraces a centralized execution layer with Ethereum as its security backstop.

The "Can't Be Evil" Philosophy

MegaETH employs an optimistic rollup security model with some unique characteristics:

  1. Fraud Proof System: Like other optimistic rollups, MegaETH allows observers to challenge invalid state transitions through fraud proofs submitted to Ethereum.

  2. Verifier Nodes: Independent nodes replicate the sequencer's computations and would initiate fraud proofs if discrepancies are found.

  3. Ethereum Settlement: All transactions are eventually settled on Ethereum, inheriting its security for final state.

This creates what the team calls a "can't be evil" mechanism—the sequencer can't produce invalid blocks or alter state incorrectly without being caught and punished.
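The verifier-node mechanism above can be sketched with a toy transfer-only state machine. This is a minimal illustration of the optimistic fraud-proof idea, assuming hypothetical `applyTransfer` and `shouldChallenge` helpers — it is not MegaETH's actual protocol:

```typescript
// Toy model of optimistic-rollup fraud detection: a verifier re-executes
// the sequencer's claimed state transition and challenges on divergence.
type State = Record<string, number>;

interface Transfer {
  from: string;
  to: string;
  amount: number;
}

// Deterministic state-transition function that both the sequencer and
// independent verifier nodes run.
function applyTransfer(state: State, tx: Transfer): State {
  const next = { ...state };
  const fromBalance = next[tx.from] ?? 0;
  if (fromBalance < tx.amount) throw new Error("insufficient balance");
  next[tx.from] = fromBalance - tx.amount;
  next[tx.to] = (next[tx.to] ?? 0) + tx.amount;
  return next;
}

// A verifier recomputes the transition; any mismatch with the sequencer's
// claimed result would trigger a fraud proof on Ethereum.
function shouldChallenge(prev: State, tx: Transfer, claimedNext: State): boolean {
  const recomputed = applyTransfer(prev, tx);
  return JSON.stringify(recomputed) !== JSON.stringify(claimedNext);
}
```

In the real system the comparison happens over state roots rather than raw balances, but the economic logic is the same: the sequencer is punished if any verifier can show a divergent re-execution.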

The Centralization Trade-off

The controversial aspect: MegaETH runs with a single sequencer and explicitly has "no plans to ever decentralize the sequencer." This brings two significant risks:

  1. Liveness Risk: If the sequencer goes offline, the network could halt until it recovers or a new sequencer is appointed.

  2. Censorship Risk: The sequencer could theoretically censor certain transactions or users in the short term (though users could ultimately exit via L1).

MegaETH argues these risks are acceptable because:

  • The L2 is anchored to Ethereum for final security
  • Data availability is handled by multiple nodes in EigenDA
  • Any censorship or fraud can be seen and challenged by the community

Use Cases: When Ultra-Fast Execution Matters

MegaETH's real-time capabilities unlock use cases that were previously impractical on slower blockchains:

1. High-Frequency Trading and DeFi

MegaETH enables DEXs with near-instant trade execution and order book updates. Projects already building include:

  • GTE: A real-time spot DEX combining central limit order books and AMM liquidity
  • Teko Finance: A money market for leveraged lending with rapid margin updates
  • Cap: A stablecoin and yield engine that arbitrages across markets
  • Avon: A lending protocol with orderbook-based loan matching

These DeFi applications benefit from MegaETH's throughput to operate with minimal slippage and high-frequency updates.

2. Gaming and Metaverse

The sub-second finality makes fully on-chain games viable without waiting for confirmations:

  • Awe: An open-world 3D game with on-chain actions
  • Biomes: An on-chain metaverse similar to Minecraft
  • Mega Buddies and Mega Cheetah: Collectible avatar series

Such applications can deliver real-time feedback in blockchain games, enabling fast-paced gameplay and on-chain PvP battles.

3. Enterprise Applications

MegaETH's performance makes it suitable for enterprise applications requiring high throughput:

  • Instantaneous payments infrastructure
  • Real-time risk management systems
  • Supply chain verification with immediate finality
  • High-frequency auction systems

The key advantage in all these cases is the ability to run compute-intensive applications with immediate feedback while still being connected to Ethereum's ecosystem.

The Team Behind MegaETH

MegaETH was co-founded by a team with impressive credentials:

  • Li Yilong: PhD in computer science from Stanford specializing in low-latency computing systems
  • Yang Lei: PhD from MIT researching decentralized systems and Ethereum connectivity
  • Shuyao Kong: Former Head of Global Business Development at ConsenSys

The project has attracted notable backers, including Ethereum co-founders Vitalik Buterin and Joseph Lubin as angel investors. Vitalik's involvement is particularly noteworthy, as he rarely invests in specific projects.

Other investors include Sreeram Kannan (founder of EigenLayer), VC firms like Dragonfly Capital, Figment Capital, and Robot Ventures, and influential community figures such as Cobie.

Token Strategy: The Soulbound NFT Approach

MegaETH introduced an innovative token distribution method through "soulbound NFTs" called "The Fluffle." In February 2025, they created 10,000 non-transferable NFTs representing at least 5% of the total MegaETH token supply.

Key aspects of the tokenomics:

  • 5,000 NFTs were sold at 1 ETH each (raising ~$13-14 million)
  • The other 5,000 NFTs were allocated to ecosystem projects and builders
  • The NFTs are soulbound (cannot be transferred), ensuring long-term alignment
  • Implied valuation of around $540 million, extremely high for a pre-launch project
  • The team has raised approximately $30-40 million in venture funding

Eventually, the MegaETH token is expected to serve as the native currency for transaction fees and possibly for staking and governance.

How MegaETH Compares to Competitors

vs. Other Ethereum L2s

Compared to Optimism, Arbitrum, and Base, MegaETH is significantly faster but makes bigger compromises on decentralization:

  • Performance: MegaETH targets 100,000+ TPS and ~10 ms latency vs. Arbitrum's ~250 ms block times and far lower throughput
  • Decentralization: MegaETH uses a single sequencer vs. other L2s' plans for decentralized sequencers
  • Data Availability: MegaETH uses EigenDA vs. other L2s posting data directly to Ethereum

vs. Solana and High-Performance L1s

MegaETH aims to "beat Solana at its own game" while leveraging Ethereum's security:

  • Throughput: MegaETH targets 100k+ TPS vs. Solana's theoretical 65k TPS (typically a few thousand in practice)
  • Latency: MegaETH ~10 ms vs. Solana's ~400 ms block times
  • Decentralization: MegaETH has 1 sequencer vs. Solana's ~1,900 validators

vs. ZK-Rollups (StarkNet, zkSync)

While ZK-rollups offer stronger security guarantees through validity proofs:

  • Speed: MegaETH offers faster user experience without waiting for ZK proof generation
  • Trustlessness: ZK-rollups don't require trust in a sequencer's honesty, providing stronger security
  • Future Plans: MegaETH may eventually integrate ZK proofs, becoming a hybrid solution

MegaETH's positioning is clear: it's the fastest option within the Ethereum ecosystem, sacrificing some decentralization to achieve Web2-like speeds.

The Infrastructure Perspective: What Builders Should Consider

As an infrastructure provider connecting developers to blockchain nodes, BlockEden.xyz sees both opportunities and challenges in MegaETH's approach:

Potential Benefits for Builders

  1. Exceptional User Experience: Applications can offer instant feedback and high throughput, creating Web2-like responsiveness.

  2. EVM Compatibility: Existing Ethereum dApps can port over with minimal changes, unlocking performance without rewrites.

  3. Cost Efficiency: High throughput means lower per-transaction costs for users and applications.

  4. Ethereum Security Backstop: Despite centralization at the execution layer, Ethereum settlement provides a security foundation.

Risk Considerations

  1. Single Point of Failure: The centralized sequencer creates liveness risk—if it goes down, so does your application.

  2. Censorship Vulnerability: Applications could face transaction censorship without immediate recourse.

  3. Early-Stage Technology: MegaETH's novel architecture hasn't been battle-tested at scale with real value.

  4. Dependency on EigenDA: Relying on a newer data availability solution introduces an additional trust assumption.

Infrastructure Requirements

Supporting MegaETH's throughput will require robust infrastructure:

  • High-capacity RPC nodes capable of handling the firehose of data
  • Advanced indexing solutions for real-time data access
  • Specialized monitoring for the unique architecture
  • Reliable bridge monitoring for cross-chain operations

Conclusion: Revolution or Compromise?

MegaETH represents a bold experiment in blockchain scaling—one that deliberately prioritizes performance over decentralization. Whether this approach succeeds depends on whether the market values speed more than decentralized execution.

The coming months will be critical as MegaETH transitions from testnet to mainnet. If it delivers on its performance promises while maintaining sufficient security, it could fundamentally reshape how we think about blockchain scaling. If it stumbles, it will reinforce why decentralization remains a core blockchain value.

For now, MegaETH stands as one of the most ambitious Ethereum scaling solutions to date. Its willingness to challenge orthodoxy has already sparked important conversations about what trade-offs are acceptable in pursuit of mainstream blockchain adoption.

At BlockEden.xyz, we're committed to supporting developers wherever they build, including high-performance networks like MegaETH. Our reliable node infrastructure and API services are designed to help applications thrive across the multi-chain ecosystem, regardless of which approach to scaling ultimately prevails.


Looking to build on MegaETH or need reliable node infrastructure for high-throughput applications? Contact Email: info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.

Scaling Blockchains: How Caldera and the RaaS Revolution Are Shaping Web3's Future

· 7 min read

The Web3 Scaling Problem

The blockchain industry faces a persistent challenge: how do we scale to support millions of users without sacrificing security or decentralization?

Ethereum, the leading smart contract platform, processes roughly 15 transactions per second on its base layer. During periods of high demand, this limitation has led to exorbitant gas fees—sometimes exceeding $100 per transaction during NFT mints or DeFi farming frenzies.

This scaling bottleneck presents an existential threat to Web3 adoption. Users accustomed to the instant responsiveness of Web2 applications won't tolerate paying $50 and waiting 3 minutes just to swap tokens or mint an NFT.

Enter the solution that's rapidly reshaping blockchain architecture: Rollups-as-a-Service (RaaS).

Scaling Blockchains

Understanding Rollups-as-a-Service (RaaS)

RaaS platforms enable developers to deploy their own custom blockchain rollups without the complexity of building everything from scratch. These services transform what would normally require a specialized engineering team and months of development into a streamlined, sometimes one-click deployment process.

Why does this matter? Because rollups are the key to blockchain scaling.

Rollups work by:

  • Processing transactions off the main chain (Layer 1)
  • Batching these transactions together
  • Submitting compressed proofs of these transactions back to the main chain

The result? Drastically increased throughput and significantly reduced costs while inheriting security from the underlying Layer 1 blockchain (like Ethereum).
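The batching step can be sketched as follows. This toy example — our own `batchCommitment` helper, not any rollup framework's API — chains transaction hashes into a single digest that stands in for the Merkle/state root a real rollup would post to Layer 1:

```typescript
import { createHash } from "crypto";

// Toy sketch of rollup batching: many L2 transactions are compressed into
// one small commitment posted to L1, instead of posting every transaction.
// (Illustrative only; production rollups post calldata/blobs plus a
// Merkle or state root.)
interface Tx {
  from: string;
  to: string;
  value: number;
}

const hashHex = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

function batchCommitment(txs: Tx[]): { count: number; root: string } {
  // Chain the per-transaction hashes into a single digest, standing in
  // for the Merkle root a production rollup would compute.
  const root = txs.reduce(
    (acc, tx) => hashHex(acc + hashHex(JSON.stringify(tx))),
    ""
  );
  return { count: txs.length, root };
}
```

Because the L1 contract only stores the small commitment, hundreds of L2 transactions cost roughly what one L1 write does — which is where the throughput and fee savings come from.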

"Rollups don't compete with Ethereum—they extend it. They're like specialized Express lanes built on top of Ethereum's highway."

This approach to scaling is so promising that Ethereum officially adopted a "rollup-centric roadmap" in 2020, acknowledging that the future isn't a single monolithic chain, but rather an ecosystem of interconnected, purpose-built rollups.

Caldera: Leading the RaaS Revolution

Among the emerging RaaS providers, Caldera stands out as a frontrunner. Founded in 2023 and having raised $25M from prominent investors including Dragonfly, Sequoia Capital, and Lattice, Caldera has quickly positioned itself as a leading infrastructure provider in the rollup space.

What Makes Caldera Different?

Caldera distinguishes itself in several key ways:

  1. Multi-Framework Support: Unlike competitors who focus on a single rollup framework, Caldera supports major frameworks like Optimism's OP Stack and Arbitrum's Orbit/Nitro technology, giving developers flexibility in their technical approach.

  2. End-to-End Infrastructure: When you deploy with Caldera, you get a complete suite of components: reliable RPC nodes, block explorers, indexing services, and bridge interfaces.

  3. Rich Integration Ecosystem: Caldera comes pre-integrated with 40+ Web3 tools and services, including oracles, faucets, wallets, and cross-chain bridges (LayerZero, Axelar, Wormhole, Connext, and more).

  4. The Metalayer Network: Perhaps Caldera's most ambitious innovation is its Metalayer—a network that connects all Caldera-powered rollups into a unified ecosystem, allowing them to share liquidity and messages seamlessly.

  5. Multi-VM Support: In late 2024, Caldera became the first RaaS to support the Solana Virtual Machine (SVM) on Ethereum, enabling Solana-like high-performance chains that still settle to Ethereum's secure base layer.

Caldera's approach is creating what they call an "everything layer" for rollups—a cohesive network where different rollups can interoperate rather than exist as isolated islands.

Real-World Adoption: Who's Using Caldera?

Caldera has gained significant traction, with over 75 rollups in production as of late 2024. Some notable projects include:

  • Manta Pacific: A highly scalable network for deploying zero-knowledge applications that uses Caldera's OP Stack combined with Celestia for data availability.

  • RARI Chain: Rarible's NFT-focused rollup that processes transactions in under a second and enforces NFT royalties at the protocol level.

  • Kinto: A regulatory-compliant DeFi platform with on-chain KYC/AML and account abstraction capabilities.

  • Injective's inEVM: An EVM-compatible rollup that extends Injective's interoperability, connecting the Cosmos ecosystem with Ethereum-based dApps.

These projects highlight how application-specific rollups enable customization not possible on general-purpose Layer 1s. By late 2024, Caldera's collective rollups had reportedly processed over 300 million transactions for 6+ million unique wallets, with nearly $1 billion in total value locked (TVL).

How RaaS Compares: Caldera vs. Competitors

The RaaS landscape is becoming increasingly competitive, with several notable players:

Conduit

  • Focuses exclusively on Optimism and Arbitrum ecosystems
  • Emphasizes a fully self-serve, no-code experience
  • Powers approximately 20% of Ethereum's mainnet rollups, including Zora

AltLayer

  • Offers "Flashlayers"—disposable, on-demand rollups for temporary needs
  • Focuses on elastic scaling for specific events or high-traffic periods
  • Demonstrated impressive throughput during gaming events (180,000+ daily transactions)

Sovereign Labs

  • Building a Rollup SDK focused on zero-knowledge technologies
  • Aims to enable ZK-rollups on any base blockchain, not just Ethereum
  • Still in development, positioning for the next wave of multi-chain ZK deployment

While these competitors excel in specific niches, Caldera's comprehensive approach—combining a unified rollup network, multi-VM support, and a focus on developer experience—has helped establish it as a market leader.

The Future of RaaS and Blockchain Scaling

RaaS is poised to reshape the blockchain landscape in profound ways:

1. The Proliferation of App-Specific Chains

Industry research suggests we're moving toward a future with potentially millions of rollups, each serving specific applications or communities. With RaaS lowering deployment barriers, every significant dApp could have its own optimized chain.

2. Interoperability as the Critical Challenge

As rollups multiply, the ability to communicate and share value between them becomes crucial. Caldera's Metalayer represents an early attempt to solve this challenge—creating a unified experience across a web of rollups.

3. From Isolated Chains to Networked Ecosystems

The end goal is a seamless multi-chain experience where users hardly need to know which chain they're on. Value and data would flow freely through an interconnected web of specialized rollups, all secured by robust Layer 1 networks.

4. Cloud-Like Blockchain Infrastructure

RaaS is effectively turning blockchain infrastructure into a cloud-like service. Caldera's "Rollup Engine" allows dynamic upgrades and modular components, treating rollups like configurable cloud services that can scale on demand.

What This Means for Developers and BlockEden.xyz

At BlockEden.xyz, we see enormous potential in the RaaS revolution. As an infrastructure provider connecting developers to blockchain nodes securely, we're positioned to play a crucial role in this evolving landscape.

The proliferation of rollups means developers need reliable node infrastructure more than ever. A future with thousands of application-specific chains demands robust RPC services with high availability—precisely what BlockEden.xyz specializes in providing.

We're particularly excited about the opportunities in:

  1. Specialized RPC Services for Rollups: As rollups adopt unique features and optimizations, specialized infrastructure becomes crucial.

  2. Cross-Chain Data Indexing: With value flowing between multiple rollups, developers need tools to track and analyze cross-chain activities.

  3. Enhanced Developer Tools: As rollup deployment becomes simpler, the need for sophisticated monitoring, debugging, and analytics tools grows.

  4. Unified API Access: Developers working across multiple rollups need simplified, unified access to diverse blockchain networks.

Conclusion: The Modular Blockchain Future

The rise of Rollups-as-a-Service represents a fundamental shift in how we think about blockchain scaling. Rather than forcing all applications onto a single chain, we're moving toward a modular future with specialized chains for specific use cases, all interconnected and secured by robust Layer 1 networks.

Caldera's approach—creating a unified network of rollups with shared liquidity and seamless messaging—offers a glimpse of this future. By making rollup deployment as simple as spinning up a cloud server, RaaS providers are democratizing access to blockchain infrastructure.

At BlockEden.xyz, we're committed to supporting this evolution by providing the reliable node infrastructure and developer tools needed to build in this multi-chain future. As we often say, the future of Web3 isn't a single chain—it's thousands of specialized chains working together.


Looking to build on a rollup or need reliable node infrastructure for your blockchain project? Contact Email: info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.

ENS for Businesses in 2025: From 'Nice-to-Have' to Programmable Brand Identity

· 11 min read
Dora Noda
Software Engineer

For years, the Ethereum Name Service (ENS) was seen by many as a niche tool for crypto enthusiasts—a way to replace long, clunky wallet addresses with human-readable .eth names. But in 2025, that perception is outdated. ENS has evolved into a foundational layer for programmable brand identity, turning a simple name into a portable, verifiable, and unified anchor for your company’s entire digital presence.

It’s no longer just about brand.eth. It’s about making brand.com crypto-aware, issuing verifiable roles to employees, and building trust with customers through a single, canonical source of truth. This is the guide for businesses on why ENS now matters and how to implement it today.

TL;DR

  • ENS turns a name (e.g., brand.eth or brand.com) into a programmable identity that maps to wallets, apps, websites, and verified profile data.
  • You don’t have to abandon your DNS domain: with Gasless DNSSEC, a brand.com can function as an ENS name without on-chain fees at setup.
  • .eth pricing is transparent and renewal-based (shorter names cost more), and the revenue funds the public-good protocol via the ENS DAO.
  • Subnames like alice.brand.eth or support.brand.com let you issue roles, perks, and access—time-boxed and constrained by NameWrapper “fuses” and expiry.
  • ENS is moving core functionality to L2 in ENSv2, with trust-minimized resolution via CCIP‑Read—important for cost, speed, and scale.

Why ENS Matters for Modern Companies

For businesses, identity is fragmented. You have a domain name for your website, social media handles for marketing, and separate accounts for payments and operations. ENS offers a way to unify these, creating a single, authoritative identity layer.

  • Unified, Human-Readable Identity: At its core, ENS maps a memorable name to cryptographic addresses. But its power extends far beyond a single blockchain. With multi-chain support, your brand.eth can point to your Bitcoin treasury, Solana operations wallet, and Ethereum smart contracts simultaneously. Your brand’s name becomes the single, user-friendly anchor for payments, applications, and profiles across the web3 ecosystem.
  • Deep Ecosystem Integration: ENS isn't a speculative bet on a niche protocol; it's a web3 primitive. It is natively supported across major wallets (Coinbase Wallet, MetaMask), browsers (Brave, Opera), and decentralized applications (Uniswap, Aave). When partners like GoDaddy integrate ENS, it signals a convergence between web2 and web3 infrastructure. By adopting ENS, you are plugging your brand into a vast, interoperable network.
  • Rich, Verifiable Profile Data: Beyond addresses, ENS names can store standardized text records for profile information like an avatar, email, social media handles, and a website URL. This turns your ENS name into a canonical, machine-readable business card. Your support, marketing, and engineering tools can all pull from the same verified source, ensuring consistency and building trust with your users.

Two Onramps: .eth vs. “Bring Your Own DNS”

Getting started with ENS is flexible, offering two primary paths that can and should be used together.

1. Register brand.eth

This is the web3-native approach. Registering a .eth name gives you a crypto-native asset that signals your brand's commitment to the ecosystem. The process is straightforward and transparent.

  • Clear Fee Schedule: Fees are paid annually in ETH to prevent squatting and fund the protocol. Prices are based on scarcity: 5+ character names are just $5/year, 4-character names are $160/year, and 3-character names are $640/year.
  • Set a Primary Name: Once you own brand.eth, you should set it as the "Primary Name" (also known as a reverse record) for your main company wallet. This is a critical step that allows wallets and dapps to display your memorable name instead of your long address, dramatically improving user experience and trust.
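The fee schedule above can be expressed as a small helper for budgeting. This is an illustrative sketch in USD terms (actual fees are denominated in ETH at an oracle rate, and `annualFeeUsd` is our hypothetical name, not an ENS API):

```typescript
// Annual .eth registration fee in USD by label length, mirroring the
// published schedule: $5 for 5+ characters, $160 for 4, $640 for 3.
function annualFeeUsd(label: string): number {
  const length = [...label].length; // count code points, not UTF-16 units
  if (length >= 5) return 5;
  if (length === 4) return 160;
  if (length === 3) return 640;
  throw new Error("labels shorter than 3 characters cannot be registered");
}
```

For example, `brand.eth` renews at $5/year, while a coveted three-letter name like `abc.eth` costs $640/year — scarcity pricing that deters squatting.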

2. Enhance brand.com Inside ENS (No Migration Required)

You don't need to abandon your valuable web2 domain. Thanks to a feature called Gasless DNSSEC, you can link your existing DNS domain to a crypto wallet, effectively upgrading it into a fully functional ENS name.

  • Zero On-chain Cost for Owners: The process allows a brand.com to become resolvable within the ENS ecosystem without requiring the domain owner to submit an on-chain transaction.
  • Mainstream Registrar Support: GoDaddy has already streamlined this with a one-click “Crypto Wallet” record, powered by this ENS feature. Other major registrars that support DNSSEC can also be configured to work with ENS.

Pragmatic advice: Do both. Use brand.eth for your web3-native audience and treasury operations. Simultaneously, bring brand.com into ENS to unify your entire brand footprint and provide a seamless bridge for your existing user base.


Zero-to-One Rollout: A One-Week Plan

Deploying ENS doesn't have to be a multi-quarter project. A focused team can establish a robust presence in about a week.

  • Day 1–2: Name & Policy. Claim brand.eth and link your existing DNS name using the Gasless DNSSEC method. This is also the time to establish an internal policy on canonical spelling, use of emojis, and normalization rules. ENS uses a standard called ENSIP-15 to handle name variations, but it's crucial to be aware of homoglyphs (characters that look alike) to prevent phishing attacks against your brand.

  • Day 3: Primary Names & Wallets. For your company’s treasury, operations, and payment wallets, set the Primary Name (reverse record) so that they resolve to treasury.brand.eth or a similar name. Use this opportunity to populate multi-coin address records (BTC, SOL, etc.) to ensure payments sent to your ENS name are correctly routed, no matter the chain.

  • Day 4: Profile Data. Fill out the standardized text records on your primary ENS name. At a minimum, set email, url, com.twitter, and avatar. An official avatar adds immediate visual verification in supported wallets. For enhanced security, you can also add a public PGP key.

  • Day 5: Subnames. Begin issuing subnames like alice.brand.eth for employees or support.brand.com for departments. Use the NameWrapper to apply security "fuses" that can, for example, prevent the subname from being transferred. Set an expiry date to automatically revoke access when a contract ends or an employee leaves.

  • Day 6: Website / Docs. Decentralize your web presence. Pin your press kit, terms of service, or a status page to a decentralized storage network like IPFS or Arweave and link it to your ENS name via the contenthash record. For universal access, users can resolve this content through public gateways like eth.limo.

  • Day 7: Integrate in Product. Start using ENS in your own application. Use libraries like viem with ensjs to resolve names, normalize user inputs, and show avatars. When looking up addresses, perform a reverse lookup to display the user's Primary Name. Be sure to use a resolver gateway that supports CCIP-Read to ensure your app is future-proof for ENSv2's L2 architecture.
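The Day 5 subname pattern can be modeled as a simplified in-memory registry. This is our sketch of NameWrapper-style semantics — the real fuses are on-chain bitflags, and `SubnameRegistry` is a hypothetical helper, not an ENS contract:

```typescript
// Simplified model of NameWrapper-style subnames: each subname carries
// burned "fuses" (permanently revoked permissions) and an expiry date,
// after which the subname lapses automatically.
interface Subname {
  owner: string;
  fuses: Set<string>;
  expiry: number; // unix seconds
}

class SubnameRegistry {
  private names = new Map<string, Subname>();

  issue(label: string, owner: string, expiry: number, fuses: string[] = []): void {
    this.names.set(label, { owner, fuses: new Set(fuses), expiry });
  }

  // Transfers fail if the name has lapsed or CANNOT_TRANSFER was burned.
  transfer(label: string, newOwner: string, now: number): void {
    const sub = this.names.get(label);
    if (!sub || now >= sub.expiry) throw new Error("subname expired or unknown");
    if (sub.fuses.has("CANNOT_TRANSFER")) throw new Error("transfer fuse burned");
    sub.owner = newOwner;
  }

  isActive(label: string, now: number): boolean {
    const sub = this.names.get(label);
    return !!sub && now < sub.expiry;
  }
}
```

The key property to notice is that revocation is passive: once the expiry passes, access lapses without anyone needing to submit a transaction, which is what makes subnames practical for offboarding employees or expiring contractor access.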


Common Patterns That Pay Off Fast

Once set up, ENS unlocks powerful, practical use cases that deliver immediate value.

  • Safer, Simpler Payments: Instead of copying and pasting a long, error-prone address, put pay.brand.eth on your invoices. By publishing all your multi-coin addresses under one name, you drastically reduce the risk of customers sending funds to the wrong address or chain.
  • Authentic Support & Social Presence: Publish your official social media handles in your ENS text records. Some tools can already verify these records, creating a strong defense against impersonation. A support.brand.eth name can point directly to a dedicated support wallet or secure messaging endpoint.
  • Decentralized Web Presence: Host a tamper-evident status page or critical documentation at brand.eth using the contenthash. Because the link is on-chain, it cannot be taken down by a single provider, offering a higher degree of resilience for essential information.
  • A Programmable Org Chart: Issue employee.brand.eth subnames that grant access to internal tools or token-gated channels. With NameWrapper fuses and expiry dates, you can create a dynamic, programmable, and automatically-revocable identity system for your entire organization.
  • Gas-Light User Experiences: For high-volume use cases like issuing loyalty IDs or tickets as subnames, on-chain transactions are too slow and expensive. Use an offchain resolver with CCIP-Read. This standard allows ENS names to be resolved from L2s or even traditional databases in a trust-minimized way. Industry leaders like Uniswap (uni.eth) and Coinbase (cb.id) already use this pattern to scale their user identity systems.

Security & Governance You Shouldn’t Skip

Treat your primary ENS name like you treat your primary domain name: as a critical piece of company infrastructure.

  • Separate “Owner” from “Manager”: This is a core security principle. The "Owner" role, which has the power to transfer the name, should be secured in a cold storage multisig wallet. The "Manager" role, which can update day-to-day records like IP addresses or avatars, can be delegated to a more accessible hot wallet. This separation of powers drastically reduces the blast radius of a compromised key.
  • Use NameWrapper Protections: When issuing subnames, use the NameWrapper to burn fuses like CANNOT_TRANSFER to lock them to a specific employee or CANNOT_UNWRAP to enforce your governance policies. All permissions are governed by an expiry date you control, providing time-boxed access by default.
  • Monitor Renewals: Don’t lose your .eth name because of a missed payment. Calendar your renewal dates and remember that while .eth names have a 90-day grace period, the policies for subnames are entirely up to you.

Developer Quickstart (TypeScript)

Integrating ENS resolution into your app is simple with modern libraries like viem. This snippet shows how to look up an address from a name, or a name from an address.

    import { createPublicClient, http } from "viem";
    import { mainnet } from "viem/chains";
    import { normalize } from "viem/ens";

    const client = createPublicClient({ chain: mainnet, transport: http() });

    export async function lookup(nameOrAddress: string) {
      if (nameOrAddress.includes(".")) {
        // Name → Address (normalize input per ENSIP-15)
        const name = normalize(nameOrAddress);
        const address = await client.getEnsAddress({
          name,
          gatewayUrls: ["https://ccip.ens.xyz"],
        });
        const avatar = await client.getEnsAvatar({ name });
        return { type: "name", name, address, avatar };
      } else {
        // Address → Primary Name (reverse record)
        const name = await client.getEnsName({
          address: nameOrAddress as `0x${string}`,
          gatewayUrls: ["https://ccip.ens.xyz"],
        });
        return { type: "address", address: nameOrAddress, name };
      }
    }

Two key takeaways from this code:

  • normalize is essential for security. It enforces ENS naming rules and helps prevent common phishing and spoofing attacks from look-alike names.
  • gatewayUrls points to a Universal Resolver that supports CCIP-Read. This makes your integration forward-compatible with the upcoming move to L2 and off-chain data.

For developers building with React, libraries such as wagmi offer higher-level hooks (e.g. useEnsName, useEnsAvatar) that wrap these common flows, making integration even faster.


Brand Protection and Naming Policy

  • Normalization and Usability: Familiarize yourself with ENSIP-15 normalization. Set clear internal guidelines on the use of emojis or non-ASCII characters, and actively screen for "confusables" that could be used to impersonate your brand.
  • Trademark Reality Check: .eth names operate outside of the traditional ICANN framework and its UDRP dispute resolution process. Trademark owners cannot rely on the same legal rails they use for DNS domains. Therefore, defensive registration of key brand terms is a prudent strategy. (This is not legal advice; consult with counsel.)
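To make "confusables" concrete: one common impersonation vector is a label that mixes scripts, such as a Latin name with one Cyrillic look-alike letter swapped in. A minimal mixed-script screen can be written with Unicode property escapes. This is only an illustration with hypothetical helper names — real ENSIP-15 normalization (e.g. normalize from viem/ens) handles far more cases and should be your production check:

```typescript
// Detect labels that mix writing systems — a frequent spoofing tactic.
// Only a few scripts are listed here for brevity; this is NOT a full
// ENSIP-15 implementation.
const SCRIPTS: Array<[string, RegExp]> = [
  ["Latin", /\p{Script=Latin}/u],
  ["Cyrillic", /\p{Script=Cyrillic}/u],
  ["Greek", /\p{Script=Greek}/u],
];

export function scriptsInLabel(label: string): string[] {
  const found = new Set<string>();
  for (const ch of label) {
    for (const [name, re] of SCRIPTS) {
      if (re.test(ch)) found.add(name);
    }
  }
  return [...found];
}

export function looksMixedScript(label: string): boolean {
  // More than one script in a single label is a red flag worth reviewing.
  return scriptsInLabel(label).length > 1;
}
```

A screen like this is useful for flagging registrations or inbound names that resemble your brand, but treat it as one signal among several — legitimate names in non-Latin scripts are common and should not be blocked outright.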

What’s Next: ENSv2 and the Move to L2

The ENS protocol is not static. The next major evolution, ENSv2, is underway.

  • Protocol Moving to L2: To reduce gas costs and increase speed, the core ENS registry will be migrated to a Layer 2 network. Name resolution will be bridged back to L1 and other chains via CCIP-Read and cryptographic proof systems. This will make registering and managing names significantly cheaper, unlocking richer application patterns.
  • Seamless Migration Plan: The ENS DAO has published a detailed migration plan to ensure existing names can be moved to the new system with minimal friction. If you operate at scale, this is a key development to follow.

Implementation Checklist

Use this checklist to guide your team’s implementation.

  • Claim brand.eth; link brand.com via Gasless DNSSEC.
  • Park ownership of the name in a secure multisig; delegate manager roles.
  • Set a Primary Name on all organizational wallets.
  • Publish multi-coin addresses for payments.
  • Fill out text records (email, url, social, avatar).
  • Issue subnames for teams, employees, and services using fuses and expiry.
  • Host a minimal decentralized site (e.g., status page) and set the contenthash.
  • Integrate ENS resolution (viem/ensjs) into your product; normalize all inputs.
  • Calendar all .eth name renewal dates and monitor expiry.

ENS is ready for business. It has moved beyond a simple naming system to become a critical piece of infrastructure for any company building for the next generation of the internet. By establishing a programmable and persistent identity, you lower risk, create smoother user experiences, and ensure your brand is ready for a decentralized future.

ETHDenver 2025: Key Web3 Trends and Insights from the Festival

· 24 min read

ETHDenver 2025, branded the “Year of The Regenerates,” solidified its status as one of the world’s largest Web3 gatherings. Spanning BUIDLWeek (Feb 23–26), the Main Event (Feb 27–Mar 2), and a post-conference Mountain Retreat, the festival drew an expected 25,000+ participants. Builders, developers, investors, and creatives from 125+ countries converged in Denver to celebrate Ethereum’s ethos of decentralization and innovation. True to its community roots, ETHDenver remained free to attend, community-funded, and overflowing with content – from hackathons and workshops to panels, pitch events, and parties. The event’s lore of “Regenerates” defending decentralization set a tone that emphasized public goods and collaborative building, even amid a competitive tech landscape. The result was a week of high-energy builder activity and forward-looking discussions, offering a snapshot of Web3’s emerging trends and actionable insights for industry professionals.

ETHDenver 2025

No single narrative dominated ETHDenver 2025 – instead, a broad spectrum of Web3 trends took center stage. Unlike last year (when restaking via EigenLayer stole the show), 2025’s agenda offered a bit of everything: from decentralized physical infrastructure networks (DePIN) to AI agents, from regulatory compliance to real-world asset tokenization (RWA), plus privacy, interoperability, and more. In fact, ETHDenver’s founder John Paller addressed concerns about multi-chain content by noting “95%+ of our sponsors and 90% of content is ETH/EVM-aligned” – yet the presence of non-Ethereum ecosystems underscored interoperability as a key theme. Major speakers reflected these trend areas: for example, zk-rollup and Layer-2 scaling was highlighted by Alex Gluchowski (CEO of Matter Labs/zkSync), while multi-chain innovation came from Adeniyi Abiodun of Mysten Labs (Sui) and Albert Chon of Injective.

The convergence of AI and Web3 emerged as a strong undercurrent. Numerous talks and side events focused on decentralized AI agents and “DeFi+AI” crossovers. A dedicated AI Agent Day showcased on-chain AI demos, and a collective of 14 teams (including Coinbase’s developer kit and NEAR’s AI unit) even announced the Open Agents Alliance (OAA) – an initiative to provide permissionless, free AI access by pooling Web3 infrastructure. This indicates growing interest in autonomous agents and AI-driven dApps as a frontier for builders. Hand-in-hand with AI, DePIN (decentralized physical infrastructure) was another buzzword: multiple panels (e.g. Day of DePIN, DePIN Summit) explored projects bridging blockchain with physical networks (from telecom to mobility).

Cuckoo AI Network made waves at ETHDenver 2025, showcasing its innovative decentralized AI model-serving marketplace designed for creators and developers. With a compelling presence at both the hackathon and community-led side events, Cuckoo AI attracted significant attention from developers intrigued by its ability to monetize GPU/CPU resources and easily integrate on-chain AI APIs. During their dedicated workshop and networking session, Cuckoo AI highlighted how decentralized infrastructure could efficiently democratize access to advanced AI services. This aligns directly with the event's broader trends—particularly the intersection of blockchain with AI, DePIN, and public-goods funding. For investors and developers at ETHDenver, Cuckoo AI emerged as a clear example of how decentralized approaches can power the next generation of AI-driven dApps and infrastructure, positioning itself as an attractive investment opportunity within the Web3 ecosystem.

Privacy, identity, and security remained top-of-mind. Speakers and workshops addressed topics like zero-knowledge proofs (zkSync’s presence), identity management and verifiable credentials (a dedicated Privacy & Security track was in the hackathon), and legal/regulatory issues (an on-chain legal summit was part of the festival tracks). Another notable discussion was the future of fundraising and decentralization of funding: a Main Stage debate between Dragonfly Capital’s Haseeb Qureshi and Matt O’Connor of Legion (an “ICO-like” platform) about ICOs vs. VC funding captivated attendees. This debate highlighted emerging models like community token sales challenging traditional VC routes – an important trend for Web3 startups navigating capital raising. The take-away for professionals is clear: Web3 in 2025 is multidisciplinary – spanning finance, AI, real assets, and culture – and staying informed means looking beyond any one hype cycle to the full spectrum of innovation.

Sponsors and Their Strategic Focus Areas

ETHDenver’s sponsor roster in 2025 reads like a who’s-who of layer-1s, layer-2s, and Web3 infrastructure projects – each leveraging the event to advance strategic goals. Cross-chain and multi-chain protocols made a strong showing. For instance, Polkadot was a top sponsor with a hefty $80k bounty pool, incentivizing builders to create cross-chain DApps and appchains. Similarly, BNB Chain, Flow, Hedera, and Base (Coinbase’s L2) each offered up to $50k for projects integrating with their ecosystems, signaling their push to attract Ethereum developers. Even traditionally separate ecosystems like Solana and Internet Computer joined in with sponsored challenges (e.g. Solana co-hosted a DePIN event, and Internet Computer offered an “Only possible on ICP” bounty). This cross-ecosystem presence drew some community scrutiny, but ETHDenver’s team noted that the vast majority of content remained Ethereum-aligned. The net effect was interoperability being a core theme – sponsors aimed to position their platforms as complementary extensions of the Ethereum universe.

Scaling solutions and infrastructure providers were also front and center. Major Ethereum L2s like Optimism and Arbitrum had large booths and sponsored challenges (Optimism’s bounties up to $40k), reinforcing their focus on onboarding developers to rollups. New entrants like ZkSync and Zircuit (a project showcasing an L2 rollup approach) emphasized zero-knowledge tech and even contributed SDKs (ZkSync promoted its Smart Sign-On SDK for user-friendly login, which hackathon teams eagerly used). Restaking and modular blockchain infrastructure was another sponsor interest – EigenLayer (pioneering restaking) had its own $50k track and even co-hosted an event on “Restaking & DeFAI (Decentralized AI)”, marrying its security model with AI topics. Oracles and interoperability middleware were represented by the likes of Chainlink and Wormhole, each issuing bounties for using their protocols.

Notably, Web3 consumer applications and tooling had sponsor support to improve user experience. Uniswap’s presence – complete with one of the biggest booths – wasn’t just for show: the DeFi giant used the event to announce new wallet features like integrated fiat off-ramps, aligning with its sponsorship focus on DeFi usability. Identity and community-focused platforms like Galxe (Gravity) and Lens Protocol sponsored challenges around on-chain social and credentialing. Even mainstream tech companies signaled interest: PayPal and Google Cloud hosted a stablecoin/payments happy hour to discuss the future of payments in crypto. This blend of sponsors shows that strategic interests ranged from core infrastructure to end-user applications – all converging at ETHDenver to provide resources (APIs, SDKs, grants) to developers. For Web3 professionals, the heavy sponsorship from layer-1s, layer-2s, and even Web2 fintechs highlights where the industry is investing: interoperability, scalability, security, and making crypto useful for the next wave of users.

Hackathon Highlights: Innovative Projects and Winners

At the heart of ETHDenver is its legendary #BUIDLathon – a hackathon that has grown into the world’s largest blockchain hackfest with thousands of developers. In 2025 the hackathon offered a record $1,043,333+ prize pool to spur innovation. Bounties from 60+ sponsors targeted key Web3 domains, carving the competition into tracks such as: DeFi & AI, NFTs & Gaming, Infrastructure & Scalability, Privacy & Security, and DAOs & Public Goods. This track design itself is insightful – for example, pairing DeFi with AI hints at the emergence of AI-driven financial applications, while a dedicated Public Goods track reaffirms community focus on regenerative finance and open-source development. Each track was backed by sponsors offering prizes for best use of their tech (e.g. Polkadot and Uniswap for DeFi, Chainlink for interoperability, Optimism for scaling solutions). The organizers even implemented quadratic voting for judging, allowing the community to help surface top projects, with final winners chosen by expert judges.
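The quadratic voting mechanism used for judging can be sketched in a few lines. In quadratic voting, a voter spending c credits on a project casts √c effective votes, so concentrating all of one's credits on a single project has diminishing returns. This is an illustrative model, not ETHDenver's actual implementation:

```typescript
// Quadratic voting: effective votes = sqrt(credits spent), so two voters
// each spending 4 credits carry as much weight as one voter spending 16.
type Ballot = { project: string; credits: number };

export function tallyQuadratic(ballots: Ballot[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const { project, credits } of ballots) {
    const votes = Math.sqrt(credits);
    totals.set(project, (totals.get(project) ?? 0) + votes);
  }
  return totals;
}
```

For example, two community members each spending 4 credits on project A give it 2 + 2 = 4 votes, the same weight as a single whale spending 16 credits on project B — which is exactly the property that makes the scheme attractive for surfacing broad community support.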

The result was an outpouring of cutting-edge projects, many of which offer a glimpse into Web3’s future. Notable winners included an on-chain multiplayer game “0xCaliber”, a first-person shooter that runs real-time blockchain interactions inside a classic FPS game. 0xCaliber wowed judges by demonstrating true on-chain gaming – players buy in with crypto, “shoot” on-chain bullets, and use cross-chain tricks to collect and cash out loot, all in real time. This kind of project showcases the growing maturity of Web3 gaming (integrating Unity game engines with smart contracts) and the creativity in merging entertainment with crypto economics. Other standout hacks merged AI with Ethereum: teams built “agent” platforms that use smart contracts to coordinate AI services, inspired by the Open Agents Alliance announcement. For example, one hackathon project integrated AI-driven smart contract auditors (auto-generating security test cases for contracts) – aligning with the decentralized AI trend observed at the conference.

Infrastructure and tooling projects were also prominent. Some teams tackled account abstraction and user experience, using sponsor toolkits like zkSync’s Smart Sign-On to create wallet-less login flows for dApps. Others worked on cross-chain bridges and Layer-2 integrations, reflecting ongoing developer interest in interoperability. In the Public Goods & DAO track, a few projects addressed real-world social impact, such as a dApp for decentralized identity and aid to help the homeless (leveraging NFTs and community funds, an idea reminiscent of prior ReFi hacks). Regenerative finance (ReFi) concepts – like funding public goods via novel mechanisms – continued to appear, echoing ETHDenver’s regenerative theme.

While final winners were being celebrated by the end of the main event, the true value was in the pipeline of innovation: over 400 project submissions poured in, many of which will live on beyond the event. ETHDenver’s hackathon has a track record of seeding future startups (indeed, some past BUIDLathon projects have grown into sponsors themselves). For investors and technologists, the hackathon provided a window into bleeding-edge ideas – signaling that the next wave of Web3 startups may emerge in areas like on-chain gaming, AI-infused dApps, cross-chain infrastructure, and solutions targeting social impact. With nearly $1M in bounties disbursed to developers, sponsors effectively put their money where their mouth is to cultivate these innovations.

Networking Events and Investor Interactions

ETHDenver is not just about writing code – it’s equally about making connections. In 2025 the festival supercharged networking with both formal and informal events tailored for startups, investors, and community builders. One marquee event was the Bufficorn Ventures (BV) Startup Rodeo, a high-energy showcase where 20 hand-picked startups demoed to investors in a science-fair style expo. Taking place on March 1st in the main hall, the Startup Rodeo was described as more “speed dating” than pitch contest: founders manned tables to pitch their projects one-on-one as all attending investors roamed the arena. This format ensured even early-stage teams could find meaningful face time with VCs, strategics, or partners. Many startups used this as a launchpad to find customers and funding, leveraging the concentrated presence of Web3 funds at ETHDenver.

On the conference’s final day, the BV BuffiTank Pitchfest took the spotlight on the main stage – a more traditional pitch competition featuring 10 of the “most innovative” early-stage startups from the ETHDenver community. These teams (separate from the hackathon winners) pitched their business models to a panel of top VCs and industry leaders, competing for accolades and potential investment offers. The Pitchfest illustrated ETHDenver’s role as a deal-flow generator: it was explicitly aimed at teams “already organized…looking for investment, customers, and exposure,” especially those connected to the SporkDAO community. The reward for winners wasn’t a simple cash prize but rather the promise of joining Bufficorn Ventures’ portfolio or other accelerator cohorts. In essence, ETHDenver created its own mini “Shark Tank” for Web3, catalyzing investor attention on the community’s best projects.

Beyond these official showcases, the week was packed with investor-founder mixers. According to a curated guide by Belong, notable side events included a “Meet the VCs” Happy Hour hosted by CertiK Ventures on Feb 27, a StarkNet VC & Founders Lounge on March 1, and even casual affairs like a “Pitch & Putt” golf-themed pitch event. These gatherings provided relaxed environments for founders to rub shoulders with venture capitalists, often leading to follow-up meetings after the conference. The presence of many emerging VC firms was also felt on panels – for example, a session on the EtherKnight Stage highlighted new funds like Reflexive Capital, Reforge VC, Topology, Metalayer, and Hash3 and what trends they are most excited about. Early indications suggest these VCs were keen on areas like decentralized social media, AI, and novel Layer-1 infrastructure (each fund carving a niche to differentiate themselves in a competitive VC landscape).

For professionals looking to capitalize on ETHDenver’s networking: the key takeaway is the value of side events and targeted mixers. Deals and partnerships often germinate over coffee or cocktails rather than on stage. ETHDenver 2025’s myriad investor events demonstrate that the Web3 funding community is actively scouting for talent and ideas even in a lean market. Startups that came prepared with polished demos and a clear value proposition (often leveraging the event’s hackathon momentum) found receptive audiences. Meanwhile, investors used these interactions to gauge the pulse of the developer community – what problems are the brightest builders solving this year? In summary, ETHDenver reinforced that networking is as important as BUIDLing: it’s a place where a chance meeting can lead to a seed investment or where an insightful conversation can spark the next big collaboration.

Venture Capital Trends and Investment Opportunities

A subtle but important narrative throughout ETHDenver 2025 was the evolving landscape of Web3 venture capital itself. Despite the broader crypto market’s ups and downs, investors at ETHDenver signaled strong appetite for promising Web3 projects. Blockworks reporters on the ground noted “just how much private capital is still flowing into crypto, undeterred by macro headwinds,” with seed stage valuations often sky-high for the hottest ideas. Indeed, the sheer number of VCs present – from crypto-native funds to traditional tech investors dabbling in Web3 – made it clear that ETHDenver remains a deal-making hub.

Emerging thematic focuses could be discerned from what VCs were discussing and sponsoring. The prevalence of AI x Crypto content (hackathon tracks, panels, etc.) wasn’t only a developer trend; it reflects venture interest in the “DeFi meets AI” nexus. Many investors are eyeing startups that leverage machine learning or autonomous agents on blockchain, as evidenced by venture-sponsored AI hackhouses and summits. Similarly, the heavy focus on DePIN and real-world asset (RWA) tokenization indicates that funds see opportunity in projects that connect blockchain to real economy assets and physical devices. The dedicated RWA Day (Feb 26) – a B2B event on the future of tokenized assets – suggests that venture scouts are actively hunting in that arena for the next Goldfinch or Centrifuge (i.e. platforms bringing real-world finance on-chain).

Another observable trend was a growing experimentation with funding models. The aforementioned debate on ICOs vs VCs wasn’t just conference theatrics; it mirrors a real venture movement towards more community-centric funding. Some VCs at ETHDenver indicated openness to hybrid models (e.g. venture-supported token launches that involve community in early rounds). Additionally, public goods funding and impact investing had a seat at the table. With ETHDenver’s ethos of regeneration, even investors discussed how to support open-source infrastructure and developers long-term, beyond just chasing the next DeFi or NFT boom. Panels like “Funding the Future: Evolving Models for Onchain Startups” explored alternatives such as grants, DAO treasury investments, and quadratic funding to supplement traditional VC money. This points to an industry maturing in how projects are capitalized – a mix of venture capital, ecosystem funds, and community funding working in tandem.

From an opportunity standpoint, Web3 professionals and investors can glean a few actionable insights from ETHDenver’s venture dynamics: (1) Infrastructure is still king – many VCs expressed that picks-and-shovels (L2 scaling, security, dev tools) remain high-value investments as the industry’s backbone. (2) New verticals like AI/blockchain convergence and DePIN are emerging investment frontiers – getting up to speed in these areas or finding startups there could be rewarding. (3) Community-driven projects and public goods might see novel funding – savvy investors are figuring out how to support these sustainably (for instance, investing in protocols that enable decentralized governance or shared ownership). Overall, ETHDenver 2025 showed that while the Web3 venture landscape is competitive, it’s brimming with conviction: capital is available for those building the future of DeFi, NFTs, gaming, and beyond, and even bear-market born ideas can find backing if they target the right trend.

Developer Resources, Toolkits, and Support Systems

ETHDenver has always been builder-focused, and 2025 was no exception – it doubled as an open-source developer conference with a plethora of resources and support for Web3 devs. During BUIDLWeek, attendees had access to live workshops, technical bootcamps, and mini-summits spanning various domains. For example, developers could join a Bleeding Edge Tech Summit to tinker with the latest protocols, or drop into an On-Chain Legal Summit to learn about compliant smart contract development. Major sponsors and blockchain teams ran hands-on sessions: Polkadot’s team hosted hacker houses and workshops on spinning up parachains; EigenLayer led a “restaking bootcamp” to teach devs how to leverage its security layer; Polygon and zkSync gave tutorials on building scalable dApps with zero-knowledge tech. These sessions provided invaluable face-time with core engineers, allowing developers to get help with integration and learn new toolkits first-hand.

Throughout the main event, the venue featured a dedicated #BUIDLHub and Makerspace where builders could code in a collaborative environment and access mentors. ETHDenver’s organizers published a detailed BUIDLer Guide and facilitated an on-site mentorship program (experts from sponsors were available to unblock teams on technical issues). Developer tooling companies were also present en masse – from Alchemy and Infura (for blockchain APIs) to Hardhat and Foundry (for smart contract development). Many unveiled new releases or beta tools at the event. For instance, MetaMask’s team previewed a major wallet update featuring gas abstraction and an improved SDK for dApp developers, aiming to simplify how apps cover gas fees for users. Several projects launched SDKs or open-source libraries: Coinbase’s “Agent Kit” for AI agents and the collaborative Open Agents Alliance toolkit were introduced, and Story.xyz promoted its Story SDK for on-chain intellectual property licensing during their own hackathon event.

Bounties and hacker support further augmented the developer experience. With over 180 bounties offered by 62 sponsors, hackers effectively had a menu of specific challenges to choose from, each coming with documentation, office hours, and sometimes bespoke sandboxes. For example, Optimism’s bounty challenged devs to use the latest Bedrock opcodes (with their engineers on standby to assist), and Uniswap’s challenge provided access to their new API for off-ramp integration. Tools for coordination and learning – like the official ETHDenver mobile app and Discord channels – kept developers informed of schedule changes, side quests, and even job opportunities via the ETHDenver job board.

One notable resource was the emphasis on quadratic funding experiments and on-chain voting. ETHDenver integrated a quadratic voting system for hackathon judging, exposing many developers to the concept. Additionally, the presence of Gitcoin and other public goods groups meant devs could learn about grant funding for their projects after the event. In sum, ETHDenver 2025 equipped developers with cutting-edge tools (SDKs, APIs), expert guidance, and follow-on support to continue their projects. For industry professionals, it’s a reminder that nurturing the developer community – through education, tooling, and funding – is critical. Many of the resources highlighted (like new SDKs, or improved dev environments) are now publicly available, offering teams everywhere a chance to build on the shoulders of what was shared at ETHDenver.
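The quadratic funding experiments mentioned above follow a well-known formula: a project's total funding is proportional to the square of the sum of the square roots of its individual contributions, so many small donors attract more matching than a few large ones. A toy sketch of the matching calculation (not tied to Gitcoin's actual implementation, and ignoring the pro-rata capping a real round applies when the match pool is limited):

```typescript
// Quadratic funding: total = (sum of sqrt(c_i))^2, and the "match" is the
// amount beyond what contributors gave directly.
export function qfMatch(contributions: number[]): number {
  const sumSqrt = contributions.reduce((s, c) => s + Math.sqrt(c), 0);
  const direct = contributions.reduce((s, c) => s + c, 0);
  return sumSqrt * sumSqrt - direct;
}
```

The formula's bias toward breadth is easy to see: four donors giving 1 unit each yield a match of (4 × √1)² − 4 = 12, while a single donor giving 4 units yields (√4)² − 4 = 0 — same direct total, very different matching, which is why the mechanism is popular for public-goods funding.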

Side Events and Community Gatherings Enriching the ETHDenver Experience

What truly sets ETHDenver apart is its festival-like atmosphere – dozens of side events, both official and unofficial, created a rich tapestry of experiences around the main conference. In 2025, beyond the National Western Complex where official content ran, the entire city buzzed with meetups, parties, hackathons, and community gatherings. These side events, often hosted by sponsors or local Web3 groups, significantly contributed to the broader ETHDenver experience.

On the official front, ETHDenver’s own schedule included themed mini-events: the venue had zones like an NFT Art Gallery, a Blockchain Arcade, a DJ Chill Dome, and even a Zen Zone to decompress. The organizers also hosted evening events such as opening and closing parties – e.g., the “Crack’d House” unofficial opening party on Feb 26 by Story Protocol, which blended an artsy performance with hackathon award announcements. But it was the community-led side events that truly proliferated: according to an event guide, over 100 side happenings were tracked on the ETHDenver Luma calendar.

Some examples illustrate the diversity of these gatherings:

  • Technical Summits & Hacker Houses: ElizaOS and EigenLayer ran a 9-day Vault AI Agent Hacker House residency for AI+Web3 enthusiasts. StarkNet’s team hosted a multi-day hacker house culminating in a demo night for projects on their ZK-rollup. These provided focused environments for developers to collaborate on specific tech stacks outside the main hackathon.
  • Networking Mixers & Parties: Every evening offered a slate of choices. Builder Nights Denver on Feb 27, sponsored by MetaMask, Linea, EigenLayer, Wormhole and others, brought together innovators for casual talks over food and drink. 3VO’s Mischief Minded Club Takeover, backed by Belong, was a high-level networking party for community tokenization leaders. For those into pure fun, the BEMO Rave (with Berachain and others) and rAIve the Night (an AI-themed rave) kept the crypto crowd dancing late into the night – blending music, art, and crypto culture.
  • Special Interest Gatherings: Niche communities found their space too. Meme Combat was an event purely for meme enthusiasts to celebrate the role of memes in crypto. House of Ink catered to NFT artists and collectors, turning an immersive art venue (Meow Wolf Denver) into a showcase for digital art. SheFi Summit on Feb 26 brought together women in Web3 for talks and networking, supported by groups like World of Women and Celo – highlighting a commitment to diversity and inclusion.
  • Investor & Content Creator Meetups: We already touched on VC events; additionally, a KOL (Key Opinion Leaders) Gathering on Feb 28 let crypto influencers and content creators discuss engagement strategies, showing the intersection of social media and crypto communities.

Crucially, these side events weren’t just entertainment – they often served as incubators for ideas and relationships in their own right. For instance, the Tokenized Capital Summit 2025 delved into the future of capital markets on-chain, likely sparking collaborations between fintech entrepreneurs and blockchain developers in attendance. The On-Chain Gaming Hacker House provided a space for game developers to share best practices, which may lead to cross-pollination among blockchain gaming projects.

For professionals attending large conferences, ETHDenver’s model underscores that value is found off the main stage as much as on it. The breadth of unofficial programming allowed attendees to tailor their experience – whether one’s goal was to meet investors, learn a new skill, find a co-founder, or just unwind and build camaraderie, there was an event for that. Many veterans advise newcomers: “Don’t just attend the talks – go to the meetups and say hi.” In a space as community-driven as Web3, these human connections often translate into DAO collaborations, investment deals, or at the very least, lasting friendships that span continents. ETHDenver 2025’s vibrant side scene amplified the core conference, turning one week in Denver into a multi-dimensional festival of innovation.

Key Takeaways and Actionable Insights

ETHDenver 2025 demonstrated a Web3 industry in full bloom of innovation and collaboration. For professionals in the space, several clear takeaways and action items emerge from this deep dive:

  • Diversification of Trends: The event made it evident that Web3 is no longer monolithic. Emerging domains like AI integration, DePIN, and RWA tokenization are as prominent as DeFi and NFTs. Actionable insight: Stay informed and adaptable. Leaders should allocate R&D or investment into these rising verticals (e.g. exploring how AI could enhance their dApp, or how real-world assets might be integrated into DeFi platforms) to ride the next wave of growth.
  • Cross-Chain is the Future: With major non-Ethereum protocols actively participating, the walls between ecosystems are lowering. Interoperability and multi-chain user experiences garnered huge attention, from MetaMask adding Bitcoin/Solana support to Polkadot and Cosmos-based chains courting Ethereum developers. Actionable insight: Design for a multi-chain world. Projects should consider integrations or bridges that tap into liquidity and users on other chains, and professionals may seek partnerships across communities rather than staying siloed.
  • Community & Public Goods Matter: The “Year of the Regenerates” theme wasn’t just rhetoric – it permeated the content via public goods funding discussions, quadratic voting for hacks, and events like SheFi Summit. Ethical, sustainable development and community ownership are key values in the Ethereum ethos. Actionable insight: Incorporate regenerative principles. Whether through supporting open-source initiatives, using fair launch mechanisms, or aligning business models with community growth, Web3 companies can gain goodwill and longevity by not being purely extractive.
  • Investor Sentiment – Cautious but Bold: Despite bear market murmurs, ETHDenver showed that VCs are actively scouting and willing to bet big on Web3’s next chapters. However, they are also rethinking how to invest (e.g. more strategic, perhaps more oversight on product-market fit, and openness to community funding). Actionable insight: If you’re a startup, focus on fundamentals and storytelling. The projects that stood out had clear use cases and often working prototypes (some built in a weekend!). If you’re an investor, the conference affirmed that infrastructure (L2s, security, dev tools) remains high-priority, but differentiating via theses in AI, gaming, or social can position a fund at the forefront.
  • Developer Experience is Improving: ETHDenver highlighted many new toolkits, SDKs, and frameworks lowering the barrier for Web3 development – from account abstraction tools to on-chain AI libraries. Actionable insight: Leverage these resources. Teams should experiment with the latest dev tools unveiled (e.g. try out that zkSync Smart SSO for easier logins, or use the Open Agents Alliance resources for an AI project) to accelerate their development and stay ahead of the competition. Moreover, companies should continue engaging with hackathons and open developer forums as a way to source talent and ideas; ETHDenver’s success in turning hackers into founders is proof of that model.
  • The Power of Side Events: Lastly, the explosion of side events taught an important lesson in networking – opportunities often appear in casual settings. A chance encounter at a happy hour or a shared interest at a small meetup can create career-defining connections. Actionable insight: For those attending industry conferences, plan beyond the official agenda. Identify side events aligned with your goals (whether it’s meeting investors, learning a niche skill, or recruiting talent) and be proactive in engaging. As seen in Denver, those who immersed themselves fully in the week’s ecosystem walked away with not just knowledge, but new partners, hires, and friends.

In conclusion, ETHDenver 2025 was a microcosm of the Web3 industry’s momentum – a blend of cutting-edge tech discourse, passionate community energy, strategic investment moves, and a culture that mixes serious innovation with fun. Professionals should view the trends and insights from the event as a roadmap for where Web3 is headed. The actionable next step is to take these learnings – whether it’s a newfound focus on AI, a connection made with an L2 team, or inspiration from a hackathon project – and translate them into strategy. In the spirit of ETHDenver’s favorite motto, it’s time to #BUIDL on these insights and help shape the decentralized future that so many in Denver came together to envision.

The Wallet Revolution: Navigating the Three Paths of Account Abstraction

· 6 min read
Dora Noda
Software Engineer

For years, the crypto world has been hampered by a critical usability problem: the wallet. Traditional wallets, known as Externally Owned Accounts (EOAs), are unforgiving. A single lost seed phrase means your funds are gone forever. Every action requires a signature, and gas fees must be paid in the chain's native token. This clunky, high-stakes experience is a major barrier to mainstream adoption.

Enter Account Abstraction (AA), a paradigm shift set to redefine how we interact with the blockchain. At its core, AA transforms a user's account into a programmable smart contract, unlocking features like social recovery, one-click transactions, and flexible gas payments.

The journey toward this smarter future is unfolding along three distinct paths: the battle-tested ERC-4337, the efficient Native AA, and the highly anticipated EIP-7702. Let's break down what each approach means for developers and users.


💡 Path 1: The Pioneer — ERC-4337

ERC-4337 was the breakthrough that brought account abstraction to Ethereum and EVM chains without changing the core protocol. Think of it as adding a smart layer on top of the existing system.

It introduces a new transaction flow involving:

  • UserOperations: A new object that represents a user's intent (e.g., "swap 100 USDC for ETH").
  • Bundlers: Off-chain actors that pick up UserOperations, bundle them together, and submit them to the network.
  • EntryPoint: A global smart contract that validates and executes the bundled operations.
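The flow above can be sketched as a data shape. The field names below follow the ERC-4337 (v0.6) UserOperation struct as defined in the EIP; the addresses, calldata, and gas values are illustrative placeholders, not a real signed operation:

```typescript
// ERC-4337 (v0.6) UserOperation, per the EIP's struct definition.
interface UserOperation {
  sender: string;               // the smart account contract address
  nonce: bigint;                // anti-replay value, tracked by the EntryPoint
  initCode: string;             // factory + calldata to deploy the account ("0x" if already deployed)
  callData: string;             // the action to execute, e.g. an encoded swap
  callGasLimit: bigint;         // gas for the execution phase
  verificationGasLimit: bigint; // gas for signature/paymaster validation
  preVerificationGas: bigint;   // compensates the Bundler for calldata overhead
  maxFeePerGas: bigint;
  maxPriorityFeePerGas: bigint;
  paymasterAndData: string;     // "0x" unless a Paymaster sponsors gas
  signature: string;            // validated by the account contract, not the protocol
}

// A hypothetical "swap 100 USDC for ETH" intent with gas sponsored by a Paymaster.
const op: UserOperation = {
  sender: "0x" + "11".repeat(20),
  nonce: 0n,
  initCode: "0x",
  callData: "0xdeadbeef", // placeholder for the encoded swap call
  callGasLimit: 200_000n,
  verificationGasLimit: 150_000n,
  preVerificationGas: 50_000n,
  maxFeePerGas: 30_000_000_000n,
  maxPriorityFeePerGas: 1_000_000_000n,
  paymasterAndData: "0x" + "22".repeat(20), // placeholder Paymaster address
  signature: "0x",
};

// A Bundler aggregates many such ops into a single EntryPoint.handleOps() call;
// the gas it must budget per op is roughly the sum of the three limits:
const totalGas = op.callGasLimit + op.verificationGasLimit + op.preVerificationGas;
console.log(totalGas); // 400000n
```

The three separate gas fields are one reason ERC-4337 carries overhead: validation and execution are metered as distinct phases inside the EntryPoint.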

The Good:

  • Universal Compatibility: It can be deployed on any EVM chain.
  • Flexibility: Enables rich features like session keys for gaming, multi-signature security, and gas sponsorship via Paymasters.

The Trade-off:

  • Complexity & Cost: It introduces significant infrastructure overhead (running Bundlers) and has the highest gas costs of the three approaches, as every operation goes through the extra EntryPoint logic. Because of this, its adoption has flourished primarily on gas-friendly L2s like Base and Polygon.

ERC-4337 walked so that other AA solutions could run. It proved the demand and laid the groundwork for a more intuitive Web3 experience.


🚀 Path 2: The Integrated Ideal — Native Account Abstraction

If ERC-4337 is an add-on, Native AA is building smart features directly into the blockchain's foundation. Chains like zkSync Era and Starknet were designed from the ground up with AA as a core principle. On these networks, every account is a smart contract.

The Good:

  • Efficiency: By integrating AA logic into the protocol, it strips away the extra layers, leading to significantly lower gas costs compared to ERC-4337.
  • Simplicity for Devs: Developers don't need to manage Bundlers or a separate mempool. The transaction flow feels much more like a standard one.

The Trade-off:

  • Ecosystem Fragmentation: Native AA is chain-specific. An account on zkSync is different from an account on Starknet, and neither is native to Ethereum mainnet. This creates a fragmented experience for users and developers working across multiple chains.

Native AA shows us the "endgame" for efficiency, but its adoption is tied to the growth of its host ecosystems.


🌉 Path 3: The Pragmatic Bridge — EIP-7702

Set to be included in Ethereum's 2025 "Pectra" upgrade, EIP-7702 is a game-changer designed to bring AA features to the masses of existing EOA users. It takes a hybrid approach: it allows an EOA to temporarily delegate its authority to a smart contract for a single transaction.

Think of it as giving your EOA temporary superpowers. You don't need to migrate your funds or change your address. Your wallet can simply add an authorization to a transaction, allowing it to perform batched operations (e.g., approve + swap in one click) or have its gas sponsored.
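Under the hood, EIP-7702 introduces a new transaction type (0x04) that carries a list of signed authorization tuples; processing one sets the EOA's code to a "delegation designator" (the bytes 0xef0100 followed by the delegate contract's address), so calls to the EOA execute the delegate's code. The TypeScript shapes below follow the EIP's field names, but the RLP encoding and signing are elided and the addresses are hypothetical:

```typescript
// Sketch of EIP-7702 data shapes (illustrative; real signing/RLP omitted).
const SET_CODE_TX_TYPE = 0x04; // the transaction type introduced by EIP-7702

interface Authorization {
  chainId: bigint;  // 0n means "valid on any chain"
  address: string;  // the smart contract the EOA delegates to
  nonce: bigint;    // the EOA's nonce, preventing replay
  // secp256k1 signature by the EOA's key over keccak256(0x05 || rlp([chainId, address, nonce]))
  yParity: number;
  r: string;
  s: string;
}

// After processing, the EOA's account code becomes: 0xef0100 || delegate address.
function delegationDesignator(auth: Authorization): string {
  return "0xef0100" + auth.address.slice(2).toLowerCase();
}

const auth: Authorization = {
  chainId: 1n,
  address: "0xAbCd" + "00".repeat(18), // hypothetical smart-account implementation
  nonce: 7n,
  yParity: 0,
  r: "0x" + "00".repeat(32), // placeholder signature components
  s: "0x" + "00".repeat(32),
};

console.log(delegationDesignator(auth)); // "0xef0100abcd" + 36 zero bytes
```

Because the delegation is just a pointer stored in the account's code, the EOA keeps its address and history; revoking or changing the delegate is another authorization, not a migration.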

The Good:

  • Backward Compatibility: It works with the billions of dollars secured by existing EOAs. No migration needed.
  • Low Complexity: It uses the standard transaction pool, eliminating the need for Bundlers and drastically simplifying infrastructure.
  • Mass Adoption Catalyst: By making smart features accessible to every Ethereum user overnight, it could rapidly accelerate the adoption of better UX patterns.

The Trade-off:

  • Not "Full" AA: EIP-7702 doesn't solve key management for the EOA itself. If you lose your private key, you're still out of luck. It's more about enhancing transaction capabilities than overhauling account security.

Head-to-Head: A Clear Comparison

| Feature | ERC-4337 (The Pioneer) | Native AA (The Ideal) | EIP-7702 (The Bridge) |
| --- | --- | --- | --- |
| Core Idea | External smart contract system via Bundlers | Protocol-level smart accounts | EOA temporarily delegates to a smart contract |
| Gas Cost | Highest (due to EntryPoint overhead) | Low (protocol-optimized) | Moderate (small overhead on one transaction for batching) |
| Infrastructure | High (requires Bundlers, Paymasters) | Low (handled by the chain's validators) | Minimal (uses existing transaction infrastructure) |
| Key Use Case | Flexible AA on any EVM chain, especially L2s | Highly efficient AA on purpose-built L2s | Upgrading all existing EOAs with smart features |
| Best For... | Gaming wallets, dApps needing gasless onboarding now | Projects building exclusively on chains like zkSync/Starknet | Bringing batching & gas sponsorship to mainstream users |

The Future is Convergent and User-Centric

These three paths aren't mutually exclusive; they are converging toward a future where the wallet is no longer a point of friction.

  1. Social Recovery Becomes Standard 🛡️: The era of "lost keys, lost funds" is ending. AA enables guardian-based recovery, making self-custody as safe and forgiving as a traditional bank account.
  2. Gaming UX Reimagined 🎮: Session keys will allow for seamless gameplay without constant "approve transaction" pop-ups, finally making Web3 gaming feel like Web2 gaming.
  3. Wallets as Programmable Platforms: Wallets will become modular. Users might add a "DeFi module" for automated yield farming or a "security module" that requires 2FA for large transfers.
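To make the guardian idea in point 1 concrete, here is a toy M-of-N recovery model in TypeScript. The class, names, and flow are hypothetical simplifications of what a real smart account would enforce on-chain (with signatures, timelocks, and so on), kept minimal to show the core threshold logic:

```typescript
// Toy model of guardian-based social recovery: `threshold` distinct guardian
// approvals out of N rotate the account's signing key.
class RecoverableAccount {
  owner: string;
  private guardians: Set<string>;
  private threshold: number;
  // proposed new owner -> set of guardians who approved it
  private approvals = new Map<string, Set<string>>();

  constructor(owner: string, guardians: string[], threshold: number) {
    this.owner = owner;
    this.guardians = new Set(guardians);
    this.threshold = threshold;
  }

  // A guardian endorses rotating the key to `newOwner`; once enough distinct
  // guardians agree, the rotation executes. Returns true when it fires.
  approveRecovery(guardian: string, newOwner: string): boolean {
    if (!this.guardians.has(guardian)) throw new Error("not a guardian");
    const set = this.approvals.get(newOwner) ?? new Set<string>();
    set.add(guardian); // a Set ignores duplicate votes from the same guardian
    this.approvals.set(newOwner, set);
    if (set.size >= this.threshold) {
      this.owner = newOwner; // the lost key no longer matters
      this.approvals.clear();
      return true;
    }
    return false;
  }
}

// 2-of-3 recovery: two guardians suffice, and a repeat vote does not double-count.
const acct = new RecoverableAccount("0xOldKey", ["alice", "bob", "carol"], 2);
acct.approveRecovery("alice", "0xNewKey");
acct.approveRecovery("alice", "0xNewKey"); // duplicate, still one approval
console.log(acct.owner);                   // "0xOldKey"
acct.approveRecovery("bob", "0xNewKey");
console.log(acct.owner);                   // "0xNewKey"
```

An on-chain version adds exactly the properties this sketch omits: guardian votes arrive as signed transactions, and a delay window lets the true owner veto a malicious recovery.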

For developers and infrastructure providers like Blockeden.xyz, this evolution is incredibly exciting. The complexity of Bundlers, Paymasters, and various AA standards creates a massive opportunity to provide robust, reliable, and abstracted infrastructure. The goal is a unified experience where a developer can easily integrate AA features, and the wallet intelligently uses ERC-4337, Native AA, or EIP-7702 under the hood, depending on what the chain supports.

The wallet is finally getting the upgrade it deserves. The transition from static EOAs to dynamic, programmable smart accounts is not just an improvement—it's the revolution that will make Web3 accessible and safe for the next billion users.