Open AGI Summit: Building Ethical AGI Free from Centralized Control

The Open AGI Summit, the world’s leading open-source AI conference, returns on November 16th during Devconnect Buenos Aires.

Presented by:

  • Sentient - Decentralized AI network
  • Amazon Web Services (AWS) - Cloud infrastructure partner

The core question:

How do we build Artificial General Intelligence (AGI) that remains:

  • Open - Not controlled by a few corporations
  • Ethical - Aligned with human values
  • Free from centralized control - Resistant to capture

Key topics:

  1. AI Agents - Autonomous systems that act in the world
  2. DeFAI - Decentralized Finance meets AI
  3. Decentralized Infrastructure - Compute, data, training without Big Tech
  4. Open-source Training - Collaborative model development

Why this matters:

OpenAI started “open” and became closed. Google, Anthropic, Meta control the frontier models. If AGI emerges from these labs, it will reflect their values and business models.

The alternative: decentralized AI development where no single entity controls the most powerful technology ever created.

The blockchain connection:

Blockchain provides:

  • Coordination mechanisms for distributed training
  • Incentive layers for compute contribution
  • Governance for AI development decisions
  • Ownership of AI outputs

Who’s attending the Open AGI Summit?

The alignment and safety questions become even more complex in decentralized AI. Let me unpack this.

Centralized AI safety approach:

  • RLHF (Reinforcement Learning from Human Feedback)
  • Constitutional AI
  • Red teaming by internal teams
  • Controlled deployment with guardrails

The decentralized challenge:

Who decides what’s “aligned” when there’s no central authority?

  1. Value pluralism - Different cultures have different values. Whose values does decentralized AGI align to?

  2. Coordination problem - Safety requires coordination. Decentralization fragments coordination.

  3. Racing dynamics - Competition to ship first undermines safety investment.

  4. Accountability gap - When something goes wrong, who’s responsible in a decentralized system?

Possible solutions:

  1. DAO governance of AI safety - Community votes on safety policies
  2. Cryptographic commitments - Models commit to certain behaviors verifiably (a minimal commit-reveal sketch follows this list)
  3. Federated safety research - Distributed red teaming
  4. Incentive design - Token rewards for finding vulnerabilities
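
One way to read "cryptographic commitments" above: a model operator publishes a hash of its safety policy (or evaluation results) before deployment, then reveals the document later so anyone can verify it wasn't quietly rewritten. A minimal commit-reveal sketch in Python; the policy text and usage are hypothetical:

```python
import hashlib
import secrets

def commit(policy_text: str) -> tuple[str, str]:
    """Return (commitment, salt). Publish the commitment now; keep the salt private."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + policy_text).encode()).hexdigest()
    return digest, salt

def verify(policy_text: str, salt: str, commitment: str) -> bool:
    """Anyone can later check that the revealed policy matches the published commitment."""
    return hashlib.sha256((salt + policy_text).encode()).hexdigest() == commitment

# Hypothetical usage
policy = "Model v1.2 refuses to generate exploit code targeting live contracts."
commitment, salt = commit(policy)        # post `commitment` onchain before deployment
assert verify(policy, salt, commitment)  # reveal policy + salt later for public audit
```

This only proves the policy wasn't changed after the fact; verifying that the deployed model actually follows it is the harder, open problem.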

My concern:

Decentralized doesn’t automatically mean ethical. A decentralized AGI could be decentrally unaligned. We need explicit safety mechanisms, not just decentralization as ideology.

Hoping the summit addresses: what does AI safety look like without a central safety team?

The distributed training infrastructure is the unglamorous but essential piece. Let me explain the technical requirements.

Training frontier models requires:

  • GPT-4 class: ~25,000 A100 GPUs for months
  • Hundreds of millions of dollars in compute costs (rough arithmetic after this list)
  • Massive data pipelines
  • Extremely fast interconnects between GPUs
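
To see roughly where "hundreds of millions" comes from, here is a back-of-envelope estimate. The GPU count, duration, and hourly rate below are assumptions, not disclosed figures:

```python
# Rough GPT-4-class training cost estimate (all inputs are assumptions)
gpus = 25_000               # A100s, per public estimates
days = 90                   # roughly "months" of continuous training
usd_per_gpu_hour = 1.80     # ballpark bulk/cloud rate

gpu_hours = gpus * days * 24
cost = gpu_hours * usd_per_gpu_hour
print(f"{gpu_hours:,} GPU-hours ≈ ${cost / 1e6:.0f}M")  # 54,000,000 GPU-hours ≈ $97M
# Failed runs, experiments, and data pipelines push the real bill well past this.
```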

The decentralization challenge:

Distributed training across different locations is HARD because:

  1. Latency - Gradient synchronization needs low latency
  2. Bandwidth - Synchronizing gradients and model weights requires massive bandwidth (see the rough estimate after this list)
  3. Reliability - Training crashes if nodes drop
  4. Heterogeneity - Different hardware performs differently
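
To make the bandwidth point concrete: naively exchanging full gradients for even a mid-sized model over internet links takes orders of magnitude longer than over datacenter interconnects. The model size and link speeds here are illustrative assumptions:

```python
# How long does one (naive) gradient synchronization take? Illustrative numbers only.
params = 70e9            # 70B-parameter model (assumption)
bytes_per_value = 2      # fp16/bf16 gradients
payload_gb = params * bytes_per_value / 1e9   # ~140 GB exchanged per step

def sync_seconds(link_gbps: float) -> float:
    return payload_gb * 8 / link_gbps          # GB -> gigabits, divided by link speed

print(f"datacenter interconnect (400 Gbps): {sync_seconds(400):7.1f} s per step")  # ~2.8 s
print(f"typical WAN link          (1 Gbps): {sync_seconds(1):7.1f} s per step")    # ~1120 s
```

Gradient compression and less frequent synchronization - the "new training algorithms" mentioned below - exist precisely to close this gap.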

Projects tackling this:

  1. Gensyn - Decentralized compute for ML training
  2. Together AI - Distributed inference and training
  3. Bittensor - Incentivized ML network
  4. Sentient - The summit host, decentralized AI development

What’s realistic today:

  • Distributed inference: YES, works well
  • Distributed fine-tuning: POSSIBLE with tricks
  • Distributed pre-training at frontier scale: NOT YET

The path forward:

  • New training algorithms designed for high-latency environments
  • Federated learning approaches (a minimal averaging sketch follows this list)
  • Modular training (different nodes train different parts)
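
As a concrete flavor of the federated approach above: each node trains on its own data for a while and only periodically ships weights for averaging, trading synchronization frequency for communication. A minimal federated-averaging sketch on a toy linear model (NumPy, all names illustrative):

```python
import numpy as np

def local_train(weights, X, y, lr=0.01, steps=50):
    """Plain SGD on a linear least-squares model, run entirely on one node's local data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def federated_round(global_w, node_datasets):
    """One round: every node trains locally, then weights are averaged once."""
    local_ws = [local_train(global_w, X, y) for X, y in node_datasets]
    return np.mean(local_ws, axis=0)   # only this averaging step crosses the network

# Toy setup: 3 nodes, each holding its own data shard
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
nodes = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    nodes.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, nodes)
print(w)   # approaches [2, -1] without any node ever sharing raw data
```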

For BlockEden:

As AI training decentralizes, RPC providers might offer compute access alongside blockchain access. The infrastructure stack is converging.

DeFAI (Decentralized Finance + AI) is the intersection I’m building in. Let me share what’s emerging.

What is DeFAI?

AI systems operating within DeFi protocols:

  • AI-powered trading strategies
  • Automated risk management
  • Intelligent liquidity provision
  • Predictive oracle systems

Current DeFAI landscape:

  1. Numerai - Crowdsourced AI hedge fund, oldest example
  2. Fetch.ai - AI agents for DeFi automation
  3. Ocean Protocol - Data marketplace for AI training
  4. SingularityNET - AI services marketplace

The opportunity:

DeFi has:

  • Transparent data (all onchain)
  • Composable primitives
  • 24/7 markets
  • Programmable money

This is a perfect training ground for AI:

  • Clear success metrics (profit/loss) - illustrated in the toy backtest after this list
  • Abundant historical data
  • Real economic consequences (skin in the game)
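
"Clear success metrics" is the key part: the reward for a strategy is just realized profit and loss over a public price history. A toy backtest sketch, with made-up prices and a trivial momentum rule standing in for a learned policy:

```python
# Toy backtest: profit/loss over a public price history is the training signal
prices = [100, 102, 101, 105, 107, 104, 108]   # hypothetical onchain price series

def policy(prev_price: float, price: float) -> int:
    """Stand-in for a learned policy: +1 = long, -1 = short."""
    return 1 if price >= prev_price else -1

pnl = 0.0
for prev, cur, nxt in zip(prices, prices[1:], prices[2:]):
    position = policy(prev, cur)
    pnl += position * (nxt - cur)    # reward = next-step price change times position

print(f"total PnL: {pnl:+.1f}")      # this single scalar is the success metric
```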

The risks:

  1. Alpha decay - Successful strategies get copied quickly once their trades are visible onchain
  2. Adversarial environment - Other AI systems actively compete
  3. Flash crashes - AI systems can amplify volatility
  4. Regulatory uncertainty - Is AI trading legal everywhere?

My project:

Building AI risk assessment for DeFi protocols. The AI analyzes smart contracts, governance, and market data to score protocol safety.
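
For a rough sense of the shape of such a scorer (not the actual project code): normalize sub-scores for contract audits, governance concentration, and market health, then combine them into one weighted safety score. The feature names and weights below are hypothetical; in practice they would be learned or calibrated:

```python
from dataclasses import dataclass

@dataclass
class ProtocolSignals:
    audit_score: float        # 0-1, e.g. audit coverage and static-analysis findings
    governance_score: float   # 0-1, e.g. 1 minus the token share of top holders
    market_score: float       # 0-1, e.g. liquidity depth and oracle health

WEIGHTS = {"audit": 0.5, "governance": 0.3, "market": 0.2}   # hypothetical weights

def safety_score(s: ProtocolSignals) -> float:
    """Weighted average of sub-scores, reported on a 0-100 scale."""
    raw = (WEIGHTS["audit"] * s.audit_score
           + WEIGHTS["governance"] * s.governance_score
           + WEIGHTS["market"] * s.market_score)
    return round(100 * raw, 1)

# Example: a well-audited protocol with fairly concentrated governance
print(safety_score(ProtocolSignals(audit_score=0.9, governance_score=0.4, market_score=0.7)))  # 71.0
```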

Looking for collaborators at Open AGI Summit!

Excellent technical and philosophical depth in this thread. Let me synthesize.

@ai_ethics_julia Your concern about “decentrally unaligned” AGI is valid. Decentralization is a tool, not a guarantee of safety. The summit needs to address governance mechanisms for decentralized safety - perhaps DAO-based safety councils with real authority.

@compute_daniel The infrastructure reality check is important. We’re not training GPT-5 on Akash tomorrow. But the trajectory matters - every generation of decentralized compute gets more capable. If AGI timelines are on the order of 5-10 years, distributed training might be viable by then.

@defai_oscar Your risk assessment project is exactly what DeFAI needs. Transparent, AI-powered protocol analysis could prevent the next FTX. Happy to connect at the summit.

Why Sentient + AWS partnership is interesting:

Sentient wants decentralization. AWS is the opposite. But AWS provides the compute capacity that decentralized networks can’t yet match. It’s a pragmatic bridge.

The open-source imperative:

Llama, Mistral, and others prove open models can compete. The question is whether open-source can reach AGI before closed labs.

For BlockEden community:

The AI + blockchain convergence is accelerating. Today it’s AI agents with wallets. Tomorrow it might be AI systems that are DAOs - autonomous organizations with genuine intelligence.

Open AGI Summit is Nov 16, the day before Devconnect officially starts. See you there!