Covenant-72B: First Open Permissionless Distributed LLM Pre-Training at Scale

Research · 1 source · Mar 16

Summary

  • Covenant-72B is the largest globally distributed LLM training run by both compute and model scale
  • First model trained with fully open, permissionless participation via a live blockchain protocol
  • 72B-parameter model trained on 1.1 trillion tokens matches centralized model performance
  • SparseLoCo optimizer lets peers join and leave training dynamically without whitelisting

Details

1. Research

Covenant-72B achieves competitive performance with centralized models at equivalent compute

The model was pre-trained on approximately 1.1 trillion tokens and evaluated against fully centralized models trained on similar or greater compute budgets. Matching centralized performance under distributed, permissionless conditions is the core empirical claim of the paper.

2. New Tech

SparseLoCo optimizer enables dynamic peer participation without training instability

SparseLoCo is a communication-efficient optimizer designed for the high network latency and variable participation inherent in internet-scale distributed training. Peers can join or leave mid-training without destabilizing the run, a fundamental departure from standard distributed training, which assumes a fixed set of reliable participants.
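A minimal sketch of the general pattern behind communication-efficient local-update optimizers in this family: each peer runs several local steps, then communicates only a sparsified pseudo-gradient, carrying the discarded remainder forward via error feedback. The peer names, dimensions, top-k fraction, and toy loss below are illustrative assumptions, not the paper's exact SparseLoCo algorithm.

```python
import numpy as np

def top_k_sparsify(vec, k):
    """Keep only the k largest-magnitude entries; return sparse vec and residual."""
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    sparse = np.zeros_like(vec)
    sparse[idx] = vec[idx]
    return sparse, vec - sparse  # residual feeds the error-feedback buffer

def local_round(params, data_batches, lr=0.01):
    """Several local SGD steps on a peer's own data (toy elementwise quadratic loss)."""
    w = params.copy()
    for x, y in data_batches:
        grad = 2 * x * (x * w - y)  # d/dw of (x*w - y)^2, elementwise
        w -= lr * grad
    return w

# --- one communication round with a variable peer set ---
rng = np.random.default_rng(0)
dim, k = 1000, 50                    # communicate only 5% of coordinates
global_w = rng.normal(size=dim)
error_fb = {}                        # per-peer error-feedback buffers

active_peers = ["peer_a", "peer_b"]  # the peer set may differ every round
deltas = []
for peer in active_peers:
    batches = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(4)]
    local_w = local_round(global_w, batches)
    delta = (local_w - global_w) + error_fb.get(peer, 0.0)  # pseudo-gradient + carried error
    sparse_delta, error_fb[peer] = top_k_sparsify(delta, k)
    deltas.append(sparse_delta)

# The aggregator averages whatever sparse updates arrived this round,
# so a peer that drops out simply contributes nothing.
global_w += np.mean(deltas, axis=0)
```

Because only the top-k coordinates are transmitted and absent peers are simply averaged out, this shape of update rule tolerates both low bandwidth and churn in the participant set.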

3. Infrastructure

Blockchain protocol enforces trustless participation rules across open internet peers

Rather than relying on organizational trust or access controls, a live blockchain protocol governs participation rules. This removes the need for a central authority to vet or whitelist contributors, making the system permissionless in a cryptographically enforced sense rather than just operationally open.
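As a purely hypothetical sketch of what "permissionless in a cryptographically enforced sense" can mean in general (the class, constants, and admission rule below are illustrative assumptions, not the protocol described in the paper): admission is a deterministic function of on-chain state, so no operator ever decides who may join.

```python
import hashlib

MIN_STAKE = 10  # hypothetical protocol rule, not a number from the paper

class ToyChain:
    """Stand-in for on-chain state: which public keys have staked what."""
    def __init__(self):
        self.stakes = {}

    def stake(self, pubkey: str, amount: int):
        # Anyone can call this; there is no approval step.
        self.stakes[pubkey] = self.stakes.get(pubkey, 0) + amount

    def may_participate(self, pubkey: str) -> bool:
        # Admission is a pure rule over chain state:
        # no whitelist, no human judgment call.
        return self.stakes.get(pubkey, 0) >= MIN_STAKE

chain = ToyChain()
alice = hashlib.sha256(b"alice-key").hexdigest()  # toy public key
chain.stake(alice, 12)
bob = hashlib.sha256(b"bob-key").hexdigest()
chain.stake(bob, 3)
# alice meets the protocol rule and is admitted; bob is below the
# stake minimum and is excluded, with no central authority involved.
```

The point of the sketch is the shape of the rule, not the rule itself: replacing an access-control list with a verifiable predicate over public state is what makes participation cryptographically rather than operationally open.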

4. Stat

72B parameters trained across globally distributed, non-whitelisted internet nodes

This is described as the largest collaborative globally distributed pre-training run by both compute and model scale. Previous distributed efforts topped out at far smaller scales or required controlled participant sets, making this a significant jump in demonstrated feasibility.

5. Context

Prior distributed training was either small-scale or restricted to approved participants

Projects like BLOOM and earlier collaborative training efforts used whitelisted institutional partners or operated at parameter counts well below frontier scale. The combination of open participation and frontier scale has not previously been demonstrated.

6. Insight

Permissionless distributed training challenges resource concentration driving AI power asymmetry

The authors explicitly frame the work as a challenge to the political economy of AI development, where only well-capitalized labs can train frontier models. If this approach scales further, it could reduce the capital and infrastructure barriers that currently make foundation model development exclusive to a small number of organizations.

7. Market Impact

Decentralized foundation model training path could pressure centralized lab dominance

If permissionless distributed training continues to close the performance gap with centralized approaches, it creates a credible alternative development pathway with implications for who controls frontier AI capabilities and whether regulatory leverage over AI training infrastructure remains concentrated.

Research = empirical findings, New Tech = novel technical method, Infrastructure = system architecture, Stat = quantitative data point, Context = historical background, Insight = analytical interpretation, Market Impact = competitive and structural implications

What This Means

Covenant-72B demonstrates for the first time that a frontier-scale language model can be trained through fully open, permissionless participation, meaning anyone on the internet could contribute compute without pre-approval. This directly challenges the assumption that training large models requires the centralized infrastructure and capital that only a handful of organizations can assemble. If the approach proves reproducible and continues to scale, it could fundamentally redistribute who has the ability to develop powerful AI systems, with significant implications for AI governance, competitive dynamics, and the concentration of AI capabilities. This is an arXiv preprint, and the results await independent verification.

Sentiment

Broad excitement in decentralized AI and crypto communities; hailed as a milestone for Bittensor

@joellidin Joel Lidin · Lead author, Covenant-72B arXiv paper / Researcher @CovenantAI
Impressed

Covenant-72B is the largest model ever pre-trained in a fully permissionless setting. And it holds up against centralized 70B models.

@erfan_mhi Erfan Miahi · Post-training researcher @CovenantAI
Excited

We just released the model + technical report for Covenant-72B. The largest LLM ever pre-trained on a fully decentralized infrastructure... This is a big step toward making decentralized pre-training actually practical.

@opentensor OpenTensor Foundation · Bittensor Foundation
Supportive

Covenant-72B is the largest decentralized LLM pre-training run in history: 72B parameters, trained by Bittensor miners on subnet 3. With proper coordination + incentive alignment, we can compete with frontier AI labs.

@tplr_ai templar · Templar AI, Bittensor Subnet 3
Excited

We just completed the largest decentralised LLM pre-training run in history: Covenant-72B. Permissionless, on Bittensor subnet 3.

Split

Overwhelmingly positive (~95/5) among decentralized AI practitioners; limited broader discussion from mainstream AI researchers.
