More About Subnet 38, known as “Distributed Training”


Subnet 38, known as “Distributed Training” (often abbreviated as DSTRBTD or SN38), is a specialized subnet focused on decentralizing the training of large language models (LLMs). It addresses the high barriers to AI training—such as massive GPU requirements and energy costs typically controlled by centralized giants like OpenAI or Google—by distributing the workload across a global network of participants in a trustless environment.

Key functionalities, based on its design and operations:

Core Mechanism: Miners (participants with compute resources) perform gradient computations on portions of datasets, while validators aggregate and verify these gradients to update a shared model. This enables collaborative, decentralized training without relying on a single centralized cluster.

Incentive System:

  • Bandwidth Score: Measures how efficiently miners share model state data.
  • Gradient Score: Compares miner-reported gradients against validator-calculated ones for accuracy.
  • Steps Score: Tracks the amount of data processed per training step.
  • These scores determine rewards, ensuring high-quality contributions are incentivized through a “Proof of Intelligence” approach.
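As a rough illustration of how the three scores above might feed into rewards, here is a minimal sketch. The weighting, normalization, and function names below are hypothetical assumptions for illustration; the article does not specify SN38's actual formula.

```python
# Hypothetical sketch of combining the three miner scores into reward
# shares. The weights and normalization are illustrative assumptions,
# not SN38's actual on-chain logic.

def combined_score(bandwidth: float, gradient: float, steps: float,
                   weights=(0.3, 0.5, 0.2)) -> float:
    """Weighted sum of the three normalized scores (each in [0, 1])."""
    w_b, w_g, w_s = weights
    return w_b * bandwidth + w_g * gradient + w_s * steps

def reward_shares(scores: dict) -> dict:
    """Normalize combined scores so miner reward shares sum to 1."""
    total = sum(scores.values())
    return {miner: s / total for miner, s in scores.items()}

miners = {
    "miner_a": combined_score(bandwidth=0.9, gradient=0.8, steps=0.7),
    "miner_b": combined_score(bandwidth=0.5, gradient=0.6, steps=0.9),
}
shares = reward_shares(miners)
```

The key property is that rewards are relative: a miner's payout depends on its scores compared with everyone else's, which is what pushes participants toward high-quality contributions.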

Training Pipeline:

  • Utilizes the Hugging Face FineWeb dataset (sample-350BT configuration) for web-sourced text data.
  • Text is tokenized using a GPT-2 tokenizer (via distilgpt2).
  • Miners submit model updates to Hugging Face Hub after each gradient averaging step.
  • Validators confirm correctness via two query types: one for gradient validation and another for overall model integrity.
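The aggregation and validation steps above can be sketched in a few lines. This is a toy model, not SN38's implementation: the tolerance, cosine-similarity check, and function names are assumptions made for illustration.

```python
# Toy sketch (not the SN38 codebase) of a gradient-averaging step and a
# validator's gradient check, as described in the pipeline above.

import math

def average_gradients(grads):
    """Element-wise mean of miner-submitted gradients (the aggregation
    step that produces the shared model update)."""
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(grads[0]))]

def gradient_valid(reported, recomputed, tol=1e-3):
    """Validator check: the miner-reported gradient must closely match
    the validator's own recomputation (cosine similarity near 1)."""
    dot = sum(a * b for a, b in zip(reported, recomputed))
    norm = (math.sqrt(sum(a * a for a in reported))
            * math.sqrt(sum(b * b for b in recomputed)))
    return norm > 0 and dot / norm > 1 - tol

miner_grads = [[0.10, -0.20, 0.30], [0.12, -0.18, 0.28]]
avg = average_gradients(miner_grads)
```

In the real subnet these updates are pushed to the Hugging Face Hub after each averaging step, with validators running the second query type to confirm overall model integrity.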

Stats and Scale: As of recent data, it attracts about 0.54%-0.56% of Bittensor’s overall emissions. It’s identified under “Distributed Training” in listings and has completed runs like a 1.1B parameter model. Emission rates and participation have grown, with reports noting a 150% increase in involvement since inception, attributed to its incentive model.

Differentiation from Similar Subnets: Unlike broader subnets like Templar (SN3), which emphasize internet-scale orchestration and data sourcing, SN38 specializes in the mechanics of gradient computation and aggregation for more focused, scalable LLM training.

This subnet aims to democratize AI development, allowing anyone with suitable hardware to contribute and earn rewards, potentially reducing energy costs by up to 40% compared to traditional clusters, as some studies of decentralized systems suggest.

Users and Miners

Users: These are typically AI developers, researchers, organizations, or projects leveraging the subnet’s trained models or infrastructure. Specific users aren’t publicly listed in detail, but the subnet serves those interested in decentralized LLM training outputs, such as open-source AI builders or teams avoiding centralized dependencies. For instance, it’s positioned as a backbone for collaborative model training, appealing to entities in AI scalability research.

Miners: Miners are the core contributors providing compute power. To become one:

  • Requirements: A powerful GPU setup (e.g., high-end hardware for gradient computations), stable high-speed internet, the Bittensor client software, miner-specific code from the subnet’s repo, and some TAO tokens for staking/registration.
  • Process: Miners train local model versions on dataset shards, compute gradients, and submit updates. Rewards are earned in TAO based on performance scores.
  • Known Details: The subnet has attracted growing participation, with reports of thousands of global GPU owners involved in similar decentralized setups.

Miners are incentivized through emissions, with successful ones earning from the subnet’s 0.54%+ share of network rewards.

Aspect                 | Users                                                        | Miners
Primary Role           | Consume trained models or integrate outputs for AI projects  | Provide compute for training (gradients, data processing)
Entry Barriers         | Low; access via Bittensor tools or Hugging Face              | High; requires GPUs, software setup, TAO stake
Examples/Beneficiaries | AI researchers, devs using decentralized LLMs                | GPU owners earning TAO rewards
Scale/Stats            | Indirect via Bittensor ecosystem                             | Growing; part of 150% participation increase in Proof of Intelligence models

Benefits of Holding SN38 Alpha Tokens

SN38 Alpha tokens are the subnet-specific tokens in Bittensor’s ecosystem, introduced via a protocol upgrade to allow direct staking and market-driven incentives for individual subnets.

As of September 6, 2025, SN38 trades around $1.70, with a market cap of roughly $4.3M and 24-hour volume above $2M.

Holding them offers several benefits:

Staking Rewards: Stake SN38 to direct a portion of Bittensor’s TAO emissions to the subnet. Holders earn proportional TAO rewards based on the subnet’s performance and emission allocation. This acts like yield farming, where successful subnets amplify returns.
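To make the proportional-reward idea concrete, here is a small worked example. The emission figures and stake amounts are hypothetical assumptions chosen for illustration, not actual SN38 parameters.

```python
# Illustrative arithmetic only: the emission split and stake amounts
# below are hypothetical, not actual SN38 figures.

def holder_daily_tao(subnet_emission_tao: float, holder_stake: float,
                     total_stake: float) -> float:
    """A staker's share of the subnet's daily TAO emission, proportional
    to their fraction of the total stake on the subnet."""
    return subnet_emission_tao * holder_stake / total_stake

# Example: a subnet receiving 0.55% of an assumed 7200-TAO daily
# network emission, with a holder staking 1% of the subnet's total.
subnet_daily = 7200 * 0.0055  # ~39.6 TAO/day flowing to the subnet
payout = holder_daily_tao(subnet_daily,
                          holder_stake=100, total_stake=10_000)
```

The takeaway is that returns scale with both the subnet's emission allocation (which grows if SN38 performs well) and the holder's fraction of staked tokens.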

Price Appreciation Potential: If SN38 grows in adoption (e.g., more miners, better models), the token’s value can rise. Recent trends show +25% gains in some periods, driven by decentralized AI hype. It’s a speculative bet on the subnet challenging proprietary AI dominance.

Diversification in Bittensor: Unlike holding pure TAO, SN38 allows targeted exposure to distributed training’s success, potentially outperforming the broader network if the subnet scales (e.g., amid reports of fast model improvements).

Risks to Note: Volatility is high (e.g., -4% to +139% swings), and benefits depend on subnet emissions and competition from other training subnets like SN3. Always trade via official platforms like Taostats or Bittensor.ai.

Learn more about the subnet via their X account.

SN38 was featured on Novelty Search! Check out the YouTube video.
