
Cover image by: DreadBongo
This video is a masterclass on what actually matters when building a Bittensor subnet.
1. A subnet is just a benchmark
Const’s main point is simple: the entire subnet starts and ends with a benchmark. Before thinking about a product, website, revenue, or hype, you first need to define:
- What miners are submitting
- How validators evaluate those submissions
- How performance translates into weights
If the benchmark isn’t solid, everything else is irrelevant.
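The three bullets above boil down to a scoring-to-weights step. Here is a minimal sketch of that translation; the function name and the even-split fallback are my own assumptions, not anything from the video (real subnets set weights through the Bittensor SDK):

```python
def scores_to_weights(scores: dict[int, float]) -> dict[int, float]:
    """Normalize raw benchmark scores per miner UID into weights that sum to 1.

    Hypothetical helper: illustrates 'performance translates into weights'.
    """
    total = sum(scores.values())
    if total == 0:
        # Assumption: with no valid submissions, spread weight evenly.
        n = len(scores)
        return {uid: 1.0 / n for uid in scores}
    return {uid: s / total for uid, s in scores.items()}

# Miner 0 scored twice as well as miners 1 and 2:
weights = scores_to_weights({0: 2.0, 1: 1.0, 2: 1.0})
```

However submissions are scored, everything downstream reduces to this normalization: better benchmark performance, larger share of emissions.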
2. Submission and evaluation must be defined early
Miners can submit in different formats, like:
- HuggingFace models
- RPC endpoints
- Docker containers
- GitHub code submissions
- Trading logs / exchange data
- Scraped datasets (S3 buckets)
But whatever the format is, it must be paired with a clear evaluation method. Once you define both, the subnet is basically just two scripts: a miner script and a validator script.
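The "two scripts" idea can be shown in a few lines. This is a toy sketch with a made-up task (uppercasing a string stands in for real miner work); the function names are illustrative, not part of any actual subnet:

```python
def miner_submit(task: str) -> str:
    """Miner script: produce a submission for the benchmark task.
    (Stand-in for real work: training a model, serving an endpoint, ...)"""
    return task.upper()

def validator_evaluate(task: str, submission: str) -> float:
    """Validator script: score a submission against the expected output."""
    expected = task.upper()
    return 1.0 if submission == expected else 0.0

task = "hello"
score = validator_evaluate(task, miner_submit(task))
```

The pairing is the point: a submission format without a matching evaluation method is not a benchmark.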
3. Incentives must align with the real goal
A subnet fails when miners optimize for the benchmark but not for the real-world goal.
So the real challenge is: how do you measure the thing you actually want?
Example: a coding agent subnet might want “better software engineering,” but measuring that properly is difficult.
4. Overfitting is the biggest enemy
Miners will overfit if they can. He gives Ridges as an example: a fixed benchmark like SWE-bench lets miners inject hardcoded solutions instead of actually getting better at coding.
Solution: synthetic benchmarks that generate near-infinite task variations, so miners can't memorize answers.
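One common way to build such a synthetic benchmark is to derive each task from a seed, so validators can draw fresh variants at evaluation time. A minimal sketch, with an arbitrary toy task (sum a random list) as the assumption:

```python
import random

def make_task(seed: int) -> tuple[list[int], int]:
    """Generate one synthetic task variant and its ground-truth answer.

    Each unseen seed yields a fresh variant, so a hardcoded answer to
    one variant transfers nothing to the next.
    """
    rng = random.Random(seed)
    xs = [rng.randint(0, 100) for _ in range(10)]
    return xs, sum(xs)

# A validator draws a seed the miner has never seen:
xs, answer = make_task(seed=12345)
```

The miner only ever sees `xs`; the validator regenerates `answer` from the seed, so there is no fixed answer key to memorize.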
5. Reduce randomness and variance
Even with a good benchmark, if evaluation has high randomness, incentives become weak.
High variance = unclear leaderboard = less competitive pressure.
So you need evaluation that is:
- consistent
- repeatable
- hard to game
6. Copy resistance matters
If miners can copy the current top solution and still get rewarded, incentives get diluted. The benchmark needs to make it hard for lazy copying to compete.
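One possible copy penalty (my illustration, not a mechanism described in the video) is to reward only the first miner to post each distinct solution, so later byte-identical copies earn nothing:

```python
import hashlib

def dedupe_submissions(submissions):
    """Given (uid, blob) pairs ordered by submission time, mark whether
    each miner was first to post that exact solution."""
    seen, rewarded = set(), {}
    for uid, blob in submissions:
        digest = hashlib.sha256(blob.encode()).hexdigest()
        rewarded[uid] = digest not in seen
        seen.add(digest)
    return rewarded

out = dedupe_submissions([(1, "solution-A"), (2, "solution-A"), (3, "solution-B")])
```

Exact-hash matching only catches byte-identical copies; real copy resistance also has to handle trivially perturbed clones, which is a benchmark-design problem rather than a hashing one.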
7. UID pressure is a hidden design problem
In “volume-based” subnets (scraping, data collection), miners may try to dominate by registering many UIDs. If the subnet design encourages that, it becomes inefficient and congested.
8. Open-source (OSS) and winner-take-all are a strong starting model
He strongly recommends:
- OSS submissions first
- Winner-take-all incentives early
Because it reduces confusion, increases learning speed, and lets subnet teams see what miners are actually doing. Black-box subnets make it harder to detect manipulation or overfitting.
9. Subnets should integrate with other Bittensor subnets
He pushes composability hard: use existing subnet infrastructure instead of reinventing everything.
Examples:
- hosting models on Chutes
- compute from Basilica
- data from Dataverse
- internet access from Desearch
- storage buckets from Hippius
This lowers cost → lowers sell pressure → strengthens the subnet economy.
10. Transparency is the #1 requirement
If your subnet isn’t auditable, people will assume you’re cheating. Even if the subnet is centralized, it must be verifiable:
- anyone should be able to reproduce evaluation logic
- results should match what’s posted on-chain
Decentralization is important, but transparency comes first.
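"Results should match what's posted on-chain" can be made concrete as an audit check: anyone re-runs the evaluation locally and compares against the published weights. A hypothetical sketch (the function and tolerance are my assumptions):

```python
def audit(local_scores: dict[int, float],
          onchain_weights: dict[int, float],
          tol: float = 1e-6) -> bool:
    """Re-derive weights from locally reproduced scores and check they
    match what the validator posted on-chain. An auditable subnet is
    one where any outsider can make this return True."""
    total = sum(local_scores.values()) or 1.0
    expected = {uid: s / total for uid, s in local_scores.items()}
    return all(abs(expected[uid] - onchain_weights.get(uid, 0.0)) <= tol
               for uid in expected)

ok = audit(local_scores={0: 2.0, 1: 2.0},
           onchain_weights={0: 0.5, 1: 0.5})
```

If this check fails for outside observers, they will assume the validator is cheating, whether or not it is.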
11. Expect miners to exploit everything
Miners will try to break your system. That’s normal. And if you’re transparent about exploits, the community will help fix them faster.
12. Ship fast, test on mainnet
He recommends launching quickly instead of spending months on testnet. Deploy to mainnet, iterate fast, and update validators automatically. The key mindset is to fail fast and adapt fast.
Core takeaway
A great subnet is not about branding or product. It’s about building a benchmark that is aligned, hard to overfit, low-variance, transparent, and incentive-compatible.
Everything else comes later.
