How Subnet 44 is Redefining Computer Vision Without Human Labels

Most AI models today depend on human-labeled datasets for training and validation. But this approach is costly, slow to scale, and prone to inconsistency. For fields like sports analytics, robotics, and medical imaging, the reliance on labels has become a critical bottleneck.

Score has taken a different path, developing a generalized validation framework that removes the need for human-labeled data altogether.

Score, built on Bittensor’s Subnet 44, is a decentralized computer vision platform that tracks player movements, ball positions, and game events in real time, providing advanced football analytics for scouting, tactics, and predictive modeling. By leveraging Bittensor’s incentivized network, it delivers high-quality, affordable data at scale for teams, fantasy sports, betting, and performance analysis.

Why the Change

The shift didn’t happen by chance—it was driven by clear challenges and opportunities that demanded a new approach:

a. Foundation models are heavy: They run mostly in the cloud, making them slow and expensive.
b. Edge deployment is the future: To scale, models must be lighter, faster, and able to run directly where footage is created.
c. Patching was unsustainable: As miners exploited blind spots, constant fixes proved inefficient. A stronger foundation was needed.

The Breakthrough

Instead of relying on human ground truth, the system now uses Vision-Language Models (VLMs) to generate pseudo-annotations. Optical flow data (motion context) is added so the model captures direction and speed, making recognition sharper.
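For illustration only, here is a minimal Python sketch of how such pseudo-annotations might be assembled. It is not the actual Subnet 44 pipeline: query_vlm is a hypothetical placeholder for whatever VLM client produces boxes and labels, while OpenCV's dense Farneback optical flow supplies per-object direction and speed.

# Minimal, illustrative sketch (not the actual Subnet 44 pipeline): a VLM
# supplies boxes and labels, and dense optical flow adds per-object direction
# and speed so each pseudo-annotation carries motion context.
# query_vlm is a hypothetical placeholder for whatever VLM client is used.
import cv2
import numpy as np

def query_vlm(frame: np.ndarray) -> list[dict]:
    """Hypothetical VLM call returning [{'label': str, 'box': (x, y, w, h)}, ...]."""
    raise NotImplementedError("swap in a real vision-language model client")

def pseudo_annotate(prev_frame: np.ndarray, frame: np.ndarray) -> list[dict]:
    # Dense Farneback optical flow between consecutive frames.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    annotations = []
    for det in query_vlm(frame):
        x, y, w, h = det["box"]
        region = flow[y:y + h, x:x + w]            # motion vectors inside the box
        dx, dy = float(region[..., 0].mean()), float(region[..., 1].mean())
        annotations.append({
            "label": det["label"],
            "box": det["box"],
            "speed": float(np.hypot(dx, dy)),       # pixels per frame
            "direction": float(np.degrees(np.arctan2(dy, dx))),
        })
    return annotations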

Next, a second VLM will act as a judge, comparing miner outputs against pseudo-ground-truth and choosing the better annotation. This relative evaluation avoids inheriting the full errors of generated data while preventing miners from gaming the system.
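A rough sketch of what that relative evaluation could look like is shown below, assuming a hypothetical judge_vlm client rather than Score's actual interface: the judge sees both annotations and the miner is rewarded only when its output is preferred over the pseudo-ground-truth.

# Illustrative sketch of the VLM-as-judge step. judge_vlm is a hypothetical
# client (the real judging interface may differ); it is shown both candidates
# and asked which one better matches the frame.
import json

def judge_vlm(prompt: str, image_bytes: bytes) -> str:
    """Hypothetical judge call returning 'A' or 'B'."""
    raise NotImplementedError("swap in a real vision-language model client")

def relative_score(image_bytes: bytes, miner_annotation: dict, pseudo_gt: dict) -> float:
    prompt = (
        "Two candidate annotations for this frame:\n"
        f"A: {json.dumps(miner_annotation)}\n"
        f"B: {json.dumps(pseudo_gt)}\n"
        "Reply with the single letter of the annotation that better matches the frame."
    )
    verdict = judge_vlm(prompt, image_bytes).strip().upper()
    # The miner is rewarded for beating the generated pseudo-ground-truth,
    # not for copying it, which limits how much of its error is inherited.
    return 1.0 if verdict == "A" else 0.0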

The Development Phase

To ensure smooth adoption and maintain quality from the start, the rollout is being executed in carefully planned phases:

a. Phase 1 (Live on Testnet): VLM + optical flow generates bounding boxes, labels, and actions as pseudo-ground-truth.
b. Phase 2 (In Development): A second VLM evaluates miner outputs against pseudo-annotations, selecting the stronger result.

This hybrid approach simplifies the judge’s task while maintaining quality control on pseudo-annotations.

What Improves

The new approach brings several key improvements that strengthen the system and set it up for long-term growth:

a. Harder to exploit: Miners can no longer target predictable blind spots to game the system.
b. Scalable: Validation no longer depends on scarce human labels.
c. Forward focus: Freed from patching, development shifts toward tracking, ensemble judges, and new domains.

Roadmap Ahead

Looking forward, the roadmap outlines the next steps to strengthen accuracy, scalability, and reliability of the system:

a. Activate the VLM-as-Judge at scale.
b. Add tracking to follow objects across frames.
c. Deploy ensembles of VLMs for stronger pseudo-ground-truth.
d. Experiment with multiple judges to cross-check reliability (a simple voting sketch follows this list).
e. Apply smarter sampling for more efficient evaluation.
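To illustrate the multiple-judges idea, the sketch below aggregates several judge verdicts by majority vote and treats their agreement level as a reliability signal. The vote scheme and judge interface are assumptions for illustration, not the subnet's confirmed design.

# Illustrative only: cross-check several judge VLMs by majority vote.
# `judges` is a list of callables with the same (prompt, image_bytes) -> 'A' | 'B'
# shape as the hypothetical judge_vlm above.
from collections import Counter

def cross_check(judges, prompt: str, image_bytes: bytes) -> tuple[str, float]:
    verdicts = [judge(prompt, image_bytes).strip().upper() for judge in judges]
    winner, votes = Counter(verdicts).most_common(1)[0]
    agreement = votes / len(verdicts)   # 1.0 = unanimous, lower = less reliable
    return winner, agreement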

Why It Matters

This is more than a subnet upgrade. By breaking free from human labels, the framework unlocks scalable validation for any visual domain—sports, robotics, healthcare, or smart cities.

The long-term vision is clear: a self-improving, decentralized computer vision network that adapts, learns, and grows without centralized bottlenecks.

Resources

To explore Subnet 44, stay connected via: 

Official Website: https://www.wearescore.com/
X (Formerly Twitter): https://x.com/webuildscore
GitHub: https://github.com/score-technologies/score-vision
Discord: https://discord.gg/wearescore
LinkedIn: https://www.linkedin.com/company/webuildscore/
