
When you think about trust in artificial intelligence, it's easy to forget how much of it relies on faith. We type prompts into a model, get an answer, and simply assume it did the work correctly. But in a world where AI decisions can shape markets, policy, and identity, that assumption is starting to feel shaky.
That's the problem Inference Labs set out to fix. The team behind Bittensor's Subnet 2 is building a cryptographic layer of truth for machine intelligence: a system that doesn't just generate answers, but proves they came from where they claim to.
In a conversation with Dan and Hudson, the team behind Subnet 2, Keith "Bittensor Guru" Singery explored how their latest innovation, DSperse, could redefine the way AI models are verified, shared, and commercialized.
A Sound You Can Trust
Keith began by recalling a moment from earlier in their work: the analogy that stuck.
"It's like hearing a Porsche drive by," he said. "You don't see what's inside the engine, but you know what you're hearing."
Dan smiled. "Exactly. What we've done," he explained, "is take that same concept to AI. You don't have to look inside the model (its 'engine') to verify that it's real. You just need to check its unique digital fingerprint."
That fingerprint comes from zero-knowledge proofs (ZKPs): cryptographic evidence that confirms a model's output came from a specific source, without revealing its inner workings.
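The "fingerprint" intuition can be sketched in a few lines. Note the hedge: this toy uses a plain hash commitment, which is not a zero-knowledge proof (a real ZKP lets the verifier check the claim without ever seeing the weights); the sketch only shows what it means to bind an output to a specific model. All function names here are hypothetical.

```python
import hashlib
import json

def model_fingerprint(weights):
    """Commit to a model's parameters as a single digest (toy commitment,
    NOT a real ZKP -- a real proof would never require sharing weights)."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def attest_output(weights, x):
    """Run a stand-in 'model' (a dot product) and tag the result with the
    fingerprint of the weights that produced it."""
    y = sum(w * xi for w, xi in zip(weights, x))
    return {"output": y, "fingerprint": model_fingerprint(weights)}

def verify(attestation, expected_fingerprint):
    # The verifier only checks that the result is bound to the promised
    # model -- it never inspects the weights themselves.
    return attestation["fingerprint"] == expected_fingerprint

weights = [0.5, -1.2, 3.0]
att = attest_output(weights, [1.0, 2.0, 3.0])
assert verify(att, model_fingerprint(weights))
```

The design point is the interface, not the crypto: the consumer holds only a fingerprint of the model it was promised and checks every answer against it.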
The Cost of Proving the Truth
"Okay," Keith asked, "but how expensive is that truth?" Hudson leaned forward. "Right now, proving computations is costly. Even something as small as GPT-2 can take a hundred dollars' worth of compute to verify on AWS. What we're doing is bringing that cost down from dollars to cents, and eventually to fractions of a cent."

That leap, he explained, comes from DSperse, their latest innovation. It splits complex computations into smaller, parallelized segments, each handled by a different miner across the network, cutting the overall workload dramatically.

"In practice," Dan added, "we've halved compute and memory requirements, and cut runtime by even more. It's the same security, just smarter math."
From Subnet to Supercomputer
The conversation shifted toward Omron (Subnet 2), their proving ground within Bittensor. Dan described it as a sandbox for experimentation: part research lab, part global competition.
"We've run over 300 million proofs through the subnet," he said. "And what's fascinating is how the network evolves. One miner even built a custom FPGA board to outpace everyone else. We thought something broke; turns out he'd just engineered his own advantage."
That's the power of Bittensor's incentive layer. By rewarding miners for efficiency, the subnet turns competition into innovation. Faster proofs mean higher rankings, and higher rankings mean more rewards.
"It's like natural selection for computation," Hudson said, laughing.
Proofs, Competitions, and Real Use Cases
Keith asked how this competition model worked. Dan broke it down. "There are two sides. One is Prover-as-a-Service, where miners generate proofs as fast as possible, helping us benchmark performance. The other is competitions, where miners create their own zero-knowledge circuits to replicate a model's behavior."
Their first test? Age verification. The challenge was simple: build a model that could estimate a person's age from a selfie, but do it without sending the image to a central server.
Hudson explained how it worked. "Normally, companies send that image to their servers, which is risky. But with zero-knowledge proofs, we can verify the result locally, on-device, and just send a proof, not the photo. So you prove your age without revealing your identity."
That's huge for privacy compliance, especially in regions with strict data protection laws.
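The on-device flow described above can be sketched as follows. Everything here is hypothetical and simplified: `estimate_age` stands in for the real model, and the hash commitment is a placeholder for the actual zero-knowledge proof. The point is what crosses the wire, namely a claim and a proof artifact, never the photo.

```python
import hashlib

def estimate_age(image_bytes):
    """Stand-in for the age-estimation model running on the device."""
    return 25  # hypothetical prediction for this sketch

def device_attest(image_bytes, threshold=18):
    """Runs entirely on-device. Only the boolean claim and a commitment
    to the input leave the phone -- never the image itself."""
    over = estimate_age(image_bytes) >= threshold
    commitment = hashlib.sha256(image_bytes).hexdigest()  # placeholder for a ZK proof
    return {"over_threshold": over, "commitment": commitment}

msg = device_attest(b"raw selfie bytes")
assert msg["over_threshold"] is True
assert set(msg) == {"over_threshold", "commitment"}  # no image field
```

A real deployment would replace the commitment with a proof that the committed image, run through the committed model, actually yields the claimed result.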
Turning Models Into Math
To illustrate how it works, Dan pointed to a familiar example. "Think about YOLO, the object detection model. It scans an image, recognizes objects, and labels them. We take that same process but convert every operation into math."
Every neuron, activation, and connection is translated into a web of mathematical constraints. Those constraints must all check out for the inference to be valid.
"When they do," Hudson said, "you get a proof, a kind of digital signature that says, yes, this exact model produced this result."
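To make "every operation becomes a constraint" concrete, here is a toy arithmetization of a single neuron. Real ZK circuits work over finite fields with far more machinery; this sketch (all names hypothetical) only shows the shape: run the computation once to record a witness of intermediate values, then check that every step's equation holds over that witness.

```python
def neuron_trace(w1, w2, b, x1, x2):
    """Run y = relu(w1*x1 + w2*x2 + b), recording every intermediate
    value -- this record is the 'witness'."""
    m1 = w1 * x1
    m2 = w2 * x2
    s = m1 + m2 + b
    y = max(s, 0.0)  # ReLU
    return {"m1": m1, "m2": m2, "s": s, "y": y}

def check_constraints(w1, w2, b, x1, x2, t):
    """Each arithmetic step becomes one equation that must hold."""
    return (
        t["m1"] == w1 * x1 and
        t["m2"] == w2 * x2 and
        t["s"] == t["m1"] + t["m2"] + b and
        t["y"] == max(t["s"], 0.0)
    )

t = neuron_trace(2.0, -1.0, 0.5, 3.0, 1.0)
assert check_constraints(2.0, -1.0, 0.5, 3.0, 1.0, t)
t["y"] = 99.0  # tamper with the claimed output...
assert not check_constraints(2.0, -1.0, 0.5, 3.0, 1.0, t)  # ...and a constraint fails
```

An honest inference satisfies every constraint; a forged output breaks at least one, which is exactly what the proof certifies without revealing the witness.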
"So the server never sees the image," Keith clarified.
"Exactly," said Dan. "It only gets the result and the proof. That's the beauty of zero-knowledge: trust without exposure."
DSperse: The Leap Forward
When the conversation turned to DSperse, the tone shifted; this was the part they were most excited about.

"With DSperse," Dan said, "we can take a massive model (something that would normally need a supercomputer) and slice it into hundreds or thousands of pieces. Each miner runs a small part, proves it, and sends it back. Together, those pieces form a single, verifiable output."
Hudson jumped in. "And it's not just distributed inference; it's parallelized. Every slice runs at the same time. What used to take a minute can now happen in a second."
That's when the term distributed inference came up; as Keith admitted, few had ever heard it before.
"Distributed training is common," Hudson said. "Distributed inference? That's new. And we're probably the first to pull it off."
A Decentralized Supercomputer
To visualize it, Dan described the process like a branching tree. "Instead of one computer doing everything in a line, multiple miners work on separate branches simultaneously. They send their proofs back, we verify them, and the model continues. Computation and verification happen hand in hand."
What emerges, effectively, is a decentralized supercomputer that grows stronger with every new miner that joins.
"The incentive structure makes it self-optimizing," Hudson added. "Every miner wants to be faster, cheaper, better. That collective effort becomes an engine for global computation."
Protecting the Secret Sauce
But this technology isn't just about efficiency. It's also about ownership.
"Let's say I've spent months fine-tuning a model," Dan explained. "I've invested time, data, and expertise. If I hand that model over to a client, they can copy it and never pay me again."
With DSperse, that changes. "Now I can send it as a compiled mathematical circuit. You can run it, get the results you need, but you'll never see the inner workings. My weights and biases stay mine."
Keith paused. "So you're protecting your intellectual property (IP) not through patents, but through cryptography."
"Exactly," he said. "It's like digital rights management (DRM) for intelligence, but without the restrictions."
Proving Honesty, Not Just Accuracy
As we moved through examples, the conversation circled back to the trust problem. Hudson pointed out that even big AI providers struggle with verification.
"People assume they're talking to GPT-4, but sometimes the system reroutes them to smaller models," he said. "You're paying for one thing and getting another."
DSperse changes that. It allows a user to demand proof that they're interacting with the promised model: no substitutions, no shortcuts.
"AI needs a layer of honesty," Dan said. "We're building that."
KYC, Compliance, and Beyond
The team sees KYC as one of the first real-world applications. "In some jurisdictions, it's illegal to store facial data," Hudson explained. "Zero-knowledge proofs make it possible to verify identity or age without ever seeing the photo. It's privacy and compliance in one."
From finance to healthcare, the implications ripple outward. Imagine hospitals verifying diagnoses without revealing patient data, or governments proving authenticity without central databases.
"This isn't theoretical anymore," Dan said. "We're already running these models on Subnet 2."
Collaboration Across the Network
Inference Labs isn't building DSperse in isolation. The team is already collaborating across the wider Bittensor network.
Hudson mentioned Taoshi (Subnet 8). "We delivered a ZK proof for their trading metrics," he said. "Now users can verify that their signals came from the top-performing miner, without exposing any proprietary data."
Dan added that they're also in discussions with other subnets, like Chutes and Targon, to bring verifiable inference to language and prediction models. "DSperse gives them the foundation to scale securely," he said.
The Open-Source Ethos
Before wrapping up, Keith asked what's next. Hudson was optimistic. "By year's end, I think we'll see breakthroughs that take this from small to massive models: 70B, maybe even 400B parameters. And every improvement we make feeds back into the subnet."
Dan nodded. "We're open-source by design. Not everything we build is public right away, but all the foundational pillars will be. That's how you grow a network of trust: by letting others build on it."
Building a Trust Layer for AI
As the conversation wound down, one thing became clear: this isn't just about faster AI. It's about trusted AI.
For years, the field has obsessed over making models more powerful. Now, creators like Inference Labs are asking the harder question: how do we make them accountable?
When Keith asked Dan what drives them, he smiled. "Because it's not enough for AI to be smart," he said. "It needs to be honest." And with Subnet 2's quiet revolution unfolding behind the scenes, honesty might just become the next great frontier in artificial intelligence.
