
JSTprove has officially launched on DSperse (Bittensor Subnet 2), marking a major step forward for decentralized AI infrastructure. While the release has flown under the radar, it addresses one of the most critical weaknesses in today’s AI agent economy: the lack of provable guarantees that a model actually performed the computation it claims.
The core problem: unverifiable AI behavior
Most AI systems operate as black boxes. When an agent claims it analyzed data, followed a strategy, or executed an action under specific rules, there is no cryptographic proof that the computation happened as described. The model could have used different parameters, skipped steps, or hallucinated results.
This limitation may be acceptable for low-risk use cases, but it becomes dangerous in finance, healthcare, robotics, and autonomous systems.
JSTprove and proof of inference
Inference Labs has introduced a protocol-level solution with JSTprove, a zero-knowledge toolkit that proves a specific model produced a specific output from specific inputs. The proof is publicly verifiable and does not reveal the model weights, inputs, or private data.
JSTprove recently moved from testnet to mainnet and is already scaling to thousands of proofs per week. It enables verifiable inference for complex models that were previously impractical for zkML systems.
Earlier zkML approaches required deep cryptography expertise, struggled with real-world models, and were too slow for production use. JSTprove abstracts most of that complexity, delivers 65 to 76 percent faster proving, and runs in under one gigabyte of memory by combining model slicing with DSperse.
A research paper released in October 2025 details a full ONNX-to-zero-knowledge pipeline, driven by a simple command-line workflow: compile, witness, prove, and verify.
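To make the four-stage workflow concrete, here is a toy sketch of the compile, witness, prove, and verify steps. This is not JSTprove's actual API: the function names are hypothetical, and it uses plain hash commitments rather than zero-knowledge proofs, so it only illustrates the shape of a verifiable inference receipt, not the cryptography.

```python
# Toy sketch of a compile -> witness -> prove -> verify flow.
# NOT JSTprove's API. Hash commitments stand in for ZK proofs:
# a real zk verifier checks the proof WITHOUT re-running the model
# or seeing the weights; this toy verifier recomputes everything.
import hashlib
import json

def commit(obj) -> str:
    """Deterministic commitment to a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def compile_model(weights):
    """'Compile': fix the model and publish its commitment."""
    return {"weights": weights, "commitment": commit(weights)}

def witness(model, x):
    """'Witness': run inference and record input and output."""
    y = sum(w * xi for w, xi in zip(model["weights"], x))  # toy linear model
    return {"input": x, "output": y}

def prove(model, wit):
    """'Prove': bind model, input, and output into one receipt."""
    return {
        "model_commitment": model["commitment"],
        "input_commitment": commit(wit["input"]),
        "output": wit["output"],
        "proof": commit([model["commitment"], wit["input"], wit["output"]]),
    }

def verify(receipt, model, x):
    """'Verify': recompute and check every commitment in the receipt."""
    y = sum(w * xi for w, xi in zip(model["weights"], x))
    return (
        receipt["model_commitment"] == model["commitment"]
        and receipt["input_commitment"] == commit(x)
        and receipt["output"] == y
        and receipt["proof"] == commit([model["commitment"], x, y])
    )

model = compile_model([2, -1, 3])
receipt = prove(model, witness(model, [1, 2, 3]))
print(verify(receipt, model, [1, 2, 3]))  # True
```

The key difference from this toy: in a real zkSNARK pipeline, the verify step consumes only the proof and public commitments, so the verifier never needs the weights or private inputs.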
TruthTensor in production
TruthTensor is already using this technology at scale. Launched about 40 days ago, the platform reports over one million agent fine-tunes, roughly 500,000 active trading agents, and around 500,000 builders.
Its Proof of Portfolio system allows traders to publish cryptographically verified performance without revealing strategies. The system has been audited by ZKVIN and Zellic. By late 2025, TruthTensor had processed over 283 million proofs and continues to scale. A partnership with Cysic brings ASIC acceleration, reducing costs and increasing throughput for verifiable inference.
This is no longer experimental research. It is live infrastructure handling real workloads.
Why this matters for AI agents
Any serious agent system eventually hits a trust ceiling. Trading agents must prove they followed a strategy. Medical agents must prove approved protocols were used. Robotics and autonomous systems must prove safety constraints were respected.
Until now, the only tools were audits, logs, and trusted intermediaries. Proof of inference replaces these with a mathematical receipt: anyone can independently verify what computation happened, with no trusted attestors and no room for fabricated logs.
The role of Bittensor and TAO
Subnet 2 sits at the infrastructure layer of the Bittensor network. Verifiable inference is a primitive that high-stakes subnets can consume across the ecosystem. Coding agents, signal generation subnets, and financial or healthcare agents all benefit from a shared trust layer grounded in cryptography.
As this capability becomes essential, value accrues to the network itself. Verifiable inference is not an application feature but a foundation.
AI agents are increasingly making decisions that affect money, safety, and real-world outcomes. That future cannot rely on blind trust. With JSTprove live on SN2, Bittensor now has a working cryptographic foundation for proving that intelligence actually did what it claimed.
The code is live, proofs are being generated, and the market is mostly quiet. Whether that changes soon is an open question.
Sources:
Inference Labs: https://inferencelabs.com
TruthTensor: https://truthtensor.com
Research paper: https://arxiv.org/pdf/2510.21024
