Venice Just Trained Its Flagship Uncensored Model on Bittensor

For years, the critique of decentralized AI was blunt: nice theory, where are the real workloads? This week, that question got an answer loud enough to rattle the industry.

Venice AI, the privacy-focused, uncensored AI platform led by founder Erik Voorhees, launched Venice Uncensored 1.2, its most capable uncensored model to date. Built on Mistral’s 24B architecture in collaboration with Dolphin AI, the new release ships with native vision support, a 4x larger context window, and stronger tool-use capabilities.

Venice Uncensored 1.2 was fine-tuned on Targon, Bittensor’s Subnet 4, using confidential compute. Read that line again. Not AWS, not GCP, not some hyperscaler hiding behind an enterprise contract; a subnet on Bittensor. A consumer AI platform with millions of users and a high-profile founder picked a decentralized subnet over Big Tech clouds for a flagship training run. That is a big deal.

A Nod From the Top

Voorhees didn’t bury the announcement. Posting on X, he confirmed directly that Venice had begun working with Bittensor subnets and that the Uncensored 1.2 model had been tuned on Targon. The post drew massive attention, with the TAO community calling it everything from “massive validation” to a “game changer.”

Targon, operated by Manifold Labs, amplified the collaboration, emphasizing the confidential nature of the training run. This detail matters enormously for a company whose entire value proposition is user privacy and censorship resistance.

For Venice, the logic for choosing Targon instead of big AI companies is clear. You cannot credibly sell an uncensored, privacy-first AI product while renting compute from vendors who reserve the right to review, throttle, or ban your workloads. Decentralized confidential compute isn’t a nice-to-have for a platform like Venice. It is structurally aligned with what the product actually is.

Why This Matters Beyond the Bittensor Ecosystem

For months now, people have challenged the feasibility of Bittensor’s decentralized AI narrative. Venice on Targon is the clearest signal yet that the flywheel is turning. A company serving millions of users chose a Bittensor subnet for a workload that centralized clouds would happily have taken. That is revenue, retention, and reputation flowing into the ecosystem on merit.

And Targon is not alone. Across the network, subnets are actively shipping commercial workloads that would raise eyebrows at any traditional AI startup:

Chutes (Subnet 64) has emerged as one of the clearest revenue stories in the ecosystem, running serverless AI inference at pricing up to 85% below traditional clouds and powering consumer apps with millions of users, including Janitor AI and Silly Tavern.

Score (Subnet 44), operated by Manako AI, is turning live video into structured, queryable data for professional sports and enterprise clients, and has pulled in partnerships with Big Four consulting (for instance, PwC France) and top-flight football clubs.

Yanez is selling adversarial datasets to top financial institutions to stress-test KYC and AML systems: a regulated, enterprise-facing use case that barely exists anywhere else in crypto.

How Targon Works, and Who Can Use It

The Venice deal is the highest-profile example, but Targon is not only open to big enterprises. It is a decentralized GPU and CPU compute cloud that any developer, startup, or company can access.

The core technology is the Targon Virtual Machine (TVM), which uses hardware-backed confidential computing (Intel TDX, AMD SEV, and NVIDIA's Confidential Computing stack) to run AI workloads in which the data and model weights remain encrypted end-to-end. Even the hardware providers themselves cannot see what is being processed on their machines. For sensitive training runs, regulated industries, or anyone shipping uncensored or proprietary models, that guarantee is non-trivial.

The approach is credible enough that it has drawn public recognition from Intel itself.

What Targon offers in practice:

  • Confidential training and inference. Data and models stay encrypted during compute, not just at rest and in transit. This is exactly the guarantee Venice leveraged.
  • On-demand access to high-end GPUs, including NVIDIA H200s, with uptime and latency competitive with centralized providers, and typically at lower cost.
  • Permissionless deployment. No account reviews, no workload bans, no single point of failure. If you can pay, you can compute.
  • Developer-friendly integration. A serverless SDK and marketplace model let teams spin up training or inference jobs without procuring their own hardware.
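To make the confidential-compute guarantee concrete, here is a minimal sketch of the client-side pattern such systems rely on: before releasing any encrypted data or weights, the client checks the TEE's attestation, i.e., that the enclave is vendor-signed and reports the exact code measurement the client expects. All names here (`Attestation`, `verify_attestation`, the image string) are illustrative assumptions, not Targon's actual SDK.

```python
# Hypothetical sketch of TEE attestation checking, the gate that makes
# "encrypted during compute" trustworthy. Not Targon's real API.
import hashlib
from dataclasses import dataclass

@dataclass
class Attestation:
    enclave_measurement: str  # hash of the code image the TEE reports running
    signed_by_vendor: bool    # stands in for the Intel/AMD/NVIDIA signature check

# The measurement the client expects (illustrative image name).
EXPECTED_MEASUREMENT = hashlib.sha256(b"targon-tvm-image-v1").hexdigest()

def verify_attestation(att: Attestation) -> bool:
    """Accept the enclave only if the vendor signature is valid AND the
    measurement matches the code we expect. Only then would a client
    release encrypted training data or model weights to the machine."""
    return att.signed_by_vendor and att.enclave_measurement == EXPECTED_MEASUREMENT

good = Attestation(EXPECTED_MEASUREMENT, signed_by_vendor=True)
tampered = Attestation(hashlib.sha256(b"modified-image").hexdigest(), signed_by_vendor=True)

assert verify_attestation(good)
assert not verify_attestation(tampered)
```

The key point the sketch illustrates: trust flows from the hardware vendor's signature and a reproducible code measurement, not from the goodwill of whoever owns the GPU, which is what lets a privacy-first platform like Venice train on machines it does not control.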

The practical upshot: a solo researcher fine-tuning a domain-specific model, a startup launching an AI product that handles sensitive user data, and an enterprise running compliance-critical training can all use the same infrastructure Venice just used, at targon.com, with discounted compute credits offered to students and early-stage builders.

The Bigger Picture

Bittensor’s thesis has always been that market-based coordination of AI compute and services would eventually outcompete centralized providers on the dimensions that matter most: cost, openness, censorship resistance, and the ability to specialize. For a long time, that thesis lived mostly in pitch decks and fancy posts on X.

It doesn’t anymore. Venice’s uncensored flagship trained on Targon. Consumer apps running inference through Chutes. Enterprise compliance datasets coming out of Yanez. Big Four partnerships with Score.

The next twelve months will tell us how far the flywheel can spin. But the question has clearly changed. It is no longer “will decentralized AI find real use cases?” It is “which subnet is shipping next?”

And you would not be wrong to imagine that more subnets are cooking.
