Const on Building the “Monetary Computer” for AI — And Why It Matters Now More Than Ever


In a recent appearance on the Virtual Bacon podcast, Bittensor co-founder Jacob Steeves (Const) laid out a vision that reaches far beyond anything typically associated with crypto-AI projects. From the largest decentralized model training run ever completed, to an internal token economy that has reshaped how subnets compete for capital, Steeves made the case that Bittensor is quietly assembling the infrastructure for a fully ownable, permissionless AI stack — one designed to rival the closed labs that currently dominate the field.

Here is what he had to say.

From DARPA to Bitcoin to Bittensor

Steeves’ path to founding Bittensor began at the intersection of mathematics, computer science, and the deep learning breakthroughs of the early 2010s. Inspired by the same neuroscience-rooted intuitions that drove Geoffrey Hinton and Yann LeCun, he spent time as a DARPA contractor working on neuromorphic chips before discovering Bitcoin around 2014. That collision of interests — machine intelligence and programmable money — planted the seed for everything that followed.

“I looked at what was working, what was going on in Bitcoin, and said, ‘Hey, we can use this for artificial intelligence,’” Steeves recalled. The core insight, he explained, is deceptively simple: money is an optimization technology. Markets organize matter, resources, and behavior more efficiently than any central planner. If you can extend that logic into the digital realm, you unlock an entirely new form of computation.

He launched the OpenTensor Foundation and started the Bittensor network in 2021, with the goal of building a general-purpose platform for what he calls “incentive computing” or “monetary computing”.

Subnet 3 and the Largest Decentralized Training Run in History

The most headline-grabbing development in recent months has been Subnet 3, operated by the Templar team, which completed the largest decentralized pre-training run ever achieved. This was not fine-tuning or post-training adaptation on an existing model. Templar trained a 72-billion-parameter language model from scratch, coordinating GPU contributors scattered across the open internet.

Steeves walked through the technical architecture in detail. The model’s weights are sharded across cloud storage buckets. Participating machines pull down the latest state, compute gradient updates on their slice of the training data, and push those updates back, where they are aggregated into the global model in an asynchronous parameter-server loop — fundamentally the same workflow used inside Google or OpenAI, but stretched across an adversarial, permissionless network.
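To make that loop concrete, here is a minimal PyTorch-style sketch of the pull-compute-push cycle described above. The storage helpers (`download_latest_weights`, `upload_update`) are hypothetical placeholders standing in for whatever Templar actually uses to read and write the cloud storage buckets; this is an illustration of the workflow, not the subnet's real code.

```python
import torch

def training_step(model, optimizer, batch, storage):
    # 1. Pull the latest global weights from shared cloud storage.
    state_dict = storage.download_latest_weights()   # hypothetical helper
    model.load_state_dict(state_dict)

    # 2. Compute a gradient update on this machine's slice of the data.
    optimizer.zero_grad()
    loss = model(**batch).loss
    loss.backward()

    # 3. Push the update back to storage, where a coordinator aggregates it
    #    into the global model in an asynchronous parameter-server loop.
    update = {name: p.grad.detach().cpu() for name, p in model.named_parameters()}
    storage.upload_update(update, step_loss=loss.item())  # hypothetical helper
```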

That adversarial dimension is what makes it so hard. When you don’t control the contributing machines, participants can submit poisoned gradients to sabotage the run, fake their work to collect rewards, or simply go offline without warning. Templar addressed these challenges by building a market that directly measures how much each contributed gradient reduces the global model’s loss. Miners are scored and paid in proportion to the verified value of their updates, creating a continuous filtration system that rewards genuine contributors and cycles out bad actors.
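A toy sketch of that incentive idea is below, assuming each miner's candidate loss has already been measured on a held-out evaluation batch. The actual subnet mechanism is more involved; this only shows the core rule of paying in proportion to verified loss reduction.

```python
def score_and_pay(baseline_loss: float,
                  candidate_losses: dict[str, float],
                  block_emission: float) -> dict[str, float]:
    """Pay each miner in proportion to the verified loss reduction its update produced."""
    # Only genuine improvements score; poisoned or fake gradients that raise
    # the loss earn nothing and are effectively cycled out of the run.
    improvements = {m: max(0.0, baseline_loss - loss)
                    for m, loss in candidate_losses.items()}
    total = sum(improvements.values())
    if total == 0:
        return {m: 0.0 for m in candidate_losses}
    return {m: block_emission * imp / total for m, imp in improvements.items()}

# Example: miner_a lowers the global loss, miner_b submits a poisoned update.
print(score_and_pay(2.31, {"miner_a": 2.27, "miner_b": 2.45}, block_emission=1.0))
```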

On the algorithmic side, Templar developed a novel compression technique layered on top of Sparse DiLoCo, a method for efficient distributed training over high-latency internet connections. The approach has already seen adoption beyond Bittensor: Steeves noted that independent researchers are now using Sparse DiLoCo to train models across consumer MacBooks in their homes.
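As a rough intuition for why compression matters here, the sketch below shows generic top-k sparsification: each node transmits only the largest-magnitude entries of its local update, shrinking communication enough to survive slow internet links. This illustrates the general idea only; it is not Templar's or DiLoCo's actual scheme.

```python
import torch

def sparsify(update: torch.Tensor, k_fraction: float = 0.01):
    """Keep only the largest-magnitude entries so a node sends ~1% of the tensor."""
    flat = update.flatten()
    k = max(1, int(k_fraction * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]                      # indices + values only

def densify(idx: torch.Tensor, vals: torch.Tensor, shape) -> torch.Tensor:
    """Rebuild a dense (mostly zero) update from the sparse message."""
    out = torch.zeros(shape).flatten()
    out[idx] = vals
    return out.view(shape)

# Example: a 1M-parameter update compressed to ~1% of its entries before upload.
delta = torch.randn(1000, 1000)
idx, vals = sparsify(delta)
restored = densify(idx, vals, delta.shape)
```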

The host drew a comparison to the growing movement of individuals self-hosting and self-training open-source AI — a trend popularized recently by creators experimenting with personalized models that compete with commercial offerings. The missing piece, he argued, has been scalable, decentralized training infrastructure. Subnet 3 represents the first credible proof that such infrastructure can work at scale.

Dynamic TAO: The Economic Engine

Perhaps the most transformative change to Bittensor’s internal economy has been the introduction of Dynamic TAO (dTAO), a mechanism that replaced the network’s original governance model with a market-based allocation system.

Under dTAO, every subnet has its own secondary token (called an “alpha” token) paired with TAO through a liquidity pool. New TAO emissions, minted every 12 seconds at a rate of 0.5 TAO per block, flow to subnets in proportion to a single metric: net inflow. The protocol measures how much TAO each subnet’s pool attracts and retains over time. Subnets that draw sustained investment receive more emissions. Those that bleed capital receive less.
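A toy illustration of that allocation rule is below, using the per-block figure quoted in the episode. The on-chain mechanism measures inflow and prices alpha tokens in a more sophisticated way; this only shows the proportionality Steeves described.

```python
BLOCK_EMISSION_TAO = 0.5  # per-block figure quoted in the episode

def split_emission(net_inflows: dict[str, float]) -> dict[str, float]:
    """Split one block's TAO emission across subnets in proportion to net inflow."""
    # Subnets bleeding capital (negative inflow) earn nothing this block.
    positive = {sn: max(0.0, flow) for sn, flow in net_inflows.items()}
    total = sum(positive.values())
    if total == 0:
        return {sn: 0.0 for sn in net_inflows}
    return {sn: BLOCK_EMISSION_TAO * flow / total for sn, flow in positive.items()}

# Example: a subnet attracting 60% of net inflow receives 60% of the block's emission.
print(split_emission({"subnet_3": 300.0, "subnet_9": 200.0, "subnet_42": -50.0}))
```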

The supply dynamics mirror Bitcoin by design: 21 million TAO total, no premine, halvings every four years. Each subnet token inherits the same structure — 21 million cap, no premine, its own halving schedule.
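For a sense of the arithmetic, here is a tiny sketch of a Bitcoin-style halving schedule under assumed parameters: the 12-second block time mentioned above and a halving interval keyed to block count purely for illustration (the chain's real schedule may be defined differently, e.g. by cumulative issuance).

```python
def block_reward(block_height: int,
                 initial_reward: float = 0.5,
                 blocks_per_halving: int = 10_500_000) -> float:
    # ~10.5M blocks at 12-second block times is roughly four years (assumed interval).
    halvings = block_height // blocks_per_halving
    return initial_reward / (2 ** halvings)

print(block_reward(0))            # 0.5 TAO per block in the first era
print(block_reward(10_500_000))   # 0.25 TAO per block after the first halving
```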

The effect on effective inflation has been dramatic. Because the majority of newly minted TAO flows directly into subnet liquidity pools and stays there — locked as liquidity rather than circulating — the real supply hitting the open market has compressed significantly. Steeves compared the dynamic to how the U.S. dollar derives value from the scale of the global economy it denominates. TAO derives value from the aggregate size and health of the digital commodity markets it underpins.

The Emergent Organism

Steeves was most animated when describing the second-order effects dTAO has produced — outcomes no one designed but that emerged naturally from the incentive structure.

The first was an explosion of outward-facing activity. Under the old oligarchic model, subnet teams focused their energy inward, lobbying validators for favorable scores. The moment their survival depended on attracting real capital from outside the network, behavior shifted overnight. Teams launched Twitter accounts, built marketing websites, shipped public APIs, and started doing press. “It’s like really a machine that naturally, just by the incentive system itself, goes and does marketing for you,” Steeves said.

The second emergent phenomenon was the rise of a new class of participant: the internal trader. dTAO spawned a sprawling trading community that continuously evaluates and reprices subnet tokens — functioning, in effect, as a decentralized capital allocation committee. He compared the dynamic to a hypothetical version of Google where every team had its own tradable token and no bureaucratic management layer. Instead of lobbying a manager for budget, each team would have to continuously prove its value to a swarm of hyper-engaged speculators showing up at their desk demanding answers.

“Be a fair warning to people that want to build a subnet,” Const cautioned. “A lot of them are going quite insane at how intense it is. You’re being attacked by these traders. You’re being attacked by the validators. You’re being attacked by the miners. Everything is just this weird black hole singularity.”

“Fiat AI” vs. the Ownership Layer

Toward the end of the conversation, Steeves and the host turned to a theme that has been gaining traction across both the AI and crypto communities: the growing inaccessibility of AI as an investment.

The host framed the dilemma starkly. AI is arguably the most important technological shift of the decade, yet meaningful exposure to the companies building it is nearly impossible for ordinary investors. OpenAI’s anticipated IPO carries a trillion-dollar price tag. The frontier labs have been funded through insider rounds long since closed. The average person’s only relationship to the AI revolution is as a paying customer.

Steeves agreed, and went further. Without ownership, he argued, the public has no control over the trajectory of the technology — no voice in how it is governed, no transparency into how it is built, and no share in the value it creates. “We’re just basically cattle that suckles off the teat of the OpenAI credits,” he said. The alternative Bittensor offers is radical: anyone can buy TAO on the open market and, through staking and subnet token selection, directly fund and participate in the development of AI systems whose code, economics, and governance are fully visible on-chain.

He drew a distinction between open source — which he views as necessary but insufficient — and what he called “openership,” the combination of open code with on-chain economic ownership. “We need to think about digital currencies as this new technology that allows more people to own, and see that ownership, in a way much more visceral than the standard systems we’ve built in the past.”

Watch the Full Episode on YouTube:
