Eyes on AI: Training AI Without Data Centres


The largest AI labs in the world are spending tens of billions of dollars on data centers that, on a long enough timeline, may turn out to be the wrong bet. That is the underlying argument Steffen Cruz, co-founder and CTO of Macrocosmos, makes in his recent appearance on β€œEye on AI” with Craig Smith.

The conversation moves from the structural limits of centralized GPU warehouses to the technical specifics of how Macrocosmos is training large language models on idle compute scattered across the world, all coordinated through Bittensor.

Highlights From the Conversation

The discussion covered a lot of ground, from the academic origins of distributed training to the commercial roadmap Macrocosmos is now executing.

The points worth pulling out are below:

a. Centralized AI Training Is Hitting An Economic Wall: Projects like Stargate and Colossus represent multi-billion-dollar GPU build-outs, and as appetite for larger models grows, the capital required is approaching the scale of national infrastructure projects.

Steffen compared it directly to the Large Hadron Collider, where eventually you need a sovereign budget just to ask the next question.

b. Distributed Training Flips the Entire Cost Model: Instead of stuffing a warehouse with GPUs in a single location, Macrocosmos coordinates compute across thousands of devices globally, tapping into pockets of cheap surplus energy in places like Iceland for short, interruptible bursts. 

The result is the ability to perform genuine cost arbitrage on the most expensive part of model creation.
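
To make the arbitrage concrete, here is a back-of-the-envelope sketch. Both hourly rates and the run size are hypothetical placeholders for illustration, not figures from the conversation:

```python
# Back-of-the-envelope cost arbitrage sketch. Both hourly rates and the
# run size are hypothetical placeholders, not numbers quoted by Macrocosmos.
GPU_HOURS = 1_000_000            # size of an illustrative training run
CENTRALIZED_RATE = 2.50          # $/GPU-hour at a conventional cloud
SURPLUS_ENERGY_RATE = 0.40       # $/GPU-hour on interruptible surplus power

centralized_cost = GPU_HOURS * CENTRALIZED_RATE
distributed_cost = GPU_HOURS * SURPLUS_ENERGY_RATE

print(f"Centralized: ${centralized_cost:,.0f}")   # Centralized: $2,500,000
print(f"Distributed: ${distributed_cost:,.0f}")   # Distributed: $400,000
print(f"Cost ratio:  {distributed_cost / centralized_cost:.0%}")  # 16%
```

Under these made-up rates the distributed run lands at 16% of the centralized price, which is the kind of ratio the roadmap targets discussed later (10–20% of centralized costs) imply.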

c. Bittensor is Structurally Different from Other AI Blockchain Projects: Steffen described it as over 100 projects under a trench coat, with 128 different teams currently building distinct services on top of the network. The blockchain itself does not solve a single narrow problem but provides the base layer for coordination, reward distribution, and synchronization.

d. The Blockchain’s Role is More Disciplined than Most People Assume: Macrocosmos uses the chain as an immutable registry, an authorization layer, and a synchronization clock that anchors work to a shared drumbeat. 

The actual training compute lives off-chain, and the chain only handles what genuinely requires its trust guarantees.
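
A toy sketch of what that division of labor can look like: the chain supplies a shared clock via block height, while the heavy work stays off-chain and only a small commitment would be recorded. Every name and constant below is invented for illustration; this is not Macrocosmos's actual client code.

```python
# Toy sketch: block height as a synchronization clock for off-chain
# training rounds. All names and constants are illustrative assumptions.
import hashlib
import time

BLOCKS_PER_ROUND = 100          # hypothetical cadence: one round per 100 blocks
GENESIS_TIME = 1_600_000_000    # hypothetical chain start (unix seconds)
BLOCK_TIME = 12                 # hypothetical seconds per block

def block_height() -> int:
    # Stand-in for a chain query; a real node would read the live chain.
    return int(time.time() - GENESIS_TIME) // BLOCK_TIME

def train_round(round_id: int) -> bytes:
    # Stand-in for the off-chain training work done between sync points.
    return hashlib.sha256(f"round-{round_id}".encode()).digest()

# Every participant derives the current round from block height alone,
# so the chain acts as the shared drumbeat and no central scheduler exists.
current_round = block_height() // BLOCKS_PER_ROUND
commitment = train_round(current_round)

# Only this small commitment would be written on-chain; the heavy
# compute stays off-chain, as described above.
print(f"round {current_round}: commit {commitment.hex()[:16]}")
```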

e. IOTA is the Technology Making It All Work: The β€œIncentivized Orchestrated Training Architecture” coordinates compute nodes scattered across the world so they behave like a single supercomputer, using model parallelism so each device only carries a small slice of the model rather than a full copy.

This is what makes it possible to train frontier-scale models out of consumer devices like Mac Minis and personal GPUs.
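
Here is a minimal local sketch of the general layer-wise (model-parallel) split described above, in PyTorch. In the real network each slice would live on a different machine with activations crossing the network between them; the three-stage split and layer sizes are assumptions for illustration, not IOTA's actual sharding scheme.

```python
# Minimal sketch of layer-wise model parallelism, run locally for
# illustration. The 3-stage split and sizes are assumptions, not
# Macrocosmos's actual sharding scheme.
import torch
import torch.nn as nn

stages = [
    nn.Sequential(nn.Linear(512, 512), nn.ReLU()),  # node A's slice
    nn.Sequential(nn.Linear(512, 512), nn.ReLU()),  # node B's slice
    nn.Linear(512, 10),                             # node C's slice
]

def forward(x: torch.Tensor) -> torch.Tensor:
    # Activations, not weights, move between participants: each node
    # only ever holds and updates its own small slice of the model.
    for stage in stages:
        x = stage(x)
    return x

print(forward(torch.randn(4, 512)).shape)  # torch.Size([4, 10])
```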

f. The Compute Supply Side is Already in Place: Macrocosmos has been consistently oversubscribed by Bittensor participants offering their hardware to the network, and the recent macOS app launch drew 2,500 downloads in its first two weeks.

The strategy now extends to neoclouds and hyperscalers, who can plug their underutilized GPU capacity into the network to lift overall margins instead of renting it out at cents on the dollar for inference.

g. The Demand Side is Targeted at Researchers, Startups, and Enterprises that Have Been Priced Out: Macrocosmos is building interfaces that look familiar to anyone using PyTorch or TensorFlow, removing the cognitive overhead of working with a distributed network so users can simply specify their training objective without worrying about which nodes are running where.
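
To make the intended developer experience concrete, here is a purely hypothetical sketch of what such an interface could look like. None of these names exist in any real Macrocosmos SDK; the point is only the ergonomics: the user states the objective, and node selection stays invisible.

```python
# Purely hypothetical sketch of a PyTorch-flavored client for a
# distributed training network. Every name here is invented; this is
# not Macrocosmos's actual API, only an illustration of the ergonomics.
from dataclasses import dataclass

@dataclass
class TrainingJob:
    model_config: dict
    dataset: str
    objective: str

def submit(job: TrainingJob) -> str:
    # In the imagined interface, node selection, sharding, and fault
    # handling all happen behind this call; the user never sees them.
    print(f"submitting {job.objective} job on {job.dataset}")
    return "job-0001"

job_id = submit(TrainingJob(
    model_config={"layers": 32, "hidden": 4096},
    dataset="my-corpus",
    objective="causal-lm",
))
print(job_id)
```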

h. β€œTrain at Home” Turns Idle Consumer Devices into Passive Income: As people increasingly buy dedicated machines like Mac Minis to run personal AI agents, those machines sit underutilized for the bulk of the day.

Macrocosmos lets owners plug them into the network and earn during downtime, with a one-click app, sleep-hour scheduling, and predictable payouts.
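
The sleep-hour scheduling reduces to a simple time-window check, sketched below. The window and the behavior are hypothetical; the actual app's logic is not documented in the conversation.

```python
# Toy sketch of sleep-hour scheduling for a "Train at Home" style client.
# The window and behavior are assumptions, not the actual app's logic.
from datetime import datetime

SLEEP_START, SLEEP_END = 23, 7   # contribute between 11pm and 7am local time

def in_sleep_window(now: datetime) -> bool:
    # The window wraps past midnight, so check both sides of it.
    return now.hour >= SLEEP_START or now.hour < SLEEP_END

if in_sleep_window(datetime.now()):
    print("machine idle overnight: joining the training network")
else:
    print("daytime: leaving the machine to its owner")
```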

i. The Roadmap is Ambitious but Specific: By mid-2026, Macrocosmos is targeting 5,000 active compute nodes and 70-billion-parameter models trained at 10–20% of centralized costs.

By the end of 2027, the goal extends to models above 100 billion parameters, with sovereign and enterprise model training as the primary commercial markets.

j. The Technology is Not Limited to AI: Steffen was clear that the deeper problem Macrocosmos is solving is how to turn unreliable, churning global compute into a persistent and stable fabric. 

Once that exists, the same infrastructure can support university HPC (high-performance computing) workloads, bioinformatics, physics simulations, and any computationally expensive scientific problem that needs short bursts of distributed capacity.

k. β€œData Universe” Handles the Data Side of The Same Flywheel: Macrocosmos already operates a separate Bittensor subnet dedicated to web-scale data scraping, providing the training data that complements the IOTA compute layer and creating a complete decentralized AI pipeline within a single ecosystem.

The Broader Shift Worth Watching

What makes this conversation worth taking seriously is the specificity behind the claims. Steffen is not pitching a vision document. He is describing a system that has already moved out of research mode, has supply lined up on both the consumer and enterprise side, and has a concrete roadmap toward training models at scales that would currently require nine-figure capital expenditures inside traditional labs. 

Whether Macrocosmos becomes the dominant decentralized training network or simply one of the projects that proves it can be done, the broader shift the team is building toward, from sovereign-scale data center bets to distributed compute fabrics, is a transition that almost every part of the AI industry will eventually have to engage with.

Bittensor ($TAO) is where that engagement is happening earliest, and IOTA is one of the cleanest expressions of why it matters.
