
SUMMARY: On TGIF #20, Covenant Labs announced Templar’s latest evolution in decentralized training, moving toward a “unified training” paradigm that combines data and model parallelism.
By integrating its state-of-the-art SparseLoCO algorithm with new model-sharding techniques, Templar can now harness the internet’s “long tail” of compute, allowing consumer-grade GPUs and even MacBooks to contribute to massive 70B+ parameter pre-training runs (a toy sketch of the communication pattern follows below).
This breakthrough effectively commoditizes intelligence as a public good, positioning decentralized training not just as a cost-saving tool but as a high-scale alternative unconstrained by the physical limits of centralized data centers.
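
The talk didn’t walk through Templar’s actual stack, but the core pattern it alludes to — data-parallel workers that each train a model shard locally and exchange only sparse, infrequent updates — can be illustrated with a toy sketch. Everything below is an illustrative assumption, not Templar’s code: the shapes, the sync interval `H`, the top-k budget `K`, and the simulated gradients are all placeholders.

```python
import torch

# Toy sketch (assumed, not Templar's implementation): W data-parallel
# workers each hold a copy of one model shard, run H cheap local steps
# with no communication, then ship only the top-K entries of their
# accumulated update. Sparse, infrequent sync is what lets slow
# consumer links participate in a large pre-training run.

W, DIM, H, K, OUTER = 4, 1024, 10, 64, 5

def top_k(delta: torch.Tensor, k: int) -> torch.Tensor:
    """Zero all but the k largest-magnitude entries (sparse pseudo-gradient)."""
    out = torch.zeros_like(delta)
    idx = delta.abs().topk(k).indices
    out[idx] = delta[idx]
    return out

global_shard = torch.randn(DIM)              # one shard of the full model
for _ in range(OUTER):                       # outer (communication) rounds
    updates = []
    for w in range(W):                       # each data-parallel worker
        local = global_shard.clone()
        for _ in range(H):                   # H local steps, zero comms
            grad = 0.01 * torch.randn(DIM)   # stand-in for a real gradient
            local -= grad
        # Communicate only a sparse slice of the accumulated delta.
        updates.append(top_k(local - global_shard, K))
    # Server (or all-reduce) averages the sparse updates and applies them.
    global_shard += torch.stack(updates).mean(dim=0)
```

In this toy, each worker sends K values every H steps instead of DIM values every step, cutting bandwidth by roughly a factor of (DIM/K)·H versus fully synchronous data parallelism — the kind of reduction that makes consumer-grade connections viable contributors.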
By: Covenant Labs
