
Eclair is Subnet 28 on Bittensor, focused on image-to-video (I2V) generation and evaluation. It positions itself as an open AI video project, often described as “Bittensor for AI generation,” with the goal of creating markets for intelligence rather than monopolies.
Who Built It
Eclair was reportedly built by Const and Fish, and it leverages Subnet 75 (Hippius) for decentralized storage. When asked if he’s involved in the project, Const said he is only helping the team and that SN28 isn’t his subnet.

How Eclair Works
Miners
Miners run image-to-video generators and commit their generator chute slug on-chain.
They receive inputs such as:
- Prompt
- Base64-encoded image used as the first frame
- FPS, number of frames, and resolution
- A fast flag
The output is a generated MP4 video.
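The challenge payload described above can be sketched as a simple data structure. This is a minimal illustration in Python; the field names and types are assumptions for clarity, not the subnet's actual wire format.

```python
from dataclasses import dataclass

# Hypothetical shape of a miner challenge; names are illustrative,
# not taken from the Eclair codebase.
@dataclass
class I2VChallenge:
    prompt: str           # textual description of the target video
    first_frame_b64: str  # base64-encoded image used as the first frame
    fps: int              # frames per second of the requested output
    num_frames: int       # total number of frames to generate
    width: int            # output resolution
    height: int
    fast: bool            # trade quality for latency when True

    def duration_seconds(self) -> float:
        """Length of the requested clip, derived from frame count and FPS."""
        return self.num_frames / self.fps

challenge = I2VChallenge(
    prompt="a cat jumping onto a table",
    first_frame_b64="<base64 image>",
    fps=24,
    num_frames=48,
    width=512,
    height=512,
    fast=False,
)
print(challenge.duration_seconds())  # 2.0
```

The miner would feed the prompt and decoded first frame into its I2V generator and return an MP4 of the requested duration and resolution.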
Validators
Validators sample real video clips, extract the first frame, and generate a textual description using GPT-4o.
Miners are then challenged to recreate the video from that information. All samples are stored in Hippius S3 buckets.
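The frame-extraction step above can be sketched as follows. This is a hedged illustration, not the validator's actual tooling: the ffmpeg invocation and helper names are assumptions, and the GPT-4o description and Hippius upload steps are omitted.

```python
import base64
import shlex

def first_frame_cmd(clip_path: str, out_path: str) -> list[str]:
    """Build an ffmpeg command that extracts the first frame of a clip.
    (Command shape is a sketch; the real validator may invoke ffmpeg differently.)"""
    return ["ffmpeg", "-y", "-i", clip_path, "-frames:v", "1", out_path]

def encode_frame(frame_bytes: bytes) -> str:
    """Base64-encode the extracted frame for inclusion in the miner challenge."""
    return base64.b64encode(frame_bytes).decode("ascii")

# Extract frame 0 from a sampled clip, then encode it for the challenge payload.
cmd = shlex.join(first_frame_cmd("sample.mp4", "frame0.png"))
encoded = encode_frame(b"\x89PNG")  # stand-in bytes; a real frame would be read from disk
print(cmd)
print(encoded)
```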
Scoring
Validation relies on forced-choice comparisons using GPT-4o, where the model decides which video looks more real: the original or the miner’s output. Scoring follows a winner-take-all system, with an epsilon beat rule determining weight assignments.
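One common way an epsilon beat rule combines with winner-take-all weighting is that a challenger only displaces the current winner if it beats that winner's score by at least epsilon. The sketch below illustrates this mechanic under those assumptions; the function, parameter names, and epsilon value are illustrative, not taken from Eclair's implementation.

```python
def assign_weights(
    scores: dict[str, float], incumbent: str, epsilon: float = 0.05
) -> dict[str, float]:
    """Winner-take-all with an epsilon beat rule (hypothetical sketch):
    the incumbent keeps full weight unless another miner's score exceeds
    the incumbent's by at least `epsilon`."""
    best = max(scores, key=lambda uid: scores[uid])
    winner = best if scores[best] >= scores[incumbent] + epsilon else incumbent
    return {uid: (1.0 if uid == winner else 0.0) for uid in scores}

# A 3% edge is inside epsilon, so the incumbent keeps the full weight.
print(assign_weights({"miner_a": 0.51, "miner_b": 0.48}, incumbent="miner_b"))
# A clear win (>= epsilon margin) flips the weight to the challenger.
print(assign_weights({"miner_a": 0.60, "miner_b": 0.48}, incumbent="miner_b"))
```

The epsilon margin damps churn: a marginally better miner cannot capture all emissions on noise alone, which matters when the judge is a stochastic model like GPT-4o.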
Technical Snapshot
- Languages: Mostly TypeScript (74.4%) and Python (21.8%)
- Requirements: Python 3.12+, ffmpeg, OpenAI API key (GPT-4o), Chutes API key, and a Hippius subaccount seed phrase
- License: MIT
Links
- Website: https://www.eclair.earth
- GitHub: https://github.com/unconst/eclair
Closing
Eclair is shaping up as a serious attempt to standardize and evaluate open image-to-video generation inside the Bittensor ecosystem. It combines market incentives, open-source tooling, and real-world video benchmarks in a way that fits Bittensor’s broader vision.
We will share more details as the subnet evolves and more data becomes available.