Why BitMind Could Explode 1000% From Here

Written by: School of Crypto

A couple of years ago, 14-year-old Elliston Berry of Texas and 15-year-old Francesca Mani of New Jersey were ordinary high school girls living regular lives. Then AI-generated pornographic images of them spread through their student bodies and onto social media. The embarrassment and ridicule these girls faced were indescribable. The deepfakes stained their reputations and likely scarred them for life.

According to BitMind founder Ken Jon Miyachi, “In 2023, we had around 500,000 deepfakes circulating online. Now, we’re looking at 8 million in 2025”. Deepfake images and videos have surged in recent years, and no one has found an effective way to combat them.

Enter BitMind.

BitMind is a subnet project on Bittensor that trains models to assess whether an image or video is real or AI-generated.

BitMind showed its deepfake-detection prowess during Israel's strikes on Iran's nuclear facilities.

On June 14th, a TikTok account posted an AI-generated video purporting to show Iranian strikes damaging Tel Aviv's Ben Gurion Airport. For nearly 8-10 hours, and across a million views, people believed the video was real and formed opinions around it. BitMind was the first to flag the video as fake. Community Notes and even Grok were late to the party before finally labeling it a deepfake as well.

Luckily, the “Take It Down” Act, led by Texas Senator Ted Cruz, was swiftly passed through Congress to combat these vicious and inhumane deepfake images and videos. As the cases of Elliston and Francesca show, deepfakes are not confined to international conflicts and wars; they also reach into schools and harm vulnerable kids.

While legislation is a great first step towards combating deepfakes, how exactly do you know whether a given image or video is real? Until now, there hasn't been a service or technology as robust as BitMind's, and its detection mechanism is both novel and effective.

Essentially, BitMind runs a dual mining mechanism in which two sets of miners constantly train against and battle each other to determine which images and videos are real and which are fake.

The systematic approach is called a Generative Adversarial Network (GAN). This approach pits “generators” against “detectors” (discriminators). The generators' role is to publish convincing fake images and videos; the detectors' job, in turn, is to sharpen their ability to identify synthetic data. This adversarial process lets BitMind constantly and dynamically train its models and improve deepfake detection.
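To make the adversarial dynamic concrete, here is a minimal, generic sketch of GAN-style training in PyTorch. The toy networks and random "media" stand in for BitMind's real image and video models; none of the names or numbers below come from BitMind's actual code.

```python
# Minimal GAN loop: a generator learns to produce convincing fakes while a
# detector learns to tell them apart from real samples. Illustrative only.
import torch
import torch.nn as nn

DIM = 64  # toy feature size standing in for image/video data

generator = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, DIM))
detector = nn.Sequential(nn.Linear(DIM, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):
    real = torch.randn(32, DIM) + 2.0     # stand-in for real media
    fake = generator(torch.randn(32, 16)) # synthetic media

    # Detector update: learn to label real as 1 and fake as 0.
    d_loss = loss_fn(detector(real), torch.ones(32, 1)) + \
             loss_fn(detector(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: learn to make the detector call fakes real.
    g_loss = loss_fn(detector(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Each side's improvement forces the other to improve, which is exactly the dynamic BitMind harnesses at subnet scale.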

On Bittensor, miners perform work to solve problems and get rewarded by the subnet's validators. Two sets of miners operate within this GAN-style infrastructure. Discriminative miners are tasked with developing an ONNX model (Open Neural Network Exchange, an open format for machine learning models) that discerns whether an image or video is AI-generated or real; validators reward the miners with the most accurate results. In turn, Generative miners create synthetic images and videos from validator prompts to challenge the Discriminative miners' models. Generative miners are rewarded based on how convincing their media is, their response time, and the consistency of their uptime.
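To illustrate how a validator might turn those criteria into scores, here is a hypothetical sketch. The function names, weights, and the 30-second response budget are assumptions made for illustration, not BitMind's actual incentive code.

```python
# Hypothetical validator-side scoring for the two miner classes described
# above. All weights and thresholds are invented for this sketch.

def score_discriminative_miner(predictions, ground_truth):
    """Reward accuracy: fraction of media items labeled correctly."""
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

def score_generative_miner(fool_rate, response_secs, uptime_ratio,
                           w_fool=0.6, w_speed=0.2, w_uptime=0.2):
    """Blend how convincing the fakes are, how fast the miner responds,
    and how consistently it stays online (weights are illustrative)."""
    speed = max(0.0, 1.0 - response_secs / 30.0)  # assume a 30s budget
    return w_fool * fool_rate + w_speed * speed + w_uptime * uptime_ratio

# Example: fakes fool detectors 40% of the time, answered in 6 seconds,
# with 99% uptime.
print(score_generative_miner(0.40, 6.0, 0.99))  # ≈ 0.60
```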

As a user, utilizing BitMind is as easy as 1-2-3: hop onto the AI or Not app, drag and drop an image or video, and it will tell you whether the content is real or fake. You can also tweet at @Bitmindbot on X and receive an evaluation almost instantly.

See the full guide here.
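For readers who would rather check media programmatically than drag and drop, a call might look roughly like the sketch below. The endpoint URL, payload, and response fields are placeholders assumed for illustration; consult BitMind's own documentation for the real API.

```python
# Hypothetical programmatic deepfake check. The URL and response schema are
# placeholders, not BitMind's documented API.
import requests

def check_image(image_path: str, api_key: str) -> dict:
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.bitmind.example/v1/detect",  # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"is_ai_generated": true, "confidence": 0.93}

# result = check_image("suspicious.jpg", "YOUR_API_KEY")
```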

Why will BitMind lead the deepfake-detection space? According to an article on Crowdfund Insider, Miyachi claims BitMind's detection software has an 88% accuracy rate, far surpassing the 69% success rate of traditional tools. Statistically speaking, BitMind is already ahead of the competition. Miyachi believes a deepfake-detection solution can only truly be built in the decentralized domain, specifically on Bittensor. He notes that Google's SynthID can only detect whether content was generated by Google's own models. A colleague at OpenAI even told Miyachi he thought it wasn't possible to build a deepfake-detection service robust and scalable enough to make a true impact.

Another reason BitMind has an edge in this niche is that it is decentralized, with no single point of failure. In an episode of Novelty Search, Miyachi asked how anyone can trust OpenAI, a for-profit company with its own prerogatives and interests, to police deepfakes. And how can one country trust another to decide truthfully what is real and what is fake?

The BitMind team also has a plan to reward its alpha subnet token holders. The initial idea is to use the revenue the project generates to buy back alpha subnet tokens, then distribute those bought-back tokens to contributors who correctly label images and videos in the BitMind app. The result is a buyback flywheel that applies buy pressure to the market while rewarding the contributors who make the detection product better.
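As a rough illustration of how such a flywheel could split bought-back tokens among labelers, consider the toy calculation below. Every number in it is made up; BitMind has not published these parameters.

```python
# Toy buyback-and-distribute flywheel: revenue buys back alpha tokens, which
# are split among labelers in proportion to their accepted labels.
revenue_usd = 10_000.0
alpha_price = 0.25                        # hypothetical token price
bought_back = revenue_usd / alpha_price   # 40,000 alpha repurchased

labels = {"alice": 500, "bob": 300, "carol": 200}  # accepted labels per user
total = sum(labels.values())

payouts = {user: bought_back * n / total for user, n in labels.items()}
print(payouts)  # {'alice': 20000.0, 'bob': 12000.0, 'carol': 8000.0}
```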

BitMind is well positioned to build this product for billions of users who need to know what is real and what is fake. With millions of developers worldwide continually training its models, it can become a detector of truth and lies at scale. BitMind could be the moat against fake news and illicit posts, which could save humanity as we know it.
