How can you trust anything you see online in this new AI-driven world?
By some estimates, roughly half of all internet traffic is already generated by bots, not humans. Add generative AI churning out voices, videos, and articles, and it’s getting harder than ever to tell what’s real.
I see two serious issues that this brings:
- Trust: It’s becoming impossible to tell if something was made by a human or AI.
- Misinformation: [Deepfakes](https://www.bbc.co.uk/newsround/69009887) and AI-generated content make it easier than ever to spread lies.
So, what’s the fix?
For one, as individuals we need to stop blindly trusting what we see online and start verifying it. In the blockchain world, one of our mantras is: “Don’t trust. Verify.”
But how do we do that in a world flooded with AI content?
Ideas like “proof of humanity” and “proof of authenticity” could give people a way to verify information without having to trust the source. Blockchain, built around verification instead of trust, is well positioned to play a big role here.
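To make the “verify, don’t trust” idea concrete, here’s a minimal sketch of content-integrity checking with a cryptographic hash. This is a toy illustration only, not how any of the projects below actually work: real proof-of-authenticity systems go further, using digital signatures and often anchoring fingerprints on a blockchain. All names in the snippet are made up for the example.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return the SHA-256 hex digest of some content."""
    return hashlib.sha256(content).hexdigest()

# An author publishes an article together with its fingerprint.
article = b"AI will change how we verify information online."
published_fingerprint = fingerprint(article)

# A reader re-computes the fingerprint locally and compares it,
# instead of trusting that the copy they received is unaltered.
received_copy = b"AI will change how we verify information online."
tampered_copy = b"AI will NOT change how we verify information online."

print(fingerprint(received_copy) == published_fingerprint)  # intact copy
print(fingerprint(tampered_copy) == published_fingerprint)  # altered copy
```

The point is the shift in posture: rather than asking “do I trust whoever sent me this?”, the reader checks the content against something verifiable. A hash alone only proves integrity, not who made it; proving human authorship is the harder problem the projects below are tackling.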
There are already some projects working on solving this problem:
- World: Building a “proof of personhood” mechanism to verify individuals. https://world.org/blog/engineering/humanness-in-the-age-of-ai
- Kleros: Developing the Proof of Humanity protocol to tackle these challenges from a different angle. https://docs.kleros.io/products/proof-of-humanity
Both are exploring interesting solutions, and I’m keeping a close eye on how they develop. Let’s see how all of this unfolds.