Generative AI is extremely good at generating fake photos, fake letters, fake bills, fake conversations — fake everything. Near co-founder Illia Polosukhin warns that soon, we won’t know which content to trust.
“If we don’t solve this reputation and authentication of content (problem), shit will get really weird,” Polosukhin explains. “You’ll get phone calls, and you’ll think this is from somebody you know, but it’s not.”
“All the images you see, all the content, the books will be (suspect). Imagine a history book that kids are studying, and literally every kid has seen a different textbook — and it’s trying to affect them in a specific way.”
Blockchain can be used to transparently trace the provenance of online content so that users can distinguish genuine content from AI-generated fakes. But it won’t sort out truth from lies.
“That’s the wrong take on the problem because people write not-true stuff all the time. It’s more a question of when you see something, is it by the person that it says it is?” Polosukhin says.
“And that’s where reputation systems come in: OK, this content comes from that author; can we trust what that author says?”
“So, cryptography becomes an instrument to ensure consistency and traceability and then you need reputation around this cryptography — on-chain accounts and record keeping to actually ensure that ‘X posted this’ and ‘X is working for Cointelegraph right now.’”
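The idea Polosukhin describes — a content hash stamped into an on-chain record tied to an author’s account — can be sketched in a few lines. The snippet below is a toy illustration only: a plain dictionary stands in for the blockchain registry, and the names (`register`, `verify_author`, the author ID string) are hypothetical, not from NEAR or any real protocol.

```python
import hashlib

# Toy registry standing in for an on-chain record: content hash -> author ID.
registry: dict[str, str] = {}

def fingerprint(content: str) -> str:
    """Deterministic SHA-256 fingerprint of the exact content bytes."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def register(content: str, author: str) -> str:
    """Author stamps the content's hash into the (append-only) registry."""
    digest = fingerprint(content)
    # First writer wins, mimicking a timestamped chain entry.
    registry.setdefault(digest, author)
    return digest

def verify_author(content: str):
    """Anyone can later check who registered this exact content.

    Returns None if the content was never registered or has been altered,
    since any change to the bytes changes the fingerprint."""
    return registry.get(fingerprint(content))

article = "Polosukhin: provenance needs reputation systems."
register(article, "cointelegraph:author-x")
```

Verification then answers only the question Polosukhin poses — “is it by the person it says it is?” — not whether the content is true; a reputation layer would sit on top of the registered author IDs.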
If it’s such a great idea, why isn’t anyone doing it already?
There are a variety of existing supply chain projects that use blockchain to prove the provenance of goods in the real world, including VeChain and OriginTrail.
However, content-based provenance has yet to take off. The Trive News project aimed to crowdsource article verification via blockchain, and the Po.et project stamped a transparent history of content on the blockchain, but both are now defunct.
More recently, Fact Protocol…