Every day this week we’re highlighting one genuine, no-bullsh*t, hype-free use case for AI in crypto. Today it’s the potential of AI for smart contract auditing and cybersecurity: we’re so near and yet so far.
One of the big future use cases for AI in crypto is auditing smart contracts and identifying cybersecurity holes. There’s only one problem: at the moment, GPT-4 sucks at it.
Coinbase tested ChatGPT’s capabilities for automated token security reviews earlier this year and found that in 25% of cases, it wrongly classified high-risk tokens as low-risk.
James Edwards, the lead maintainer for cybersecurity investigator Librehash, believes OpenAI isn’t keen on having the bot used for tasks like this.
“I strongly believe that OpenAI has quietly nerfed some of the bot’s capabilities when it comes to smart contracts for the sake of not having folks rely on their bot explicitly to draw up a deployable smart contract,” he says, explaining that OpenAI likely doesn’t want to be held responsible for any vulnerabilities or exploits.
This isn’t to say AI has zero capabilities when it comes to smart contracts. AI Eye spoke with Melbourne digital artist Rhett Mankind back in May. He knew nothing at all about creating smart contracts, but through trial and error and numerous rewrites, was able to get ChatGPT to create a memecoin called Turbo that went on to hit a $100 million market cap.
But as CertiK Chief Security Officer Kang Li points out, while you might get something working with ChatGPT’s help, the result is likely to be full of logical code bugs and potential exploits:
“You write something and ChatGPT helps you build it but because of all these design flaws it may fail miserably when attackers start coming.”
So it’s definitely not good enough for solo smart contract auditing, in which a tiny mistake can see a project drained of tens of millions, though Li says it can be “a helpful tool for people doing code analysis.”
Richard Ma from blockchain security firm Quantstamp…