AI-generated smart contracts may contain vulnerabilities and could potentially ‘collapse’ under attack, according to CertiK.


Artificial intelligence tools like OpenAI’s ChatGPT may introduce more issues, bugs, and vulnerabilities if used to write and develop cryptocurrency projects, according to an executive from blockchain security firm CertiK.

Kang Li, CertiK’s chief security officer, conveyed to Cointelegraph during Korean Blockchain Week on Sept. 5 that ChatGPT is unable to detect logical coding errors in the same manner as seasoned developers.

Li indicated that ChatGPT might generate more bugs than it resolves, which could be detrimental for novice or inexperienced programmers attempting to create their own projects.

“ChatGPT will allow many individuals without the necessary training to get involved; they can begin immediately, and I start to worry about design issues that may be hidden within.”

“You create something, and ChatGPT assists in its development, but due to various design flaws, it may fail significantly when faced with attacks,” he added.

Instead, Li advocates for using ChatGPT as a tool for engineers, as it excels at clarifying the meaning of specific lines of code.

“I believe ChatGPT is an excellent resource for those engaged in code analysis and reverse engineering. It serves as a valuable assistant and will greatly enhance our efficiency.”

The audience at Korean Blockchain Week attending a keynote. Source: Andrew Fenton/Cointelegraph

He emphasized that it should not be depended upon for coding—particularly by inexperienced developers aiming to create something profitable.

Li said he would stand by this assessment for at least the next two to three years, while acknowledging that rapid advances in AI could significantly improve ChatGPT’s capabilities.

AI is improving at social engineering attacks

In the meantime, Richard Ma, the co-founder and CEO of security firm Quantstamp, informed Cointelegraph at KBW on Sept. 4 that AI tools are increasingly effective at executing social engineering attacks—many of which closely resemble those conducted by humans.

Ma noted that Quantstamp’s clients are reporting a concerning rise in increasingly sophisticated social engineering efforts.

“[With] the latest incidents, it appears that individuals have been utilizing machine learning to compose emails and messages. They are significantly more persuasive than the social engineering attempts from a few years back.”

While regular internet users have faced AI-generated spam emails for several years, Ma believes we are nearing a stage where it will be difficult to discern whether harmful messages are generated by AI or humans.

Related: Twitter Hack: ‘Social Engineering Attack’ on Employee Admin Panels

“It’s going to become increasingly challenging to differentiate between messages from humans and those from convincingly crafted AI,” he stated.

Individuals in the crypto sector are already being targeted, with some being impersonated by AI bots. Ma anticipates that the situation will worsen.

“In the crypto space, there are numerous databases containing contact information for key individuals from various projects. Thus, hackers have access to that information and can utilize AI to message people in diverse ways.”

“It’s quite difficult to train an entire organization to ignore such communications,” Ma added.

Ma mentioned that improved anti-phishing software is being introduced to the market, which can assist companies in reducing the risk of potential attacks.

Magazine: AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4