Despite being only a beta release, ChatGPT is already demonstrating impressive capabilities to the entire spectrum of online communities, including underground forums, where experienced cybercriminals have shown how the AI can greatly simplify the creation of functional malware.
While New York schools have decided to ban the use of ChatGPT on their networks and devices, the criminal Internet underground is exploring how the new technology can speed up malware creation. The machine-learning-based chatbot was designed to interact conversationally, answer follow-up questions, and admit its mistakes, and OpenAI's researchers have apparently done such a good job that the service can even write working code with just a few tweaks here and there.
The security company Check Point recently scanned cybercriminal forums in search of fragments of malicious code created with ChatGPT. They found what they were looking for: ChatGPT is apparently being used both as an "educational" tool and as an outright platform for creating malware.
Thanks to the OpenAI chatbot, users of the underground hacker forum analyzed by Check Point were able to create a Python-based stealer that searches for common file types, copies them to the Temp folder, archives them, and uploads them to a hard-coded FTP server. Further analysis confirmed that the malicious code actually worked.
The second sample, created by the same user, was a Java-based code snippet capable of downloading an SSH/Telnet client (PuTTY) and then covertly running it on the system via PowerShell, functionality that could be modified to download and run any program. Other, less capable "attackers" used ChatGPT to create an encryption tool that let them easily generate cryptographic keys, encrypt files, compare hashes, and more.
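The encryption-tool functionality mentioned above (key generation, file hashing, hash comparison) is, by itself, standard cryptography that any developer might write. As a benign illustration of what such boilerplate looks like, here is a minimal Python sketch using only the standard library; the function names are hypothetical and this is not Check Point's recovered sample:

```python
import hashlib
import hmac
import secrets


def generate_key(length: int = 32) -> bytes:
    """Generate a random symmetric key (32 bytes = 256 bits by default)."""
    return secrets.token_bytes(length)


def file_digest(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def digests_match(a: str, b: str) -> bool:
    """Compare two digests in constant time to avoid timing side channels."""
    return hmac.compare_digest(a, b)
```

Actual file encryption is deliberately omitted here; in practice it would rely on a vetted library rather than hand-rolled primitives. The point of Check Point's finding is not that this code is novel, but that ChatGPT hands it, correctly assembled, to people who could not write it themselves.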
Check Point warned that ChatGPT can even be used to "facilitate fraudulent activities," since the service can also generate Dark Web marketplace scripts that use third-party APIs to "get current cryptocurrency (Monero, Bitcoin, and Ethereum) prices as part of the Dark Web market payment system."
Earlier, Check Point had tried to automate an entire infection flow using a phishing email and malicious Excel VBA code. In addition, the researchers used Codex (another AI-based code-generation system) to produce other kinds of complex, potentially malicious code fragments.
As for ChatGPT, the researchers say it is too early to tell whether the chatbot will become "the new favorite tool for Dark Web participants." However, the underground community has already shown considerable interest in "joining this latest trend of generating malicious code." ChatGPT does include safeguards intended to prevent abuse, but malware authors and script kiddies alike have shown that they can easily bypass them.