As technology continues to evolve, there is growing concern about the potential for large language models (LLMs), like ChatGPT, to be used for criminal purposes. In this blog we will discuss two such LLM engines that were recently made available on underground forums: WormGPT and FraudGPT.
If criminals were to possess their own ChatGPT-like tool, the implications for cybersecurity, social engineering, and overall digital safety could be significant. This prospect highlights the importance of staying vigilant in our efforts to secure, and responsibly develop, artificial intelligence technology in order to mitigate potential risks and safeguard against misuse.
The underground community has shown great interest in LLMs, so malicious LLM products were bound to appear. An unknown developer going by the name last/laste has created their own analog of the ChatGPT chatbot, intended to help cybercriminals: WormGPT.
WormGPT was born in March 2021, but it wasn't until June that the developer started selling access to the platform on a popular hacker forum. Unlike mainstream LLMs such as ChatGPT, the hacker chatbot is devoid of any restrictions preventing it from answering questions about illegal activity. The chatbot was built on the relatively outdated open-source GPT-J large language model from 2021 and trained on materials related to malware development, which is how WormGPT was born. The developer priced access to WormGPT at €60–€100 per month or €550 per year.
The advertisement below was posted on Hack Forums, which targets an English-speaking audience.
The author posted screenshots illustrating WormGPT's blackhat capabilities, showing how it could be used to write malware.
Figure 2. WormGPT writes malware in Python according to malicious requirements
Another well-known sample, published by SlashNext and picked up by many news outlets, demonstrates WormGPT's ability to write a convincing phishing email impersonating a company CEO.
Meanwhile, the Exploit forum displayed another advertisement from last/laste on July 14, 2023. The first forum is one of the most famous English-speaking forums, while the second is the most popular in the Russian cybercrime community.
Figure 4. WormGPT advertisement in one of the Russian-speaking underground forums.
The posts are in English, which is unusual for a Russian-speaking forum and suggests that WormGPT's roots lie with English-speaking developers.
The seller offers a newer version, WormGPT v2, for €550 annually, and a private build for €5,000 which includes access to WormGPT v2. The author insists that WormGPT v2 is a more advanced version, with improved privacy, better formatting, and the ability to switch models, and that it will be accessible to yearly subscribers only.
The post also includes an illustration to prove its point:
The illustrations are of little use for further analysis, since we do not have access to any of the WormGPT platforms to compare against. Based on the forum thread alone, we cannot rule out that the offering is fake.
Another malicious LLM came into play later in July 2023. Its author advertises the product, FraudGPT, on several Dark Web boards and Telegram channels. The actor has been advertising it since at least July 22, 2023, as an unrestricted alternative to ChatGPT, claiming thousands of proven sales and positive feedback. Pricing starts at $90–$200 USD for a monthly subscription, $230–$450 USD for three months, $500–$1,000 USD for half a year, and $800–$1,700 USD for a yearly subscription.
Figure 8. FraudGPT advertisement on one of the Dark Web marketplaces
FraudGPT’s pricing varies across boards even though the author uses the same nickname everywhere, which raises intriguing questions. It is unclear whether these price differences stem from the Dark Web boards' monetization policies, the author's personal greed, someone trying to mimic the author's work, or an intentional effort to capitalize on the high demand for such malicious LLM products.
FraudGPT is described as a great tool for creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, creating phishing pages, and learning hacking. The author illustrates the product with a video demo showing FraudGPT’s capabilities, including its ability to create phishing pages and phishing SMS messages.
After analyzing the samples, I asked myself whether I could achieve the same or comparable results from ChatGPT with properly crafted prompts, given ChatGPT's strong safeguards against blackhat use. I started by asking it to write a Python script with all the requirements mentioned in the WormGPT malware example, but in a whitehat style:
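The article does not reproduce the exact prompt or output, but a benign "whitehat-style" analog of the file-handling behavior shown in the WormGPT malware example might look like the sketch below: a defensive file-inventory script that walks a directory and records SHA-256 hashes for baselining. The function names and the choice of hash are illustrative assumptions, not taken from WormGPT's or ChatGPT's actual output.

```python
# Benign "whitehat" analog of a file-collecting script: inventory files
# under a directory and record their SHA-256 hashes. Useful for defensive
# baselining and integrity checks, not exfiltration.
import hashlib
import os


def hash_file(path: str, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def inventory(root: str) -> dict:
    """Walk `root` and map each file's relative path to its hash."""
    result = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            result[os.path.relpath(full, root)] = hash_file(full)
    return result


if __name__ == "__main__":
    # Print a simple "digest  path" report for the current directory.
    for rel, digest in sorted(inventory(".").items()):
        print(f"{digest}  {rel}")
```

With an appropriately neutral framing like this, ChatGPT has no reason to refuse; the malicious framing, not the underlying code, is what trips its safeguards.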
The next challenge was writing the phishing email from the WormGPT example. I asked ChatGPT directly to write such an email, but it refused, citing its content restrictions. After rephrasing the query, I was able to get a phishing-like version of the required email:
ChatGPT wrote the email more politely; however, the prompt had to be revised several times to reach the required length.
The email generated seems official and could be used for further action.
ChatGPT also copes with the task of creating convincing SMS messages.
ChatGPT produced convincing SMS messages, but they are far from the malicious content that FraudGPT was able to demonstrate in its demo.
Attempting to ask ChatGPT for any malicious samples ends up with the following response:
Another WormGPT v2 sample includes requests for worm malware features, which elicit a variety of answers from ChatGPT.
Darknet forums are now actively discussing AI capabilities. There are sections dedicated to finding ways to affect AI model results or to influence their decisions and answers.
The topics are wide-ranging and informational, from lists of AI resources to instructions on building your own private ChatGPT, or ways to attack AI models.
The topic is relatively new, but it is already getting the cybercrime community’s attention. Members are collecting existing research on the topic and sharing resources. In Figure 19, the author shares the publicly available MITRE section to introduce known attack capabilities to colleagues.
It is crucial to acknowledge the potential risks associated with generative artificial intelligence (AI) in the hands of cybercriminals. It would be very interesting to test and compare WormGPT v1 versus WormGPT v2, and FraudGPT versus ChatGPT, and to ask them to develop more serious malware in C++, Go, assembly, or any language other than Python. The comparison conducted here shows that, for the samples discussed, there is not much difference between WormGPT and ChatGPT (with properly crafted prompts) in achieving the desired results.
These products may still be far from perfect, but one way or another, they are a clear sign that generative artificial intelligence can become a weapon in the hands of cybercriminals. We have already seen much discussion of AI on different underground forums, and over time these technologies will only continue to improve.