CVE-2024-3400: PAN-OS Command Injection Vulnerability in GlobalProtect Gateway. Learn More

CVE-2024-3400: PAN-OS Command Injection Vulnerability in GlobalProtect Gateway. Learn More

Services
Capture
Managed Detection & Response

Eliminate active threats with 24/7 threat detection, investigation, and response.

twi-managed-portal-color
Co-Managed SOC (SIEM)

Maximize your SIEM investment, stop alert fatigue, and enhance your team with hybrid security operations support.

twi-briefcase-color-svg
Advisory & Diagnostics

Advance your cybersecurity program and get expert guidance where you need it most.

tw-laptop-data
Penetration Testing

Test your physical locations and IT infrastructure to shore up weaknesses before exploitation.

twi-database-color-svg
Database Security

Prevent unauthorized access and exceed compliance requirements.

twi-email-color-svg
Email Security

Stop email threats others miss and secure your organization against the #1 ransomware attack vector.

tw-officer
Digital Forensics & Incident Response

Prepare for the inevitable with 24/7 global breach response in-region and available on-site.

tw-network
Firewall & Technology Management

Mitigate risk of a cyberattack with 24/7 incident and health monitoring and the latest threat intelligence.

Solutions
BY TOPIC
Offensive Security
Solutions to maximize your security ROI
Microsoft Exchange Server Attacks
Stay protected against emerging threats
Rapidly Secure New Environments
Security for rapid response situations
Securing the Cloud
Safely navigate and stay protected
Securing the IoT Landscape
Test, monitor and secure network objects
Why Trustwave
About Us
Awards and Accolades
Trustwave SpiderLabs Team
Trustwave Fusion Security Operations Platform
Trustwave Security Colony
Partners
Technology Alliance Partners
Key alliances who align and support our ecosystem of security offerings
Trustwave PartnerOne Program
Join forces with Trustwave to protect against the most advance cybersecurity threats
SpiderLabs Blog

WormGPT and FraudGPT – The Rise of Malicious LLMs

As technology continues to evolve, there is a growing concern about the potential for large language models (LLMs), like ChatGPT, to be used for criminal purposes. In this blog we will discuss two such LLM engines that were made available recently on underground forums, WormGPT and FraudGPT. 

If criminals were to possess their own ChatGPT-like tool, the implications for cybersecurity, social engineering, and overall digital safety could be significant. This prospect highlights the importance of staying vigilant in our efforts to secure, and responsibly develop, artificial intelligence technology in order to mitigate potential risks and safeguard against misuse. 

The underground community has great interest in LLMs and malicious LLM products are expected to result. An unknown developer going by the name of last/laste has created their own analog of the ChatGPT LLM chatbot that is intended to help cyber criminals: WormGPT. 

WormGPT was born in March 2021, and it wasn't until June that the developer started selling access to the platform on a popular hacker forum. The hacker chatbot is devoid of any restrictions preventing it from answering questions about illegal activity, unlike mainstream LLMs like ChatGPT. The relatively outdated open source large GPT-J language model from 2021 was used as a platform for creating the chatbot. The chatbot was trained in materials related to malware development, which is how WormGPT was born. The developer estimated access to WormGPT at €60 Euros - €100 Euros per month or €550 Euros per year. 

The advertisement below was posted on Hack Forums, which targets an English-speaking audience.

 aae8d26e697fb30dbc9f12eaa9f21fe1d2646f46Figure 1. HackForum: The WormGPT advertisement

The author posted illustrations of the blackhat WormGPT abilities, showing how it could suggest writing malware.

1bb7c6ecdbf609d6b768af9753f47476eacc9583 

Figure 2. WormGPT writes malware on Python according to malicious requirements

 Another famous sample from Slashnext that was shown by many news publications, with WormGPT's ability to write a convincing phishing email pretending to be from the company CEO.

ea6a45e6554adff781ee773e107214e72f6085c3 Figure 3. WormGPT wrote a phishing email according to the requirements. Slashnext researchers conducted the test. 

Meanwhile, the Exploit forum displayed another advertisement from last/laste on July 14, 2023. The first forum is one of the most famous English-speaking forums and second is the most popular in the Russian cybercrime community. 

905d0d3c079b5378cec8c1de96f74d9437eba9df 

Figure 4. WormGPT advertisement in one of the Russian-speaking underground forums. 

The posts are in English. It is strange for a Russian-speaking forum to make the root of WormGPT from English-speaking developers.  

The seller offers a newer version of WormGPT, WormGPT v2, for €550 Euros annually, and a private build for €5000 Euros which includes access to WormGPT v2. The author insists that WormGPT v2 is a more advanced version, with improved privacy, formatting, the ability to switch models, and will be accessible for yearly subscribers only. 

The post also includes an illustration to prove its point:

d790752f3ec105729ec6aedf47a08a06c6c1b9f2 Figure 5. WormGPT responds to the request of creating malware on python.

62c31f272afe599eaba3cad29b9e39c8aadde43f Figure 6. WormGPT v2 response illustration

696dc80bca85299cd2877f7856ae6f199e24d922 Figure 7. WormGPT v2 response illustration

The illustrations are not helpful to us for further analysis since we do not have any of the WormGPT platforms to compare their operations. We have no proof that it is not a fake based on the topic.

Another malicious LLM came to play later in July 2023. The author advertises its product, FraudGPT, on several Dark Web boards and Telegram channels. The actor advertises it at least from July 22, 2023, as an unrestricted alternative for ChatGPT, pretending to have thousands of proven sales and feedback. The price range starts from $90 – $200 USD for a monthly subscription, 3 months for $230 – $450 USD, $500 – $1,000 USD for half a year’s subscription, and $800 – $1,700 USD for a yearly subscription.

78fd558a9669ea885de0865379a57497b08b888e

Figure 8. FraudGPT advertisement on one of the Dark Web marketplaces

FraudGPT’s pricing varies on different boards, despite the author using the same nickname to communicate and to raise intriguing questions. It is unclear whether these price differences stem from the Dark Web's monetization board policies, the author's personal greed, someone trying to mimic the author's work, or from an intentional effort to capitalize on the high demand for such malicious LLM products.

FraudGPT is described as a great tool for creating undetectable malware, writing malicious code, finding leaks and vulnerabilities, creating phishing pages, and for learning hacking. The author illustrated its product with a video demo showing FraudGPT’s capabilities. The demo shows FraudGPT’s ability to create phishing pages and phishing SMSs.

0820ef0d81ebd7766ffaeea2ab30822a947ccab3 Figure 9. FraudGPT generates a working code for the Bank of America scam webpage.

089a62b97f9df1a9034b12dcda3b0a75ad0dbf9e Figure 10. FraudGPT generates malicious SMS to convince victims to follow the link. 

After analyzing the samples, I was asking myself if I could reach the same or comparable results from ChatGPT with properly created requests since ChatGPT has a strong anti-blackhat setup. I started by asking it to write a Python script with all the requirements mentioned in the malware WormGPT example, but making it whitehat style:

41184ef89f6d606a95f364c5fc9108c83ab3cf51  Figure 11. Response to the request to write a Python script with all the requirements mentioned in the malware WormGPT example, but making it whitehat style.

a96ae55bbb8b471bfc10d9ce6257e901009cdee9Figure 12. ChatGPT wrote a “malicious” Python script with the same requirements as was sent to WormGPT.

The next challenge was writing the phishing email with the WormGPT requirements. I asked ChatGPT directly to write such an email, but it refused the request due to some restrictions. Rephrasing the query, I was able to get a phishing-like version of the required email:

2cad87329cccc78f8a7db624065fdc9bf48c93d3  Figure 13. ChatGPT “phishing” email version written with the corrected WormGPT requirements.

ChatGPT wrote the email more politely, however, it was modified several times to reach the required size.

d4510e1ebb8e923cec20718bb5f06415364035cdFigure 14. ChatGPT “phishing” email version written with the corrected WormGPT requirements and shortened.

The email generated seems official and could be used for further action.

ChatGPT copes with the task of creating convincing SMSs.

d39421023db9984c5445c30755ad7210c5388ed3 Figure 15. ChatGPT generates convincing SMS.

ChatGPT made convincing SMSs, but they are far from the malicious content that FraudGPT was able to demonstrate in its demo.

Attempting to ask ChatGPT for any malicious samples ends up with the following response:

e8906c9a515009a7204a91e0ddb67957a194510b

67f57e404375eba33a3b7b5bdf610640cc5b1e48           Figure 16. ChatGPT denies providing any malicious content. 

The other sample of WormGPT v2 includes requests for worm malware features which elicits a variety of answers from ChatGPT. 

The Darknet forums are widely discussing and concentrating on AI abilities nowadays. There are sections dedicated to the topic of trying to find solutions to affect AI model results or influence decisions and answers.

5ba72b5cdabc54bac94513562e319f9bf2989eba Figure 17. Forum XSS, among others, has section dedicated to AI/ML

The topics are wide-ranging and informational, including a list of AI resources to instructions on how to build your own private ChatGPT, or ways to attack AI.

113d592acd3ea084d8933c6640d4ade92f8858eb Figure 18. The Python code sample, posted on forum XSS, to develop own GPT.

a88ad2ca17c53cba322c13886e3648fe0bd9333a Figure 19. Forum XSS. A topic dedicated to methods and ways to attack AI.

 The topic is relatively new, but it is already getting the cybercrime community’s attention. The members are collecting existing research on the topic and sharing resources. In Figure 19, the author shares the publicly available Mitre section to introduce known attack capabilities to colleagues. 

Summary 

It is crucial to acknowledge the potential risks associated with generative artificial intelligence (AI) in the hands of cybercriminals. It will be very interesting to test and compare WormGPT v1 VS WormGPT v2, FraudGPT VS ChatGPT abilities and ask it to develop more serious malware in C++/Go/ASM, or any language other than Python. The comparison conducted shows that with the samples mentioned, there is not much difference between WormGPT and ChatGPT (with proper build requests) to achieve the desired results. 

The product may still be far from perfect, but one way or another, this is a clear sign that generative artificial intelligence can become a weapon in the hands of cybercriminals. We have already seen much discussion on the topic of AI on different underground forums. Over time, these technologies will only continue to improve.

Latest SpiderLabs Blogs

EDR – The Multi-Tool of Security Defenses

This is Part 8 in my ongoing project to cover 30 cybersecurity topics in 30 weekly blog posts. The full series can be found here.

Read More

The Invisible Battleground: Essentials of EASM

Know your enemy – inside and out. External Attack Surface Management tools are an effective way to understand externally facing threats and help plan cyber defenses accordingly. Let’s discuss what...

Read More

Fake Dialog Boxes to Make Malware More Convincing

Let’s explore how SpiderLabs created and incorporated user prompts, specifically Windows dialog boxes into its malware loader to make it more convincing to phishing targets during a Red Team...

Read More