Trustwave Blog

ChatGPT Update: How Security Teams and Threat Actors are Using Artificial Intelligence

Written by | Sep 20, 2023

ChatGPT and other Large Language Models (LLMs) have been in public use for less than a year, yet these applications are evolving at an almost exponential rate. 

 

The changes taking place present an odd duality for the cybersecurity world: they are both a boon and a danger to security teams, in some cases enabling teams to do more with less.

 

Unfortunately, what is good for the white hat is also good for those on the other side of the firewall, as we see threat actors using ChatGPT for malicious purposes and creating new malevolent AIs.

 

In a recent Trustwave webinar (click here to watch a replay), SpiderLabs Senior Security Research Manager Karl Sigler gave an update on how AIs are changing the face of information security. Please check out the replay for Karl’s complete thoughts, but let’s also take a quick look at what he covered.

 

Let’s start with a bit of ChatGPT history:

  • June 2016: OpenAI published initial research on generative models
  • Sept 2019: OpenAI published research on fine-tuning the GPT-2 language model with human preferences and feedback
  • Nov 2022: OpenAI introduced ChatGPT using GPT-3.5 publicly
  • Feb 2023: ChatGPT reached 100 million users faster than TikTok (which took 9 months)
  • March 2023: OpenAI introduced the ChatGPT API & GPT-4 in ChatGPT
  • April 2023: OpenAI released ChatGPT plugins, GPT-3.5/GPT-4 with browsing
  • May 2023: ChatGPT Plus users gained access to over 200 ChatGPT plugins

During this period, ChatGPT and similar models have had a wide impact, with some educational institutions and nations banning their use.

 

Using AI for Evil

 

Using an AI of one type or another to generate term papers is the least of the world’s problems. In the past few months, SpiderLabs has seen a rise in large language models, such as WormGPT and FraudGPT, being used for criminal purposes.

 

Threat actors have made both available “as a service” on the dark web for 60 Euros a month or 500 Euros a year. We are not certain how well these tools work; after all, we will not enrich a bad guy by buying the service, but we can estimate their capabilities from what we see advertised on the dark web.

 

The service providers offer a long list of activities the software can deliver, such as writing malicious code, generating undetectable malware, and creating phishing pages, landing pages, and scam pages. Again, we have yet to experiment with these products, but some subscribers on the dark web report good results.

 

Trustwave is also seeing other AIs in development that are not only capable of bypassing security controls but are purpose-built for malicious activity. Criminal gangs are building AIs to create credential-grabbing web pages, write exploit code, and reverse-engineer vendor patches. 

 

Using AI for Good

 

While the adoption of ChatGPT and its cousins was a bit slow on the white hat side of the conflict, we have picked up speed, with security teams not sitting idly by and instead devising their own use models.

 

These include gathering real-time threat intelligence, translating languages, helping with coding, assisting with incident response by guiding analysts through standardized response procedures, supporting governance and regulatory compliance, and processing host and network logs.
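To make the log-processing use case concrete, here is a minimal sketch of how a team might send a batch of host log lines to an LLM and ask for a triage summary. It was not part of the webinar; the model choice, prompt, file path, and the summarize_suspicious_lines helper are illustrative assumptions, and it assumes the pre-1.0 openai Python client.

```python
# Hypothetical sketch: batch raw log lines and ask an LLM to flag suspicious entries.
# Assumes the pre-1.0 "openai" Python client and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_suspicious_lines(log_lines, model="gpt-3.5-turbo"):
    """Send a chunk of raw log lines to the model and return a short triage summary."""
    prompt = (
        "You are assisting a SOC analyst. Review the following log lines and "
        "list any entries that look suspicious, with a one-line reason for each:\n\n"
        + "\n".join(log_lines)
    )
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the triage output as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    with open("auth.log") as f:                        # example host log; path is illustrative
        chunk = [line.strip() for line in f][:200]     # keep the prompt within context limits
    print(summarize_suspicious_lines(chunk))
```

In practice, the raw output would feed an analyst's review queue rather than trigger automated action, in keeping with the assistant role described above.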

 

One task a security team can attempt is to download a copy of an open model, such as Meta’s Llama, and start training it to operate beyond its basic pre-set training. Train it on your organization’s internal and external policies, even the incident response policies.

 

It can then help a team understand its network diagrams, basic inventory, and incident response playbooks. 
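A rough sketch of what that training could look like follows, assuming the Llama 2 base weights are available through Hugging Face and using the transformers, peft, and datasets libraries; the file name, adapter settings, and hyperparameters are illustrative, not a recommended recipe.

```python
# Hypothetical sketch: LoRA fine-tuning of a local open model on internal policy text.
# Assumes transformers, peft, and datasets are installed and "policies.txt" holds the
# organization's policy and incident response documents; all names are illustrative.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"   # requires accepting Meta's license on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Wrap the base model with small trainable LoRA adapters instead of full fine-tuning.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Tokenize the internal policy / incident response documents.
data = load_dataset("text", data_files={"train": "policies.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="policy-assistant", num_train_epochs=3,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("policy-assistant")  # saves the adapters; base weights stay unchanged
```

Keeping the training local is part of the appeal here: policy documents and playbooks never leave the organization’s own infrastructure.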

 

Training an AI in such a manner is a challenging task and is not being implemented on a wide scale at this point, but the potential is there.

 

What you can end up with is a fine-tuned information security assistant built specifically for your organization. 

 

The primary takeaway is that ChatGPT and other AIs will not save the day for security teams on their own, but these applications will make great assistants.

 

Trustwave and AI

 

Trustwave is adopting AI at a thoughtful pace. For example, right now, SpiderLabs uses ChatGPT as a news gathering platform, similar to RSS, to monitor various news sources and feeds and then pull synopses of major stories for review.
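The details of that workflow were not covered in the webinar, but the general idea can be sketched along these lines, assuming the feedparser library and the same pre-1.0 openai client; the feed URL, model, and prompt are placeholders.

```python
# Hypothetical sketch: pull recent items from security news feeds and ask an LLM
# for a short synopsis of each. Feed URL, model, and prompt are illustrative.
# Assumes OPENAI_API_KEY is set in the environment (the pre-1.0 client reads it automatically).
import feedparser
import openai

FEEDS = ["https://example.com/security-news.rss"]   # placeholder feed URL

def synopsis(title, summary, model="gpt-3.5-turbo"):
    """Return a two-sentence synopsis of a single news item."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Summarize this security story in two sentences:\n{title}\n{summary}",
        }],
    )
    return response["choices"][0]["message"]["content"]

for url in FEEDS:
    for entry in feedparser.parse(url).entries[:5]:   # newest handful of stories per feed
        print(entry.title)
        print(synopsis(entry.title, entry.get("summary", "")))
        print(entry.link, "\n")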

 

Trustwave uses machine learning and deep learning in our anti-spam and anti-phishing software. One example is Trustwave’s email security product MailMarshal. It’s a spam and phishing filter that sits in front of an email server, inspecting all emails going through it for anything suspicious or malicious. Anything it finds is filtered out.
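MailMarshal’s internals are proprietary, but the basic machine-learning idea behind text-based spam filtering can be illustrated with a toy classifier. The tiny training set and pipeline below are purely illustrative and are not Trustwave’s implementation.

```python
# Hypothetical sketch of ML-based spam filtering: TF-IDF features plus Naive Bayes.
# The tiny training set is illustrative only; this is not MailMarshal's implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for last month is attached",             # ham
    "Team meeting moved to 3pm tomorrow",                  # ham
    "URGENT: verify your account password now",            # spam/phish
    "You have won a prize, click this link to claim it",   # spam/phish
]
labels = ["ham", "ham", "spam", "spam"]

# Fit a simple text-classification pipeline on labeled messages.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Score a new message before it reaches users behind the mail server.
print(model.predict(["Please verify your password to avoid account suspension"]))
```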

 

One interesting place at Trustwave where ChatGPT shines is as a translation and writing tool. As an international organization, Trustwave has many employees who do not speak English as their first language. 

 

Since English is the company’s primary language, ChatGPT has proven to be a huge help to our researchers, as one of their responsibilities is writing SpiderLabs blogs. The researchers use ChatGPT to draft blog posts that are easy to edit, making it easier to get breaking news and important information to the public.

 

The rapid development and adoption of AI, particularly ChatGPT and similar large language models, have brought about positive and negative consequences for the cybersecurity world.

 

On one hand, security teams are leveraging these AI capabilities to enhance their operations, such as real-time threat intelligence, language translation, and incident response guidance. On the other hand, threat actors are also exploiting AI for malicious purposes, using models like WormGPT and FraudGPT to generate malicious code, phishing pages, and undetectable malware.

 

Ultimately, while ChatGPT and other AIs can assist security teams, they are not a standalone solution and require human expertise for effective cybersecurity.