Connect with our team of offensive security, AI security and pen testing experts at Black Hat Europe 2023. Learn More

Connect with our team of offensive security, AI security and pen testing experts at Black Hat Europe 2023. Learn More

Managed Detection & Response

Eradicate cyberthreats with world-class intel and expertise

Managed Security Services

Expand your team’s capabilities and strengthen your security posture

Consulting & Professional Services

Tap into our global team of tenured cybersecurity specialists

Penetration Testing

Subscription- or project-based testing, delivered by global experts

Database Security

Get ahead of database risk, protect data and exceed compliance requirements

Email Security & Management

Catch email threats others miss with layered security & maximum control

Co-Managed SOC (SIEM)

Eliminate alert fatigue, focus your SecOps team, stop threats fast, and reduce cyber risk

Microsoft Exchange Server Attacks
Stay protected against emerging threats
Rapidly Secure New Environments
Security for rapid response situations
Securing the Cloud
Safely navigate and stay protected
Securing the IoT Landscape
Test, monitor and secure network objects
Why Trustwave
The Trustwave Approach
Awards and Accolades
Trustwave SpiderLabs Team
Trustwave Fusion Platform
SpiderLabs Fusion Center
Security Operations Centers
Technology Alliance Partners
Key alliances who align and support our ecosystem of security offerings
Trustwave PartnerOne Program
Join forces with Trustwave to protect against the most advance cybersecurity threats
SpiderLabs Blog

ChatGPT: Emerging AI Threat Landscape

ChatGPT has been available to the public since November 30, 2022. Since then, it has made headlines – from being temporarily banned from Stack Overflow because, “while the answers ChatGPT produces have a high rate of being incorrect, they typically look like they might be good, and the answers are very easy to produce, [1] to threatening to kill the college essay by writing it on behalf of a student[2]. The societal implications of AI will continue to develop as the technology becomes more broadly used, so we wanted to explore the security implications of the latest headline grabber and how it can impact privacy, data, and threat actors.

ChatGPT is a prototype chatbot released by OpenAI. The chatbot is powered by AI and is gaining more traction than previous chatbots because it not only interacts in a conversational manner but has the capability to create code and many other complex questions and requests.

I’ve had numerous colleagues over the years who have said that as technology continues to advance, something is going to change the security landscape and impact the cyber security industry drastically. It’s looking more and more like AI advances could be that something.

Developments in cutting-edge technology often steal headlines for a short period of time then fade away into academic or commercial use, but AI is something that is now becoming a part of everyday life. We, often unknowingly, interact with AI through chat bots, online shopping, fraud prevention, and voice assistant. The amount of security research in this area is growing but adoption of best practices outlined in that research could be lagging the latest technological developments.

In the case of ChatGPT, various checks have been implemented to prevent nefarious usage and knowledge sharing, but those checks are far from comprehensive – we’ve already seen at least one disclosure against GPT-3 for a prompt injection vulnerability[3].

ChatGPT has multiple use cases, and the benefits are huge – go ahead and watch it review simple code snippets. Not only will it tell you if the code is secure, but it will also suggest a more secure alternative; of course, Stack Overflow provided a far better response as to why ChatGPT solutions should not be used! Conversely, this same functionality can also be used to ‘fix’ malicious exploit code and help attackers obfuscate detection.

While ChatGPT is obviously a useful tool to educate, it can also be a useful tool in developing attacks[4]. Once an attacker has found a vulnerability, ChatGPT can be used to help develop and correct exploits. Put simply, the chatbot can be used as a virtual colleague to help discuss and perfect exploits. This can also be demonstrated by asking if code snippets are secure. In this simple SQL Injection example, a PHP code snippet was provided to ChatGPT. Once ChatGPT has identified a weakness in a code snippet, it can be asked it to create a cURL request for exploitation:


Figure 1: Example of ’curl’ command

Similar techniques can be employed for other vulnerability classifications, for example a simple vulnerable buffer overflow code snippet returns the below advice:


Figure 2: Response to vulnerable overflow code snippet

ChatGPT has very similar use cases on the defensive end of the spectrum. Asking for detection rules in various formats may provide you exactly what you need.


Figure 3: Response to asking for detection rules

Reminder: before using ChatGPT as a code agnostic code review, detection rule generation and exploitation tool, consider the Stack Exchange warning!

ChatGPT shares the same security concerns as any AI (as returned by ChatGPT when queried for the most significant AI security concerns):

  1. Lack of interpretability and accountability
  2. Adversarial attacks
  3. Bias and discrimination
  4. Privacy concerns

It is becoming clear that AI is going to play an ever-increasing role in our lives, so there is a need to ensure that with that adoption comes security and privacy. With new technologies and wider adoption come new attacker tools, techniques, and procedures. It is vitally important to understand the risks as well as the benefits of adopting new technology. We are going to be entering a phase of rapid change due to advancements in AI and that rapid change will lead to new attack and defense techniques. Organizations and individuals need to ensure that they are protected against new cutting-edge attacks – should this new classification of AI abuse cases become part of your threat model?

SpiderLabs used its own test lab for this research and does not upload any data to ChatGPT when performing any engagement.



[3] (


Latest SpiderLabs Blogs

The 2023 Retail Services Sector Threat Landscape: A Trustwave Threat Intelligence Briefing

The annual holiday shopping season is poised for a surge in spending, a fact well-known to retailers, consumers, and cybercriminals alike. The latter group, however, is poised to exploit any...

Read More

Pwning Electroencephalogram (EEG) Medical Devices by Default

Overall Analysis of Vulnerability Identification – Default Credentials Leading to Remote Code Execution During internal network testing, a document was discovered titled the “XL Security Site...

Read More

Hidden Data Exfiltration Using Time, Literally

I was looking at my watch last week and my attention was moved towards the seconds over at the right of the watch face, incrementing nicely along as you’d expect. Now, I don’t know if I’d just spent...

Read More