
The Two Sides of ChatGPT: Helping MDR Detect Blind Spots While Bolstering the Phishing Threat

ChatGPT is proving to be something of a double-edged sword when it comes to cybersecurity.

Threat actors employ it to craft realistic phishing emails more quickly, while white hats use large language models (LLMs) like ChatGPT to help gather intelligence, sift through logs, and more.

The trouble is that it takes significant know-how for a security team to use ChatGPT to good effect, while even semi-knowledgeable hackers can use it to craft ever more realistic phishing emails.

One way to level the playing field is to bring more white hat defenders onto your side in the form of a managed detection and response (MDR) service provider backed by a team of security researchers who know how to effectively add ChatGPT and other generative AI models to the arsenal of tools they already use.

The two sides of the ChatGPT issue were illustrated nicely in a recent Trustwave webinar, “ChatGPT: The Risks and Myths of Chatbot AIs.” Karl Sigler, Senior Security Research Manager at Trustwave, went through a number of ways threat actors are employing LLMs such as ChatGPT and how the good guys, including members of the Trustwave SpiderLabs research team, are employing the technology to help in their work.

 

How Threat Actors Are Employing ChatGPT

 

As Sigler explains, one of the things ChatGPT was trained to understand well is the English language, including writing and editing.

That is a big help to threat actors who create phishing emails but whose native tongue is not English. Bogus emails filled with spelling or grammatical errors are an obvious clue that tips off vigilant employees. Now attackers can use ChatGPT to help craft error-free emails that read well, making them harder for unsuspecting end users to detect.

The ability to write more realistic emails gives threat actors a powerful weapon to use in their business email compromise (BEC) efforts, including honing complex social engineering pretexting attacks that can help them fool even the most vigilant among us.

Additionally, threat actors are using ChatGPT to help refine their exploit code to make it more effective. ChatGPT does have guardrails intended to prevent its use for nefarious purposes, Sigler said. If you ask it to “write me some code to exploit this vulnerability,” it’ll tell you it’s not intended for that purpose. But it’s not difficult to craft prompts that get around those guardrails, such as asking ChatGPT to identify errors in code you’ve already written.

“Criminals definitely have a new tool they can use to craft phishing emails, maybe check exploit code or modify code,” Sigler said.

 

Using ChatGPT to Boost Cybersecurity

 

Security teams can put ChatGPT and other generative AI tools to work for defense as well; for example, the technology can help gather insights from data to aid incident investigation and research.

This can entail log analysis and curating events from security information and event management (SIEM) systems. Given proper direction and input, ChatGPT can help threat hunters find relevant data far more quickly than they can on their own.

For instance, say a SIEM identifies 10 log entries that appear to be related to suspicious activity. Usually, a security analyst has to manually examine those entries to figure out what’s going on. Now they can feed this information into ChatGPT with instructions to prioritize each event and summarize what the entries mean collectively – a huge time-saver.
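As a rough illustration of the kind of workflow Sigler describes, the sketch below sends a small batch of SIEM entries to a chat model and asks for a prioritized summary. It is a minimal sketch, assuming the openai Python client and an API key in the OPENAI_API_KEY environment variable; the model name and log lines are placeholders invented for the example.

    # Minimal sketch: ask an LLM to triage a small batch of SIEM entries.
    # Assumes the openai Python package and OPENAI_API_KEY in the environment;
    # the model name and log entries below are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI()

    log_entries = [
        "2024-05-01T10:02:11Z auth failure for admin from 203.0.113.45 (attempt 14)",
        "2024-05-01T10:02:19Z PowerShell spawned by winword.exe on HOST-17",
        "2024-05-01T10:03:02Z outbound connection HOST-17 -> 198.51.100.9:4444",
    ]

    prompt = (
        "You are assisting a SOC analyst. For each log entry, assign a priority "
        "(high/medium/low) with a one-line reason, then summarize what the events "
        "suggest collectively.\n\n" + "\n".join(log_entries)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model your provider offers
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

In practice the entries would be pulled from the SIEM’s API, and the model’s output would be reviewed by an analyst rather than acted on automatically.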

Similarly, if a security team identifies a script that targets a given vulnerability, it can drop the script into ChatGPT and ask it to identify what HTTP requests the script generates. From that, the team can likely garner enough information to create intrusion detection system (IDS) signatures. Here again, that saves a significant amount of manual effort.
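The same pattern applies to the exploit-script example: feed the script to the model, ask it to enumerate the HTTP requests it would generate, and use that output as raw material for an IDS rule. The sketch below is illustrative only; the file name, model name, and the Suricata-style rule skeleton in the closing comment are assumptions, and any rule drafted this way would need to be validated against real traffic before deployment.

    # Minimal sketch: have an LLM enumerate the HTTP requests an exploit script
    # would send, as raw material for an IDS signature. The file name, model
    # name, and example rule skeleton are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    with open("suspicious_exploit.py") as f:  # hypothetical script under analysis
        script_source = f.read()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "List every HTTP request (method, URI, notable headers, body "
                "patterns) that the following script would generate:\n\n"
                + script_source
            ),
        }],
    )

    print(response.choices[0].message.content)

    # An analyst might then turn a distinctive URI from that output into a
    # Suricata-style rule, for example:
    #   alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible exploit attempt";
    #     http.uri; content:"/vulnerable/endpoint"; sid:1000001; rev:1;)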

 

How an MDR Solution Combats ChatGPT Threats

 

However, it’s not hard to determine who benefits more from ChatGPT in these examples. Threat actors need little technical know-how to take advantage of ChatGPT. All they’re doing is asking it to write better emails, subject lines, and the like. It takes a bit of experience to write exploit scripts, but ready-made scripts are widely available for sale.

The good guys, however, need significant cybersecurity chops to effectively use ChatGPT.

Sigler’s tips in the webinar offer valuable insight to any security operations center (SOC) team. But most companies don’t have a SOC, and most struggle to hire and retain knowledgeable security professionals.

For them, a better defense is to hire professional help, such as from an MDR service provider staffed with professionals like those on the Trustwave SpiderLabs team. These folks do indeed know how to make effective use of ChatGPT, among many other tools, to help them consistently identify and respond to threats in an organization’s environment.

To learn more about how to defend your organization and get Sigler and the Trustwave SpiderLabs team working for you, visit our Managed Detection and Response page.

 

Learn more about how Trustwave MDR can help make your organization more secure.
