
Missing Critical Vulnerabilities Through Narrow Scoping

The typical process when scoping a penetration test is to get a list of targets from the client, usually IP addresses and/or hostnames. But where does this information come from, and how accurate is it? Chances are the client has documentation listing the devices they think they have and the addresses or names assigned to them. This documentation forms the basis of the scope when conducting testing or scanning against a target environment. However, in a surprisingly large number of cases, this documentation is inaccurate!

Consider the following two summary graphs from a live pentest conducted recently:

Exhibit 1: [summary graph]
Exhibit 2: [summary graph]

While these look like two completely different tests, they are in fact tests of the exact same network. Exhibit 1 was a test of all systems and addresses the client believed were present on the network, while Exhibit 2 had the scope expanded to include the entire local network; in other words, the tester was allowed to enumerate which addresses were actually active at the time of testing rather than relying on a pre-supplied list.

Several previous tests and scans had been conducted against the specific addresses the client knew about, so vulnerabilities on those addresses had already been found and fixed. However, once the scope was expanded to include the entire network, additional target addresses were discovered that the client never realized existed. As these had never been tested before, they did not benefit from the previous rounds of testing and remediation.
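To illustrate what the expanded scope made possible: rather than working from the supplied list, the tester can sweep the local network and let live hosts announce themselves. Below is a minimal sketch of such a discovery sweep using the Scapy library; the subnet shown is a placeholder rather than a detail of the engagement described above, and a real test would combine this ARP sweep (which only covers the local segment) with ICMP, TCP, and IPv6 discovery.

    # Minimal ARP sweep of a local subnet using Scapy (pip install scapy).
    # Requires root/administrator privileges to send raw Ethernet frames.
    # The CIDR below is a placeholder; substitute the in-scope network.
    from scapy.all import ARP, Ether, srp

    def arp_sweep(cidr="192.168.1.0/24", timeout=2):
        # Broadcast an ARP "who-has" for every address in the range and
        # collect the replies; hosts that answer are live on this segment.
        answered, _ = srp(
            Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
            timeout=timeout,
            verbose=False,
        )
        return sorted({reply.psrc for _, reply in answered})

    if __name__ == "__main__":
        for ip in arp_sweep():
            print(ip)

Hosts that show up here but not in the client's documentation are exactly the systems that have never been through a round of testing and remediation.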

Despite the almost empty first pentest report, a malicious attacker not constrained by an arbitrary scope would have no difficulty compromising this network. Because the first pentest scope covered only systems already known to be secure, the client was completely unaware of the hidden risks, and the previous reports gave them a false sense of security built on a narrow and incomplete view of their environment.

What the documentation says:

[Screenshot: the documented host list]

What was found during enumeration:

[Screenshot: the hosts actually found on the network]

So how can this happen? In a large variety of ways:

Outdated / incorrect documentation

The simplest scenario is that the documentation is simply incorrect. It may never have been correct in the first place, or it may have drifted out of date as systems changed and the documentation was never updated.

Unexpected addresses

The notion that any given server will use only the single IP address it has been assigned is flawed. A single system can easily answer on multiple addresses, any of which could potentially be used to access it (a short sketch after this list shows how to enumerate them on one host):

  • Virtual machines: A single physical host could have multiple virtual machines, each with its own services and potential vulnerabilities.
  • Containers: Much like virtual machines, a single system can have multiple containers using systems such as Docker or LXC. Each container may have its own services and vulnerabilities.
  • IP Aliases: A single host could simply have multiple addresses bound to it using aliases.
  • Multiple Interfaces: Most servers come with at least two, and often four, Ethernet interfaces. While the primary interface is usually configured deliberately, the other interfaces may be left unconfigured and fall back to automatic configuration. If mechanisms such as DHCP or SLAAC are available on the network, they may be assigned addresses; otherwise they will take self-assigned addresses via a mechanism such as APIPA, which may still be reachable from other hosts on the same physical network.
  • IPv6: All mainstream operating systems released in the past 15 years have IPv6 enabled by default. If you don't think you're running IPv6, chances are you are and don't realize it. Systems will have link-local addresses reachable from the local network and will often auto-configure further addresses by default.
  • Other protocols: Older systems or certain embedded devices may be accessible via non-IP protocols on the local network, such as IPX/SPX, NetBEUI, AppleTalk, or DECnet.
  • IPMI/Management engine: Most modern systems include some form of remote management, which in many cases is active on the network by default.

Any of these addresses could be susceptible to vulnerabilities.
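To see how far the "one box, one address" assumption can drift from reality, it can be instructive to dump every address bound to every interface on a single host, IPv6 link-local and auto-configured addresses included. Here is a minimal sketch using the third-party psutil library:

    # List every IPv4 and IPv6 address bound to every interface on this host,
    # including link-local and auto-configured addresses.
    # Uses the third-party psutil library (pip install psutil).
    import socket
    import psutil

    FAMILY_NAMES = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}

    for iface, addrs in psutil.net_if_addrs().items():
        for addr in addrs:
            family = FAMILY_NAMES.get(addr.family)
            if family is None:
                continue  # skip link-layer (MAC) entries
            print(f"{iface:<16} {family:<4} {addr.address}")

Even a modest server will typically print far more entries than the single address recorded in the documentation, and each one is a potential path to the same services.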

Unexpected devices

Sometimes rogue devices will be present on a network, for any number of reasons: malicious or unauthorized devices that have been connected, devices accidentally connected to the network, or old redundant devices that were believed to have been removed but are actually still present. Frequently, servers being decommissioned are shut down but left soft powered off, with the cables still connected. Although a shutdown process was run and the system now sits in a standby state, it could be powered back up again in several ways:

  • Wake on LAN: After a software shutdown, computers typically go into a soft power off or standby mode. With WOL enabled, a system in standby mode will power up and begin booting upon receiving a special wakeup packet (see the sketch after this list).
  • Default state on power loss: A computer may wait indefinitely in standby mode for a wakeup signal; however, if power is lost completely (e.g., due to a power outage), many systems return to a fully powered-up state once power is restored, booting the operating system they had when last shut down.
  • By accident: A server could simply be powered on by accident. When a server is powered off by accident, the mistake is likely to be noticed quickly once users complain about the lack of service. However, an unused, "decommissioned" server coming back online is far less likely to be noticed.
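The Wake on LAN mechanism mentioned above is trivially easy to trigger: a "magic packet" is simply six 0xFF bytes followed by the target's MAC address repeated sixteen times, usually sent as a UDP broadcast. A minimal sketch using only the Python standard library (the MAC address below is a placeholder):

    # Send a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed by the
    # target MAC address repeated 16 times, broadcast over UDP (port 9).
    # The MAC address below is a placeholder.
    import socket

    def send_wol(mac="00:11:22:33:44:55", broadcast="255.255.255.255", port=9):
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        payload = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(payload, (broadcast, port))

    if __name__ == "__main__":
        send_wol()

Any standby machine on the broadcast domain with WOL enabled will dutifully power on, whether or not anyone remembers it exists.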

A server that was thought to be decommissioned is likely to be old, highly likely to be missing patches, and could be running end-of-life software riddled with security holes. This makes it an easy target to compromise and an easy system for a malicious attacker to maintain a foothold on due to a lack of monitoring.

Non-IP attacks

If a test scope consists of specific IP addresses, then non-IP attacks are inherently excluded from the scope. This could include layer 2 attacks against the switching infrastructure, such as attacks against misconfigured VLAN trunking protocols.

Finding vulnerabilities

If you perform pentests or scans against specific addresses, then only those addresses will be tested; anything else is out of scope and will NOT be tested. In some cases a manual tester may notice, through passive monitoring of traffic, that additional hosts or addresses are present. However, since those addresses are not part of the scope they will not be tested, and at most you'll receive an informational mention of their detection in the report.
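One cheap way to surface this gap is to diff the client-supplied scope against what discovery actually finds, so that undocumented hosts are at least flagged for a scoping conversation. A minimal sketch, assuming two hypothetical plain-text files with one address per line:

    # Compare the client-supplied scope with the hosts actually discovered
    # and report anything live on the network that was never documented.
    # Both file names are hypothetical placeholders, one address per line.
    from pathlib import Path

    def load(path):
        return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

    documented = load("documented_scope.txt")
    discovered = load("discovered_hosts.txt")

    for host in sorted(discovered - documented):
        print(f"UNDOCUMENTED (never tested): {host}")
    for host in sorted(documented - discovered):
        print(f"Documented but not observed: {host}")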

The most serious vulnerabilities are the ones you don't know about. If you know a vulnerability is present, you can take steps to mitigate it, monitor it, and ultimately fix it. If you don't know a vulnerability exists, it will come as a surprise when someone exploits it. Scanning only the addresses you know about simply reinforces that lack of awareness and the false sense of security.

Summary

Enumerating what is ACTUALLY present on a network, rather than restricting the scope to what someone believes is present, can often result in significant vulnerabilities being discovered that would otherwise be missed. Addresses you don't know about are likely to be more vulnerable simply because you're unlikely to have tested them before. An attacker will not care about your test scope and will gladly compromise a forgotten or overlooked system. Testing only the addresses already known to be secure, without checking what else might be present, only reinforces this lack of awareness and creates a false sense of security.
