OWASP Virtual Patching Survey Results

In a previous blog post, we issued a call for assistance to help OWASP with a virtual patching survey. The survey was open for about two weeks, and we received a fair turnout: 44 organizations participated. Here are the results of the survey, with commentary by the SpiderLabs Research team.

The data presented here will be analyzed in-depth at the upcoming OWASP AppSecDC Virtual Patching Workshop April 2-3 in Washington DC. Register here.

Question #1
Reason for virtual patching

Response Analysis

The first item to note is that there are multiple real-world scenarios where web applications in production cannot be fixed in the source code. The top reason is simply a Lack of Resources: the developers who originally worked on the application are already allocated to other projects. This situation puts businesses in a bind, as they must weigh the risk of a vulnerability that might be exploited against the tangible impact of delayed timelines for the current project. 3rd Party software comes in second place, which is not much of a surprise. Organizations often utilize 3rd party software in some shape or fashion within their web architectures. When vulnerabilities are identified, they are at the mercy of the external software development organization to update the code and make it available to users.

Three different responses are often related:

  • No in-house development teams
  • Vulnerable code is out-sourced
  • Insufficient development contract language

When organizations choose to out-source the development of their web applications, they should do so with caution. They must ensure that the contract language includes specific requirements around remediation of identified vulnerabilities; reference the OWASP Secure Software Contract Annex. Without this language in place, security vulnerabilities might not be covered in the same way that functional defects are.

Question #2

Who is responsible for virtual patching

Response Analysis

The team most responsible for implementing virtual patches in production web applications is IT Security. Coming in second are Web Developers, which is a bit surprising, though in a good way: web developers are the most knowledgeable about the application and are in the best position to help craft a virtual patch. By reviewing the individual survey results, we can conclude that organizations with in-house development teams utilize their expertise. When organizations don't have in-house development teams, or when the particular application was out-sourced or provided by a 3rd party, they lean on IT Security and other groups to implement virtual patches.

Question #3

Virtual patching tool

Response Analysis

The top two methods of implementing virtual patches are with a web application firewall (either embedded or as an external appliance). Coming in third is using a network IDS, which reinforces the concept that "Prevention is desired but Detection is mandatory." If you can't implement a mitigation for the vulnerability inline, then the next best thing is to alert when a client attempts to exploit the known vulnerability. Approximately 15% of respondents are using next-gen network firewalls with deep-packet inspection technology. Using a network IPS came in last at approximately 8%.
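To illustrate the prevention-versus-detection distinction, here is a minimal Python sketch of an inline parameter check. The exploit signature and the `inspect` function are hypothetical, purely for illustration; a real virtual patch would be written in your WAF's rule language and tuned to the specific vulnerability report.

```python
import re

# Hypothetical exploit signature for a known SQL injection flaw in a
# request parameter. A production signature would be far more targeted.
EXPLOIT_PATTERN = re.compile(r"('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE)

def inspect(param_value, mode="prevent"):
    """Return the action for a request, illustrating the trade-off
    between prevention (inline block) and detection (alert only)."""
    if EXPLOIT_PATTERN.search(param_value):
        return "block" if mode == "prevent" else "alert"
    return "allow"
```

In "prevent" mode the suspicious request is stopped inline; a device that cannot sit inline (such as a network IDS) can still run in "detect" mode and alert on the same signature.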

Question #4

What process do you use for virtual patching


Response Analysis

These responses are loud and clear: organizations rely mainly upon manual processes to both identify and virtually patch vulnerabilities. While there is some level of automation and integration between DAST and WAF tools, manual analysis of the vulnerabilities and of the proper virtual patching method is still key.
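To give a sense of what DAST-to-WAF automation can look like, here is a minimal Python sketch that converts a vulnerability finding into a candidate virtual patch rule. The JSON finding shape and the rule fields are assumptions for illustration only, since real DAST export formats and WAF rule languages vary widely, and a human should still review any generated rule before deployment.

```python
# Minimal DAST -> WAF sketch: turn a finding (hypothetical shape) into a
# location-specific deny rule targeting only the vulnerable parameter.
def finding_to_rule(finding):
    return {
        "uri": finding["uri"],
        "parameter": finding["parameter"],
        "action": "deny",
        "note": f"virtual patch for {finding['type']}",
    }

rule = finding_to_rule(
    {"uri": "/app/login", "parameter": "user", "type": "SQL Injection"}
)
```

Even with this kind of automation, the survey results suggest the analysis step, deciding whether the rule is correctly scoped, stays manual.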

Question #5

Validating virtual patches


Response Analysis

These responses are somewhat surprising, as the majority of respondents validate the virtual patches either themselves or by monitoring logs. I see two deficiencies with these approaches:

  1. Validating your own fix is a recipe for failure. This is the same rationale as expecting web developers to be able to identify flaws in their own code. That is why you have peer review or use other code analysis tools to help spot issues that you missed.
  2. Log monitoring will really only help you identify whether your fix has false positives and is triggering on normal traffic. What about exploit attempts? Someone needs to test an actual attack to verify that you do not have false negatives.

To clarify my point here, it is not that these two options should not be done but that they should not be the only method of validating virtual patches. It is highly recommended that you request a re-assessment of the vulnerability by the security team or by the DAST tool to validate your defenses.
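A re-assessment should exercise the patch in both directions. Here is a minimal Python sketch of that idea; the whitelist pattern and test values are hypothetical stand-ins for a real patch and a real exploit payload.

```python
import re

# Hypothetical patch under test: the "id" parameter may only be digits.
patch = re.compile(r"^\d+$")

def is_blocked(value):
    # Whitelist model: anything that does not match the pattern is blocked.
    return patch.fullmatch(value) is None

# Re-test both directions: the known exploit must be blocked
# (no false negative) and normal traffic must pass (no false positive).
results = {
    "exploit_blocked": is_blocked("1' OR '1'='1"),
    "benign_allowed": not is_blocked("1234"),
}
```

Log monitoring covers the second check over time; only a deliberate re-test with the actual exploit covers the first.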

Question #6

False positives

Response Analysis

This question is about virtual patching accuracy and is mainly driven by two factors:

  1. Accurate details in the vulnerability report - if the vulnerability report details are not accurate, then the virtual patch will most likely not be accurate either. This is the GIGO principle (Garbage In = Garbage Out).
  2. Virtual patching skill level of the implementor - virtual patches for injection flaws often utilize regular expressions to define the security policy. If the virtual patch writer is not well versed in regular expressions or doesn't fully test them, then false positives will most likely occur.

Question #5 relates here, as log monitoring of virtual patches is key to identifying any false positives when real users are interacting with the patched resource.
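A classic source of regex false positives is an overly broad blacklist. The following Python sketch, using hypothetical patterns and values, shows a blacklist that trips on a legitimate surname containing an apostrophe, while a whitelist scoped to the expected input format passes it and still rejects the attack string:

```python
import re

# Overly broad blacklist: blocks any apostrophe at all.
blacklist = re.compile(r"'")
# Whitelist scoped to a plausible name format (letters, space, ', -).
whitelist = re.compile(r"^[A-Za-z][A-Za-z '-]{0,63}$")

benign = "O'Brien"            # legitimate value containing an apostrophe
attack = "x' OR '1'='1"       # classic SQL injection probe

blacklist_fp = bool(blacklist.search(benign))        # True: false positive
whitelist_fp = whitelist.fullmatch(benign) is None   # False: benign allowed
```

The attack string still fails the whitelist (the `=` character is outside the allowed class), which is why whitelist-style patches are generally easier to keep accurate than negative signatures.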

Question #7

False negatives

Response Analysis

The main takeaway here is that testing virtual patches for attack surface coverage and evasions is paramount. Half the respondents answered that their patches still have evasion issues 25% of the time after a re-test. Just as finding a working evasion is an iterative process of tuning and testing for the attacker, so too are the steps for creating a solid virtual patch. Question #5 relates here as well and is why having a separate group validate the patches for evasions is key. The bad guys are going to try to evade your protections, so you need to test their resistance to bypass attempts.
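Encoding tricks are among the most common evasions. This minimal Python sketch (the payload and patterns are hypothetical) shows how a patch that matches the literal attack string misses a URL-encoded variant, while normalizing the input before matching closes that gap:

```python
import re
from urllib.parse import unquote

# Naive patch: matches the literal attack string only.
pattern = re.compile(r"<script", re.IGNORECASE)

def naive_match(raw):
    return bool(pattern.search(raw))

def normalized_match(raw):
    # Decode URL-encoding first, then apply the same signature.
    return bool(pattern.search(unquote(raw)))

evasion = "%3Cscript%3Ealert(1)%3C/script%3E"  # URL-encoded XSS payload
```

This is exactly the iterative tune-and-retest loop described above: each evasion found during validation should drive a normalization or rule change, followed by another re-test.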

Question #8

Time to fix


Response Analysis

There is a lot of data to look at here. Keep in mind that the timelines presented here reflect the time from when the virtual patching team receives the vulnerability information until a virtual patch is implemented. The most disconcerting element is how often organizations don't even know what their time-to-fix metric is for patching certain vulnerabilities. It may be that they don't know because they are simply not addressing certain types of vulnerabilities with virtual patching at all. Tracking time-to-fix is a critical element of gauging your window of exposure to specific manifestations of vulnerabilities. Properly tracking virtual patching time-to-fix metrics can also provide a reality check for organizations that naively assume virtual patching can immediately fix all vulnerability types. If you track realistic time-to-fix metrics, then you can make an informed decision as to the best course of action for remediating vulnerabilities. In some cases, it may be just as timely to fix the issue in the code.
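Computing the metric itself is straightforward once the two timestamps are captured. Here is a minimal Python sketch; the record format, vulnerability classes, and timestamps are invented for illustration:

```python
from datetime import datetime

# Hypothetical records: when the vulnerability report was received and
# when the corresponding virtual patch went live.
patches = [
    {"type": "SQLi", "reported": "2012-03-01T09:00", "patched": "2012-03-01T17:00"},
    {"type": "Logic flaw", "reported": "2012-03-01T09:00", "patched": "2012-03-08T09:00"},
]

def hours_to_fix(record):
    fmt = "%Y-%m-%dT%H:%M"
    delta = (datetime.strptime(record["patched"], fmt)
             - datetime.strptime(record["reported"], fmt))
    return delta.total_seconds() / 3600

# Time-to-fix per vulnerability class, in hours.
metrics = {p["type"]: hours_to_fix(p) for p in patches}
```

Broken out per vulnerability class like this, the numbers make the trade-off concrete: a class whose virtual patch takes a week may be a candidate for fixing in the code instead.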

Question #9

Attack surface reduction

Response Analysis

The purpose of this question is to gauge the estimated attack surface reduction of using virtual patching for various vulnerability categories. As you might expect, Improper Input Handling (Injection flaws) is the category that respondents felt had the highest percentage of attack surface reduction. This makes sense, as input validation is essentially the same process whether applied within the application code itself or externally with a virtual patch. On the flip side, application logic flaws are the most challenging vulnerability to address externally with a virtual patch. Attack surface reduction metrics are also a key component of virtual patching strategies, as they provide a realistic view of which vulnerabilities lend themselves to external remediation and which ones must be fixed within the code.

Question #10

Tracking virtual patches

Response Analysis

The results of this question are truly disheartening, as the majority of respondents do not have an official method of tracking the use of virtual patches. This is definitely an area where virtual patching teams should take a cue from development teams and leverage the existing bug tracking system. The vulnerability itself must be logged somewhere, and the use of a virtual patch should be documented along with it. Think of it this way: your organization almost certainly has a patch management system for handling OS-level patches for workstations and servers. You should have the same type of data available for virtual patching as well. Having this data will allow organizations to better track which vulnerabilities were actually fixed inside the code and when, and even to choose whether to remove virtual patches once the underlying issues are fixed.
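In practice this can be as simple as adding a virtual-patch field to the existing vulnerability ticket. The Python sketch below illustrates the idea; the ticket fields, rule ID, and status values are all hypothetical, not taken from any particular tracker:

```python
# Hypothetical vulnerability ticket in an existing bug tracker.
ticket = {
    "id": "VULN-101",
    "title": "SQL injection in /app/login",
    "status": "open",
    "virtual_patch": None,
}

def record_virtual_patch(t, rule_id, deployed):
    # Attach the virtual patch details to the vulnerability record so the
    # mitigation is auditable and removable once the code fix lands.
    t["virtual_patch"] = {"rule_id": rule_id, "deployed": deployed}
    t["status"] = "mitigated"  # mitigated externally; code fix still pending
    return t

record_virtual_patch(ticket, rule_id="wafrule-958895", deployed="2012-03-05")
```

Because the patch is tied to the ticket, closing the ticket after a code fix naturally prompts the question of whether the WAF rule can be retired.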

Conclusion

Virtual patching is a key component of web vulnerability management processes; however, care must be taken to fully understand how best to utilize it. Make sure that you have the right people involved, use the right tools for the job, properly test your mitigations, and track their use.
