Using Transactional Variables Instead of SecRuleRemoveById

Using SecRuleRemoveById to handle false positives

The SecRuleRemoveById directive is most often used when ModSecurity users are trying to deal with a false positive situation. Used on its own, it is a global directive that disables a previously defined rule based on its rule ID number. While users can technically take this approach and just use SecRuleRemoveById on its own, we caution against it. Just because a rule triggered a false positive match does not mean that the only recourse is to disable the rule entirely! Remember, the rule was created to address a specific security issue, so every effort should be made to disable it or make an exception only in specific cases.
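
For reference, here is a bare-bones example of the global usage (the rule ID is the same one used in the next example and is only illustrative) -

SecRuleRemoveById 950009

This removes the rule everywhere, for every request, which is exactly the heavy-handedness we want to avoid.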

Limitations of SecRuleRemoveById

The problem is that SecRuleRemoveById is somewhat limited in its capabilities for selectively disabling rules. One of the common methods of attempting to selectively disable a ModSecurity rule is to nest the SecRuleRemoveById directive inside of an Apache scope container (such as Location) like this -

<Location /path/to/foo.php>
    SecRuleRemoveById 950009
</Location>

There currently aren't many other options for using SecRuleRemoveById to disable a rule other than keying on the URI location as shown above. A similar limitation was identified with other global directives and was addressed in ModSecurity 2.0 by making it possible to update those settings on a per-rule basis using the "ctl:" action. In a future version of ModSecurity we will implement a "ctl:RemoveById" action to handle this. In the meantime, however, what else can a user do to selectively disable rules based on arbitrary request data?
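
As a point of reference, the existing "ctl:" actions already let an individual rule adjust certain engine settings for the current transaction only. Here is a minimal sketch (the client IP address is made up) that switches the entire rule engine off for one trusted host -

# Hypothetical trusted host - turn the rule engine off for this transaction only
SecRule REMOTE_ADDR "^192\.168\.1\.50$" "phase:1,pass,nolog,ctl:ruleEngine=Off"

This is a much blunter instrument than removing a single rule, which is exactly the gap the TX variable approach below is meant to fill.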

Using Transactional Variables (TX)

The approach that I am going to discuss is meant as an example only and its usage should be fully considered prior to implementation. I believe that the TX variable is not currently being widely used by ModSecurity users. This is probably due to two main reasons: 1) the Core Rules don't use them, and 2) we don't have proper "use-case" documentation showing how you might use them more effectively. It is with the latter issue that I hope this post will help.

Transaction variables are really cool and Ivan explained their general usage in a SecurityFocus interview. Here are the relevant sections -

The addition of custom variables in ModSecurity v2.0 (along with a number of related improvements) marks a shift toward providing a generic tool that you can use in almost any way you like. Variables can be created using variable expansion or regular expression sub-expressions. Special support exists for counters, which can be created, incremented, and decremented. They can also be configured to expire or decrease in value over time. With all these changes ModSecurity essentially now provides a very simple programming language designed to deal with HTTP. The ModSecurity Rule Language simply grew organically over time within the constraints of the Apache configuration file.

In practical terms, the addition of variables allows you to move from the "all-or-nothing" type of rules (where a rule can only issue warnings or reject transactions) to a more sensible anomaly-based approach. This increases your options substantially. The all-or-nothing approach works well when you want to prevent exploitation of known problems or enforce positive security, but it does not work equally well for negative security style detection. For the latter it is much better to establish a per-transaction anomaly score and have a multitude of rules that will contribute to it. Then, at the end of your rule set, you can simply test the anomaly score and decide what to do with the transaction: reject it if the score is too large or just issue a warning for a significant but not too large value.

What I am about to show is an implementation of this concept.

Enabling/Disabling rules using TX variables

The first step in this process is to update your modsecurity_crs_15_customrules.conf file to specify which rules will be active. If you aren't familiar with the modsecurity_crs_15_customrules.conf file and its usage, please see this prior Blog post. The following entry uses the SecAction directive to set two different TX variables -

# Set the enforce variable to 0 to disable and 1 to enable
# Rule ID 950002 is for "System Command Access"
SecAction "phase:1,pass,nolog,setvar:tx.ruleid_950002_enforced=1, \
setvar:tx.ruleid_950002_matched=0"

As the comment text indicates, you can quickly toggle whether or not this rule is active by changing the tx.ruleid_950002_enforced variable to 0. With this directive in place, every request will have these two TX variables set at the outset. If you have ever seen one of those nature shows on television where the researchers capture an animal, tag it and then release it back into the wild, we are essentially doing the same thing: we are just "tagging" the current request with some data that will be updated and/or evaluated by later rules.

Altering the Core Rules

The next step in this process is to edit the individual Core Rules files so that instead of applying a disruptive action (such as deny), the rules only set a new TX variable upon a match. The idea is to decouple the evaluation of the attack pattern in the transaction from the application of the disruptive action (which will happen in the next step). Here is an example from the modsecurity_crs_40_generic_attacks.conf file for the command access rule -

#
# Command access
#
SecRule REQUEST_FILENAME "\b(?:n(?:map|et|c)|w(?:guest|sh)|cmd(?:32)?|telnet|rcmd|ftp)\.exe\b" \
    "capture,t:htmlEntityDecode,t:lowercase,log,pass,id:'12345',msg:'System Command Access. \
Matched signature <%{TX.0}>',setvar:tx.ruleid_950002_matched=1"

Now, if an inbound request matches this rule, the TX variable called "ruleid_950002_matched" will be set to "1". This updates the original setting of this variable from the SecAction in the modsecurity_crs_15_customrules.conf file. The rule will also log the match to the error_log file.

Evaluating the TX variables for blocking

The next step is to add a new rule to your modsecurity_crs_60_customrules.conf file to actually implement the blocking aspect of this process -

SecRule TX:RULEID_950002_ENFORCED "@eq 1" "chain,t:none,ctl:auditLogParts=+E,deny,log, \
auditlog,status:501,msg:'System Command Access. Matched signature <%{TX.0}>',id:'950002',severity:'2'"
SecRule TX:RULEID_950002_MATCHED "@eq 1"

The above example is a chained rule set where the first line checks to see if this rule should even be evaluated. If that TX value is set to 1 (meaning yes, we are enforcing this rule), then it moves on to the second part of the chained rule and checks whether the matched TX value is 1 (meaning that the inbound request matched the RegEx check from the modsecurity_crs_40_generic_attacks.conf file). If both of these TX checks return true, then the entire chained rule matches, the actions on the first line are triggered, and the request is denied. Here is what the short error_log message would look like -

[Sat Jun 23 18:04:54 2007] [error] [client 192.168.1.103] ModSecurity: Access denied with code 501 (phase 2). \
Operator EQ match: 1. [id "950002"] [msg "System Command Access. Matched signature "] [severity "CRITICAL"] \
[hostname "www.example.com"] [uri "/bin/ftp.exe"] [unique_id "@D6NJMCoD4QAABSNAoMAAAAA"]

What does this approach do for you?

At this point, you may be asking, "OK, how are these rules any different from the Core Rules? Didn't you just make the rules more complex?" It is true that, functionally speaking, these new rules work exactly the same as the current Core Rule ID 950002. If a client sent a request with one of those OS commands in it, then it would be blocked by either rule set.

Advantages of this approach

The advantage of using this approach is that you now have extended flexibility to decide under what circumstances a rule will be evaluated, and how an exception to a rule can be made.

1) You could disable rules in phase:1. With the current approach of using SecRuleRemoveById inside Apache scope directives, the exception can only take effect in phase:2 or later. With this approach, you could easily create a rule that runs in phase:1, evaluates some variable (perhaps the remote IP or something similar), and then simply sets "setvar:tx.ruleid_950002_enforced=0" to disable that rule (a sketch of such a rule appears after the example below).

2) Besides just deciding whether or not the rule itself will be evaluated, you could also selectively decide whether an inbound request counts as a match. Let's say that you keep having a false positive on rule ID 950002 when a client uses a specific web client (user-agent string). You could then easily add a rule to your modsecurity_crs_60_customrules.conf file that checks for this user-agent string value and uses "setvar:tx.ruleid_950002_matched=0" to set the TX variable back to 0 even if the rule had matched in the modsecurity_crs_40_generic_attacks.conf file :) Here is an example rule you would place in the *60* file before the blocking rule -

SecRule REQUEST_HEADERS:User-Agent "^Browser_1234$" \
    "phase:2,log,t:none,id:'123456',setvar:tx.ruleid_950002_matched=0"
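
And to round out point 1 above, here is a sketch of a phase:1 rule that switches the enforcement flag off for a single client; the IP address and rule ID are made up for illustration -

# Hypothetical trusted host - stop enforcing rule 950002 for its requests
SecRule REMOTE_ADDR "^10\.1\.2\.3$" \
    "phase:1,pass,nolog,t:none,id:'123457',setvar:tx.ruleid_950002_enforced=0"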

As you can see, using this approach you have much more flexibility to determine when and where you want to implement an exception to a rule, and you can then use "setvar" to easily change the TX variables. This provides you with many more options than a global directive does.
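
You can also combine conditions. The following sketch (the path, client string, and rule ID are all made up) only suppresses a 950002 match when the request hits a specific URI with a specific User-Agent -

# Only ignore a 950002 match for this particular script and client
SecRule REQUEST_FILENAME "^/reports/upload\.php$" \
    "phase:2,pass,nolog,t:none,chain,id:'123458'"
SecRule REQUEST_HEADERS:User-Agent "^Browser_1234$" \
    "setvar:tx.ruleid_950002_matched=0"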

Disadvantages of this approach

1) This approach pretty much goes against the recommendation I have been promoting previously about limiting edits to the Core Rules files themselves.

2) This approach also introduces more directives than would normally be present in your configuration. As we have stated in many previous posts, the more rules you have, the higher the impact on performance. This means that users who are concerned with performance may not want to use this approach.

Remember, however, that I said that the purpose of this post is simply to present an alternative approach to evaluating requests and to show a use-case example of using TX variables.
