
Setting HoneyTraps with ModSecurity: Adding Fake robots.txt Disallow Entries

The following blog post is taken from Recipe 3-2: Adding Fake robots.txt Disallow Entries in my new book "Web Application Defender's Cookbook: Battling Hackers and Protecting Users". Additionally, if this topic interests you and you would like official, hands-on training, register for the upcoming OWASP AppSecUSA conference, where I will be giving a full two-day training class based on my book.

All warfare is based on deception. Hold out baits to entice the enemy. Feign disorder, and crush him.

— Sun Tzu in "The Art of War"

Signature Detection is Insufficient

We, as web application defenders, have a challenging task. We must try to defend web applications from attackers without the benefit of knowing the application internals. Without this information, it may be difficult to identify malicious behavior hidden in a flood of legitimate traffic. How do normal users interact with the application's resources? If you understand how normal users utilize the web application, you should be able to identify when attackers deviate from this usage profile. Unfortunately, many organizations attempt to use signature-based detection systems to identify malicious behavior. This endeavor often ends in accidentally blocking legitimate clients and, worse, missing real attackers altogether. How can we change the game so the odds of identifying malicious users are in our favor?

Rather than searching through "haystacks" of legitimate traffic looking for the malicious attack "needles", we need a method of removing the haystacks altogether. If we can set up a method that removes all normal user traffic, what we are left with is abnormal traffic. This brings us to the concept of honeypots, or as we will be implementing them with ModSecurity, honeytraps.

Honeytrap Concepts

The honeytrap concept is rather simple: honeytraps are essentially booby-traps built into the application. Whereas honeypot systems are separate hosts that you deploy within your network to act as targets, honeytraps are dynamically planted throughout a web application and act as a virtual minefield for would-be attackers. They have no valid production purpose and there is no authorized interaction with or alteration of them, so any interaction is most assuredly suspicious.

A honeytrap's true value lies in being able to quickly distinguish malicious users from benign users. This detection value comes from the way honeytraps work as tripwires during the various phases of an attack methodology. Before an attacker can launch a malicious request, they must first conduct some reconnaissance of the application to understand its layout, construction and the technologies in use. They must find out how the application handles authentication, authorization, session management and input validation. It is during these initial reconnaissance stages that honeytraps can easily spot users with malicious intent when they attempt to manipulate the data. Honeytrap concepts are extremely simple, yet extremely effective.

Let's take a closer look at the three main benefits of utilizing honeytraps:

  • High Fidelity Alerts – Since all honeytrap activity is, by definition, unauthorized, it is extremely effective at reducing false positive alerts.
  • Smaller Number of Alerts – A honeytrap only generates alerts when a client either interacts with or manipulates it. This results in a much lower number of alerts that a security analyst would need to validate.
  • Identifying False Negatives – Negative security signatures do not catch all data manipulation attacks, as they have no knowledge of what the data was supposed to be. Honeytraps, on the other hand, excel at identifying data manipulation attacks because the payloads are known in advance.

As you can see, the concept of honeytraps is very easy to understand and implement. Simply lay your honeytraps throughout your web application, and if anything is altered, you have most likely identified an attacker.

Recipe 3-2: Adding Fake robots.txt Disallow Entries

This recipe will show you how to add additional Disallow entries to the robots.txt file and alert when clients attempt to access these locations.

Ingredients

Robots Exclusion Standard

The Robots Exclusion Standard was created as a means to allow web site owners to advise search engine crawlers which resources they are allowed to index. They do this by placing a file called robots.txt in the document root of the web site. In this file, the administrator of the site can include Allow and Disallow commands to instruct web crawlers which resources they may access. Here are some real examples of robots.txt entries:

User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /backup/
Disallow: /cgi-bin/
Disallow: /admin.bak/
Disallow: /old/

The first entry means that all crawlers are allowed to access and index the entire site; the second entry, however, states that Google's Googlebot crawler should not access four different directories. Given the names of these directories, this makes sense. There may be sensitive data or files within these directories that the web site owners do not want indexed by Google.

While all of this makes sense and serves a legitimate purpose, do you see a problem with using the robots.txt file? The Robots Exclusion Standard is merely a suggestion and does not serve as access control. The issue is that you are basically letting external clients know of specific sensitive areas of your web site that you don't want them poking around in. Well, guess what: malicious users and their tools will not abide by these entries. They will most assuredly try to access those locations. Therein lies our opportunity to lay another honeytrap detection point.

Dynamically Updating the robots.txt file

With ModSecurity, we can dynamically insert our own honeytrap robots.txt entries. First, you must enable the following two directives:

SecContentInjection On
SecStreamOutBodyInspection On

These directives tell ModSecurity that we want the ability to modify data in the live stream (using the prepend and append actions, or the @rsub operator). With these directives in place, we can then use the following rule to add a fake honeytrap robots.txt entry.

SecRule REQUEST_FILENAME "@streq /robots.txt" \"id:'999005',phase:4,t:none,nolog,pass,append:'Disallow: /db_backup.%{time_epoch}/# Old DB crash data'"

This rule will silently append a fake directory location to the end of the legitimate robots.txt data. When the attacker now accesses the robots.txt file, this is how it would appear:

User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /backup/
Disallow: /cgi-bin/
Disallow: /admin.bak/
Disallow: /old/
Disallow: /db_backup.1331084262/ # Old DB crash data

Notice the new honeytrap Disallow entry at the end. You should try to make the name of your directory, and any comment after it, enticing to would-be attackers. In this case, we have named our honeytrap directory so that it would appear to contain database crash dump data. This tidbit of data would be almost irresistible to an attacker. Now that we have laid out our honeytrap, we next need to write the detection rule that will catch a user who tries to access this location.

SecRule REQUEST_FILENAME "^/db_backup.\d{10}" \"id:'999006',phase:1,t:none,log,block,msg:'HoneyTrap Alert: Disallowedrobots.txt Entry Accessed.',logdata:'%{matched_var}',setvar:ip.malicious_client=1"

This rule will identify any client that accesses our honeytrap Disallow location, and it will then set a variable in the IP collection labeling that client as malicious. The ModSecurity debug log shows the following processing when a client accesses this honeytrap Disallow location:

Recipe: Invoking rule b81169b8; [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "5"] [id "999006"].
Rule b81169b8: SecRule "REQUEST_FILENAME" "@rx ^/db_backup.\\d{10}" "phase:1,id:999006,t:none,log,block,msg:'HoneyTrap Alert: Disallowed robots.txt Entry Accessed.',logdata:%{matched_var},setvar:ip.malicious_client=1"
Transformation completed in 0 usec.
Executing operator "rx" with param "^/db_backup.\\d{10}" against REQUEST_FILENAME.
Target value: "/db_backup.1331084275/"
Operator completed in 8 usec.
Setting variable: ip.malicious_client=1
Set variable "ip.malicious_client" to "1".
Resolved macro %{matched_var} to: /db_backup.1331084275/
Warning. Pattern match "^/db_backup.\\d{10}" at REQUEST_FILENAME. [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "5"] [id "999006"] [msg "HoneyTrap Alert: Disallowed robots.txt Entry Accessed."] [data "/db_backup.1331084275/"]
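
Flagging the client via setvar is only half the job; later rules must act on the ip.malicious_client variable. Here is a minimal sketch of how that could look (the rule IDs, the initcol setup and the one-day expiry are my own illustrative choices, not part of the recipe):

# Initialize the per-client IP collection so that variables such as
# ip.malicious_client persist across requests (requires SecDataDir).
SecAction "id:'999001',phase:1,t:none,nolog,pass,initcol:ip=%{REMOTE_ADDR}"

# Deny all further requests from a client flagged by the honeytrap rule,
# and let the flag expire after 24 hours.
SecRule IP:MALICIOUS_CLIENT "@eq 1" \
    "id:'999007',phase:1,t:none,log,deny,msg:'HoneyTrap Alert: Request from Flagged Malicious Client.',expirevar:ip.malicious_client=86400"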

Implement Fake Basic Authentication

An extension to the concept of a fake directory is to add a layer of fake authentication. This is useful on two fronts:

  1. By replying to the request with a 401 Authorization Required HTTP response code, you are making the honeytrap resource appear more real.
  2. When faced with an authentication challenge response, attackers will often attempt to either manually enter some default username and password combinations or try a fully automated attack. In this scenario, we have won the battle: there is no correct authentication combination, and the attacker is wasting their time attempting to brute force the credentials.

We can update the previous ModSecurity SecRule to include this fake authentication response by changing the phase, adding a deny action and instructing ModSecurity to issue a 401 response code.

SecRule REQUEST_FILENAME "^/db_backup.\d{10}" "id:'999011',phase:3,t:none,log,deny,status:401,msg:'HoneyTrapAlert: Disallowed robots.txt Entry Accessed.',logdata:'%{matched_var}',setvar:ip.malicious_client=1, setenv:basic_auth=1" Header always set WWW-Authenticate "Basic realm=\"Admin\"" env=basic_auth

Note that when this rule triggers, it also sets an Apache environment variable. The final Apache Header directive is then conditionally executed only if that environment variable is set. The Header command adds the WWW-Authenticate response header.
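
The net effect is that the honeytrap URL now answers with a standard Basic authentication challenge, along these lines (response abbreviated):

HTTP/1.1 401 Authorization Required
WWW-Authenticate: Basic realm="Admin"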

Now, when an attacker decides to access our honeytrap resource from the robots.txt file, they will be greeted with an HTTP Basic Authentication pop-up box.


If the attacker attempts to authenticate to our honeytrap resource, we can then use the following ruleset to extract and decode the credentials.

SecRule REQUEST_FILENAME "^/db_backup.\d{10}" \  "chain,id:'999012',phase:1,t:none,log,msg:'HoneyTrap Alert: Authentication Attempt to Fake Resource.',logdata:'Credentials used: %{matched_var}'"  SecRule REQUEST_HEADERS:Authorization "^Basic (.*)" "chain,capture"    SecRule TX:1 ".*" "t:base64Decode"

The last rule uses the base64Decode transformation function to decode the submitted data in the Authorization request header. Here is a debug log section showing how this processing works:

Recipe: Invoking rule b7aae038; [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "12"] [id "999012"].
Rule b7aae038: SecRule "REQUEST_FILENAME" "@rx ^/db_backup.\\d{10}" "phase:1,deny,chain,id:999012,t:none,log,msg:'HoneyTrap Alert: Authentication Attempt to Fake Resource.',logdata:'Credentials used: %{matched_var}'"
Transformation completed in 1 usec.
Executing operator "rx" with param "^/db_backup.\\d{10}" against REQUEST_FILENAME.
Target value: "/db_backup.1331278051/"
Operator completed in 4 usec.
Rule returned 1.
Match -> mode NEXT_RULE.
Recipe: Invoking rule b7aaedc8; [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "13"].
Rule b7aaedc8: SecRule "REQUEST_HEADERS:Authorization" "@rx ^Basic (.*)" "chain,capture"
Transformation completed in 0 usec.
Executing operator "rx" with param "^Basic (.*)" against REQUEST_HEADERS:Authorization.
Target value: "Basic YWRtaW46UGFzc3dvcmQxMjM0"
Added regex subexpression to TX.0: Basic YWRtaW46UGFzc3dvcmQxMjM0
Added regex subexpression to TX.1: YWRtaW46UGFzc3dvcmQxMjM0
Operator completed in 24 usec.
Rule returned 1.
Match -> mode NEXT_RULE.
Recipe: Invoking rule b7aaf368; [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "14"].
Rule b7aaf368: SecRule "TX:1" "@rx .*" "t:base64Decode"
T (0) base64Decode: "admin:Password1234"
Transformation completed in 7 usec.
Executing operator "rx" with param ".*" against TX:1.
Target value: "admin:Password1234"
Operator completed in 3 usec.
Resolved macro %{matched_var} to: admin:Password1234
Warning. Pattern match ".*" at TX:1. [file "/etc/apache2/modsecurity-crs/base_rules/modsecurity_crs_15_custom.conf"] [line "12"] [id "999012"] [msg "HoneyTrap Alert: Authentication Attempt to Fake Resource."] [data "Credentials used: admin:Password1234"]

As you can see from the log entries above, we are able to identify that the attacker sent the following credentials:

  • Username = admin
  • Password = Password1234
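
Since no credential combination is valid, every one of these attempts is brute-force noise. If you want to quantify how much time an attacker is wasting, a rough sketch along the following lines could count attempts per client (the rule IDs and the threshold are illustrative, and it assumes the IP collection initialization shown earlier):

# Count Basic Authentication attempts against the fake resource, per client.
SecRule REQUEST_FILENAME "^/db_backup.\d{10}" \
    "chain,id:'999013',phase:1,t:none,nolog,pass"
    SecRule REQUEST_HEADERS:Authorization "^Basic" \
        "setvar:ip.auth_attempts=+1,expirevar:ip.auth_attempts=3600"

# Escalate once a client has made more than ten attempts in an hour.
SecRule IP:AUTH_ATTEMPTS "@gt 10" \
    "id:'999014',phase:1,t:none,log,deny,msg:'HoneyTrap Alert: Brute Force Attack Against Fake Resource.'"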

Conclusion

As you can see from this example, setting honeytraps can be quite easy and can provide great value. Not only are we able to identify when an attacker is attempting to access restricted resources, but we are also able to force the attacker to waste valuable time attempting to brute force our fake authentication. This buys valuable time for web defenders to respond to the threat. Keep an eye out for more examples of Setting HoneyTraps with ModSecurity.