SpiderLabs Blog

Using Nmap to Screenshot Web Services

Written by Ryan Linn | Jun 14, 2012 7:44:00 AM

As part of the Trustwave SpiderLabs network penetration testing team, I perform many internal penetration tests each year, and on those tests we see a lot of web servers. Some are internal portals like SharePoint, others are non-production copies of production systems, and some are random systems people have set up to test other applications. So, while we're mapping the internal network, it's not always easy to figure out which of the 400 web services are worth examining more closely. While I was thinking about this, I remembered something Russ Swift had shared with me: couldn't you take screenshots with Nmap?

The answer is yes. You can do anything with the Nmap Scripting Engine (NSE) that you can do with Lua, the language NSE is based on. So, I set out to build an NSE screenshot tool. Unfortunately, I couldn't find a good library with Lua bindings for rendering web pages into PNG files, so I needed some additional help. Thanks to a suggestion from Nate Drier, another SpiderLabs team member, I found a tool that uses the WebKit libraries to take screenshots: the wkhtmltoimage tool, part of the wkhtmltopdf project.

Because the module uses an external binary to perform most of its work, I don't think it's suitable for inclusion in the standard Nmap scripts, but I have shared it as part of the SpiderLabs Git repository here: https://github.com/SpiderLabs/Nmap-Tools .

I'll walk you through installing the prerequisites, then we'll take this for a test drive with a penetration testing scenario. For this exercise, I'll assume you're using BackTrack 5.


First, let's get wkhtmltoimage:


wget http://wkhtmltopdf.googlecode.com/files/wkhtmltoimage-0.11.0_rc1-static-i386.tar.bz2
tar -jxvf wkhtmltoimage-0.11.0_rc1-static-i386.tar.bz2
cp wkhtmltoimage-i386 /usr/local/bin/
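Before wiring it into Nmap, it's worth a quick sanity check that the binary runs and can render a page on its own, since this is essentially what the NSE script shells out to for each web port it finds. The URL here is just a placeholder; any reachable site will do:

# Render a page to a PNG to confirm wkhtmltoimage works from /usr/local/bin
wkhtmltoimage-i386 http://www.example.com/ test.png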

Next, let's get and install the Nmap module:

git clone git://github.com/SpiderLabs/Nmap-Tools.git
cd Nmap-Tools/NSE/
cp http-screenshot.nse /usr/local/share/nmap/scripts/
nmap --script-updatedb
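To confirm Nmap picked up the new script, you can ask for its help entry (available on recent Nmap versions):

nmap --script-help http-screenshot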

Now, let's see what web services are listening. My test network uses 192.168.1.0/24. We want to run an Nmap scan of that network with the default scripts as well as our screenshot module. To do that, we type:

nmap -A --script=default,http-screenshot 192.168.1.0/24 -oA nmap-local

This launches an Nmap scan with OS detection, version detection, traceroute, and script scanning using the default scripts as well as the http-screenshot module. It scans the 192.168.1.0/24 network and writes three output files: the normal Nmap format, the XML format, and the grepable format.

Why all three? If we want to feed the output to other tools such as Metasploit, they expect the XML format as input. The grepable format makes finding hosts with certain ports open very easy; for instance, to find all the hosts with port 80 open you could run: grep 80/open nmap-local.gnmap . Finally, sometimes we just want to browse through the results, and the normal Nmap format is the most human-readable of the three, so it's handy to keep around.
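For example, importing the XML output into Metasploit from the command line might look like this (assuming msfconsole already has a database connected):

msfconsole -q -x "db_import nmap-local.xml; hosts; exit"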

Once the scan completes, you can see the script has saved a screenshot of the open HTTP port to the file screenshot-nmap-192.168.1.1:80.png. For each web port (HTTP or HTTPS) a screenshot file is created, and a note is added to the Nmap output indicating the filename it was saved under.

[Screenshot example for http]

So now that we have our scan directory full of PNG screenshots of the web sites on our local network, we need a quick way to view them. One easy way is to script together a quick web page with the preview images on it. Here's an example:

#!/bin/bash
# Build a simple HTML page that previews every screenshot in the current directory
printf "<HTML><BODY><BR>" > preview.html
# Files are named screenshot-nmap-<ip>:<port>.png; print the filename as a caption,
# then embed the image with the colon URL-encoded so the browser requests the right file
ls -1 *.png | awk -F : '{ print $1":"$2"\n<BR><IMG SRC=\""$1"%3A"$2"\" width=400><BR><BR>"}' >> preview.html
printf "</BODY></HTML>" >> preview.html

When we look at preview.html in a web browser, we see this:


[Sample Web Page]

Based on the results, we see there are a few web-based services on the local network. Two of them show the default Apache page, letting us know the server doesn't have a specific default site but may have sub-sites or pages under it. Then we see two other services that are active and worth additional investigation: a D-Link gateway page at 192.168.1.1 on port 80, and a page that requires JavaScript at 192.168.1.102 on port 8080.

Based on this quick example, it's easy to see how this method helps pick out the things we'd like to investigate further when hundreds of web sites turn up. Sites that are frequently interesting include SharePoint sites, Tomcat admin pages, Cisco setup pages, and internal wiki pages. Let us know if you find this useful, or if you have suggestions on how to take this concept to the next level!