
SpiderLabs Blog

Attracting more than a half-million annual readers, this is the security community's go-to destination for technical breakdowns of the latest threats, critical vulnerability disclosures and cutting-edge research.

Web Applications and Internal Penetration Tests

Until recently, I really didn't care about web applications on an internal penetration test. Whether as an entry point or a target, I was not interested, since I typically had far better targets and could compromise the network anyway. However, times have changed: internal environments are much more restricted, fewer services are exposed, and applications are often the main reason for the test. This is not meant to be a guide to analyzing web applications, just some thoughts on dealing with web apps during internal pentests.

Reaching the Apps

At SpiderLabs, we normally perform our internal tests remotely, mostly over the SSH protocol, and that is another thing that kept us away from web applications: the terminal. The command line is certainly not the ideal environment for web application assessment, but there are tools and tricks that can help with the task.

As mentioned, one of the challenges of testing applications on an internal engagement is accessing the network remotely. Using SSH, we have a few alternatives. For instance, it is possible to set up a SOCKS proxy with the -D flag, or to use -X to forward an X11 display over the connection (although even with compression, I do not recommend the latter).
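As a rough sketch, the two approaches look like this. The jump host name and local port are placeholders, not from any real engagement:

```shell
# Open a dynamic (SOCKS) proxy on local port 1080 through the jump host.
# -N: run no remote command, -f: background after auth, -C: compress traffic.
ssh -D 1080 -C -N -f user@jumphost

# Alternatively, forward X11 so graphical tools (e.g. a browser) display
# locally. Even with -C, X11 over a slow link is usually painful.
ssh -X -C user@jumphost
```

Tools without native SOCKS support can then be wrapped with proxychains after pointing its configuration (typically /etc/proxychains.conf) at 127.0.0.1:1080.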

Once we have worked out the best way in, it is time to start.

Entry Point

Using the proxy established by the SSH connection, you are now able to use any web assessment tool. However, you may find countless web servers and applications, so identifying security issues in each one is not feasible on a time-limited test. Every test is largely guided by the tester's experience, and that is true in this situation as well. I like to use the WhatWeb tool to identify the technology used by the application and server. Another interesting point: when your access is internal, you will often find these web applications unprotected, with no WAF to interfere with your probing.

Figure 1: Running WhatWeb on an internal network.
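A minimal invocation might look like the following, assuming the SOCKS proxy from the previous section is listening on 127.0.0.1:1080 and proxychains is configured to use it; the target range is purely illustrative:

```shell
# Fingerprint every responding host in the range through the SOCKS tunnel.
# --no-errors suppresses connection errors for hosts with no web server.
proxychains whatweb --no-errors 10.10.0.0/24
```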

That gives me some idea of what I can expect from an application and whether it is worth taking the time to analyze it. Remember that the test is time-limited, so we can't waste time, we need to pwn ;) A big part of web assessment on internal pentests is choosing which applications you want to investigate. What you decide to analyze will be fundamental to the final results of the engagement.

I normally check for well-known applications and odd or old ones. I also look for anything that rings a bell, such as directory indexing enabled, default files, verbose errors, etc. It is common to identify legacy applications, which are usually a gold mine.

After deciding which applications to target (maybe choosing more than one), I perform brute-force directory discovery and crawling with the dirb tool, another command-line utility that is very helpful in this situation.

Figure 2: Using the dirb utility with the default wordlist.
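A sketch of such a run, again through proxychains (the target IP is an example, and the wordlist path is dirb's usual default on Kali-style installs):

```shell
# Brute-force directories and files on a chosen target. dirb's own -p flag
# expects an HTTP proxy, so the SOCKS tunnel is wrapped with proxychains.
proxychains dirb http://10.10.0.15/ /usr/share/dirb/wordlists/common.txt
```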

 

You will often find interesting material during this initial phase, including unprotected management consoles, LFI/RFI vulnerabilities, repositories, frameworks, sensitive information, etc.

Figure 3: RCE found using an open Jenkins application.

On an internal penetration test, I am primarily concerned with high- and critical-risk vulnerabilities, and when you don't have to worry about a WAF there is a lot you can look for. This works in our favor when exploiting unexpected behavior in web apps, such as SQL injection (SQLi). A well-known tool for SQL injection exploitation can also be used for identification: the popular sqlmap. With the --crawl flag, the utility will crawl the application to the specified depth in search of SQLi vulnerabilities, and the --batch flag automates the process so there are no questions to answer during the analysis.
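Putting those flags together, a typical invocation might look like this (the target URL is illustrative, and the proxy address assumes the SOCKS tunnel set up earlier):

```shell
# Crawl the application three links deep, test discovered parameters for
# SQL injection, and answer all prompts with their defaults (--batch).
sqlmap -u "http://10.10.0.15/" --crawl=3 --batch --proxy="socks5://127.0.0.1:1080"
```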

Target Apps

If you are familiar with red team engagements, you know that the targets are mainly defined by applications. Even with access to a server, dealing with the information on it is not that straightforward. With desktop applications, we can often collect a lot of information from process memory, but that is not as easy with web browsers as it is with in-house desktop apps. However, in web browsers we have access to many other types of data to analyze, such as favorite websites, saved passwords, cached sessions, etc. Having desktop access makes that job easier, and you can use any of the common remote access tools available, like VNC, RDP, LogMeIn, and others.

I like to check the home directories to identify which users have access to a specific machine, but as you explore you will probably find your own favorite sources of data!
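On a compromised Linux host, that first look can be as simple as the following; the browser profile paths are the standard Firefox and Chrome locations, shown here only as examples of where to dig next:

```shell
# List home directories to see which users log in to this machine.
ls -la /home/

# Then look for browser profiles as a starting point for saved passwords,
# bookmarks, and cached sessions (errors suppressed for absent browsers).
ls -la /home/*/.mozilla/firefox/ 2>/dev/null
ls -la /home/*/.config/google-chrome/Default/ 2>/dev/null
```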

Conclusion

There is no turning back: web applications are here to stay. With cloud environments, we may not have many other services to interact with, so we have to keep our ability to work with web apps sharp. Hopefully this post gives you some new ideas for your next internal pentest and some tools to practice with in the meantime.
