SpiderLabs Blog

(Response) Splitting Up Reverse Proxies To Reach Internal Only Paths

When I’m carrying out security research into a thing, I generally don’t like to Google prior research right away. I know, this completely goes against how you would (and should!) carry out any research: starting with a literature review to find the lay of the land and the existing research done in the area, to then expand upon it. However, I have a habit of getting that light bulb idea or concept and acting upon it right away, rolling up my sleeves and putting my wellies on, ready to get dirty. This sometimes (a lot of times!) ends up coming back to bite me, but my reason for doing so is that I don’t want to be swayed by other people. I want to explore my research idea pure and without bias, arriving at the destination without being led off course by others. If I go down the wrong rabbit hole, I want it to be because I’ve decided to jump down it, before I then come out again only to jump down another. It is not until the end, when the idea has worked out (or hasn’t, how I had hoped), that I then start googling.

With that prelude/introduction, I want to present you this little bit of research.

Many web applications these days sit behind reverse proxies. This allows loads of good stuff to happen independently from the back-end application server – load balancing, deploying WAF rules, setting up routing rules, rewriting requests/responses, etc. However, with these setups comes an increase in complexity, and with that, an increase in the available attack surface. James Kettle illustrated this beautifully in his HTTP Desync Attacks [1] research back in 2019. James was abusing the link between the front-end and the back-end, exploiting differences in how the two handled the Content-Length and Transfer-Encoding headers.

It is at this point we start the journey, in NGINX, probably one of the most popular reverse proxies out there… in nginx/src/http/modules/ngx_http_proxy_module.c to be precise. I like looking for edge cases; I’m all about the edge cases. Finding and exploiting vulnerabilities is all about thinking a little differently – you have to assume lots of eyes have already been over this open source code. I’m looking for weird obscure things, legacy support for things, features which no one uses, and sources of user input which end up somewhere else interesting.

For those of you not familiar with HTTP, messages consist of both a header section and a body section, for both requests and responses. When I say ‘body’ in requests, I’m referring to POST requests with post data sent in the body section. Typically a ‘body’ is associated with a response in HTTP, though.
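To make that structure concrete, here is a minimal Python sketch (my own illustration, not part of the proof of concept) splitting a raw HTTP POST request into its header and body sections at the blank line that separates them:

```python
# A raw HTTP POST request: a header section, a blank line (CRLF CRLF),
# then the body.
raw_request = (
    "POST /login HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Content-Length: 27\r\n"
    "\r\n"
    "username=admin&password=123"
)

# The header section ends at the first blank line.
headers, body = raw_request.split("\r\n\r\n", 1)
print(headers.splitlines()[0])  # POST /login HTTP/1.1
print(body)                     # username=admin&password=123
```

The same blank-line boundary applies to responses, which is why smuggling CRLF sequences into headers is so interesting.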

As my mince pie crumbles over my keyboard, these bad boys catch my eye as I scroll through ngx_http_proxy_module.c. These are some proxy-related HTTP response headers used by NGINX.




I must have missed the memo, but I knew nothing (before this blog post obviously!) about the “X-Accel” headers, especially the “X-Accel-Redirect” header which looks worth poking at. Because I am at this stage a little clueless, I take a trip to NGINX’s documentation [2] and come across the following…

“’X-Accel-Redirect’ performs an internal redirect to the specified URI;”

You had me at “performs an internal redirect” Jerry. [3]

That light bulb moment… can I abuse this feature to access internal things?

We need a proof of concept, or it didn’t happen. My fingers are already in a terminal typing apt-get install nginx before I can do anything else.

I set up NGINX to be the front-end web server on TCP port 80, forwarding requests from the user onto the back-end server on TCP port 8000 where a Flask application (yet to code at this point!) will answer, namely for requests to / and /publicdownload/ (as per the NGINX config in the image below). The Flask application will receive requests for /publicdownload/<file_id> and serve files out of the back-end at /root/publicdownload/<file_id>, returning the data back to NGINX to then provide to the requesting user.




There is also an internal-only path called /secret/ which serves files directly out of /tmp/secret/ from the NGINX front-end server - this will be our flag for the proof of concept. It is not possible to call or reach this /secret/ path directly (specifically the file “/secret/123”) from an external perspective – a 404 is returned, as shown below.
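For reference, a minimal NGINX config along these lines might look as follows. This is my own reconstruction of the setup described above, not the config from the screenshots; the paths and ports are as per the text, and the back-end address is assumed to be local:

```nginx
server {
    listen 80;

    # Public paths are proxied to the Flask back-end on TCP port 8000
    # (assumed here to be on the same host).
    location / {
        proxy_pass http://127.0.0.1:8000;
    }

    location /publicdownload/ {
        proxy_pass http://127.0.0.1:8000;
    }

    # Internal-only: direct external requests get a 404, but the block
    # is reachable via an internal redirect such as X-Accel-Redirect.
    location /secret/ {
        internal;
        alias /tmp/secret/;
    }
}
```

The `internal` directive is what makes /secret/ unreachable from outside while leaving it available to NGINX itself.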




This is typical of how we’d see NGINX used as a reverse proxy out in the wild, minus the flag bit.

To emulate an HTTP Response Splitting vulnerability [4], I have made the receiving Flask application I then coded up take two optional parameters along with the file request: ‘goodness1’ and ‘goodness2’. A new HTTP response header will be created using both of these values, with the ‘goodness1’ input used to construct the header name and ‘goodness2’ the value.
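The Flask code itself is in the screenshot, but the behaviour it emulates can be sketched as a plain function (a stand-in for the author’s app, not their actual code): given a request URI, build the response headers, reflecting the optional goodness1/goodness2 parameters into an arbitrary header.

```python
from urllib.parse import parse_qs, urlparse

def build_response_headers(request_uri: str) -> dict:
    """Emulate the proof-of-concept back-end: if the optional
    'goodness1'/'goodness2' query parameters are present, reflect them
    as a new response header (name from goodness1, value from goodness2)."""
    query = parse_qs(urlparse(request_uri).query)
    headers = {"Content-Type": "application/octet-stream"}
    if "goodness1" in query and "goodness2" in query:
        headers[query["goodness1"][0]] = query["goodness2"][0]
    return headers

# An attacker-controlled request injects an X-Accel-Redirect header:
uri = "/publicdownload/123?goodness1=X-Accel-Redirect&goodness2=/secret/123"
print(build_response_headers(uri))
```

A normal request without the two parameters produces only the usual headers; the injection path exists purely to fake the response-splitting bug.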




In a real world scenario, you’d inject %0D%0A (Carriage Return Line Feed) characters into parameters you find are being echoed back into a response header to then create a new one. In my experience this is typically when an application is creating a ‘Set-Cookie’ response header based on user input. So in this proof of concept I’m using these optional parameters to get to that point (to fake that CRLF vulnerability) but we get to the same destination – an arbitrary HTTP response header is inserted via abusing an application vulnerability (HTTP Response Splitting).
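To show what that real-world CRLF case looks like, here is a small illustrative snippet (hypothetical vulnerable code, written for this explanation): an app reflects user input into a Set-Cookie header without stripping CR/LF, so input containing a CRLF sequence smuggles in a second, attacker-controlled header.

```python
def vulnerable_set_cookie(user_input: str) -> str:
    """Reflect user input into a Set-Cookie response header without
    sanitising CR/LF characters - the classic response-splitting bug."""
    return f"Set-Cookie: session={user_input}\r\n"

# Benign input yields one header...
print(vulnerable_set_cookie("abc123"))

# ...but input containing CRLF splits the header, injecting a new one.
payload = "abc123\r\nX-Accel-Redirect: /secret/123"
print(vulnerable_set_cookie(payload))
```

The second call emits two headers where the developer intended one, which is exactly the primitive the optional parameters in this proof of concept stand in for.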

When a normal user requests http://server/publicdownload/123, this first hits the NGINX front-end server on TCP port 80. NGINX checks its config and forwards it to the back-end Flask application server on TCP port 8000. Flask will see if the file “123” exists in /root/publicdownload/ and will return the response to NGINX, which in turn sends it to the requesting user (123 public file). The raw HTTP request/response of this is shown below.




I’ve illustrated these flows in my scribbles/the diagram below.




A hacker requests http://server/publicdownload/123 but does so whilst exploiting an HTTP Response Splitting vulnerability - in this proof of concept, this be via our optional parameters (yarrr!). They abuse the Flask application vulnerability to create an arbitrary response header. The request http://server/publicdownload/123?goodness1=X-Accel-Redirect&goodness2=/secret/123 is made to exploit this (in real life this would be “blah%0D%0AX-Accel-Redirect:%20/secret/123”, as mentioned before). The “123” file is served out of /root/publicdownload/ as usual (and as before) by the back-end Flask application server. This response is then returned to NGINX as before; the difference here is that a new HTTP response header of “X-Accel-Redirect: /secret/123” comes back with it. NGINX sees this header and, as per the documentation, uses it to redirect to the internal path instead. NGINX will dip its hand into the /tmp/secret/ hat locally and retrieve the “123” file instead (123 private file), and then provide this back to the requesting user, I mean hacker.
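The percent-encoded form of that payload can be derived mechanically; a quick sketch with Python’s standard library (my own illustration of the encoding step):

```python
from urllib.parse import quote

# The raw response-splitting payload: a dummy value, a CRLF, then the
# attacker-supplied X-Accel-Redirect header.
payload = "blah\r\nX-Accel-Redirect: /secret/123"

# Percent-encode for use in a URL, keeping ':' and '/' literal so the
# injected header stays readable.
encoded = quote(payload, safe=":/")
print(encoded)  # blah%0D%0AX-Accel-Redirect:%20/secret/123
```

The %0D%0A pair is the CRLF that terminates the legitimate header and starts the injected one.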




…and there you have it, we were able to reach the internal /secret path and access the “/secret/123” file which was previously forbidden by utilizing a feature of NGINX.



From neuron light bulb to a digital machine frosted proof of concept.

It is at this point that I fire up the old web browser and google “x-accel-redirect” and add the all important key word of “security” into the mix. Like some kind of mechanical squeaky metal sliding doors opening to reveal the prize, resembling something out of the 80s British television show “Bullseye” [5], this is when I find out if I’m the first to stumble on this from a security perspective.

It would appear I really did miss the memo, as there is a really good blog post from Detectify in 2021 [6] that covers the insecurities of middleware setups, and it mentions that this very “X-Accel-Redirect” response header can be used to access internal NGINX blocks. It doesn’t, however, go into the scenarios in which one could take advantage of this. You have to read between the lines of the issue, as it is all covered in one sentence, as part of a bigger, wider piece of research:

“By using the X-Accel-Redirect response header, we can make NGINX redirect internally to serve another config block, even ones marked with the internal directive.”

I thought about leaving my research there, like my proof of concept never happened. However, it then dawned on me – if I didn’t get the memo, perhaps a bunch of other people didn’t either – like you there, reading this blog post.

I think there is still some value in publishing this: both to convey that not all research leads to happy endings (aka CVEs), and that there is as much joy to be found in the journey too. Coming up with an idea or hypothesis, turning it into a working proof of concept, and being able to then answer those initial questions. In this case I was 2 years too late to the party, but I take away that my methodology does work and I had fun knocking up the proof of concept… and I’m here for the fun.

As always, thanks for reading! 






