
Tue, 5 Aug 2014

SensePost partners with Paterva to offer improved security intelligence

We've been big fans of Maltego and the team at Paterva for a very long time now, and we frequently use this powerful tool for all kinds of fun and interesting work.

We go way back with Andrew and Roelof, who was in fact a founder of SensePost, so today we're super excited to be able to announce a new, strengthened partnership with them, under which we have been accredited as an Approved Maltego Solution Provider. Basically this means that, with Paterva's help, we plan to use the powerful Maltego toolset to become better at our job - that is, to provide our customers with information and information systems with which they can make sound security decisions. Here's the official news:
SensePost today is proud to announce the completion of a contract that will see the company recognized as the world's first “Approved Maltego Solution Provider” (AMSP) and the exclusive provider of this kind in the UK and Southern Africa.


SensePost was founded in 2000 and has developed into one of the world's leading Information Security Services companies, with offices in London, Cape Town and Pretoria. As trusted advisors it has always been our mission to provide our customers with insight, information and systems to enable them to make strong decisions about Information Security that support their business performance. Whilst this mission has traditionally expressed itself in technical security analysis services like Vulnerability Assessment and Penetration Testing, we recognise that the threat landscape is constantly changing and that new and more complex realities necessitate the use of sophisticated new skills, tools and techniques with which to support our clients.


“This strategic alliance perfectly fits the ‘Assess-Detect-Protect-Respond' framework that drives the way we design, sell and deliver our services. It's the perfect evolution of our growing services offering,” says Etienne Greef, CEO of the SensePost group holding company SecureData, whose strategy is at the core of this new initiative.


‘Maltego', built by Paterva, is a powerful suite of software tools used for data mining, link analysis and data visualization, giving the user the ability to extract large volumes of data from diverse sources and then analyze it to understand the patterns and relationships it reveals. In the modern digital age these techniques are used to convert data into information and thereby extract concrete value that can be used for effective decision-making.


Maltego is a highly regarded and popular platform used extensively in Open Source Intelligence Gathering, Infrastructure Analysis for Penetration Testing, Cyber Attack Analysis, Fraud Detection and Investigation, Security Intelligence, Information Security Management, Research and more.


This partnership between SensePost and Paterva (who produce the Maltego software) builds on the companies' shared roots and intellectual heritage and will allow both companies to serve their customers and fulfil their respective missions better.


As an AMSP SensePost will be authorised to provide integration, consulting, support and training for the Maltego tools with full endorsement, support and assistance directly from Paterva. This new capability, combined with an existing wealth of information security skills and experience, uniquely positions SensePost to advise and support clients seeking to exploit the unique strategic advantage the Maltego toolset can offer.


More information on our services and capabilities in this space will follow with our official "launch" in a few weeks' time. In the meantime, here's a brief summary of our new offering.

Fri, 13 Jun 2014

Release the hounds! Snoopy 2.0

Friday the 13th seemed like as good a date as any to release Snoopy 2.0 (aka snoopy-ng). For those in a rush, you can download the source from GitHub, follow the README.md file, and ask for help on this mailing list. For those who want a bit more information, keep reading.

What is Snoopy?


Snoopy is a distributed sensor, data collection, interception, analysis, and visualization framework. It is written in a modular format, allowing for the collection of arbitrary signals from various devices via Python plugins.


It was originally released as a PoC at 44Con 2012, but this version is a complete re-write, is 99% Python, modular, and just feels better. The 'modularity' is possibly the most important improvement, for reasons which will become apparent shortly.


Tell me more!


We've presented our ongoing work with Snoopy at a bunch of conferences under the title 'The Machines that Betrayed Their Masters'. The general synopsis of this research is that we all carry devices with us that emit wireless signals which, as the short sketch after this list illustrates, could be used to:

  • Uniquely identify the device / collection of devices

  • Discover information about the owner (you!)
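
To make the first point concrete, here's a minimal sketch (not Snoopy code; the interface name mon0 is just an assumption about your setup) that uses scapy to watch the Wi-Fi probe requests nearby devices broadcast. Each probe leaks the device's MAC address and, often, the names of networks it has previously joined:

#!/usr/bin/env python3
# Minimal probe-request sniffer - illustrative only, not part of Snoopy.
# Assumes a wireless card already in monitor mode as "mon0".
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq) and pkt.haslayer(Dot11Elt):
        ssid = pkt[Dot11Elt].info.decode("utf-8", "replace")
        if ssid:
            # addr2 is the transmitting device's MAC address
            print("%s is probing for %r" % (pkt.addr2, ssid))

sniff(iface="mon0", prn=handle, store=0)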


This new version of Snoopy extends this into other areas of RF, such as Wi-Fi, Bluetooth, GSM, NFC, RFID, ZigBee, etc. The modular design allows each of these to be implemented as a Python module. If you can write Python code to interface with a tech, you can slot it into a snoopy-ng plugin.
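
For a feel of what "slotting into a plugin" means, here's a rough conceptual sketch. The class and method names below are illustrative assumptions on our part; the real contract is defined by the plugins bundled in the snoopy-ng repository and the README.md.

# Conceptual sketch of a snoopy-ng plugin - names here are assumptions;
# see the plugins shipped in the repository for the actual interface.
import time

class Snoopy(object):
    """Toy plugin that emits a heartbeat row once a minute."""

    def __init__(self, **kwargs):
        self.last = 0

    def start(self):
        pass        # open your radio / socket / device here

    def stop(self):
        pass        # and release it here

    def get_data(self):
        # Polled periodically by the framework; return rows destined for
        # the local datastore (and, later, the sync server).
        now = int(time.time())
        if now - self.last < 60:
            return []
        self.last = now
        return [("example_heartbeat", [{"timestamp": now}])]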


We've also made it much easier to run Snoopy by itself, rather than requiring a server to sync to as the previous version did. However, Snoopy is still a distributed framework and allows the deployment of numerous Snoopy devices over some large area, having them all sync their data back to one central server (or via numerous hops through multiple devices and/or servers). We've been working on other protocols for data synchronisation too - such as XBee. The diagram below illustrates one possible setup:


Architecture Diagram

OK - but how do I use it?


I thought you'd never ask! It's fairly straightforward.

Hardware Requirements


Snoopy should run on most modern computers capable of running Linux, with the appropriate physical adapters for the protocols you're interested in. We've tested it on:

  • Laptop

  • Nokia N900 (with some effort)

  • Raspberry Pi (SnooPi!)

  • BeagleBone Black (BeagleSnoop!)


In terms of hardware peripherals, we've been experimenting with the following:
Technology | Hardware      | Range
Wi-Fi      | AWUS 036H     | 100m
Bluetooth  | Ubertooth     | 50m
ZigBee     | Digi XBee     | 1km to 80km
GSM        | RTL2832U SDR  | 35km
RFID       | RFIDler       | 15cm
NFC        | ACR122U       | 10cm


The distances can be increased with appropriate antennas. More on that in a later blog post.

Software Requirements


Essentially a Linux environment is required, but of more importance are the dependencies. These are mostly Python packages. We've tested Snoopy on Kali 1.x and Ubuntu 12.04 LTS. We managed to get it working on Maemo (N900) too. We're investigating getting it running on OpenWrt/DD-WRT. Please let us know if you have success.

Installation


It should be as simple as:
git clone https://github.com/sensepost/snoopy-ng.git
cd snoopy-ng
bash ./install.sh

Usage


Run Snoopy with the command 'snoopy', and accept the License Agreement. We'd recommend you refer to the README.md file for more information, but here are a few examples to get you going:


1. To save data from the wireless, sysinfo, and heartbeat plugins locally:

snoopy -v -m wifi:iface=wlanX,mon=True -m sysinfo -m heartbeat -d <drone name> -l <location name>

2. To sync data from a client to a server:


Server:

snoopy_auth --create <drone name> # Create account
snoopy -v -m server # Start server plugin

Client:
snoopy -v -m wifi:iface=mon0 -s http://<server hostname>:9001/ -d <drone name> -l <location name> -k

Data Visualization


Maltego is the preferred tool to perform visualisation, and is where the beauty of Snoopy is revealed. See the README.md for instructions on how to use it.

I heard Snoopy can fly?


You heard right! Well, almost right. He's more of a passenger on a UAV:



There sure is a lot of stunt hacking in the media these days, with people taking existing hacks and duct-taping them to a cheap drone for media attention. We were concerned to see stories on airborne Snoopy take on some of this flavour as the message worked its way through the media. What's the benefit of having Snoopy airborne, then? We can think of a few reasons:


  1. Speed: We can canvass a large area very quickly (many square kilometres)

  2. Stealth: At 80m altitude the UAV is out of visual/audible range

  3. Security: It's possible to bypass physical security barriers (walls, men with guns, dogs)

  4. TTL (Tag, Track, Locate): It's possible to search for a known signature, and follow it


We're exploring the aerial route a whole lot. Look out for our DefCon talk in August for more details.

Commercial Use


The license under which Snoopy is released forbids gaining financially from its use (see LICENSE.txt). We have a separate license available for commercial use, which includes extra functionality such as:

  • Syncing data via XBee

  • Advanced plugins

  • Extra/custom transforms

  • Web interface

  • Prebuilt drones


Get in contact (glenn@sensepost.com / research@sensepost.com) if you'd like to engage with us.

Using Maltego to explore threat & vulnerability data

This blog post is about the process we went through trying to better interpret the masses of scan results that automated vulnerability scanners and centralised logging systems produce. A good example of the value in getting actionable items out of this data is the recent Target compromise. Their scanning solutions detected the threat that led to their compromise, but no humans intervened. It's suspected that too many security alerts were being generated on a regular basis for anyone to act upon.


The goal of our experiment was to steer away from the usual data interrogation questions of "What are the top N vulnerabilities my scanner has flagged with a high threat?" towards questions like "For how many of my vulnerabilities do public exploits exist?". Near the end of this exercise we stumbled across the BSides talk "Stop Fixing All The Things". These researchers took a similar viewpoint: "As security practitioners, we care about which vulnerabilities matter". Their blog post and video are definitely worth having a look at.


At SensePost we have a Managed Vulnerability Scanning service (MVS). It incorporates numerous scanning agents (e.g. Nessus, Nmap, Netsparker and a few others), and exposes an API to interact with the results. This was our starting point to explore threat-related data. We could then couple this data with remote data sources (e.g. CVE data, exploit-db.com data).


We chose to use Maltego to explore the data as it's an incredibly powerful data exploration and visualisation tool, and writing transforms is straightforward. If you'd like to know more about Maltego here are some useful references:


What we ended up building was:

  • Transforms to explore our MVS data

  • A CVE / exploit-db.com API engine

  • Transforms to correlate between scanner data and the created APIs (a small sketch of one such transform follows this list)

  • Maltego Machines to combine our transforms
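
As a flavour of how small a local transform can be, here's a rough sketch in Python. It takes a CVE entity's value from the command line, asks a REST endpoint for matching exploits, and prints the Maltego response XML back to the client. The endpoint URL and JSON field names are hypothetical stand-ins for our API rather than its actual interface.

#!/usr/bin/env python3
# Minimal Maltego local transform: CVE entity in, exploit entities out.
# The endpoint URL and JSON field names are hypothetical stand-ins.
import json
import sys
import urllib.request

cve = sys.argv[1]  # Maltego passes the selected entity's value as argv[1]

url = "http://localhost:5000/cve/%s/exploits" % cve
with urllib.request.urlopen(url) as resp:
    exploits = json.load(resp)

print("<MaltegoMessage><MaltegoTransformResponseMessage><Entities>")
for exploit in exploits:
    print("  <Entity Type='maltego.Phrase'>")
    print("    <Value>%s</Value>" % exploit["title"])
    print("  </Entity>")
print("</Entities></MaltegoTransformResponseMessage></MaltegoMessage>")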


So far our API is able to query a database populated from CVE XML files and data from www.exploit-db.com (they were kind enough to give us access to their CVE inclusive data set). It's a standalone Python program that pulls down the XML files, populates a local database, and then exposes a REST API. We're working on incorporating other sources - threat feeds, other logging/scanning systems. Let us know if you have any ideas. Here's the API in action:


Parsing CVE XML data and exposing REST API


Querying a CVE. We see 4 public exploits are available.
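
In spirit, the engine behaves something like the heavily simplified sketch below: pull down a CVE XML feed, load it into a local database, and serve lookups over REST. The feed file name, schema and endpoint layout here are illustrative assumptions, not the actual tool.

#!/usr/bin/env python3
# Heavily simplified sketch of a CVE lookup API - illustrative only.
import sqlite3
import xml.etree.ElementTree as ET
from flask import Flask, jsonify

DB = "cve.db"
VULN_NS = {"v": "http://scap.nist.gov/schema/vulnerability/0.4"}

def populate(xml_path):
    """Load a downloaded NVD CVE XML feed into a local SQLite table."""
    db = sqlite3.connect(DB)
    db.execute("CREATE TABLE IF NOT EXISTS cve (id TEXT PRIMARY KEY, summary TEXT)")
    for entry in ET.parse(xml_path).getroot():
        cve_id = entry.get("id")
        summary = entry.findtext("v:summary", default="", namespaces=VULN_NS)
        db.execute("INSERT OR REPLACE INTO cve VALUES (?, ?)", (cve_id, summary))
    db.commit()

app = Flask(__name__)

@app.route("/cve/<cve_id>")
def get_cve(cve_id):
    row = sqlite3.connect(DB).execute(
        "SELECT id, summary FROM cve WHERE id = ?", (cve_id,)).fetchone()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify({"id": row[0], "summary": row[1]})

if __name__ == "__main__":
    populate("nvdcve-2.0-recent.xml")  # feed downloaded separately
    app.run(port=5000)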


It's also worth noting that for the demonstrations that follow we've obscured our clients' names by applying a salted 'human readable hash' to their names. A side effect is that you'll notice some rather humorous entries in the images and videos that follow.


Jumping into the interesting results, these are some of the tasks that we can perform:


  • Show me all hosts that have a critical vulnerability within the last 30 days

  • Show me vulnerable hosts for which public exploit code exists

  • Show me all hosts for which a vulnerability exists that has the word 'jmx-console' in the description

  • Show me all hosts in my DMZ that have port 443 open

  • Given a discovered vulnerability on a host, show me all other hosts with the same vulnerability

  • Show me a single diagram depicting every MVS client, weighted by the threat of all scans within the last week

  • Show me a single diagram depicting every MVS client, weighted by the availability of public exploit code

  • Given a CPE, show me all hosts that match it


Clicking the links in the above scenarios will display a screenshot of a solution. Additionally, two video demonstrations with dialog are below.


Retrieving all recent vulnerabilities for a client 'Bravo Tango', and checking one of them to see if there's public exploit code available.


Exploring which clients/hosts have which ports open


In summary, building 'clever tools' that allow you to combine human insight with automation can be powerful. An experienced analyst who asks the right questions, backed by tools that make the answers easy to extract, yields actionable tasks in less time. We're going to start using this approach internally to find new ways to explore the vulnerability data sets of our scanning clients and see how it goes.


In the future, we're working on incorporating other data sources (e.g. LogRhythm, Skybox). We're also upgrading our MVS API - you'll notice a lot of the Maltego queries are cumbersome and slow due to its current linear exploration approach.


The source code for the API and the somewhat-PoC Maltego transforms can be downloaded from our GitHub page, and the MVS (BroadView) API from here. You'll need a paid subscription to incorporate the exploit-db.com data, but it's an initiative definitely worth supporting, with a very fair pricing model, and they put significant effort into correlating CVEs. See this page for more information.


Do get in touch with us (or comment below) if you'd like to know more about the technical details, chat about the API (or expand on it), if this is a solution you'd like to deploy, or if you'd just like to say "Hi".

Thu, 5 Jun 2014

Associating an identity with HTTP requests - a Burp extension

This is a tool that I have wanted to build for at least 5 years. Checking my archives, the earliest reference I can find is almost exactly 5 years ago, and I've been thinking about it for longer, I'm sure.


Finally it has made it out of my head, and into the real world!


Be free! Be free!


So, what does it do, and how does it do it?


The core idea for this tool comes from the realisation that, when reviewing how web applications work, it would help immensely to be able to know which user was actually making specific requests, rather than trying to just keep track of that information in your head (or not at all). Once you have an identity associated with a request, that enables more powerful analysis of the requests which have been made.


In particular, it allows the analyst to compare requests made by one user, to requests made by another user, even as those users log in and log out.


There are various ways in which users can be authenticated to web applications, and this extension doesn't try to handle them all, not just yet, anyway. It does handle the most common case, though, which is forms-based auth, with cookie-based session identifiers.


So, as a first step, it allows you to identify the "log in" action, extract the name of the user that is authenticating, and associate that identity with the session ID until it sees a "log out" action. Which is pretty useful in and of itself, I think. Who hasn't asked themselves, while reviewing a proxy history: "Now which user was I logged in as, when I made this request?" Or: "Where is that request that I made while logged in as 'admin'?"


Associating an identity with the requests


So, how does it do this? Unfortunately, the plugin doesn't have AI, or a vast database of applications all captured for you, with details of how to identify logins and logouts. But it does have the ability to define a set of rules, so you can tell it how your app behaves. These rules can be reviewed and edited in the "Options" tab of the Identity extension.


What sort of rules do we need? Well, to start with, what constitutes a valid logon? Typically, that may include something like "A POST to a specified URL, that gets a 200 response without the text 'login failed' in it". And we need to know which form field contains the username. Oh, and the sessionid in use by the application, so that the next time we see a sessionid with the same value, we can link that same identity to that conversation as well.
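
Conceptually, a login rule boils down to a predicate over a request/response pair plus a couple of extraction hints. The snippet below only illustrates that idea in plain Python - the field names and rule shape are ours for illustration, not the extension's actual rule format (those are built via the Options tab):

# Conceptual illustration of a login rule - not the extension's actual format.
def is_successful_login(req, resp):
    return (req["method"] == "POST"
            and req["path"] == "/login"
            and resp["status"] == 200
            and "login failed" not in resp["body"])

def identity_from(req):
    # The rule also records which form field names the user and which
    # cookie carries the session ID, so later requests can be linked.
    return req["params"].get("username"), req["cookies"].get("JSESSIONID")

# Toy example of a successful login being tagged
req = {"method": "POST", "path": "/login",
       "params": {"username": "admin", "password": "s3cret"},
       "cookies": {"JSESSIONID": "ABC123"}}
resp = {"status": 200, "body": "Welcome back!"}
if is_successful_login(req, resp):
    user, sid = identity_from(req)
    print("session %s now belongs to %s" % (sid, user))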


The easiest way to create the login rule is probably via the HTTP Proxy history tab. Just right click on a valid login request, and choose "Identity -> create login rule". It will automatically create a rule that matches the request method, request path, and the response status. Of course, you can customise it as you see fit, adding simple rules (just one condition), or complex rules (this AND that, this OR that), nested to arbitrary levels of complexity. And you can select the session id parameter name, and login parameter name on the Options tab as well.


Awesome! But how do we identify when the user logs out? Well, we need a rule for that as well, obviously. This can often be a lot simpler to identify. An easy technique is just to look for the text of the login form! If it is being displayed, you're very unlikely to be logged in, right? That can also catch the cases where a session gets timed out, but for the moment, we have separate rules and states for "logged out" and "timed out". That may not be strictly necessary, though. Again, these rules can be viewed and edited in the Options tab. Another easy way to create the logout rule is to select the relevant text in the response, right-click, and choose "Identity -> create logout rule".


Sweet! So now we can track a series of conversations from an anonymous user, through the login process, through the actions performed by the person who was logged in, through to the end of that session, whether by active logout, or by inactivity, and session timeout, back to an anonymous user.


Most interestingly, though, by putting the conversations into a "spreadsheet", and allowing you to create a pivot table of selected parameters vs the identity of the person making the request, it becomes possible to almost automate the testing of access control rules.


This tool is not quite at the "automated" stage yet, but it does currently allow you to see which user has performed which actions, on which subject, which makes it almost trivial to see what each user is able to do, and then formulate tests for the other users. You can also see which tests you have executed, as the various cells in the pivot table start filling up.


Pivoting requests against the user


In this screenshot, we are pivoting on the path of the URL, the method (GET vs POST), and then a bunch of parameters. In this application (WordPress, just for demonstration purposes), we want the "action" parameter, as well as the parameter identifying the blog post being operated on. The "action" parameter can appear in the URL, or in the Body of the request, and the "post" parameter in the URL identifies the blog post, but it is called post_ID in the body. (It might be handy to be able to link different parameters that mean the same thing, for future development!). The resulting table creates rows for each unique parameter combination, exactly as one would expect in an Excel pivot table.
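
As a toy illustration of the pivot itself (plain Python, not the plugin's code), conversations can be grouped by a tuple of the pivoted values, with each resulting row then split out per identity:

# Toy illustration of the pivot: rows are unique parameter combinations,
# columns are the identities that made matching requests.
from collections import defaultdict

conversations = [
    {"user": "admin",  "path": "/wp-admin/post.php", "method": "POST",
     "action": "editpost", "post_id": "42"},
    {"user": "editor", "path": "/wp-admin/post.php", "method": "POST",
     "action": "editpost", "post_id": "42"},
    {"user": "admin",  "path": "/wp-admin/post.php", "method": "GET",
     "action": "edit", "post_id": "7"},
]

pivot = defaultdict(lambda: defaultdict(list))
for c in conversations:
    row = (c["path"], c["method"], c["action"], c["post_id"])
    pivot[row][c["user"]].append(c)

for row, by_user in sorted(pivot.items()):
    print(row, {user: len(convs) for user, convs in by_user.items()})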


Clicking on each cell allows you to get a list of all the conversations made by that userid, with the specific combination of parameters and values, regardless of the number of times that they had logged in and out, or how many times their session id changed. Clicking on each conversation in the list brings up the conversation details in the request/response windows at the bottom, so you can check the minutiae, and if desired, right-click and send them to the repeater for replay.


So far, the approach has been to manually copy and paste the session cookie for a different user into the repeater window before replaying the request, but this is definitely something that lends itself to automation. A future development will have an option to select "current" session tokens for identified users, and substitute those in the request before replaying it.


So far, so good! But, since the point of this extension is to check access controls, we'd ideally like to be able to say whether the replayed request was successful or not, right? There's a rule for that! Or there could be, if you defined them! By defining rules that identify "successful" requests vs "failed" requests, conversations can be tagged as successful or not, making it easier to see when reviewing lists of several conversations. Future development is intended to bring that data "up" into the pivot table too, possibly by means of colouring the cells based on the status of the conversations that match. That could end up showing a coloured matrix of successful requests for authorised users, and unsuccessful requests for unauthorised users, which, ultimately, is exactly what we want.


We'd love to hear how you get on with using this, or if you have any feature requests for the plugin. For now, the BurpId plugin is available here.

Wed, 12 Feb 2014

RAT-a-tat-tat

Hey all,


So, following on from my talk (slides, video), I am releasing the Nmap service probes and the Poison Ivy NSE script, as well as the DarkComet config extractor.



An example of finding and extracting the Camellia key from live Poison Ivy C2's:
nmap -sV -Pn --versiondb=nmap-service-probes.pi --script=poison-ivy.nse <ip_address/range>
Finding Poison Ivy, DarkComet and/or Xtreme RAT C2's:
nmap -sV -Pn --versiondb=nmap-service-probes.pi <ip_range>


If you have any questions, please contact research@sensepost.com
Cheers