It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with penetration testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and of my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas, and I'd like to comment on some of the points he was making.
As I understood it, the talk's hypothesis could be (over) simplified as follows:
Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.
It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.
A significant and interesting component of the talk's thesis has to do with the role of "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test, and therefore the situation for the attacker. He suggests in his talk that testing teams who do have 0-day are inclined to over-emphasise the ones they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. On this point the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase for some of our own assessments.
What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator card", a "spear phishing card", a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.
The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.
Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.
Solving the security problem in total is sadly still going to take a whole lot more work...
Security policies are necessary, but the focus on them is to the detriment of more important security tasks. If auditors had looked for trivial SQL injection on a company's front page as hard as they have checked for security policies, then maybe our industry would be in a better place. I want to make this go away; I want to help you tick the box so you can focus on the real work. If you just want the "tool", skip to the end.
A year and a half ago, SensePost started offering "build it" rather than "break it" consulting services; we wanted to focus on technical, high-quality advisory work. However, by far the most frequent "consulting" request we've seen has been for security policies. Either a company approaches us looking for them explicitly, or they want them bolted on to other work. The gut feel I've picked up over the years is that if someone is asking you to develop security policies for them, then either they're starting on security at the behest of some external or compliance requirement, or they're hoping that this is the first step in an information security program. (Obviously, I can't put everything into the same bucket, but I'm talking generally.) Both are rational reasons to want to get your information security policies sorted, but getting outside consultants to spend even a week's worth of time developing them for you is, in my opinion, time that could be better spent. My reasons for this are two-fold:
Saying all of this is fine, but it doesn't make the auditors stop asking, and it doesn't put a green box or tick in the ISO/PCI/CoBIT/HIPAA/SOX policies checkbox. Previously, I've pointed people at existing policy repositories, where sample policies can be downloaded and modified to suit their needs. Sites such as CSOOnline or PacketSource have links to some policies, but by far the most comprehensive source of free security policy templates is SANS. The problem is people seem to look at these, think it looks like work, and move on to a consultancy that's happy to charge for a month's worth of time. Even if you persevere, the policies are buried in sub-pages that don't always make sense (why, for example, is the Acceptable Use Policy filed under "computer security"?), and several of them are only available in PDF form (hence not editable), even though they are explicitly written as modifiable templates. What I did was go through all of these pages, download the documents, convert them into relevant formats and categorise them into a single view in a spreadsheet with hyperlinks to the documents. I've also included their guidance documents on how to write good security policies, and ISO 27001-linked policy roadmaps. I haven't modified any of the actual content of the documents, and those retain their original copyright. I'm not trying to claim any credit for others' hard work, merely make the stuff a little more accessible.
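For what it's worth, the indexing step itself is trivial to script. Here's a minimal sketch of the idea (the categories and filenames below are made up for illustration; the real spreadsheet covers the full SANS template set and includes hyperlinks):

```python
import csv

# Hypothetical entries: (category, policy name, local filename).
policies = [
    ("General", "Acceptable Use Policy", "acceptable_use.doc"),
    ("Network Security", "Remote Access Policy", "remote_access.doc"),
    ("Application Security", "Web Application Security Policy", "web_app_security.doc"),
]

# Write a single categorised index, one row per template.
with open("policy_index.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Category", "Policy", "Document"])
    writer.writerows(sorted(policies))
```

The point being: an afternoon of scripting, not a month of billable consulting time.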
You can download the index and documents HERE.
In future, I hope to add more "good" policies (a few of the SANS policies aren't wonderful), and also to look at expanding into security standards (à la CIS Security). If necessary, take this to a consultancy and ask them to spend some time making these specific to your organisation and way of doing things, but please, if you aren't getting the basics right, don't focus on these. In the meantime, if you want information security policies to go away so you can get on with the bigger problems organisations, and our industry in general, are facing, then this should be a useful tool.
Last week we presented an invited talk at the ISSA conference on the topic of online privacy (embedded below, click through to SlideShare for the original PDF.)
The talk is an introductory overview of privacy from a security perspective, prompted by discussions between security & privacy people along the lines of "Isn't privacy just directed security? Privacy is to private info what PCI is to card info?" It was further prompted by discussions with Joe the Plumber along the lines of "Privacy is dead!"
The talk is, unfortunately, best delivered as a talk rather than as standalone slides, so here's some commentary:
We start off with a problem statement describing why privacy has grown in importance. The initial reactions were based on new technology allowing new types of information to be captured and disseminated. While the example given is from the 1980s, the reaction is a recurring one, as we've seen with each release of new tech (some examples: cameras, newspapers, credit cards, the Internet, Facebook). Reactions are worsened by the existence of actors with the funding & gall to collect and collate much of this information to further potentially disagreeable goals (usually governments). The new threat, however, is that there has been a fundamental shift in the way we live our lives: information about us is no longer merely *recorded* online; rather, our lives are *lived* online. It is quite possible that over an average day, from waking up to going to sleep, a significant number of the actions you perform will not only be conducted (in part) online, but will be conducted using the services of a single service provider. My intention is not to beat up on Google, but rather to use them as an example; they are a pertinent one, as every business book seems to use them too. The primary business model of arguably the most successful corporation of our age is the collection & monetisation of private data. Thus, while Google is the example, there are and will be many followers.
The next section moves into providing a definition of privacy, and attempts to fly through some fairly dry aspects of philosophy, law & psychology. We've done some entry-level work on collating the conception of privacy across history and these fields; however, brighter minds, such as Daniel Solove and Kamil Reddy, have done better jobs of this. In particular, Solove's paper "'I've Got Nothing to Hide' and Other Misunderstandings of Privacy" is a good introductory read. The key derived point, however, is that private data is data with an implied access control & authorised use. Which of those implied access controls & authorised uses are reasonable to enforce, or can be legally enforced, is a developing field.
As the talk is about "Online Privacy", it moves into a description of the various levels at which private data is collected, the mechanisms used to collect that data, and the sort of data that can be gleaned. It was an academic conference, so I threw in the word "taxonomy"; it will be more frequently quoted than Maslow's Hierarchy any day now.
At each level, a brief demonstration of non-obvious leaks and their implications was given: from simple techniques such as cross-site tracking using tracking pixels or cookies, to exploits of rich browser environments such as the simple CSS history hack, to less structured and less obvious leaks such as search data (as demonstrated by the AOL leak), moving on to deanonymisation of an individual by correlating public data sets (using the awesome Maltego), and finally to unintended leaks via metadata (through analysis of Twitter & Facebook friend groups).
Finally, a mere two slides are used to explain some of the implications and defenses. These are incomplete and are the current area of research I'm engaged in.
Since joining SensePost I've had a chance to get down and dirty with the threat modeling tool. The original principle behind the tool, first released in 2007 at CSI NetSec, was to throw out existing threat modeling techniques (it's really attack-focused risk assessment) and start from scratch. It's a good idea, and the SensePost approach fits nicely between heavily formalised models like OCTAVE and quick-n-dirties like attack trees. It allows fairly simple modeling of the organisation/system to quickly produce an exponentially larger list of possible risks and rank them.
We've had some time and a few bits of practical work to enhance the tool and our thinking about it. At first, I thought it would need an overhaul, mostly because I didn't like the terminology (hat tip to Mr Bejtlich). But, in testament to Charl's original thinking & the flexibility of the tool, no significant changes to the code were required. We're happy to announce version 2.1 is now available at our new tools page. In addition, much of our exploration of other threat modeling techniques was converted into a workshop of which the slides are available (approx 30MB).
The majority of the changes were in the equation. The discussion below will give you a good idea of how you can play with the equation to fundamentally change how the tool works.
There are 5 values you can play with in the equation: the likelihood of the attack (lik), its impact (imp), the value of the asset exposed through a particular interface (int), the trust in the user performing the attack (usr), and the trust in the location they are performing it from (loc).
In English that translates to: the risk is equal to the average of the impact of the attack and its likelihood, combined with the value of the asset (exposed through a particular interface), and reduced by the trust of the user performing the attack and the location they are performing it from.
We felt there were two problems with this equation:
Once again in English: the risk of an attack is the likelihood of the attack, reduced by the average of the trust in both the user & the location, combined with the value of the asset reduced by the potential impact of the attack (the value at risk). (The 0.2 & 2.5 are just to make it fit the scales. Specifically, the 0.2 is because the scale of the entities is 1-5 and we're looking to make a percentage, and the 2.5 is to fit the 0-25 scale on the final graph.)
The key change which breaks backward compatibility here is that impact now becomes a moderator on value. i.e. the impact of an attack determines how much of the asset's value is exposed.
The way things are now modeled, interfaces represent the value of a system. For the most part, all a system's interfaces should have the same value because, as we often see, even minor interfaces that expose limited functionality can often be abused for a full compromise. However, the actual attack (called a threat in the tool) determines how much of that value is exposed. For example, a worst-case XSS is (depending on the system, of course) probably going to expose less of the system's value than a malicious sysadmin publicly pwning it (once again, dependent on the system and controls in place).
Unfortunately, there's still no provable way to perform threat modeling, but we feel we can go quite far in providing a quick and useful way of enumerating and prioritising attacks (and hence defenses) across complex systems.
In a future blog post, I hope to cover some of the really cool scenario planning the tool can let you do, and the pretty graphs it gave us an excuse to justify budgets with.
Credit to the Online LaTeX Equation Editor for the formulas. If you'd like to copy-paste the formula described above into the tool, here's an ASCII version:
( ( ( lik * ( ( ( (6 - usr) + (6 - loc) ) / 2 ) * 0.2 ) ) + ( int * ( imp * 0.2 ) ) ) * 2.5 )
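To make the equation concrete, here's the same formula as a few lines of Python (the variable names follow the ASCII formula; the function itself is just an illustration, not part of the tool):

```python
def risk(lik, usr, loc, int_, imp):
    """Revised threat-modeling equation, as described above.

    All inputs are on the tool's 1-5 scale:
      lik  - likelihood of the attack
      usr  - trust in the user performing the attack
      loc  - trust in the location it is performed from
      int_ - value of the asset exposed via the interface
      imp  - impact of the attack (moderates the value at risk)
    Returns a risk score on the 0-25 scale used by the final graph.
    """
    # Average "distrust" of user and location, scaled to a 0-1 fraction.
    trust_reduction = (((6 - usr) + (6 - loc)) / 2) * 0.2
    # Impact moderates how much of the asset's value is exposed.
    value_at_risk = int_ * (imp * 0.2)
    return ((lik * trust_reduction) + value_at_risk) * 2.5

# Worst case: certain attack, untrusted user & location, full-value
# asset, maximum impact -> tops out the 0-25 scale.
print(risk(5, 1, 1, 5, 5))  # -> 25.0
```

Playing with the inputs shows the moderation at work: dropping imp to 1 leaves the same interface contributing only a fifth of its value to the score.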
I really enjoy listening to MacBreak Weekly. Leo Laporte is an excellent host, and I would tune in just to hear [Andy Ihnatko's] take on the industry and the (possible) motivations behind certain players' moves (he is sometimes wrong, but always worth listening to). The only time things ever get a little cringe-worthy is when talk switches to malware and security (although both Andy and Leo, for the most part, have pretty reasonable, balanced views on it).
Disclosure: I am a Mac user, and I love the hardware; the fan-boyism that surrounds it, not so much.
Most security-savvy Mac users don't push the Invulnerable-Mac argument too much, but it does lead to the follow-up: "Once Mac gets more market share, we will hit the malware tipping point." I don't think that this is how it will go down. Here's my $0.02 on it.
One of the talks we gave at the recent ITWeb Security Summit was titled "One bad Apple". The aim of the talk was to examine the truth/lies/FUD behind the security claims on both the fan-boy and hater ends of the spectrum. I don't want to cover the whole talk here, but I do want to touch on a few of the current annoying red herrings that normally pop up in this discussion:
Vulnerability counts as a useful Metric
This argument has been had by [many people] far brighter than me, so I won't rehash it here. I think it's safe to say that since there isn't really a standard on what gets reported, very few vuln-count reports end up comparing apples with apples. What I did pick on during the talk was that some people don't even bother trying to dress up the stats in a cloak of reasonableness. The table below, taken from ByteSize magazine, shows that Apple indeed had more vulnerability disclosures than Microsoft:
Vendors with the Most Vulnerability Disclosures (ByteSize - 3rd Ed. 2009)
Before muddying the water by asking what a 3.2% disclosure rate means, or by comparing Apple with Microsoft, you have to ask yourself whether the table is really comparing Microsoft, with its software, hardware, *, against Wordpress with its 60 000 lines of PHP code.
My suggestion there is that if we're going to use tables and charts, we should at least stick to reasonable ones:
Of course, the next topic that refuses to die is how Mac architecture pixie-dust prevents it from getting worms and viruses. A quick check should clarify this: the ILOVEYOU virus, which took down Windows computers all over the world (and, according to Wikipedia, caused about $5.5 billion in damage), was a snippet of VBS that read your address book and mailed itself to your contacts (where it did the same). You can hack up exactly the same functionality in Automator in seconds.
Memory Corruption Attacks
In recent times, Microsoft has made huge leaps in generic memory-corruption protection mechanisms to minimize the effect of buffer overflow/memory corruption attacks. While Apple claimed to do the same with Leopard, they still trail Microsoft in this regard. The 3 points we covered:
(We cover these in more detail in an upcoming [conference in July] - but again, it's fairly well understood that OSX in its current form is only randomizing libraries, and that to get the benefit of ASLR, you need to be randomizing everything.)
So if we are saying that Apple is just as vulnerable to an ILOVEYOU, and even more vulnerable today than Windows to a Nimda or a Code Red, then what explains the fact that we don't see Macs getting owned on the same scale as Windows?
The almost universal answer is "Market share!": the belief that once more people are running Macs, the big bad malware writers will start aiming at them.
If you look at the [netcraft web server survey] (2003), you should notice that at the time Nimda and Code Red were running around the Internet, IIS didn't have the lion's share of the webserver market either. Their lower market share didn't keep them safe then, so why does it keep Mac users safer now?
The real market share difference
One of my guesses here is that we are looking at the wrong data for market share. What Microsoft does have over Apple is a bigger market share of [developers].
Microsoft went out of their way to make sure that anyone and their dog could write code for their platform, and many did. If you consider that any group will have a proportion of people with evil intentions, then in part what we're seeing is simply that percentage applied to a much bigger pool.
Different user profiles
The other thing (although it sounds strange) is the question of user culture, which is different. My wife's MacBook Air has very little software that didn't come with the machine; Apple's "batteries included" policy means that her machine remains pretty clean. Her mother's Windows machine is a different story.
Which means what?
Today, pound for pound, OS X Leopard is indeed more vulnerable than a Vista machine, but the ecosystem around the Mac is holding back the huge, embarrassing attacks that shamed Microsoft into action. Apple has a small window during which they can take action, refine their built-in mitigation strategies, and come out the other side acting like they were better all along.
If Snow Leopard is done right, it will hopefully be Apple's XP SP2, and us fanboys will be able to keep our securer-than-thou attitude. If it isn't, it's only a matter of time...