
Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with penetration testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and of my career for almost 13 years, has been security testing, and so I followed this talk quite closely. It raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Test it was using. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming Haroon meant the most narrow, technical form of the term. It seems to me that this already weakens much of his assertion: there are cases, of course, where a customer simply wants us to 'own' something, or somethings, but most often penetration testing is performed within the context of some broader assessment, within which many of Haroon's concerns may already be addressed. As the talk pointed out, there are instances where the question being asked is "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed it's a model that's been used by the military for decades. For the most part, however, penetration testing should only be used as a specific phase in an assessment, to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the "0-day" in security and testing. Haroon rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test, and therefore the situation for the defender. He suggests in his talk that the testing teams who do have 0-days are inclined to overemphasize those that they have, whilst those who don't tend to underemphasize or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this, the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase of some of our own assessments.

What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like The Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
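
To make steps 2 to 4 concrete, here's a minimal sketch in Python (the names, scales and priority formula are illustrative only, not the schema of any actual tool):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """A hypothetical risk mapped onto the environment model (step 2)."""
    description: str
    interface: str   # the part of the model it applies to
    impact: int      # illustrative 1-5 scale
    likelihood: int  # illustrative 1-5 scale

    @property
    def priority(self) -> int:
        # Step 3: a simple impact x likelihood ranking; a real
        # register would use a richer equation.
        return self.impact * self.likelihood

register = [
    Risk("0-day in internet-facing web server", "www", impact=5, likelihood=2),
    Risk("Spear phishing of finance staff", "mail", impact=4, likelihood=4),
    Risk("Malicious system administrator", "internal", impact=5, likelihood=1),
]

# Steps 3 and 4: sort, then test hypothetical risks in priority order.
for risk in sorted(register, key=lambda r: r.priority, reverse=True):
    print(f"{risk.priority:>2}  {risk.interface:<8}  {risk.description}")
```
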
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing, and addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time to time and makes sense to both technical and business leaders.

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...

Fri, 15 Jul 2011

Security Policies - Go Away

Security policies are necessary, but their focus is to the detriment of more important security tasks. If auditors had looked for trivial SQL injection on a company's front page as hard as they have checked for security policies, then maybe our industry would be in a better place. I want to make this go away, I want to help you tick the box so you can focus on the real work. If you just want the "tool", skip to the end.

A year and a half ago, SensePost started offering "build it" rather than "break it" consulting services, because we wanted to focus on technical, high-quality advisory work. However, by far the most frequent "consulting" request we've seen has been for security policies. Either a company approaches us looking for them explicitly, or they want them bolted on to other work. The gut feel I've picked up over the years is that if someone is asking you to develop security policies for them, then either they're starting on security at the behest of some external or compliance requirement, or they're hoping that this is the first step in an information security program. (Obviously, I can't put everything into the same bucket, but I'm talking generally.) Both are rational reasons to want to get your information security policies sorted, but getting outside consultants to spend even a week's worth of time developing them for you is, in my opinion, time that could be better spent. My reasons for this are two-fold:

  • If you're starting a security program, then you have a lot to learn and possibly a lot of convincing of senior management to do. Something like an internal penetration test (not that I'm advocating this specifically instead of policy) will give you far more insight into the security of your environment and a lot more "red ink" that can be used to highlight the risk to the "higher ups".
  • Security policies don't "do" anything. They are a representation of management's intention and agreements around security controls, which, in the best case, provide a "cover my ass" defense if an employee takes you to task for intercepting their e-mails or something similar. The policies need to be used to derive actual controls, and are not controls in themselves.
Instead, we too often end up in a world where security policies, rather than good security, are the end goal, while new technologies keep us amused developing new ones (mobile policies, social media policies, data leakage policies etc.).

Saying all of this is fine, but it doesn't make the auditors stop asking, and it doesn't put a green box or tick in the ISO/PCI/CoBIT/HIPAA/SOX policies checkbox. Previously, I've pointed people at existing policy repositories, where sample policies can be downloaded and modified to suit their needs. Sites such as CSOOnline or PacketSource have links to some policies, but by far the most comprehensive source of free security policy templates is SANS. The problem is people seem to look at these, decide it looks like work, and move on to a consultancy that's happy to charge for a month's worth of time. Even if you persevere, the policies are buried in sub-pages that don't always make sense (for example, why is the Acceptable Use Policy put under "computer security"?), and several of them are only available in PDF form (hence not editable), even though they are explicitly written as modifiable templates.

What I did was to go through all of these pages, download the documents, convert them into editable formats and categorise them into a single view in a spreadsheet with hyperlinks to the documents. I've also included their guidance documents on how to write good security policies, and ISO 27001-linked policy roadmaps. I haven't modified any of the actual content of the documents, and they retain their original copyright. I'm not trying to claim any credit for others' hard work, merely to make the stuff a little more accessible.

You can download the index and documents HERE.

In future, I hope to add more "good" policies (a few of the SANS policies aren't wonderful), and also to look into expanding into security standards (a la CIS Security). If necessary, take this to a consultancy and ask them to spend some time making these specific to your organisation and way of doing things, but please, if you aren't getting the basics right, don't focus on these. In the meantime, if you're looking for a way to make information security policies go away, so you can get on with the bigger problems organisations, and our industry in general, are facing, then this should be a useful tool.

Tue, 10 Aug 2010

Information Security South Africa (ISSA) 2010

Last week we presented an invited talk at the ISSA conference on the topic of online privacy (embedded below, click through to SlideShare for the original PDF.)

The talk is an introductory overview of Privacy from a Security perspective and was prompted by discussions between security & privacy people along the lines of "Isn't Privacy just directed Security? Privacy is to private info what PCI is to card info?" It was further prompted by discussions with Joe the Plumber along the lines of "Privacy is dead!"

The talk is unfortunately best delivered as a talk rather than as standalone slides, so here's some commentary:

We start off with the problem statement, describing why privacy has grown in importance. The initial reactions were based on new technology allowing new types of information to be captured and disseminated. While the example given is from the 1980s, the reaction is a recurring one, as we've seen with each release of new tech (some examples: cameras, newspapers, credit cards, the Internet, Facebook). Reactions are worsened by the existence of actors (usually governments) with the funding & gall to collect and collate masses of information in pursuit of potentially disagreeable goals. The new threat, however, is that there has been a fundamental shift in the way in which we live our lives: information about us is no longer merely *recorded* online; rather, our lives are *lived* online. It is quite possible that on an average day, from waking up to going to sleep, a significant number of the actions you perform will not only be conducted (in part) online, but that they could all be conducted using the services of a single service provider. My intention is not to beat up on Google, but rather to use them as an example; they are a pertinent one, as every business book seems to agree. Arguably the most successful corporation of our age has, as its primary business model, the collection & monetisation of private data. So while Google is the example, there are and will be many followers.

The next section moves into providing a definition of privacy, and attempts to fly through some fairly dry aspects of philosophy, law & psychology. We've done some entry-level work on collating the conception of privacy across history and these fields; however, brighter minds, such as Daniel Solove and Kamil Reddy, have done better jobs of this. In particular, Solove's paper "I've Got Nothing to Hide" and Other Misunderstandings of Privacy is a good introductory read. The key derived point, however, is that private data is data with an implied access control & authorised use. Which of the implied access controls & authorised uses are reasonable to enforce, or can be legally enforced, is a developing field.
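
If it helps to see that definition operationally, here's a toy sketch (entirely illustrative, not from the talk): a private datum is just a value plus its implied access control and authorised use, and a 'violation' is any read that falls outside either implied set.

```python
from dataclasses import dataclass

@dataclass
class PrivateDatum:
    """Toy model of the definition above: private data is data plus
    an implied access control and an implied authorised use."""
    value: str
    may_access: set[str]       # implied access control
    authorised_uses: set[str]  # implied authorised use

    def disclose(self, who: str, purpose: str) -> str:
        # A privacy violation is any read outside either implied set.
        if who not in self.may_access or purpose not in self.authorised_uses:
            raise PermissionError(f"privacy violation: {who} / {purpose}")
        return self.value

home_address = PrivateDatum("1 Example Rd", {"courier"}, {"delivery"})
print(home_address.disclose("courier", "delivery"))  # within the implied terms
try:
    home_address.disclose("marketer", "ad targeting")
except PermissionError as e:
    print(e)  # outside the implied terms
```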

As the talk is about "Online Privacy" the talk moves into a description of the various levels at which private data is collected, what mechanisms are used to attempt to collect that data, and what sort of data can be gleaned. It was an academic conference, so I threw in the word "taxonomy." Soon, it will be more frequently quoted than Maslow's Hierarchy, any day now.

At each level, a brief demonstration of non-obvious leaks and their implications was given. These ranged from simple techniques such as cross-site tracking using tracking pixels or cookies, to exploitation of rich browser environments such as the simple CSS history hack, to less structured and less obvious leaks such as search data (as demonstrated by the AOL leak), moving on to deanonymisation of an individual by correlating public data sets (using the awesome Maltego) and finally to unintended leaks through metadata (analysis of Twitter & Facebook friend groups).
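
As an aside, the tracking-pixel technique mentioned above needs almost no machinery. This hypothetical sketch (illustrative only, not the demo used in the talk) serves a 1x1 GIF and logs the Referer and cookie that arrive with each request:

```python
import base64
from http.server import BaseHTTPRequestHandler, HTTPServer

# The well-known 42-byte transparent 1x1 GIF.
PIXEL = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        # The Referer header reveals the page embedding the pixel;
        # the cookie lets us correlate the same browser across sites.
        print("hit from:", self.client_address[0],
              "| page:", self.headers.get("Referer"),
              "| cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Set-Cookie", "uid=example-id")  # naive example ID
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8080), TrackingPixel).serve_forever()
```

Any page that embeds the image URL shows up in the log, which is the whole trick.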

Finally, a mere two slides are used to explain some of the implications and defenses. These are incomplete and are the current area of research I'm engaged in.

Thu, 10 Jun 2010

SensePost Corporate Threat(Risk) Modeler

Since joining SensePost I've had a chance to get down and dirty with the threat modeling tool. The original principle behind the tool, first released in 2007 at CSI NetSec, was to throw out existing threat modeling techniques (what's called threat modeling is really attack-focused risk analysis) and start from scratch. It's a good idea, and the SensePost approach fits nicely between heavily formalised models like OCTAVE and quick-n-dirty approaches like attack trees. It allows fairly simple modeling of the organisation/system to quickly produce an exponentially larger list of possible risks and rank them.

We've had some time, and a few bits of practical work, with which to enhance the tool and our thinking about it. At first, I thought it would need an overhaul, mostly because I didn't like the terminology (hat tip to Mr Bejtlich). But, in testament to Charl's original thinking & the flexibility of the tool, no significant changes to the code were required. We're happy to announce that version 2.1 is now available at our new tools page. In addition, much of our exploration of other threat modeling techniques was converted into a workshop, the slides of which are available (approx. 30MB).

The majority of the changes were in the equation. The discussion below will give you a good idea of how you can play with the equation to fundamentally change how the tool works.

There are 5 values you can play with in the equation:

  1. imp - the impact of a risk being realised
  2. lik - the likelihood of the risk occurring
  3. int - the value of an asset (represented by an interface to that asset)
  4. usr & loc - the measurable trust placed in a user & location respectively
The current default formula, in the same ASCII form as the version given at the end of this post, is:

( ( ( imp + lik ) / 2 ) + int ) - ( usr + loc )

In English that translates to: the risk is equal to the average of the impact of the attack and its likelihood, combined with the value of the asset (exposed through a particular interface), and reduced by the trust in the user performing the attack and the location they are performing it from.
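
As a code sketch (variable names mirror the tool's; treat the exact arithmetic as an approximation of the shipped default):

```python
def default_risk(imp, lik, int_, usr, loc):
    """All inputs on the tool's 1-5 scales. int_ is the asset value
    exposed through an interface (the underscore avoids shadowing the
    Python builtin). Note that usr and loc each subtract at full
    weight, which is problem 3 on the list below."""
    return ((imp + lik) / 2 + int_) - (usr + loc)
```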

We felt there were four problems with this equation:

  1. It doesn't acknowledge impact as linked to value. e.g. You can't have a huge impact on something of low value.
  2. It doesn't see trust as linked to likelihood. e.g. a trusted user in a trusted location is less likely to commit an attack.
  3. It double weights trust with location and user trust counting at full weight.
  4. It's maybe a little far from the semi-consensus views on the subject.
After much internal wrangling, and some actual work modeling fairly complex systems, we came up with a new equation. While we feel this works better, it does mean the way things are modeled changes, and hence backwards compatibility with existing models is broken (but you don't need to use this equation). The new equation (consider the "risk =" implied) is:

( ( ( lik * ( ( ( (6 - usr) + (6 - loc) ) / 2 ) * 0.2 ) ) + ( int * ( imp * 0.2 ) ) ) * 2.5 )

Once again in English: the risk of an attack is the likelihood of the attack, reduced by the average of the trust in the user & location, combined with the value of the asset, reduced by the potential impact of the attack (value at risk). (The 0.2 & 2.5 are just there to make it fit the scales. Specifically, the 0.2 is because the scale of the entities is 1-5 and we're looking to make a percentage, and the 2.5 is to fit the 0-25 scale on the final graph.)

The key change which breaks backward compatibility here is that impact now becomes a moderator on value, i.e. the impact of an attack determines how much of the asset's value is exposed.
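
In code, the new equation and the value-at-risk idea look like this (a sketch under the same assumptions as the earlier one; the constants are those described above):

```python
def new_risk(imp, lik, int_, usr, loc):
    """All inputs on 1-5 scales; the result lands on the 0-25 graph scale."""
    distrust = ((6 - usr) + (6 - loc)) / 2 * 0.2  # invert trust, map to 0.2-1.0
    value_at_risk = int_ * (imp * 0.2)            # impact moderates asset value
    return (lik * distrust + value_at_risk) * 2.5

# The XSS-vs-sysadmin comparison below, on the same high-value interface:
print(new_risk(imp=3, lik=4, int_=5, usr=1, loc=1))  # untrusted outsider, partial impact -> 17.5
print(new_risk(imp=5, lik=2, int_=5, usr=4, loc=4))  # trusted insider, full impact -> 14.5
```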

The way things are now modeled, interfaces represent the value of a system. For the most part, all of a system's interfaces should carry the same value because, as we often see, even minor interfaces that expose limited functionality can be abused for a full compromise. However, the actual attack (called a 'threat' in the tool) determines how much of that value is exposed. For example, a worst-case XSS is (depending on the system, of course) probably going to expose less of the system's value than a malicious sysadmin publicly pwning it (again, dependent on the system and controls in place).

Unfortunately, there's still no provable way to perform threat modeling, but we feel this approach goes quite far in providing a quick and useful way of enumerating and prioritising attacks (and hence defenses) across complex systems.

In a future blog post, I hope to cover some of the really cool scenario planning the tool can let you do, and the pretty graphs it gave us an excuse to justify budgets with.

[ Credit to the Online LaTeX Equation Editor for the formulas, although if you'd like to copy-paste the formula described above into the tool, here's an ASCII version:

( ( ( lik * ( ( ( (6 - usr) + (6 - loc) ) / 2 ) * 0.2 ) ) + ( int * ( imp * 0.2 ) ) ) * 2.5 )

]

Thu, 11 Jun 2009

Apple vs Microsoft as a malware target.. stop saying market share..

I really enjoy listening to MacBreak Weekly.. Leo Laporte is an excellent host and I would tune in just to hear [Andy Ihnatko's] take on the industry and the (possible) motivations behind certain players' moves (he is sometimes wrong, but always worth listening to). The only time things ever get a little cringe-worthy is when talk switches to malware and security (although both Andy and Leo, for the most part, have pretty reasonable, balanced views on it).

Disclosure: I am a Mac user, and love the hardware.. the fan-boy'ism that surrounds it, not so much..

Most security-savvy Mac users don't push the Invulnerable-Mac argument too much.. but it does lead to the follow-up: "Once Mac gets more market share, we will hit the malware tipping point".. I don't think that this is how it will go down.. Here's my $0.002c on it.

One of the talks we gave at the recent ITWeb Security Summit was titled "One Bad Apple".. The aim of the talk was to examine the truth/lies/FUD behind the security claims at both the fan-boy and hater ends of the spectrum.. I don't want to cover the whole talk here, but do want to touch on just a few of the annoying red herrings that normally pop up in this discussion:

Vulnerability counts as a useful Metric

This argument has been had by [many people] far brighter than me, so I won't rehash it here. I think it's safe to say that since there isn't really a standard on what gets reported, very few vuln-count reports end up comparing apples with apples. What I did pick on during the talk was that some people don't even bother trying to dress up the stats in a cloak of reasonableness. The table below, taken from ByteSize magazine, shows that Apple indeed had more vulnerability disclosures than Microsoft:

Vendors with the Most Vulnerability Disclosures (ByteSize - 3rd Ed. 2009)

Instead of muddying the water by asking what a 3.2% disclosure rate means, or by comparing Apple with Microsoft, you have to ask yourself whether the table is really comparing Microsoft, with its software, hardware and everything in between, against WordPress, with its 60,000 lines of PHP code.

My suggestion there is that if we're going to use tables and charts, we should at least stick to the reasonable ones:

Malware defense

Of course the next topic that refuses to die is how Mac architecture pixie-dust prevents it from getting worms and viruses.. A quick check should clarify this.. The ILOVEYOU virus, which took down Windows computers all over the world (and, according to Wikipedia, caused about $5.5 billion in damage), was a snippet of VBS that read your address book and mailed itself to your contacts (where it did the same). You can hack this up in Automator in seconds.. The same functionality, completely..

Memory Corruption Attacks

In recent times, Microsoft has made huge leaps in terms of generic memory-corruption protection mechanisms to minimize the effect of buffer overflow/memory corruption attacks. While Apple claimed to do the same with Leopard, they still trail Microsoft in this regard. The three points we covered:

  1. Non-executable Stack.
  2. Non-executable Heap.
  3. Address Space Layout Randomization.

(We cover these in more detail at an upcoming [conference in July] - but again, it's fairly well understood that OS X in its current form only randomizes libraries, and that to get the benefit of ASLR, you need to be randomizing everything.)
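
If you want to eyeball this yourself, here's a rough sketch (what actually gets randomized varies by OS and version): print a library address and a heap address, then compare across runs.

```python
import ctypes, ctypes.util

# Address of a libc function: changes between runs if library
# load addresses are randomized.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
print("libc printf :", hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))

# Address of a fresh allocation: changes between runs if the
# heap is randomized.
buf = ctypes.create_string_buffer(16)
print("heap buffer :", hex(ctypes.addressof(buf)))
```

If only the first line changes between runs, you're looking at the libraries-only randomization described above.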

So if we are saying that Apple is just as vulnerable to an ILOVEYOU, and even more vulnerable today than Windows to a Nimda or a Code Red, then what explains the fact that we don't see Macs getting owned on the same scale as Windows?

The almost universal answer is "Market share!" - the belief that once more people are running Macs, the big bad malware writers will start aiming at them.

If you look at the [Netcraft web server survey] (2003) you should notice that at the time Nimda and Code Red were running around the Internet, IIS didn't have the lion's share of the webserver market either. Their lower market share didn't keep them safe then, so why does it keep Mac users safer now?

The real market share difference

One of my guesses here is that we are looking at the wrong data for market share. What Microsoft does have over Apple is a bigger market share of [developers..]

Microsoft went out of their way to make sure that anyone and their dog could write code for their platform - that any idiot in the world could write an app for them - and many did. I suspect that if you consider that any group will have a proportion of people with evil intentions, then in part what we're seeing is just that percentage of a bigger pool.

Different user profiles

The other thing (although it sounds strange) is the question of user culture, which is different. My wife's MacBook Air has very little software that didn't come with the machine. Apple's "batteries included" policy means that her machine remains pretty clean.. Her mother's Windows machine is a different story.

Which means what?

Today, pound for pound, OS X Leopard is indeed more vulnerable than a Vista machine, but the ecosystem around the Mac is holding back the huge, embarrassing attacks that shamed Microsoft into action. Apple has a small window during which they can take action, refine their built-in mitigation strategies and come out on the other side acting like they were better all along..

(Recent hires like Ivan give hope for this happening)

If Snow Leopard is done right, it will hopefully be Apple's XP SP2, and us fanboys will be able to keep our securer-than-thou attitude.. If it isn't, it's only a matter of time..