While doing some thinking on threat modelling, I started examining what the usual drivers of security spend and controls are in an organisation. I've spent time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could), security consulting (tried to help people fix stuff), and I've even dabbled with trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people try to justify security budgets, changes and findings, or how I tried to. This is a write-up of what I believe those drivers to be (caveat: this is my opinion). It certainly isn't universal; it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).
The tick-box monkeys themselves: auditors provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:
But security vendors' prioritisation of controls is driven by:
Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered significant interest from people who usually wouldn't, and probably shouldn't, have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques get shared and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:
Unfortunately, as human beings, our decisions are coloured by a bunch of things that cause us to decide based on factors other than the reality we're faced with. A couple of these lead us to prioritise different security motives when decision making rests solely with one person:
The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe Threat Modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a “most critical” list), and that incorporates the valid parts of the perspectives above, tries to provide an objective version of reality that isn't as vulnerable to the individual biases described earlier.
This past Thursday we spoke at BlackHat USA on Python Pickle. In the presentation, we covered approaches for implementing missing functionality in Pickle, automating the conversion of Python calls into Pickle opcodes, scenarios in which attacks are possible, and guidelines for writing shellcode. Two tools were released:
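The released tools aren't reproduced here, but as a minimal sketch of why the Pickle format is interesting to attackers at all (standard library only, and not either of the tools from the talk), consider how a Python call gets encoded as opcodes that execute on unpickling:

```python
import os
import pickle
import pickletools

# Minimal illustration (not one of the released tools) of why unpickling
# untrusted data is dangerous: __reduce__ lets an object describe itself as
# "call this function with these arguments", and the pickle virtual machine
# makes that call when the data is loaded.
class Payload(object):
    def __reduce__(self):
        return (os.system, ("id",))   # a harmless command, purely for demo

blob = pickle.dumps(Payload())

# pickletools.dis shows the opcode stream: a global lookup of os.system,
# the argument tuple, then REDUCE to perform the call.
pickletools.dis(blob)

# pickle.loads(blob)   # uncommenting this would run `id` on your own machine
```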
Over the last few years there has been a popular meme about information-centric security as a new paradigm over vulnerability-centric security. I've long struggled with the idea of information-centricity being successful, and in replying to a post by Rob Bainbridge, I quickly jotted some of those problems down.
In pre-summary, I'm still sceptical of information-classification approaches (or information-led control implementations) as I feel they target a theoretically sensible idea, but not a practically sensible one.
Information gets stored in information containers (to borrow a phrase from OCTAVE) such as databases or file servers. These need to inherit a classification based on the information they store. That's easy if it's a single-purpose DB, but what about a SQL cluster (used to reduce processor licences) or even end-user machines? These get moved up the classification chain because they may store some sensitive info, even if they spend the majority of their time pushing not-very-sensitive info around. In the end, the hoped-for cost-saving-and-focus-inducing prioritisation doesn't occur and you end up having to deploy a significantly higher level of security to most systems. Potentially, you could radically re-engineer your business to segregate data into separate networks, as some PCI de-scoping approaches suggest, but, apart from being a difficult job, this tends to counter many of the business benefits of data and system integration that led to the cross-pollination in the first place.
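To make that concrete, here's a toy sketch (with entirely made-up containers and data, just to illustrate the inheritance effect) where a container's classification is simply the maximum of whatever it stores:

```python
# Toy illustration (hypothetical data) of classification inheritance:
# a container's classification becomes the maximum of whatever it stores,
# so a single sensitive dataset drags a shared container to the top level.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

containers = {
    "hr-db":       ["secret"],                                    # single-purpose DB
    "shared-sql":  ["public", "internal", "internal", "secret"],  # consolidated cluster
    "user-laptop": ["public", "internal", "confidential"],        # end-user machine
}

for name, data in sorted(containers.items()):
    classification = max(data, key=LEVELS.get)
    print("%s: %s" % (name, classification))

# Both the shared cluster and the laptop end up treated as highly sensitive
# even though most of what they hold isn't, so the prioritisation is lost.
```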
Next up, I feel this fails to take cognisance of what we call "pivoting": escalating privileges by moving from one system, or part of a system, to another. I've seen situations where the low-criticality network monitoring box is what ends up handing out the domain administrator password. It had never been in scope for internal or external audits, none of its vulns showed up on your average scanner, it held no sensitive info, and so on. Rather, I think we need to look at physical, network and trust segregation between systems, and then data. It would be nice to go data-first, but DRM isn't mature (read: simple and widespread) enough to provide us with those controls.
Lastly, I feel information-led approaches often end up missing the value of raw functionality. For example, a critical trade execution system at an investment bank could have very little sensitive data stored on it, but the functionality it provides (i.e. being able to execute trades using that bank's secret sauce) is hugely sensitive and needs to be considered in any prioritisation.
I'm not saying I have the answers, but we've spent a lot of time thinking about how to model how our analysts attack systems, and whether we could "guess" the results of multiple pentests across an organisation systematically, based on the inherent design of its network, systems and authentication. The idea is to use that model to drive prioritisation, or at least a testing plan. This is probably more closely aligned with the idea of a threat-centric approach to security, and it suffers from a lack of data in this area (I've started some preliminary work on incorporating VERIS metrics).
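As a rough illustration of what I mean (with hypothetical hosts and trust links, nowhere near a complete model), even a simple reachability walk over trust relationships shows how a "low criticality" box can sit on the path to the crown jewels:

```python
from collections import deque

# Hypothetical trust/access relationships: an edge A -> B means an attacker
# who controls A can reach or take over B (shared credentials, admin
# interfaces, monitoring agents and so on). None of this is real data.
edges = {
    "internet":          ["dmz-web"],
    "dmz-web":           ["app-server"],
    "app-server":        ["monitoring"],          # monitoring agent creds
    "monitoring":        ["domain-controller"],   # stored domain admin password
    "domain-controller": ["trade-exec", "hr-db"],
}

def reachable(start):
    """Breadth-first walk of the trust graph from an initial foothold."""
    seen, queue = set(), deque([start])
    while queue:
        host = queue.popleft()
        if host in seen:
            continue
        seen.add(host)
        queue.extend(edges.get(host, []))
    return seen

print(sorted(reachable("internet")))
# The "unimportant" monitoring box sits squarely on the path to everything
# that matters, which is exactly what an information-led ranking misses.
```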
In summary, I think information-centric security fails in three ways: by providing limited prioritisation due to the high number of shared information containers in IT environments, by not incorporating how attackers move through a network, and by ignoring business-critical functionality.
The brand new BlackOps edition of Hacking By Numbers makes its debut in Vegas this year. The course finds its place as a natural follow-on from Bootcamp, and prepares students for the more intense Combat edition. Where Bootcamp focuses on methodology and Combat focuses on thinking, BlackOps covers the tools and techniques to brush up your skills.
This course is split into eight segments, covering scripting, targeting, compromise, privilege escalation, pivoting, exfiltration, client-side attacks and even a little exploit writing. BlackOps differs from our other courses in that it is pretty full of tricks, which are needed to move from the methodology of hacking to professional-level pentesting. It's likely to put a little (more) hair on your chest.
Course Name: Hacking By Numbers: BlackOps Edition
Venue: BlackHat Briefings, Caesars Palace, Las Vegas, NV
Dates: July 30-31 & August 1-2, 2011
Sign up here.
It is always a little bemusing to hear that we only provide pentests. Since 2001, SensePost has offered a very comprehensive vulnerability management service that has evolved through multiple generations of technologies and methodologies into a service we're very proud of. The Managed Vulnerability Scanning ("MVS") service makes use of our purpose-built BroadView scanning technology to scan a number of high-profile South African and European clients. More information can be found here, but the purpose of this post is to introduce it with a basic overview of its deployment.
To give you a better understanding of our coverage, below are a number of statistics from our scanning database.
Number of scans per week: 935 (average)
Number of findings stored: 3 795 963
Number of attribute instances collected: 1 274 016
Number of unique IPs listed as targets: 24 723
Number of unique IPs with issues: 4 931
However, the stats are not the interesting bit. BroadView goes further than simply storing open issues; it also tags interesting characteristics of the targets using "attributes": pieces of information associated with a finding, but not necessarily results in themselves. It is possible to query these attributes and tie them back to hosts, which lets you search across all hosts for matching attributes. The most used attributes are:
So, we have loads of data and it makes for interesting analysis.
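To give a feel for what querying attributes and tying them back to hosts looks like, here's a purely hypothetical sketch (made-up hosts and attribute names; this is not BroadView's actual schema or API):

```python
# Hypothetical illustration only -- this is not BroadView's real data model
# or API. Attributes are (host, name, value) facts hung off findings, and
# querying them lets you ask "which hosts share this characteristic?".
attributes = [
    ("192.0.2.10",   "banner",     "Apache/2.2.3"),
    ("192.0.2.10",   "ssl-issuer", "Thawte"),
    ("192.0.2.44",   "banner",     "Apache/2.2.3"),
    ("192.0.2.44",   "open-port",  "443"),
    ("198.51.100.7", "ssl-issuer", "Thawte"),
]

def hosts_with(name, value):
    """Every host carrying a given attribute name/value pair."""
    return sorted({host for host, n, v in attributes if n == name and v == value})

print(hosts_with("banner", "Apache/2.2.3"))    # ['192.0.2.10', '192.0.2.44']
print(hosts_with("ssl-issuer", "Thawte"))      # ['192.0.2.10', '198.51.100.7']
```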
The number of targets with potential webservers: 918
And breaking it down further:
The top 3 SSL certificate issuers used:
Next time, more about the dashboard and the blizzards.