Well, we're ramping up with the new Hacking By Numbers W^3 edition course we will be presenting at BlackHat Vegas this year. This course is a replacement for the Web2.0 course we successfully presented over the past three years and sports a whole bunch of new and improved practicals. We've also upped the technology being used and the presentation is chock-full of ASCII sheep... :)
The new course is an intermediate web application hacking course, and deals with a range of new topics.
I'll be presenting the course again this year, and I'm looking forward to the training and the briefings. Hope to see you all there.
A longish post, but this wasn't going to fit into 140 characters. It's an argument about security metrics: the claim is that pure vulnerability-count metrics are insufficient for talking about an organisation's application (in)security, and an alternative approach is suggested. Comments welcome.
Apart from the two bookends (SOSS and DBIR), other metrics are also published.
From a testing perspective, WhiteHat releases perhaps the best-known set of metrics for appsec bugs, and in years gone by Corsaire released statistics covering their customers. In 2008, WASC also undertook a project to provide metrics with data sourced from a number of companies; however, this has not seen recent activity (the last edit on the site was over a year ago).

WhiteHat's metrics measure the number of serious vulnerabilities in each site (High, Critical, Urgent) and then slice and dice this based on the vulnerability's classification, the organisation's size, and the vertical within which it lies. WhiteHat is also in the almost unique position of being able to record remediation times with higher granularity than appsec firms that engage with customers through projects rather than service contracts.

Corsaire's approach was slightly different: they recorded metrics in terms of the classification of the vulnerability, its impact, and the year in which the issue was found. Their report contained similar metrics to WhiteHat's (e.g. % of apps with XSS), but the inclusion of data from multiple years permitted them to extract trends. (No doubt WhiteHat has trending data; it was simply absent from the last report.) Lastly, WASC's approach is very similar to WhiteHat's, in that a point in time is selected and vulnerability counts according to impact and classification are provided for that point.
Essentially, each of these approaches uses a base metric of vulnerability tallies, which are then viewed from different angles (classification, time-series, impact). While the metrics are collected per-application, they are easily aggregated into organisations.
At the ideal extreme, a metric would also factor in chains of vulnerabilities that individually present little risk but in combination pose more than the sum of their parts. This aspect is ignored by most (including us), as no fruitful path to measuring it is clear.
One could just as easily claim that absolute bug counts are irrelevant and need to be made relative to some other scale, commonly the number of applications an organisation has. Even then, if the metrics don't provide enough granularity to accurately position your organisation with respect to the others you actually care about, they're worthless for decision making. What drives many of our customers is not where they stand in relation to every other organisation, but specifically to their peers and competitors. It's slightly ironic that, oftentimes, the more broadly metrics are gathered, the less applicable they are to individual companies. As a bank, knowing you're in the top 10% of a sample of banking organisations means something; being in the top 10% of a survey that includes WebGoat clones means much less.
In Seven Myths About Information Security Metrics, Dr Hinson raises a number of interesting points about security metrics. They're mostly applicable to security awareness, however they also carry across into other security activities. At least two serve my selfish needs, so I'll quote them here:
Myth 1: Metrics must be “objective” and “tangible”

There is a subtle but important distinction between measuring subjective factors and measuring subjectively. It is relatively easy to measure “tangible” or objective things (the number of virus incidents, or the number of people trained). This normally gives a huge bias towards such metrics in most measurement systems, and a bias against measuring intangible things (such as level of security awareness). In fact, “intangible” or subjective things can be measured objectively, but we need to be reasonably smart about it (e.g., by using interviews, surveys and audits). Given the intangible nature of security awareness, it is definitely worth putting effort into the measurement of subjective factors, rather than relying entirely on easy-to-measure but largely irrelevant objective factors. [G Hinson]
Myth 3: We need absolute measurements

For some unfathomable reason, people often assume we need “absolute measures”—height in meters, weight in pounds, etc. This is nonsense!
If I line up the people in your department against a wall, I can easily tell who is tallest, with no rulers in sight. This yet again leads to an unnecessary bias in many measurement systems. In fact, relative values are often more useful than absolute scales, especially to drive improvement. Consider this for instance: “Tell me, on an (arbitrary) scale from one to ten, how security aware are the people in your department? OK, I'll be back next month to ask you the same question!” We need not define the scale formally, as long as the person being asked (a) has his own mental model of the processes and (b) appreciates the need to improve them. We needn't even worry about minor variations in the scoring scale from month to month, as long as our objective of promoting improvement is met. Benchmarking and best practice transfer are good examples of this kind of thinking. “I don't expect us to be perfect, but I'd like us to be at least as good as standard X or company Y.” [G Hinson]
While he writes from the view of an organisation trying to decide whether their security awareness program is yielding dividends, the core statements are applicable for organisations seeking to determine the efficacy of their software security program. I'm particularly drawn by two points: the first is that intangibles are as useful as concrete metrics, and the second is that absolute measurements aren't necessary, comparative ordering is sometimes enough.
Measuring effort, or attacker cost, is not new to security, but it's mostly done indirectly through the sale of exploits (e.g. iDefence, ZDI). Even here, effort is not directly related to the purchase price, which is also influenced by other factors such as the number of deployed targets. In any case, for the custom applications that testers are mostly presented with, such public sources are of little help (if your testers are submitting findings to ZDI, you have bigger problems). Every now and then, an exploit dev team will mention how long it took them to write an exploit for some weird Windows bug; these are always interesting data points, but they are not specific enough for customers and the sample size is low.
Ideally, any measure of an attacker's cost would take into account both time and exclusivity (or experience); in practice, however, this will be tough to gather from your testers. One could base it on their hourly rate, if the testing company differentiates between resources. Where they don't, or where you want to keep the metric simple, another estimate for effort is the number of days spent on testing.
Returning to our sample companies, if the 5 vulnerabilities exposed in the Visigoths' application each required, on average, a single day to find, while the Ostrogoths' 20 bugs averaged 5 days each, then the effort required by an attacker is minimised by targeting the Visigoths. In other words, one might argue that the Visigoths are more at risk than the Ostrogoths.
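As a rough sketch of this effort metric (the company names and figures are the hypothetical ones from the example above, and the function name is mine, not from any real tooling):

```python
# Attacker-effort metric: average tester-days required per finding.
# Fewer days per finding => cheaper for an attacker => higher risk.

def avg_days_per_finding(findings, testing_days):
    """Average number of testing days needed to uncover one finding."""
    return testing_days / findings

# Hypothetical figures from the example: 5 findings at ~1 day each
# for the Visigoths, 20 findings at ~5 days each for the Ostrogoths.
visigoths = avg_days_per_finding(findings=5, testing_days=5 * 1)
ostrogoths = avg_days_per_finding(findings=20, testing_days=20 * 5)

print(visigoths)   # 1.0 day per finding
print(ostrogoths)  # 5.0 days per finding
```

By this measure the Visigoths' bugs are five times cheaper to find, which is the sense in which they are the more attractive target.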
With this base metric, it's then possible to capture historical assessment data and provide both internal-looking metrics for an organisation as well as comparative metrics, if the testing company is also employed by your competitors. Internal metrics are the usual kinds (impact, classification, time-series), but the comparison option is very interesting. We're in the fortunate position of working with many top companies locally, and are able to compare competitors using this metric as a base. The actual ranking formula is largely unimportant here. Naturally, data must be anonymised so as to protect names; one could provide the customer with their rank only. In this way, the customer has an independent notion of how their security activities rate against their peers without embarrassing the peers.
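A minimal sketch of the rank-only comparison might look like the following. The company names and figures are invented, and the post deliberately leaves the real ranking formula unspecified; this simply orders peers by high-impact findings per testing day and returns one customer's position:

```python
# Rank a customer against anonymised peers by findings-per-day,
# disclosing only their position, never anyone else's figures.

def peer_rank(customer, metrics):
    """Rank 1 = fewest high-impact findings per testing day (best)."""
    scores = sorted(metrics.values())  # metrics includes the customer
    return scores.index(metrics[customer]) + 1

# Hypothetical High/Critical findings per testing day, per company.
metrics = {"Bank A": 0.3, "Bank B": 0.7, "Bank C": 0.5, "Bank D": 1.2}

print(peer_rank("Bank C", metrics))  # 2 -- second-best of four peers
```

Each customer learns only where they sit in the ordering, which preserves the anonymity of the rest of the sample.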
Inverting the findings-per-day metric provides the average number of days to find a particular class of vulnerability, or impact level. That is, if a client averages 0.7 High or Critical findings per testing day, then on average it takes us roughly 1.4 days of testing to find an issue of great concern, which is an easy way of expressing the base metric.
As mentioned above, a minimum number of assessments is needed before the metric is reliable; this hints at a deeper problem: randomly selected project days are not independent. An analyst stuck on a 4-week project is focused on a very small part of the broader organisation's application landscape. We counter this bias by including as many projects of the same type as possible.
This metric would also be very useful to include in each subsequent report for the customer, with every report containing an evaluation against their long-term vulnerability averages.
As mentioned above, a key test for metrics is whether they support decision making, and the feedback from the client was positive in this regard.
This idea is still being fleshed out. If you're aware of previous work in this regard or have suggestions on how to improve it (even abandon it) please get in contact.
Oh, and if you've read this far and are looking for training, we're at BH in August.
The brand new BlackOps HBN course makes its debut in Vegas this year. The course finds its place as a natural follow-on from Bootcamp, and prepares students for the more intense Combat edition. Where Bootcamp focuses on methodology and Combat focuses on thinking, BlackOps covers tools and techniques to brush up your skills.
This course is split into eight segments, covering scripting, targeting, compromise, privilege escalation, pivoting, exfiltration, client-side and even a little exploit writing. BlackOps is different from our other courses in that it is pretty full of tricks, which are needed to move from the methodology of hacking to professional-level pentesting. It's likely to put a little (more) hair on your chest.
Course Name: Hacking By Numbers: BlackOps Edition
Venue: BlackHat Briefings, Caesars Palace Las Vegas, NV
Dates: July 30-31 & August 1-2 2011
Sign up here.
Salut à tous,
It's that time of the year again and, like every year, we'll once again be running our ever-popular "BOOTCAMP EDITION" at the BlackHat Briefings in Las Vegas this July-August. This course is part of our established Hacking by Numbers series. BUT, this year, only the name remains the same. We are slaving away at making this course cutting edge, providing you with hands-on hacking experience on the latest operating systems, application frameworks and programming languages, utilising the latest tools and techniques. Gone are the days of IIS 5.0 and Windows XP [ed: for Bootcamp, maybe... Combat certainly contains an OS older than Win95].
SensePost's Bootcamp edition will provide you with two days of insight into the hacking world. The course is designed to keep a balance between theoretical knowledge & practical experience. Anyone can read a whitepaper about SQL Injection but unless you've exploited it against a real-world application and owned it completely, it all tastes too bland. The Bootcamp spices up the pwning experience.
The training course starts with the basics of hacking, the hacker mindset and methodology, and quickly moves on to the practicalities of modern hacking attacks. Immediately after brainstorming a security concept, students are placed in different attacker scenarios with handy pracsheets to guide them (but no answer sheets to spoon-feed them). Competition is good, and hence we have a few "capture the flag" practicals that'll provoke you to race against each other to grab that surprise gift.
Our trainers are experienced, patient and well groomed. Having a good sense of humor is a requirement here @SensePost, so you don't have to worry about falling asleep.
So, if you're looking to explore the world of hacking or move from newbie to competent in a great environment, remember:
Course Name: Hacking By Numbers: Bootcamp Edition
Venue: BlackHat Briefings, Caesars Palace Las Vegas, NV
Dates: July 30-31 & August 1-2 2011
Sign up here.
See you there!!!
An education isn't how much you have committed to memory, or even how much you know. It's being able to differentiate between what you know and what you don't. - Anatole France
Jobs within Information Security, and indeed Information Technology, are often more than a 9-5 affair for many who choose them as their career. There is a wealth of different technologies, frameworks, approaches and information that you need to understand to perform your job to a suitable level. In IT security specifically, with the pace of technology constantly growing, keeping abreast is often easier said than done.
Locally, there is a severe lack of established courses catering for those new to Information Security, or those looking to obtain a more meaningful qualification. When Rhodes University announced it was offering a Masters course in Information Security here in South Africa, and asked SensePost if we'd like to present a number of modules, we were more than happy to be involved.
Barry Irwin asked us to deliver a weekend of application security: the whys, hows, whats and whens of all things application security. Armed with suitably vulnerable web applications for the students to use and abuse, I made the trip down to Grahamstown in April.
The course started with an understanding of why security has traditionally been hard to implement in the development life-cycle and then moved on to the various challenges faced by those responsible for developing applications. The course drew on the experience of those within SensePost, who have been involved in large application deployments and worked with customers in helping them produce secure applications.
Since all talk and no fun isn't the best approach to learning, students were let loose on commonly deployed applications and taught how to break them. Whilst many have heard the term "SQL injection", doing it correctly for the first time always brings an evil smile to the face of whoever is doing it. As an industry, we are very quick to use acronyms and expect others to know what we are talking about, but we often fail to realise this isn't always the case. From basic authorisation flaws to chained logic flaws, the main areas of abuse were covered.
Besides being told by a few of the students that their brains had exploded, the course went well and everyone enjoyed hacking and learning, even if it was only for a weekend.
It was fantastic to see many reach the "aaah ha!" moment when it all made sense. SensePost has a large training offering, from beginner to advanced courses, and nothing means more to a trainer than when someone understands something they've previously struggled with.
It's a great sign for the country that Rhodes is producing some of South Africa's next Information Security champions, and it's even better knowing that SensePost was helping.
If you wish to learn how to perform security assessments the correct way, SensePost offers a comprehensive suite of training courses. We are also offering training at the BlackHat security conference in Las Vegas in July.
Contact our sales team if you wish to learn more about the training offerings.