By the year 2015 sub-Saharan Africa will have more people with mobile network access than with access to electricity at home. This remarkable fact from a 2011 MobileMonday report came to mind again as I read an article just yesterday about the introduction of Mobile Money in the UK: by the start of next year, every bank customer in the country may have the ability to transfer cash between bank accounts using an app on their mobile phone.
I originally came across the MobileMonday report while researching the question of mobility and security in Africa for a conference I was asked to present at. In this presentation I examine the global growth and impact of the so-called mobile revolution and then its relevance to Africa, before looking at some of the potential security implications this revolution will have.
The bit about the mobile revolution is easy: according to the Economist there will be 10 billion mobile devices connected to the Internet by 2020, and the number of mobile devices will surpass the number of PCs and laptops this year already. The mobile-only Internet population will grow 56-fold from 14 million at the end of 2010 to 788 million by the end of 2015. Consumerization - the trend for new information technology to emerge first in the consumer market and then spread into business organizations, resulting in the convergence of the IT and consumer electronics industries - implies that the end-user is defining the roadmap for these technologies as manufacturers, networks and businesses scramble desperately to absorb their impact.
Africa, languishing behind in so many other respects, is right there on the rushing face of this new wave, as my initial quote illustrates. In fact the kind of mobile payment technology referred to in the BBC article is already quite prevalent in our home markets in Africa, and we're frequently engaged to test mobile application security in various forms. In my presentation, for example, I make reference to m-Pesa - the mobile payments system launched in Kenya and now mimicked in South Africa as well. Six million people in Kenya use m-Pesa, and more than 5% of that country's annual GDP is moved to and fro directly from mobile to mobile. There are nearly five times as many m-Pesa outlets as the total number of PostBank branches, post offices, bank branches and automated teller machines (ATMs) in the country combined.
Closer to home in South Africa, it is estimated that the number of people with mobile phones outstrips the number of people with fixed-line Internet connections by a factor of ten! And this impacts our clients and their businesses directly: approximately 44% of urban cellphone users in South Africa now make use of mobile banking services. The reasoning is clear: where fixed infrastructure is poor, mobile will dominate, and where mobile dominates, mobile services will soon follow. Mobile banking, mobile wallets, mobile TV, mobile social networking and mobile strong-authentication systems are all already prevalent here in South Africa, and they are bringing with them the expected new array of security challenges. Understanding this is one of the reasons our customers come to us.
In my presentation I describe the Mobile Threat Model as having three key facets:
The technical security issues we discover on mobile devices and mobile applications today are really no different from what we've been finding in other environments for years. There are some interesting new variations and interesting new attack vectors, but it's really just a new flavor of the same thing. But there are four attributes of the modern mobile landscape that combine to present us with an entirely new challenge:
Firstly, mobiles are highly connected. The mobile phone is permanently on some IP network and by extension permanently on the Internet. However, it's also connected via GSM and CDMA; it's connected with your PC via USB, your Bluetooth headset and your GPS, and soon it will be connected with other devices in your vicinity via NFC. Never before in our history have communications been so converged, and all via the wallet-sized device in your pocket right now!
Secondly, the mobile device is deeply integrated. On or through this platform is everything anyone would ever want to know about you: your location, your phone calls, your messages, your personal data, your photos, your location history and your entire social network. Indeed, in an increasing number of technical paradigms, your mobile device is you! Moreover, the device has the ability to collect, store and transmit everything you say, see and hear, and everywhere you go!
Thirdly, as I've pointed out, mobile devices are incredibly widely distributed. Basically, everyone has one or soon will. And we're rapidly steering towards a homogeneous environment defined by iOS and Google's Android. Imagine the effect this has on the value of an exploit or attack vector. Finally, the mobile landscape is still being very, very poorly managed. Except for the Apple AppStore, and recent advances by Google to manage the Android Market, there is very little by way of standardization, automated patching or central management to be seen. Most devices, once deployed, will stay in commission for years to come, and so security mistakes being made now are likely to become a nightmare for us in the future.
Thus, the technical issues well known from years of security testing in traditional environments are destined to prevail in mobile, and we're already seeing this in the environments we've tested. This reality, combined with how connected, integrated, distributed and poorly managed these platforms are, suggests that careless decisions today could cost us very dearly in the future...
A longish post, but this wasn't going to fit into 140 characters. This is an argument pertaining to security metrics: it contends that pure vulnerability-count-based metrics are insufficient for describing an organisation's application (in)security, and suggests an alternative approach. Comments welcome.
Apart from the two bookends (SOSS and DBIR), other metrics are also published.
From a testing perspective, WhiteHat releases perhaps the most well-known set of metrics for appsec bugs, and in years gone by, Corsaire released statistics covering their customers. Also in 2008, WASC undertook a project to provide metrics with data sourced from a number of companies; however, this too has not seen recent activity (the last edit on the site was over a year ago). WhiteHat's metrics measure the number of serious vulnerabilities in each site (High, Critical, Urgent) and then slice and dice this based on the vulnerability's classification, the organisation's size, and the vertical within which it lies. WhiteHat is also in the rare position of being able to record remediation times with higher granularity than appsec firms that engage with customers through projects rather than service contracts. Corsaire's approach was slightly different: they recorded metrics in terms of the classification of the vulnerability, its impact and the year within which the issue was found. Their report contained similar metrics to the WhiteHat report (e.g. % of apps with XSS), but the inclusion of data from multiple years permitted them to extract trends from their data. (No doubt WhiteHat has trending data; however, it was absent from the last report.) Lastly, WASC's approach is very similar to WhiteHat's, in that a point in time is selected and vulnerability counts according to impact and classification are provided for that point.
Essentially, each of these approaches uses a base metric of vulnerability tallies, which are then viewed from different angles (classification, time-series, impact). While the metrics are collected per-application, they are easily aggregated into organisations.
At the extreme edge of ideal metrics, the ability to factor in chains of vulnerabilities that individually present little risk, but that combined pose a risk greater than the sum of the parts, would be fantastic. This aspect is ignored by most (including us), as a fruitful path isn't clear.
One could just as easily claim that absolute bug counts are irrelevant and that they need to be relative to some other scale, commonly the number of applications an organisation has. However, if the metrics don't provide enough granularity to accurately position your organisation with respect to the others you actually care about, then they're worthless to you in decision making. What drives many of our customers is not where they stand in relation to every other organisation, but specifically where they stand against their peers and competitors. It's slightly ironic that oftentimes the more metrics released, the less applicable they are to individual companies. As a bank, knowing you're in the top 10% of a sample of banking organisations means something; when you're in the top 10% of a survey that includes WebGoat clones, the results are much less clear.
In Seven Myths About Information Security Metrics, Dr Hinson raises a number of interesting points about security metrics. They're mostly applicable to security awareness, however they also carry across into other security activities. At least two serve my selfish needs, so I'll quote them here:
Myth 1: Metrics must be “objective” and “tangible”
There is a subtle but important distinction between measuring subjective factors and measuring subjectively. It is relatively easy to measure “tangible” or objective things (the number of virus incidents, or the number of people trained). This normally gives a huge bias towards such metrics in most measurement systems, and a bias against measuring intangible things (such as level of security awareness). In fact, “intangible” or subjective things can be measured objectively, but we need to be reasonably smart about it (e.g., by using interviews, surveys and audits). Given the intangible nature of security awareness, it is definitely worth putting effort into the measurement of subjective factors, rather than relying entirely on easy-to-measure but largely irrelevant objective factors. [G Hinson]
Myth 3: We need absolute measurements
For some unfathomable reason, people often assume we need “absolute measures”—height in meters, weight in pounds, etc. This is nonsense!
If I line up the people in your department against a wall, I can easily tell who is tallest, with no rulers in sight. This yet again leads to an unnecessary bias in many measurement systems. In fact, relative values are often more useful than absolute scales, especially to drive improvement. Consider this for instance: “Tell me, on an (arbitrary) scale from one to ten, how security aware are the people in your department? OK, I'll be back next month to ask you the same question!” We need not define the scale formally, as long as the person being asked (a) has his own mental model of the processes and (b) appreciates the need to improve them. We needn't even worry about minor variations in the scoring scale from month to month, as long as our objective of promoting improvement is met. Benchmarking and best practice transfer are good examples of this kind of thinking. “I don't expect us to be perfect, but I'd like us to be at least as good as standard X or company Y.” [G Hinson]
While he writes from the view of an organisation trying to decide whether their security awareness program is yielding dividends, the core statements are applicable for organisations seeking to determine the efficacy of their software security program. I'm particularly drawn by two points: the first is that intangibles are as useful as concrete metrics, and the second is that absolute measurements aren't necessary, comparative ordering is sometimes enough.
Measuring effort, or attacker cost, is not new to security, but it's mostly done indirectly through the sale of exploits (e.g. iDefense, ZDI). Even here, effort is not directly related to the purchase price, which is also influenced by other factors such as the number of deployed targets. In any case, for the custom applications that testers are mostly presented with, such public sources are of little help (if your testers are submitting findings to ZDI, you have bigger problems). Every now and then, an exploit-dev team will mention how long it took them to write an exploit for some weird Windows bug; these are always interesting data points, but they are not specific enough for customers, and the sample size is low.
Ideally, any measure of an attacker's cost would take into account both time and the attacker's exclusivity (or experience); in practice, however, this will be tough to gather from your testers. One could base it on their hourly rate, if your testing company differentiates between resources. In cases where they don't, or where you're seeking to keep the metric simple, another estimate for effort is the number of days spent on testing.
Returning to our sample companies: if the 5 vulnerabilities exposed in the Visigoths' application each required, on average, a single day to find, while the Ostrogoths' 20 bugs averaged 5 days each, then the effort required by an attacker is minimised by choosing to target the Visigoths. In other words, one might argue that the Visigoths are more at risk than the Ostrogoths.
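As a sketch, the base metric can be computed directly from the example above. The figures are the hypothetical ones from the text; the function name and structure are mine:

```python
def findings_per_day(findings: int, total_testing_days: float) -> float:
    """Base metric: serious findings uncovered per day of testing."""
    return findings / total_testing_days

# Visigoths: 5 bugs, each taking ~1 day to find -> 5 testing days total.
visigoths = findings_per_day(5, 5 * 1)
# Ostrogoths: 20 bugs, each taking ~5 days to find -> 100 testing days total.
ostrogoths = findings_per_day(20, 20 * 5)

# A higher rate means an attacker needs less effort per bug, so despite
# having fewer bugs overall, the Visigoths are the cheaper target.
print(visigoths)   # 1.0 finding per testing day
print(ostrogoths)  # 0.2 findings per testing day
```

Note that the raw bug count points the other way (20 vs 5); the effort-based view is what inverts the conclusion.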
With this base metric, it's then possible to capture historical assessment data and provide both internal-looking metrics for an organisation and comparative metrics, if the testing company is also employed by your competitors. Internal metrics are the usual kinds (impact, classification, time-series), but the comparison option is very interesting. We're in the fortunate position of working with many top companies locally, and are able to compare competitors using this metric as a base. The actual ranking formula is largely unimportant here. Naturally, data must be anonymised so as to protect names; one could provide the customer with their rank only. In this way, the customer has an independent notion of how their security activities rate against their peers without embarrassing the peers.
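A minimal sketch of the anonymised ranking idea, under the assumption that ranking is done on the inverse metric (average testing days per serious finding, so more days means a harder target). The organisation names and rates below are invented for illustration:

```python
# Hypothetical findings-per-testing-day rates for three peer organisations.
findings_per_day = {"BankA": 0.7, "BankB": 0.25, "BankC": 1.1}

# Invert to days-per-finding: higher means more attacker effort required.
days_per_finding = {org: 1 / rate for org, rate in findings_per_day.items()}

# Rank descending: the hardest target (most days per finding) ranks first.
ranking = sorted(days_per_finding, key=days_per_finding.get, reverse=True)

def rank_of(org: str) -> int:
    """Report a customer only its own 1-based rank, keeping peers anonymous."""
    return ranking.index(org) + 1

print(rank_of("BankA"))  # 2 (of 3): peers stay unnamed in the report
```

Each customer sees only its own position, which preserves the comparative value of the data without disclosing anyone else's results.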
Inverting the findings-per-day metric provides the average number of days needed to find a particular class of vulnerability, or impact level. That is, if a client averages 0.7 High or Critical findings per testing day, then on average it takes us about 1.4 days of testing to find an issue of great concern, which is an easy way of expressing the base metric.
As mentioned above, a minimum number of assessments is needed before the metric is reliable; this hints at a deeper problem, namely that randomly selected project days are not independent. An analyst stuck on a 4-week project is focused on a very small part of the broader organisation's application landscape. We counter this bias by including as many projects of the same type as possible.
This metric would also be very useful to include in each subsequent report for the customer, with every report containing an evaluation against their long-term vulnerability averages.
As mentioned above, a key test for metrics is where they support decision making, and the feedback from the client was positive in this regard.
This idea is still being fleshed out. If you're aware of previous work in this regard or have suggestions on how to improve it (even abandon it) please get in contact.
Oh, and if you've read this far and are looking for training, we're at BH in August.
Considering how freely I've ranted on our blog over the past few years, I found it incredibly hard to write this post. SensePost has been my home for the better part of a decade, and I have been email@example.com much more than I have been haroon meer.
In truly boring last-post manner, I wanted to quickly say thanks to everyone for making it such a fun ride. From the awesome people who took a chance on us when we were scarily young and foolish, to the guys (and girls) who joined us to help make SP elite. From the many customers who tolerated my sloppy dressing, to Secure Data Holdings, who have been awesome in every interaction we have ever had with them. From the people who have used our tools, read our work and contributed ideas, to the people who read this blog (Hi Mom!).
Seriously.. thanks muchly!
It's been an awesome 10 years and with the quality of guys that remain at SensePost, it's a safe bet that the next 10 are going to be even better..
The question that everyone asks me is "what now?". The short answer still has 2 parts..
With Penetration Testing and Research over the past while I've spent a lot of time and energy trying to find new ways to break stuff, and new ways to break into stuff.. (it's been incredibly fun!)
I'm hoping now to be able to aim the same sort of bull-headedness at defending stuff, and at building solutions that give applications and networks a fighting chance.
I'll still pop in occasionally at the SensePost offices (mainly to have the coffee and lose at foosball), and my relationship with Secure Data Holdings also remains intact (Other than our historical relationship, Thinkst is doing some consulting work for SDH, making them our first customer!). Hey.. you might even still find me bending your ear on this blog..
So.. all that remains is to say thanks again.. it's been amazingly fun, incredibly rewarding and "rockingly leet"
So... because I don't have a report to write this weekend, I've had some time to ponder and reflect on stuff (and read my mail). I thought I'd share some things that came back to the fore of my mind while reading a newsletter.
Since the early days of playing competitive sport (in those days it was paintball), I've always been astounded by the intensity of the emotions involved when you win and when you lose. Particularly how, when you are on a losing streak (or your personal game just sucks), it's really tough to drag yourself out of that and come back kicking ass. I hate to lose... I really hate it...
That stuff started to make a lot more sense to me when I was older and started fighting - when people tell you fighting is 95% mental and 5% physical, don't think they are being cute - it's spot on. My coach at the time worked a lot on mental game stuff, and the improvements were very tangible. Taking a solid punch and not even blinking as you give a harder one back, even when you can barely see anything but stars, takes an almost iron will, and equally can destroy an opponent's resolve.
MH, Bradley, the Panda and I had a similar talk around this just on Thursday in the chill room, where the talk was about penalty taking in soccer, and to protect the innocent we won't go into any more specific detail on this. Suffice to say that it's tough... very... :>
The same principles are of course applicable to life in general (in fact I've tried to apply that thinking to other areas of my life too and wouldn't have it any other way) and to business. MH blogged before his trip about a book he read called The Dip, by Seth Godin. I also read this book about a week or two ago, and it really does say a lot without saying a lot...things we think we should know...hmmm
Without going into too much detail, the book basically talks about how most people quit something at the most inopportune moment - when they are in a dip and success is just on the other side. Also, people tend to stick with stuff that is going nowhere for fear of quitting... and end up wasting their lives / effort / potential etc.
I read this from Napoleon Hill this morning:
Most failures could have been converted into successes if someone had held on another minute or made more effort.
When you have the potential for success within you, adversity and temporary defeat only help you prepare to reach great heights of success. Without adversity, you would never develop the qualities of reliability, loyalty, humility, and perseverance that are so essential to enduring success.
Many people have escaped the jaws of defeat and achieved great victories because they would not allow themselves to fail. When your escape routes are all closed, you will be surprised how quickly you will find the path to success.
I'm also reading a book at the moment on some of the greatest traders and how they had success. The one thing that sticks out the most in my mind is that those who were good were able to detach themselves emotionally from their wins, and particularly their losses.
Some of these guys were able to recover from being far in the red to becoming centi-millionaires. How? By taking the losses, understanding they will come by nature of the business, and pushing on through the worst of times, keeping their composure and not giving up.
What's interesting is that if you look at our work, particularly some assessments, this same situation holds true. I believe that what separates those who are good from those who are merely OK is how hard you push and where you give up when you are down. I saw it on a number of projects I had over the last year or so... when I was about ready to quit, way outside of my comfort zone, tired and sick, I pushed on and ultimately got some solid rapage at the end.
What makes us great at what we do, in my opinion, is the attitude, not the technical skill - skills are easy to pick up - the competitiveness and will to fight for it is what makes a potentially good hax0r a great one. Most of us take our work personally (I know I do), and I'd rather be great and tired than average and comfortable.
To sum up my arb ramblings, I had some stuff in my life recently that I really don't want to go into, but that made me question a lot of things, and really messed with my attitude. Taking a step back, looking at the big picture, riding the wins and cutting the losses early, knowing where to push and where to quit, and pushing it where I was sick, tired or hurt got me through on the good side.
Maybe it works for someone....maybe it doesn't...but just putting it out there :>