While doing some thinking on threat modelling, I started examining what the usual drivers of security spend and controls are in an organisation. I've spent time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could), security consulting (tried to help people fix stuff), and even dabbled in trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes, and findings, or how I tried to. This is a write-up of what I believe those drivers to be (caveat: this is my opinion). It certainly isn't universal, i.e. it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).
The tick-box monkeys themselves: auditors provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:
But security vendors' prioritisation of controls is driven by:
Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug: all that was needed was for people to apply a patch, but the publicity around it garnered a significant amount of interest from people who usually wouldn't, and probably shouldn't, have cared so much. Many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or even primary, motivation, and in the end things get fixed, new techniques get shared and the world is a better place. The cynical view, then, is that some of the motivations for vulnerability researchers, and what they end up prioritising, are:
Unfortunately, as human beings, our decisions are coloured by a bunch of things that cause us to make choices influenced or defined by factors other than the reality we are faced with. A couple of those lead us to prioritising different security motives when decision-making rests solely with one person:
The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe Threat Modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a "most critical" list), and that incorporates the valid perspectives described above, tries to provide an objective version of reality that isn't as vulnerable to any single one of those biases.
Until recently, there was a distinct lack of decent technical security conferences in the United Kingdom. As home to a global financial centre, London has no shortage of industries that require secure applications and rely on secure infrastructure to operate.
With this in mind, 44Con is the first combined information security conference and training event to be held in Central London. The con will provide business and technical tracks aimed at government, the public sector, financial institutions, security professionals and Chief Security Officers.
SensePost will be attending the conference, with Ian de Villiers giving a ground-breaking talk on intercepting and modifying the protocol used by SAP GUI, including the release of a tool that facilitates assessments of said protocol. In addition, Ian and Daniel Cuthbert will be delivering a training course aimed at educating developers, and those involved in the deployment and life-cycle of applications, on the correct approaches required to protect applications from common threats.
Unlike other developer-centric courses, developers will actively be involved in breaking into their fellow students' applications, whilst those students try to prevent the attacks from taking place.
44Con will be held at the Grange City Hotel in London from the 30th of August until the 2nd of September.
Dominic is currently in the air somewhere over the Atlantic, returning from a long trip that included BlackHat, DefCon and, lastly, Metricon6, where he spoke on a threat modelling approach he has picked up and fleshed out. He has promised a full(er) write-up on his glorious return; in the meantime, his slides are below. An updated copy of the CTM tool is on the CTM page, as is the demonstration dashboard (a nifty spreadsheet-from-the-deep that interactively provides various views on your threat model).
In light of recent mass hacks (HBGary, Sony, Nintendo, etc.), one would have thought that, collectively, companies would take notice and at least be slightly more aware of the potential implications vulnerabilities in public-facing services could have.
The problem appears to be that these hacks, and indeed the hackers behind them, aren't that technically superior and, more often than not, take advantage of simple flaws. Some flaws, like SQL injection, provide so much access on their own that a fairly grim attack scenario can be painted. Often, however, attackers don't require such extravagant flaws to gain access: chained attacks built from "low risk" issues can be far more deadly than a single flaw.
We had an interesting scenario recently which demonstrated this. It is one example of how we use these minor flaws to gain access, and it also shows how the house of cards can fall quite spectacularly when basic security principles are not adhered to. We were on a fairly bread-and-butter security assessment: perform an analysis of the target (a large multinational) and determine where their weaknesses were from an unauthenticated perspective. Increasingly, we advise against unauthenticated assessments as we feel we can offer more value when you assume the shell is already cracked, but this was a special case.
The web application was good; it soon became clear that the developers had followed guidelines for the development of secure applications and ensured that common attacks were indeed handled in a suitable manner. What they didn't do, however, was apply a stringent hardening process to the server itself, and as the platitude goes, "security is only as good as the weakest link."
The analyst had already obtained all the administrative user names and passwords (stored in a database with no protection at all) and had logged into a number of them to confirm access. My email, now forwarded in the clear, was sent to all involved with a stern "fix it". Since we had access to the mailboxes, we saw an admin send the reply: "..have changed the password. Ask them to check if the password is strong now, there's no way they can get in now."
Yes, the password was indeed strong and certainly constructed in a recommended manner, but the administrator's account was already compromised and we were monitoring communications. This wasn't an appropriate response; it's a bit like using AV to clean a virus from a box post-infection, instead of rebuilding it. Some data mining through multiple unencrypted mailboxes provided numerous credentials for other servers inside the network. We could pivot through the internal network to our hearts' content, while monitoring their comms to make sure our supply line wasn't under threat.
Takeaway I: once an attacker is inside the perimeter, trying to control intruder access at the perimeter is a game you've already lost. In other words, blocking the path in doesn't mean you've blocked the paths in use.
Starting with a simple directory listing flaw, one that Nessus rates as a "low" risk, the house of cards fell at an alarming rate. Security isn't about looking only at the high risks, because attackers won't limit themselves the same way.
Over the last few years there has been a popular meme about information-centric security as a new paradigm, superseding vulnerability-centric security. I've long struggled with the idea of information-centricity being successful and, in replying to a post by Rob Bainbridge, quickly jotted some of those problems down.
In pre-summary, I'm still sceptical of information-classification approaches (or information-led control implementations) as I feel they target a theoretically sensible idea, but not a practically sensible one.
Information gets stored in information containers (to borrow a phrase from OCTAVE) such as databases or file servers. These will need to inherit a classification based on the information they store. That's easy if it's a single-purpose DB, but what about a SQL cluster (used to reduce processor licenses) or even end-user machines? These have to be moved up the classification chain because they may store some sensitive info, even if they spend the majority of their time pushing not-very-sensitive info around. In the end, the hoped-for cost-saving-and-focus-inducing prioritisation doesn't occur, and you end up having to deploy a significantly higher level of security to most systems. Potentially, you could radically re-engineer your business to segregate data into separate networks, as some PCI de-scoping approaches suggest, but, apart from being a difficult job, this tends to counter many of the business benefits of data and system integrations that led to the cross-pollination in the first place.
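To make the inheritance problem concrete, here's a minimal sketch in Python. The container names, data classes and levels are entirely hypothetical and purely illustrative; the point is simply that a container must be protected to the level of the most sensitive data it touches, so shared infrastructure drags nearly everything up the scale.

```python
# A minimal sketch (hypothetical names) of why shared containers defeat
# information-led prioritisation: each container inherits the classification
# of the most sensitive data it stores or processes.

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

# Which data classes each container touches (illustrative only).
containers = {
    "hr-db-cluster":   ["internal", "secret"],        # shared SQL cluster
    "file-server-01":  ["public", "confidential"],
    "user-laptop-042": ["internal", "secret"],         # cached spreadsheets
    "intranet-wiki":   ["internal"],
}

def inherited_classification(data_classes):
    """A container must be protected to the level of its most sensitive data."""
    return max(data_classes, key=lambda c: LEVELS[c])

for name, data in containers.items():
    print(f"{name:16} -> {inherited_classification(data)}")

# Almost everything comes out "secret" or "confidential", so the
# classification exercise stops telling you where to focus effort.
```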
Next up, I feel this fails to take cognisance of what we call "pivoting": the escalation of privileges by moving from one system, or part of a system, to another. I've seen situations where the low-criticality network monitoring box is what ends up handing out the domain administrator password. It had never been in scope for internal or external audits, none of its vulns showed up on your average scanner, it had no sensitive info, etc. Rather, I think we need to look at physical, network and trust segregation between systems, and then at data. It would be nice to go data-first, but DRM isn't mature (read: simple and widespread) enough to provide us with those controls.
Lastly, I feel information-led approaches often end up missing the value of raw functionality. For example, a critical trade execution system at an investment bank could have very little sensitive data stored on it, but the functionality it provides (i.e. being able to execute trades using that bank's secret sauce) is hugely sensitive and needs to be considered in any prioritisation.
I'm not saying I have the answers, but we've spent a lot of time thinking about how to model the way our analysts attack systems, and whether we could systematically "guess" the results of multiple pentests across an organisation based on the inherent design of its network, systems and authentication. The idea is to use that model to drive prioritisation, or at least a testing plan. This is probably more closely aligned to the idea of a threat-centric approach to security, and suffers from a lack of data in this area (I've started some preliminary work on incorporating VERIS metrics).
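As a rough illustration of the kind of model I mean, and emphatically not our actual tooling, here's a minimal Python sketch: systems as nodes, plausible pivots (shared credentials, trust, network reachability) as edges, and a search for a pivot chain from an external foothold to a high-value target. All host names and edges are hypothetical; it echoes the monitoring-box example above.

```python
# A minimal sketch of a threat-centric attack-path model (hypothetical hosts).
from collections import deque

# Edges are plausible pivots an attacker could make between systems.
pivots = {
    "internet":          ["web-server"],
    "web-server":        ["monitoring-box"],      # weak creds, low-risk findings
    "monitoring-box":    ["domain-controller"],   # stored domain admin password
    "domain-controller": ["trade-db", "mail-server"],
    "mail-server":       ["trade-db"],            # creds mined from mailboxes
}

def attack_path(graph, start, target):
    """Breadth-first search for one shortest pivot chain from start to target."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(attack_path(pivots, "internet", "trade-db")))
# internet -> web-server -> monitoring-box -> domain-controller -> trade-db
```

The "unimportant" monitoring box sits on the path to the crown jewels; a purely data-led classification would never have prioritised it, which is exactly the gap a model like this is meant to expose.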
In summary, I think information-centric security fails in three ways: by providing limited prioritisation due to the high number of shared information containers in IT environments, by not incorporating how attackers move through a network, and by ignoring business-critical functionality.