This week, Charl van der Walt and I (Saurabh) spoke at the Mobile Security Summit organised by IIR (http://www.iir.co.za/detail.php?e=2389).
Charl was the keynote speaker and presented his insights on the impact of mobile-device adoption throughout Africa and the subsequent rise of security-related risks. During his talk, he addressed the following:
I spoke on iPhone and Android security, demonstrating the ease with which mobile security can be breached and presenting some live demos. Below is the agenda of my talk:
The original presentation can be downloaded from the link below:
While doing some thinking on threat modelling, I started examining the usual drivers of security spend and controls in an organisation. I've spent time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could), security consulting (tried to help people fix stuff), and even dabbled in trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes, and findings, or how I tried to.

This is a write-up of what I believe those drivers to be (caveat: this is my opinion). It's certainly not universal: it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position imposes on them. What I'd like you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk-management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).
Tick-box monkeys though they may be, auditors provide a useful function, and auditing is so universally legislated and embedded in best practice that everyone has a few decades of experience on either the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:
But security vendors' prioritisation of controls is driven by:
Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, thanks to the publicity around it, garnered significant interest from people who usually wouldn't, and probably shouldn't, have cared so much. Many pentesters trade on this publicity, and some pentesting companies use it in place of a marketing budget. That's not their only, or even primary, motivation, and in the end things get fixed, new techniques are shared, and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:
Unfortunately, as human beings, our decisions are coloured by a bunch of things that cause us to make choices influenced or defined by factors other than the reality we face. A couple of these lead us to prioritise different security motives when decision-making rests solely with one person:
The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe threat modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside a “most critical” list), and that incorporates the valid perspectives above, tries to provide an objective version of reality that isn't as vulnerable to the individual biases described above.
Until recently, there was a distinct lack of decent, high-quality technical security conferences held in the United Kingdom. As home to a global financial centre, London has no shortage of industries that require secure applications and rely on secure infrastructure to operate.
With this in mind, 44Con is the first combined information security conference and training event to be held in Central London. The con will provide business and technical tracks aimed at government, the public sector, financial institutions, security professionals, and Chief Security Officers.
SensePost will be attending the conference, with Ian de Villiers giving a ground-breaking talk on intercepting and modifying the protocol used by SAP GUI, including the release of a tool that facilitates assessments of said protocol. In addition, Ian and Daniel Cuthbert will be delivering a training course aimed at educating developers, and those involved in the deployments and life-cycle of applications, on the correct approaches required to protect applications from common threats.
Unlike other developer-centric courses, developers will actively be involved in breaking into their fellow students' applications, whilst those students try to prevent the attacks from taking place.
44Con will be held at the Grange City Hotel in London from 30 August to 2 September.
Dominic is currently in the air somewhere over the Atlantic, returning from a long trip that included BlackHat, DefCon and lastly Metricon6, where he spoke on a threat model approach that he has picked up and fleshed out. He has promised a full(er) write-up on his glorious return, however in the meantime his slides are below. An updated copy of the CTM tool is on the CTM page, as is the demonstration dashboard (a nifty spreadsheet-from-the-deep that interactively provides various views on your threat model).
In light of recent mass hacks (HBGary, Sony, Nintendo, etc.) one would have thought that, collectively, companies would take notice and be at least slightly more aware of the potential implications of vulnerabilities in public-facing services.
The problem appears to be that these hacks, and indeed these hackers, aren't that technically superior and, more often than not, take advantage of simple flaws. Some flaws, like SQL injection, provide so much access on their own that a fairly grim attack scenario can be painted. Often, however, attackers don't require such extravagant flaws to gain access: chained attacks using "low risk" flaws can be far deadlier than a single critical one.
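To illustrate how simple the SQL injection class of flaw really is, here's a minimal, self-contained sketch (hypothetical table and credentials, using an in-memory SQLite database) contrasting string-built queries with parameterised ones:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

def login_vulnerable(name, password):
    # String concatenation: attacker-controlled input becomes part of the SQL.
    query = ("SELECT * FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Parameterised query: input is bound as data, never interpreted as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# The classic bypass: a crafted "password" closes the string literal and
# short-circuits the WHERE clause.
payload = "' OR '1'='1"
print(login_vulnerable("admin", payload))  # True: authentication bypassed
print(login_safe("admin", payload))        # False: payload treated as data
```

Nothing here is sophisticated, which is exactly the point: the fix is a one-line change, yet the unfixed version hands over the whole table.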
We had an interesting scenario recently that demonstrated this. It's one example of how we use these minor flaws to gain access, and of how the house of cards can fall quite spectacularly when basic security principles are not adhered to. We were on a fairly bread-and-butter security assessment: perform an analysis of the target (a large multinational) and determine where their weaknesses were from an unauthenticated perspective. Increasingly, we advise against unauthenticated assessments, as we feel we can offer more value when you assume the shell is already cracked, but this was a special case.
The web application was good; it soon became clear that the developers had followed guidelines for the development of secure applications and ensured that common attacks were handled in a suitable manner. What they didn't do, however, was apply a stringent hardening process to the server itself, and as the platitude goes, "security is only as good as the weakest link."
The analyst had already obtained all administrative usernames and passwords (stored in a database with no protection at all) and had logged into a number of accounts to confirm access. My email reporting this, with a stern "fix it", was sent to all involved and then forwarded around in the clear. Since we had access to the mailboxes, we saw an admin send the reply: "..have changed the password. Ask them to check if the password is strong now, there's no way they can get in now."
Yes, the password was indeed strong and certainly constructed in a recommended manner, but the administrator's account was already compromised and we were monitoring communications. This wasn't an appropriate response; it's a bit like using AV to clean a virus from a box post-infection instead of rebuilding it. Some data mining through multiple unencrypted mailboxes provided numerous credentials for other servers inside the network. We could pivot through the internal network to our hearts' content, while monitoring their comms to make sure our supply line wasn't under threat.
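The credential database above was so useful to us precisely because the passwords sat in it as plaintext. As a minimal sketch (hypothetical passwords, Python standard library only), storing only a salted scrypt digest means a database leak yields hashes that still have to be cracked, rather than ready-to-use credentials:

```python
import hashlib
import hmac
import os

def hash_password(password):
    # A fresh random salt per user defeats precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest  # store both; the salt is not secret

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking where the digests diverge.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

A memory-hard function like scrypt is chosen here over a plain SHA-256 because it makes bulk offline guessing expensive, which is the attack a stolen credential store actually faces.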
Takeaway I: once an attacker is inside the perimeter, trying to control intruder access at the perimeter is a game you've already lost. Blocking the path in doesn't mean you've blocked the paths in use.
Starting with a simple directory-listing flaw, one that Nessus rates as a "low" risk, the house of cards fell at an alarming rate. Security isn't about looking only at the high risks, because attackers won't limit themselves the same way.
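For what it's worth, the initial foothold here is trivially cheap to close. On Apache, for example (a hypothetical vhost path; adjust to your layout), directory indexes can be disabled outright:

```apache
# Hypothetical Apache vhost fragment: disable automatic directory
# listings so a missing index file doesn't enumerate server contents.
<Directory /var/www/html>
    Options -Indexes
</Directory>
```

A one-line hardening change like this is exactly the sort of "low" finding that, left in place, becomes the first card in the house.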