In light of recent mass hacks (HBGary, Sony, Nintendo, etc.) one would have thought that, collectively, companies would take notice and become at least slightly more aware of the potential implications of vulnerabilities in public-facing services.
The problem appears to be that these hacks, and indeed these hackers, aren't that technically superior; more often than not they take advantage of simple flaws. Some flaws, like SQL injection, provide so much access on their own that a fairly grim attack scenario can be painted from a single finding. Often, however, attackers don't require such extravagant flaws to gain access: a chain of "low risk" flaws can be far deadlier than a single high-impact one.
We had an interesting scenario recently that demonstrated this. It is one example of how we use these minor flaws to gain access, and of how the house of cards can fall quite spectacularly when basic security principles are not adhered to. We were on a fairly bread-and-butter security assessment: analyse the target (a large multinational) and determine its weaknesses from an unauthenticated perspective. Increasingly we advise against unauthenticated assessments, as we feel we offer more value when you assume the shell is already cracked, but this was a special case.
The web application was good; it soon became clear that the developers had followed secure-development guidelines and ensured that common attacks were handled in a suitable manner. What they hadn't done, however, was apply a stringent hardening process to the server itself, and as the platitude goes, "security is only as good as the weakest link."
The analyst had already obtained all administrative usernames and passwords (stored in a database with no protection at all) and had logged into a number of the accounts to confirm access. My email reporting this, forwarded in the clear, was sent to everyone involved with a stern "fix it". Since we had access to the mailboxes, we saw an admin send the reply: "..have changed the password. Ask them to check if the password is strong now, there's no way they can get in now."
Yes, the new password was indeed strong and certainly constructed in a recommended manner, but the administrator's account was already compromised and we were monitoring communications. This wasn't an appropriate response; it's a bit like using AV to clean a virus from a box post-infection instead of rebuilding it. Some data mining through multiple unencrypted mailboxes provided numerous credentials for other servers inside the network. We could pivot through the internal network to our hearts' content, monitoring their comms all the while to make sure our supply line wasn't under threat.
Takeaway I: Once an attacker is inside the perimeter, trying to control intruder access at the perimeter is a game you've already lost. In other words, blocking the path in doesn't mean you've blocked the paths in use.
Starting with a simple directory-listing flaw, one that Nessus rates as a "low" risk, the house of cards fell at an alarming rate. Security isn't about looking only at the high risks, because attackers won't limit themselves that way.
To this end, different types of session identifiers were examined. The thinking was that by brute-forcing session IDs instead of credentials, monitoring systems might be less likely to pick up the attack, and the cloud gives an attacker vast amounts of bandwidth and processing power that were not previously available. Even with access to cloud resources, however, most "strong" session IDs are still large enough to defeat this attack (think 128-bit sessions such as those stored in ASP.NET cookies).
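To put a number on that, a back-of-envelope calculation shows why a 128-bit identifier space is out of reach even for a well-resourced attacker; the guess rate below is an assumption for illustration, not a measured figure:

```python
# Back-of-envelope: expected time to brute-force a random 128-bit session ID.
# The guess rate is an assumed, generously optimistic cloud-scale figure.

keyspace = 2 ** 128                  # possible 128-bit session identifiers
guesses_per_second = 10 ** 9         # assumed attacker throughput: one billion IDs/sec
seconds_per_year = 60 * 60 * 24 * 365

# On average an attacker finds the ID after searching half the keyspace.
years = keyspace / (2 * guesses_per_second * seconds_per_year)
print(f"Expected brute-force time: {years:.2e} years")
```

Even shaving many orders of magnitude off the result leaves the attack hopeless, which is why the reset-link angle below becomes interesting: the token only has to be guessable, not the session.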
Of course, authentication tokens are not necessarily only stored in session carriers such as cookies/urls/hidden fields. A number of sites use a randomly generated link to effect a password reset, and if these random links can be brute-forced then the attacker still gains access to the account.
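For contrast, generating an unguessable reset link is cheap: the token just needs to carry session-grade entropy (and should be single-use with a short expiry). A minimal sketch using Python's `secrets` module; the function name is ours, not any particular site's API:

```python
import secrets

def make_reset_token() -> str:
    # 32 random bytes ~ 256 bits of entropy, URL-safe encoded.
    # A real implementation would also make the token single-use
    # and expire it after a short window.
    return secrets.token_urlsafe(32)

print(make_reset_token())
```

Sites whose reset links fall short of this bar are the ones the brute-force approach below targets.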
Thus, in the following set of videos we show how an attacker can generate a huge number of password reset links, each of which is valid for the target account (he doesn't get to see the links themselves). The final step would be to randomly guess links until one hits (left as an exercise for the reader).
Users can sign up for free trial accounts and upload/store/share files via the web interface, which is where authentication is handled. There were also client-side options, but we didn't examine these.
Here we show how the password reset process works for SugarSync.
The next video is short, and shows the execution of a Python script that submits many password reset requests for a single account.
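The script from the video isn't reproduced here, but the idea behind it can be sketched in a few lines. Everything specific below is a placeholder: the URL, form field name and account are invented for illustration, not SugarSync's actual reset interface:

```python
# Sketch only: repeatedly submitting a password-reset form for one account.
# The endpoint URL and form field name are invented placeholders.
import urllib.parse

RESET_URL = "https://example.com/reset"   # placeholder endpoint
ACCOUNT = "victim@example.com"            # placeholder target account

def build_reset_request(email: str) -> bytes:
    """Encode the POST body one reset submission would carry."""
    return urllib.parse.urlencode({"email": email}).encode()

# Each body would be POSTed to RESET_URL (the network call is omitted here),
# causing the site to mint one more valid reset link for the same account.
bodies = [build_reset_request(ACCOUNT) for _ in range(1000)]
print(len(bodies))
```

The point of the sketch is how little the attacker needs: no authentication, no session, just an endpoint willing to mint tokens on demand.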
The final SugarSync video shows the masses of reset emails that were sent to the user.
Two items were of interest:
The recent widespread carnage caused by the Conficker worm is astounding, but is also comforting, in a strange way.
It has been a good few years since the world saw a worm outbreak of this magnitude. Indeed, since the Code Red, Slammer and Blaster days, things have been fairly quiet on the Interwebs front.
As a community, it seems we very quickly forgot the pain caused by these collective strains of evil. Many people proclaimed the end of issues of that particular bent, whether as a result of the hasty post-worm panic-buying of preventative technologies and their relatives, or because more faith was placed in software vendors preventing easily "wormable" holes in their software.
Needless to say, Conficker turned those theories a little on their head. Wikipedia notes on the impact of the worm gleaned from various sources seem to say it all:
The New York Times reported that Conficker had infected 9 million PCs by 22 January 2009, while The Guardian estimated 3.5 million infected PCs. By 16 January 2009, antivirus software vendor F-Secure reported that Conficker had infected almost 9 million PCs. As of January 26 2009, Conficker had infected more than 15 million computers, *making it one of the most widespread infections in recent times*.
We saw similar turmoil when a large organization in South Africa was hit incredibly hard by this worm, and was struggling to resolve the resulting chaos, even with the assistance of their security software vendors. Thankfully, it all ended happily for them, as the issue was resolved, but it's plain to see where this could go wrong and affect many organizations similarly.
I did mention up front that I found this all to be comforting (granted, this may be a slightly twisted viewpoint, but it really is how I feel about it). The reason I find this comforting is that perhaps as a collective, we needed a fresh wake-up call. They say that complacency kills, and I know that many organizations have become rather complacent of late...
Consider how Conficker works and spreads - missing patches leading to RPC-based buffer overflows in the Microsoft Server Service, brute-force attacks on weak passwords, spreading through file shares...hold on...does this sound at all familiar? Aren't these issues all addressed by basic security best practices 1 oh 1?
Organizations that had adopted reasonably robust internal security measures - hardening and patching policies, internal security assessments, solid internal vulnerability and compliance management solutions - would have smiled through the Conficker onslaught.
I don't say this only because we play squarely in the assessment and vulnerability management spaces - I say it because the same steps that would have protected against Code Red, Slammer, Blaster and friends would have protected against Conficker... best practice 1 oh 1.
I guess every now and then, we all need a reminder of just how essential the basics that we all tend to overlook actually are :>
A long time ago I blogged on the joys of using VBS to automate brute-forcing [1|2] when one didn't want to mess about duplicating an application's functionality at the protocol level.. Yesterday I had need to brute-force a web application which tried hard to be difficult and annoying..
This was quick and dirty; if I had more time I would have chosen to read the results and only screenshot those that didn't match "your credentials are invalid".. ahh.. for another day..
*** a word of warning.. AppleScript is described as "an English-like language used to create script files that control the actions of the computer and the applications that run on it." This English-like-ness makes it extremely obtuse at times..
In a subsequent version of the brute force, I wished to use the username from my list and the user's first name as his password. Now this is an obvious call for a hash/dictionary/associative array.. The sparse documentation I was able to find on AppleScript records did not appear to help me a jot (but this could just be poor Google skills).
Instead I opted for saving the username and password as a ":"-delimited string. I then split the string at runtime and submit as before.. ugly, but effective..
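AppleScript's delimiters aside, the trick itself fits in a few lines; shown here in Python for illustration, with invented usernames and first names:

```python
# The ":"-delimited workaround, in Python for illustration
# (the original script was AppleScript; these entries are made up).
entries = ["jsmith:John", "mbrown:Mary"]   # username:FirstName, one per list entry

# Split once at the first colon, recovering the pair at runtime.
pairs = [entry.split(":", 1) for entry in entries]

for username, password in pairs:
    # Each pair would be handed to the brute-force submission routine here.
    print(username, password)
```

Splitting only at the first colon (`maxsplit=1`) keeps the scheme safe even if a first name were ever to contain the delimiter character itself.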
Richard Bejtlich didn't give the pre-release a glowing review, but I know at least a few people waiting eagerly to get their hands on the new "Fuzzing: Brute Force Vulnerability Discovery" by Michael Sutton, Adam Greene, and Pedram Amini. Pedram is the mastermind behind PaiMei and started OpenRCE, but his last blog post points to the book's dedication page, and it probably makes the book worth buying all on its own..