Rich Mogull (whose stuff I really quite dig) has launched an 'Open Patch Management Survey' via the SecurityMetrics blog. It's an interesting idea, and they plan to release both their analysis *and* the raw data, which might be really insightful for our VMS stuff.
Corporations can take the SurveyMonkey survey at http://www.surveymonkey.com/s.aspx?sm=SjehgbiAl3mR_2b1gauMibQw_3d_3d, and there's some nice material already available at http://securosis.com/projectquant.
Here's the rest of Rich's message (please forgive the cross-post):
Our goal here is to gain an understanding of what people are really doing with regards to patch management, to better align the metrics model with real practices. We're doing something different with this survey. All the results will be made public. We don't just mean the summary results, but the raw data (minus any private or identifiable information that could reveal the source person or organization). Once we hit 100 responses we will release the data in spreadsheet formats. Then, either every week or for every 100 additional responses, we will release updated data. We don't plan on closing this for quite some time, but as with most surveys we expect an initial rush of responses and want to get the data out there quickly. As with all our material, the results will be licensed under Creative Commons.
We will, of course, provide our own analysis, but we think it's important for everyone to be able to evaluate the results for themselves. All questions are optional, but the more you complete the more accurate the results will be. In two spots we ask if you are open for a direct interview, which we will start scheduling right away. Please spread the word far and wide, since the more responses we collect, the more useful the results.
If you fill out the survey as a result of reading this email please use SECURITYMETRICS as the registration code (helps us figure out what channels are working best). This won't affect the results, but we think it might be interesting to track how people found the survey, and which social media channels are more effective.
The recent widespread carnage caused by the Conficker worm is astounding, but is also comforting, in a strange way.
It has been a good few years since the world saw a worm outbreak of this magnitude. Indeed, since the Code Red, Slammer and Blaster days, things have been fairly quiet on the Interwebs front.
As a community, it seems we very quickly forgot the pain caused by those collective strains of evil. Many people proclaimed the end of issues of that particular bent, whether because of the prolific, hastily induced post-worm buying of preventative technologies and their relatives, or because more faith was placed in software vendors preventing easily "wormable" holes in their software.
Needless to say, Conficker turned those theories a little on their head. Wikipedia notes on the impact of the worm gleaned from various sources seem to say it all:
The New York Times reported that Conficker had infected 9 million PCs by 22 January 2009, while The Guardian estimated 3.5 million infected PCs. By 16 January 2009, antivirus software vendor F-Secure reported that Conficker had infected almost 9 million PCs. As of 26 January 2009, Conficker had infected more than 15 million computers, *making it one of the most widespread infections in recent times*.
We saw similar turmoil when a large organization in South Africa was hit incredibly hard by this worm, and was struggling to resolve the resulting chaos, even with the assistance of their security software vendors. Thankfully, it all ended happily for them, as the issue was resolved, but it's plain to see where this could go wrong and affect many organizations similarly.
I did mention up front that I found this all to be comforting (granted, this may be a slightly twisted viewpoint, but it really is how I feel about it). The reason I find this comforting is that perhaps as a collective, we needed a fresh wake-up call. They say that complacency kills, and I know that many organizations have become rather complacent of late...
Consider how Conficker works and spreads - missing patches leading to RPC-based buffer overflows in the Microsoft Server service, brute-force attacks on weak passwords, spreading through file shares... hold on... does this sound at all familiar? Aren't these issues all addressed by basic security best practices 101?
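The weak-password leg of that spread is nothing more exotic than a dictionary attack against admin shares. A minimal sketch of the idea is below - the wordlist and the `check_credentials` stub are purely illustrative stand-ins (a real worm would be attempting SMB authentication against `ADMIN$`), not code from Conficker or any real library:

```python
# Illustrative only: the wordlist and check_credentials() are hypothetical
# stand-ins for an SMB authentication attempt against a remote admin share.
COMMON_PASSWORDS = ["password", "admin", "123456", "letmein", "qwerty"]

def check_credentials(user, password, valid):
    # Stand-in for a real network login attempt; returns True on success.
    return (user, password) == valid

def dictionary_attack(user, valid):
    """Try each common password in turn; return the first one that works."""
    for candidate in COMMON_PASSWORDS:
        if check_credentials(user, candidate, valid):
            return candidate
    return None

# An account with a wordlist password falls immediately:
print(dictionary_attack("administrator", ("administrator", "letmein")))
# A password outside the wordlist survives:
print(dictionary_attack("administrator", ("administrator", "x9$kQ!v2")))
```

The point being: a password policy that keeps accounts off the common wordlists defeats this whole propagation vector, which is exactly the "basics" argument.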
Organizations that had adopted reasonably robust internal security measures - hardening and patching policies, internal security assessments, solid internal vulnerability and compliance management solutions - would have smiled through the Conficker onslaught.
I don't say this only because we play squarely in the assessment and vulnerability management spaces - I say it because the same steps that would have protected against Code Red, Slammer, Blaster and friends would have protected against Conficker... best practices 101.
I guess every now and then, we all need a reminder of just how essential the basics that we all tend to overlook actually are :>
Over at [Rational Survivability], Beaker has coined the term EDoS to describe how "the utility and agility of the cloud computing models such as Amazon AWS (EC2/S3) and the pricing models that go along with them can actually pose a very nasty risk to those who use the cloud to provide service".
Of course, this has kicked off a flurry of responses, ranging from "How is this different from soaking up the bandwidth of people who pay per gig?" to "OMG! That's the new thing... cloud computing is bad".
It is an interesting concept, one we blogged about briefly back in 2007. What makes it interestinger for me is that with a smart enough attacker, the defender is far worse off trying to differentiate valid application requests from the invalid, and black-holing won't be as easy to do.
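The back-of-envelope arithmetic makes the risk concrete. A sketch under assumed numbers - the request rate, response size, and the ~$0.17/GB egress price are illustrative placeholders, not anyone's actual bill:

```python
def monthly_egress_cost(requests_per_sec, response_kb, price_per_gb):
    """Rough metered-bandwidth cost of a sustained, valid-looking request flood."""
    seconds = 30 * 24 * 3600  # one 30-day month
    gigabytes = requests_per_sec * response_kb * seconds / (1024 * 1024)
    return gigabytes * price_per_gb

# Hypothetical: 200 req/s each pulling a 500 KB response, at a notional $0.17/GB.
cost = monthly_egress_cost(200, 500, 0.17)
print(f"${cost:,.0f} per month")  # on the order of $42,000
```

Every one of those requests can be a perfectly well-formed application request, which is exactly why black-holing by signature or source gets hard.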
We are currently doing some fiddling on this, and while I don't think it deserves a new acronym, I do think it's got some coolness that needs exploring.
Kaspersky will show how processor bugs can be exploited using certain instruction sequences and a knowledge of how Java compilers work, allowing an attacker to take control of the compiler.
The demonstrated attack will be made against fully patched computers running a range of operating systems, including Windows XP, Vista, Windows Server 2003, Windows Server 2008, Linux and BSD. The demo will be presented at the Hack In The Box Security Conference in Kuala Lumpur in October.
Since forever, I've been told (and told others) that the greatest threat is from the inside. Turns out, not so much. Verizon Business (USA) apparently conducted a four-year study on incidents inside their organisation and found that the vast majority, 73%, originated from outside. However, the majority of breaches occurred as a result of errors in internal behaviour such as misconfigs, missing patches etc. (62% of cases).
So attackers are generally outsiders taking advantage of bad internal behaviours, rather than local users finding 0-day. From the exec summary:
In a finding that may be surprising to some, most data breaches investigated were caused by external sources. Breaches attributed to insiders, though fewer in number, were much larger than those caused by outsiders when they did occur. As a reminder of risks inherent to the extended enterprise, business partners were behind well over a third of breaches, a number that rose five-fold over the time period of the study.
Other interesting snippets that tie directly back into what we cover when we train, and why we think there is value in not only aiming at sploit-writing and 0-day:
Most breaches resulted from a combination of events rather than a single action.
Intrusion attempts targeted the application layer more than the operating system, and less than a quarter of attacks exploited vulnerabilities.
In other words, bite-sized chunks for the win; Core/CANVAS/Metasploit are cute, but that's not how customers get owned most often in the real world.