aka.. Someone put the hurtski on Kaspersky..
The Twitters (via XSSniper and others) and the Interwebs were ablaze with news of a SQL Injection vulnerability that was exploited on AV vendor Kaspersky's site. Details of the attack can be found here.
It's interesting that SQL Injection (though as old as the proverbial hills) is still such a major issue. In fact, I have it on good authority that the bulk of PCI-related compromises are still as a result of SQL Injection...
In our own work, we see this all over the show.
Also interesting is the fact that the DB in use by Kaspersky is MySQL - so much for the "I don't use MSSQL, I have x database with magical pixie dust SQL Injection protection - what me worry?" argument...
Once again, security one-oh-one...if you aren't *effectively* validating user input, you're going to get bitten some time...
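To make the security one-oh-one point concrete, here's a minimal sketch (Python with an in-memory SQLite table; the `users` table and payload are purely illustrative) of the difference between splicing user input into a query string and letting the driver handle it as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # the proverbial OR 1=1 payload

# Vulnerable: attacker-controlled input is spliced straight into the SQL string
query = "SELECT * FROM users WHERE name = '%s'" % user_input
rows = conn.execute(query).fetchall()
print(len(rows))  # 1 -- the OR '1'='1 clause matches every row in the table

# Safe: the parameter is passed as data, never interpreted as SQL
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- nobody is literally named "' OR '1'='1"
```

And yes, parameterized queries work exactly the same way on MySQL, MSSQL, and every other database with pixie dust or without.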
ED* From the shameless self promotion department:
haroon and Marco have just finished their chapters in an upcoming book dedicated to SQL Injection. We will post more details here when it's available. (the book aims to give SQL Injection thorough coverage, from OR 1=1 to some of the insanity demo'd at BlackHat last year..)
The last few weeks have seen some fairly interesting predictions for 2009 in CSO Magazine columns. Two recent articles caught my eye from a penetration testing perspective.
In the first, Brian Chess, CTO of Fortify (they make source code review and software security tools, and he has written a great book on static analysis) predicted that penetration testing as we know it will die in 2009.
The premise of his argument is that penetration testing will die and be reborn in a different form, aiming more at preventing bugs from occurring, rather than identifying them (rolling things into QA / SDLC etc). Granted, it's a fairly valid point *in some respects*, albeit a biased one if you consider what he does for a living.
Ivan Arce (CTO of Core and pretty much as uber as they come) wrote a very well articulated response to this, stating the counter-viewpoint. I liked his response firstly because his points are valid, and secondly, because no bias toward his product is shown in his response.
I won't repeat the articles, but it is interesting to ponder, especially in light of the number of people (customers) that I know that read CSO magazine, and take what is written there as the gospel (for better or worse).
Either way, the argument on the value and validity of penetration testing is still raised often these days (even though governance and compliance with standards is mandating it more and more). I think some of the 12 points that Ivan Arce lists in his response are awesome ammo when addressing this timeless question, so I'll snip them out and include them below.
1. A 35-year old practice with steadily increasing adoption rates does not usually disappear or transform itself substantially within just 12 months.
2. Penetration testing is intrinsically operational in nature. While pre-emptive measures such as security QA and testing and other SDLC practices may be useful to reduce the number of security vulnerabilities in custom or newly developed software, existing operational environments will continue to have bugs during 2009 due to the deployment of legacy or un-audited buggy applications.
3. Penetration testing is operational in nature (did I say that?). It deals with multistage and multilayered threats or attacks (not just vulnerabilities!) in real-world environments (not test labs) and then maps them explicitly to actual security risks. This will remain a valid use case scenario during 2009.
4. Penetration testing is tactical. It provides tangible, actionable information on how to incrementally improve an organization's security posture effectively to prevent real and specific attacks from happening, and to do so efficiently, since it makes it easier to measure at least some form of return on security investment considering both the defense and offense technology currently available.
5. Penetration testing is strategic. If performed regularly, consistently and as part of an organization's overall security strategy, it becomes a useful and valuable practice to implement a program of constant improvement of information security.
6. Penetration testing is strategic. Incorporating an attacker's perspective to an organization's overall security strategy provides necessary checks and balances and improves the organization's ability to steer security policy in accordance to current trends in the threat landscape.
7. Penetration testing is not a silver bullet. It is best used in conjunction with other security practices and in doing so it amplifies those results with both positive and negative feedback (about what does and does not work).
8. Penetration testing is -- at least partially -- driven by compliance. It is a recommended or even a mandatory practice in several regulations, industry standards and organization's internal policies that will not go away in 2009.
9. The IT landscape is constantly evolving and will continue to do so in the next year. As new technologies emerge, new attack vectors become prevalent. Monitoring the evolution over time of sophisticated penetration testing techniques is a good leading indicator of threats that may see mass adoption in the future.
10. There is money to be made selling penetration services and products. The opportunity will not go away in 2009.
11. Financial crisis and economic turmoil means also more and better opportunities for cybercrime. In the context of 2009 testing one's defenses periodically will be more (not less) necessary than if we had a more globally stable scenario.
12. Last but not least, five years ago IDS technology and its respective market was the "in thing" to make predictions about. Although predicted several times, the death of the IDS has been greatly exaggerated in the past years.
Someone in the office was discussing Microsoft's recent horrible foray into the anti-virus market. Apparently an online source held OneCare as faring worse than a simple man with a perl script. A quick scan shows that they have indeed fared pretty poorly in independent tests:
"(BBC News) OneCare was the only failure among 17 anti-virus programs tested by the AV Comparatives organisation."
Now the obvious question was: How could Microsoft possibly get it so wrong? (Cue the drum roll, bring out your tin foil hats)
You have fewer people running around these days screaming "Microsoft just doesn't get security" (compare a BlackHat today with a BlackHat 5 years ago, where Microsoft employees hid their name badges more than any of the guys from 3 letter agencies did). There is little doubt that they have skilled engineers and that they quickly step up when they have to.. So how is it possible that they can do this badly in a new market they were keen to enter?
There are probably good reasons for it, but since I do have a shiny new tin foil hat, I'm going to run with a different ponderation. Microsoft knew they were going to take heat for PatchGuard (preventing 3rd parties from hooking into the kernel). They had to know that the Symantecs and McAfees of the world would be up in arms about the fact that this would give Microsoft an unfair advantage in this space. This would affect not just their AV lines, but the HIPS products that Microsoft would surely want to bundle with the OS in the future..
Of course, Microsoft's biggest defense was to claim that their engineers were not given access to Windows internals where their product competed with 3rd parties. Almost all of us responded with: "riiiiiiiigghtt...."
But then.. Microsoft releases OneCare, and it really _does_ do poorly against its industry peers! Surely if OneCare engineers had preferred treatment they would have done better? Surely this proves that they were honest all along??? From an MSFT point of view, it would be giving up fairly little (a market they have not yet come to rely on) to gain favor in markets where they do indeed have a lot to lose.. all in all.. it would make an excellent sacrificial lamb..
A short while back, a discussion broke out on a mailing list about the nature of being a pen-tester. The discussion quickly gravitated towards the number of "security" companies where the number of projects far outweighs the interestingness of projects, leading rapidly to a cookie-cutter mentality to pen-test engagements..
Of course if you have spent any time in the industry, you already know this to be true.. the obvious danger with this is that you have a lot of unhappy pen-testers giving shoddy output to (eventually) very unhappy customers. Sadly this soon follows the well-published "market for lemons" problem, where, due to information asymmetry, bad products eventually push out good ones.. i.e. because it's hard for customers to tell the difference between good pen-tests and lame pen-tests, the market price eventually drops towards low-grade pen-tests (since the customer is paying for what they expect), and at those low prices, good pen-test teams will close shop and move on to other lines of work..
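The lemons dynamic is easy to see with a toy calculation (all numbers here are made up purely for illustration): if buyers can't distinguish quality, they offer the expected value of the market, which prices the good testers out:

```python
# Toy adverse-selection sketch -- illustrative numbers only.
# Suppose a "good" pen-test costs 100 to deliver properly,
# while a "lame" scanner-run costs 20.
good_cost, lame_cost = 100, 20

# Assume half the vendors in the market are good.
share_good = 0.5

# Buyers can't tell the difference, so a rational buyer offers the
# expected value of a randomly chosen pen-test, not the good-test price.
offer = share_good * good_cost + (1 - share_good) * lame_cost
print(offer)  # 60.0

# Good testers can't cover their costs at that price...
good_testers_stay = offer >= good_cost
print(good_testers_stay)  # False -- they exit, and only lemons remain
```

Once the good testers leave, the expected quality (and hence the offer price) drops further, and the market converges on lemons, which is exactly the spiral the list discussion was worrying about.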
The list discussion was dominated by guys who had been in the pen-testing game for years who contended that "you sign up for a cool dynamic job pushing the envelope, and you end up running scannerX for the next Y years of your life.."
I replied with a quasi essay, mainly because we have had these sorts of discussions at the office for years.. I think the bottom line is that if you're in the right company, just about any line of work will be ok, and if you're in the wrong company just about any project can be made to suck.. Paul Graham once covered this when describing great hackers.. a piece that sometimes went over the edge, but mostly warms the heart..
We handle the problem of cookie-cutter projects by following 2 simple rules:
a) we always try hard to push the envelope,
b) we always double and triple check to make sure we are adding value.
(a) is the easy one (relatively) and we are really fortunate because over the years we have ended up with a culture that reinforces this sort of behavior. Although the office has great teamwork (which you can easily judge by the number of people helping other people on projects till the wee hours of the morning) it also has a healthy amount of competition. Everyone wants to pull off the next great piece of leetness, and everyone works hard for it.. In time it reaches the state where you almost feel dirty for not getting it..

(b) is also relatively simple.. We have had a number of projects over the years where we have literally turned down business, because even though the customer thought he needed us, at that point we didn't think he did.. we felt his money was better spent at that point in time doing something else, and we pointed him towards it.. it's a win for the customer, who doesn't just walk away with a cool report he is unable to effectively act on, and it's a win for the analyst, whose work doesn't end up being in vain.
Now both come with obvious downsides.. (b) wouldn't please short-sighted investors / business folks, and (a) is cool for some but breaks some people to pieces..
My answer for both is the same (but is clearly my opinion.. blah blah std disclaimer.h): Let them go elsewhere..
If your business folks / investors don't realise that eventually customers like this will be happier, and that the resultant goodwill will hold your company in better stead than the short-term gains from just that piece of work, then you guys were probably going to kill each other before long anyway..
If your analysts don't wake up itching to go, and wanting to dent the world, then they're probably not right for your company either.. there are lots of jobs in the world where people can kinda half-ass their way through the day, but that's not the company you want to work for.. so you, and ultimately they (the half-assers), will probably be happier with them finding one of those companies instead..
Doing good work constantly, and constantly pushing the limits, are definitely their own reward, and most people I know who live like that probably don't know how to live any other way, but make no mistake, it _is_ hard.. Being good at something could happen with luck (good genes, good background, etc) but being great at something, and being consistently great at it, _does_ take work.. anyone who says otherwise is probably still just sailing along on the luck component, and that never lasts forever..
(I first blogged some posts to this effect waaaaaaaaaay back.. the original posts reference Richard Hamming's "You and Your Research", which always bears repeating)
OK.. now we bump into another pen-tester's dilemma.. a good friend of mine and I used to have this discussion often a few years ago: if we were breaking into banks / huge companies all the time, why wasn't this happening more often in the wild? A logical conclusion has to be that such attacks are not as likely as we imagined and that our work, while fun, was largely academic. Around 2003 this argument was at its peak, since our attacks were increasingly in-depth, with the sorts of attacks required to achieve our goal needing a level of technical skill far greater than the level of attacks actually being discovered in the wild.
It was a low time for the argument on my side (I always held that what we do actually does add value despite the attacks being arcane..) and of course we started to question the value we were adding.. if the number of people likely to pull off such attacks was vanishingly small (due to the complexity level of the attack), then surely the customers wouldn't really need to pay good money to find this out.. they could just pay for the cheap assessment and mitigate the threats at the level of the attackers they are likely to face..
Of course.. as time went on this started to change.. Attacker sophistication is always on the rise, and attacks that were once publicly decried as a black art are point-and-click weaponised a few years later.. Customers that have been protecting their code / networks / etc from these attacks (because you have been using them against them since 2002/3) are much better off, and much further down the path, because you took them there years ago.. the argument swings again in my favor.. :>
Ultimately.. for me it's pretty simple.. do it cause you love it.. do it as hard as you can.. make sure your company and you are on the same page.. and make sure you're adding value.. the rest just falls into place..
* Ancient post revived for blog continuity
(email in response to a deal (company XXXX) we snagged)
Re: XXXX, for everyone's info it showed a few other interesting things.. (other than us signing up for yet another challenge we hope we can meet)
[a] XXXX originally came to us after reading "Special Ops - How to hack web apps" --> Shouts to all the plakkers involved there..
[b] While still deciding between vendors, he realised he had seen us talk in Vegas this year (erm.. that year - this conv. started nov 04) --> shouts to all those plakkers involved..
[c] He saw a johnny long google talk and asked "what exactly is the relationship between johnny and sensepost? cause he talks about u guys all the time" --> shouts to mr pemmingh!
[d] He then went to shmoocon and saw we had just written "Aggressive Network Self Defense" and loved our paper --> shouts to all (especially chuck who wrote the paper at a Miami airport)
I had mood swings over the deal for 9 months, but we won it on the back of a ton of work done by a bunch of ppl that we never knew had any bearing on the deal..
it's like we said before: "The boxer doesn't win the match in the ring.. he wins on the Sunday mornings he is on the road jogging and his opposition isn't"
SensePlak is leet.. because its leet when u out running at 04h30, to know that there are ppl on ur team out running too..