Mon, 11 Jun 2012

CREST South Africa? Let's talk...

First, some background on CREST in the form of blatant plagiarism...

CREST - the Council for Registered Ethical Security Testers - exists to serve the needs of a global information security marketplace that increasingly requires the services of a regulated and professional security testing capability. It provides globally recognised, up-to-date certifications for organisations and individuals providing penetration testing services.

For organisations, CREST provides a provable validation of security testing methodologies and practices, aiding with client engagement and procurement processes, and proving that your company is committed to providing testing services to the highest standard.

For individuals, CREST provides an industry leading qualification and career path for security penetration testers. By gaining a CREST certification you are proving that you are committed to your professional development in security testing.

CREST has been serving the industry as a pivotal player in the Penetration Testing landscape for many years now, and has also recently established a government-approved chapter in Australia.

There have been numerous discussions about CREST in South Africa over the years and we believe now is the time to take the conversation further. Ian Glover - President of CREST - will be in South Africa next week to deliver a presentation at the ITWeb Security Summit in Johannesburg, and this affords interested parties an excellent opportunity to discuss the concept with him.

With the support of ITWeb we're setting up a workshop to be held at the Sandton Convention Center from 10h00 to 12h00 on Thursday 17 May to meet with Ian, understand the process, and discuss a possible path forward.

Interested parties, whether from testing companies or clients, should please RSVP by commenting on this post (we'll keep it private) or mailing us via info <at> sensepost <dot> com.

Be part of the discussion. We look forward to hearing from you!

Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with Penetration Testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which specific definition of Penetration Test he was using. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming he was using the most narrow, technical form of the term. It would seem to me that this already impacts much of his assertion: there are cases, of course, where a customer wants us simply to 'own' something, or some things, but most often Penetration Testing is performed within the context of some broader assessment within which many of Haroon's concerns may already be addressed. As the talk pointed out, there are instances where the question is asked "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed it's a model that's been used by the military for decades. However, for the most part penetration testing should only be used as a specific phase in an assessment and to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test and therefore the situation for the attacker. He suggests in his talk that the testing teams who do have 0-day are inclined to over-emphasise those that they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards. You can play a great game with a great hand but if your opponent has a joker he's going to smoke you every time. In this respect the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby to illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase for some of our own assessments.

What I struggle to understand, however, is why the talk emphasises this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator card", a "spear phishing card", a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like The Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing that addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time-to-time and makes sense to both technical and business leaders.
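
To make steps 2 to 4 a little more concrete, here is a minimal sketch (in Python, purely illustrative; the fields and example risks are my own assumptions, not a prescribed format) of the kind of risk register this process produces and maintains:

    from dataclasses import dataclass

    @dataclass
    class HypotheticalRisk:
        description: str          # e.g. "0-day against the internet-facing web tier"
        component: str            # where it maps onto the model from step 1
        priority: int             # lower number = more urgent
        status: str = 'untested'  # untested / confirmed / disproved
        treatment: str = ''       # remediate / mitigate / insure / inform

    register = [
        HypotheticalRisk('0-day against internet-facing web tier', 'DMZ', 1),
        HypotheticalRisk('Malicious system administrator', 'internal IT', 2),
        HypotheticalRisk('Compromise of upstream provider', 'third parties', 3),
    ]

    # Steps 3 and 4: work through the register in priority order, testing where
    # appropriate and recording the outcome against each hypothetical risk.
    for risk in sorted(register, key=lambda r: r.priority):
        print(f'[{risk.status}] P{risk.priority} {risk.description} ({risk.component})')

Whether the register lives in code, a spreadsheet or a GRC tool matters far less than the fact that it is kept current and revisited on every iteration.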

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...

Fri, 9 Mar 2012

Foot printing – Finding your target...

We were asked to contribute an article to PenTest magazine, and chose to write up an introductory how-to on footprinting. We've republished it here for those interested.

Network footprinting is, perhaps, the first active step in the reconnaissance phase of an external network security engagement. This phase is often highly automated with little human interaction, as the techniques appear, at first glance, to be easily applied in a general fashion across a broad range of targets. As a security analyst, footprinting is also one of the most enjoyable parts of my job as I attempt to outperform the automatons; it is all about finding that one target that everybody forgot about or did not even know they had, that one old IIS 5 webserver that is not used, but not powered off.

With this article I am going to share some of the steps, tips and tricks that pentesters and hackers alike use when starting on an engagement.

Approach

As with most things in life, having a good approach to a problem will yield better results, and over time, as your approach is refined, you will spend less time while getting better results. By following a methodology, your footprinting will become more repeatable and thus more reliable. A basic footprinting methodology covers reconnaissance, DNS mining, various information services (e.g. whois, Robtex, routes), network registration information and active steps such as SSL host enumeration.

While the temptation exists to merely feed a domain name into a tool or script and take the output as your completed footprint, this will not yield a passable footprint, for two reasons. Firstly, a single tool will not have access to all the disparate information sources that one should consult, and secondly the footprinting process is inherently iterative and continuous. A footprint is almost never complete; instead, a snapshot of the footprint data provides the best current view of the target, but the information could change tomorrow as new sites are brought online, or old sites are taken offline. As a new piece of data is found that could expand the footprint, a new iteration of the footprinting process is triggered with that datum as the seed, and the results are combined with all previously discovered information.
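
In code, that iterative process is essentially a work-queue loop. The sketch below is purely illustrative: expand() is a hypothetical placeholder for whatever techniques (TLD expansion, DNS mining, whois lookups and so on) turn one piece of data into new candidates.

    def footprint(seeds, expand):
        """Iteratively grow a footprint: every new datum is fed back in as a seed."""
        seen, queue = set(), list(seeds)
        while queue:
            datum = queue.pop()
            if datum in seen:
                continue
            seen.add(datum)               # record this piece of the footprint
            queue.extend(expand(datum))   # feed every new discovery back in
        return seen

    # Usage (hypothetical): footprint({'victim.com'}, my_expand_function)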

Know your target

The very first thing to do is to get to know your target organisation: what they do, who they do it for, who does it for them, where they do it from (both online and in the kinetic world), and what community or charity work they are involved in. This will give you an insight into what type of network/infrastructure you can expect. Reading public announcements, financial reports and any other documents published on or by the organisation might also yield interesting results. Any organisation that must publish regular reports (e.g. listed companies) provides a treasure trove of information for understanding the target's core business units, corporate hierarchy and lines of business. All these become very useful when selecting targets.

Dumpster diving, if you are up for it and have physical access to the target, means sifting through trash to get useful information, but in recent times social media can provide us with even more. Sites like LinkedIn, Facebook and Twitter can provide you with lists of employees and projects that the organisation is involved with and perhaps even information about third party products and suppliers that are in use.

One should even keep an eye out for evidence of previous breaches or loss of credentials. It has become commonplace for hackers to post information about security breaches on sites like pastebin.com. The most likely evidence is corporate email addresses and passwords that were reused on unrelated sites which were then hacked and had their user databases uploaded. In addition, developers use sites like Pastebin to share code, ideas and patches, and if you are lucky you might just find a little snippet of code sitting out in the open on Pastebin that will give you the edge.

DNS

“The Domain Name System (DNS) is a hierarchical distributed naming system for computers, services, or any resource connected to the Internet or a private network.” — WikiPedia

In a nutshell, DNS is used to convert computer names to their numeric addresses.

Start by enumerating every possible domain owned by the target. This is where the information from the initial reconnaissance phase comes in handy, as the target's website will likely point to external domains of interest and also help you guess at possible names. With a list of the discovered domains in hand, move on to a TLD (top-level domain) expansion. TLDs are the highest-level domains in DNS; .com, .net, .za and .mobi are all examples of TLDs (the Mozilla Organization maintains a list of TLDs at https://wiki.mozilla.org/TLD_List).

In the next step, we take a discovered domain and check to see if there are any other domains with the same name, but with a different TLD. For example, if the target has the domain victim.com, test whether the domains victim.net, victim.info, victim.org etc. exist, and if they exist, check to see if they are owned by our target organisation. To determine whether a domain exists or not, one should examine the SOA (start of authority) DNS record for the domain. Using commands like nslookup under Microsoft Windows, or the dig/host commands under most of the *nix family, will reveal SOA records.

Using dig, “dig zonetransfer.me soa”.

Figure 1: Using dig to get the SOA (Start of authority) record for a domain
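
The same SOA check is easy to script across a whole list of TLDs. A minimal sketch, assuming the dnspython library (version 2 or later) is installed; the TLD list and target name are illustrative only:

    import dns.exception
    import dns.resolver

    TLDS = ['com', 'net', 'org', 'info', 'mobi', 'co.za']   # trim or extend as needed

    def soa_exists(domain):
        """Return True if the domain has an SOA record, i.e. it exists in DNS."""
        try:
            dns.resolver.resolve(domain, 'SOA')
            return True
        except dns.resolver.NXDOMAIN:
            return False
        except (dns.resolver.NoAnswer, dns.resolver.NoNameservers,
                dns.exception.Timeout):
            return False   # inconclusive; worth re-checking manually

    for tld in TLDS:
        candidate = f'victim.{tld}'
        if soa_exists(candidate):
            print(f'[+] {candidate} exists -- check whois for ownership')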

If, by verifying the SOA, it is confirmed that the domain exists, then the next step is to track down who it belongs to. At this point the whois service is called upon. 'Whois' is simply a registry that contains the ownership information for a domain. Note that it is not entirely reliable and certainly not consistent. The following very simple query, "whois zonetransfer.me", provides us with the ownership details for the domain "zonetransfer.me".

Figure 2: Using whois to get the domain owner detail

After finding domains, running them through a TLD expansion and verifying their whois information, it is time to track down hosts. First we need to get the NS or name server records for the domains. Again using dig, "dig zonetransfer.me ns" returns a list of all the name servers used by this domain. In many cases the name servers will not be part of the target's network and are often out of scope, but they will still be used in the next step.

DNS yields much interesting information, but the default methods for extracting information from foreign servers effectively rely on brute force. However, DNS supports a trick whereby all DNS information for a zone can be downloaded if the server allows it; this is called a "zone transfer". When enabled, zone transfers are extremely useful as they negate the need for guessing or brute-forcing; sadly they are commonly disabled. Still, given their usefulness it is always worth testing for them. Zone transfers should be performed against all the name servers that are specified in the NS records of a domain, as the data contained in each name server should be the same, but the security configuration might be different. Using dig, the following command will attempt to perform a zone transfer: "dig axfr @ns12.zoneedit.com zonetransfer.me".

Figure 3: Performing a zone transfer using dig
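
Scripted, the same attempt against every name server looks roughly like the sketch below (again assuming dnspython; error handling is deliberately coarse):

    import dns.query
    import dns.resolver
    import dns.zone

    domain = 'zonetransfer.me'

    # Try every name server listed in the domain's NS records.
    for ns in dns.resolver.resolve(domain, 'NS'):
        ns_name = str(ns.target).rstrip('.')
        ns_ip = str(dns.resolver.resolve(ns_name, 'A')[0])   # xfr wants an address
        try:
            zone = dns.zone.from_xfr(dns.query.xfr(ns_ip, domain, timeout=10))
            for name in zone.nodes.keys():
                print(zone[name].to_text(name))
            break   # one successful transfer is enough
        except Exception as exc:
            print(f'[-] AXFR refused or failed on {ns_name}: {exc}')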

As mentioned previously, zone transfers are not that common. When we cannot download the zone file, there are a couple of other tricks that might work. One is to brute force or guess host names: by using a long list of common hostnames one can test for names such as "fw.victim.com", "intranet.victim.com", "mail.victim.com" and so on. The names can be commonly seen hostnames, generated names (where computers are assigned numeric or algorithmic names), or sets of related names such as characters from a book series. When brute forcing DNS, be sure to check the following DNS records: CNAME, A and AAAA. Again this is easy using a tool like dig: "dig www.google.com a" produces the DNS configuration for www.google.com. Note that the hostname www.google.com actually has multiple DNS entries, one CNAME record and multiple A records. Looking at the IP addresses it is clear that there are several different hosts (2 in the screenshot below).

Figure 4: Using dig to get the A record for a host entry

Doing this manually seems easy and quick (and it is), but if we want to brute force or guess many host names, then this will take too long. Of course, it is easy enough to script these commands to automate the process; however, there are existing tools written specifically for this purpose. One of the most popular tools, Fierce, is a Perl script written by RSnake (http://ha.ckers.org/fierce/), which is easy to use and has many useful functions. Additionally, there are tools like Paterva's Maltego and SensePost's Yeti (a tool I wrote) which provide graphical interfaces for this purpose.
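
As a rough illustration of what such a script does under the hood (assuming dnspython; the word list here is a tiny sample, real lists run to thousands of entries):

    import dns.resolver

    WORDLIST = ['www', 'mail', 'intranet', 'fw', 'vpn', 'dev', 'test']

    def brute_force(domain):
        found = {}
        for word in WORDLIST:
            host = f'{word}.{domain}'
            for rtype in ('CNAME', 'A', 'AAAA'):
                try:
                    answers = dns.resolver.resolve(host, rtype)
                except Exception:
                    continue   # NXDOMAIN, no answer of this type, timeout, ...
                found.setdefault(host, []).extend(
                    f'{rtype} {rdata}' for rdata in answers)
        return found

    for host, records in brute_force('victim.com').items():
        print(host, records)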

If we happen to have a list of IP addresses or IP netblocks of the target, then a further DNS trick is to convert the addresses into hostnames using reverse lookups to get the PTR record entry. This is useful since reverse records are easily brute forced in IPv4. Bear in mind that DNS does not require a PTR record (reverse entry) to exist, nor that entries in the reverse zone match entries in the forward zone. But the result can give you an idea of whether the host is a shared host, owned and hosted by the company, or just remotely hosted.

To test once more, try using dig, “dig 104.66.194.173.in-addr.arpa ptr”. While this too can be easily automated, the previously mentioned tools will also handle PTR records.
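
For completeness, a minimal reverse-lookup sketch over a list of addresses, again assuming dnspython (the addresses are just the examples used in this article):

    import dns.resolver
    import dns.reversename

    def reverse_lookup(ip):
        rev_name = dns.reversename.from_address(ip)   # e.g. 104.66.194.173.in-addr.arpa.
        try:
            return [str(r) for r in dns.resolver.resolve(rev_name, 'PTR')]
        except Exception:
            return []   # no PTR record, or the query failed

    for ip in ('173.194.66.104', '69.63.190.10'):
        print(ip, reverse_lookup(ip) or 'no reverse entry')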

Search engines

DNS interrogation and mining forms the bulk of footprinting, but thanks to modern search engines like Google and Bing, finding targets has become much easier.

Apart from the normal searching for your target, as you would do in your initial phase, you can also use the data discovered during the DNS mining to get further information out of the search engines. Bing from Microsoft provides us with two really useful search operators: "ip:" and "site:". When using the "ip:" operator, Bing will return a list of hosts that it has indexed that resolve to the IP address that you have specified. Alternatively, the "site:" operator, when used with a domain name, will return a list of host names that have been indexed by the search engine and belong to the domain specified. Quick and easy, and Bing also provides a very simple free API that you can use to automate these searches.
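
As a trivial illustration, the sketch below just builds the corresponding search query URLs for the two operators; fetching and parsing the results (via the API mentioned above, for example) is left out:

    from urllib.parse import quote_plus

    def bing_queries(ip=None, domain=None):
        queries = []
        if ip:
            queries.append(f'ip:{ip}')
        if domain:
            queries.append(f'site:{domain}')
        return [f'https://www.bing.com/search?q={quote_plus(q)}' for q in queries]

    for url in bing_queries(ip='69.63.190.10', domain='victim.com'):
        print(url)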

Address mapping

All this fuss with DNS is important, but it is only useful insofar as it leads us to addresses. The next step is discovering where the target exists within the IP address space. Luckily, useful tools and resources exist to help us uncover these ranges by automating a combination of manual techniques such as whois querying, traceroute and netblock calculators. In the previous section the whois tool was used to get the domain owner information. The same tool can be used to discover the ownership/assignment details of a specific IP address. Let's take www.facebook.com; one of the IP addresses that it resolves to is 69.63.190.10. "whois 69.63.190.10" produces the following output.

Figure 5: Getting the netblock and owner using whois

From the whois output we get really useful information. First is the netblock range 69.63.176.0-69.63.190.255, as well as the owner of this netblock, namely Facebook, Inc. In this case we are lucky and the netblock is registered to Facebook, but often you will only get the network service provider to which the netblock is allocated. In that case, you will have to query the service provider in order to gain more information about the specific netblock. Online resources can also be very useful; for example ARIN (American Registry for Internet Numbers) and the other regional registries (RIPE, AfriNIC, APNIC and LACNIC) provide a reverse whois search interface where one can search for organisation names and other terms, even performing wild card searches. Giving Facebook a second look, we try a search on the reverse whois interface found at http://whois.arin.net/ with the term "facebook", and get a list of five additional network ranges.

Figure 6: Search results for ARIN reverse whois
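
The lookup in Figure 5 is simple to script as well. A minimal sketch that shells out to the system whois client and pulls out the interesting fields; the field names vary between registries (ARIN, RIPE, AfriNIC and so on), so the list below is only a starting point:

    import subprocess

    FIELDS = ('NetRange', 'CIDR', 'OrgName', 'inetnum', 'netname', 'org-name', 'descr')

    def whois_netblock(ip):
        out = subprocess.run(['whois', ip], capture_output=True, text=True).stdout
        return [line.strip() for line in out.splitlines()
                if line.strip().startswith(FIELDS)]

    for line in whois_netblock('69.63.190.10'):
        print(line)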

SSL Certificates

Lastly, we turn to SSL. SSL may be more familiar as a "protection" against nasty eavesdroppers and men-in-the-middle, but it is also useful for footprinters. How? It is really simple, actually: one of the security checks performed by browsers when deciding on the validity of an SSL certificate is whether the Common Name contained in the certificate matches the DNS name of the host requested by the browser. How does this help? Say a list of IP addresses has been produced; the next step would be to perform a reverse lookup of all these addresses. However, if no reverse entry is present and Bing has no record of the IP, then some creativity is called for. If an HTTPS website is hosted on that address, simply browse to the IP address and, when presented with the invalid certificate error message, look for the "real" host name.

Figure 7: Firefox reporting the common name contained in an SSL certificate for a host

Again, this is something that is easily automated, so we have included a module in Yeti to actually do this for you.
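
For those who want to roll their own, a minimal sketch that grabs the certificate presented on an address and prints the Common Name plus any subjectAltName entries, assuming a recent version of the Python cryptography package is available (the address is just the example used earlier):

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, NameOID

    def cert_names(ip, port=443):
        pem = ssl.get_server_certificate((ip, port))      # works even for invalid certs
        cert = x509.load_pem_x509_certificate(pem.encode())
        names = [attr.value for attr in
                 cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
        try:
            san = cert.extensions.get_extension_for_oid(
                ExtensionOID.SUBJECT_ALTERNATIVE_NAME)
            names.extend(san.value.get_values_for_type(x509.DNSName))
        except x509.ExtensionNotFound:
            pass
        return names

    print(cert_names('69.63.190.10'))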

Conclusion

Footprinting might at first glance appear to be simple and mundane, but the more you do it, the more you will realise that very few organisations have a handle on exactly what they have and what they present to the Internet. As the Internet and networks evolve, so will the way companies and organisations use them, and so will their footprint. A year-old footprint could be hopelessly outdated, and ongoing footprinting helps organisations maintain a current view of their threat landscape.

With the ongoing move away from local infrastructure to hosted infrastructure, the footprint expands, spreads and grows, and so will our quest to find as much as possible.

Thu, 3 Nov 2011

Squinting at Security Drivers and Perspective-based Biases

While doing some thinking on threat modelling, I started examining what the usual drivers of security spend and controls are in an organisation. I've spent some time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could) and security consulting (tried to help people fix stuff), and I even dabbled with trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes and findings, or how I tried to. This is a write-up of what I believe these drivers to be (caveat: this is my opinion). This is certainly not universalisable, i.e. it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).

Auditors

The tick-box monkeys themselves; they provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience being on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:

  • Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
  • Audit house priorities. Audit houses get driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses are increasingly finding revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes, they're even incentivised to).
  • Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec, but next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
  • The Rotation plan. This year system X, next year system Y. It doesn't mean system X has gotten better, just that they moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
  • Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spreadsheet.
Vendors

Security vendors are the people the infosec world loves to hate. Thinking of them invokes pictures of greasy salesmen phoning your CIO to ask if your security chumps have even thought about network admission control (true story). On the other hand, if you've ever been a small team trying to secure a large org, you'll know you can't do it without automation, and at some point you'll need to purchase some products. Their marketing and sales people get all over the place and end up driving controls, whether it's "management by in-flight magazine", an idea punted at a sponsored conference, or the result of a sales meeting.

But security vendors' prioritisation of controls is driven by:

  • New Problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that they can use to push product. Think of the Gartner hype curve. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS, if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
  • Overinflated problems. Some problems really aren't as big as they're made out to be by vendors, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing it just because they spend all day thinking of ways to justify (even legitimate) purchases.
  • Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions: there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are, or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to.
Pentesters

Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered a significant amount of interest from people who usually wouldn't, and probably shouldn't, have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques are shared and the world is a better place. The cynical view, then, is that some of the motivations for vulnerability researchers, and what they end up prioritising, are:

  • New Attacks. This is somewhat similar to the vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon '07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because Layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest moves to SSL yet. But vuln researchers and the scene aren't interested; it needs to be shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
  • Complex Attacks. Related to the above, a new attack can't be really basic if it's to do well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. An XSS attack, on the other hand, was initially ignored by many. However, one led to a wide class of prevalent vulns, while the other requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure, but that aren't sexy, don't get looked at.
  • Shiny Attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one about a more prevalent or relevant class of vulnerabilities their organisation may experience. Something Thinkst says much better here.
Individual Experience

Unfortunately, as human beings, our decisions are coloured by a bunch of things which cause us to make choices either influenced or defined by factors other than the reality we are faced with. A couple of those lead us to prioritise security differently if decision making rests solely with one person:

  • Past Experience. Human beings develop through learning and consequences. When you were a child and put your hand on a stove hot plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident or, worse, an internal political incident. There's nothing wrong with this, and it's why we value experience: people who've been burned enough times not to let mistakes happen again. However, it does mean time may be spent preventing a past wrong, rather than focusing on the most likely current wrong. For example, one company I worked with insisted on an overly burdensome set of controls being placed between servers belonging to their security team and the rest of the company network. The reason for this was a previous incident years earlier, where one of these servers had been the source of a Slammer outbreak. While that network was never again the source of a virus outbreak, their network still got hit by future outbreaks from normal users, via the VPN, from business partners, etc. In this instance, past experience was favoured over a comprehensive approach to the actual problem, not just the symptom.
  • New Systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is for security to be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant "security only" changes to existing systems, and so the systems that present the vulnerabilities pentesters know and love get fixed less frequently.
  • Individual Motives. We're complex beings with all sorts of drivers and motivations: maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly, however, security tends to operate in a fairly segmented manner; while some aspects are "common wisdom", others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how they're securing integration to their dealership networks. They rely on consultants, who've seen both sides, for that. Even then, one consultant may think that monitoring is the most important control at the moment, while another could think mobile security is it.
So What?

The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure stuff in an order that mirrors the actual risk to your environment. While it's still a black art, I believe that Threat Modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a "most critical" list), and that includes the perspectives discussed above, tries to provide an objective version of reality that isn't as vulnerable to any single one of these biases.

Mon, 8 Aug 2011

BlackHat 2011 Presentation

This past Thursday we spoke at BlackHat USA on Python Pickle. In the presentation, we covered approaches for implementing missing functionality in Pickle, automating the conversion of Python calls into Pickle opcodes, scenarios in which attacks are possible, and guidelines for writing shellcode. Two tools were released:

  1. Converttopickle.py — automates conversion from Python-like statements into shellcode.
  2. Anapickle — helps with the creation of malicious pickles. Contains the shellcode library.
Lastly, we demonstrated bugs in a library, a piece of security software, typical web apps, peer-to-peer software and a privesc bug on RHEL6.
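
For readers unfamiliar with why pickle is such fertile ground, the core issue is that the byte stream can instruct the unpickler to call an arbitrary callable. This is not one of the released tools, just the standard minimal illustration:

    import os
    import pickle

    class Malicious:
        def __reduce__(self):
            # On unpickling, this tells pickle to call os.system('id').
            return (os.system, ('id',))

    payload = pickle.dumps(Malicious())
    pickle.loads(payload)   # executes 'id' -- never unpickle untrusted input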

Slides are available below, the whitepaper is here and tools here.