
Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with Penetration Testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and of my career for almost 13 years, has been security testing, and so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Testing was intended. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming Haroon was using the most narrow, technical form of the term. It would seem to me that this already impacts much of the assertion: there are cases, of course, where a customer simply wants us to 'own' something, or some things, but most often penetration testing is performed within the context of some broader assessment, within which many of Haroon's concerns may already be addressed. As the talk pointed out, there are instances where the question asked is "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed, it's a model that's been used by the military for decades. However, for the most part penetration testing should only be used as a specific phase in an assessment and to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the 0-day in security and testing. Haroon rightly points out that even a single 0-day in the hands of an attacker can completely change the result of an attack, and therefore the value of the test. He suggests that testing teams who do have 0-day are inclined to over-emphasise the ones they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this, the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase of some of our own assessments.

What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator card", a "spear phishing card", a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together (see the toy sketch after this list);
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
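Steps 2 to 4 lend themselves to simple tooling. As a toy sketch (the risks, scores and file name below are purely illustrative, not from a real engagement), a hypothetical risk register can be kept as a CSV and ranked by likelihood times impact:

    # risks.csv: id,description,likelihood(1-5),impact(1-5)
    cat > risks.csv <<'EOF'
    R1,0-day in internet-facing service,2,5
    R2,malicious system administrator,3,5
    R3,spear phishing of finance staff,4,4
    EOF
    # rank the hypothetical risks by likelihood * impact, highest first
    awk -F, '{print $3*$4, $0}' risks.csv | sort -rn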
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing that addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time to time and makes sense to both technical and business leaders.

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...

Wed, 4 Apr 2012

Towards Firmware Analysis

While evaluating a research idea about a SCADA network router during the past week, I used tools and resources available on the Internet to unpack the device firmware and search for interesting components. During security assessments you may find interesting embedded devices on the network, yet few people look at the feasibility of doing firmware analysis. I decided to document the steps I took to analyse my target firmware, so that you can take a similar approach when assessing such devices. This could also be a good indication of the feasibility of automating the process (an unfinished project was launched in 2007: http://www.uberwall.org/bin/project/display/85/UWfirmforce).

The following process should be straightforward for most of you who use *nix systems on a daily basis:

Step 1) Scanning the firmware image

The BinWalk tool is useful for scanning firmware image files to identify embedded file systems and compressed streams. It can detect common bootloaders, file systems and compressed archives inside a given firmware image file. Since it works by scanning for signatures and magic values, it can produce false positives and the results need to be verified manually.
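As a minimal sketch (firmware.bin is a placeholder name for the dumped image file), the scan is a one-liner:

    # scan the image for known bootloader, file system and archive signatures
    binwalk firmware.bin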

A U-Boot bootloader (yes, it's German :-)) signature was identified at offset 262144, and the uImage header information, such as creation date, CPU type, etc., appears to be valid. The bootloader was followed by a gzip-compressed stream, which is probably the zImage kernel, and a squashfs file system at offset 1522004. We will attempt to extract this file system in the next step. The following are common bootloaders used in embedded devices with ARM CPUs:

  • Blob bootloader
  • Bootldr
  • RedBoot
  • U-Boot
  • ABLE bootloader

The bootloader's task is to load the kernel image at the correct address and pass initial parameters to it. So in most cases we are not interested in analysing the bootloader itself, but in the root file system.

Step 2) Extracting file systems

First, I extracted the uImage content at offset 262144 using the dd command, and then used uboot-mkimage (packages.debian.org/uboot-mkimage) to test whether it was a valid uImage file and to discover more information about it:
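The commands were along these lines (a sketch: firmware.bin is a placeholder filename, and 262144 is the offset binwalk reported, which conveniently equals 512 x 512 so dd can use a sane block size):

    # carve out everything from offset 262144 (512 * 512) onwards
    dd if=firmware.bin of=uImage.bin bs=512 skip=512
    # print the uImage header details to verify the image is valid
    mkimage -l uImage.bin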

The image format was valid and it contained two other images, roughly 1MB and 2MB in size, which are probably the kernel zImage and the root file system (RAMdisk). If you check the uImage file format, you will notice a 64-byte header. There is a “multi-file” image list that contains each image's size in bytes, and this list is terminated by a 32-bit zero. So I would need to skip 64 + 2*4 + 4 = 76 bytes from the start of the uImage file to get to the first image's content, which would be the kernel zImage:
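In dd terms, that works out to something like the following (a sketch; the count value stands in for the ~1MB kernel size reported in the header, so treat it as a placeholder):

    # skip the 64-byte header, two 4-byte image sizes and the 4-byte
    # terminator (64 + 2*4 + 4 = 76 bytes) to reach the kernel zImage
    dd if=uImage.bin of=kernel.img bs=1 skip=76 count=1048576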

The file command could not detect a kernel image or squashfs in the extracted images; this might be due to the lack of squashfs (with LZMA compression) support in my Ubuntu kernel. I proceeded by using the Firmware Mod Kit, which contains a set of programs to decompress various file system images, including squashfs-LZMA. After trying the various unsquashfs version 3.x scripts, I was able to extract the rootfs image files successfully:
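For reference, the extraction looked roughly like this (a sketch: the Firmware Mod Kit path and the rootfs filename are placeholders, and the particular squashfs 3.x build that works varies by target):

    # unsquashfs-lzma is one of the squashfs 3.x builds bundled with the kit
    ./firmware-mod-kit/src/squashfs-3.0/unsquashfs-lzma rootfs.img
    # extracted files land in ./squashfs-root by default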

Step 3) Searching the root file system

Once the root file system files were extracted, we can use file and string-search tools to look for interesting files and patterns, such as RSA private key files, password and configuration files, SQL database files, SQL query strings, etc. In my case I was looking for RSA certificates or private key files and found the following (a database of private keys found in embedded devices was published in 2011; it's not actively maintained, but you can access it at http://code.google.com/p/littleblackbox/):
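A couple of representative searches (a sketch; squashfs-root is the directory produced by unsquashfs):

    # look for likely key and certificate files by name
    find squashfs-root \( -name '*.pem' -o -name '*.key' -o -name '*.crt' \)
    # and for PEM-encoded private keys by content
    grep -r "BEGIN RSA PRIVATE KEY" squashfs-root/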

One can write shell scripts to automate the file system search process.
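For example, a trivial loop over a list of patterns goes a long way (a sketch; the patterns are illustrative only):

    #!/bin/sh
    # list files in the extracted root fs matching common 'interesting' patterns
    for pattern in 'BEGIN RSA PRIVATE KEY' 'password' 'SELECT .* FROM'; do
        echo "=== $pattern ==="
        grep -rilE "$pattern" squashfs-root/
    done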

Step 4) Running and debugging the executables

The Qemu emulator supports multiple CPU architectures, including ARM, MIPS, PowerPC, etc., and can be used to run and debug interesting executables extracted from the firmware image on your own system for dynamic analysis purposes. You would need to build Qemu with the --static and --enable-debug options. The following demonstrates how to run the web server (httpd) that was extracted from my target firmware, using chroot and Qemu:
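Roughly, this amounts to the following (a sketch: the paths are placeholders, and qemu-arm must be the statically linked build so that it works inside the chroot):

    # copy the static user-mode emulator into the extracted root file system
    sudo cp $(which qemu-arm) squashfs-root/usr/bin/
    # chroot into the extracted file system and run the ARM httpd binary
    sudo chroot squashfs-root /usr/bin/qemu-arm /usr/sbin/httpd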

Run this way, the web server worked fine, but it was not able to display the bootloader version, because it couldn't read this value from the NVRAM (non-volatile RAM) normally mounted by the kernel on a real device (there is an interesting post here about resolving NVRAM access errors while emulating embedded device executables). Some executables, such as the target's remote management agent, can have more severe problems running under the emulator.

For troubleshooting such cases, or for monitoring an emulated process while fuzzing it, we need to attach a debugger to the process. This can be achieved by using the -g switch in Qemu and running a debugger outside the emulator process, even on a remote Windows machine. I used IDA Pro's remote GDB debugging support:
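In practice, that looks something like this (a sketch; 1234 is an arbitrary port and the httpd path is a placeholder):

    # start the emulated process with Qemu's built-in GDB stub on port 1234
    sudo chroot squashfs-root /usr/bin/qemu-arm -g 1234 /usr/sbin/httpd
    # then attach from IDA Pro (Debugger > Attach > Remote GDB debugger),
    # or from gdb with: target remote <emulator-host>:1234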

Once successfully attached to the remote emulated process, IDA Pro can be used to trace the execution of the process, place breakpoints or run IDA scripts.

Often overlooked during assessments, firmware analysis of embedded devices can yield results, and often does when we target them at SensePost. Our methodology includes the above steps and we recommend yours does too.

Wed, 7 Mar 2012

Mobile Security - Observations from the developing world

By the year 2015 sub-Saharan Africa will have more people with mobile network access than with access to electricity at home.
This remarkable fact from a 2011 MobileMonday report [1] came to mind again as I read an article just yesterday about the introduction of Mobile Money in the UK: By the start of next year, every bank customer in the country may have the ability to transfer cash between bank accounts, using an app on their mobile phone. [2]

I originally came across the MobileMonday report while researching the question of mobility and security in Africa for a conference I was asked to present at [3]. In this presentation I examine the global growth and impact of the so-called mobile revolution and then its relevance to Africa, before looking at some of the potential security implications this revolution will have.

The bit about the mobile revolution is easy: According to the Economist there will be 10 billion mobile devices connected to the Internet by 2020, and the number of mobile devices will surpass the number of PCs and laptops by this year already. The mobile-only Internet population will grow 56-fold from 14 million at the end of 2010 to 788 million by the end of 2015. Consumerization - the trend for new information technology to emerge first in the consumer market and then spread into business organizations, resulting in the convergence of the IT and consumer electronics industries - implies that the end-user is defining the roadmap for these technologies as manufacturers, networks and businesses scramble desperately to absorb their impact.

Africa, languishing behind in so many other respects, is right there on the rushing face of this new wave, as my initial quote illustrates. In fact the kind of mobile payment technology referred to in the BBC article is already quite prevalent in our home markets in Africa, and we're frequently engaged to test mobile application security in various forms. In my presentation, for example, I make reference to m-Pesa - the mobile payments system launched in Kenya and now mimicked in South Africa also. Six million people in Kenya use m-Pesa, and more than 5% of that country's annual GDP is moved to and fro directly from mobile to mobile. There are nearly five times as many m-Pesa outlets as there are PostBank branches, post offices, bank branches and automated teller machines (ATMs) in the country combined.

Closer to home in South Africa, it is estimated that the number of people with mobile phones outstrips the number of people with fixed-line Internet connections by a factor of ten! And this impacts our clients and their businesses directly: Approximately 44% of urban cellphone users in South Africa now make use of mobile banking services. The reasoning is clear: Where fixed infrastructure is poor mobile will dominate, and where the mobile dominates mobile services will soon follow. Mobile banking, mobile wallets, mobile TV and mobile social networking and mobile strong-authentication systems are all already prevalent here in South Africa and are already bringing with them the expected new array of security challenges. Understanding this is one of the reasons our customers come to us.

In my presentation I describe the Mobile Threat Model as having three key facets:

  • Security: The challenge of ensuring Confidentiality, Integrity and Authenticity for the data and transactions on the device;
  • Privacy: The implications of mobility (and especially convergence) for citizens and their rights to talk, move, think and act unobserved; and
  • Control: The challenge presented by the mobile revolution to governments fighting crime, gangsterism and terrorism.
All of these issues are real and complex, but I'm restricting myself to the security question here. I encourage readers to peruse the presentation itself for a full breakdown of the Threat Model because for this article I think it suffices to consider just the conclusion of my presentation, and it's this:

The technical security issues we discover on mobile devices and mobile applications today are really no different from what we've been finding in other environments for years. There are some interesting new variations and interesting new attack vectors, but it's really just a new flavor of the same thing. But there are four attributes of the modern mobile landscape that combine to present us with an entirely new challenge:

Firstly, mobiles are highly connected. The mobile phone is permanently on some IP network and by extension permanently on the Internet. However, it's also connected via GSM and CDMA; it's connected with your PC via USB, your Bluetooth headset and your GPS, and soon it will be connected with other devices in your vicinity via NFC. Never before in our history have communications been so converged, and all via the wallet-sized device in your pocket right now!

Secondly, the mobile device is deeply integrated. On or through this platform is everything anyone would ever want to know about you: your location, your phone calls, your messages, your personal data, your photos, your location history and your entire social network. Indeed, in an increasing number of technical paradigms, your mobile device is you! Moreover, the device has the ability to collect, store and transmit everything you say, see and hear, and everywhere you go!

Thirdly, as I've pointed out, mobile devices are incredibly widely distributed. Basically, everyone has one or soon will. And we're rapidly steering towards a homogeneous environment defined by iOS and Google's Android. Imagine the effect this has on the value of an exploit or attack vector. Finally, the mobile landscape is still being very, very poorly managed. Except for the Apple AppStore, and recent advances by Google to manage the Android market, there is extremely little by way of standardization, automated patching or central management to be seen. Most devices, once deployed, will stay in commission for years to come, and so security mistakes being made now are likely to become a nightmare for us in the future.

Thus, the technical issues well known from years of security testing in traditional environments are destined to prevail in mobile, and we're already seeing this in the environments we've tested. This reality, combined with how connected, integrated, distributed and poorly managed these platforms are, suggests that careless decisions today could cost us very dearly in the future...

[1] Mobile Africa Report 2011, Regional Hubs of Excellence and Innovation by Dr Madanmohan Rao, Research Project Director, MobileMonday March 2011

[2] http://www.bbc.co.uk/news/business-17115946

[3] http://prezi.com/as-szhrug5zr/examining-the-impact-of-the-adoption-of-mobile-devices-throughout-africa-and-the-subsequent-rise-of-security-related-risks-sensepost-information-security/

Thu, 3 Nov 2011

Squinting at Security Drivers and Perspective-based Biases

While doing some thinking on threat modelling, I started examining what the usual drivers of security spend and controls are in an organisation. I've spent some time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could) and security consulting (tried to help people fix stuff), and I've even dabbled with trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes and findings, or how I tried to. This is a write-up of what I believe these drivers to be (caveat: this is my opinion). It is certainly not universal, i.e. it's possible to find unbiased, highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).

Auditors

The tick-box monkeys themselves. They provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:

  • Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
  • Audit house priorities. Audit houses get driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses are increasingly finding revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes, they're even incentivised to).
  • Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec, but next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
  • The Rotation plan. This year system X, next year system Y. It doesn't mean system X has gotten better, just that they moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
  • Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spreadsheet.
Vendors

Security vendors are the people the infosec world loves to hate. Thinking of them invokes pictures of greasy salesmen phoning your CIO to ask if your security chumps have even thought about network admission control (true story). On the other hand, if you've ever been part of a small team trying to secure a large org, you'll know you can't do it without automation, and at some point you'll need to purchase some products. Their marketing and sales people get all over the place and end up driving controls; whether it's “management by in-flight magazine”, an idea punted at a sponsored conference, or the result of a sales meeting.

But security vendors' prioritisation of controls is driven by:

  • New Problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that they can use to push product. Think of the Gartner hype curve. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS: if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
  • Overinflated problems. Some problems really aren't as big as they're made out to be by vendors, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing it just because they spend all day thinking of ways to justify (even legitimate) purchases.
  • Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions: there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are, or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to.
Pentesters

Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered a significant amount of interest from people who usually wouldn't, and probably shouldn't, have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques are shared and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:

  • New Attacks. This is somewhat similar to the vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon ‘07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because Layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest move to SSL yet. But vuln researchers and the scene aren't interested; it needs to be shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
  • Complex Attacks. Related to the above, a new attack can't be really basic to do well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. An XSS attack, on the other hand, was initially ignored by many. However, one led to a wide class of prevalent vulns, while the other requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure but that aren't sexy, don't get looked at.
  • Shiny Attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one about a more prevalent or relevant class of vulnerabilities their organisation might actually experience. Something Thinkst says much better here.
Individual Experience

Unfortunately, as human beings, our decisions are coloured by a bunch of things, which cause us to make decisions either influenced or defined by factors other than the reality we are faced with. A couple of those lead us to prioritising different security motives if decision making rests solely with one person:

  • Past Experience. Human beings develop through learning and consequences. When you were a child and put your hand on a stove hot plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident or, worse, an internal political incident. There's nothing wrong with this, and it's why we value experience: people who've been burned enough times not to let mistakes happen again. However, it does mean time may be spent preventing a past wrong, rather than focusing on the most likely current wrong. For example, one company I worked with insisted on an overly burdensome set of controls being placed between servers belonging to their security team and the rest of the company network. The reason for this was a previous incident years earlier, where one of these servers had been the source of a Slammer outbreak. While that network was never again the source of a virus outbreak, their network still got hit by future outbreaks from normal users, via the VPN, from business partners, etc. In this instance, past experience was favoured over a comprehensive approach to the actual problem, not just the symptom.
  • New Systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is for security to be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant “security only” changes to existing systems, and so the older systems that present the vulnerabilities pentesters know and love get fixed less frequently.
  • Individual Motives. We're complex beings with all sorts of drivers and motivations: maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly, however, security tends to operate in a fairly segmented manner: while some aspects are “common wisdom”, others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how they're securing integration to their dealership networks. They rely on consultants, who've seen both sides, for that. Even then, one consultant may think that monitoring is the most important control at the moment, while another could think mobile security is it.
So What?

The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe that Threat Modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a “most critical” list) and incorporates the valid perspectives described above can provide an objective version of reality that isn't as vulnerable to the individual biases described in this post.

Wed, 19 Oct 2011

Press Release - Jane Frankland joins SensePost

The SensePost marketing division, a highly skilled team of ruthless spin-doctors, is proud to announce that they have written ... a press release. Indeed, this team of fawners, flunkeys, lackeys and puffers has been slaving since early 2009 to come up with the pristine example of literary art you will read below. If you're intimidated by what I've just said, harbour a fanatical dislike for marketing folks or simply don't read so good, then here's the short version:

As of 01 October we have been joined by Jane Frankland, an industry stalwart, previously with Corsaire and NGS. Jane will be responsible for growing the SensePost business in the UK and Europe and we think she's very clever. We're extremely pleased to have her on board and sincerely look forward to working with her. Welcome onboard Jane!

So, here's the famous press-release...

Former Founder of Corsaire and Associate Director of Operations at NGS Secure Moves to Expand SensePost's UK and European Presence

Pretoria, South Africa -- SensePost, a leader in penetration testing and information security services, announced today that Jane Frankland has joined the company as Head of Business Development for Europe. Frankland will focus first on expanding the brand's UK national reach while providing strategic support and direction for the company's European clients.

Jane was most recently an Associate Director at NGS Secure, an NCC Group company. She was responsible for their UK (SE), Australian and US operations and also played a part in developing their marketing strategy, including re-branding. Prior to NGS Secure, Frankland founded Corsaire, another leading brand in information security consultancy and assessment services. During her 13 years as its Commercial Director, she managed accounts such as Marks & Spencer, Royal Sun Alliance, William Hill and RWE.

When asked “Why SensePost?”, Frankland stresses the caliber of the consultants she is working with, alongside the value-culture that the Directors have created. “In joining SensePost, I get to be part of an incredibly forward thinking and technically able group, plus I have an active hand in establishing SensePost as a dominant brand in penetration testing services in the UK. It's lovely to be working in collaboration again!”

Charl van der Walt, co-Founder and Managing Director of SensePost, stated that growth into the UK market was a key strategic priority for the company. “When we met Jane, we found the right mix of strategic insight and business management experience. She brings a wealth of experience, fits into the team and can help expand our business. We're excited to welcome her into the SensePost team.”

You can hear more from Jane herself here.