When performing spear phishing attacks, the more information you have at your disposal, the better. One useful tactic relies on a Skype security flaw disclosed in the early days of 2012 (though discovered by one of the Skype engineers much earlier).
For those who haven't heard of it: this vulnerability allows an attacker to passively disclose a victim's external and internal IP addresses in a matter of seconds, by viewing the victim's VCard through an 'Add Contact' form.
Why is this useful?
1. Verifying the identity and the location of the target contact. Great when performing geo-targeted phishing attacks.
2. Checking that your Skype account has not been used elsewhere :)
3. Spear phishing enumeration while Pen Testing.
4. Just out of plain curiosity.
To get this working, follow these basic steps:
1. Download and install the patched version of Skype 5.5 from here (the patch enables the Skype client to save its logs in non-obfuscated form).
2. Save the lines below as a Skype_log_patch.reg file:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\Skype\Phone\UI\General]
    "LastLanguage"="en"
    "Logging"="SkypeDebug2003"
    "Logging2"="on"

Once saved, run it to enable the Skype debug log file.
3. Start Skype.
4. Search for any Skype contact and click the 'Add a Skype Contact' button, but do not send the request; instead, click on the user to view their VCard.
5. Open the log file (it should appear in the same folder as the Skype executable, e.g. debug-20121003-0150).
6. Look for the PresenceManager line - it contains the contact's internal and external IP addresses.
The log will include similar details for everyone listed as a "contact" under your Skype account, as well as plenty of other fresh, genuine, and useful information received directly from your local Skype tracker.
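If you'd rather not scroll through the raw debug output by hand, a short filter can pull out just the relevant entries. A minimal Java sketch (the log filename is only an example; pass your own as the argument):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class SkypeLogGrep {
        public static void main(String[] args) throws IOException {
            // Usage: java SkypeLogGrep debug-20121003-0150
            // Prints only the PresenceManager entries, which carry the IP details.
            try (var lines = Files.lines(Path.of(args[0]))) {
                lines.filter(line -> line.contains("PresenceManager"))
                     .forEach(System.out::println);
            }
        }
    }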
Today's smart cards, such as banking cards and smart corporate badges, are capable of running multiple tiny applications, often written in high-level languages like Java or Microsoft .NET and compiled into small card-resident binaries. It is a critical security requirement to isolate the execution context and data storage of these applications in order to protect them from unauthorized access by other, malicious card applications. To satisfy this requirement, multi-application smart cards implement an "Application Firewall" concept in their operating system, which creates an execution sandbox for card applications.
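To make the firewall concept concrete, here is a minimal Java Card-style sketch (illustrative only, not tied to any vendor's implementation): each applet's objects belong to its own context, direct cross-context access raises a SecurityException, and the only sanctioned gate between contexts is a Shareable interface that the owning applet hands out selectively.

    import javacard.framework.AID;
    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.Shareable;

    // Sketch of an applet keeping its data behind the application firewall.
    public class SeedStore extends Applet implements Shareable {
        private byte[] secret = new byte[16]; // owned by this applet's context

        private SeedStore() { register(); }

        public static void install(byte[] buf, short off, byte len) { new SeedStore(); }

        public void process(APDU apdu) { /* handle this applet's own APDUs */ }

        // The firewall routes all cross-context requests through this gate;
        // returning null denies the requesting applet any access at all.
        public Shareable getShareableInterfaceObject(AID clientAID, byte parameter) {
            return null; // deny everyone by default
        }
    }

A client applet can only reach another context via JCSystem.getAppletShareableInterfaceObject(); dereferencing a foreign object directly trips the firewall.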
During the recent 44con conference in London, we presented the "HiveMod" reverse engineering tool for .NET smart cards and demonstrated the exploitation of a vulnerability to bypass the card's application firewall. The talk also highlighted threats and possible attack scenarios against smart corporate or military badges.
The presentation slides can be viewed below:
Please contact SensePost research team for more information.
There has been a healthy reaction to our initial post on our research into the RSA SecurID Software Token. A number of readers had questions about certain aspects of the research, so I thought I'd clear up some of the concerns people have.
The research pointed out two findings, the first of which is in fact a design vulnerability in the RSA software token's "Token Binding" mechanism. The second finding is another design issue that affects not only the RSA software token, but any software that generates pseudo-random numbers from a "secret seed" while running on traditional computing devices such as laptops, tablets, or mobile phones. The correct way of doing this has been addressed with hardware tokens, which are often tamper-resistant.
Let me first explain one of the usual use cases of RSA software token deployments:
The second finding, as I mentioned before, is a known issue with all software tokens. Our aim at SensePost was to demonstrate how easy/hard it would be for an attacker, who has already compromised a system, to extract RSA token secrets and clone them on another machine. A number of people commented on the fact that we did not disclose the steps required to update the LSA secrets on the cloned system. Whilst this technique is relatively easy to do, it is not required for this attack to function.
If a piece of malware was written for this attack, it does NOT have to grab the DPAPI blobs and replicate them on the attacker's machine. It can simply hook CryptUnprotectData and steal the decrypted blobs once the RSA software token starts executing. The sole reason I included the steps to replicate the DPAPI blobs on another machine was that this research was performed during a real-world, time-limited assessment. We chose to demonstrate the attack to the client by replicating the DPAPI blobs instead of developing proof-of-concept malcode.
Real-world malware targeting RSA software tokens would use the API-hooking method, or a similar approach, to grab the decrypted seed and post it back to the attacker.
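To make this concrete: DPAPI will happily decrypt a protected blob for any code running in the victim's logon session, so no key material ever needs to leave the machine. A minimal sketch of that decryption step, assuming the JNA platform library (jna-platform) and a hypothetical path to a captured blob:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import com.sun.jna.platform.win32.Crypt32Util;

    public class DpapiUnprotect {
        public static void main(String[] args) throws Exception {
            // args[0]: hypothetical path to a DPAPI-protected blob on the victim's machine
            byte[] blob = Files.readAllBytes(Path.of(args[0]));
            // Succeeds because we execute inside the same user's logon session;
            // the DPAPI master key never has to be extracted or replicated.
            byte[] plaintext = Crypt32Util.cryptUnprotectData(blob);
            System.out.println("Recovered " + plaintext.length + " plaintext bytes");
        }
    }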
"I'm also curious to know whether software token running on smartphones might be vulnerable."
The "Token Binding" bypass attack would be successful on these devices, but with a different device serial ID calculation formula. However, the application sandboxing model deployed on most modern smartphone operating systems, would make it more difficult for a malicious application, deployed on the device, to extract the software token's secret seeds. Obviously, if an attacker has physical access to a device for a short time, they would be able to extract those secrets. This is in contrast to tamper-proof hardware tokens or smart cards, which by design provide a very good level of protection, even if they are in the hands of an attacker for a long time.
"Are the shortcomings you document particular to RSA or applicable to probably applicable to Windows software tokens from rival vendors too?"
All software tokens that generate pseudo-random numbers from a "secret value" are vulnerable to this type of cloning attack - not because of vulnerabilities in their algorithms, but simply because the software runs on an operating system and storage that are not designed to be tamper-resistant, unlike modern smart cards, TPM chips, and secure memory cards.
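To see why, consider a generic software token routine (this is the standard RFC 6238 TOTP construction, not RSA's proprietary SecurID algorithm, but the cloning property is identical): the seed fully determines every code, so copying the seed off a laptop is equivalent to cloning the token.

    import java.nio.ByteBuffer;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class SoftTokenDemo {
        // Generic RFC 6238-style TOTP: anyone holding 'seed' computes identical codes.
        static String code(byte[] seed, long unixSeconds, int stepSeconds) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(seed, "HmacSHA1"));
            byte[] h = mac.doFinal(ByteBuffer.allocate(8).putLong(unixSeconds / stepSeconds).array());
            int off = h[h.length - 1] & 0x0F;          // HOTP dynamic truncation
            int bin = ((h[off] & 0x7F) << 24) | ((h[off + 1] & 0xFF) << 16)
                    | ((h[off + 2] & 0xFF) << 8) | (h[off + 3] & 0xFF);
            return String.format("%06d", bin % 1_000_000);
        }

        public static void main(String[] args) throws Exception {
            byte[] stolenSeed = "hypothetical-seed-value".getBytes(); // illustration only
            // Attacker and victim produce the same code at the same moment:
            System.out.println(code(stolenSeed, System.currentTimeMillis() / 1000L, 60));
        }
    }

There is no secret in the computation itself; the only secret is the seed, and on a general-purpose operating system that seed is ultimately just bytes another process can read.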
One solution might be a "trusted execution" environment in the CPU, as has been done for desktops and laptops by Intel (Intel TXT) and AMD. ARM's "TrustZone" technology is a similar implementation that targets mobile devices and protects mobile software from logical attacks and a range of physical attacks.
It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*, in which Haroon outlines concerns he has with penetration testing and suggests some changes to the way we test in order to improve the results we get. As you may know, security testing has been a core part of SensePost's business, and of my career for almost 13 years, so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.
As I understood it, the talk's hypothesis could be (over) simplified as follows:
Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.
It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.
A significant and interesting component of the talk's thesis has to do with the role of "0-day" in security and testing. He rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test, and therefore the situation for the attacker. He suggests that testing teams who do have 0-day are inclined to over-emphasise the ones they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this regard the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system, and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase of some of our own assessments.
What I struggle to understand, however, is why the talk emphasises this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card, or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.
The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.
Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.
Solving the security problem in total is sadly still going to take a whole lot more work...
While doing some thinking on threat modelling, I started examining the usual drivers of security spend and controls in an organisation. I've spent time on multiple fronts: security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could), security consulting (tried to help people fix stuff), and even dabbled in trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people try to justify security budgets, changes, and findings - or at least how I tried to. This is a write-up of what I believe those drivers to be (caveat: this is my opinion). It is certainly not universal - it's possible to find unbiased, highly experienced people - but even they will have to fight the tendencies their position puts on them. What I'd like you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).
The tick-box monkeys themselves provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:
But security vendors' prioritisation of controls is driven by:
Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered significant interest from people who usually wouldn't, and probably shouldn't, have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques are shared, and the world is a better place. The cynical view, then, is that some of the motivations of vulnerability researchers, and what they end up prioritising, are:
Unfortunately, as human beings, our decisions are coloured by a bunch of things, which cause us to make decisions either influenced or defined by factors other than the reality we are faced with. A couple of those lead us to prioritising different security motives if decision making rests solely with one person:
The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure things in an order that mirrors the actual risk to your environment. While it's still a black art, I believe Threat Modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a "most critical" list), and includes the valid perspectives outlined above, tries to provide an objective version of reality that isn't as vulnerable to any single one of the biases described above.