
Fri, 11 Nov 2011

Decrypting iPhone Apps

This blog post steps through how to convert encrypted iPhone application bundles into plaintext application bundles that are easier to analyse.

Requirements:

  1) A jailbroken iPhone with OpenSSH, gdb and other utilities (com.ericasadun.utilities etc.)
  2) An iPhone app
  3) On your machine:

  • otool (comes with iPhone SDK)
  • Hex editor (0xED, HexWorkshop etc.)
  • IDA (versions 5.2 through 5.6 support remote debugging of iPhone applications via iphone_server)
For this article, I will refer to the app as “blah”.

Some groundwork, taken from Apple's API docs [1, 2]:

iPhone apps are based on the Mach-O (Mach Object) file format. The image below illustrates the file format at a high level:

A Mach-O file contains three major regions:

  1. At the beginning of every Mach-O file is a header structure that identifies the file as a Mach-O file. The header also contains other basic file type information, indicates the target architecture, and contains flags specifying options that affect the interpretation of the rest of the file.
  2. Directly following the header are a series of variable-size load commands that specify the layout and linkage characteristics of the file. Among other information, the load commands can specify:
     • The initial layout of the file in virtual memory
     • The location of the symbol table (used for dynamic linking)
     • The initial execution state of the main thread of the program
     • The names of shared libraries that contain definitions for the main executable's imported symbols
  3. Following the load commands, all Mach-O files contain the data of one or more segments. Each segment contains zero or more sections. Each section of a segment contains code or data of some particular type. Each segment defines a region of virtual memory that the dynamic linker maps into the address space of the process. The exact number and layout of segments and sections is specified by the load commands and the file type.
  4. In user-level fully linked Mach-O files, the last segment is the link edit segment. This segment contains the tables of link edit information, such as the symbol table, string table, and so forth, used by the dynamic loader to link an executable file or Mach-O bundle to its dependent libraries.
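The header and load-command regions described above are easy to explore in code. The following is a minimal sketch, assuming a 32-bit little-endian Mach-O binary (as armv6/armv7 iPhone binaries of this era were); it unpacks the header and walks the load commands, yielding each command's type and size:

```python
import struct

MH_MAGIC = 0xFEEDFACE  # 32-bit little-endian Mach-O magic

def list_load_commands(data):
    """Region 1: unpack the mach_header; region 2: yield each load
    command as a (cmd, cmdsize) pair."""
    magic, cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags = \
        struct.unpack_from('<7I', data, 0)
    if magic != MH_MAGIC:
        raise ValueError('not a 32-bit little-endian Mach-O file')
    offset = 28  # sizeof(struct mach_header)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from('<II', data, offset)
        yield cmd, cmdsize
        offset += cmdsize

# e.g.: for cmd, cmdsize in list_load_commands(open('blah', 'rb').read()):
#           print(hex(cmd), cmdsize)
```

The same walk works for any of the load commands discussed in Apple's docs; fat (multi-architecture) binaries would first need the thin Mach-O image extracted.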

iPhone apps are normally encrypted and are decrypted by the iPhone loader at run time. One of the load commands, LC_ENCRYPTION_INFO, describes the encrypted region so that the loader knows what to decrypt.

Once you have downloaded and installed an app on your iPhone, make a copy of the actual executable on your machine.

Note1: blah.app is not the actual executable; it is a folder. If you browse this folder, you will find a binary file named blah. This is the actual application binary.

Note2: To find the path where your application is installed, ssh onto your iPhone and use the following command:

sudo find / | grep blah.app
Once you have copied the app binary on your machine, follow the steps below (on your local machine).

Open up a terminal and type the following command:

otool -l blah | grep crypt
This assumes that iPhone SDK or otool is already installed on your machine.

The above command will produce the following output:
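The original screenshot is not reproduced here, but for the sample app used in this article (whose crypt values appear in the discussion below), the grep output would look roughly like this:

```
     cryptoff 4096
    cryptsize 942080
      cryptid 1
```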

If cryptid is set to 1, the app is encrypted. cryptoff and cryptsize indicate the offset and size of the encrypted section respectively. First, we have to locate the cryptid in the binary and set it to zero. This is done so that when we finally decrypt the binary and execute it on the iPhone, the loader does not attempt to decrypt it again.

Open the binary in a hex editor. I did not come across any definite method of locating the cryptid. Search for “/System/Library/Frameworks”; you should be able to locate it around address 0x1000. On the line just above the very first instance of this string, you will find the byte 01. Flip it to 00 and save the file.
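Rather than guessing which 01 byte is the right one, the cryptid can be located deterministically by walking the load commands until the LC_ENCRYPTION_INFO command (0x21) is found and zeroing its cryptid field. A minimal sketch, assuming a 32-bit little-endian Mach-O binary; the field layout follows Apple's mach-o/loader.h, and the file names are placeholders:

```python
import struct

LC_ENCRYPTION_INFO = 0x21  # from mach-o/loader.h

def clear_cryptid(data):
    """Walk the load commands of a 32-bit little-endian Mach-O file
    and zero the cryptid field of LC_ENCRYPTION_INFO."""
    buf = bytearray(data)
    magic, = struct.unpack_from('<I', buf, 0)
    if magic != 0xFEEDFACE:
        raise ValueError('not a 32-bit little-endian Mach-O file')
    ncmds, = struct.unpack_from('<I', buf, 16)  # 5th header field
    offset = 28  # sizeof(struct mach_header)
    for _ in range(ncmds):
        cmd, cmdsize = struct.unpack_from('<II', buf, offset)
        if cmd == LC_ENCRYPTION_INFO:
            # command layout: cmd, cmdsize, cryptoff, cryptsize, cryptid
            struct.pack_into('<I', buf, offset + 16, 0)
            return bytes(buf)
        offset += cmdsize
    raise ValueError('no LC_ENCRYPTION_INFO command found')

# Usage (path assumed):
# open('blah.patched', 'wb').write(clear_cryptid(open('blah', 'rb').read()))
```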

Note3: In case you find multiple instances of 01, use coin-tossing method of choosing between them.

Use otool again to query the crypt data. You will see that the cryptid is now set to 0 (zero).

Next, we need to run the app installed on the iPhone and take a memory dump.

Note4: The actual application code starts at 0x2000: the binary's __TEXT segment is mapped at 0x1000, so the encrypted region at file offset 0x1000 (cryptoff) lands at 0x2000 in memory. The cryptsize of our sample app is 942080 (0xE6000). Hence, we add 0x2000 and 0xE6000.

0x2000 + 0xE6000 = 0xE8000
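The arithmetic can be reproduced directly from the otool values. A small sketch; the 0x1000 slide is an assumption that holds for this sample app, where __TEXT is mapped at 0x1000 so a file offset lands 0x1000 higher in memory:

```python
cryptoff = 4096       # 0x1000, from otool
cryptsize = 942080    # 0xE6000, from otool

dump_start = cryptoff + 0x1000   # encrypted region's address in memory
dump_end = dump_start + cryptsize

print(hex(dump_start), hex(dump_end))  # 0x2000 0xe8000
```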
Therefore, we need to dump the running process from 0x2000 to 0xE8000. Now, ssh onto your iPhone, run the target app and find its process id using the “ps ax” command. Once you have the process id, use the following commands to dump the process:
gdb -p PID
dump memory blah.bin 0x2000 0xE8000
Once you have taken the memory dump, use the “quit” command to exit gdb. Use the following command to check the size of the memory dump:
ls -l blah.bin
The size of this bin file should be exactly the same as the cryptsize of the original app. Refer to the screenshot above. Now pull this bin file onto your local machine.

On your local machine, load the bin file in a hex editor and copy everything (using select all). Close the file and open the original app binary in the hex editor (the file in which we flipped cryptid from 01 to 00). If you remember, the cryptoff was 4096, which is 0x1000 in hex. Go to address 0x1000 and make sure that your hex editor is in overwrite mode, not insert mode. Once you are at address 0x1000, paste everything you copied from the bin file. This will overwrite the encrypted section with the decrypted one. Save the file and you're done.
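The same copy-and-paste can be scripted instead of done by hand. A hedged sketch, assuming the dump is exactly cryptsize bytes and the cryptid byte has already been zeroed; the file names are the ones used in this article:

```python
def patch_binary(original, dump, cryptoff, cryptsize):
    """Replace the encrypted region of the original binary with the
    plaintext bytes dumped from the running process."""
    if len(dump) != cryptsize:
        raise ValueError('dump size does not match cryptsize')
    patched = bytearray(original)
    # Overwrite (not insert) cryptsize bytes starting at cryptoff.
    patched[cryptoff:cryptoff + cryptsize] = dump
    return bytes(patched)

# Usage (paths assumed):
# patched = patch_binary(open('blah', 'rb').read(),
#                        open('blah.bin', 'rb').read(), 4096, 942080)
# open('blah.decrypted', 'wb').write(patched)
```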

Open the file in IDA Pro and you'll see the difference between the encrypted and decrypted binaries. At this point, you can easily reverse engineer the app and patch it. The first image below shows an encrypted app and the second one illustrates a decrypted app:

After patching the application, ssh onto the iPhone and upload it to the application directory, replacing the original binary with the patched one. Once uploaded, install a utility called "ldid" on your iPhone:

apt-get install ldid
Finally, sign the patched binary using ldid:
ldid -s blah
This will fix the code signatures and you will be able to run the patched app on your iPhone.

References:

1) http://developer.apple.com/library/mac/#documentation/DeveloperTools/Conceptual/MachORuntime/Reference/reference.html

2) http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/otool.1.html

Thu, 3 Nov 2011

Squinting at Security Drivers and Perspective-based Biases

While doing some thinking on threat modelling I started examining what the usual drivers of security spend and controls are in an organisation. I've spent some time on multiple fronts, security management (been audited, had CIOs push for priorities), security auditing (followed workpapers and audit plans), pentesting (broke in however we could) and security consulting (tried to help people fix stuff) and even dabbled with trying to sell some security hardware. This has given me some insight (or at least an opinion) into how people have tried to justify security budgets, changes, and findings or how I tried to. This is a write up of what I believe these to be (caveat: this is my opinion). This is certainly not universalisable, i.e. it's possible to find unbiased highly experienced people, but they will still have to fight the tendencies their position puts on them. What I'd want you to take away from this is that we need to move away from using these drivers in isolation, and towards more holistic risk management techniques, of which I feel threat modelling is one (although this entry isn't about threat modelling).

Auditors

The tick-box monkeys themselves. They provide a useful function, and are so universally legislated and embedded in best practice that everyone has a few decades of experience on the giving or receiving end of a financial audit. The priorities audit reports seem to drive are:

  • Vulnerabilities in financial systems. The whole audit hierarchy was created around financial controls, and so sticks close to financial systems when venturing into IT's space. Detailed and complex collusion possibilities will be discussed when approving payments, but the fact that you can reset anyone's password at the helpdesk is sometimes missed, and more advanced attacks like token hijacking are often ignored.
  • Audit house priorities. Audit houses get driven just like anyone else. While I wasn't around for Enron, the reverberations could still be felt years later when I worked at one. What's more, audit houses are increasingly finding revenue coming from consulting gigs and need to keep their smart people happy. This leads to external audit selling "add-ons" like identity management audits (sometimes, they're even incentivised to).
  • Auditor skills. The auditor you get could be an amazing business process auditor but useless when it comes to infosec; next year it could be the other way around. It's equally possible with internal audit. Thus, the strengths of the auditor will determine where you get nailed the hardest.
  • The Rotation plan. This year system X, next year system Y. It doesn't mean system X has gotten better, just that they moved on. If you spend your year responding to the audit on system Y and ignore X, you'll miss vital stuff.
  • Known systems. External and internal auditors don't know IT's business in detail. There could be all sorts of critical systems (or pivot points) that are ignored because they weren't in the "flow of financial information" spread sheet.
Vendors

Security vendors are the love-to-hate people of the infosec world. Thinking of them invokes pictures of greasy salesmen phoning your CIO to ask if your security chumps have even thought about network admission control (true story). On the other hand, if you've ever been a small team trying to secure a large org, you'll know you can't do it without automation, and at some point you'll need to purchase some products. Their marketing and sales people get all over the place and end up driving controls; whether it's “management by in-flight magazine”, an idea punted at a sponsored conference, or the result of a sales meeting.

But security vendors' prioritisation of controls is driven by:

  • New Problems. Security products that work eventually get deployed everywhere they're going to be deployed. They continue to bring in income, but the vendor needs a new bright shiny thing they can take to their existing market and sell. Thus, new problems become new scary things that they can use to push product. Think of the Gartner hype curve. Whatever they're selling, be it DLP, NAC, DAM, APT prevention or IPS: if your firewall works more like a switch and your passwords are all "P@55w0rd", then you've got other problems to focus on first.
  • Overinflated problems. Some problems really aren't as big as they're made out to be by vendors, but making them look big is a key part of the sell. Even vendors who don't mean to overinflate end up doing it just because they spend all day thinking of ways to justify (even legitimate) purchases.
  • Products as solutions. Installing a product designed to help with a problem isn't the same as fixing the problem, and vendors aren't great at seeing that (some are). Take patch management solutions, there are some really awesome, mature products out there, but if you can't work out where your machines are, how many there are or get creds to them, then you've got a long way to go before that product starts solving the problem it's supposed to.
Pentesters

Every year around Black Hat Vegas/Pwn2Own/AddYourConfHere time, a flurry of media reports hits the public and some people go into panic mode. I remember the DNS bug, where all that was needed was for people to apply a patch, but which, due to the publicity around it, garnered a significant amount of interest from people it usually wouldn't have, and who probably shouldn't have cared so much. But many pentesters trade on this publicity, and some pentesting companies use it instead of a marketing budget. That's not their only, or primary, motivation, and in the end things get fixed, new techniques get shared and the world becomes a better place. The cynical view, then, is that some of the motivations for vulnerability researchers, and what they end up prioritising, are:

  • New Attacks. This is somewhat similar to the vendors optimising for "new problems", but not quite the same. When Errata introduced Hamster at ToorCon ‘07, I heard tales of people swearing at them from the back. I wasn't there, but I imagine some of the calls were because Layer 2 attacks have been around and well known for over a decade now. Many of us ignored FireSheep for the same reason, even if it motivated the biggest moves to SSL yet. But vuln researchers and the scene aren't interested in the old; it needs to be shiny, new and leet. This focus on the new, and the press it drives, has defenders running around trying to fix new problems when they haven't fixed the old ones.
  • Complex Attacks. Related to the above, a new attack can't be really basic if it's to be done well; it needs to involve considerable skill. When Mark Dowd released his highly complex Flash attack, he was rightly given much kudos. An XSS attack, on the other hand, was initially ignored by many. However, one led to a wide class of prevalent vulns, while the other requires you to be, well, Mark Dowd. This means some of the issues that should be obvious, that underpin core infrastructure, but that aren't sexy, don't get looked at.
  • Shiny Attacks. Some attacks are just really well presented and sexy. Barnaby Jack had an ATM spitting out cash and flashing "Jackpot"; that's cool, and it gets a room packed full of people to hear his talk. Hopefully it led to an improvement in the security of some of the ATMs he targeted, but the vulns he exploited were the kinds of things big banks had mostly resolved already, and how many people in the audience actually worked in ATM security? I'd be interested to see if the con budget from banks increased the year of his talk; even if it didn't, I suspect many a banker went to his talk instead of one about a more prevalent or relevant class of vulnerabilities their organisation may experience. Something Thinkst says much better here.
Individual Experience

Unfortunately, as human beings, our decisions are coloured by a bunch of things, which cause us to make decisions either influenced or defined by factors other than the reality we are faced with. A couple of those lead us to prioritising different security motives if decision making rests solely with one person:

  • Past Experience. Human beings develop through learning and consequences. When you were a child and put your hand on a stove hot plate, you got burned and didn't do it again. It's much the same every time you get burned by a security incident, or worse, internal political incident. There's nothing wrong with this, and it's why we value experience; people who've been burned enough times not to let mistakes happen again. However, it does mean time may be spent preventing a past wrong, rather than focusing on the most likely current wrong. For example, one company I worked with insisted on an overly burdensome set of controls to be placed between servers belonging to their security team and the rest of the company network. The reason for this was due to a previous incident years earlier, where one of these servers had been the source of a Slammer outbreak. While that network was never again a source of a virus outbreak, their network still got hit by future outbreaks from normal users, via the VPN, from business partners etc. In this instance, past experience was favoured over a comprehensive approach to the actual problem, not just the symptom.
  • New Systems. Usually, the time when the most budget is available to work on a system is during its initial deployment. This is equally true of security, and the mantra is for security to be built in at the beginning. Justifying a chunk of security work on the mainframe that's been working fine for the last 10 years, on the other hand, is much harder, and usually needs to hook into an existing project. The result is that it's easier to get security built into new projects than to force an organisation to make significant “security only” changes to existing systems. Consequently, the systems that present the vulnerabilities pentesters know and love get fixed less frequently.
  • Individual Motives. We're complex beings with all sorts of drivers and motivations; maybe you want to get home early to spend some time with your kids, maybe you want to impress Bob from Payroll. All sorts of things can lead to a decision that isn't necessarily the right security one. More relevantly, however, security tends to operate in a fairly segmented manner: while some aspects are “common wisdom”, others seem rarely discussed. For example, the way the CISO of Car Manufacturer A and the CISO of Car Manufacturer B set up their controls and choose their focus could be completely different, but beyond general industry chit-chat, there will be little detailed discussion of how they're securing integration to their dealership network. They rely on consultants, who've seen both sides, for that. Even then, one consultant may think that monitoring is the most important control at the moment, while another could think mobile security is it.
So What?

The result of all of this is that different companies and people push vastly different agendas. To figure out a strategic approach to security in your organisation, you need some objective, risk-based measurement that will help you secure stuff in an order that mirrors the actual risk to your environment. While it's still a black art, I believe that threat modelling helps a lot here: a sufficiently comprehensive methodology that takes into account all of your infrastructure (or at least admits the existence of risk contributed by systems outside of a “most critical” list) and includes the perspectives described above can provide an objective version of reality that isn't as vulnerable to the individual biases described in this post.

Fri, 28 Oct 2011

Metricon 2011 Summary

[I originally wrote this blog entry on the plane returning from BlackHat, Defcon & Metricon, but forgot to publish it. I think the content is still interesting, so, sorry for the late entry :)]

I've just returned after a 31hr transit from our annual US trip. Vegas, training, Blackhat & Defcon were great, it was good to see friends we only get to see a few times a year, and make new ones. But on the same trip, the event I most enjoyed was Metricon. It's a workshop held at the Usenix security conference in San Francisco, run by a group of volunteers from the security metrics mailing list and originally sparked by Andrew Jaquith's seminal book Security Metrics.

There were some great talks, and interactions, the kind you only get at small groupings around a specific set of topics. It was a nice break from the offensive sec of BH & DC to listen to a group of defenders. The talks I most enjoyed (they were all recorded bar a few private talks) were the following:

Wendy Nather — Quantifying the Unquantifiable, When Risk Gets Messy

Wendy looked at the bad metrics we often see, and provided some solid tactical advice on how to phrase (for input) and represent (for output) metrics. As part of that arc, she threw out more pithy phrases than even the people tweeting in the room could keep up with. From introducing a new unit for measuring attacker skill, "Mitnicks", to practical experience such as how a performance metric phrased as 0-100 had sysadmins aiming for 80-90, but inverting it had them aiming for 0 (her hypothesis is that school taught us that 100% is rarely achievable). Frankly, I could write a blog entry on her talk alone.

Josh Corman - "Shall we play a game?" and other questions from Joshua

Josh tried to answer the hard question of "why isn't security winning?". He avoided the usual complaints and had some solid analysis that got me thinking. In particular, the idea that PCI is the "No Child Left Behind" act for security: it not only targeted those that had been negligent, but also encouraged those who hadn't been to drop their standards. "We've huddled around the digital dozen, and haven't moved on." He went on to talk about how controls decay as attacks improve, but our best practice advice doesn't. "There's a half-life to our advice". He then provided a great setup for my talk: "What we are doing is very different from how people were exploited."

Jake Kouns - Cyber Liability Insurance

Jake has taken security to what we already knew it was, an insurance sale ;) Jokes aside, Jake is now a product manager for cyber-liability insurance at Markel. He provided some solid justifications for such insurance, and opened my eyes to the fact that it is now here. The current pricing is pretty reasonable (e.g. $1500 for $1 million in cover). Most of the thinking appeared to target small to medium organisations that until now have only really had "use AV & pray" as their infosec strategy, and I'd love to hear some case studies from large orgs that are using it & have claimed. He also spoke about how it could become a "moral hazard", where people choose to insure rather than implement controls, and the difficulties the industry could face, but which right now works as an incentive for us (the cost of auditing a business will be more than the insurance is worth). His conclusion, which seemed solid, is: why spend $x million on the "next big sec product" when you could spend less & get more with insurance? Lots of questions left, but it looks like it may be time to start investigating.

Allison Miller - Applied Risk Analytics

I really enjoyed Allison and Itai's talk. They looked at practical methodologies for developing risk metrics and coloured them with great examples. The process they presented was the following:

  1. Target - You need to figure out what you want to measure. Allison recommended aiming for "yes/no" questions rather than more open-ended questions such as "Are we at risk?"
  2. Find Data, Create Variables - Once you know what you're trying to look at, you need to find appropriate data, and work out what the variables are from it.
  3. Data Prep - "Massaging the data" tasks such as normalising, getting it into a computable format etc.
  4. Model Training - Pick an algorithm, send the data through it and see what comes out. She suggested using a couple, and pitting them against each other.
  5. Assessment - Check the output, what is the "Catch vs False Positive vs False Negative" rate. Even if you have FP & FNs, sometimes, weighting the output to give you one failure of a particular type could still be useful.
  6. Deployment - Building intelligence to take automated responses once the metric is stable
The example they gave looked for account takeovers stemming from the recently released e-mail/password combo lists. Itai took us through each step and showed us how they were eventually able to automate the decision making off the back of a solid metric.
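Steps 4 and 5 of the process above can be sketched in a few lines. This is a purely illustrative toy, not anything from the talk: every variable, threshold and rate below is invented. It fabricates prepared account-takeover data, pits two candidate models against each other, and reports each model's catch, false-positive and false-negative counts:

```python
import random

random.seed(7)

# Steps 2-3 (assumed, synthetic data): each event is
# (failed_logins, password_in_leaked_list, is_takeover).
events = []
for _ in range(1000):
    takeover = random.random() < 0.05
    failed_logins = max(0.0, random.gauss(8 if takeover else 2, 2))
    in_leaked_list = random.random() < (0.8 if takeover else 0.1)
    events.append((failed_logins, in_leaked_list, takeover))

# Step 4: two candidate models, pitted against each other.
def model_a(failed_logins, in_leaked_list):
    return failed_logins > 5                      # naive threshold

def model_b(failed_logins, in_leaked_list):
    return failed_logins > 4 and in_leaked_list   # combine both variables

# Step 5: assessment - catch vs false positive vs false negative.
def assess(model):
    catch = false_pos = false_neg = 0
    for failed_logins, in_leaked_list, takeover in events:
        flagged = model(failed_logins, in_leaked_list)
        catch += flagged and takeover
        false_pos += flagged and not takeover
        false_neg += not flagged and takeover
    return catch, false_pos, false_neg

for name, model in (('A', model_a), ('B', model_b)):
    print(name, assess(model))
```

Once a model's rates are stable and acceptable, step 6 (deployment) would wire its decision into an automated response.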

Conclusion

I found the conference refreshing, with a lot of great advice (more than the little listed above). Too often we get stuck in the hamster wheels of pain, and it's nice to think we may be able to slowly take a step off. Hopefully we'll be back next year.

Wed, 19 Oct 2011

Press Release - Jane Frankland joins SensePost

The SensePost marketing division, a highly skilled team of ruthless spin-doctors, is proud to announce that they have written ... a press release. Indeed, this team of fawners, flunkeys, lackeys and puffers has been slaving since early 2009 to come up with the pristine example of literary art you will read below. If you're intimidated by what I've just said, harbour a fanatical dislike for marketing folks or simply don't read so good, then here's the short version:

As of 01 October we have been joined by Jane Frankland, an industry stalwart, previously with Corsaire and NGS. Jane will be responsible for growing the SensePost business in the UK and Europe and we think she's very clever. We're extremely pleased to have her on board and sincerely look forward to working with her. Welcome onboard Jane!

So, here's the famous press-release...

Former Founder of Corsaire and Associate Director of Operations at NGS Secure Moves to Expand SensePost's UK and European Presence

Pretoria, South Africa -- SensePost, a leader in penetration testing and information security services, announced today that Jane Frankland has joined the company as Head of Business Development for Europe. Frankland will focus first on expanding the brand's UK national reach while providing strategic support and direction for the company's European clients.

Jane was most recently an Associate Director at NGS Secure, an NCC Group company. She was responsible for their UK (SE), Australian and US operations and also played a part in developing their marketing strategy, including re-branding. Prior to NGS Secure, Frankland founded Corsaire, another leading brand in information security consultancy and assessment services. During her 13 years as their Commercial Director, she managed accounts such as Marks & Spencer, Royal Sun Alliance, William Hill and RWE.

When asked “Why SensePost?” Frankland stresses the caliber of the consultants she is working with alongside the value-culture that the Directors have created. “In joining SensePost, I get to be part of an incredibly forward thinking and technically able group, plus I have an active hand in establishing SensePost as a dominant brand in penetration testing services in the UK. It's lovely to be working in collaboration again!”

Charl van der Walt, co-Founder and Managing Director of SensePost, stated that growth into the UK market was a key strategic priority for the company. “When we met Jane, we found the right mix of strategic insight and business management experience. She brings a wealth of experience, fits into the team and can help expand our business. We're excited to welcome her into the SensePost team.”

You can hear more from Jane herself here.

Tue, 18 Oct 2011

Be Inspired

  • Talented
  • Innovative
  • Quality driven
  • Forward thinking
  • Trusted advisors
  • And …simply good fun!
These are all phrases associated with SensePost. Do you think you have what it takes to become part of our expanding GLOBAL team?

We are looking for more security assessment consultants to join us in the UK and South Africa. Security assessments are what we live and breathe — whether it's foot-printing and obtaining enterprise domain admin rights on production networks, training hundreds at conferences around the world, reverse-engineering mobile applications or producing cutting-edge security applications.

For over a decade we have helped companies understand their information security liabilities and successfully reduced their risk. We have also pioneered assessment training and supported the infosec community with our tools and research. Few companies can match our offering.

We take pride in our world-class team and the quality of the work we deliver. Personal research and career development are as important to us as performing assessments. We invest in our staff, AND we're not interested in burnout through back-to-back engagements.

So, if you're interested in IT security, have at least 2 years' experience of penetration testing and security assessments, or have an idea that you think could change this industry, we'd love to hear from you.

Just drop us an email: jobs@sensepost.com