Apple can comply with the FBI court order

Earlier today, a federal judge ordered Apple to comply with the FBI’s request for technical assistance in the recovery of the San Bernardino gunman’s iPhone 5C. Since then, many have debated whether the FBI’s requests are technically feasible given the support for strong encryption on iOS devices. Based on my initial reading of the request and my knowledge of the iOS platform, I believe all of the FBI’s requests are technically feasible.

The FBI’s Request

In a search after the shooting, the FBI discovered an iPhone belonging to one of the attackers. The iPhone is the property of the San Bernardino County Department of Public Health, where the attacker worked, and the FBI has permission to search it. However, the FBI has been unable, so far, to guess the passcode to unlock it. On iOS devices, nearly all important files are encrypted with a combination of the phone passcode and a hardware key embedded in the device at manufacture time. If the FBI cannot guess the phone passcode, then they cannot recover any of the messages or photos from the phone.

There are a number of obstacles that stand in the way of guessing the passcode to an iPhone:

  • iOS may completely wipe the user’s data after too many incorrect PIN entries
  • PINs must be entered by hand on the physical device, one at a time
  • iOS introduces a delay after every incorrect PIN entry

As a result, the FBI has made a request for technical assistance through a court order to Apple. As one might guess, their requests target each one of the above pain points. In their request, they have asked for the following:

  1. [Apple] will bypass or disable the auto-erase function whether or not it has been enabled;
  2. [Apple] will enable the FBI to submit passcodes to the SUBJECT DEVICE for testing electronically via the physical device port, Bluetooth, Wi-Fi, or other protocol available on the SUBJECT DEVICE; and
  3. [Apple] will ensure that when the FBI submits passcodes to the SUBJECT DEVICE, software running on the device will not purposefully introduce any additional delay between passcode attempts beyond what is incurred by Apple hardware.

In plain English, the FBI wants to ensure that it can make an unlimited number of PIN guesses, that it can make them as fast as the hardware will allow, and that they won’t have to pay an intern to hunch over the phone and type PIN codes one at a time for the next 20 years — they want to guess passcodes from an external device like a laptop or other peripheral.

As a remedy, the FBI has asked for Apple to perform the following actions on their behalf:

[Provide] the FBI with a signed iPhone Software file, recovery bundle, or other Software Image File (“SIF”) that can be loaded onto the SUBJECT DEVICE. The SIF will load and run from Random Access Memory (“RAM”) and will not modify the iOS on the actual phone, the user data partition or system partition on the device’s flash memory. The SIF will be coded by Apple with a unique identifier of the phone so that the SIF would only load and execute on the SUBJECT DEVICE. The SIF will be loaded via Device Firmware Upgrade (“DFU”) mode, recovery mode, or other applicable mode available to the FBI. Once active on the SUBJECT DEVICE, the SIF will accomplish the three functions specified in paragraph 2. The SIF will be loaded on the SUBJECT DEVICE at either a government facility, or alternatively, at an Apple facility; if the latter, Apple shall provide the government with remote access to the SUBJECT DEVICE through a computer allowing the government to conduct passcode recovery analysis.

Again in plain English, the FBI wants Apple to create a special version of iOS that only works on the one iPhone they have recovered. This customized version of iOS (*ahem* FBiOS) will ignore passcode entry delays, will not erase the device after any number of incorrect attempts, and will allow the FBI to hook up an external device to facilitate guessing the passcode. The FBI will send Apple the recovered iPhone so that this customized version of iOS never physically leaves the Apple campus.

As many jailbreakers are familiar, firmware can be loaded via Device Firmware Upgrade (DFU) Mode. Once an iPhone enters DFU mode, it will accept a new firmware image over a USB cable. Before any firmware image is loaded by an iPhone, the device first checks whether the firmware has a valid signature from Apple. This signature check is why the FBI cannot load new software onto an iPhone on their own — the FBI does not have the secret keys that Apple uses to sign firmware.
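The gatekeeping role of the signature check can be sketched as follows. This is an illustrative model only: the real check verifies an asymmetric signature chained to Apple’s root certificate, and here an HMAC key that only “Apple” holds plays that role (the key, image contents, and function names are all hypothetical).

```python
import hashlib
import hmac

# Hypothetical stand-in for Apple's code-signing check. The real check
# verifies an asymmetric signature chained to Apple's root certificate;
# here, an HMAC key that only "Apple" holds plays that role.
APPLE_SIGNING_KEY = b"apple-private-signing-key"  # hypothetical

def sign_firmware(image: bytes) -> bytes:
    """What Apple's build infrastructure does to a firmware image."""
    return hmac.new(APPLE_SIGNING_KEY, hashlib.sha256(image).digest(),
                    hashlib.sha256).digest()

def boot_rom_accepts(image: bytes, signature: bytes) -> bool:
    """What the device does in DFU mode before loading any image."""
    return hmac.compare_digest(sign_firmware(image), signature)

firmware = b"...iOS kernelcache bytes..."
sig = sign_firmware(firmware)

assert boot_rom_accepts(firmware, sig)       # Apple-signed image loads
assert not boot_rom_accepts(b"FBiOS", sig)   # anyone else's image does not
```

Without the signing key, no modified image (or signature) the FBI could construct would pass the device’s check.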

Enter the Secure Enclave

Even with a customized version of iOS, the FBI has another obstacle in their path: the Secure Enclave (SE). The Secure Enclave is a separate computer inside the iPhone that brokers access to encryption keys for services like the Data Protection API (aka file encryption), Apple Pay, Keychain Services, and our Tidas authentication product. All devices with TouchID (or any devices with A7 or later A-series processors) have a Secure Enclave.

When you enter a passcode on your iOS device, this passcode is “tangled” with a key embedded in the SE to unlock the phone. Think of this like the 2-key system used to launch a nuclear weapon: the passcode alone gets you nowhere. Therefore, you must cooperate with the SE to break the encryption. The SE keeps its own counter of incorrect passcode attempts and gets slower and slower at responding with each failed attempt, all the way up to 1 hour between requests. There is nothing that iOS can do about the SE: it is a separate computer outside of the iOS operating system that shares the same hardware enclosure as your phone.
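As a rough sketch of this “tangling”, consider a key derivation that mixes the passcode with a device-unique value. Apple’s real construction differs (it iterates a derivation through the hardware AES engine with the UID key), so treat the PBKDF2 call, the UID constant, and the iteration count below as illustrative stand-ins:

```python
import hashlib

# Hypothetical sketch of passcode "tangling": the derived key depends on
# BOTH the passcode and a device-unique hardware key, so every guess must
# run on the device itself; offline cracking rigs are useless. The UID
# value and PBKDF2 parameters are illustrative only.
HARDWARE_UID = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_file_key(passcode: str, iterations: int = 10_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(),
                               HARDWARE_UID, iterations)

assert derive_file_key("1234") != derive_file_key("1235")  # wrong guess, wrong key
assert len(derive_file_key("1234")) == 32                  # 256-bit derived key
```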

The Hardware Key is stored in the Secure Enclave in A7 and newer devices

As a result, even a customized version of iOS cannot influence the behavior of the Secure Enclave. It will delay passcode attempts whether or not that feature is turned on in iOS. Private keys cannot be read out of the Secure Enclave, ever, so the only choice you have is to play by its rules.

Passcode delays are enforced by the Secure Enclave in A7 and newer devices

Apple has gone to great lengths to ensure the Secure Enclave remains safe. Many consumers became familiar with these efforts after “Error 53” messages appeared due to third-party replacement of, or tampering with, the TouchID sensor. iPhones are restricted to work with only a single TouchID sensor via device-level pairing. This security measure ensures that attackers cannot build a fraudulent TouchID sensor that brute-forces fingerprint authentication to gain access to the Secure Enclave.

For more information about the Secure Enclave and Passcodes, see pages 7 and 12 of the iOS Security Guide.

The Devil is in the Details

“Why not simply update the firmware of the Secure Enclave too?” I initially speculated that the private data stored within the SE was erased on updates, but I now believe this is not true. Apple can update the SE firmware; the update does not require the phone passcode and does not wipe user data. Apple can therefore disable the passcode delay and disable auto-erase with a firmware update to the SE. After all, Apple has updated the SE with increased delays between passcode attempts and no phones were wiped.

If the device lacks a Secure Enclave, then a single firmware update to iOS will be sufficient to disable passcode delays and auto erase. If the device does contain a Secure Enclave, then two firmware updates, one to iOS and one to the Secure Enclave, are required to disable these security features. The end result in either case is the same. After modification, the device is able to guess passcodes at the fastest speed the hardware supports.

The recovered iPhone is a model 5C. The iPhone 5C lacks TouchID and, therefore, lacks a Secure Enclave. The Secure Enclave is not a concern. Nearly all of the passcode protections are implemented in software by the iOS operating system and are replaceable by a single firmware update.

The End Result

There are still caveats in these older devices and a customized version of iOS will not immediately yield access to the phone passcode. Devices with A6 processors, such as the iPhone 5C, also contain a hardware key that cannot ever be read. This key is also “tangled” with the phone passcode to create the encryption key. However, there is nothing that stops iOS from querying this hardware key as fast as it can. Without the Secure Enclave to play gatekeeper, this means iOS can guess one passcode every 80ms.

Passcodes can only be guessed once every 80ms with or without the Secure Enclave

Even though this 80ms limit is not ideal, it is a massive improvement over guessing only one passcode per hour with unmodified software. After the elimination of passcode delays, it will take under 15 minutes to exhaust every 4-digit PIN, roughly a day to exhaust every 6-digit PIN, or years to recover a 6-character alphanumeric password. It has not been reported whether the recovered iPhone uses a 4-digit PIN or a longer, more complicated alphanumeric passcode.
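The arithmetic behind these estimates is straightforward. The sketch below assumes one guess per 80ms with no other overhead (a raw lower bound), and a 36-character alphabet (lowercase letters plus digits) for the alphanumeric case:

```python
# Worst-case time to exhaust each passcode space at one guess per 80 ms.
# This ignores per-attempt overhead and assumes a 36-character alphabet
# (lowercase plus digits) for the alphanumeric case.
GUESS_SECONDS = 0.080

def worst_case_hours(keyspace: int) -> float:
    return keyspace * GUESS_SECONDS / 3600

print(f"4-digit PIN:  {worst_case_hours(10 ** 4) * 60:.1f} minutes")        # ~13.3
print(f"6-digit PIN:  {worst_case_hours(10 ** 6):.1f} hours")               # ~22.2
print(f"6-char alnum: {worst_case_hours(36 ** 6) / (24 * 365):.1f} years")  # ~5.5
```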

Festina Lente

Apple has allegedly cooperated with law enforcement in the past by using a custom firmware image that bypassed the passcode lock screen. This simple UI hack was sufficient in earlier versions of iOS since most files were unencrypted. However, since iOS 8, it has become the default for nearly all applications to encrypt their data with a combination of the phone passcode and the hardware key. This change necessitates guessing the passcode and has led directly to this request for technical assistance from the FBI.

I believe it is technically feasible for Apple to comply with all of the FBI’s requests in this case. On the iPhone 5C, the passcode delay and device erasure are implemented in software and Apple can add support for peripheral devices that facilitate PIN code entry. In order to limit the risk of abuse, Apple can lock the customized version of iOS to only work on the specific recovered iPhone and perform all recovery on their own, without sharing the firmware image with the FBI.


For more information, please listen to my interview with the Risky Business podcast.

  • Update 1: Apple has issued a public response to the court order.
  • Update 2: Software updates to the Secure Enclave are unlikely to erase user data. Please see the Secure Enclave section for further details.
  • Update 3: Reframed “The Devil is in the Details” section and noted that Apple can equally subvert the security measures of the iPhone 5C and later devices that include the Secure Enclave via software updates.

Tidas: a new service for building password-less apps

For most mobile app developers, password management has as much appeal as a visit to the dentist. You do it because you have to, but it is annoying and easy to screw up, even when using standard libraries or protocols like OAuth.

Your users feel the same way. Even if they know to use strong passwords and avoid reusing them, mobile devices make this difficult. Typing a strong p@4sw0r%d on a tiny keyboard is a hassle.

Today, we’ve got some good news for app developers. We’re releasing a simple SDK drop-in for iOS apps called Tidas. This SDK allows you to completely replace passwords with a simple touch to log into an app. It relies on strong encryption built into iOS to validate the user’s identity without the need to transmit any private information outside of the device.

Tidas: Make passwords obsolete

When your app is installed on a new device, the Tidas SDK generates a unique encryption key identifying the user and registers it with the Tidas backend. This key is stored on the device in the iOS Secure Enclave chip and is protected by Touch ID, requiring the user to use their fingerprint to sign into the app. Signing in generates a digitally signed session token that your backend can pass to the Tidas backend to verify the user’s identity. The entire authentication process is handled by the SDK and does not require you to touch any of the user’s sensitive data.
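A rough sketch of this flow follows, with an HMAC standing in for the Secure Enclave’s asymmetric signature so the example runs anywhere; none of the names below are the real Tidas API.

```python
import hashlib
import hmac
import json
import os
import time

# Illustrative sketch only: Tidas actually uses a key pair held in the
# Secure Enclave, and an HMAC stands in for that signature here. None of
# these names are the real Tidas API.
device_key = os.urandom(32)             # generated on-device at install
enrolled = {"user-123": device_key}     # what the backend registers

def sign_in(user_id: str) -> bytes:
    """On-device: Touch ID unlocks the key, which signs a session token."""
    token = json.dumps({"user": user_id, "ts": int(time.time())}).encode()
    tag = hmac.new(enrolled[user_id], token, hashlib.sha256).digest()
    return token + tag                   # tag is always 32 bytes

def verify(user_id: str, blob: bytes) -> bool:
    """Backend: check the signature; no password or PII ever changes hands."""
    token, tag = blob[:-32], blob[-32:]
    expected = hmac.new(enrolled[user_id], token, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

blob = sign_in("user-123")
assert verify("user-123", blob)
```

The point of the design survives the simplification: the server stores only verification material, never a password.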

Start a free trial to see our source code

Preserve user privacy and minimize your liability

Tidas is built by Trail of Bits, a security research company dedicated to advancing Internet security. From the ground up, we have designed Tidas to be safe even in the worst case scenario. If the Tidas backend or your servers were breached tomorrow, the attackers would gain nothing: they would find no passwords and no personally identifying information.

That’s because Tidas doesn’t store any sensitive data outside the mobile device. A user’s encryption keys never leave their device’s Secure Enclave chip and cannot be compromised even if the app, the device, or the server is hacked.

Tidas doesn’t collect or have any access to the user’s fingerprints either. That’s Touch ID’s job: it collects users’ fingerprints for authentication and stores them in the Secure Enclave, so they remain completely opaque to Tidas. By design, Tidas protects users’ privacy, and you never have to worry about how to handle their login credentials.

Free access until March 31, 2016

Tidas is free until March 31st. There’s no billing, and no usage limits. Just sign up to gain unfettered access to Tidas’s API. We’ll also provide all the Ruby middleware and Objective-C client libraries you need.

Go to passwordlessapps.com and download the Tidas SDK now!

Read more about the fast-approaching death of the password in the Wall Street Journal and our press release about Tidas this morning.

Join us at Etsy’s Code as Craft

We’re excited to announce that Sophia D’Antoine will be the next featured speaker at Etsy’s Code as Craft series on Wednesday, February 10th from 6:30-8pm in NYC.

What is Code as Craft?

Etsy Code as Craft events are a semi-monthly series of guest speakers who explore a technical topic or computing trend, sharing both conceptual ideas and practical advice. All talks will take place at the Etsy Labs on the 7th floor at 55 Washington Street in beautiful Brooklyn (Suite 712). Come see an awesome speaker and take a whirl in our custom photo booth. We hope to see you at an upcoming event!

In her talk, Sophia will discuss the latest in iOS security and the intersection of this topic with compiler theory. She will also cover one of our ongoing projects, MAST, a mobile application security toolkit for iOS, which we introduced on this blog last year. Since then, we’ve continued to work on it, added new features, and transitioned it from a proof-of-concept DARPA project to a full-fledged mobile app protection suite.

What’s the talk about?

iOS applications have become increasingly popular targets for hackers, reverse engineers, and software pirates. In this presentation, we discuss the current state of iOS attacks, review available security APIs, and reveal why they are not enough to defend against known threats. For high-risk applications, novel protections that go beyond those offered by Apple are required. As a solution, we discuss the design of the Mobile Application Security Toolkit (MAST), which ties together jailbreak detection, anti-debugging, and anti-reversing in LLVM to address these risks.

We hope to see you there. If you’re interested in attending, follow this link to register. MAST is still a beta product, so if you’re interested in using it on your own iOS applications after seeing this talk, contact us directly.

Enabling Two-Factor Authentication (2FA) for Apple ID and Dropbox

In light of the recent compromises, you’re probably wondering what could have been done to prevent such attacks. According to some unverified articles, it would appear that flaws in Apple’s services allowed an attacker to brute-force passwords without any rate limiting or account lockout. While it’s not publicly known whether the attacks were accomplished via brute-force password guessing, there has been a lot of talk about enabling Two-Factor Authentication (2FA) across services that offer it. The two most popular services being discussed are iCloud and Dropbox. While setting up 2FA on these services is not as easy as it should be, this guide will step you through enabling 2FA on Google, Apple ID and Dropbox accounts. It’s a free way of adding an extra layer of security on top of these services, which handle potentially sensitive information.

What is Two-Factor Authentication?

Username and password authentication uses a single factor to verify identity: something the user knows. Two-factor authentication adds an extra layer of security on top of a username and password. Normally, the second factor is something only the real user has. This is typically a temporary passcode generated by a piece of hardware such as an RSA token, a passcode sent via SMS to the user’s cell phone, or a mobile application that accomplishes the same function.

With two-factor authentication, stealing a username and password won’t be enough to log in — the second factor is also required. This multi-factor authentication means an attacker will be required to compromise a user above and beyond password guessing or stealing a credentials database. An attacker would have to gain access to the source of the extra, unique and usually temporary information that makes up the 2FA.
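The most common app-based second factor is a time-based one-time password (TOTP, RFC 6238): both sides share a secret, and the code changes every 30 seconds. A minimal sketch, checked against the published RFC test vector:

```python
import hashlib
import hmac
import struct

# Minimal time-based one-time password (TOTP, RFC 6238) generator: server
# and authenticator app share a secret, and the second factor is a short
# code derived from the current 30-second window.
def totp(secret: bytes, unix_time: float, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(unix_time) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # RFC 4226 dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, T=59): the 8-digit code is 94287082.
assert totp(b"12345678901234567890", 59, digits=8) == "94287082"
```

An attacker who steals the password still cannot produce the current code without the shared secret.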

ReMASTering Applications by Obfuscating during Compilation

In this post, we discuss the creation of a novel software obfuscation toolkit, MAST, implemented in the LLVM compiler and suitable for denying program understanding to even the most well-resourced adversary. Our implementation is inspired by effective obfuscation techniques used by nation-state malware and techniques discussed in academic literature. MAST enables software developers to protect applications with technology developed for offense.

MAST is a product of Cyber Fast Track, and we would like to thank Mudge and DARPA for funding our work. This project would not have been possible without their support. MAST is now a commercial product offering of Trail of Bits and companies interested in licensing it for their own use should contact info@trailofbits.com.

Background

There are a lot of risks in releasing software these days. Once upon a time, reverse engineering software presented a challenge best solved by experienced and skilled reverse engineers at great expense. It was worthwhile for reasonably well-funded groups to reverse engineer and recreate proprietary technology or for clever but bored people to generate party tricks. Despite the latter type of people causing all kinds of mild internet havoc, reverse engineering wasn’t widely considered a serious threat until relatively recently.

Over time, however, the stakes have risen; criminal entities, corporations, and even nation-states have become extremely interested in software vulnerabilities. These entities seek either to defend their own networks, applications, and users, or to attack someone else’s. Historically, software obfuscation was a concern of the “good guys”, who were interested in protecting their intellectual property. It wasn’t long before malicious entities began obfuscating their own tools to protect captured tools from analysis.

A recent example of successful obfuscation is that used by the authors of the Gauss malware; several days after discovering the malware, Kaspersky Lab, a respected malware analysis lab and antivirus company, posted a public plea for assistance in decrypting a portion of the code. That even a company of professionals had trouble enough to ask for outside help is telling: obfuscation can be very effective. Professional researchers have been unable to deobfuscate Gauss to this day.

Motivation

With all of this in mind, we were inspired by Gauss to create a software protection system that leapfrogs available analysis technology. Could we repurpose techniques from software exploitation and malware obfuscation into a state-of-the-art software protection system? Our team is quite familiar with publicly available tools for assisting in reverse engineering tasks and considered how to significantly reduce their efficacy, if not deny it altogether.

Software developers seek to protect varying classes of information within a program. Our system must account for each with equal levels of protection to satisfy these potential use cases:

  • Algorithms: adversary knowledge of proprietary technology
  • Data: knowledge of proprietary data (the company’s or the user’s)
  • Vulnerabilities: knowledge of vulnerabilities within the program

In order for the software protection system to be useful to developers, it must be:

  • Easy to use: the obfuscation should be transparent to our development process, not alter or interfere with it. No annotations should be necessary, though we may want them in certain cases.
  • Cross-platform: the obfuscation should apply uniformly to all applications and frameworks that we use, including mobile or embedded devices that may run on different processor architectures.
  • Protect against state-of-the-art analysis: our obfuscation should leapfrog available static analysis tools and techniques and require novel research advances to see through.

Finally, we assume an attacker will have access to the static program image; many software applications are going to be directly accessible to a dedicated attacker. For example, an attacker interested in a mobile application, anti-virus signatures, or software patches will have the static program image to study.

Our Approach

We decided to focus primarily on preventing static analysis. Today, a wide array of tools can be run statically over application binaries, yielding information to attackers with comparatively little work and time, and many attackers are proficient at building their own situation-specific tools. Static tools can often be run very quickly over large amounts of code and do not require the attacker to have an environment in which to execute the target binary.

We decided on a group of techniques that compose together, comprising opaque predicate insertion, code diffusion, and – because our original scope was iOS applications – mangling of Objective-C symbols. These make the protected application impossible to understand without environmental data, impossible to analyze with current static analysis tools due to alias analysis limitations, and deny the effectiveness of breakpoints, method name retrieval scripts, and other common reversing techniques. In combination, these techniques attack a reverse engineer’s workflow and tools from all sides.

Further, we did all of our obfuscation work inside of a compiler (LLVM) because we wanted our technology to be thoroughly baked into the entire program. LLVM can use knowledge of the program to generate realistic opaque predicates or hide diffused code inside of false paths not taken, forcing a reverse engineer to consult the program’s environment (which might not be available) to resolve which instruction sequences are the correct ones. Obfuscating at the compiler level is more reliable than operating on an existing binary: there is no confusion about code vs. data or missing critical application behavior. Additionally, compiler-level obfuscation is transparent to current and future development tools based on LLVM. For instance, MAST could obfuscate Swift on the day of release — directly from the Xcode IDE.

Symbol Mangling

The first and simplest technique was to hinder quick Objective-C method name retrieval scripts; this is certainly the least interesting of the transforms, but would remove a large amount of human-readable information from an iOS application. Without method or other symbol names present for the proprietary code, it’s more difficult to make sense of the program at a glance.
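One way to picture the transform (this is illustrative, not MAST’s actual scheme): deterministically replace each selector with an opaque identifier derived from a per-build secret, so the strings left in the binary carry no meaning.

```python
import hashlib

# Illustrative only (not MAST's actual scheme): deterministically replace
# each Objective-C selector with an opaque identifier derived from a
# per-build secret, so a class-dump of the binary reveals nothing.
BUILD_SALT = b"per-build-secret"   # hypothetical

def mangle(selector: str) -> str:
    digest = hashlib.sha256(BUILD_SALT + selector.encode()).hexdigest()
    return "m_" + digest[:12]

# "decryptPurchaseReceipt:" tells an attacker exactly where to look;
# its mangled form tells them nothing.
assert mangle("decryptPurchaseReceipt:") != mangle("validateLicense:")
assert mangle("decryptPurchaseReceipt:").startswith("m_")
```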

Opaque Predicate Insertion

The second technique we applied, opaque predicate insertion, is not a new technique. It has been done before in numerous ways, and capable analysts have developed ways around many of the common implementations. We created a stronger version of predicate insertion by inserting predicates with opaque conditions and alternate branches that look realistic to a script or person skimming the code. Realistic predicates significantly slow down a human analyst, and also slow down tools that operate on program control flow graphs (CFGs) by ballooning the graph to be much larger than the original. Increased CFG size affects the size of the program and its execution speed, but our testing indicates the impact is smaller than or consistent with similar tools.
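A deliberately simple illustration of the idea, in Python rather than LLVM IR: the condition below always holds, but that fact is not obvious to a tool skimming the code, and the dead branch is a place to hide misleading code.

```python
# The condition below always holds (n*n + n = n*(n+1), a product of two
# consecutive integers, is always even), but that is not obvious to a tool
# or person skimming the code; the dead "else" branch is a realistic-looking
# place to hide misleading (or diffused) code.
def secret_logic(x: int) -> int:
    return x * 2 + 1

def obfuscated(x: int) -> int:
    if (x * x + x) % 2 == 0:    # opaque predicate: always true
        return secret_logic(x)
    return x - 41               # dead code, present only to mislead

assert all(obfuscated(n) == secret_logic(n) for n in range(-100, 100))
```

MAST generates such predicates in the compiler, where program knowledge makes them far harder to distinguish from genuine branches than this toy.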

Code Diffusion

The third technique, code diffusion, is by far the most interesting. We took the ideas of Return-Oriented Programming (ROP) and applied them in a defensive manner.

In a straightforward situation, an attacker exploits a vulnerability in an application and supplies their own code for the target to execute (shellcode). However, since the introduction of non-executable data mitigations like DEP and NX, attackers have had to find ways to execute malicious code without introducing anything new. ROP is a technique that reuses code already present in the application. Typically, an attacker finds a set of short “gadgets” in the existing program text that each perform a simple task, and then links them together, jumping from one to the next, to build up the functionality required for their exploit, effectively creating a new program by jumping around in the existing one.

We transform application code such that it jumps around in a ROP-like way, scrambling the program’s control flow graph into disparate units. However, unlike ROP, where attackers are limited by the gadgets they can find and their ability to predict their location at runtime, we precisely control the placement of gadgets during compilation. For example, we can store gadgets in the bogus programs inserted during the opaque predicate obfuscation. After applying this technique, reverse engineers will immediately notice that the handy graph is gone from tools like IDA. Further, this transformation will make it impossible to use state-of-the-art static analysis tools, like BAP, and impedes dynamic analysis techniques that rely on concrete execution with a debugger. Code diffusion destroys the semantic value of breakpoints, because a single code snippet may be re-used by many different functions and not used by other instances of the same function.

Native code before obfuscation with MAST (the IDA graph view is useful)

Native code after obfuscation with MAST (the IDA graph view is useless)

The figures above demonstrate a very simple function before and after the code diffusion transform, using screenshots from IDA. In the first figure, there is a complete control flow graph; in the second, however, the first basic block no longer jumps directly to either of the following blocks; instead, it must refer at runtime to a data section elsewhere in the application before it knows where to jump in either case. Running this code diffusion transform over an entire application reduces the entire program from a set of connected-graph functions to a much larger set of single-basic-block “functions.”
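The effect of the transform can be mimicked at a high level in Python: the loop below executes `gcd` as a set of disconnected single-block handlers linked only through a runtime dispatch table, so no static edge connects one block to the next (a loose analogy to what MAST does at the LLVM level).

```python
# Each lambda is one "basic block"; the only link between blocks is the
# dispatch table consulted at runtime, so no static edge connects them.
def gcd_diffused(a: int, b: int) -> int:
    table = {
        "test": lambda a, b: ("done", a, b) if b == 0 else ("step", a, b),
        "step": lambda a, b: ("test", b, a % b),
    }
    state = "test"
    while state != "done":
        state, a, b = table[state](a, b)
    return a

assert gcd_diffused(54, 24) == 6
```

A static tool sees only a loop over a table; recovering the original control flow requires reasoning about the table’s runtime contents.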

Code diffusion has a noticeable performance impact on whole-program obfuscation. In our testing, we compared the speed of bzip2 before and after our return-oriented transformation and slowdown was approximately 55% (on x86).

Environmental Keying

MAST does one more thing to make reverse engineering even more difficult — it ties the execution of the code to a specific device, such as a user’s mobile phone. While using device-specific characteristics to bind a binary to a device is not new (it is extensively used in DRM and some malware, such as Gauss), MAST is able to integrate device-checking into each obfuscation layer as it is woven through the application. The intertwining of environmental keying and obfuscation renders the program far more resistant to reverse-engineering than some of the more common approaches to device-binding.

An attacker can no longer analyze just any copy of the application; they must also acquire and analyze the execution environment of the target computer. The whole environment is typically far more challenging to get ahold of, and has a much larger quantity of code to analyze. Even if the environment is captured and time is taken to reverse engineer application details, the results will not be useful against the same application running on other hosts, because every host runs its own keyed version of the binary.
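A toy sketch of the idea: derive a key from host-specific attributes and encrypt a payload under it, so the same bytes are inert when copied to any other machine (XOR keystream for brevity; this is not MAST’s actual construction, and the attribute names are illustrative).

```python
import hashlib

# Derive a key from attributes of the intended host and encrypt the payload
# under it: copied to any other machine, the blob never decrypts correctly.
# (XOR keystream for brevity; this is not MAST's actual construction.)
def host_key(hostname: str, serial: str) -> bytes:
    return hashlib.sha256(f"{hostname}|{serial}".encode()).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    stream = (key * (len(data) // len(key) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

payload = b"sensitive routine"
blob = xor_crypt(payload, host_key("victim-laptop", "SN-0001"))

assert xor_crypt(blob, host_key("victim-laptop", "SN-0001")) == payload
assert xor_crypt(blob, host_key("analyst-vm", "SN-9999")) != payload
```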

Conclusions

In summary, MAST is a suite of compile-time transformations that provide easy-to-use, cross-platform, state-of-the-art software obfuscation. It can be used for a number of purposes, such as preventing attackers from reverse engineering security-related software patches; protecting your proprietary technology; protecting data within an application; and protecting your application from vulnerability hunters. While originally scoped for iOS applications, the technologies are applicable to any software that can be compiled with LLVM.

iVerify is now available on Github

Today we’re excited to release an open-source version of iVerify!

iPhone users now have an easy way to ensure their phones are free of malware.

iVerify validates the integrity of supported iOS devices and detects modifications that malware or jailbreaking would make, without the use of signatures. It runs at boot-time and thoroughly inspects the device, identifying any changes and collecting relevant artifacts for offline analysis.

In order to use iVerify, grab the code from GitHub, put your phone in DFU mode and run the iverify utility. Prompts on screen will indicate whether surreptitious modifications have been made. Visit the GitHub repository for more information about iVerify.

iOS 4 Security Evaluation

This year’s BlackHat USA was the 12th year in a row that I’ve attended and the 6th year in a row that I’ve participated in as a presenter, trainer, and/or co-organizer/host of the Pwnie Awards. And I made this year my busiest yet by delivering four days of training, a presentation, the Pwnie Awards, and participating on a panel. Not only does that mean that I slip into a coma after BlackHat, it also means that I win at conference bingo.

Reading my excuses for the delay in posting my slides and whitepaper, however, is not why you are reading this blog post. It is to find the link to download said slides and whitepaper:

Hacking at Mach 2!

At BayThreat last month, I gave an updated (and much more sober) version of my “Hacking at Mach Speed” presentation from SummerC0n. Now that the 0day Mach RPC privilege de-escalation vulnerability has been fixed, I can include full details on it. The presentation is meant to give a walkthrough on how to identify and enumerate Mach RPC interfaces in bootstrap servers on Mac OS X. Why would you want to do this? Hint: there are other uses for these types of vulnerabilities besides gaining increased privileges on single-user Mac desktops. Enjoy!

  • “Hacking at Mach 2!” (PDF)

KARMA Demo on the CBS Early Show

Although I haven’t done any development on KARMA for a little over 5 years at this point, many of the weaknesses that it demonstrates are still very present, especially with the proliferation of open 802.11 Hotspots in public places. A few weeks ago, I was invited to help prepare a demo of KARMA for CBS News and the segment actually aired a few weeks ago. If you’re like me and don’t have one of those old-fashioned tele-ma-vision boxes, you can check out the segment here.

Unfortunately, they weren’t able to use the full demo that I prepared. The full demo used a KARMA promiscuous access point to lure clients onto my rogue wireless network, whose gateway routed outbound HTTP traffic through a transparent proxy that injected IFRAMEs into each HTML page. The IFRAMEs loaded my own custom “Aurora” exploit, which injected Metasploit’s Meterpreter into the running web browser. From there, I could use the Meterpreter to sniff keystrokes as victims logged into their SSL-protected bank/e-mail/whatever. The point was that even if a victim only uses the open Wi-Fi network to log into the captive portal webpage, that’s enough for a nearby attacker to exploit their web browser and maintain control over their system going forward. Perhaps that was a little too complicated for a news segment that the average American watches over breakfast.

As it has been quite a while since I have talked about KARMA, here are a few updates on the weaknesses that it demonstrated:

  • Windows XP SP2 systems with 802.11b-only wireless cards would “park” the cards when the user is not associated to a wireless network by assigning them a 32-character random desired SSID. Even if the user had no networks in their Preferred Networks List, the laptop would associate to a KARMA Promiscuous Access Point and activate the network stack while the GUI would still show the user as not currently associated to any network. This issue was an artifact of 802.11b-only card firmware (PrismII and Orinoco cards were affected) and is not present on most 802.11g cards, which is what everyone has these days anyway.
  • Even with a newer card, Windows XP SP2 will broadcast the contents of its Preferred Networks List in Probe Request frames every 60 seconds until it joins a network. Revealing the contents of the PNL allows an attacker to create a network with that name or use a promiscuous access point to lure the client onto their rogue network. Windows Vista and XP SP3 fixed this behavior.
  • Mac OS X had the same two behaviors, except that Apple’s AirPort driver would enable WEP on the wireless card when it had “parked” it. However, the WEP key was a static 40-bit key (0x0102030405 if I recall). Apple issued a security update in July 2005 and credited me for reporting the issue.
  • On 10/17/2006, Microsoft released a hotfix to fix both of the previous issues on Windows XP SP2 systems that Shane Macaulay and I had discovered and presented at various security conferences over the previous two years.
  • Newer versions of Windows (XP SP3, Vista, 7) are only affected if the user manually selects to join the rogue wireless network or the rogue network beacons an SSID in the user’s Preferred Networks List.

Although the leading desktop operating systems found on most laptops have addressed the issue, most mobile phones now support 802.11 Wi-Fi, which may give KARMA a chance to live again!

Advanced Mac OS X Rootkits

At BlackHat USA 2009, I presented “Advanced Mac OS X Rootkits” covering a number of Mach-based rootkit techniques and some tools that I have developed to demonstrate them.  While the majority of Mac OS X rootkits employ known and traditional Unix-based rootkit techniques, these Mach-based techniques show what else is possible using the powerful Mach abstractions in Mac OS X.  My presentation covered a number of Mach-based rootkit tools and techniques including user-mode Mach-O bundle injection, Mach RPC proxying, in-kernel RPC server injection/modification, and kernel rootkit detection.

User-mode DLL injection is quite common on Windows-based operating systems and is facilitated by the CreateRemoteThread API function.  The Mach thread and task calls also support creating threads in other tasks; however, they are much lower-level.  The inject-bundle tool demonstrates the steps necessary to use injected memory and threads to load a Mach-O bundle into another task.  A number of injectable bundles are included to demonstrate the API (test), capture an image using the iSight camera (iSight), log instant messages from within iChat (iChatSpy), and log SSL traffic sent through the Apple Secure Transport API (SSLSpy).

The majority of Mach kernel services (task and thread system calls, for example) are implemented as RPC services.  The Mach message format was designed to be host-independent, which facilitates transferring them across the network.  Machiavelli demonstrates using Mach RPC proxying in order to transparently perform Mach RPC to a remote host. Machiavellian versions of ps and inject-bundle are included in order to demonstrate how this technique may be used for remote host control by rootkits.

Most of the public kernel rootkits for Mac OS X load as kernel extensions and remove their entries from the kernel’s kmod list in order to hide themselves from kextstat and prevent themselves from being unloaded. The uncloak tool examines the kernel memory regions looking for loaded Mach-O objects.  If any of these objects do not correspond to a known kernel extension, they may be dumped to disk using kernel-macho-dump.

Mach IPC messages to the in-kernel Mach RPC servers are dispatched through the mig_buckets table.  This table stores function pointers to the kernel RPC server routines and is analogous to the Unix sysent system call table.  A kernel rootkit may directly modify this table in order to inject new kernel RPC servers or interpose on in-kernel RPC server routines.  The KRPC kernel extension shows how a kernel rootkit may do this dynamically, injecting a new in-kernel RPC subsystem at runtime.

These tools are deliberately released as ‘non-hostile’ proof-of-concept tools that are meant to demonstrate techniques and are not suitable for use in actual rootkits or attack tools.  The IM and SSL logging bundles log to the local system’s disk in an obvious fashion and Machiavelli opens up the controlling host to some obvious attacks.  The non-Machiavelli version of inject-bundle, however, is fully functional and useful for a variety of system-level tasks.  Using the other tools outside of a closed network or test virtual machine is not recommended.

Here are the goods:
