Mac OS X Return-Oriented Exploitation

In The Mac Hacker’s Handbook and a few Mac-related presentations last year, I described my return-oriented exploitation technique for Mac OS X Leopard (10.5) on x86. This technique involved returning into the setjmp() function within dyld (the Mac OS X dynamic linker, which is loaded at a static location) to write out the values of controlled registers to a chosen location in writable and executable memory. By subsequently returning into that location, a few bytes of chosen x86 instructions could be executed. Performing this sequence twice allows the attacker to execute enough chosen instructions to copy their traditional machine code payload into executable memory and execute it. In Snow Leopard (10.6), Apple has removed setjmp() from dyld, so I had to go back to the drawing board.

For my talk at REcon this year, Mac OS X Return-Oriented Exploitation, I applied my recent research in return-oriented programming and exploitation to Mac OS X to develop a few techniques against Snow Leopard x86 (32-bit) processes. I also talked about why attackers don’t really have to care about 64-bit x86_64 processes on Snow Leopard just yet. If you missed REcon this year (and why would you ever allow that to happen?!), you can download my slides here: Mac OS X Return-Oriented Exploitation.

Hacking at Mach Speed!

The first ever NYC SummerCon last weekend was a blast and everyone seemed to have a great time. As promised, there was 0day at the conference and hopefully no one remembered it because they were too drunk. Here are the slides for my presentation (they are really no substitute for the live SummerCon experience). This presentation was a mix of some technical background on local Mach RPC on Mac OS X, a bug I found the day before the conference, and some miscellaneous rants from my presentation at BSidesSF.

It was awesome bringing the conference up to NYC and I had a great time opening up for Dr. Raid’s “Busticating DEP” presentation/freestyle busticati rap.

Practical Return-Oriented Programming

At a number of conferences this spring, I am presenting “Practical Return-Oriented Programming.” The talk is about taking the academic research and applying it in the real world to develop exploits for Windows that bypass Permanent DEP using my BISC (Borrowed Instructions Synthetic Computer) library.  In the talk, I demonstrate exploitation of the Internet Explorer “Operation Aurora” vulnerability on Windows 7.  These techniques are not at all new, only my implementation is, and it owes much to previous research: Sebastian Krahmer’s “Borrowed Code Chunks” technique, Hovav Shacham’s return-oriented programming, and Pablo Sole’s DEPLIB.

Assured Exploitation Training

This year, Alex Sotirov and I will be teaching our first “Assured Exploitation” training class at CanSecWest.  This training class is focused on various topics in advanced exploitation of memory corruption vulnerabilities.  This includes a thorough understanding of exploitation mitigations (where they are effective and where they aren’t), heap manipulation, return-oriented programming, and ensuring a clean continuation of process execution so that the application does not crash.

Over the course of the training, the hands-on exercises will be oriented around taking the students through the steps of fully understanding the “Aurora” Internet Explorer vulnerability and developing their own reliable and robust exploit for Internet Explorer 8 on Windows 7, just like the exploit demonstrated in this video demo.

One Exploit Should Not Ruin Your Day

Now that the media excitement of the aftermath of Operation Aurora has calmed down and we are all soothing ourselves to sleep by the sound of promptly applying Windows Updates, it is a good time to take a look back and try to figure out what the changing threat landscape means for real-world information security (besides Selling! More! Security! Products!) and what lessons can be learned from it.

First off, the threat landscape has not changed at all, only the perception of it.  If you have done or been around any high-level incident response, you know that these advanced persistent threats have been going on in various sectors for years.  Nor is it a new development that the attackers used an 0day client-side exploit along with targeted social engineering as their initial access vector.  What is brand new is the fact that a number of large companies have voluntarily gone public with the fact that they were victims of a targeted attack.  And this is the most important lesson: targeted attacks do exist and happen to a number of industries besides the usual ones like credit card processors and e-commerce shops.

For the last decade of the information security industry, almost all of the products and solutions have been designed to stop casual opportunistic attackers and mass Internet-scale attacks.  Consequently, these products are absolutely worthless in protecting you from an Aurora-style attack.  Your software vendor doesn’t have a patch for the vulnerability, your anti-virus and/or network intrusion prevention systems don’t have signatures for the exploit or agent it installs, and the 3rd-party software that your business needs to run prevents you from upgrading your desktops to the latest and greatest operating system and/or browser with the most complete exploit mitigations due to a lack of compatibility.  How many of these large security product vendors employ even one full-time person to play the role of a dedicated attacker attempting to bypass or defeat their defensive systems?  Or have even hired one attack-oriented consultant on a contract for an independent assessment of the efficacy of their product or solution?  Don’t let the same product vendors who failed to protect the victims of Operation Aurora turn right around and sell you those same products as a solution to “the APT threat.”

Second, Operation Aurora has no bearing on the vulnerability disclosure debate.  This particular vulnerability was apparently reported to Microsoft in August and scheduled to be patched in February.  Some are arguing that had this vulnerability been reported via full-disclosure to everyone all at once, it would not have been used in these attacks.  They are right.  The reality, however, is that another vulnerability would have been used instead.  These attacks show that the vulnerability disclosure debate and responsible disclosure process is simply a distraction that prevents us from actually improving security.  Remember, a vulnerability never owned anyone — an exploit did.  I am not arguing that vulnerabilities should not be fixed, simply that it is impossible to find and fix every security vulnerability so we should not let that obsession monopolize our efforts and prevent us from implementing more secure application and network designs.

Finally, the larger problem is that it only took one exploit to compromise these organizations.  One exploit should never ruin your day.  Isn’t that why we build DMZ networks with firewalls in front and behind them?  The point of doing that is so that it requires more than one server-side exploit to get into your organization.  Thanks to rich Internet client applications, it now only requires one client-side exploit to get into your organization.  Ideally, it should require at least three: a remote code execution exploit, a sandbox escape or integrity level escalation exploit, and finally a local privilege escalation exploit in order to be able to install and hide a remote access backdoor on the system.  Also, workstations that receive e-mail and instant messages from strangers, visit random web sites, and download/install whatever software from the Internet should probably not be on the same network as something like your lawful intercept system.

Take this time to review which exploit mitigations such as DEP and ASLR are enabled in your web browser based on your operating system, browser release, and web plugins.  Take ‘/NoExecute=AlwaysOn’ for a spin in your boot.ini and see what (if anything) breaks.  Use this opportunity to get buy-in for placing users’ Internet-enabled workstations onto DMZ-like subnets where you can closely monitor data going in and out.  Give developers remote desktop access to VMs on a separate development network for working on your products (they will be happy as long as you give the VMs more RAM than their workstations so their builds are quicker).  Give everyone access to an external Wi-Fi network to use with their personal Internet-enabled devices.  Get started implementing some internal network segmentation.  Never let a good crisis go to waste.
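For reference, forcing DEP on for every process on Windows XP/2003 is a one-flag change to the OS entry in boot.ini (the ARC path below is representative; yours may differ):

```
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect /NoExecute=AlwaysOn
```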

CSAW CTF 2009

Friday, November 13th, 2009 was the final round of the NYU-Poly CSAW Capture the Flag Application Security Challenge.   The challenge was open to teams of graduate and undergraduate students from around the world.  The preliminary round was performed over the Internet, giving teams 24 hours to complete a number of challenges in web application security, reverse engineering, and exploitation of memory corruption vulnerabilities.  Congratulations to all of the teams who played, those that made the finals, and especially to the winning teams: ppop, RPISEC, and SecDaemons.

I was responsible for creating and judging the exploitation challenges.  Because I love RISC CPU architectures, I made the challenges revolve around exploitation of embedded Linux systems on x86, PowerPC, ARM, and SH4 processors.  Stephen Ridley created a set of Windows binary reverse engineering challenges and an online scoreboard that was used for the preliminary rounds.  Here are the reversing and exploitation challenges for anyone who is interested in giving them a try for themselves:

  • My embedded Linux exploitation challenges (147 MB): BitTorrent
  • Stephen Ridley’s Windows binary reverse engineering challenges: GitHub

Advanced Mac OS X Rootkits

At BlackHat USA 2009, I presented “Advanced Mac OS X Rootkits” covering a number of Mach-based rootkit techniques and some tools that I have developed to demonstrate them.  While the majority of Mac OS X rootkits employ known and traditional Unix-based rootkit techniques, these Mach-based techniques show what else is possible using the powerful Mach abstractions in Mac OS X.  My presentation covered a number of Mach-based rootkit tools and techniques including user-mode Mach-O bundle injection, Mach RPC proxying, in-kernel RPC server injection/modification, and kernel rootkit detection.

User-mode DLL injection is quite common on Windows-based operating systems and is facilitated by the CreateRemoteThread API function.  The Mach thread and task calls support creating threads in other tasks; however, they are much lower-level.  The inject-bundle tool demonstrates the steps necessary to use injected memory and threads to load a Mach-O bundle into another task.  A number of injectable bundles are included to demonstrate the API (test), capture an image using the iSight camera (iSight), log instant messages from within iChat (iChatSpy), and log SSL traffic sent through the Apple Secure Transport API (SSLSpy).

The majority of Mach kernel services (task and thread system calls, for example) are implemented as RPC services.  The Mach message format was designed to be host-independent, which facilitates transferring them across the network.  Machiavelli demonstrates using Mach RPC proxying in order to transparently perform Mach RPC to a remote host. Machiavellian versions of ps and inject-bundle are included in order to demonstrate how this technique may be used for remote host control by rootkits.

Most of the public kernel rootkits for Mac OS X load as kernel extensions and remove their entries from the kernel’s kmod list in order to hide themselves from kextstat and prevent themselves from being unloaded. The uncloak tool examines the kernel memory regions looking for loaded Mach-O objects.  If any of these objects do not correspond to a known kernel extension, they may be dumped to disk using kernel-macho-dump.

Mach IPC messages to the in-kernel Mach RPC servers are dispatched through the mig_buckets table.  This table stores function pointers to the kernel RPC server routines and is analogous to the Unix sysent system call table.  A kernel rootkit may directly modify this table in order to inject new kernel RPC servers or interpose on in-kernel RPC server routines.  The KRPC kernel extension shows how a kernel rootkit may directly modify this table in order to dynamically inject a new in-kernel RPC subsystem.

These tools are deliberately released as ‘non-hostile’ proof-of-concept tools that are meant to demonstrate techniques and are not suitable for use in actual rootkits or attack tools.  The IM and SSL logging bundles log to the local system’s disk in an obvious fashion and Machiavelli opens up the controlling host to some obvious attacks.  The non-Machiavelli version of inject-bundle, however, is fully functional and useful for a variety of system-level tasks.  Using the other tools outside of a closed network or test virtual machine is not recommended.

Here are the goods:

No More Free Bugs

Alex and I holding "No More Free Bugs" sign during Charlie's Lighting Talk at CanSecWest

A few weeks ago, Charlie Miller, Alex Sotirov, and I arrived at a new meme: No More Free Bugs.  We started talking about it publicly at CanSecWest where Charlie Miller notably announced it in his Lightning Talk and in his ZDNet interview.  It is now making its way through Twitter and the rest of the tubes.  It is understandable that this may be a controversial position, so I’m going to give some more background on the argument here.

First, this is advocating neither non-disclosure nor any particular form of disclosure.  That decision is left to the discoverer of the vulnerability.  I’m not even going to touch the anti/partial/full disclosure argument.

Second, this philosophy primarily regards vulnerabilities in products sold for profit by for-profit companies, especially those that already employ security engineers as employees or consultants.  Vulnerabilities discovered in open source projects or Internet infrastructure deserve different handling.

The basic argument is as follows:

  • Vulnerabilities place users and customers at risk.  Otherwise, vendors wouldn’t bother to fix them.  Internet malware and worms spread via security vulnerabilities and place home users’ and enterprises’ sensitive data at risk.
  • Vulnerabilities have legitimate value.  Software vendors pay their own employees and consultants to find and fix them in their products during development.  Third-party companies such as Verisign (iDefense) and ZDI will pay researchers for exclusive rights to a vulnerability so that they may responsibly disclose it to the vendor but also share advance information about it with their customers (Verisign/iDefense) or build detection for it into their product (ZDI).  Google is even offering a cash bounty for the best security vulnerability in Native Client.  Donald Knuth personally pays for bugs found in his software and Dan Bernstein paid a $1000 bounty for a vulnerability in djbdns.
  • Reporting vulnerabilities can be legally and professionally risky.  When a researcher discloses the vulnerability to the vendor, there is no “whistle blower” protection and independent security researchers may be unable to legally defend themselves.  You may get threatened, sued, or even thrown in jail.  A number of security researchers have had their employers pressured by vendors to whom they were responsibly disclosing security vulnerabilities.  Vendors expect security researchers to follow responsible disclosure guidelines when they volunteer vulnerabilities, but they are under no such pressure to follow responsible guidelines in their actions towards security researchers.  Where are the vendors’ security research amnesty agreements?
  • It is unfair to paying customers.  Professional bug hunting is a specialized and expensive business.  Software vendors that “freeload” on the security research community place their customers at risk by not putting forth resources to discover vulnerabilities in and fix their products.

Therefore, reporting vulnerabilities for free without any legal agreements in place is risky volunteer work.  There are a number of legitimate alternatives to the risky proposition of volunteering free vulnerabilities and I have already mentioned a few (I don’t want to turn this into an advertisement or discussion on the best/proper way to monetize security research).   There just need to be more legal and transparent options for monetizing security research.  This would provide a fair market value for a researcher’s findings and incentivize more researchers to find and report vulnerabilities to these organizations.  All of this would help make security research a more widespread and legitimate profession.  In the meantime, I’m not complaining about its current cachet and allure.

The Mac Hacker’s Handbook is out!

The Mac Hacker’s Handbook by Charlie Miller and myself has just been published and is now shipping from Amazon.  I have even spotted it in several bookstores where you can usually find it in the Mac section.  The book is all about Mac OS X-specific vulnerability discovery, reverse-engineering, exploitation, and post-exploitation.

For me, this book is a culmination of over 8 years of personal Mac OS X security research.  I had bought and restored a NeXTstation Turbo Color in college and fell in love with the NeXTSTEP and OpenStep operating systems.  When I got a check for my first pen-test, I bought a brand-new iBook 500 MHz to run OS X 10.0 on and I have used OS X as my primary operating system ever since.

Of course, I started hacking on it immediately.  I wrote a monitor-mode wireless packet capture driver for AirPort (Viha) back when the only documentation on IOKit was the Darwin source code and a series of emails on the darwin mailing lists from an Apple kernel developer.  And that was just the first part of my WEP cracking project for my Crypto class.  I was so stubborn about using my iBook for it that I wrote my own driver, stumbler, and WEP weak RC4 key cracker for it.

There was also very little documentation on shellcode for PowerPC around then.  Palante and LSD had both released PowerPC shellcode, for Linux and AIX respectively.  But there was nothing for OS X.  I wrote my own in a hotel room in Washington, D.C. a few days after DEFCON 9.  As far as I know, I was the first to publish PowerPC shellcode that filled in the unused bits in the ‘sc’ instruction instead of dynamically overwriting them, because self-modifying code is pretty tricky on PowerPC.  That shellcode is what appears encoded in the hex bytes on the cover of the book.

Alright, enough self-indulgent trips down memory lane.  I just presented “Mac OS Xploitation” at SOURCE Boston last week and I’ll be doing a bigger presentation called “Hacking Macs for Fun and Profit” next week at CanSecWest with Charlie Miller.  Stay tuned here for some more Mac tool releases.

ARM versus x86

At Hack in the Box in Kuala Lumpur this year, I was interviewed by Sumner Lemon of IDG about various Mac and iPhone-related security topics.  One of the topics was the relative security of ARM versus x86 processors and my comments on this seem to have bounced around the internets a bit.  There seems to have been some confusion over what I meant in my statements, so I thought I’d provide some clarification here on the technical and economic rationale behind this statement.

First, the technical rationale: The classic x86 architecture (pre NX-bit) is an exploit developer’s dream.  Almost every other architecture has complications that x86 almost coincidentally does not.  For example, SPARC has register windows, PowerPCs can have separate data and instruction caches, any RISC architecture has alignment requirements, most architectures support non-executable memory, and all of these make writing exploits on these platforms more difficult.  The x86 had none of these speedbumps and only started supporting truly non-executable memory somewhat recently.  Finally, the x86 instruction set is incredibly flexible, allowing all sorts of ingenious techniques for self-modifying code to evade character filters and intrusion detection systems.  Of course, this was all possible on other architectures as well (see ADMutate‘s SPARC support), but x86 makes it way easier and more powerful.  I have a hard time imagining what could be changed in x86 to make a better target for exploit developers.

Since cybercrime and malware have become a significant industry, it makes a lot of sense to analyze the risk they present through economics (and game theory).  Attackers have a lot of infrastructure already built that is x86-specific.  Besides exploit development experience, this also includes payload encoders and hand-written assembly exploit payloads.  Rewriting these takes time and effort.  Macs (and iPhones, as postulated in the article) using x86 processors allow attackers to carry over their experience and existing infrastructure, slightly lowering the barrier to entry to begin attacking a new platform.  If a new platform with marketshare X% starts attracting malware authors’ attention, a new platform with a familiar processor may attract malware authors’ attention at (X – Y)% marketshare (where Y is probably less than 10).  In the end, however, this earlier attention most likely matters less to the product vendor than the deep discount or performance improvements they can get by going with a dominant CPU architecture and manufacturer.

In summary, just about any commodity non-x86 CPU-based system is harder to write exploits for than an x86-based system assuming the same operating system is running on both.  But it does not matter because these differences are just speed bumps and a good exploit developer will be able to work around them.  Vendors should focus on the generic security defenses that they can build into their operating systems and application runtime environments as well as focus on eliminating software vulnerabilities before and after their software is shipped rather than caring what processor architecture they use and whatever impact it may have on attacks against their platform.

Finally, I would also like to make a retraction.  In the same interview, I said that I considered the iPhone OS to be “significantly less secure” than the desktop Mac OS X.  While I would still consider the iPhone OS 1.x to be less secure than Leopard, the iPhone OS 2.2 is quite the opposite.  A number of improvements, including a smaller attack surface, application sandboxes, a non-executable heap, and mandatory code signing for every executable launched (not just applications, even low-level binaries) make compromising the special-purpose iPhone more difficult than the general-purpose desktop Mac OS X.  For more details on the security improvements in the latest iPhone OS, see Charlie Miller’s HiTBSecConf presentation.  Of course, this primarily applies to unjailbroken iPhones since a jailbroken iPhone allows execution of unsigned binaries and it seems that most jailbroken phones still have an SSH server running with the default root account password anyway.  Qualitative comparisons of security are very difficult to whittle down into a one sentence summary, but that’s why organizations (hopefully) have security analysts around and don’t make all of their decisions based on what they read on the Internet.
