Dead Bugs Society: Apple File Server

For today’s installment of Dead Bugs Society, I’m going to dig up another one of my favorite exploits: the second exploit that I wrote for the Apple File Server FPLoginExt stack overflow that DaveG found while we were both working for @stake.  I will also take this time to apologize to DaveG for insisting that the bug was a long PathName element (it wasn’t; it was a long UAM string), which is why the advisory is wrong.  Oops.  My first exploit did a return into libc to branch into the stack pointer so that I didn’t have to hardcode or brute-force stack addresses.  For some odd reason, though, it worked most of the time but not every time.  It was only after thinking about it a lot and a helpful tip at DEFCON that year that I figured it all out.  My second exploit for the AFP bug, written post-@stake, fixed that problem and made exploiting this remote root code execution vulnerability 100% reliable :).

The PowerPC cache design makes exploits very interesting.  PowerPC processors may have separate instruction and data caches or a unified cache, and their caches may be write-back or write-through.  Understanding how these design choices affect cache coherency is very important, especially for exploits where you are dynamically injecting machine code as data and then executing it as instructions.  Apple’s processors have spanned just about every combination of these.  For example, whereas the earlier PowerPC 601 had a unified L1 cache, the G3 and G4 have separate 32 KB instruction and data L1 write-back caches.  The G5, on the other hand, has separate 64 KB instruction and 32 KB data L1 write-through caches.  For a quick comparison between the G4 and G5, see Apple’s TN2087: PowerPC G5 Performance Primer.

The difference between a write-back and a write-through cache is when data from a modified cache block is written to the next-level cache or main memory.  In a write-through cache, changed data is written through immediately.  A write-back cache only writes the data back to the next level when a “dirty” cache block is evicted from the cache.  What does this mean for exploits?  On a processor with separate write-back caches like the G3 and G4, your exploit payload will be sitting in the L1 data cache, but when the CPU branches to your return address, it will fetch the instructions to execute from main memory or the L2 cache; it is highly unlikely that an address on the stack will already be in the L1 instruction cache.  Essentially, the CPU will execute stale memory instead of your exploit payload.
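To make the coherency problem concrete, here is roughly the sequence the PowerPC architecture requires after storing instructions into memory as data (a sketch in C with GCC-style inline assembly; the 32-byte line size is an assumption matching the G3/G4).  The catch for an exploit is that the injected payload can’t run this sequence on its own behalf before it is already executing.

    /* Minimal sketch: making freshly written code visible to the instruction
     * stream on PowerPC.  Assumes a GCC-style compiler targeting PowerPC and
     * a 32-byte L1 cache line (G3/G4). */
    #include <stdint.h>
    #include <stddef.h>

    static void make_code_visible(void *addr, size_t len)
    {
        const uintptr_t line = 32;                    /* assumed L1 cache line size */
        uintptr_t start = (uintptr_t)addr & ~(line - 1);
        uintptr_t end   = (uintptr_t)addr + len;
        uintptr_t a;

        for (a = start; a < end; a += line)
            __asm__ volatile ("dcbst 0,%0" : : "r"(a) : "memory");  /* push dirty data cache lines out */
        __asm__ volatile ("sync" ::: "memory");                     /* wait for the stores to reach memory */
        for (a = start; a < end; a += line)
            __asm__ volatile ("icbi 0,%0" : : "r"(a) : "memory");   /* invalidate stale instruction cache lines */
        __asm__ volatile ("isync" ::: "memory");                    /* discard any already-fetched instructions */
    }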

In order to get reliable execution, I needed a way to deterministically flush the caches.  My first exploit worked most of the time because I would often trigger a page fault by returning into libSystem.  The page fault would cause a mode switch into the kernel, flushing all caches to main memory.  If I didn’t cause a page fault, however, the exploit would fail.  In the end, I wrote a stub that bounced around libSystem five times in order to execute a system call and then branch indirectly through the stack pointer back into my shellcode.
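Translated into plain PowerPC terms, the last step of that chain is just an indirect branch through r1.  The sketch below is hypothetical and is not the actual libSystem gadget or its address; it only shows what “branch indirectly through the stack pointer” means on this architecture.

    /* Hypothetical sketch of the final step (not the real libSystem code):
     * on PowerPC the stack pointer is r1, so the last gadget only has to
     * move r1 into the count register and branch through it, landing
     * execution back in the payload sitting on the stack. */
    static void branch_through_sp(void)
    {
        __asm__ volatile ("mtctr 1\n\t"   /* ctr <- r1, the stack pointer        */
                          "bctr");        /* branch to ctr: resume in the payload */
    }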

Dead Bugs Society: Introduction and AnswerBook2

Looking back, I have been finding and exploiting security vulnerabilities for almost a decade at this point, and I have always loved remote code injection exploits the most.  Notice that I didn’t call them “buffer overflows,” because I think that term is too specific.  The rest of the industry has started calling them “remote code execution,” which is much better, but I still prefer the terminology I used back when I was playing hacker-turned-academic.

I refer to these vulnerabilities as memory trespass vulnerabilities, a more general term that accurately describes buffer overflow, format string injection, out-of-bounds array access, double-free, and uninitialized variable vulnerabilities: in all of them, the attacker can write to memory outside of the semantics of the programming language runtime.  While these vulnerabilities can be exploited in a number of ways, the most popular technique is a code injection exploit, where a standalone machine code fragment, the exploit payload, is executed by the target process via an injection vector.  Popular injection vectors include overwriting stack return addresses, exception handlers, and other code pointers.  I consider techniques that do not inject payloads but merely reuse code that already exists in the address space (e.g. returning into system() to run a chosen command) a different way to exploit memory trespass vulnerabilities, and those are far less common anyway.
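As a concrete (and entirely made-up) illustration of the first injection vector in that list, the classic shape looks like this:

    /* Minimal sketch of the classic stack return address injection vector:
     * a fixed-size stack buffer filled from attacker-controlled input with
     * no bounds check.  The function and names are made up for illustration
     * and are not from any particular advisory. */
    #include <string.h>

    void handle_request(const char *input)
    {
        char name[64];
        /* No bounds check: input longer than 63 bytes writes past 'name' and
         * eventually clobbers the saved return address.  When this function
         * returns, control transfers to whatever address the overflow placed
         * there, typically a pointer into an injected payload. */
        strcpy(name, input);
    }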

So what does this have to do with anything, and what is the “Dead Bugs Society”?  This will be an until-I-get-bored-with-it blog feature where I dredge up some old exploit, publish it, and wax poetic about it.  Most of these will be of the code injection variety, so I wanted to introduce my terms.  To be clear, these are all old, long-ago-patched vulnerabilities; I will only publish an exploit or detailed information for a vulnerability that has been fixed for at least a year.  These are for educational purposes only, unless you have a time machine, in which case you had better share, because I’d like to go rock 1994 with my 2008 skills too.

For the debut of this series, I’m publishing the first remote zero-day code execution exploit that I wrote: a remote format string/stack overflow exploit against the AnswerBook2 web server that was enabled by default in Solaris 2.6 through 8 and listened on TCP port 8888.  I reported these vulnerabilities to Sun immediately, but they took their sweet time in fixing them, and I finally published the advisory two years later.  I tend to comment heavily, so the exploit should be pretty self-explanatory.  Enjoy the antique warez!
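For anyone who hasn’t run into one, the format string half of that bug class boils down to something like the following sketch.  The function is made up for illustration and is not the actual AnswerBook2 server code.

    /* Illustrative sketch of a format string vulnerability (hypothetical
     * logging function, not the real AnswerBook2 code): attacker-controlled
     * data is passed as the format argument itself, so directives like %x
     * leak stack memory and %n write to memory the attacker can aim at a
     * code pointer. */
    #include <stdio.h>

    void log_request(const char *client_supplied)
    {
        fprintf(stderr, client_supplied);         /* vulnerable: format string comes from the client */
        /* fprintf(stderr, "%s", client_supplied);   the safe version */
    }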
