This summer I’ve had the incredible opportunity to work with Trail of Bits as a high school intern. In return, I am obligated to write a blog post about this internship. So without further ado, here it is.
Starting with Fuzzing
The summer kicked off with fuzzing, a technique I had heard of but had never tried. The general concept is to throw input at a program until it crashes, then analyze the crash to find a vulnerability. Because of time constraints, it didn’t make sense to write my own fuzzer, so I began looking for pre-existing fuzzers online.
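The core loop of "throw input at a program until it crashes" can be sketched in a few lines of Python. This is a toy illustration, not a real fuzzer: the `parse` function and its crash condition are invented stand-ins for an actual target.

```python
import random

def parse(data):
    """Toy stand-in for a real target: raises (our 'crash') on a rare input."""
    if len(data) > 3 and data[0] == 0x7D and data[1] == data[2]:
        raise RuntimeError("simulated crash")

def fuzz(seed, tries=100_000):
    """Randomly mutate the seed until the target crashes; return the crasher."""
    for _ in range(tries):
        data = bytearray(seed)
        for _ in range(random.randint(1, 4)):        # flip a few random bytes
            data[random.randrange(len(data))] = random.randrange(256)
        try:
            parse(bytes(data))
        except Exception:
            return bytes(data)                       # crashing input found
    return None

crasher = fuzz(b"\x00" * 8)
print("crasher:", crasher.hex() if crasher else "none found")
```

Real fuzzers add the pieces this sketch lacks: smarter mutation strategies, coverage feedback to guide them, and crash triage.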
The first tool I found was CERT’s Failure Observation Engine (FOE), which seemed very promising. FOE has many options that allow for precise fine-tuning, so the fuzzer can be tweaked for a specific target. However, my experience with FOE was inconsistent: with certain targets, the tool would run once and stop instead of running continuously (as a fuzzer should). Wanting to get started, I moved on to other tools and settled on american fuzzy lop (AFL) for Linux and Microsoft MiniFuzz for Windows. Each had its pros and cons. AFL works best with source code, which limits its scope to open-source software (there is experimental support for closed-source binaries, but it is considerably slower). Compiling from source with AFL lets the fuzzer measure code coverage and provide helpful feedback in its interface. MiniFuzz is the opposite: it runs on closed-source Windows software and provides very little feedback while it runs. However, its crash data is very helpful, giving the values of all registers at the time of the crash, something the other fuzzers did not provide. MiniFuzz was also far more click-and-run than AFL’s more involved compilation setup.
Examining a Crash
Once the fuzzers were set up and running on targets (VideoLAN’s VLC, Wireshark, and ImageMagick, just to name a few), it was time to start analyzing the crashes. AFL reported several crashes in VLC. While verifying that these crashes were reproducible, I noticed that several were segfaults while trying to free the address 0x7d. This struck me as odd because the address was so small, so on a hunch I opened the crashing input in a hex editor and searched for ‘7d’. Sure enough, deep in the file was a match: 0x0000007d. I changed this to something easily recognizable, 0x41414141, and ran the file through again. This time the segfault was on, you guessed it, 0x41414141! Encouraged by the knowledge that I could control an address in the program from a file, I set out to find the bug. This involved a long process of becoming undesirably acquainted with both gdb and the VLC source code. The bug allows for the freeing of two arbitrary, user-controlled pointers.
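The hex-editor experiment is easy to reproduce with a short script. This is a sketch assuming the value sits in the file as a 4-byte big-endian integer (MP4 box fields are big-endian); the filename in the comment is illustrative.

```python
import struct

def patch_value(data, old, new):
    """Replace every 4-byte big-endian occurrence of `old` with `new`."""
    old_be = struct.pack(">I", old)
    new_be = struct.pack(">I", new)
    if old_be not in data:
        raise ValueError("pattern not found")
    return data.replace(old_be, new_be)

# e.g. turn the mystery 0x0000007d into the recognizable 0x41414141:
# patched = patch_value(open("crash.mp4", "rb").read(), 0x7D, 0x41414141)
```

If the segfault then lands on 0x41414141, that confirms the freed pointer comes from the file, exactly the behavior observed above.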
The Bug in Detail
VLC reads in different parts of a file as boxes, which it categorizes in a tagged union. The bug is the result of a type confusion: when the size of the stsd box in the file is increased, VLC considers the following box, an stts box, to be its child. VLC reads boxes from the file by indexing into a function table based on the type of the box and the type of its parent. With the wrong parent, it finds no match and instead falls back to a default read, which reads the box in as a vide type. Later, when freeing the box, VLC looks up the function by checking only the box’s own type, so it triggers the stts free function. VLC thus tries to free an stts box that was read in as a generic vide box, and frees two addresses taken straight from the user-controlled file data.
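The mechanism can be sketched abstractly. This is an illustration of the pattern, not VLC’s actual code; all names and data layouts here are invented:

```python
# Read handlers are keyed by (parent_type, box_type); free handlers by box_type
# alone. That asymmetry is the bug: a box read by the generic fallback is still
# freed by a type-specific handler that trusts fields the fallback never set up.

def read_generic(raw):
    # Fallback read: stores raw file bytes uninterpreted,
    # but the type tag still claims "stts".
    return {"type": "stts", "data": raw}

READ = {}                # no entry for ("stsd", "stts"), so fallback is used

freed = []

def free_stts(box):
    # Expects the layout the real stts reader would have built. On a
    # generic-read box, the "pointers" it frees come straight from file data.
    a = int.from_bytes(box["data"][0:4], "big")
    b = int.from_bytes(box["data"][4:8], "big")
    freed.extend([hex(a), hex(b)])

FREE = {"stts": free_stts}

# Corrupted file: stsd's size was inflated, so stts is read with parent stsd.
raw = bytes.fromhex("41414141" "42424242")
box = READ.get(("stsd", "stts"), read_generic)(raw)
FREE[box["type"]](box)   # dispatch on type tag alone -> attacker-chosen frees
print(freed)
```

Running the sketch shows two attacker-chosen values (0x41414141 and 0x42424242) reaching the free path, mirroring how the crafted file controlled the freed addresses in VLC.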
Controlling two freed addresses is plausibly exploitable, so it was time to report the bug. I went through oCERT, who were very helpful in communicating the bug to the VLC developers to get the issue fixed and a CVE assigned (CVE-2015-5949). After some back and forth the issue was settled, and it was time to move on to something new.
Switching Gears to the Web
With half a summer done and another half to learn something new, I began to explore web security. I had slightly more of a background in this from some CTFs and from NYU Hack Night, but I wanted to get a more in-depth and practical understanding. Unlike fuzzing, where it was easy to hit the ground running, web security required a bit more knowledge beforehand. I spent a week trying to learn as much as possible from The Web Application Hacker’s Handbook and the corresponding MDSec labs. Armed with a solid foundation, I put this training to good use.
HackerOne has a directory of companies that have bug bounty programs, and this seemed the best place to start. I sorted by date joined and picked the newest companies – they probably had not been examined much yet. Using Burp Suite, an indispensable tool, I poked through these websites looking for anything amiss. On sites like ok.ru, marktplaats.nl, and united.com, I searched for vulnerable functionality and security issues, and submitted a few reports. I’ve had some success, but the reports are still going through disclosure.
To conclude the internship, I performed a security assessment of a tech startup in NYC, applying the skills I’ve acquired. I found bugs in application logic, access controls, and session management, the most severe of which was a logic flaw that posed significant financial risk to the company. I then had the opportunity to present these bugs in a meeting with the company. The report was well-received and the company is now implementing fixes.
This experience at Trail of Bits has been fantastic. I’ve gotten a solid foundation in application and web security, and it’s been a great place to work. I’m taking a week off to look at colleges, but I’ll be back working part time during my senior year.