Summer @ Trail of Bits

This summer I’ve had the incredible opportunity to work with Trail of Bits as a high school intern. In return, I am obligated to write a blog post about this internship. So without further ado, here it is.

Starting with Fuzzing

The summer kicked off with fuzzing, a technique I had heard of but had never tried. The general concept is to throw input at a program until it crashes, then analyze the crash to find a vulnerability. Because of time constraints, it didn’t make sense to write my own fuzzer, so I began looking for pre-existing fuzzers online. The first tool I found was CERT’s Failure Observation Engine (FOE), which seemed very promising. FOE has many options that allow for precise fine-tuning of the fuzzer, so it can be tweaked specifically for the target. However, my experience with FOE was fickle: with certain targets, the tool would run once and stop instead of running continuously (as a fuzzer should). Wanting to get started, I moved on to other tools instead.

I settled on american fuzzy lop (afl) for Linux and Microsoft MiniFuzz for Windows. Each had its pros and cons. afl works best with source code, which limits its scope to open-source software (there is experimental support for closed-source binaries, but it is considerably slower). Compiling from source with afl allows the fuzzer to measure code coverage and provide helpful feedback in its interface. MiniFuzz is the opposite: it runs on closed-source Windows software and provides very little feedback while it runs. However, its crash data is very helpful, as it gives the values of all registers at the time of the program crash — something the other fuzzers did not provide. MiniFuzz was very click-and-run compared to afl’s more involved compilation setup.

Examining a Crash

Once the fuzzers were set up and running on targets (VideoLAN’s VLC, Wireshark, and ImageMagick, to name a few), it was time to start analyzing the crashes. afl reported several crashes in VLC. While verifying that these crashes were reproducible, I noticed that several were segfaults while trying to free the address 0x7d. This struck me as odd because the address was so small, so on a hunch I opened the crashing input in a hex editor and searched for ‘7d’. Sure enough, deep in the file was a match: 0x0000007d. I changed this to something easily recognizable, 0x41414141, and ran the file through again. This time the segfault was on, you guessed it, 0x41414141! Encouraged by the knowledge that I could control an address in the program from a file, I set out to find the bug. This involved a long process of becoming undesirably acquainted with both gdb and the VLC source code. The bug allows for the freeing of two arbitrary, user-controlled pointers.

The Bug in Detail

VLC reads in different parts of a file as boxes, which it categorizes in a tagged union. The bug is the result of a type confusion that occurs when the size of the stsd box in the file is increased so that VLC considers the following box, an stts box, to be its child. VLC reads boxes from the file by indexing into a function table based on the type of the box and the type of its parent. With the wrong parent, it finds no match and instead uses a default read, which reads the box in as a generic vide type. Later, when freeing the box, VLC looks up the free function by the box’s own type alone, so it calls the stts free routine on a box that was actually read in as a generic vide box, and frees two addresses taken straight from the attacker-controlled stts data.
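
To make the shape of the bug concrete, here is a heavily simplified sketch of the pattern (hypothetical types and field names, not VLC’s actual code): the reader falls back to a generic handler that copies raw file bytes into the union, while the free routine dispatches on the tag that came from the file and interprets those bytes as pointers.

#include <stdlib.h>
#include <string.h>

enum box_type { BOX_VIDE, BOX_STTS };

struct box {
  enum box_type type;                                 // tag taken from the file
  union {
    struct { char raw[16]; } vide;                    // generic, uninterpreted payload
    struct { void *entries; void *durations; } stts;  // pointers owned by the box
  } u;
};

// Default reader used when no (parent, child) handler matches: it copies
// attacker-controlled file bytes straight into the union.
static void read_box_default(struct box *b, const char *file_data) {
  memcpy(b->u.vide.raw, file_data, sizeof(b->u.vide.raw));
}

// The free routine is chosen by the box's own tag, so an stts box that was
// read by the default handler has its "pointers" freed even though they are
// really raw bytes from the file: two attacker-controlled frees.
static void free_box(struct box *b) {
  if (b->type == BOX_STTS) {
    free(b->u.stts.entries);
    free(b->u.stts.durations);
  }
}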



Controlling two freed addresses is plausibly exploitable, so it was time to report the bug. I went through oCERT, who were very helpful in communicating the bug to the VLC developers, getting the issue fixed, and getting a CVE assigned (CVE-2015-5949). After some back and forth it was settled, and it was time to move on to something new.

Switching Gears to the Web

With half a summer done and another half to learn something new, I began to explore web security. I had slightly more of a background in this from some CTFs and from NYU Hack Night, but I wanted to get a more in-depth and practical understanding. Unlike fuzzing, where it was easy to hit the ground running, web security required a bit more knowledge beforehand. I spent a week trying to learn as much as possible from The Web Application Hacker’s Handbook and the corresponding MDSec labs. Armed with a solid foundation, I put this training to good use.

Bounty Hunting

HackerOne has a directory of companies that have bug bounty programs, and this seemed like the best place to start. I sorted by date joined and picked the newest companies – they probably had not been looked at much yet. Using Burp Suite, an indispensable tool, I poked through these websites looking for anything amiss. Across several of these sites, I searched for vulnerable functions and security issues, and submitted a few reports. I’ve had some success, but the reports are still going through disclosure.

Security Assessment

To conclude the internship, I performed a security assessment of a tech startup in NYC, applying the skills I’ve acquired. I found bugs in application logic, access controls, and session management, the most severe of which was a logic flaw that posed significant financial risk to the company. I then had the opportunity to present these bugs in a meeting with the company. The report was well-received and the company is now implementing fixes.

Signing Off

This experience at Trail of Bits has been fantastic. I’ve gotten a solid foundation in application and web security, and it’s been a great place to work. I’m taking a week off to look at colleges, but I’ll be back working part time during my senior year.

How We Fared in the Cyber Grand Challenge

The Cyber Grand Challenge qualifying event was held on June 3rd, at exactly noon Eastern time. At that instant, our Cyber Reasoning System (CRS) was given 131 purposely built insecure programs. During the following 24-hour period, our CRS identified vulnerabilities in 65 of those programs and rewrote 94 of them to eliminate bugs built into their code. This proves, without a doubt, that it is possible to automate the actions of a talented software auditor.

Despite the success of our CRS at finding and patching vulnerabilities, we did not qualify for the final event, to be held next year. There was a fatal flaw that lowered our overall score to 9th, below the 7th place threshold for qualification. In this blog post we’ll discuss how our CRS works, how it performed against competitor systems, what doomed its score, and what we are going to do next.

Cyber Grand Challenge Background

The goal of the Cyber Grand Challenge (CGC) is to combine the speed and scale of automation with the reasoning capabilities of human experts. Multiple teams create Cyber Reasoning Systems (CRSs) that autonomously reason about arbitrary networked programs, prove the existence of flaws in those programs, and automatically formulate effective defenses against those flaws. How well these systems work is evaluated through head-to-head tournament-style competition.

The competition has two main events: the qualifying event and the final event. The qualifying event was held on June 3, 2015. The final event is set to take place during August 2016. Only the top 7 competitors from the qualifying event proceed to the final event.

During the qualifying event, each competitor was given the same 131 challenges, or purposely built vulnerable programs, each of which contained at least one intentional vulnerability. For 24 hours, the competing CRSes faced off against each other and were scored according to four criteria. The full details are in the CGC Rules, but here’s a quick summary:

  • The CRS had to work without human intervention. Any teams found to use human assistance were disqualified.
  • The CRS had to patch bugs in challenges. Points were gained for every bug successfully patched. Challenges with no patched bugs received zero points.
  • The CRS could prove bugs exist in challenges. The points from patched challenges were doubled if the CRS could generate an input that crashed the challenge.
  • The patched challenges had to function and perform almost as well as the originals. Points were lost based on performance and functionality loss in the patched challenges.

A spreadsheet with all the qualifying event scores and other data used to make the graphs in this post is available from DARPA (Trail of Bits is the ninth place team). With the scoring in mind, let’s review the Trail of Bits CRS architecture and the design decisions we made.


We’re a small company with a distributed workforce, so we couldn’t physically host a lot of servers. Naturally, we went with cloud computing to do processing; specifically, Amazon EC2. Those who saw our tweets know we used a lot of EC2 time. Most of that usage was purely out of caution.

We didn’t know how many challenges would be in the qualifying event — just that it would be “more than 100.” We prepared for a thousand, with each accompanied by multi-gigabyte network traffic captures. We were also terrified of an EC2 region-wide failure, so we provisioned three different CRS instances, one in each US-based EC2 region, affectionately named Biggie (us-east-1), Tupac (us-west-2), and Dre (us-west-1).

It turns out that there were only 131 challenges and no gigantic network captures in the qualifying event. During the qualifying event, all EC2 regions worked normally. We could have comfortably done the qualifying event with 17 c4.8xlarge EC2 instances, but instead we used 297. Out of our abundance of caution, we over-provisioned by a factor of ~17x.

Bug Finding

The Trail of Bits CRS was ranked second by the number of verified bugs found (Figure 1). This result is impressive considering that we started with nothing while several other teams already had existing bug finding systems prior to CGC.

Figure 1: Teams in the qualifying event ranked by number of bugs found. Orange bars signify finalists.

Our CRS used a multi-pronged strategy to find bugs (Figure 2). First, there was fuzzing. Our fuzzer is implemented with a custom dynamic binary translator (DBT) capable of running several 32-bit challenges in a single 64-bit address space. This is ideal for challenges that feature multiple binaries communicating with one another. The fuzzer’s instrumentation and mutation are separated, allowing for pluggable mutation strategies. The DBT framework can also snapshot binaries at any point during execution. This greatly improves fuzzing speed, since it’s possible to avoid replaying previous inputs when exploring new input space.

Figure 2: Our bug finding architecture. It is a feedback-based architecture that explores the state space of a program using fuzzing and symbolic execution.


In addition to fuzzing, we had not one but two symbolic execution engines. The first operated on the original unmodified binaries, and the second operated on the translated LLVM from mcsema. Each symbolic execution engine had its own strengths, and both contributed to bug finding.

The fuzzer and symbolic execution engines operate in a feedback loop mediated by a system we call MinSet. MinSet uses branch coverage to maintain a minimum set of inputs with maximal coverage. The inputs come from any source capable of generating them: PCAPs, fuzzing, symbolic execution, etc. Every tool gets its original inputs from MinSet and feeds any newly generated inputs back into MinSet. This feedback loop lets us explore the program’s input space with fuzzing and symbolic execution in parallel. In practice this is very effective. We log the provenance of our crashes, and most of them look something like:

Network Capture ⇒ Fuzzer ⇒ SymEx1 ⇒ Fuzzer ⇒ Crash
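
As an illustration of the coverage-driven feedback idea, here is a minimal sketch of a corpus keeper (assumed structure and names; the real MinSet is more sophisticated): an input is retained only if it covers at least one branch that no stored input has hit yet, and retained inputs are what get handed back to the fuzzer and the symbolic executors.

#include <cstdint>
#include <set>
#include <string>
#include <vector>

struct Input {
  std::string data;
  std::set<uint64_t> branches;   // branch IDs observed when replaying this input
};

class MinSetSketch {
  std::set<uint64_t> covered_;   // union of branches covered by kept inputs
  std::vector<Input> kept_;

public:
  // Returns true if the input added new coverage and was therefore kept.
  bool offer(const Input &in) {
    bool novel = false;
    for (uint64_t b : in.branches)
      if (covered_.insert(b).second)
        novel = true;
    if (novel)
      kept_.push_back(in);
    return novel;
  }

  // The current minimum set, fed back to the fuzzer and symbolic executors.
  const std::vector<Input> &inputs() const { return kept_; }
};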

Some bugs can only be triggered when the input replays a previous nonce, which would be different on every execution of the challenge. Our bug finding system can produce inputs that contain variables based on program outputs, enabling our CRS to handle such cases.

Additionally, our symbolic executors are able to identify which inputs affect program state at the point of a crash. This is a key requirement for the success of any team competing in the final as it enables the CRS to create a more controlled crash.


Our CRS’s patching effectiveness, as measured by the security score, ranked fourth (Figure 3).

Figure 3: Teams in the qualifying event ranked by patch effectiveness (security score). Orange bars signify finalists.


Our CRS patches bugs by translating challenges into LLVM bitcode with mcsema. Patches are applied to the LLVM bitcode, optimized, and then converted back into executable code. The actual patching works by gracefully terminating the challenge when invalid memory accesses are detected. Patching the LLVM bitcode representation of challenges provides us with enormous power and flexibility:

  • We can easily validate any memory access and keep track of all memory allocations.
  • Complex algorithms, such as dataflow tracking, dominator trees, dead store elimination, loop detection, etc., are very simple to implement using the LLVM compiler infrastructure.
  • Our patching method can be used on real-world software, not just CGC challenges.

We created two main patching strategies: generic patching and bug-based patching. Generic patching is an exclusion-based strategy: it first assumes that every memory access must be verified, and then excludes accesses that are provably safe. The benefit of generic patching is that it patches all possible invalid memory accesses in a challenge. Bug-based patching is an inclusion-based strategy: it first assumes only one memory access (where the CRS found a bug) must be verified, and then includes nearby accesses that may be unsafe. Each patching strategy has multiple heuristics to determine which accesses should be included or excluded from verification.

The inclusion and exclusion heuristics generate patched challenges with different security/performance tradeoffs. The patched challenges generated by these heuristics were tested for performance and security to determine which heuristic performed best while still fixing the bug. For the qualifying event, we evaluated both generic and bug-based patching, but ultimately chose a generic-only patching strategy. Bug-based patching was slightly more performant, but generic patching was more comprehensive and it patched bugs that our CRS couldn’t find.
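
To give a sense of what a generic patch effectively does, here is a conceptual, source-level sketch (hypothetical helper names; the real transformation operates on LLVM bitcode and also has to account for stack and global memory): every instrumented memory access is routed through a validity check against tracked allocations, and the program terminates gracefully instead of corrupting memory.

#include <cstdint>
#include <cstdlib>
#include <map>

// Hypothetical allocation tracker: maps base address -> size.
static std::map<uintptr_t, std::size_t> g_allocs;

static void track_alloc(void *p, std::size_t n) { g_allocs[(uintptr_t)p] = n; }
static void untrack_alloc(void *p)              { g_allocs.erase((uintptr_t)p); }

// Returns true if [addr, addr + len) lies inside a tracked allocation.
static bool access_ok(const void *addr, std::size_t len) {
  uintptr_t a = (uintptr_t)addr;
  auto it = g_allocs.upper_bound(a);
  if (it == g_allocs.begin())
    return false;
  --it;   // candidate allocation starting at or below addr
  return a >= it->first && a + len <= it->first + it->second;
}

// Every instrumented load (and, symmetrically, store) becomes a call like this.
template <typename T>
static T checked_load(const T *p) {
  if (!access_ok(p, sizeof(T)))
    std::exit(1);   // graceful termination instead of an exploitable corruption
  return *p;
}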

Functionality and Performance

Functionality and performance scores combine to create an availability score. The availability score is used as a scaling factor for points gained by patching and bug finding. This scaling factor only matters for successfully patched challenges, since those are the only challenges that can score points. The following graphs only consider functionality and performance of successfully patched challenges.


Out of the 94 challenges that our CRS successfully patched, 56 retained full functionality, 30 retained partial functionality, and 8 were nonfunctional. Of the top 10 teams in the qualifying event, our CRS ranks 5th in terms of fully functional patched challenges (Figure 4). We suspect our patched challenges lost functionality due to problems in mcsema, our x86 to LLVM translator. We hope to verify and address these issues once DARPA open-sources the qualifying event challenges.

Figure 4: The count of perfectly functional, partially functional, and nonfunctional challenges submitted by each of the top 10 teams in the qualifying event. Orange bars signify finalists.



The performance of patched challenges is how our CRS snatched defeat from the jaws of victory. Of the top ten teams in the qualifying event, our CRS placed last in terms of patched challenge performance (Figure 5).

Figure 5: Average and median performance scores of the top ten qualifying event participants. Orange bars signify finalists.


Our CRS produces slow binaries for two reasons: one technical, one operational. The technical reason is that the performance overhead of our patched challenges is an artifact of our patching process, which translates challenges into LLVM bitcode and then re-emits them as executable binaries. The operational reason is that our patching was developed late and optimized for the wrong performance measurements.

So, why did we optimize for the wrong performance measurements? The official CGC performance measurement tools were kept secret, because the organizers wanted to ensure that no one could cheat by gaming the measurements. Therefore, we had to measure performance ourselves, and our metrics showed that the CPU overhead of our patched challenges was usually negligible. The main flaw we observed was that our patched challenges used too much memory. Because of this, we spent time and effort optimizing our patching to use less memory at the cost of more CPU time.

It turns out we optimized for the wrong thing, because our self-measurement did not agree with the official measurement tools (Table 1). When self-measuring, our worst-performing patching method had a median CPU overhead of 33% and a median memory overhead of 69%. The official qualifying event measured us at 76% CPU overhead and 28% memory overhead. Clearly, our self-measurements were considerably different from official measurements.

Measurement                          | Median CPU Overhead | Median Memory Overhead
Worst Self-Measured Patching Method  | 33%                 | 69%
Official Qualifying Event            | 76%                 | 28%

Table 1: Self measured CPU and memory overhead and the official qualifying event CPU and memory overhead.

We estimated our overall score using our own performance metrics. That self-measured score was 106, which would have put us in second place. The real overall score was 21.36, putting us in ninth.

An important aspect of software development is choosing where to focus your efforts, and we chose poorly. CGC participants had access to the official measuring system during two scored events held during the year, one in December 2014 and one in April 2015. We should have evaluated our patching system thoroughly during both scored events. Unfortunately, our patching wasn’t fully operational until after the second scored event, so we had no way to verify the accuracy of our self-measurement. The performance penalty of our patching isn’t a fundamental issue. Had we known how bad it was, we would have fixed it. However, according to our own measurements the patching was acceptable so we focused efforts elsewhere.

What’s Next?

According to the CGC FAQ (Question 46), teams are allowed to combine after the qualifying event. We hope to join forces with another team that qualified for the CGC final event, and use the best of both our technologies to win. The technology behind our CRS will provide a significant advantage to any team that partners with us. If you would like to discuss a potential partnership for the CGC final, please contact us at

If we cannot find a partner for the CGC final, we will focus our efforts on adapting our CRS to automatically find and patch vulnerabilities in real software. Our system is up to the task: it has already proven that it can find bugs, and all of its core components were derived from software that works on real Linux binaries. Several components even have Windows and 64-bit support, and adding support for other platforms is a possibility. If you are interested in commercial applications of our technology, please get in touch with us at

Finally, we plan to contribute back fixes and updates to the open source projects utilized in our CRS. We used numerous open source projects during development, and have made several custom fixes and modifications. We look forward to contributing these back to the community so that everyone benefits from our improvements.

Dear DARPA: Challenge Accepted.


We are proud to have one of the only seven accepted funded-track proposals to DARPA’s Cyber Grand Challenge.

Computer security experts from academia, industry and the larger security community have organized themselves into more than 30 teams to compete in DARPA’s Cyber Grand Challenge, a first-of-its-kind tournament designed to speed the development of automated security systems able to defend against cyberattacks as fast as they are launched. DARPA also announced today that it has reached an agreement to hold the 2016 Cyber Grand Challenge final competition in conjunction with DEF CON, one of the largest computer security conferences in the world.


Our participation in this program aligns with our development of Javelin, an automated system for simulating attacks against enterprise networks. We have assembled a world-class team of experts in software security, capture the flag, and program analysis to compete in this challenge. As much as we wish the other teams luck in this competition, Trail of Bits is playing to win. Game on!

Using Static Analysis and Clang To Find Heartbleed


Friday night I sat down with a glass of Macallan 15 and decided to write a static checker that would find the Heartbleed bug. I decided that I would write it as an out-of-tree clang analyzer plugin and evaluate it on a few very small functions that had the spirit of the Heartbleed bug in them, and then finally on the vulnerable OpenSSL code-base itself.

The Clang project ships an analysis infrastructure with its compiler; it’s invoked via scan-build. scan-build hooks whatever existing make system you have to interpose the clang analyzer into the build process, and the analyzer is invoked with the same arguments as the compiler. This way, the analyzer can ‘visit’ every compilation unit in the program that compiles under clang. There are some limitations to the clang analyzer that I’ll touch on in the discussion section.

This exercise added to my list of things that I can only do while drinking: I have the best success with first-order logic while drinking beer, and I have the best success with clang analyzer while drinking scotch.


One approach to identifying Heartbleed statically was proposed recently by Coverity: taint the return values of calls to ntohl and ntohs as input data. One problem with doing static analysis on a big state machine like OpenSSL is that your analysis either has to know the state machine well enough to track which values are attacker influenced across the whole program, or it has to have some kind of annotation in the program that tells it where input data is used.

I like this observation because it is pretty actionable. You mark ntohl calls as producing tainted data, which is a heuristic, but a pretty good one because programmers probably won’t htonl their own data.

What our clang analyzer plugin should do is identify locations in the program where variables are written using ntohl, taint them, and then alert when those tainted values are used as the size parameter to memcpy. Except that isn’t quite right: the use could be safe. So we’ll also check the constraints of the tainted values at the location of the call: if a tainted value hasn’t been constrained in some way by the program logic and it’s used as an argument to memcpy, alert on a bug. This could also miss some bugs, but I’m writing this over a 24-hour period with some scotch, so increasing precision can come later.

Clang analyzer details

The clang analyzer implements a type of symbolic execution to analyze C/C++ programs. Plugging in to this framework as an analyzer requires bending your mind around the clang analyzer view of program state. This is where I consumed the most scotch.

The analyzer, under the hood, performs a symbolic/abstract exploration of program state. This exploration is flow and path sensitive, so it is different from traditional compiler data flow analysis. The analysis maintains a “state” object for each path through the program, and in this state object are constraints and facts about the program’s execution on that path. This state object can be queried by your analyzer, and, your analyzer can change the state to include information produced by your analysis.

This was one of my biggest hurdles when writing the analyzer – once I have a “symbolic variable” in a particular state, how do I query the range of that symbolic variable? Say there is a program fragment that looks like this:

int data = ntohl(pkt_data);
if(data >= 0 && data < sizeof(global_arr)) {
 // CASE A
} else {
 // CASE B
}
When looking at this program from the analyzer’s point of view, the state “splits” at the if into two different states, A and B. In state A, there is a constraint that data is between certain bounds, and in state B there is a constraint that data is NOT within those bounds. How do you access this information from your checker?

If your checker calls the “dump” method on its given “state” object, data like the following will be printed out:

Ranges of symbol values:
 conj_$2{int} : { [-2147483648, -2], [0, 2147483647] }
 conj_$9{uint32_t} : { [0, 6] }

In this example, conj_$9{uint32_t} is our ‘data’ value above and the state is the A state. We have a range on ‘data’ that places it between 0 and 6. How can we, as the checker, observe that there’s a difference between this range and an unconstrained range of, say, [-2147483648, 2147483647]?

The answer is, we create a formula that tests the symbolic value of ‘data’ against some conditions that we enforce, and then we ask the state what program states exist when this formula is true and when it is false. If a new formula contradicts an existing formula, the state is infeasible and no state is generated. So we create a formula that says, roughly, “data > 500” to ask if data could ever be greater than 500. When we ask the state for new states where this is true and where it is false, it will only give us a state where it is false.

This is the kind of idiom used inside the clang analyzer to answer questions about constraints on state. The array bounds checkers use this trick to identify states where the size of an array is not used as a constraint on an index into the array.
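
Concretely, the question can be phrased against the analyzer’s constraint manager roughly like this (a sketch in the spirit of the checker shown later; exact clang API signatures vary by version, and the usual analyzer headers plus "using namespace clang; using namespace ento;" are assumed):

// Is there any state on this path where Arg can exceed 500? If so, nothing
// in the path's logic has bounded the value.
bool isPossiblyUnconstrained(Optional<NonLoc> Arg, SValBuilder &builder,
                             ProgramStateRef state) {
  if (!Arg)
    return false;

  NonLoc bound = builder.makeIntVal(500, /*isUnsigned=*/true);
  SVal gt = builder.evalBinOpNN(state, BO_GT, *Arg, bound,
                                builder.getConditionType());

  Optional<DefinedSVal> cond = gt.getAs<DefinedSVal>();
  if (!cond)
    return true;   // the analyzer cannot reason about it; treat as unconstrained

  // assume() returns (state where cond is true, state where cond is false).
  std::pair<ProgramStateRef, ProgramStateRef> split = state->assume(*cond);

  // A feasible "true" state means the program never capped the value; in the
  // constrained example above, only the "false" state would exist.
  return static_cast<bool>(split.first);
}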


Your analyzer is implemented as a C++ class. You define different “check” functions that you want to be notified of when the analyzer is exploring program state. For example, if your analyzer wants to consider the arguments to a function call before the function is called, you create a member method with a signature that looks like this:

void checkPreCall(const CallEvent &Call, CheckerContext &C) const;

Your analyzer can then match on the function about to be (symbolically) invoked. So our implementation works in three stages:

  1. Identify calls to ntohl/ntohs
  2. Taint the return value of those calls
  3. Identify unconstrained uses of tainted data

We accomplish the first and second with a checkPostCall visitor that roughly does this:

void NetworkTaintChecker::checkPostCall(const CallEvent &Call,
CheckerContext &C) const {
  const IdentifierInfo *ID = Call.getCalleeIdentifier();

  if(ID == NULL) {
    return;
  }

  if(ID->getName() == "ntohl" || ID->getName() == "ntohs") {
    ProgramStateRef State = C.getState();
    SymbolRef       Sym = Call.getReturnValue().getAsSymbol();

    if(Sym) {
      ProgramStateRef newState = State->addTaint(Sym);
      C.addTransition(newState);
    }
  }

  return;
}
Pretty straightforward: we get the return value, if present, taint it, and add the state with the tainted return value as an output of our visit via ‘addTransition’.

For the third goal, we have a checkPreCall visitor that considers a function call’s parameters like so:

void NetworkTaintChecker::checkPreCall(const CallEvent &Call,
CheckerContext &C) const {
  ProgramStateRef State = C.getState();
  const IdentifierInfo *ID = Call.getCalleeIdentifier();

  if(ID == NULL) {
    return;
  }

  if(ID->getName() == "memcpy") {
    SVal            SizeArg = Call.getArgSVal(2);
    ProgramStateRef state = C.getState();

    if(state->isTainted(SizeArg)) {
      SValBuilder       &svalBuilder = C.getSValBuilder();
      Optional<NonLoc>  SizeArgNL = SizeArg.getAs<NonLoc>();

      if(this->isArgUnConstrained(SizeArgNL, svalBuilder, state) == true) {
        ExplodedNode  *loc = C.generateSink();
        if(loc) {
          BugReport *bug = new BugReport(*this->BT,
            "Tainted, unconstrained value used in memcpy size", loc);
          C.emitReport(bug);
        }
      }
    }
  }

  return;
}
Also relatively straightforward: our logic to check whether a value is unconstrained is hidden in ‘isArgUnConstrained’, so if a tainted, symbolic value has insufficient constraints on it in our current path, we report a bug.

Some implementation pitfalls

It turns out that OpenSSL doesn’t use ntohs/ntohl; it has n2s / n2l macros that re-implement the byte-swapping logic. If this were in LLVM IR, it would be tractable to write a “byte-swapping recognizer” that uses a small amount of logic to prove when a piece of code approximates the semantics of a byte swap.

There is also some behavior that I have not figured out in clang’s creation of the AST for OpenSSL, where calls to ntohs are replaced with __builtin_pre(__x), which has no IdentifierInfo and thus no name. To work around this, I replaced the n2s macro with a call to a function named xyzzy (accepting the resulting linking failures) and adapted my function check from above to look for a function named xyzzy. This worked well enough to identify the Heartbleed bug.

Solution output with demo programs and OpenSSL

First let’s look at some little toy programs. Here is one toy example with output:

$ cat demo2.c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <arpa/inet.h>

int data_array[] = { 0, 18, 21, 95, 43, 32, 51};

int main(int argc, char *argv[]) {
  int   fd;
  char  buf[512] = {0};

  fd = open("dtin", O_RDONLY);

  if(fd != -1) {
    int size;
    int res;

    res = read(fd, &size, sizeof(int));

    if(res == sizeof(int)) {
      size = ntohl(size);

      if(size < sizeof(data_array)) {
        memcpy(buf, data_array, size);
      }

      memcpy(buf, data_array, size);
    }
  }

  return 0;
}

$ ../
scan-build: Using '/usr/bin/clang' for static analysis
/usr/bin/ccc-analyzer -o demo2 demo2.c
demo2.c:30:7: warning: Tainted, unconstrained value used in memcpy size
      memcpy(buf, data_array, size);
1 warning generated.
scan-build: 1 bugs found.
scan-build: Run 'scan-view /tmp/scan-build-2014-04-26-223755-8651-1' to
examine bug reports.

And finally, to see it catching Heartbleed in both locations it was present in OpenSSL, see the following:




The approach needs some improvement: we reason about whether a tainted value is “appropriately” constrained in a very coarse-grained way. Sometimes that’s the best you can do though – if your analysis doesn’t know how large a particular buffer is, perhaps it’s enough to show an analyst “hey, this value could be larger than 5000 and it is used as a parameter to memcpy, is that okay?”

I really don’t like the limitation of the clang analyzer operating on ASTs. I spent a lot of time fighting with the clang AST representation of ntohs and I still don’t understand what the source of the problem was. I kind of just want to consider a program’s semantics in a virtual machine with very simple semantics, so LLVM IR seems ideal to me. This might just be my PL roots showing though.

I really do like the clang analyzer’s interface to path constraints. I think that interface is pretty powerful, and once you get your head around asking states whether new states satisfying your constraints are feasible, it’s pretty straightforward to write new analyses.

Edit: Code Post

I’ve posted the code for the checker to Github, here.

Introducing Javelin


Javelin shows you how modern attackers would approach and exploit your enterprise. By simulating real-time, real-world attack techniques, Javelin identifies which employees are most likely to be targets of spearphishing campaigns, uncovers security infrastructure weaknesses, and compares overall vulnerability against industry competitors. Javelin benchmarks the efficacy of defensive strategies, and provides customized recommendations for improving security and accelerating threat detection. Highly automated, low touch, and designed for easy adoption, Javelin will harden your existing security and information technology infrastructure.

Read more about Javelin on the Javelin Blog.

Free Ruby Security Workshop

We interrupt our regularly scheduled programming to bring you an important announcement: On Thursday, June 6th, just in time for SummerCon, we will be hosting a free Ruby Security Workshop in NYC! Signups are first-come, first-serve and we only have space for 30 people. Sign up here and we will email the selected participants the location of the training on Tuesday night.

In the last year, many new vulnerabilities and vulnerability classes have been discovered in Ruby applications. These vulnerabilities make use of features specific to the Ruby language and common idioms present in large Ruby projects, such as serialization and deserialization of data in the YAML format. As these vulnerability classes were initially discovered in popular and well-studied open source software, it’s extremely likely that they occur in applications throughout the Ruby ecosystem. These applications frequently represent lucrative targets for attackers, and with the appearance of new and easily exploitable bug classes, the potential for targeted and mass exploitation of Ruby programs has been demonstrated to the world. In this workshop, we aim to bridge a knowledge and skills gap by bringing information about these new vulnerability classes to software developers.

Our Ruby Security Workshop will be led by Hal Brodigan (@postmodern_mod3) and covers recent Ruby on Rails vulnerability classes, their root causes, and exercises where students develop exploits for real-world vulnerabilities. Attendees will learn the patterns behind the vulnerabilities and develop software engineering strategies to avoid introducing these flaws into their projects.

If you’re in the city for SummerCon and interested in attending on Thursday, fill out our signup form and selected participants will be sent more info tomorrow. We’re excited to bring you programs like this and we hope to see you there!

Ending the Love Affair with ExploitShield


ExploitShield has been marketed as offering protection “against all known and unknown 0-day vulnerability exploits, protecting users where traditional anti-virus and security products fail.” I found this assertion quite extraordinary and exciting! Vulnerabilities in software applications are real problems for computer users worldwide. So far, we have been pretty bad at providing actual technology to help individual users defend against vulnerabilities in software.

In my opinion, Microsoft has made the best advances with their Enhanced Mitigation Experience Toolkit. EMET changes the behavior of the operating system to increase the effort attackers have to expend to produce working exploits. There are blog posts that document exactly what EMET does.

In general, I believe that systems that are upfront and public about their methodologies are more trustworthy than “secret sauce” systems. EMET is very upfront about their methodologies, while ExploitShield conceals them in an attempt to derive additional security from obscurity.

I analyzed the ExploitShield system and technology, and the results of my analysis follow. To summarize: the system is very predictable, attackers can easily study it and adapt their attacks to overcome it, and the implementation itself creates new attack surface. After this analysis, I do not believe that this system would help an individual or organization defend themselves against an attacker with any capability to write their own exploits, 0-day or otherwise.


The analysis I performed was on their “Browser” edition. It’s possible that something far more advanced is in their “Corporate” edition; I honestly can’t say, because I haven’t seen it. However, given the ‘tone’ of the implementation that I analyzed, and the implementation flaws in it, I doubt this possibility and believe that the “Corporate” edition is just “more of the same.” I welcome being proven wrong.

Initial Analysis

Usually we can use some excellent and free tools to get a sense of software’s footprint. I like to use GMER for this. GMER surveys the entire system and uses a cross-view technique to identify patches made to running programs.

If you recall from ExploitShield’s marketing information, we see popup boxes that look like this:

This screenshot has some tells in it. For example, why is the path specified? If this was really blocking the ‘exploit’, shouldn’t it never get as far as specifying a path on the file system?

In the following sections, I’ll go over each phase of my analysis as it relates to a component of or a concept within ExploitShield.

ExploitShield uses a Device Driver

One component of the ExploitShield system is a device driver. The device driver uses an operating-system-supported mechanism (PsSetCreateProcessNotifyRoutine) to receive a notification from the operating system whenever a new process is started.

Each time a process starts, the device driver examines it and optionally loads its own user-mode code module into the starting process. The criterion for loading the user-mode module is whether the starting process is one that ExploitShield protects.
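
A minimal sketch of that mechanism (generic WDK boilerplate for illustration, not ExploitShield’s driver) looks roughly like this:

#include <ntddk.h>

// Called by the kernel for every process creation and exit.
static VOID ProcessNotify(HANDLE ParentId, HANDLE ProcessId, BOOLEAN Create)
{
    UNREFERENCED_PARAMETER(ParentId);
    if (Create) {
        // A product like the one described above would decide here whether
        // ProcessId is a protected application and, if so, arrange for its
        // user-mode module to be loaded into the new process.
        DbgPrint("process %p created\n", ProcessId);
    }
}

static VOID DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    PsSetCreateProcessNotifyRoutine(ProcessNotify, TRUE);   // TRUE = remove the callback
}

extern "C" NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;
    return PsSetCreateProcessNotifyRoutine(ProcessNotify, FALSE);
}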

User-Mode Component

The user-mode component seems to exist only to hook/detour specific functions.

The act of function hooking, also called function detouring, involves making modifications to the beginning of a function such that when that function is invoked, another function is invoked instead. The paper on Detours by MS Research explains the concept pretty thoroughly.

Function hooking is commonly used as a way to implement a checker or reference monitor for an application. A security system can detour a function, such as CreateProcessA, and make a heuristics-based decision on the arguments to CreateProcessA. If the heuristic indicates that the behavior is suspect, the security system can take some action, such as failing the call to CreateProcessA or terminating the process.
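
For illustration, here is roughly what such a detour-based reference monitor looks like when built with Microsoft Detours (an assumed choice of hooking library for this sketch; we don’t know what ExploitShield uses internally):

#include <windows.h>
#include <detours.h>

// Pointer to the real CreateProcessA; DetourAttach rewrites it to point at a
// trampoline that reaches the original implementation.
static decltype(&CreateProcessA) TrueCreateProcessA = CreateProcessA;

static BOOL WINAPI HookedCreateProcessA(
    LPCSTR app, LPSTR cmdline, LPSECURITY_ATTRIBUTES pa, LPSECURITY_ATTRIBUTES ta,
    BOOL inherit, DWORD flags, LPVOID env, LPCSTR dir,
    LPSTARTUPINFOA si, LPPROCESS_INFORMATION pi)
{
    // A heuristic policy check on the arguments (and on the caller) would go
    // here; a monitor can fail the call instead of forwarding it, e.g.:
    //   if (LooksSuspicious(app, cmdline)) {          // hypothetical check
    //       SetLastError(ERROR_ACCESS_DENIED);
    //       return FALSE;
    //   }
    return TrueCreateProcessA(app, cmdline, pa, ta, inherit, flags, env, dir, si, pi);
}

static void InstallHook()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID &)TrueCreateProcessA, HookedCreateProcessA);
    DetourTransactionCommit();
}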

Hooked Functions

ExploitShield seems to function largely by detouring the following methods:

  • ShellExecute
  • URLDownloadToFileW/A
  • URLDownloadToCacheFileW/A

Here we can get a sense of what the authors of ExploitShield meant when they said “After researching thousands of vulnerability exploits ZeroVulnerabilityLabs has developed an innovative patent-pending technology that is able to detect if a shielded application is being exploited maliciously”. These are functions commonly used by shellcode to drop and execute some other program!

Function Hook Behavior

Each hook implements a straightforward heuristic. On x86, before a procedure is invoked, the address to return to after the procedure finishes is pushed onto the stack. Each hook retrieves this return address from the stack and asks questions about its attributes.

  • Are the page permissions of the address RX (read-execute)?
  • Is the address located within the bounds of a loaded module?

If either of these two tests fail, ExploitShield reports that it has discovered an exploit!
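
In code, the two checks amount to something like the following (an assumed reconstruction for 32-bit Windows with MSVC, not ExploitShield’s actual implementation):

#include <windows.h>

// Call as ReturnAddressLooksLegit(_ReturnAddress()) from inside a hook, where
// _ReturnAddress() is the MSVC intrinsic from <intrin.h>, so "ret" is the
// address the hooked function will return to.
static bool ReturnAddressLooksLegit(void *ret)
{
    // Check 1: the page holding the return address should be read-execute.
    MEMORY_BASIC_INFORMATION mbi = {};
    if (VirtualQuery(ret, &mbi, sizeof(mbi)) == 0)
        return false;
    bool executable = (mbi.Protect & (PAGE_EXECUTE | PAGE_EXECUTE_READ |
                                      PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY)) != 0;
    bool writable   = (mbi.Protect & (PAGE_READWRITE | PAGE_WRITECOPY |
                                      PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY)) != 0;

    // Check 2: the return address should fall inside a loaded module.
    HMODULE mod = NULL;
    BOOL inModule = GetModuleHandleExW(
        GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
        GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
        (LPCWSTR)ret, &mod);

    return executable && !writable && inModule;
}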

A Confusion of Terms

  • Vulnerability: A vulnerability is a property of a piece of software that allows for some kind of trust violation. Vulnerabilities have a really broad definition. Memory corruption vulnerabilities have had such an impact on computer security that ‘vulnerability’ is often used simply as shorthand for ‘memory corruption vulnerability’; however, other kinds of vulnerabilities do exist, for example information disclosure vulnerabilities or authentication bypass vulnerabilities. An information disclosure vulnerability could sometimes be worse for individual privacy than a memory corruption vulnerability.
  • Exploit: An exploit is a software or procedure that uses a vulnerability to effect some action, usually to execute a payload.
  • Payload: Attacker created software that executes after a vulnerability has been used to compromise a system.

It is my belief that when ExploitShield uses the term ‘exploit’, they really mean ‘payload’.

A Good Day for ExploitShield

So what is a play by play of ExploitShield functioning as expected? Let’s take a look, abstracting the details of exactly which exploit is used:

  1. A user is fooled into navigating to a malicious web page under the attacker’s control. They can’t really be blamed too much for this: they only need to make this mistake once, and the visit could be the result of an attacker compromising a legitimate website and using it to serve malware.
  2. This web page contains an exploit for a vulnerability in the user’s browser. The web browser loads the document that contains the exploit and begins to parse and process the exploit document.
  3. The data in the exploit document has been modified such that the program parsing the document does something bad. Let’s say that what the exploit convinces the web browser to do is to overwrite a function pointer stored somewhere in memory with a value that is the address of data that is also supplied by the exploit. Next, the vulnerable program calls this function pointer.
  4. Now, the web browser executes code supplied by the exploit. At this point, the web browser has been exploited. The user is running code supplied by the attacker / exploit. At this point, anything could happen. Note how we’ve made it all the way through the ‘exploitation’ stage of this process and ExploitShield hasn’t entered the picture yet.
  5. The executed code calls one of the hooked functions, say WinExec. For this example, let’s say that the code executing is called from a page that is on the heap, so its permissions are RWX (read-write-execute).

ExploitShield is great if the attacker doesn’t know it’s there and it isn’t deployed widely enough to be worth adapting to. If the attacker knows it’s there, and cares, they can bypass it trivially.

A Bad Day for ExploitShield

If an attacker knows about ExploitShield, how much effort does it take to create an exploit that does not set off the alarms monitored by ExploitShield? I argue it does not take much effort at all. Two immediate possibilities come to mind:

  • Use a (very) primitive form of ROP (Return-Oriented Programming). Identify a ret instruction in a loaded module and push its address onto the stack as the return address, with your real return address beneath it. The checks made by ExploitShield will pass.
  • Use a function that is equivalent to one of the hooked functions, but is not the hooked function. If CreateProcess is hooked, use NtCreateProcess instead.

Both of these would defeat the protections I discovered in ExploitShield. Additionally, these techniques would function on systems where ExploitShield is absent, meaning that if an attacker cared to bypass ExploitShield when it was present they would only need to do the work of implementing these bypasses once.

Obscurity Isn’t Always Bad

The principle of ‘security through obscurity’ is often cited by security nerds as a negative property for a security system to hold. However, obscurity does actually make systems more secure as long as the defensive system remains obscure or unpredictable. The difficulty for obscurity-based defensive techniques lies in finding an obscure change that can be made with little cost and that the attacker can’t adapt to before they are disrupted by it, or a change that can be altered for very little cost when its obscurity is compromised.

For example, consider PatchGuard from Microsoft. PatchGuard ‘protects’ the system by crashing it when modifications are detected. The operation of PatchGuard is concealed and not published by Microsoft. As long as PatchGuard’s operation is obscured and secret, it can protect systems by crashing them when it detects modifications made by a rootkit.

However, PatchGuard has been frequently reverse engineered and studied by security researchers. Each time a researcher has sat down with the intent to bypass PatchGuard, they have met with success. The interesting thing is what happens next: at some point in the future, Microsoft silently releases an update that changes the behavior of PatchGuard such that it still accomplishes its goal of crashing the system if modifications are detected, but is not vulnerable to attacks created by security researchers.

In this instance, obscurity works. It’s very cheap for Microsoft to make a new PatchGuard; indeed, the kernel team might have ten of them “on the bench” waiting for the currently fielded version to be dissected and bypassed. This changes the kernel from a static target into a moving target. The obscurity works because it is at Microsoft’s initiative to change the mechanism, changes are both cheap and effective, and the attacker can’t easily prepare to avoid these changes when they’re made.

The changes that ExploitShield introduces are extremely brittle and cannot be modified as readily. Perhaps if ExploitShield was an engine to quickly deliver a broad variety of runtime changes and randomly vary them per application, this dynamic would be different.

Some Implementation Problems

Implementing a HIPS correctly is a lot of work! There are fiddly engineering decisions to make everywhere, and as the author you are interposing yourself into a very sticky security situation. ExploitShield makes some unnecessarily risky implementation decisions.

The IOCTL Interface

The driver exposes an interface that is accessible to all users. Traditional best practices for legacy Windows drivers ask that interfaces to a driver be accessible only to the users that should access it. The ExploitShield interface, however, is accessible to the entire system, including unprivileged users.

The driver processes messages that are sent to it. I didn’t fully discover what type of messages these are, or their format; however, IOCTL handling code is full of possibilities for subtle mistakes. Any mistake in the IOCTL handling code could lead to a kernel-level vulnerability, which would compromise the security of the entire system.

This interface creates additional attack surface.

The Hook Logic

Each hook invokes a routine to check whether the return address is located in a loaded module. This routine makes use of a global list of modules that is populated only once, by a call to EnumerateLoadedModules with a programmer-supplied callback. There are two bugs in ExploitShield’s method of retrieving the list of loaded modules.

The first bug is that there is apparently no mutual exclusion around the critical section of populating the global list. Multiple threads can call CreateProcessA at once, so it is theoretically possible for the user-mode logic to place itself into an inconsistent state.

The second bug is that the modules are only enumerated once. Once EnumerateLoadedModules has been invoked, a global flag is set to true and EnumerateLoadedModules is never invoked again. If a new module is loaded after that point, and a hooked function is later called from that module, the security system will erroneously flag the call as an attempted exploit.

Neither of these flaws exposes the user to any additional danger; they just indicate poor programming practice.

Why Hook At All?

An especially baffling decision made in the implementation of ExploitShield is the use of hooks at all! For each event that ExploitShield concerns itself with (process creation and file write), there are robust callback infrastructures present in the NT kernel. Indeed, authors of traditional anti-virus software so frequently reduced system stability with overly zealous use of hooks that Microsoft very strongly encouraged them to use this in-kernel monitoring API.

ExploitShield uses unnecessarily dangerous programming practices to achieve effects that are possible using legitimate system services, possibly betraying a lack of understanding of the platform it aims to protect.

The Impossibility of ExploitShield’s Success

What can ExploitShield do to change this dynamic? The problem is, not much. Defensive systems like this are wholly dependent on obscurity. Once studied by attackers, the systems lose their value. In the case of software like this, one problem is that the feedback loop does not inform the authors or users of the security software that the attacker has adapted to the security system. Another problem is that the obscurity of a system is difficult to maintain. The software has to be used by customers, so it has to be available in some sense, and if it is available for customers, it will most likely also be available for study by an attacker.

What Hope Do We Have?

It’s important to note that EMET differs from ExploitShield in an important regard: EMET aims to disrupt the act of exploiting a program, while ExploitShield aims to disrupt the act of executing a payload on a system. These might seem like fine points; however, a distinction can be made around “how many choices does the attacker have that are effective”. When it comes to executing payloads, the attacker’s choices are nearly infinite, since they are already executing arbitrary code.

In this regard, EMET is generally not based on obscurity. The authors of EMET are very willing to discuss in great detail the different mitigation strategies they implement, while the author of ExploitShield has yet to do so.

Generally, I believe that if a defensive technique makes a deterministic change to program or run-time behavior, an attack will fail until it is adapted to that technique. Whether the attack keeps working depends on the obscurity of the technique and on whether the change impacts the vulnerability, the exploit, or the payload. If the attack cannot be adapted to the modified environment, then the obscurity of the mitigation is irrelevant.

However, what if the technique was not obscure, but was instead unpredictable? What if there was a defensive technique that would randomly adjust system implementation behavior while preserving the semantic behavior of the system as experienced by the program? What is needed is identification of properties of a system that, if changed, would affect the functioning of attacks but would not change the functioning of programs.

When these properties are varied randomly, the attacker has fewer options. Perhaps they are aware of a vulnerability that can transcend any permutation of implementation details. If they are not, however, they are entirely at the mercy of chance for whether or not their attack will succeed.


ExploitShield is a time capsule containing the best host-based security technology that 2004 had to offer. In my opinion, it doesn’t represent a meaningful change in the computer security landscape. The techniques it uses hinge wholly on obscurity and secrecy, require very little work to overcome, and affect only the later stage of computer attacks, the payload, not the exploit.

When compared to other defensive technologies, ExploitShield comes up short. It uses poorly implemented techniques that work against phases of the attack that require very little attacker adaptation to overcome. Once ExploitShield gains enough market traction, malware authors and exploit writers will automate techniques that work around it.

ExploitShield even increases your attack surface by installing a kernel-mode driver that will process messages sent by any user on the system. Any flaw in that kernel-mode driver could introduce a privilege escalation bug into your system.

The detection logic it uses to find shellcode is not wholly flawed. It contains an implementation error that could result in some false positives, but it is generally true that a call to a runtime library function with a return address outside the bounds of any loaded module is suspicious. The problem with this detection signature is that an attacker can trivially modify their technique to achieve the same effect without triggering it. Additionally, this detection signature is not novel; HIPS products have implemented this check for a long time.

This is a shame, because, in my opinion, there is still some serious room for innovation in this type of software…

Pwn2Own Pre-Game

Just in time to get warmed up for Pwn2Own, we are delivering a joint offering of the training courses “Bug Hunting and Analysis 0x65” by Aaron Portnoy and Zef Cekaj as well as “Assured Exploitation” by Dino Dai Zovi and Alex Sotirov in New York City on January 31 – February 3. Students may take either course or both classes for a $1000 discount. Full course information is available in the Pwn2Own PreGame PDF or the full blog post after the jump.

[Read more…]

NYC Assured Exploitation Training

On June 8-9, right before SummerC0n, Alex Sotirov and I will be giving a special New York City edition of our Assured Exploitation training class. This is a great opportunity for anyone who was unable to take our class at CanSecWest this year. The two-day class costs $2500 per student for registrations received before May 25 and $3000 per student for registrations received afterwards.  We accept payment via Purchase Order, major credit cards, and PayPal.  Group discounts are also available (contact us for a price quote).  To register, e-mail me (ddz at theta44 dot org) or fill out the form below.  For full details, see below or download the full course description.

UPDATE: A location has been selected for the training (1 New York Plaza in lower Manhattan).

[Read more…]

Hacking at Mach 2!

At BayThreat last month, I gave an updated (and much more sober) version of my “Hacking at Mach Speed” presentation from SummerC0n. Now that the 0day Mach RPC privilege de-escalation vulnerability has been fixed, I can include full details on it. The presentation is meant to give a walkthrough of how to identify and enumerate Mach RPC interfaces in bootstrap servers on Mac OS X. Why would you want to do this? Hint: there are other uses for these types of vulnerabilities besides gaining increased privileges on single-user Mac desktops. Enjoy!

  • “Hacking at Mach 2!” (PDF)
