An accessible overview of Meltdown and Spectre, Part 1

In the past few weeks the details of two critical design flaws in modern processors were finally revealed to the public. Much has been written about the impact of Meltdown and Spectre, but there is scant detail about what these attacks are and how they work. We are going to try our best to fix that.

This article explains how the Meltdown and Spectre attacks work, in a way that is accessible to people who have never taken a computer architecture course. Some technical details will be greatly simplified or omitted, but you’ll understand the core concepts and why Spectre and Meltdown are so important and technically interesting.

This article is divided into two parts for easier reading. The first part (what you are reading now) starts with a crash course in computer architecture to provide some background and explains what Meltdown actually does. The second part explains both variants of Spectre and discusses why we’re fixing these bugs now, even though they’ve been around for the past 25 years.


First, a lightning-fast overview of some important computer architecture concepts and some basic assumptions about hardware, software, and how they work together. These are necessary to understand the flaws and why they work.

Software is Instructions and Data

All the software that you use (e.g. Chrome, Photoshop, Notepad, Outlook, etc.) is a sequence of small individual instructions executed by your computer’s processor. These instructions operate on data stored in memory (RAM) and also in a small table of special storage locations, called registers. Almost all software assumes that a program’s instructions execute one after another. This assumption is both sound and practical — it is equivalent to assuming that time travel is impossible — and it enables us to write functioning software.
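To make "instructions operating on registers" concrete, here is a toy sketch of our own (the instruction names are made up, not any real instruction set) of a tiny machine executing a program strictly one instruction after another:

```python
# A toy flavor of "software is instructions and data": a tiny machine
# with two registers, executing instructions one at a time, in order.
registers = {"a": 0, "b": 0}

program = [
    ("set", "a", 5),    # a = 5
    ("set", "b", 7),    # b = 7
    ("add", "a", "b"),  # a = a + b
]

for op, dst, src in program:  # one after another, in sequence
    if op == "set":
        registers[dst] = src
    elif op == "add":
        registers[dst] = registers[dst] + registers[src]

assert registers["a"] == 12  # the sequential result software expects
```

Real instructions are far more varied, but the sequential model is the same one that the optimizations below quietly bend.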


These Intel x86-64 processor instructions in notepad.exe on Windows show what software looks like at the instruction level. The arrows flow from branch instructions to their possible destinations.

Processors are designed to be fast. Very, very fast. A modern Intel processor can execute ~300 billion instructions per second. Speed drives new processor sales. Consumers demand speed. Computer engineers have found some amazingly clever ways to make computers fast. Three of these techniques — caching, speculative execution, and branch prediction — are key to understanding Meltdown and Spectre. As you may have guessed, these optimizations are in conflict with the sequential assumption of how the hardware in your computer executes instructions.


Caching

Processors execute instructions very quickly (one instruction every ~2 ns). These instructions need to be stored somewhere, as does the data they operate on. That place is called main memory (i.e. RAM). Reading from or writing to RAM is 50-100x slower (~100 ns/operation) than the speed at which processors execute instructions.

Because reading from and writing to memory is slow (relative to the instruction execution speed), a key goal of modern processors is to avoid this slowness. One way to achieve this is to assume a common behavior across most programs: they access the same data over and over again. Modern processors speed up reads and writes to frequently accessed memory locations by storing copies of the contents of those memory locations in a “cache.” This cache is located on-chip, near the processor cores that execute instructions. This closeness makes accessing cached memory locations considerably faster than going off-chip to the main storage in RAM. Cache access times vary by the type of cache and its location, but they are on the order of ~1ns to ~3ns, versus ~100ns for going to RAM.


An image of an Intel Nehalem processor die (first generation Core i7, taken from this press release). There are multiple levels of cache, numbered by how far away they are from the execution circuitry. The L1 and L2 cache are in the core itself (likely the bottom right/bottom left of the core images). The L3 cache is on the die but shared among multiple cores. Cache takes up a lot of expensive processor real estate because the performance gains are worth it.

Cache capacity is tiny compared to main memory capacity. When a cache fills, any new items put in the cache must evict an existing item. Because of the stark difference in access times between memory and cache, it is possible for a program to tell whether or not a memory location it requested was cached by timing how long the access took. We’ll discuss this in depth later, but this cache-based timing side-channel is what Meltdown and Spectre use to observe internal processor state.
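The timing difference that makes this side-channel possible can be sketched with a toy cache model. Everything here is illustrative (the latencies, the addresses, the tiny capacity), not real hardware behavior:

```python
# A toy model of a cache-based timing side channel. Real attacks time
# actual memory reads; we just simulate the latencies (~2 ns cached
# vs ~100 ns uncached) to show why timing reveals cache state.
CACHED_NS, UNCACHED_NS = 2, 100

class ToyCache:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.lines = []  # most recently used last

    def access(self, addr):
        """Return the simulated access latency and update the cache."""
        if addr in self.lines:
            self.lines.remove(addr)
            self.lines.append(addr)
            return CACHED_NS
        if len(self.lines) >= self.capacity:
            self.lines.pop(0)  # evict the least recently used line
        self.lines.append(addr)
        return UNCACHED_NS

cache = ToyCache()
cache.access(0x1000)         # first access: slow, and now cached
fast = cache.access(0x1000)  # second access: fast
slow = cache.access(0x2000)  # never-seen address: slow

# An attacker cannot read the cache directly, but the timing alone
# tells them 0x1000 was cached and 0x2000 was not.
assert fast < slow
```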

Speculative Execution

Executing one instruction at a time is slow. Modern processors don’t wait. They execute a bundle of instructions at once, then re-order the results to pretend that everything executed in sequence. This technique is called out-of-order execution. It makes a lot of sense from a performance standpoint: executing 4 instructions one at a time would take 8ns (4 instructions x 2 ns/instruction). Executing 4 instructions at once (realistic on a modern processor) takes just 2ns, cutting execution time by 75% (a 4x speedup)!

While out-of-order execution and speculative execution have different technical definitions, for the purposes of this blog post we’ll be referring to both as speculative execution. We feel justified in this because out-of-order execution is by nature speculative. Some instructions in a bundle may not need to be executed. For example, an invalid operation like a division by zero may halt execution, thus forcing the processor to roll back operations performed by subsequent instructions in the same bundle.


A visualization of performance gained by speculatively executing instructions. Assuming 4 execution units and instructions that do not depend on each other, in-order execution will take 8ns while out-of-order execution will take 2ns. Please note that this diagram is a vast oversimplification and purposely doesn’t show many important things that happen in real processors (like pipelining).

Sometimes, the processor makes incorrect guesses about what instructions will execute. In those cases, speculatively executed instructions must be “un-executed.” As you may have guessed, researchers have discovered that some side-effects of un-executed instructions remain.

There are many situations that lead speculative execution to guess wrong, but we’ll focus on the two that are relevant to Meltdown and Spectre: exceptions and branch instructions. Exceptions happen when the processor detects that a rule is being violated. For example, a divide instruction could divide by zero, or a memory read could access memory without permission. We discuss this more in the section on Meltdown. The second, branch instructions, tells the processor what to execute next. Branch instructions are critical to understanding Spectre and are further described in the next section.

Branch Prediction

Branch instructions control execution flow; they specify where the processor should get the next instruction. For this discussion we are only interested in two kinds of branches: conditional branches and indirect branches. A conditional branch is like a fork in the road because the processor must select one of two choices depending on the value of a condition (e.g. A > B, C = 0, etc.). An indirect branch is more like a portal because the processor can go anywhere. In an indirect branch, the processor reads a value that tells it where to fetch the next instruction.
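As a rough analogy in ordinary code (not real branch instructions), a conditional branch behaves like an if statement with two fixed destinations, while an indirect branch behaves like a jump through a table that could point anywhere. The names below are ours, for illustration only:

```python
# Conditional branch: one of two fixed destinations, chosen by a condition.
def conditional(a, b):
    if a > b:          # like a conditional jump: taken or not taken
        return "taken"
    return "not taken"

# Indirect branch: the destination is read from data, so it could be
# anywhere. A function looked up in a table is the closest analogy.
def open_file():
    return "open"

def save_file():
    return "save"

dispatch_table = {"open": open_file, "save": save_file}

def indirect(command):
    target = dispatch_table[command]  # read where to go next...
    return target()                   # ...then jump there
```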


A conditional branch and its two potential destinations. If the two operands of the cmp instruction are equal, this branch will be taken (i.e. the processor will execute instructions at the green arrow). Otherwise the branch will not be taken (i.e. the processor will execute instructions at the red arrow). Code taken from notepad.exe.


An indirect branch. This branch will redirect execution to whatever address is at memory location 0x10000c5f0. In this case, it will call the initterm function. Code taken from notepad.exe.

Branches happen very frequently and get in the way of speculative execution. After all, the processor can’t know which code to execute until after the branch condition calculation completes. The way processors get around this dilemma is called branch prediction. The processor guesses the branch destination. When it guesses incorrectly, the already-executed actions are un-executed and new instructions are fetched from the correct location. This is uncommon. Modern branch predictors are easily 96+% accurate on normal workloads.
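A minimal sketch of why accuracy stays so high: many real predictors build on a two-bit saturating counter per branch, which tolerates a single surprise (like a loop exit) without flipping its prediction. This toy version is a simplification, but the mechanism is genuine:

```python
# A two-bit saturating counter: states 0-1 predict "not taken",
# states 2-3 predict "taken". Each real outcome nudges the state,
# so one mispredicted loop exit doesn't flip the prediction.
class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start weakly "taken"

    def predict(self):
        return self.state >= 2  # True means "predict taken"

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop that runs 8 times, exits once, then runs 8 more times.
history = [True] * 8 + [False] + [True] * 8

p = TwoBitPredictor()
hits = 0
for taken in history:
    if p.predict() == taken:
        hits += 1
    p.update(taken)  # predictor learns the real outcome afterward

# Only the single loop exit is mispredicted: 16 of 17 correct.
assert hits == 16
```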

When the branch predictor is wrong, the processor speculatively executes instructions with the wrong context. Once the mistake is noticed, these phantom instructions are un-executed. As we’ll explain, the Spectre bug shows that it is possible to control both the branch predictor and to determine some effects of those un-executed instructions.


Meltdown

Now let’s apply the above computer architecture knowledge to explain Meltdown. The Meltdown bug is a design flaw in (almost) every Intel processor released since 1995. Meltdown allows a specially crafted program to read core operating system memory that it should not have permission to access.

Processors typically have two privilege modes: user and kernel. The user part is for normal programs you interact with every day. The kernel part is for the core of your operating system. The kernel is shared among all programs on your machine, making sure they can function together and with your computer hardware, and contains sensitive data (keystrokes, network traffic, encryption keys, etc) that you may not want exposed to all of the programs running on your machine. Because of that, user programs are not permitted to read kernel memory. The table that determines what part of memory is user and what part is kernel is also stored in memory.

Imagine a situation where some kernel memory content is in the cache, but its permissions are not. Checking permissions will be much slower than simply reading the value of the content (because it requires a memory read). In these cases, Intel processors check permissions asynchronously: they start the permission check, read the cached value anyway, and abort execution if permission was denied. Because processors are much faster than memory, dozens of instructions may speculatively execute before the permission result arrives. Normally, this is not a problem. Any instructions that happen after a permissions check fails will be thrown away, as if they were never executed.

What researchers figured out was that it was possible to speculatively execute a set of instructions that would leave an observable sign (via a cache timing side-channel), even after un-execution. Furthermore, it was possible to leave a different sign depending on the content of kernel memory — meaning a user application could indirectly observe kernel memory content, without ever having permission to read that memory.

Technical Details


A graphical representation of the core issue in Meltdown, using terminology from the steps described below. It is possible to speculatively read core system memory without permission. The effects of these temporary speculative reads are supposed to be invisible after instruction abort and un-execution. It turns out that cache effects of speculative execution cannot be undone, which creates a cache-based timing side channel that can be used to read core system memory without permission.

At a high level, the attack works as follows:

  1. A user application requests a large block of memory, which we’ll call bigblock, and ensures that none of it is cached. The block is logically divided into 256 pieces (bigblock[0], bigblock[1], bigblock[2], ... bigblock[255]).
  2. Some preparation takes place to ensure that a memory permissions check for a kernel address will take a long time.
  3. The fun begins! The program will read one byte from a kernel memory address — we’ll call this value secret_kernel_byte. As a refresher, a byte can be any number in the range of 0 to 255. This action starts a race between the permissions check and the processor.
  4. Before the permissions check completes, the hardware continues its speculative execution of the program, which uses secret_kernel_byte to read a piece of bigblock (i.e. x = bigblock[secret_kernel_byte]). This use of a piece of bigblock will cache that piece, even if the instruction is later undone.
  5. At this point the permissions check returns permission denied. All speculatively executed instructions are un-executed and the processor pretends it never read memory at bigblock[secret_kernel_byte]. There is just one problem: a piece of bigblock is now in the cache, and it wasn’t before.
  6. The program will time how long it takes to read every piece of bigblock. The piece cached due to speculative execution will be read much faster than the rest.
  7. The index of the piece in bigblock is the value of secret_kernel_byte. For example, if bigblock[42] was read much faster than any other entry, the value of secret_kernel_byte must be 42.
  8. The program has now read one byte from kernel memory via a cache timing side-channel and speculative execution.
  9. The program can now continue to read more bytes. The Meltdown paper’s authors report that they can read kernel memory at a rate of 503 KB/s using this technique.
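The steps above can be re-enacted as a toy simulation. Everything here (the simulated cache, the permission check, the secret value, all the names) is our own model rather than real attack code; it only shows how the cache effect survives the rollback:

```python
# A toy re-enactment of steps 1-7. The "processor" speculatively
# indexes a 256-entry probe array with a secret byte, the read is
# rolled back, and the attacker recovers the byte by timing which
# entry ended up cached.
SECRET_KERNEL_BYTE = 42  # what the attacker wants to learn

probe_cache = set()  # which pieces of bigblock are cached (step 1: none)

def speculative_window():
    """Steps 3-5: the speculative read racing the permission check."""
    secret = SECRET_KERNEL_BYTE  # step 3: read the kernel byte
    probe_cache.add(secret)      # step 4: touch bigblock[secret], caching it
    raise PermissionError        # step 5: check fails, instructions un-execute

try:
    speculative_window()
except PermissionError:
    pass  # architectural state is rolled back -- but the cache is not

def timed_read(i):
    """Step 6: fast if bigblock[i] is cached, slow otherwise."""
    return 2 if i in probe_cache else 100

# Step 7: the index of the fastest entry is the secret byte.
timings = [timed_read(i) for i in range(256)]
recovered = timings.index(min(timings))
assert recovered == SECRET_KERNEL_BYTE
```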

What is the impact?

Malicious software can use Meltdown to more easily gain a permanent foothold on your desktop and to spy on your passwords and network traffic. This is definitely bad. You should go apply the fixes. However, malicious software could already do those things, albeit with more effort.

If you are a cloud provider (like Amazon, Google, or Microsoft) or a company with major internet infrastructure (like Facebook), then this bug is an absolute disaster. It’s hard to overstate just how awful this bug is. Here’s the problem: the cloud works by dividing a massive datacenter into many virtual machines rentable by the minute. A single physical machine can have hundreds of different virtual tenants, each running their own custom code. Meltdown breaks down the walls between tenants: each of those tenants could potentially see everything the other is doing, like their passwords, encryption keys, source code, etc. Note: how the physical hardware was virtualized matters. Meltdown does not apply in some cases. The details are beyond the scope of this post.

The fix for Meltdown incurs a performance penalty. Some sources say it is a 5-30% performance penalty, some say it is negligible, and others say single digits to noticeable. What we know for sure is that older Intel processors are impacted much more than newer ones. For a desktop machine, this is slightly inconvenient. For a large cloud provider or internet company, a 5% performance penalty across their entire infrastructure is an enormous price. For example, Amazon is estimated to have 2 million servers. A 5 to 30% slowdown could mean buying and installing 100,000 (5%) to 600,000 (30%) additional servers to match prior capability.
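The server figures above are simple proportional estimates; a sketch of the arithmetic (including the slightly higher exact numbers) looks like this:

```python
# Back-of-envelope math for the fleet-size estimate above.
# A fleet slowed by fraction p needs extra machines to match its
# prior capacity: roughly servers * p, or exactly servers * p / (1 - p).
servers = 2_000_000  # the estimated Amazon fleet size cited above

def extra_simple(p):
    return round(servers * p)

def extra_exact(p):
    return round(servers * p / (1 - p))

assert extra_simple(0.05) == 100_000  # the 5% figure in the text
assert extra_simple(0.30) == 600_000  # the 30% figure

# The exact figures are a bit higher, since the remaining capacity
# is itself slowed: ~105,263 (5%) and ~857,143 (30%).
assert extra_exact(0.05) == 105_263
assert extra_exact(0.30) == 857_143
```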

What should I do?

Please install the latest updates to your operating system (e.g. macOS, Windows, Linux). All major software vendors have released fixes that should be applied by your automatic updater.

All major cloud providers have deployed fixes internally, and you as a customer have nothing to worry about.

To be continued…

We hope you have a better understanding of computer architecture concepts and the technical details behind Meltdown. In the second half of this blog post we will explain the technical details of Spectre V1 and Spectre V2 and discuss why these bugs managed to stay hidden for the past 25 years. The technical background will get more complicated, but the bugs are also more interesting.

Finally, we’d like to remind our readers that this blog post was written to be accessible to someone without a computer architecture background, and we sincerely hope we succeeded in explaining some difficult concepts. The Meltdown and Spectre papers, and the Project Zero blog post are better sources for the gory details.

Heavy lifting with McSema 2.0

Four years ago, we released McSema, our x86 to LLVM bitcode binary translator. Since then, it has stretched and flexed; we added x86-64 support, put it on a performance-focused diet, and improved its usability and documentation.

McSema wasn’t the only thing improving these past years, though. At the same time, programs were increasingly adopting modern x86 features like the advanced vector extensions (AVX) instructions, which operate on 256-bit wide vector registers. Adjusting to these changes was back-breaking but achievable work. Then our lifting goals expanded to include AArch64, the architecture used by modern smartphones. That’s when we realized that we needed to step back and strengthen McSema’s core. This change in focus paid off; now McSema can transpile AArch64 binaries into x86-64! Keep reading for more details.

Enter the dragon

Today we are announcing the official release of McSema 2.0! This release greatly advances McSema’s core and brings several exciting new developments to our binary lifter:

  • Remill. Instruction semantics are now completely separated into Remill, their own library. McSema is a client that uses the library for binary lifting. To borrow an analogy, McSema is to Remill as Clang is to LLVM. Look out for future projects using Remill.
  • Simplified semantics. The separation of McSema and Remill makes it easier to add support for new instructions. In Remill, instruction semantics can be expressed directly in C++ and are automatically compiled by Clang into LLVM bitcode.
  • AArch64 (64-bit ARMv8). The switch to using Remill as a semantics backend means that McSema 2 supports multiple architectures from the start. Not only does it work on x86 and x86-64 binaries, but it also supports lifting 64-bit ARMv8 programs.
  • SSE3/4 and AVX support. McSema now supports lifting programs that utilize advanced vector instruction sets.
  • Better CFG recovery. A common source of lifting errors is poor control flow recovery. We improved the control flow recovery process to make it simpler, faster, and more accurate. McSema’s CFG recovery is also beginning to incorporate advanced features, like lifting global variables and stack variables.
  • Binary Ninja support. McSema now has beta support for recovering program control flow via Binary Ninja.

McSema 2.0 is under active development and is rapidly improving and gaining features. We hope to make both using and hacking on McSema easier and more accessible than ever.

See it soar: Using McSema 2

The biggest change to McSema is the switch to using Remill for instruction semantics, and the subsequent support for AArch64. A good demonstration of this improvement is to show that McSema can disassemble an AArch64 binary, lift it to bitcode, and then recompile that bitcode to x86-64 machine code. Let’s get to it then!

Getting McSema

The first step is to download and install the code. For now, Linux is the primary platform supported by McSema; however, we are working toward macOS and Windows build support. If your goal is to lift Windows binaries, then no sweat! Linux builds of McSema will happily analyze Windows binaries.

The above linked instructions give more details that you should follow (e.g. getting dependencies, resolving common errors, etc.), but the essential steps to downloading and installing McSema are as follows:

mkdir ~/data
cd ~/data
git clone https://github.com/trailofbits/remill.git
cd ~/data/remill/tools
git clone https://github.com/trailofbits/mcsema.git
cd ~/data
~/data/remill/scripts/build.sh --llvm-version 3.9
cd ~/data/remill-build
sudo make install

These commands will clone Remill and McSema, invoke a common build script that compiles both projects in the ~/data/remill-build directory, and then install the projects onto the system.

Disassembling our first binary

Using McSema is usually a two- or three-step process. The first step is always to disassemble a binary into a “control flow graph” file using the mcsema-disass command-line tool. This file contains all of the program binary’s original code and data, but organized into logical groupings, like variables, functions, blocks of instructions, and references therebetween.

We’ll use Felipe Manzano’s maze, compiled as an AArch64 program binary, as our running example. It’s an interactive, command-line game that asks the user to solve a maze. Precompiled binaries for the maze can be found in McSema’s examples/Maze/bin directory.

cd ~/data/remill/tools/mcsema/examples/Maze/bin
mcsema-disass --arch aarch64 \
  --os linux \
  --binary maze.aarch64 \
  --output /tmp/maze.aarch64.cfg \
  --log_file /tmp/maze.aarch64.log \
  --entrypoint main \
  --disassembler /opt/ida-6.9/idal64

The above steps will produce a control flow graph (CFG) file from the maze program, saving the CFG file to /tmp/maze.aarch64.cfg. If you’re following along at home and don’t have a licensed version of IDA Pro, but do have a Binary Ninja license, then you can change the value passed to the --disassembler option to point to the Binary Ninja executable instead (i.e. --disassembler /opt/binaryninja/binaryninja). Finally, if you are one of those radare2 holdouts, then no sweat — we have CFG files for the maze binary already made.

Lifting to bitcode

The second step is to lift the CFG file into LLVM bitcode using the mcsema-lift-3.9 command-line tool. The 3.9 in this case isn’t the McSema version; it’s the LLVM toolchain version. LLVM is a fast-evolving project, which sometimes means that interesting projects (e.g. KLEE) are left behind and only work with older LLVM versions. We’ve tried to make it as simple as possible for users to reap the benefits of using McSema — that’s why McSema works using LLVM versions 3.5 and up. In fact, with McSema 2, you can now have multiple versions of McSema installed on your system, each targeting distinct LLVM versions. Enough about that, time to lift some weights!

mcsema-lift-3.9 --arch aarch64 \
  --os linux \
  --cfg /tmp/maze.aarch64.cfg \
  --output /tmp/maze.aarch64.bc \
  --explicit_args

The above command instructs McSema to save the lifted bitcode to the file /tmp/maze.aarch64.bc. The --explicit_args command-line flag is a new feature of McSema 2 that emulates the original behavior of McSema 1. If your goal is to perform static analysis or symbolic execution of lifted bitcode, then you will want to employ this option. Similarly, if you are compiling bitcode lifted from one architecture (e.g. AArch64) into machine code of another architecture (e.g. x86-64), then you also want this option. On the other hand, if your goal is to compile the lifted bitcode back into an executable program for the same architecture (as is the case for the Cyber Fault-tolerance Attack Recovery program), then you should not use --explicit_args.

Compiling bitcode back to runnable programs

It’s finally time to make the magic happen — we’re going to take bitcode from an AArch64 program, and make it run on x86-64. We have conveniently ensured that a Clang compiler is installed alongside McSema, and in such a way that it does not clash with any other compilers that you already have installed. Here’s how to use that Clang to compile the lifted bitcode into an executable named /tmp/maze.aarch64.lifted.

remill-clang-3.9 -o /tmp/maze.aarch64.lifted /tmp/maze.aarch64.bc

Note: If for some reason remill-clang-3.9 does not work for you, then you can also use ~/data/remill-build/libraries/llvm/bin/clang.

Solving the maze

We’ve now successfully transpiled an AArch64 program binary into an x86-64 program binary. Wait, what? Yes, we really did that. Running the transpiled version shows us the correct output, prompting us with instructions on how to play the game.

$ /tmp/maze.aarch64.lifted
Maze dimensions: 11x7
Player position: 1x1
Iteration no. 0
Program the player moves with a sequence of 'w', 's', 'a' and 'd'
Try to reach the price(#)!
+-+---+---+
|X|     |#|
| | --+ | |
| |   | | |
| +-- | | |
|     |   |
+-----+---+

But what if — try as we might — we’re not able to solve the maze? That won’t be a problem, because we can always use the KLEE symbolic executor to solve the maze for us.

Your new workout routine

We’ve practiced all the moves and your new workout routine is ready. Day 1 in your routine is to disassemble a binary and make a CFG file.

mcsema-disass --arch aarch64 \
  --os linux \
  --binary ~/data/remill/tools/mcsema/examples/Maze/bin/maze.aarch64 \
  --output /tmp/maze.aarch64.cfg \
  --log_file /tmp/maze.aarch64.log \
  --entrypoint main \
  --disassembler /opt/ida-6.9/idal64

Day 2 is your lift day, where we lift the CFG file into LLVM bitcode.

mcsema-lift-3.9 --arch aarch64 \
  --os linux \
  --cfg /tmp/maze.aarch64.cfg \
  --output /tmp/maze.aarch64.bc \
  --explicit_args

Day 3 ends your week with some intense compiling, producing a new machine code executable from the lifted bitcode.

remill-clang-3.9 -o /tmp/maze.aarch64.lifted /tmp/maze.aarch64.bc

Finally, don’t forget your stretches. We want to make sure those muscles still work.

echo ssssddddwwaawwddddssssddwwww | /tmp/maze.aarch64.lifted

Come with me if you want to lift

The Maze transpiling and symbolic execution demos scratch the surface of what you can do with McSema 2. The ultimate goal has always been to enable binaries to be treated like source code. With the numerous improvements in McSema 2, we are getting closer to that ideal. In the coming months we’ll talk more about other exciting features of McSema 2 (like stack and global variable recovery) and how Trail of Bits and others are using McSema.

We’d love to talk to you about McSema and how it can solve your binary analysis and transformation problems. We’re always available at the Empire Hacking Slack and via our contact page.

For now though, put your belt on — it’s time for some heavy lifting. McSema version 2 is ready for your binaries.

Videos from Ethereum-focused Empire Hacking

On December 12, over 150 attendees learned how to write and hack secure smart contracts at the final Empire Hacking meetup of 2017. Thank you to everyone who came, to our superb speakers, and to Datadog for hosting this meetup at their office.

Watch the presentations again

We believe strongly that the community should share what knowledge it can. That’s why we’re posting these recordings from the event. We hope you find them useful.

A brief history of smart contract security

Jon Maurelian of Consensys Diligence reviewed the past, present, and future of Ethereum with an eye for security at each stage.


  • Ethereum was envisioned as a distributed shared computer for the world. High level languages such as Solidity enable developers to write smart contracts.
  • This shared computer where anyone can execute code comes with a number of inherent security issues. Delegate calls, reentrancy, and other idiosyncrasies of Ethereum have been exploited on the public chain for spectacular thefts.
  • The most exciting upcoming developments include safer languages like Viper, the promise of on-chain privacy with zk-SNARKs, and security tooling like Manticore and KEVM.

A CTF Field Guide for smart contracts

Sophia D’Antoine of Trail of Bits discussed recent Capture the Flag (CTF) competitions that featured Solidity and Ethereum challenges, and the tools required to exploit them.


  • CTFs have started to include Ethereum challenges. If you want to set up your own Ethereum CTF, reference Sophia’s scripts from CSAW 2017.
  • Become familiar with projects from Trail of Bits, like Manticore, Ethersplay, and Not So Smart Contracts, to learn about Ethereum security and compete in CTFs.
  • Integer overflows and reentrancy are common flaws to include in challenges. Review how to discover and exploit these flaws in write-ups from past competitions.

Automatic bug finding for the blockchain

Mark Mossberg of Trail of Bits explained practical symbolic execution of EVM bytecode with Manticore.


  • Symbolic execution is a program analysis technique that can achieve high code coverage, and has been used to create effective automated bug finding systems.
  • When applied to Ethereum, symbolic execution can automatically discover functions in a contract, generate transactions to trigger contract states, and check for failure states.
  • Manticore, an open source program analysis tool, uses symbolic execution to analyze EVM smart contracts.

Addressing infosec needs with blockchain technology

Paul Makowski introduced PolySwarm, an upcoming cybersecurity-focused Ethereum token, and explained how it aligns incentives and addresses deficiencies in the threat intelligence industry.


  • The economics of today’s threat intelligence market produce solutions with largely overlapping detection capabilities which result in limited coverage and expose enterprises to innovative threats.
  • Ethereum smart contracts provide a distributed platform for intelligent, programmed market design. They fix the incentives in the threat intelligence space without becoming a middleman.
  • PolySwarm unlocks latent security expertise by removing barriers to participate in tomorrow’s threat-intelligence community. PolySwarm directs this expertise toward the greater good, getting more security experts to create a better collective defense for all.

Learn more about Empire Hacking

Let’s secure your smart contracts

We’ve become one of the industry’s most trusted providers of audits, tools, and best practices for securing smart contracts and their adjacent technologies. We’ve secured token launches, decentralized apps, and entire blockchain platforms.

Contact us for help.

What are the current pain points of osquery?

You’re reading the second post in our four-part series about osquery. Read post number one for a snapshot of the tool’s current use, the reasons for its growing popularity among enterprise security teams, and how it stacks up against commercial alternatives.

osquery shows considerable potential to revolutionize the endpoint monitoring market. (For example, it greatly simplifies the detection of signed malware with Windows executable code signature verification.) However, the teams at five major tech firms and osquery developers whom we interviewed for this series say that the open source tool has room for improvement.

Some of these qualms relate to true limitations. However, we’ve also heard some recent grumbling among prospective users and industry competitors that doesn’t align with osquery’s actual shortcomings.

As with many rapidly improving open source projects, documentation updates lagging behind a blistering development schedule may be to blame for these misconceptions. So, before diving into what true pain points the osquery community should tackle next, let’s shed light on some current myths about osquery’s limitations.

Mythical limitations of osquery

“osquery has no support for containers”

Oh, but it does! Teams like Uptycs have poured significant development support into this much-requested feature within the past year. Currently, osquery can perform container introspection at the management host layer (more efficient) or it can operate in each container (more granularity) without dominating CPU.

“osquery cannot operate in real time”

osquery handles file integrity monitoring and process auditing for macOS and Linux. It can also monitor user access, hardware events, and socket events on select operating systems. It performs these tasks through interaction with the kernel’s audit API. Unlike its other pull-based queries, these monitoring services create event-based logs in real time and ensure that osquery doesn’t miss important events between queries. These features are essential to osquery’s power in incident detection.
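For illustration, a minimal osquery configuration that turns on real-time file events might look like the following. The paths and interval here are examples we chose, and the exact options vary by osquery version:

```json
{
  "schedule": {
    "file_events": {
      "query": "SELECT * FROM file_events;",
      "interval": 300
    }
  },
  "file_paths": {
    "configuration": ["/etc/%%"],
    "homes": ["/root/%%", "/home/%%"]
  }
}
```

With a config like this, osquery records file changes under the listed paths as they happen, and the scheduled query merely flushes the accumulated events to the logs.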

“osquery is high overhead”

osquery is a lightweight solution when correctly deployed and managed. As we’ll touch on later in the post, the leading causes of performance issues come from misconfiguration: scheduling runaway queries, performing event-based queries on high-traffic file paths, or running osquery in a resource-constrained environment without implementing CPU controls. In fact, respondents who implemented safeguards such as Cgroups were so confident in osquery’s performance that they deployed the tool on every endpoint, including production servers.
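As one example of such a safeguard, on a systemd-based Linux host you could cap osquery’s CPU and memory with a cgroup-backed drop-in unit file. The path and limit values below are illustrative, not an official osquery recommendation:

```ini
# /etc/systemd/system/osqueryd.service.d/limits.conf
# systemd enforces these limits through cgroups; reload systemd and
# restart osqueryd after adding this drop-in.
[Service]
CPUQuota=20%
MemoryMax=512M
```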

Current limitations of osquery; the facts

Demand for user support outstrips supply

For an open-source project, osquery offers a lot of user support: a website, an active Slack channel, extensive documentation, and lots of blog posts (like this one!). That said, the community can do better.

  • Documentation updates have not matched pace with the growing list of project feature updates, leaving users unaware of new functionality or confused about how to use it.
  • Confused users flocking to the osquery Slack channel ask similar questions. Experts end up helping these individuals one-on-one instead of creating FAQs, writing comprehensive query packs, and making tutorials.

Users test feature integrity

Facebook has done an excellent job at thoroughly reviewing new code and hosting productive debates about how best to build new features. However, there has been a lack of oversight into the efficacy of new features. So far, this has been the job of users who report edge cases and unexpected behavior to the Slack channel or GitHub repo.

A recent example of this issue: developers and users saw false negatives in osquery’s file integrity monitoring. The audit backend contained multiple bugs that caused inaccurate logs, and they persisted unchecked since FIM was first enabled in 2015. Thankfully, our own Alessandro Gario is implementing a fix.

Issues with extensions

This crucial component of osquery has been causing problems for users. The issues stem from insufficient support during their development. osquery’s current SDK only provides the bare minimum needed to integrate with osquery. The documentation and APIs are also limited. Because of these factors, many extensions aren’t well-built or well-maintained, and therefore, introduce unreliable results.

Fortunately, the community has started to resolve the issues. Kolide’s osquery-go provides a rich SDK for developers to create osquery extensions with Go. Last week, we explained how to write an extension for osquery. We also released a repository of our own well-maintained osquery extensions that users can pull from (there’s only one in there right now but more to come, soon!). We intend to help the community navigate the extension-building process and to create a reliable source of updated extensions.

Limited platform support beyond Linux and macOS

Users are eager for osquery to support more platforms and provide better introspection on all endpoints. Because osquery currently supports only a subset of platforms, holes remain in users’ monitoring coverage. Further, interviewees noted that some supported platforms lacked important features.

For example, macOS and Linux platforms can collect usable real-time data about a variety of event types. Real-time data in osquery for Windows is limited to the Windows Event Log, which only exposes a stream of difficult-to-parse system data. No users whom we interviewed had successfully implemented or parsed these logs in their deployments.


From Teddy Reed and Mitchell Grenier’s osquery presentation at the 2017 LISA conference

Readable real-time monitoring underpins osquery’s incident-detection capabilities. While scheduled queries can miss system events between queries, real-time event-based monitoring is less prone to false negatives. The absence of this feature in the Windows port greatly degrades osquery’s incident-detection utility for users running Windows machines in their fleets.

Runaway queries and guidance

Respondents reserved some of their harshest criticism for the lack of safeguards against bad queries, especially those that unexpectedly pulled an excessive amount of data. Sometimes these runaway queries caused major issues for the larger fleet, such as degraded endpoint performance and clogged memory. In addition, malformed queries could flood data logs and overwrite other collected data, allowing other events to pass by undetected.

osquery’s watchdog feature does prevent some performance issues by killing any processes that consume too much CPU or memory. However, this is done without consideration of what’s running at the time. Well-formed audit-based queries often exceed the default quotas, killing processes unnecessarily. As a result, users turned off the feature to avoid missing essential data. A better solution would understand the scale of a user’s query and ask for confirmation.

Users also wanted smarter queries. One interviewee wanted guidance on the right query intervals for different types of system data. He also wanted to reclaim the storage wasted by overlapping data within different query packs. Though the wasted storage is relatively cheap, it would be helpful if osquery could de-duplicate this data.

Insufficient debugging/diagnostics at scale

Users struggled with large-scale deployment of osquery, primarily because of difficulty debugging and diagnosing query issues in their fleets. One company reported that roughly 15% of nodes queried persist as pending for unknown reasons. Another reported that certain endpoints would occasionally “fall off the internet” without any apparent cause. Though users can restart osquery with the verbose setting to print information about every action performed, this option is primarily a tool for developers and is not user-friendly.

Deployment and maintenance issues

Every company implementing osquery tackles this ongoing struggle in a different way. We went into great detail in our previous post about the variety of tools and techniques they used to manage this problem. Despite support and documentation improvements, issues persist. One user reported ongoing troubles implementing osquery version updates on endpoints for which employees are admins.


After reading about all of these pain points, you might be wondering why osquery won multiple product awards this year, and why over 1,100 users have engaged with the development community’s Slack channel and GitHub repo. You might be wondering why the teams we surveyed at five top tech companies for this series reported that they liked osquery better than commercial fleet management tools.

It’s simple. The tool has attracted a vibrant development community invested in its success. Every new development brings osquery closer to feature-parity with the equivalent components of competitors’ fully integrated – and higher-priced – security suites. As users commission companies like Trail of Bits to make those improvements, the entire community benefits.

That will be the topic of the third post in this series: osquery’s development requests. If you use osquery today and have requests you’d like to add to our research, please let us know! We’d love to hear from you.

How does your experience with osquery compare to the pains mentioned in this post? Do you have other complaints or issues that you’d like to see addressed in future releases? Tell us! Help us lead the way in improving osquery’s development and implementation.

Announcing the Trail of Bits osquery extension repository

Today, we are releasing access to our maintained repository of osquery extensions. Our first extension takes advantage of the Duo Labs EFIgy API to determine if the EFI firmware on your Mac fleet is up to date.

There are very few examples of publicly released osquery extensions, and very little documentation exists on the topic. This post aims to help future developers navigate the process of writing an extension for osquery. The rest of this post describes how we implemented the EFIgy extension for osquery.

About EFIgy

At this year’s Ekoparty, Duo Labs presented the results of its research on the state of support and security in EFI firmware. These software components are especially interesting to attackers: they operate at a privilege level that is out of reach of even operating systems and hypervisors. Duo Labs gathered and analyzed all the publicly released Apple updates from the last three years and verified the information by looking at more than 73,000 Macs across different organizations.

The researchers found that many of these computers were running on outdated firmware, even though the required EFI updates were supposed to be bundled in the same operating system patches that the hosts had installed correctly. Duo Labs followed this finding by creating the EFIgy service, a REST endpoint that can access the latest OS and EFI versions for any known Apple product through the use of details such as logic board id and product name.

Programmatically querying EFIgy

EFIgy expects a JSON object containing the details for the system we wish to query. The JSON request doesn’t require many keys; it boils down to hardware model and software versions:

{
  "board_id": "Mac-66E35819EE2D0D05",
  "smc_ver": "2.37f21",
  "build_num": "16G07",
  "rom_ver": "MBP132.0226.B20",
  "hw_ver": "MacBookPro12,1",
  "os_ver": "10.12.4",
  "hashed_uuid": ""
}

The rom_ver key is a bit tricky. It doesn’t include the full EFI version because (according to the author) the full version includes timestamps that are not necessarily useful or easy to keep track of. Of the dot-separated fields, you only need the first, third, and fourth.

As the name suggests, the hashed_uuid field is a SHA256 digest. To compute it correctly, the MAC address of the primary network interface must be prefixed to the system UUID like this: “0x001122334455” + “12345678-1234-1234-1234-1234567890AB”.
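Both derivations are easy to script. Here is a minimal Python sketch; the field handling follows the description above, and the full ROM version string used in the example is hypothetical:

```python
import hashlib

def trim_rom_ver(full_rom_ver: str) -> str:
    """Keep only the first, third, and fourth dot-separated fields."""
    fields = full_rom_ver.split(".")
    return ".".join([fields[0], fields[2], fields[3]])

def hashed_uuid(mac_address: str, system_uuid: str) -> str:
    """SHA256 digest of the primary MAC address prefixed to the system UUID."""
    return hashlib.sha256((mac_address + system_uuid).encode()).hexdigest()

# Hypothetical full EFI version string; only fields 1, 3, and 4 are kept.
print(trim_rom_ver("MBP132.88Z.0226.B20.1702161608"))  # MBP132.0226.B20
print(hashed_uuid("0x001122334455", "12345678-1234-1234-1234-1234567890AB"))
```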

The remaining keys are self-explanatory, but keep in mind that they will not be reported correctly when running from a virtual machine. board_id, hw_ver and rom_ver will report information about the virtual components offered by your hypervisor.

Querying the service is simple. The JSON data is sent via an HTTP POST request to the following REST endpoint:

The server response is made of three JSON objects. Compare them with your original values to understand whether your system is fully up to date or not.

{
  "latest_efi_version": {
    "msg": "<version>"
  },
  "latest_os_version": {
    "msg": "<version>"
  },
  "latest_build_number": {
    "msg": "<error message>",
    "error": "1"
  }
}
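A sketch of how you might compare the response against the values you sent, in Python. The simple string-equality comparison and the error handling shown here are my assumptions, not EFIgy’s documented semantics:

```python
def is_up_to_date(local: dict, response: dict) -> bool:
    """Compare local system values against EFIgy's reported latest versions."""
    checks = {
        "rom_ver": "latest_efi_version",
        "os_ver": "latest_os_version",
        "build_num": "latest_build_number",
    }
    for local_key, response_key in checks.items():
        entry = response.get(response_key, {})
        if entry.get("error") == "1":  # server reported an error for this field
            return False
        if local.get(local_key) != entry.get("msg"):
            return False
    return True

local = {"rom_ver": "MBP132.0226.B20", "os_ver": "10.12.4", "build_num": "16G07"}
response = {
    "latest_efi_version": {"msg": "MBP132.0226.B20"},
    "latest_os_version": {"msg": "10.12.6"},
    "latest_build_number": {"msg": "16G1510"},
}
print(is_up_to_date(local, response))  # False: the OS and build are behind
```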

Developing osquery extensions

The utilities provided by Duo Labs are straightforward to use, but manually running them on each system in our fleet was not an easy task. We decided to implement an osquery extension that queries EFIgy, an idea we got from Chris Long.

Why write an extension and not a virtual table? We’ve decided to keep native operating system functions in core and convert everything else into an extension. If a new feature uses an external service or a non-native component, we will default to writing an extension.

The only toolset you have available is the standard library and what you manually import into your project. osquery links everything statically, so you have to take special care with the libraries you use. Don’t rely on loading dynamic libraries at runtime from your program.

Once your environment is ready, table extensions do not require much work to be implemented. You just have to inherit from the osquery::TablePlugin class and override the two methods used to define the columns and generate the table rows.

class MyTable final : public osquery::TablePlugin {
 private:
  osquery::TableColumns columns() const override {
    return {
      std::make_tuple("column_name", osquery::TEXT_TYPE, osquery::ColumnOptions::DEFAULT),
    };
  }

  osquery::QueryData generate(osquery::QueryContext& request) override {
    osquery::Row row;
    row["column_name"] = "value";

    return {row};
  }
};
The source files must then be placed in a dedicated folder inside osquery/external. Note that you must add the “extension_” prefix to the folder name. Otherwise, the CMake project will ignore it.

For more complex projects, it is also possible to add a CMakeLists.txt file, creating the target using the following helper function:

ADD_OSQUERY_EXTENSION(${PROJECT_NAME} source1.cpp source2.cpp)

You will have access to some of the libraries in osquery such as Boost, but not some other utilities (e.g. the really useful http_client class).

There is no list of recommended steps to take when developing an extension, but if you plan on writing more than one I recommend you bundle your utility functions in headers that can then be easily imported and reused. Keeping all external libraries statically linked is also a good idea, as it will make redistribution easier.

Using our osquery extensions repo

Without anywhere to submit our new feature, we created a new repository for our extensions. The EFIgy extension is the first item available. Expect more to follow.

Using the repository is simple, but you will have to clone the full source code of osquery first, since the SDK is not part of the distributable package. Building the extension is easy: create a symbolic link to the source folder you want to compile inside the osquery/external folder, taking care to name the link according to the scheme extension_<name>. You can then follow the usual build process for your platform, as the default ALL target will also build all extensions.

cd /src/osquery-extensions
ln -s efigy /src/osquery/external/extension_efigy

cd /src/osquery
make sysprep
make deps

make -j `nproc`
make externals

Extensions are easy to use. You can test them (both with the shell and the daemon) by specifying their path with the --extension parameter. Since they are normal executables, you can also start them after osquery. They will automatically connect via Thrift and expose the new functions. The official documentation explains the process very well.

To quickly test the extension, you can either start it from the osqueryi shell, or launch it manually and wait for it to connect to the running osquery instance.

osqueryi --extension /path/to/extension

Take action

If you have a Mac fleet, you can now monitor it with osquery and the EFIgy extension, and ensure all your endpoints have received the required software and firmware updates.

If you’re reading this post some time in the future, you have even more reason to visit our osquery extension repository. We’ll keep it maintained and add to it over time.

Do you have an idea for an osquery extension? Please file an issue on our GitHub repo. Do you need osquery development? Contact us.

Securing Ethereum at Empire Hacking

If you’re building real applications with blockchain technology and are worried about security, consider this meetup essential. Join us on December 12th for a special edition of Empire Hacking focused entirely on the security of Ethereum.

Why attend?

Four blockchain security experts will be sharing how to write secure smart contracts, and how to hack them. Two of the speakers come from our team.

We’ve become one of the industry’s most trusted providers of audits, tools, and best practices for securing smart contracts and their adjacent technologies. We’ve secured token launches, decentralized apps, and entire blockchain platforms. As with past Empire Hacking events, we’re excited to share the way forward with the development community.

Who will be speaking?

  • Sophia D’Antoine of Trail of Bits will discuss Solidity and Ethereum challenges in recent CTF competitions and the tools required to exploit them.
  • John Maurelian of ConsenSys Diligence will discuss his highlights from Devcon3 about the latest developments in Ethereum security.
  • Mark Mossberg will be sharing how Trail of Bits finds bugs in EVM bytecode with our symbolic execution engine, Manticore.
  • Paul Makowski will share his upcoming security-focused Ethereum token, PolySwarm, which uses blockchain technology to address deficiencies in the threat intelligence industry.
  • Amber Baldet and Brian Schroeder of the Enterprise Ethereum Alliance will discuss the threat modeling, confidential transactions, and zero-knowledge proofs that went into the Quorum blockchain.

When and where?

We’ll be meeting December 12th at 6pm. This month’s meetup will take place at the DataDog offices in the New York Times building. RSVP is required. As per usual, light food and beer will be served.

To find out more about Empire Hacking:

How are teams currently using osquery?

In the year since we ported osquery to Windows, the operating system instrumentation and endpoint monitoring agent has attracted a great deal of attention in the open-source community and beyond. In fact, it recently received the 2017 O’Reilly Defender Award for best project.

Many large and leading tech firms have deployed osquery to do totally customizable and cost-effective endpoint monitoring. Their choice and subsequent satisfaction fuels others’ curiosity about making the switch.

But deploying new software to your company’s entire fleet is not a decision to be made lightly. That’s why we sought to take the pulse of the osquery community – to help current and potential users know what to expect. This marks the start of a four-part blog series that sheds light on the current state of osquery, its shortcomings and opportunities for improvement.

Hopefully, the series will help those of you who are sitting on the fence decide if and how to deploy the platform in your companies.

For our research, we interviewed teams of osquery users at five major tech firms. We asked them:

  • How is osquery deployed and used currently?
  • What benefits are your team seeing?
  • What have been your biggest pain points about using osquery?
  • What new features would you most like to see added?

This post will focus on current use of osquery and its benefits.

How are companies using osquery today?

Market Penetration

osquery’s affordability, flexibility, and cross-platform compatibility have quickly established its place in the endpoint monitoring toolkits of top tech firms. Since its debut in October 2014, over 1,000 users from more than 70 companies have engaged with the development community through its Slack channel and GitHub repo. In August, osquery developers at Facebook began offering bi-weekly office hours to discuss issues, new features, and design direction.

The user base has grown due to a number of recent developments. As contributors like Trail of Bits and Facebook have extended osquery to support more operating systems (Windows and FreeBSD), a greater number of organizations can now install osquery on a larger portion of their endpoints. Multiple supplementary tools, such as Doorman, Kolide, and Uptycs, have emerged to help users deploy and manage the technology. Monitoring of event-based logs (e.g. process auditing and file integrity monitoring) has further enhanced its utility for incident response. Each of these developments has spurred more organizations with unique data and infrastructure needs to use osquery, sometimes in favor of competing commercial products.

Current Use

All the companies surveyed leveraged osquery for high-performance, flexible monitoring of their fleets. Interviewees expressed particular interest in just-in-time incident response, including initial malware detection and identifying propagation.

Many teams used osquery in conjunction with other open source and commercial technologies. Some used collection and aggregation services such as Splunk to mine data collected by osquery. One innovative team built incident alerting with osquery by piping log data into Elasticsearch and auto-generating Jira tickets through ElastAlert upon anomaly detection. Most of the companies interviewed expected to phase out some paid services, especially costly suites (e.g. Carbon Black, Tripwire, Red Cloak), in favor of the current osquery build or upon addition of new features.

Deployment Maturity

Deployment maturity for osquery varied widely. One company reported being at the phase of testing and setting up infrastructure. Other companies had osquery deployed on most or all endpoints in their fleets, including one team that reported plans to roll out to 17,500 machines. Three of the five companies we interviewed had osquery deployed on production servers. However, one of these companies reported having installed osquery on production machines but rarely querying these endpoints due to concerns about osquery’s reliability and scalability. Runaway queries on production fleets were a major concern for all companies interviewed, though no production performance incidents were reported.

Strategies for Deployment

Most companies used Chef or Puppet to deploy, configure, and manage osquery installations on their endpoints. One company used the fleet management tool Doorman to maintain their fleet of remote endpoints and bypass the need for separate aggregation tools. Many teams leveraged osquery’s TLS documentation to author their own custom deployment tools that granted them both independence from third party applications and freedom to fully customize features/configurations to their native environments.

Multiple teams took precautions while rolling out osquery by deploying in stages. One team avoided potential performance issues by assigning osquery tasks to cgroups with limits on CPU and memory usage.

osquery Governance

Security teams were responsible for initiating the installation of osquery in the fleet. While most teams did so with buy-in and collaboration from other teams, some executed the installation covertly. One team reported that a performance incident had mildly tarnished osquery’s reputation within their organization. Some security teams we interviewed collaborated with other internal teams, such as Data Analytics and Machine Learning, to mine log data and generate actionable insights.

Benefits of osquery

Teams reported that they liked osquery better than other fleet management tools for a variety of reasons, including that it:

  • was simpler to use,
  • was more customizable, and
  • exposed new endpoint data that they had never before had access to.

For teams exploring alternatives to their current tools, the open-source technology helped them avoid the bureaucratic friction of buying new commercial security solutions. For one team, osquery also fit into a growing preference for home-built software within their company.

In its current state, osquery appeared to be most powerful when leveraged as a flexible building block within a suite of tools. Where other endpoint monitoring tools expose users to select log data, osquery provided simple, portable access to a far richer variety of endpoint data. For teams who want to roll their own solutions, or who can’t afford comprehensive commercial suites, osquery was the best option.

How it compares to other endpoint monitoring solutions

Our interviewees mentioned having used or evaluated some alternative endpoint monitoring solutions in addition to osquery. We list the highlights of their comments below. While osquery did present a more flexible, affordable solution overall, some paid commercial solutions still offer distinct advantages, especially in integrating automated prevention and incident response. However, as the development community continues to build features in osquery, the capability gap appears to be closing.


OSSEC

OSSEC is an open source system monitoring and management platform. It features essential incident response tools such as file integrity checking, log monitoring, rootkit detection, and automatic incident response. However, OSSEC lacks osquery’s ability to query multiple hosts (Windows, BSD, etc.) with a universal syntax. It’s also not as flexible; users of osquery can quickly form new queries with the usability of SQL syntax, while OSSEC requires cumbersome log file decoders and deliberate ahead-of-time configuration. Both the overall simplicity and the ongoing development of community-contributed tables have often been cited as advantages osquery has over OSSEC.


SysDig

SysDig provides a commercial container performance monitoring tool and an open source container troubleshooting tool. While osquery is used for security and malicious incident detection, SysDig tools work with real-time data streams (network or file I/O, or tracking errors in running processes) and are best suited for monitoring performance. However, despite significant recent gains in container support, including new Docker tables that allow instrumentation at the host level, SysDig maintains the advantage over osquery for performance-sensitive container introspection. Though osquery is capable of running within containers, our respondents indicated that the current version isn’t yet built to support all deployments cleanly. One user reported avoiding deployment of osquery on their Docker-based production fleet for this reason.

Carbon Black

Carbon Black is one of the industry’s leading malware detection, defense, and response packages. In contrast, osquery by itself only provides detection capabilities. However, when combined with alerting systems such as PagerDuty or ElastAlert, osquery can transform into a powerful incident response tool. Finally, interviewees considering Carbon Black remarked on its high price tag and voiced a desire to minimize its use.

Bromium vSentry

Bromium vSentry provides impact containment and introspection powered by micro-virtualization and supported by comprehensive dashboards. While osquery users can leverage tools like Kolide and Uptycs to access similar data visualizations, Bromium’s micro-virtualization functionality to isolate and quarantine attacks remains an advantage. However, Bromium’s introspection is significantly less flexible and expansive: it can only access data about targeted isolated applications, while osquery can be configured to gather data from a growing number of operating-system-level logs, events, and processes.

Red Cloak

Red Cloak provides automated threat detection as part of a service offering from Dell SecureWorks. It has two advantages over osquery: first, it provides an expert team to help with analysis and response; second, it aggregates endpoint information from all customers to inform and improve its detection and response. For organizations focused solely on breach response, Red Cloak may be worth its cost. However, for IT teams who want direct access to a variety of endpoint data, osquery is a better and cheaper solution.


osquery fills a need in many corporate security teams; its transparency and flexibility make it a great option for rolling bespoke endpoint monitoring solutions. Without any modification, it exposes all the endpoint data an analysis engine needs. We expect (and hope) to hear from more security teams multiplying osquery’s power with their incident response toolkit.

That will happen faster if teams share their deployment techniques and lessons learned. Much of the Slack and GitHub discussion focuses on codebase issues. Too few users openly discuss innovative implementation strategies. But that isn’t the only thing holding back osquery’s adoption.

The second post in this series will focus on users’ pain points. If you use osquery today and have pain points you’d like to add to our research, please let us know! We’d love to hear from you.

How does your experience with osquery compare to that of the teams mentioned in this post? Do you have other workarounds, deployment strategies, or features you’d like to see built in future releases? Tell us! Help us lead the way in improving osquery’s development and implementation.

Hands on the Ethernaut CTF

Last week Zeppelin released their Ethereum CTF, Ethernaut.

This CTF is a good introduction to interacting with a blockchain and to the basics of smart contract vulnerabilities. The CTF is hosted on the Ropsten test network, which provides free ether. You interact with the challenges through the browser’s developer console and the MetaMask plugin.

I was fortunate enough to be the first one to finish the challenges. The following is how I did it.

1. Fallback

This challenge is the first one, and I think it is meant as an introduction to make sure that everyone is able to play with the API. Let us see in detail what a Solidity smart contract looks like.

Challenge description

The contract is composed of one constructor and four functions. The goal is to become the owner of the contract and to withdraw all the money.

The first function is the constructor of the contract, because it has the same name as the contract. The constructor is a specific function which is called only once when the contract is first deployed, and cannot be called later. This function is usually used to set up some parameters (here an initial contribution from the owner).

function Fallback() {
  contributions[msg.sender] = 1000 * (1 ether);
}

The second function, contribute(), stores the number of ethers (msg.value) sent by the caller in the contributions map. If this value is greater than the contributions of the contract owner, then the caller becomes the owner of the contract.

function contribute() public payable {
  require(msg.value < 0.001 ether);
  contributions[msg.sender] += msg.value;
  if(contributions[msg.sender] > contributions[owner]) {
    owner = msg.sender;
  }
}

getContribution() is a simple getter:

function getContribution() public constant returns (uint) {
  return contributions[msg.sender];
}

withdraw() allows the owner of the contract to withdraw all the money. Notice the onlyOwner modifier after the signature of the function; it ensures that the function can only be called by the owner.

function withdraw() onlyOwner {
  owner.transfer(this.balance);
}

Finally, the last function is the fallback function of the contract. It can be executed only if the caller sends some ether and has previously made a contribution.

function() payable {
  require(msg.value > 0 && contributions[msg.sender] > 0);
  owner = msg.sender;
}

Fallback function

To understand what a fallback function is, we have to understand the function selector and argument mechanism in Ethereum. When you call a function in Ethereum, you are in fact sending a transaction to the network. This transaction contains, among other things, the amount of ether sent (msg.value) and so-called data, an array of bytes. This array holds the id of the function to be called, followed by the function’s arguments. Ethereum uses the first four bytes of the keccak256 hash of the function signature as the function id. For example, if the function signature is transfer(address,uint256), the function id is 0xa9059cbb.

If you want to call transfer(0x41414141, 0x42), the data will be:


During its execution, the first thing that a smart contract does is to check the function id, using a dispatcher. If there is no match, the fallback function is called, if it exists.
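The calldata for the transfer(0x41414141, 0x42) example above can be assembled by hand. This Python sketch follows the Ethereum ABI convention of a 4-byte selector followed by each argument left-padded to 32 bytes; the selector value is the one stated above:

```python
selector = bytes.fromhex("a9059cbb")  # first 4 bytes of keccak256("transfer(address,uint256)")
to_address = (0x41414141).to_bytes(32, "big")  # address argument, left-padded to 32 bytes
amount = (0x42).to_bytes(32, "big")           # uint256 argument, left-padded to 32 bytes

data = selector + to_address + amount
print(len(data))       # 68: a 4-byte selector plus two 32-byte words
print(data.hex()[:8])  # a9059cbb
```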

You can visualize this dispatcher using our open source disassembler Ethersplay:

Ethersplay shows the EVM dispatcher structure (*)

(*) For simplification, the Owner inheritance was removed from the source code. You can find the solidity file and the runtime bytecode here: fallback


If we put everything together we have to:

  1. Call contribute() to put some initial value inside contributions
  2. Call the fallback function to become the owner of the contract
  3. Call withdraw to get all the money

(1) is easily done by calling contract.contribute({value:1}) in the browser’s developer tools console. A simple way to call the fallback function (2) is to send ether directly to the contract using the MetaMask plugin. Then (3) is achieved by calling contract.withdraw().

2. Fallout

Challenge description

The goal here is to become the owner of the contract.

At first, this contract appears to have one constructor and four functions. But if we look closer at the constructor, we realize that its name is slightly different from the contract’s name:

contract Fallout is Ownable {

  mapping (address => uint) allocations;

  /* constructor */
  function Fal1out() payable {
    owner = msg.sender;
    allocations[owner] = msg.value;
  }

  /* ... remaining functions omitted ... */
}

As a result, this function is not a constructor, but a classic public function. Anyone can call it once the contract is deployed!

This may look too simple to be a real vulnerability, but it is real. Using our internal static analyzer Slither, we have found several contracts where this mistake was made (for example, ZiberCrowdsale or PeerBudsToken)!


We only need to call contract.Fal1out() to become the owner, that’s it!

3. Token

Challenge description

Here we are given 20 free tokens in the contract. Our goal is to find a way to hold a very large amount of tokens.

The function transfer allows the transfer of tokens between users:

function transfer(address _to, uint _value) public returns (bool) {
  require(balances[msg.sender] - _value >= 0);
  balances[msg.sender] -= _value;
  balances[_to] += _value;
  return true;
}

At first, the function looks fine, as it seems to check that the sender has enough tokens:

require(balances[msg.sender] - _value >= 0);

Usually, when I have to deal with arithmetic computations, I take the easy way and use our open source symbolic executor Manticore to check whether I can abuse the contract. We recently added EVM support, making it a really powerful tool for auditing integer-related issues. But here, with a closer look, we realize that _value and balances are unsigned integers, meaning that balances[msg.sender] - _value >= 0 is always true!

So we can produce an underflow in

balances[msg.sender] -= _value;

As a result, balances[msg.sender] will contain a very large number!


To trigger the underflow, we can simply call contract.transfer(0x0, 21). balances[msg.sender] will then contain 2**256 - 1.
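The wraparound is easy to reproduce: uint256 arithmetic is modulo 2**256, so subtracting 21 from a balance of 20 wraps to the largest representable value. A quick Python check:

```python
# uint256 arithmetic wraps modulo 2**256, so the require() check
# can never fail, and the subtraction underflows.
MOD = 2**256
balance, value = 20, 21
new_balance = (balance - value) % MOD  # what the EVM computes
assert new_balance >= 0                # the check is vacuous for unsigned math
print(new_balance == 2**256 - 1)       # True: a very large balance
```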


4. Delegation

Challenge description

The goal here is to become the owner of the contract Delegation.

There is no direct way to change the owner in this contract. However, it holds another contract, Delegate, with this function:

function pwn() {
  owner = msg.sender;
}

A particularity of Delegation is the use of delegatecall in the fallback function.

function() {
  if(delegate.delegatecall(msg.data)) {}
}

The hint of the challenge is pretty explicit about it:

Usage of delegatecall is particularly risky and has been used as an attack vector on multiple historic hacks. With it, your contract is practically saying “here, -other contract- or -other library-, do whatever you want with my state”. Delegates have complete access to your contract’s state. The delegatecall function is a powerful feature, but a dangerous one, and must be used with extreme care.

Please refer to The Parity Wallet Hack Explained article for a detailed explanation of how this idea was used to steal 30M USD.

So here we need to call the fallback function with the function id of pwn() as data, so that the delegatecall will execute pwn() within the state of Delegation and change the owner of the contract.


As we saw in “Fallback”, we have to send the function id of pwn(), which is 0xdd365b8b, as the transaction’s data. As a result, Delegate.pwn() will be called within the state of Delegation, and we will become the owner of the contract.
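To make the storage-context point concrete, here is a toy Python model of delegatecall (the dictionaries and names are illustrative, not detailed EVM semantics): the callee’s code runs, but against the caller’s storage.

```python
# Toy model: delegatecall runs the callee's code against the
# *caller's* storage, with the original msg.sender preserved.
delegation_storage = {"owner": "deployer"}  # Delegation's state
delegate_storage = {"owner": "deployer"}    # Delegate's state

def pwn(storage, msg_sender):
    storage["owner"] = msg_sender  # the body of Delegate.pwn()

# Regular call: Delegate's code runs on Delegate's own storage.
pwn(delegate_storage, "attacker")
# delegatecall: Delegate's code runs on Delegation's storage.
pwn(delegation_storage, "attacker")

print(delegation_storage["owner"])  # "attacker": Delegation is owned
```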

5. Force

Challenge description

Here we have to send ether to an empty contract. As there is no payable fallback function, a direct send of ether to the contract will fail.

There are other ways to send ether to a contract without executing its code:

  • Calling selfdestruct(address)
  • Specifying the address as the mining reward destination
  • Sending ether to the address before the contract is created


We can create a contract that simply calls selfdestruct with the targeted contract’s address.

contract Selfdestruct {
  function Selfdestruct() payable {}
  function attack(address target) {
    selfdestruct(target);
  }
}
Note that we use a payable constructor, which lets us put some value inside the contract at its construction. This value will then be sent through selfdestruct.

You can easily test and deploy this contract on Ropsten using the Remix browser.

6. Re-entrancy

Challenge description

The last challenge! You have to send one ether to the contract during its creation and then get your money back.

The contract has four functions. We are interested in two of them.

donate lets you donate ether to the contract; the amount sent is stored in balances.

function donate(address _to) public payable {
  balances[_to] += msg.value;
}

withdraw, the second function, allows users to retrieve the ether stored in balances.

function withdraw(uint _amount) public {
  if(balances[msg.sender] >= _amount) {
    if(msg.sender.call.value(_amount)()) {}
    balances[msg.sender] -= _amount;
  }
}

The ether is sent through the low-level call msg.sender.call.value(_amount)().

At first, everything seems fine here: the value sent is deducted in balances[msg.sender] -= _amount; and there is no way to increase balances without sending ether.

Now recall the fallback function mechanism explained in “Fallback.” If you send ether to a contract that has a fallback function, this function is executed. What is the problem here? The attacker’s fallback function can call withdraw again, so withdraw runs a second time before balances[msg.sender] -= _amount is executed!

This vulnerability is called a re-entrancy vulnerability and was used during the infamous DAO hack.


To exploit a re-entrancy vulnerability you have to use another contract as a proxy. This contract will need to:

  1. Have a fallback function which will call withdraw
  2. Call donate to deposit ethers in the vulnerable contract
  3. Call withdraw

In our not-so-smart-contracts database, you will find an example of a generic skeleton to exploit this vulnerability. I’ll leave adapting this skeleton as an exercise for the reader.

Similar to the previous challenge, you can test and deploy the contract on Ropsten using the Remix browser.


This CTF was really cool. The interface makes it easy to get into smart contract security. Zeppelin did a good job, so thanks to them!

If you are interested in the tools mentioned in the article, or you need a smart contract security assessment, do not hesitate to contact us!

Trail of Bits joins the Enterprise Ethereum Alliance

We’re proud to announce that Trail of Bits has joined the Enterprise Ethereum Alliance (EEA), the world’s largest open source blockchain initiative. As the first information security company to join, and currently one of the industry’s top smart contract auditors, we’re excited to contribute our unparalleled expertise to the EEA.

As companies begin to re-architect their critical systems with blockchain technology, they will need a strong software engineering model that ensures the safety of confidential data and integration with existing security best practices. We already work with many of the world’s largest companies to secure critical systems and products. As businesses rush to make use of this emerging technology, we look forward to designing innovative and pragmatic security solutions for the enterprise Ethereum community.

Enterprise Ethereum Alliance

We’re helping to secure Ethereum with our expertise, tools, and results.

Preparing Ethereum for production enterprise use will take a lot of work. Collaboration with other motivated researchers, developers, and users is the best way to build a secure and useful Enterprise Ethereum ecosystem. By contributing the tools we’re building to help secure public Ethereum applications, and participating in the EEA Technical Steering Committee’s working groups, we will help the EEA to ensure the security model for Enterprise Ethereum meets enterprise requirements.

How we will contribute

  • Novel​ ​research.​ We’ll bring our industry-leading security expertise to help discover, formalize and secure the unexpected behaviors in DApps and Smart Contracts. We’re already accumulating discoveries from the security audits we’ve conducted.
  • Foundational​ ​tools.​ We’ll help other members reduce risk when building on this technology. As our audits uncover fundamental gaps in Ethereum’s tooling, we’ll fill them in with tools like our symbolic executor, Manticore, and others in development.
  • Sharing​ ​attitude.​ We’ll help define a secure development process for smart contracts, share the tools that we create, and warn the community about pitfalls we encounter. Our results will help smart contract developers and auditors find vulnerabilities and decrease risk.

Soon, we’ll release the internal tools and guidance we have adapted and refined over the course of many recent smart contract audits. In the weeks ahead, you can expect posts about:

  • Manticore, a symbolic emulator capable of simulating complex multi-contract and multi-transaction attacks against EVM bytecode.
  • Not So Smart Contracts, a collection of example Ethereum smart contract vulnerabilities, including code from real smart contracts, useful as a reference and a benchmark for security tools.
  • Ethersplay, a graphical Binary Ninja-based EVM disassembler capable of method recovery, dynamic jump computation, source code matching, and bytecode diffing.
  • Slither, a static analyzer for the Solidity AST that detects common security issues in reentrancy, constructors, method access, and more.
  • Echidna, a property-based tester for EVM bytecode with integrated shrinking that can rapidly find bugs in smart contracts in a manner similar to fuzzing.

We’ll also begin publishing case studies of our smart contract audits, how we used those tools, and the results we found.

Get help auditing your smart contracts

Contact us for a demonstration of how we can help your enterprise make the most of Ethereum and blockchain.

Our team is growing

We’ve added five more to our ranks in the last two months, bringing our total size to 32 employees. Their resumes feature words and acronyms like ‘CTO,’ ‘Co-founder’ and ‘Editor.’ You might recognize their names from publications and presentations that advance the field.

We’re excited to offer them a place where they can dig deeper into security research.

Please help us welcome Mike Myers, Chris Evans, Evan Teitelman, Evan Sultanik, and Alessandro Gario to Trail of Bits!

Mike Myers

Leads our L.A. office (of one). Mike brings 15 years of experience in embedded software development, exploitation, malware analysis, code audits, threat modeling, and reverse engineering. Prior to joining Trail of Bits, Mike was CTO at a small security firm that specializes in cyber security research for government agencies and Fortune 500 companies. Before that, as Principal Security Researcher at Crucial Security (acquired by Harris Corporation), he was the most senior engineer in a division providing cyber capabilities and on-site operations support. Mike contributed the Forensics chapter of the CTF Field Guide, and co-authored “Exploiting Weak Shellcode Hashes to Thwart Module Discovery” in POC || GTFO. He’s NOP certified.

Chris Evans

Chris’s background is in operating systems and software development, but he’s also interested in vulnerability research, reverse engineering and security tooling. Prior to joining Trail of Bits, Chris architected the toolchain and core functionality of a host-based embedded defense platform, reversed and exploited ARM TrustZone secure boot vulnerabilities, and recovered data out of a GPIO peripheral with Van Eck phreaking. Chris has experience in traditional frontend and backend development through work at Mic and Foursquare. He activates Vim key binding mode more frequently than could possibly be warranted.

Evan Teitelman

After solving the world’s largest case of lottery fraud, Evan spent the last year leading code audits for over a dozen state lotteries, including their random number generators (RNGs). Prior to that, Evan focused on security tooling for the Android Kernel and SELinux as a vulnerability researcher with Raytheon SI. Also, he brings experience as a circuit board designer and embedded programmer. He founded BlackArch Linux. In his free time he enjoys hiking and climbing in Seattle, Washington.

Evan Sultanik

A computer scientist with extensive experience in both industry and academia, Evan is particularly interested in computer security, AI/ML/NLP, combinatorial optimization, and distributed systems. Evan is an editor and frequent contributor to PoC || GTFO, a journal of offensive security and reverse engineering that follows in the tradition of Phrack and Uninformed. Prior to Trail of Bits, Evan was the Chief Scientist for a small security firm that specializes in providing cyber security research to government agencies and Fortune 500 companies. He obtained his PhD in Computer Science from Drexel University, where he still occasionally moonlights as an adjunct faculty member. A Pennsylvania native, Evan contributes time to Code for Philly and has published research on zoning density changes in Philadelphia over the last five years.

Alessandro Gario

Hailing out of Italy, Alessandro brings ten years of systems software development experience to the team. Prior to Trail of Bits, he worked on building and optimizing networked storage systems and reverse engineering file formats. In his free time, he enjoys playing CTFs and reverse engineering video games. Alessandro’s mustache has made him a local celebrity. He is our second European employee.

We are very excited for the contributions these people will make to the discipline of information security. If their areas of expertise overlap with the challenges your organization faces, please contact us for help.