Trail of Bits joins the Enterprise Ethereum Alliance

We’re proud to announce that Trail of Bits has joined the Enterprise Ethereum Alliance (EEA), the world’s largest open source blockchain initiative. As the first information security company to join, and currently one of the industry’s top smart contract auditors, we’re excited to contribute our unparalleled expertise to the EEA.

As companies begin to re-architect their critical systems with blockchain technology, they will need a strong software engineering model that ensures the safety of confidential data and integration with existing security best practices. We already work with many of the world’s largest companies to secure critical systems and products. As businesses rush to make use of this emerging technology, we look forward to designing innovative and pragmatic security solutions for the enterprise Ethereum community.

Enterprise Ethereum Alliance

We’re helping to secure Ethereum with our expertise, tools, and results.

Preparing Ethereum for production enterprise use will take a lot of work. Collaboration with other motivated researchers, developers, and users is the best way to build a secure and useful Enterprise Ethereum ecosystem. By contributing the tools we’re building to help secure public Ethereum applications, and participating in the EEA Technical Steering Committee’s working groups, we will help the EEA to ensure the security model for Enterprise Ethereum meets enterprise requirements.

How we will contribute

  • Novel​ ​research.​ We’ll bring our industry-leading security expertise to help discover, formalize, and secure unexpected behaviors in DApps and smart contracts. We’re already accumulating discoveries from the security audits we’ve conducted.
  • Foundational​ ​tools.​ We’ll help other members reduce risk when building on this technology. As our audits uncover fundamental gaps in Ethereum’s tooling, we’ll fill them in with tools like our symbolic executor, Manticore, and others in development.
  • Sharing​ ​attitude.​ We’ll help define a secure development process for smart contracts, share the tools that we create, and warn the community about pitfalls we encounter. Our results will help smart contract developers and auditors find vulnerabilities and decrease risk.

Soon, we’ll release the internal tools and guidance we have adapted and refined over the course of many recent smart contract audits. In the weeks ahead, you can expect posts about:

  • Manticore, a symbolic emulator capable of simulating complex multi-contract and multi-transaction attacks against EVM bytecode.
  • Not So Smart Contracts, a collection of example Ethereum smart contract vulnerabilities, including code from real smart contracts, useful as a reference and a benchmark for security tools.
  • Ethersplay, a graphical Binary Ninja-based EVM disassembler capable of method recovery, dynamic jump computation, source code matching, and bytecode diffing.
  • Slither, a static analyzer for the Solidity AST that detects common security issues in reentrancy, constructors, method access, and more.
  • Echidna, a property-based tester for EVM bytecode with integrated shrinking that can rapidly find bugs in smart contracts in a manner similar to fuzzing.

We’ll also begin publishing case studies of our smart contract audits, how we used those tools, and the results we found.

Get help auditing your smart contracts

Contact us for a demonstration of how we can help your enterprise make the most of Ethereum and blockchain.

Our team is growing

We’ve added five more to our ranks in the last two months, bringing our total size to 32 employees. Their resumes feature words and acronyms like ‘CTO,’ ‘Co-founder’ and ‘Editor.’ You might recognize their names from publications and presentations that advance the field.

We’re excited to offer them a place where they can dig deeper into security research.

Please help us welcome Mike Myers, Chris Evans, Evan Teitelman, Evan Sultanik, and Alessandro Gario to Trail of Bits!

Mike Myers

Leads our L.A. office (of one). Mike brings 15 years of experience in embedded software development, exploitation, malware analysis, code audits, threat modeling, and reverse engineering. Prior to joining Trail of Bits, Mike was CTO at a small security firm that specialized in cyber security research for government agencies and Fortune 500 companies. Before that, as Principal Security Researcher at Crucial Security (acquired by Harris Corporation), he was the most senior engineer in a division providing cyber capabilities and on-site operations support. Mike contributed the Forensics chapter of the CTF Field Guide, and co-authored “Exploiting Weak Shellcode Hashes to Thwart Module Discovery” in PoC || GTFO. He’s NOP certified.

Chris Evans

Chris’s background is in operating systems and software development, but he’s also interested in vulnerability research, reverse engineering and security tooling. Prior to joining Trail of Bits, Chris architected the toolchain and core functionality of a host-based embedded defense platform, reversed and exploited ARM TrustZone secure boot vulnerabilities, and recovered data out of a GPIO peripheral with Van Eck phreaking. Chris has experience in traditional frontend and backend development through work at Mic and Foursquare. He activates Vim key binding mode more frequently than could possibly be warranted.

Evan Teitelman

After solving the world’s largest case of lottery fraud, Evan spent the last year leading code audits for over a dozen state lotteries, including their random number generators (RNGs). Prior to that, Evan focused on security tooling for the Android kernel and SELinux as a vulnerability researcher with Raytheon SI. He also brings experience as a circuit board designer and embedded programmer. He founded BlackArch Linux. In his free time he enjoys hiking and climbing in Seattle, Washington.

Evan Sultanik

A computer scientist with extensive experience in both industry and academia, Evan is particularly interested in computer security, AI/ML/NLP, combinatorial optimization, and distributed systems. Evan is an editor and frequent contributor to PoC || GTFO, a journal of offensive security and reverse engineering that follows in the tradition of Phrack and Uninformed. Prior to Trail of Bits, Evan was the Chief Scientist for a small security firm that specializes in providing cyber security research to government agencies and Fortune 500 companies. He obtained his PhD in Computer Science from Drexel University, where he still occasionally moonlights as an adjunct faculty member. A Pennsylvania native, Evan contributes time to Code for Philly and has published research on zoning density changes in Philadelphia over the last five years.

Alessandro Gario

Hailing from Italy, Alessandro brings ten years of systems software development experience to the team. Prior to Trail of Bits, he worked on building and optimizing networked storage systems and reverse engineering file formats. In his free time, he enjoys playing CTFs and reverse engineering video games. Alessandro’s mustache has made him a local celebrity. He is our second European employee.

We are very excited for the contributions these people will make to the discipline of information security. If their areas of expertise overlap with the challenges your organization faces, please contact us for help.

iOS jailbreak detection toolkit now available

We now offer a library for developers to check if their apps are running on jailbroken phones. It includes the most comprehensive checks in the industry and it is App Store compatible. Contact us now to license the iVerify security library for your app.

Jailbreaks threaten your work

Users like to install jailbreaks on their phones for extra functionality, unaware that they’re increasing their exposure to risks. Jailbreaks disable many of iOS’s security features such as mandatory code signing and application sandboxing. Apps and code found outside the App Store can contain malware and jailbreaks themselves have included backdoors in the past.

Moreover, running an app on a jailbroken phone may indicate a user is attempting to manipulate the app. App developers deserve to know when their apps are installed on such untrustworthy phones.

Developing such a security library in-house requires time and knowledge outside the core competency of many development teams. Establishing and maintaining expertise in the depths of iOS security internals and keeping up with new developments in jailbreak tools and techniques requires more time than many developers can afford. Ineffective jailbreak detection can be worse than no jailbreak detection at all.

Why you should use iVerify

Trail of Bits employs some of the world’s best experts in the field of iOS security internals. Our engineers have reviewed the security model of iOS, jailbreak techniques, and the tools that exist to run them today and developed the best checks possible. The resulting library, iVerify, includes checks for known jailbreaks, like Pangu, and checks for anomalies that may indicate unknown or custom jailbreaks.

iVerify easily integrates into your app as an iOS Framework. Your app can read a simple pass/fail result or it can inspect the results of each check individually.

As an optional feature, the raw checks are aggregated into a JSON message that includes the results of each individual check. A helper function adds other identifying information from the device. Developers can match this information with user information from their app, send it to a centralized logging facility for collection and analysis, and then use it to take action.
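On the collection side, handling such an aggregated payload could look like the following minimal Python sketch. The field names here ("checks", "device_id") are hypothetical, not the real iVerify schema:

```python
import json

def triage(report_json):
    """Server-side triage of an aggregated jailbreak-check report.

    The payload layout is illustrative: a device identifier plus a
    map of check name -> pass/fail. The real iVerify schema may differ.
    """
    report = json.loads(report_json)
    # Collect every check that did not pass.
    failed = [name for name, passed in report["checks"].items() if not passed]
    return {
        "device": report.get("device_id"),
        "jailbroken": len(failed) > 0,
        "failed_checks": sorted(failed),
    }
```

A logging pipeline could run a function like this on each incoming message, then alert or revoke sessions for devices whose reports flag any check.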

We continuously explore new versions of iOS for more effective checks capable of finding known and unknown jailbreaks. iVerify detects jailbreaks on iOS 10 and 11, and our expert team will update the library as new versions are released and new checks are developed.

Start protecting your work with superb jailbreak detection

iVerify delivers an easy-to-use solution without heavy-weight dependencies or obligations to a SaaS service. The checks are the best available and are maintained by our team of experts. The iVerify library is suitable for App Store deployment and will integrate into your app easily.

Contact us now to discuss licensing options.

Tracking a stolen code-signing certificate with osquery

Oct. 14th, 2017: This post has been updated to reflect a change in the osquery table’s final name, from ‘signature’ to ‘authenticode.’

Recently, 2.27 million computers running Windows were infected with malware that was signed with a certificate stolen from the creators of a popular app called CCleaner and inserted into its software update mechanism. Fortunately, signed malware is now simple to detect with osquery thanks to a pull request submitted by our colleague Alessandro Gario that adds Windows executable code signature verification (also known as Authenticode). This post explains the importance of code signatures in incident response, and demonstrates a use case for this new osquery feature by using it to detect the recent CCleaner malware.

If you are unfamiliar with osquery, take a moment to read our previous blog post in which we explain why we are osquery evangelists, and how we extended it to run on the Windows platform. Part of osquery’s appeal is its flexibility and open-source model – if there’s another feature you need built, let us know!

Code-signed malware

Code signing was intended to be an effective deterrent against maliciously modified executables, and to allow a user (or platform owner) to choose whether to run executables from untrusted sources. Unfortunately, on general-purpose computing platforms like Windows, third-party software vendors are individually responsible for protecting their code-signing certificates. Malicious actors realized that they only needed to steal one of these certificates in order to sign malware and make it appear to be from a legitimate software vendor. This realization (and the high-profile Stuxnet incident) began a trend of malware signed with stolen code-signing certificates. It has become a routine feature of criminal and nation-state malware attacks in the past few years, and most recently happened again with an infected software update to the popular app CCleaner.

So, defenders already know that a trust model based on an assumption that all third-party software vendors can protect their code-signing certificates is untenable, and that on platforms like Windows, code-signing is only a weak trust marker or application whitelisting mechanism. But, there’s another use for code signatures: incident response. Once a particular signing certificate is known to be stolen, it also works as a telltale indicator of compromise. As the defender you can make lemonade out of these lemons: search for other systems on your network with executables that were also signed with this stolen certificate. The malware might have successfully evaded antivirus-type protections, but any code signed with a known-stolen certificate is an easy red flag: signing can be checked with a 0% chance of any false-positives. osquery offers an ideal method for performing such a search.

Verifying Authenticode signatures with osquery

New sensors are added to osquery with the addition of “tables,” maintaining the abstraction of all system information as SQL tables.

To add a table to osquery, you first define its spec, or schema. An osquery table spec is just a short description of the table’s columns, their data types, and short descriptions, as well as a reference to the implementation. In Alessandro’s pull request, he added an ‘authenticode’ virtual table for Windows, containing the following columns: path, original_program_name (from the publisher), serial_number, issuer_name, subject_name, and result.
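Concretely, an osquery table spec is a short file in osquery’s Python-based spec DSL. A sketch of what the authenticode spec looks like (the column descriptions and the implementation reference here are paraphrased assumptions, not copied from the actual spec file):

```python
table_name("authenticode")
description("Windows Authenticode signature and certificate information for executables.")
schema([
    Column("path", TEXT, "Path of the executable to check", required=True),
    Column("original_program_name", TEXT, "Program name as declared by the publisher"),
    Column("serial_number", TEXT, "Serial number of the signing certificate"),
    Column("issuer_name", TEXT, "Issuer of the signing certificate"),
    Column("subject_name", TEXT, "Subject of the signing certificate"),
    Column("result", TEXT, "Signature verification result"),
])
implementation("system/windows/authenticode@genAuthenticode")
```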

Alessandro implemented the code to read code signature and certificate information from the system in osquery/tables/system/windows/authenticode.cpp. The verification of signatures is done using a call to the system API, WinVerifyTrust().

Here’s a simplified example of using osquery to check a Windows executable’s code signature:

osquery> SELECT serial_number, issuer_name, subject_name,
    ...> result FROM authenticode
    ...> WHERE path = 'C:\Windows\explorer.exe';


Most of the columns are self-explanatory. The result values aren’t. “Result” could mean:

| State      | Explanation                                               |
| missing    | Missing signature.                                        |
| invalid    | Invalid signature, caused by missing or broken files.     |
| untrusted  | Signature that could not be validated.                    |
| distrusted | Valid signature, explicitly distrusted by the user.       |
| valid      | Valid signature, but not explicitly trusted by the user.  |
| trusted    | Valid signature, trusted by the user.                     |
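A monitoring pipeline consuming this table might bucket the states for alerting. A minimal sketch, where the grouping policy is our own rather than anything defined by osquery:

```python
# States from the table above that warrant analyst follow-up.
# The grouping policy is illustrative, not part of osquery.
SUSPECT = {"missing", "invalid", "untrusted", "distrusted"}

def needs_review(result):
    """Flag authenticode results that an analyst should examine."""
    return result in SUSPECT
```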

Getting focused results with SQL in osquery

To make the most out of this new functionality, perform JOIN queries with other system tables within osquery. We will demonstrate how using SQL queries enhances system monitoring by reducing the amount of noise when listing processes:

osquery> SELECT, process.path, authenticode.result
    ...> FROM processes as process
    ...> LEFT JOIN authenticode
    ...> ON process.path = authenticode.path
    ...> WHERE result = 'missing';

| pid  | path                                                      | result  |
| 3752 | c:\windows\system32\sihost.exe                            | missing |
| 3872 | C:\Windows\system32\notepad.exe                           | missing |
| 4860 | C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe | missing |
| 5200 | C:\Windows\system32\conhost.exe                           | missing |
| 6040 | C:\Windows\osqueryi.exe                                   | missing |

Tracking a stolen signing certificate

Assume that you have just learned of a malware campaign. The malware authors code-signed their executables using a code-signing certificate that they stole from a legitimate software vendor. The vendor has responded to the incident by acquiring a new code-signing certificate and redistributing their application signed with the new certificate. In this example, we will use CCleaner. How can you search a machine for any software signed with this stolen certificate, but filter out software signed with the vendor’s new certificate?

Example 1: Find executables signed with the stolen certificate

osquery> SELECT files.path, authenticode.subject_name,
    ...>        authenticode.serial_number,
    ...>        authenticode.result AS status
    ...> FROM (
    ...>   SELECT * FROM file
    ...>   WHERE directory = "C:\Program Files\CCleaner"
    ...> ) AS files
    ...> LEFT JOIN authenticode
    ...> ON authenticode.path = files.path
    ...> WHERE authenticode.serial_number == "4b48b27c8224fe37b17a6a2ed7a81c9f";


Example 2: Find executables signed by the affected vendor, but not with their new certificate

osquery> SELECT files.path, authenticode.subject_name,
    ...>        authenticode.serial_number,
    ...>        authenticode.result AS status
    ...> FROM (
    ...>   SELECT * FROM file
    ...>   WHERE directory = "C:\Program Files\CCleaner"
    ...> ) AS files
    ...> LEFT JOIN authenticode
    ...> ON authenticode.path = files.path
    ...> WHERE authenticode.subject_name LIKE "%Piriform%"
    ...> AND authenticode.serial_number != "52b6a81474e8048920f1909e454d7fc0";


Example 3: Code signatures and file hashing

Perhaps you would also like to keep a log of hashes, to keep track of what has been installed:

osquery> SELECT files.path AS path,
    ...>        authenticode.subject_name AS subject_name,
    ...>        authenticode.serial_number AS serial_number,
    ...>        authenticode.result AS status,
    ...>        hashes.sha256 AS sha256
    ...> FROM (
    ...>   SELECT * FROM file
    ...>   WHERE directory = "C:\Program Files\CCleaner"
    ...> ) AS files
    ...> LEFT JOIN authenticode
    ...> ON authenticode.path = files.path
    ...> LEFT JOIN hash AS hashes
    ...> ON hashes.path = files.path
    ...> WHERE authenticode.subject_name LIKE "%Piriform%"
    ...> AND authenticode.serial_number != "52b6a81474e8048920f1909e454d7fc0";


For the purposes of our examples here, notice that we have restricted the searches to “C:\Program Files\CCleaner”. You could tailor the scope of your search as desired.

The queries we’ve shown have been run in osquery’s interactive shell mode, which is more appropriate for incident response. You could run any of these queries on a schedule – using osquery for detection rather than response. For this, you would install osqueryd (the osquery daemon) on the hosts you wish to monitor, and configure logging infrastructure to collect the output of these queries (feeding the osquery output to, for example, LogStash / ElasticSearch for later analysis).
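A minimal osqueryd configuration for the scheduled approach might look like this; the query is Example 1 from earlier, flattened onto one line, while the query name and interval are our own illustrative choices:

```json
{
  "schedule": {
    "ccleaner_stolen_cert": {
      "query": "SELECT f.path, a.serial_number, a.result FROM file AS f LEFT JOIN authenticode AS a ON a.path = f.path WHERE f.directory = 'C:\\Program Files\\CCleaner' AND a.serial_number = '4b48b27c8224fe37b17a6a2ed7a81c9f';",
      "interval": 3600
    }
  }
}
```

Any row this query ever returns is a hit on the stolen certificate, so even an empty result log is useful evidence of a clean host.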

Future osquery Work

In this post we demonstrated the flexibility of osquery as a system information retrieval tool: using familiar SQL syntax, you can quickly craft custom queries that return only the information relevant to your current objective. The ability to check Authenticode signatures is just one use of osquery as a response tool to search for potential indicators of compromise. Many IT and security teams are using osquery for just-in-time incident response, including initial malware detection and identifying propagation.

Trail of Bits was early to recognize osquery’s potential. For over a year we have been adding various features like this one in response to requests from our clients. If you are already using osquery or considering using it and there’s a feature you need built, let us know! We’re ready to help you tailor osquery to your needs.

Microsoft didn’t sandbox Windows Defender, so I did

Microsoft exposed their users to a lot of risks when they released Windows Defender without a sandbox. This surprised me. Sandboxing is one of the most effective security-hardening techniques. Why did Microsoft sandbox other high-value attack surfaces such as the JIT code in Microsoft Edge, but leave Windows Defender undefended?

As a proof of concept, I sandboxed Windows Defender for them, and am now open sourcing my code as the Flying Sandbox Monster. The core of Flying Sandbox Monster is AppJailLauncher-rs, a Rust-based framework to contain untrustworthy apps in AppContainers. It also allows you to wrap the I/O of an application behind a TCP server, allowing the sandboxed application to run on a completely different machine, for an additional layer of isolation.

In this blog post, I describe the process and results of creating this tool, as well as thoughts about Rust on Windows.

Flying Sandbox Monster running Defender in a sandbox to scan a WannaCry binary.

The Plan

Windows Defender’s unencumbered access to its host machine and wide-scale acceptance of hazardous file formats make it an ideal target for malicious hackers. The core Windows Defender process, MsMpEng, runs as a service with SYSTEM privileges. The scanning component, MpEngine, supports parsing an astronomical number of file formats. It also bundles full-system emulators for various architectures and interpreters for various languages. All of this, performed with the highest level of privilege on a Windows system. Yikes.

This got me thinking. How difficult would it be to sandbox MpEngine with the same set of tools that I had used to sandbox challenges for the CTF community two years ago?

The first step towards a sandboxed Windows Defender is the ability to launch AppContainers. I wanted to re-use AppJailLauncher, but there was a problem. The original AppJailLauncher was written as a proof-of-concept example. If I had any sense back then, I would’ve written it in C++ Core rather than deal with the pains of memory management. Over the past two years, I’ve attempted rewriting it in C++ but ended up with false starts (why are dependencies always such a pain?).

But then inspiration struck. Why not rewrite the AppContainer launching code in Rust?

Building The Sandbox

A few months later, after crash coursing through Rust tutorials and writing a novel of example Rust code, I had the three pillars of support for launching AppContainers in Rust: SimpleDacl, Profile, and WinFFI.

  • SimpleDacl is a generalized class that handles adding and removing simple discretionary access control entries (ACEs) on Windows. While SimpleDacl can target both files and directories, it has a few setbacks. First, it completely overwrites the existing ACL with a new ACL and converts inherited ACEs to “normal” ACEs. It also disregards any ACEs that it cannot parse (i.e., anything other than AccessAllowedAce and AccessDeniedAce; mandatory and audit access control entries are not supported).
  • Profile implements creation of AppContainer profiles and processes. From the profile, we can obtain a SID that can be used to create ACE on resources the AppContainer needs to access.
  • WinFFI contains the brunt of the functions and structures winapi-rs didn’t implement as well as useful utility classes/functions. I made a strong effort to wrap every raw HANDLE and pointer in Rust objects to manage their lifetimes.

Next, I needed to understand how to interface with the scanning component of Windows Defender. Tavis Ormandy’s loadlibrary repository already offered an example C implementation and instructions for starting an MsMpEng scan. Porting the structures and function prototypes to Rust was a simple affair to automate, though I initially forgot about array fields and function pointers, which caused all sorts of issues; however, with Rust’s built-in testing functionality, I quickly resolved all my porting errors and had a minimum test case that would scan an EICAR test file.

The basic architecture of Flying Sandbox Monster.

Our proof-of-concept, Flying Sandbox Monster, consists of a sandbox wrapper and the Malware Protection Engine (MpEngine). The single executable has two modes: parent process and child process. The mode is determined by the presence of an environment variable that contains the HANDLEs for the file to be scanned and child/parent communication. The parent process populates these two HANDLE values prior to creating an AppContainer’d child process. The now-sandboxed child process loads the malware protection engine library and scans the input file for malicious software.
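The single-executable, two-mode pattern above can be sketched in Python. This is a minimal illustration of the dispatch logic only, not the actual Rust implementation, and the environment variable name is hypothetical:

```python
import os

# Hypothetical variable name; the real implementation's differs.
MODE_VAR = "FSM_SCAN_HANDLES"

def select_mode(environ):
    """Decide whether this invocation is the parent launcher or the
    sandboxed child, based on the handle-passing environment variable."""
    if MODE_VAR in environ:
        # Child mode: the parent populated the variable with two HANDLE
        # values (the file to scan, and the parent/child channel).
        file_handle, comm_handle = environ[MODE_VAR].split(",")
        return ("child", int(file_handle), int(comm_handle))
    # Parent mode: the caller should create the handles, set MODE_VAR,
    # and spawn the AppContainer'd child with the updated environment.
    return ("parent", None, None)
```

In the child branch, the real tool loads the malware protection engine and scans the file referenced by the inherited handle.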

This was not enough to get the proof-of-concept working. The Malware Protection Engine refused to initialize inside an AppContainer. Initially, I thought this was an access control issue. After extensive differential debugging in ProcMon (comparing AppContainer vs non-AppContainer execution), I realized the issue might actually be with the detected Windows version. Tavis’s code always self-reported the Windows version as Windows XP. My code was reporting the real underlying operating system; Windows 10 in my case. Verification via WinDbg proved that this was indeed the one and only issue causing the initialization failures. I needed to lie to MpEngine about the underlying Windows version. When using C/C++, I would whip up a bit of function hooking code with Detours. Unfortunately, there was no equivalent function hooking library for Rust on Windows (the few hooking libraries available seemed a lot more “heavyweight” than what I needed). Naturally, I implemented a simple IAT hooking library in Rust (32-bit Windows PE only).

Introducing AppJailLauncher-rs

Since I had already implemented the core components of AppJailLauncher in Rust, why not just finish the job and wrap it all in a Rust TCP server? I did, and now I’m happy to announce “version 2” of AppJailLauncher, AppJailLauncher-rs.

AppJailLauncher was a TCP server that listened on a specified port and launched an AppContainer process for every accepted TCP connection. I tried not to reinvent the wheel, but mio, the lightweight IO library for Rust, just didn’t work out. First, mio’s TcpClient did not provide access to raw “socket HANDLEs” on Windows. Second, these raw “socket HANDLEs” were not inheritable by the child AppContainer process. Because of these issues, I had to introduce another “pillar” to support appjaillauncher-rs: TcpServer.

TcpServer is responsible for instantiating an asynchronous TCP server with a client socket that is compatible with STDIN/STDOUT/STDERR redirection. Sockets created by the socket call cannot redirect a process’s standard input/output streams. Properly working standard input/output redirection requires “native” sockets (as constructed via WSASocket). To allow the redirection, TcpServer creates these “native” sockets and does not explicitly disable inheritance on them.

My Experience with Rust

My overall experience with Rust was very positive, despite the minor setbacks. Let me describe some key features that really stood out during AppJailLauncher’s development.

Cargo. Dependency management with C++ on Windows is tedious and complex, especially when linking against third-party libraries. Rust neatly solves dependency management with the cargo package management system. Cargo has a wide breadth of packages that solve many commonplace problems such as argument parsing (clap-rs), Windows FFI (winapi-rs et al.), and handling wide strings (widestring).

Built-in Testing. Unit tests for C++ applications require a third-party library and laborious, manual effort. That’s why unit tests are rarely written for smaller projects, like the original AppJailLauncher. In Rust, unit test capability is built into the cargo system and unit tests co-exist with core functionality.

The Macro System. Rust’s macro system works at the abstract syntax tree (AST) level, unlike the simple text substitution engine in C/C++. While there is a bit of a learning curve, Rust macros completely eliminate annoyances of C/C++ macros like naming and scope collisions.

Debugging. Debugging Rust on Windows just works. Rust generates WinDbg compatible debugging symbols (PDB files) that provide seamless source-level debugging.

Foreign Function Interface. The Windows API is written in, and meant to be called from, C/C++ code. Other languages, like Rust, must use a foreign function interface (FFI) to invoke Windows APIs. Rust’s FFI to Windows (the winapi-rs crate) is mostly complete. It has the core APIs, but it is missing some lesser used subsystems like access control list modification APIs.

Attributes. Setting attributes is cumbersome because an attribute applies only to the item that immediately follows it. Squashing specific code-format warnings necessitates a sprinkling of attributes throughout the program code.

The Borrow Checker. The concept of ownership is how Rust achieves memory safety. Understanding how the borrow checker works was fraught with cryptic, unique errors and took hours of reading documentation and tutorials. In the end it was worth it: once it “clicked,” my Rust programming dramatically improved.

Vectors. In C++, std::vector can expose its backing buffer to other code. The original vector is still valid, even if the backing buffer is modified. This is not the case for Rust’s Vec. Rust’s Vec requires the formation of a new Vec object from the “raw parts” of the old Vec.

Option and Result types. Native option and result types should make error checking easier, but instead error checking just seems more verbose. It’s possible to pretend errors will never exist and just call unwrap, but that will lead to runtime failure when an Error (or None) is inevitably returned.

Owned Types and Slices. Owned types and their complementary slices (e.g. String/str, PathBuf/Path) took a bit of getting used to. They come in pairs, have similar names, but behave differently. In Rust, an owned type represents a growable, mutable object (typically a string). A slice is a view of an immutable character buffer (also typically a string).

The Future

The Rust ecosystem for Windows is still maturing. There is plenty of room for new Rust libraries to simplify development of secure software on Windows. I’ve implemented initial versions of a few Rust libraries for Windows sandboxing, PE parsing, and IAT hooking. It is my hope that these are useful to the nascent Rust on Windows community.

I used Rust and AppJailLauncher to sandbox Windows Defender, Microsoft’s flagship anti-virus product. My accomplishment is both great and a bit shameful: it’s great that Windows’ robust sandboxing mechanism is exposed to third-party software. It’s shameful that Microsoft hasn’t sandboxed Defender of its own accord. Microsoft bought what eventually became Windows Defender in 2004. Back in 2004 these bugs and design decisions would be unacceptable, but understandable. During the past 13 years Microsoft has developed a great security engineering organization, advanced fuzzing and program testing, and sandboxed critical parts of Internet Explorer. Somehow Windows Defender got stuck back in 2004. Rather than taking Project Zero’s approach to the problem by continually pointing out the symptoms of this inherent flaw, let’s bring Windows Defender back to the future.

An extra bit of analysis for Clemency

This year’s DEF CON CTF used a unique hardware architecture, cLEMENCy, and only released a specification and reference tooling for it 24 hours before the final event began. cLEMENCy was purposefully designed to break existing tools and make writing new ones harder. This presented a formidable challenge, given that the timeboxed competition occurs over a single weekend.

Ryan, Sophia, and I wrote and used a Binary Ninja processor module for cLEMENCy during the event. This helped our team analyze challenges with Binary Ninja’s graph view and dataflow analyses faster than if we’d relied on the limited disassembler and debugger provided by the organizers. We are releasing this processor module today in the interest of helping others who want to try out the challenges on their own.

Binary Ninja in action during the competition

cLEMENCy creates a more equitable playing field in CTFs by degrading the ability to use advanced tools, like Manticore or a Cyber Reasoning System. It accomplishes this with architectural features such as:

  • 9-bit bytes instead of 8-bit. This makes parsing the binary difficult: the byte width of the system parsing a challenge does not match cLEMENCy’s, so the start of a byte on both systems aligns only every ninth byte.
  • It’s Middle Endian. Every other architecture stores values in memory in one of two ways: from most significant byte to least significant (Big Endian), or least significant to most significant (Little Endian). Rather than storing a value like 0x123456 as 12 34 56 or 56 34 12, Middle Endian stores it as 34 56 12.
  • Instructions have variable-length opcodes. Instructions were anywhere from 18 to 54 bits long, with opcodes ranging from 4 to 18 bits.
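Middle-endian ordering is easy to get wrong by hand. Here is a minimal Python sketch of the byte rearrangement (using ordinary 8-bit bytes for illustration; real cLEMENCy values are built from 9-bit bytes):

```python
def to_middle_endian(value):
    """Store a 3-byte value middle-endian: 0x123456 -> 34 56 12."""
    hi = (value >> 16) & 0xFF   # 0x12
    mid = (value >> 8) & 0xFF   # 0x34
    lo = value & 0xFF           # 0x56
    # middle byte first, then low, then high
    return [mid, lo, hi]
```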

This required creativity in a short timespan. With only 24 hours’ head start, we needed to work fast if we wanted something usable before the end of the four-day competition. This would have been hard to do even with an amenable architecture. Here’s how we solved these problems to write and use a disassembler during the CTF:

  • We expanded each 9-bit byte to a 16-bit short. Originally, I wrote some fancy bit masking and shifting to accomplish this, but then Ryan dropped a very simple script that did the same thing using the bitstream module. This had the side effect of doubling all memory offsets but that was trivial to correct.
  • We made liberal use of slicing in Python. Our disassembler first converted the bytes to a string of bits, then rearranged them to match the representation in the reference document. After that, we took the path of speed of implementation rather than brevity to compare the exact number of bits per opcode to identify and parse them.
  • We made instructions more verbose. The Load and Store instructions iterated over a specified number of registers from a starting point, copying each from or into a memory location. Rather than displaying the starting register and count alone, we expanded the entire list, making it much easier to understand the effects of the instruction in the disassembly at a glance.
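The first step, expanding each 9-bit byte into a 16-bit short, can be sketched in plain Python with no external modules (we store each short little-endian here, which is an arbitrary choice for illustration):

```python
def expand_9bit_bytes(data):
    """Expand each 9-bit cLEMENCy byte in `data` (a raw 8-bit byte
    string) into a 16-bit little-endian short. As noted above, this
    doubles every memory offset."""
    # concatenate the raw bytes into one long bit string
    bits = ''.join(format(b, '08b') for b in bytearray(data))
    out = bytearray()
    # consume complete 9-bit groups; trailing partial bits are dropped
    for i in range(0, len(bits) // 9 * 9, 9):
        nine = int(bits[i:i + 9], 2)
        out.append(nine & 0xFF)   # low byte
        out.append(nine >> 8)     # high bit
    return bytes(out)
```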

With an implemented processor module, we could view and interact with the challenges, define functions with automated analyses, and control how assembly instructions were represented.

We also tried to write an LLIL lifter, but it turned out not to be possible: you could have consistent register math or consistent memory addresses, but not both. The unusual three-byte register widths and the doubled memory addresses were incompatible. All was not lost, since enough instructions were liftable to locate strings with the dataflow analysis.

Binary Ninja’s graph view allowed us to rapidly analyze control flow structures

If you’d like to get started with our Binja module, you can find our Architecture and BinaryView plugins, as well as a script to pack and unpack the challenges, on our GitHub.

LegitBS has open-sourced their cLEMENCy tools. The challenges will be available shortly. We look forward to seeing how other teams dealt with cLEMENCy!

UPDATE: The challenges are now available. PPP, Chris Eagle, and Lab RATS released their processor modules for cLEMENCy.

Magic with Manticore

Manticore is a next-generation binary analysis tool with a simple yet powerful API for symbolic execution, taint analysis, and instrumentation. Using Manticore one can identify ‘interesting’ code locations and deduce inputs that reach them. This can generate inputs for improved test coverage, or quickly lead execution to a vulnerability.

I used Manticore’s power to solve Magic, a challenge from this year’s DEFCON CTF qualifying round that consists of 200 unique binaries, each with a separate key. When the correct key is entered into each binary, it prints out a sum:

enter code:
==== The meds helped 
sum is 12

Reverse engineering 200 executables in order to extract strings one at a time takes a significant amount of time. This challenge necessitates automation. As CTFs feature more of these challenges, modern tools will be required to remain competitive.

We’ll combine the powers of two such tools, Binary Ninja and Manticore, in three different solutions to showcase how you can apply them in your own work.

Challenge structure

The Magic binaries have a simple structure. There is a main function that prompts for the key, reads from stdin, runs the checker function, and then prints out the sum. The checker function loads bytes of the input string one at a time and calls a function to check each character. The character-checking functions do a comparison against a fixed character value. If it matches, the function returns a value to be summed; if it does not, the program exits.

Main, the checker function, and a single character checking function
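To make the structure concrete, here is a toy Python model of one Magic binary. The key (‘ab’) and the summands are invented for illustration; they happen to add to 12, matching the sample output above:

```python
import sys

# hypothetical key characters and the values their checkers return
KEY = [('a', 3), ('b', 9)]

def check_char(ch, expected, summand):
    """One character-checking function: compare against a fixed
    character value and return a summand, or exit the program."""
    if ch != expected:
        sys.exit(1)
    return summand

def checker(user_input):
    """Load input characters one at a time and check each one."""
    return sum(check_char(c, e, v)
               for c, (e, v) in zip(user_input, KEY))
```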

Manticore’s API is very straightforward. We will use hooks to call functions when instructions are reached, the CPU class to access registers, and the solver. The workflow involves loading a binary by providing its path and adding analysis hooks on instructions in that binary. After that, you run Manticore. As the addresses are reached, your hooks are executed, and you can reason about the state of the program.

Functions defined as hooks take a single parameter: state. The state contains functionality to create symbolic values or buffers, solve for symbolic values, and abandon paths. It also contains a member, cpu, which holds the state of the registers, and allows the reading and writing of memory and registers.


There are many ways to solve Magic. We’ll present three methods to demonstrate the flexibility of Manticore.

  1. A symbolic solution that hooks every instruction in order to discover where the character-checking functions are. When Manticore is at a character-checking function, it sets hooks to solve for the necessary value.
  2. A concrete solution that hooks the address of each character-checking function and simply reads the value from the opcodes.
  3. A symbolic solution that hooks the address of each character-checking function and solves for the value.

This is not an exhaustive list of the approaches you could take with Manticore. There is a saying, ‘there are many ways to skin a cat;’ Manticore is a cat-skinning machine.

Function addresses will be extracted using Binary Ninja. All strategies require an address for the terminating hook that prints out the solution. The latter two strategies need the addresses of the character-checking functions.

Address extraction with the Binary Ninja API

In order to extract the character-checking functions’ addresses, as well as the end_hook() address, we will be using Binary Ninja. Binary Ninja is a reverse engineering platform made for the fast-paced CTF environment. It’s user friendly and has powerful analysis features. We will use its API to locate the addresses we want. Loading the file in the Binary Ninja API is very straightforward.

bv = binja.BinaryViewType.get_view_of_file(path)

To reach the checker function, we first need the executable’s main function. We start by retrieving the entry block of the program’s entry function. We know the address of main is loaded at index 11 of that block’s LLIL. From that instruction, we sanity-check that a constant is being loaded into RDI, then extract the constant (main’s address). Calling get_function_at() with main’s address returns the main function.

def get_main(bv):
    entry_fn = bv.entry_function
    entry_block = entry_fn.low_level_il.basic_blocks[0]
    assign_rdi_main = entry_block[11]
    rdi, main_const = assign_rdi_main.operands

    if rdi != 'rdi' or main_const.operation != LLIL_CONST:
        raise Exception('Instruction `rdi = main` not found.')

    main_addr = main_const.operands[0]
    main_fn = bv.get_function_at(main_addr)
    return main_fn

The get_checker() function is similar to get_main(). It locates the address of the checker function which is called from main. Then it loads the function at that address and returns it.

1. Symbolic solution via opcode identification

Each character-checking function has identical instructions. This means we can examine the opcodes and use them as an indication of when we’ve reached a target function. We like this solution for situations in which we might not necessarily know where we need to set hooks but can identify when we’ve arrived.

  • Set a hook on every instruction.
    • Check if the opcodes match the first few instructions of the check functions.
      • Set a hook on the positive branch to solve for the register value RDI and store the value.
      • Set a hook on the negative branch to abandon that state.
      • Set a hook at the pre-branch (current instruction) to check if we know the value that was solved for. If we know the value, set RDI so we do not need to solve for it again.
  • Set a hook at a terminating instruction.

The state.abandon() call on the negative branch is crucial. It stops Manticore from reasoning over that branch, which can take a while in more complex code. Without abandonment, you’re looking at a 3-hour solve; with it, 1 minute.

def symbolic(m, end_pc):
    # hook every instruction using None as the address
    def hook_all(state):
        # read an integer at the program counter
        cpu = state.cpu
        pc = cpu.PC
        instruction = cpu.read_int(pc)

        # check the instructions match
        # cmp   rdi, ??
        # je    +0xe
        if (instruction & 0xFFFFFF == 0xff8348) and (instruction >> 32 & 0xFFFF == 0x0e74):
            # the positive branch is 0x14 bytes from the beginning of the function
            target = pc + 0x14

            # if the target address is not seen yet
            #   add to list and declare solver hook
            if target not in m.context['values']:
                set_hooks(m, pc)

    # register hook_all on every instruction (None hooks every address)
    m.add_hook(None, hook_all)

    # set the end hook to terminate execution
    end_hook(m, end_pc)

We’re using Manticore’s context here to store values. The context dictionary is actually the dictionary of a multiprocessing manager. When you start using multiple workers, you will need to use the context to share data between them.

The function set_hooks() will be reused in strategy 3: Symbolic solution via address hooking. It sets the pre-branch, positive-branch, and negative-branch hooks.

def set_hooks(m, pc):
    # pre branch
    def write(state):
        _pc = state.cpu.PC
        _target = _pc + 0x14

        if _target in m.context['values']:
            if debug:
                print 'Writing %s at %s...' % (chr(m.context['values'][_target]), hex(_pc))

            state.cpu.write_register('RDI', m.context['values'][_target])
            # print state.cpu

    m.add_hook(pc, write)

    # negative branch
    neg = pc + 0x6

    def bail(state):
        if debug:
            print 'Abandoning state at %s...' % hex(neg)

        # stop exploring this branch entirely
        state.abandon()

    m.add_hook(neg, bail)

    # target branch
    target = pc + 0x14

    def solve(state):
        _cpu = state.cpu
        _target = _cpu.PC
        _pc = _target - 0x14

        # skip solver step if known
        if _target in m.context['values']:
            return

        val = _cpu.read_register('RDI')
        solution = state.solve_one(val)

        values = m.context['values']
        values[_target] = solution
        m.context['values'] = values

        target_order = m.context['target_order']
        target_order.append(_target)
        m.context['target_order'] = target_order

        if debug:
            print 'Reached target %s. Current key: ' % (hex(_target))
            print "'%s'" % ''.join([chr(m.context['values'][ea]) for ea in m.context['target_order']])

    m.add_hook(target, solve)

Note that there is a strange update pattern with the values dictionary and target_order array. They need to be reassigned to the context dictionary in order to notify the multiprocessing manager that they have changed.

The end_hook() function is used to declare a terminating point in all three strategies. It declares a hook after all the check-character functions. The hook prints out the characters discovered, then terminates Manticore.

def end_hook(m, end_pc):
    def hook_end(state):
        print 'GOAL:'
        print "'%s'" % ''.join([chr(m.context['values'][ea]) for ea in m.context['target_order']])
        # stop all workers once the key is printed
        m.terminate()

    m.add_hook(end_pc, hook_end)

2. Concrete solution via address hooking

Since this challenge performs a simple equality check on each character, it is easy to extract the value. It would be more efficient to solve this statically. In fact, it can be solved with one hideous line of bash.

$ ls -d -1 /path/to/magic_dist/* | while read file; do echo -n "'"; grep -ao $'\x48\x83\xff.\x74\x0e' $file | while read line; do echo $line | head -c 4 | tail -c 1; done; echo "'"; done
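The same extraction reads much better in Python. This sketch scans one binary’s raw bytes for the `cmp rdi, imm8; je +0xe` pattern the one-liner greps for, and collects the immediates:

```python
import re

# 48 83 ff ?? 74 0e  ->  cmp rdi, imm8; je +0xe
CHECK_RE = re.compile(b'\x48\x83\xff(.)\x74\x0e', re.DOTALL)

def extract_key(data):
    """Return the key hidden in one Magic binary's raw bytes."""
    return ''.join(chr(bytearray(ch)[0]) for ch in CHECK_RE.findall(data))
```

Looping it over all 200 binaries recovers every key without running a single instruction.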

However, in situations like this, we can take advantage of concretization. When a value is written to a register, it is no longer symbolic. This makes the branch explicit and skips solving. It also means that the abandon hook on the negative branch is no longer necessary, since the concrete value always takes the positive branch.

  • Set a hook on each character-checking function.
    • Extract the target value from the opcodes.
    • Write that target value to the register RDI.
  • Set a hook at a terminating instruction.

def concrete_pcs(m, pcs, end_pc):
    # for each character checking function address
    for pc in pcs:
        def write(state):
            # retrieve instruction bytes
            _pc = state.cpu.PC
            instruction = state.cpu.read_int(_pc)

            # extract value from instruction
            val = instruction >> 24 & 0xFF

            # concretize RDI
            state.cpu.write_register('RDI', val)

            # store value for display at end_hook()
            _target = _pc + 0x14

            values = m.context['values']
            values[_target] = val
            m.context['values'] = values

            target_order = m.context['target_order']
            target_order.append(_target)
            m.context['target_order'] = target_order

            if debug:
                print 'Reached target %s. Current key: ' % (hex(_pc))
                print "'%s'" % ''.join([chr(m.context['values'][ea]) for ea in m.context['target_order']])

        m.add_hook(pc, write)

    end_hook(m, end_pc)

3. Symbolic solution via address hooking

It is easy to extract the value from each function statically. However, if each character-checking function did some arbitrary bit math before comparing the result, we would not want to reimplement all of those instructions for a static extraction. This is where a hybrid approach would be useful. We identify target functions statically, and then solve for the value in each function.

  • Set a hook on each character-checking function.
    • Set a hook on the positive branch to solve for the register value RDI and store the value.
    • Set a hook on the negative branch to abandon that state.
    • Set a hook at the pre-branch (current instruction) to check if we know the value that was solved for.
      • If we know the value, write it to RDI so we do not need to solve for it again.
  • Set a hook at a terminating instruction.

def symbolic_pcs(m, pcs, end_pc):
    for pc in pcs:
        set_hooks(m, pc)

    end_hook(m, end_pc)

Bringing everything together

With those three functions we have the target addresses we need. Putting everything together in main() we have a dynamic solver for the challenge Magic. You can find the full code listing here.

def main():
    path = sys.argv[1]
    m = Manticore(path)
    m.context['values'] = {}
    m.context['target_order'] = []

    pcs, end_pc = get_pcs(path)

    # symbolic(m, end_pc)
    # concrete_pcs(m, pcs, end_pc)
    symbolic_pcs(m, pcs, end_pc)

    m.run()

A run with our debug print statements enabled helps show the execution of this script. The first time the positive branch is hit, we see a Reached target [addr]. Current key: statement and the key up to that point. Sometimes the negative branch is taken and the state is abandoned. We see Writing [chr] at [addr]… when we use our previously solved values to concretize the branch. Finally, when the end_hook() is hit, we see GOAL: with our final key.

Start working smarter with Manticore

Manticore delivers symbolic execution over smaller portions of compiled code. It can very quickly discover the inputs required to reach a specific path. Combine the mechanical efficiency of symbolic execution with human intuition and enhance your capabilities. With a straightforward API and powerful features, Manticore is a must-have for anyone working in binary analysis.

Take the Manticore challenge

How about you give this a shot? We created a challenge very similar to Magic, but designed it so you can’t simply grep for the solution. Install Manticore, compile the challenge, and take a step into the future of binary analysis. Try it today! The first solution to the challenge that executes in under 5 minutes will receive a bounty from the Manticore team. (Hint: Use multiple workers and optimize.)

Thanks to @saelo for contributing the functionality required to run Magic with Manticore.

Manticore: Symbolic execution for humans

Earlier this week, we open-sourced a tool we rely on for dynamic binary analysis: Manticore! Manticore helps us quickly take advantage of symbolic execution, taint analysis, and instrumentation to analyze binaries. Parts of Manticore underpinned our symbolic execution capabilities in the Cyber Grand Challenge. As an open-source tool, we hope that others can take advantage of these capabilities in their own projects.

We prioritized simplicity and usability while building Manticore. We used minimal external dependencies and our API should look familiar to anyone with an exploitation or reversing background. If you have never used such a tool before, give Manticore a try.

Two interfaces. Multiple use cases.

Manticore comes with an easy-to-use command line tool that quickly generates new program “test cases” (or sample inputs) with symbolic execution. Each test case results in a unique outcome when running the program, like a normal process exit or crash (e.g., invalid program counter, invalid memory read/write).

The command line tool satisfies some use cases, but practical use requires more flexibility. That’s why we created a Python API for custom analyses and application-specific optimizations. Manticore’s expressive and scriptable Python API can help you answer questions like:

  • At point X in execution, is it possible for variable Y to be a specified value?
  • Can the program reach this code at runtime?
  • What is a program input that will cause execution of this code?
  • Is user input ever used as a parameter to this libc function?
  • How many times does the program execute this function?
  • How many instructions does the program execute if given this input?

In our first release, the API provides functionality to extend the core analysis engine. In addition to test case generation, the Manticore API can:

  • Abandon irrelevant states
  • Run custom analysis functions at arbitrary execution points
  • Concretize symbolic memory
  • Introspect and modify emulated machine state

Early applications

Manticore is one of the primary tools we use for binary analysis research. We used an earlier version as the foundation of our symbolic execution vulnerability hunting in the Cyber Grand Challenge. We’re using it to build a custom program analyzer for DARPA LADS.

In the month leading up to our release, we solicited ideas from the community on simple use cases to demonstrate Manticore’s features. Here are a few of our favorites:

  • Eric Hennenfent solved a simple reversing challenge. He presented two solutions: one using binary instrumentation and one using symbolic execution.
  • Yan and Mark replaced a variable with a tainted symbolic value to determine which specific comparisons user input could influence.
  • Josselin Feist generated an exploit using only the Manticore API. He instrumented a binary to find a crash and then determined constraints to call an arbitrary function with symbolic execution.
  • Cory Duplantis solved a reversing challenge from Google CTF 2016. His script is a great example of how straightforward it is to solve many CTF challenges with Manticore.

Finally, a shoutout to Murmus who made a video review of Manticore only 4 hours after we open sourced it!

It’s easy to get started

With other tools, you’d have to spend time researching their internals. With Manticore, you have a well-written interface and an approachable codebase. So, jump right in and get something useful done sooner.

Grab an Ubuntu 16.04 VM and:

# Install the system dependencies
sudo apt-get update && sudo apt-get install z3 python-pip -y
python -m pip install -U pip

# Install manticore and its dependencies
git clone https://github.com/trailofbits/manticore.git && cd manticore
sudo pip install --no-binary capstone .

You have installed the Manticore CLI and API. We included a few examples in our source repository. Let’s try the CLI first:

# Build the examples
cd examples/linux
make

# Use the Manticore CLI to discover unique test cases
manticore basic
cat mcore_*/*1.stdin | ./basic
cat mcore_*/*2.stdin | ./basic

“Basic” is a toy example that reads user input and prints one of two statements. Manticore used symbolic execution to explore `basic` and discovered its two unique inputs. It puts the sample inputs it discovers into “stdin” files that you can pipe to the binary. Next, we’ll use the API:

# Use the Manticore API to count executed instructions
cd ../script
python count_instructions.py ../linux/helloworld

The script uses the Manticore API to instrument the `helloworld` binary and count the number of instructions it executes.

Let us know what you think!

If you’re interested in reverse engineering, binary exploitation, or just want to learn about CPU emulators and symbolic execution, we encourage you to play around with it and join #manticore on our Slack for discussion and feedback. See you there!

A walk down memory lane

Admit it. Every now and then someone does something, and you think: “I also had that idea!” You feel validated — a kindred spirit has had the same intuitions, the same insights, and even drawn the same conclusions. I was reminded of this feeling recently when I came across a paper describing how to use Intel’s hardware transactional memory to enforce control-flow integrity.

Applied accounting: Enforcing control-flow integrity with checks and balances

A while back I had the same idea, wrote some code, and never published it. When I saw the paper, I secretly, and perhaps predictably, had that negative thought: “I got there first.” That’s not productive. Let’s go back in time to when I was disabused of the notion that there are any “firsts” with ideas. Don’t worry, hardware transactional memory and control-flow integrity will show up later. For now, I will tell you the story of Granary, my first binary translator.


I am a self-described binary translator. In fact, I have monograms in my suits saying as much. I dove head-first into binary translation and instrumentation as part of my Master’s degree. My colleague Akshay and I were working with two professors at the Systems Lab at the University of Toronto (UofT). We identified a problem: most bugs in the Linux kernel were actually coming from kernel modules (extensions). Opportunity struck. A former UofT student ported DynamoRIO to work inside of the Linux kernel, and that tool could help catch kernel module bugs in the act.

The path to finding actual bugs was long and twisted. Slowly finding bugs wasn’t as cool as doing it fast, and instrumenting the whole kernel to catch bugs in modules wasn’t fast. Our solution was to instrument modules only, and let the kernel run at full speed. This was challenging and ran up against core design decisions in DynamoRIO; thus, Granary was born. Granary’s claim to fame was that it could selectively instrument only parts of the kernel, leaving the rest of the kernel to run natively.

Second place, or first to lose?

With Granary came Address Watchpoints. This was a cool technique for doing fine-grained memory access instrumentation. Address watchpoints made it possible to instrument accesses to specific allocations in memory and detect errors like buffer overflows, use-before-initialization, and use-after-frees.

Address Watchpoints worked by intercepting calls to memory allocation functions (e.g. kmalloc) and embedding a taint tracking ID into the high 15 bits of returned pointers. Granary made it possible to interpose on memory access instructions that used those pointers. It was comprehensive because tainted pointers spread like radioactive dye — pointer copies and arithmetic transparently preserved any embedded IDs.
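The tagging scheme can be sketched in a few lines of Python. The bit positions here are my guess at a plausible layout, not Granary’s exact encoding:

```python
TAINT_BITS = 15
TAINT_SHIFT = 64 - TAINT_BITS      # taint ID occupies the high 15 bits

def tag(ptr, taint_id):
    """Embed a 15-bit taint ID into the high bits of a 64-bit pointer."""
    return ptr | (taint_id << TAINT_SHIFT)

def taint_of(ptr):
    """Recover the embedded taint ID (0 means untainted)."""
    return (ptr >> TAINT_SHIFT) & ((1 << TAINT_BITS) - 1)

def strip(ptr):
    """Recover the original pointer for the actual memory access."""
    return ptr & ((1 << TAINT_SHIFT) - 1)
```

Because ordinary pointer copies and arithmetic leave the high bits alone, the ID travels with the pointer for free; only the dereference sites need to strip it.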

Yet it turned out that Address Watchpoints was not novel (this is an important metric in academia). SoftBound+CETS had applied a similar technique years before. Stay positive!

Not again

Despite the lack of novelty, Address Watchpoints were practical and attacked the real problem of memory access bugs in Linux kernel modules. Granary stepped forward as the novelty, and Address Watchpoints were an application showing that Granary was useful.

I presented Address Watchpoints at HotDep in 2013, which was co-located with the SOSP conference. At the same conference, btkernel, a fast Linux kernel-space dynamic binary translator, was released. It applied many of the same techniques that made Granary novel, but beat us to a full paper publication. Darn.

Hardware transactional memory

Time to feel good again. Trail of Bits got me a sweet laptop when I joined in 2015. It had a Broadwell chip, and supported the latest x86 features like hardware transactional memory.

The concurrency tax

The stated purpose of hardware transactional memory (HTM) is to enable lock-free and highly concurrent algorithms. For example, let’s say that I want to find use-after-free bugs. A solid approach to this problem is to represent a program using a points-to graph. Use-after-free bugs exist if there is a path through the program’s points-to graph that goes from a free to a use.

Scaling this approach is challenging, but my laptop has many cores and so my imaginary doppelganger can throw some concurrency at the problem and hope for the best. Consider two threads that are working together to update the points-to graph. They propagate information from node to node, figuring out what pointers point where. If they both want to update the same node at the same time, then they need to synchronize so that one thread’s updates don’t clobber the other’s.

How do we know when synchronization is actually needed? We don’t! Instead, we need to conservatively assume that every access to a particular node requires synchronization, just in case “that other thread” rudely shows up. But points-to graphs are huge; we shouldn’t need to handle the worst case every time. That’s where HTM comes in. HTM lets us take an opportunistic approach, where threads don’t bother trying to synchronize. Instead, they try to make their own changes within a transaction, and if they both do so at the same time, then their transactions abort and they fall back to doing things the old fashioned way. This works because transactions provide failure atomicity: either the transaction succeeds and the thread’s changes are committed, or the transaction aborts, and it’s as if none of the changes ever happened.

That’s not what HTM is for

Hooray, we’re concurrency experts now. But didn’t this article start off saying something about control-flow integrity (CFI)? What the heck does concurrency have to do with that? Nothing! But HTM’s failure atomicity property has to do with CFI and more.

It turns out that HTM can be applied to unorthodox problems. For example, Intel’s HTM implementation enables a side-channel attack that can be used to defeat address space layout randomization. For a time I was also looking into similar misuses of HTM and surprise, surprise, I applied it to CFI.

Parallel discovery

Two years ago I had an idea about using HTM to enforce CFI. I had a little proof-of-concept script to go along with it, and a colleague helped me whip up an LLVM instrumentation pass that did something similar. Much to my surprise, researchers from Eurecom and UCSB recently produced a similar, more fleshed out implementation of the same idea. Here’s the gist of things.

Suppose an attacker takes control of the program counter, e.g. via ROP. Before their attack can proceed, they need to pivot and make the program change direction and go down the path to evil. The path to evil is paved with good, albeit unintended instructions. What if we live in a world without consequences? What if we let the attacker go wherever they want?

In ordinary circumstances that would be an awful idea. But attackers taking control is extraordinary, kind of like two threads simultaneously operating on the same node in a massive graph. What HTM gives us is the opportunity to do the wrong thing and be forgiven. We can begin a transaction just before a function return instruction, and end the transaction at its intended destination. Think of it like cattle herding. The only valid destinations are those that end transactions. If an attacker takes control, but doesn’t end the transaction, then the hardware will eventually run out of space and abort the transaction, and the program will seize hold of its destiny.

I believed and still think that this is a cool idea. Why didn’t I follow through? The approach that I envisioned lacked practicality. First, it wasn’t good enough as described. There are perhaps better ways to herd cattle. Aligning them within fences, for example. Protecting CFI using only failure atomicity doesn’t change the game, instead it just adds an extra requirement to ROP chains. Second, hardware transactions aren’t actually that fast — they incur full memory barriers. Executing transactions all the time kills ILP and concurrency. That makes this technique out of the question for real programs like Google Chrome. Finally, Intel’s new CET instruction set for CFI makes this approach dead on arrival. CET provides special instructions that bound control flow targets in a similar way.

If a tree falls in a forest

If I had the idea, and the Eurecom/UCSB group had the idea, then I bet some unknown third party also had the idea. Maybe Martin Abadi dreamt this up one day and didn’t tell the world. Maybe this is like operating systems, or distributed systems, or really just… systems, where similar problems and solutions seem to reappear every twenty or thirty years.

Seeing that someone else had the same idea that I had was really refreshing. It made me feel good and bad all at the same time, and reminded me of a fun time a few years ago where I felt like I was doing something clever. I’m not a special snowflake, though. There are no firsts with ideas. The Eurecom/UCSB group had an idea, then they followed it through, produced an implementation, and evaluated it. That’s what counts.

April means Infiltrate

Break out your guayabera, it’s time for Infiltrate. Trail of Bits has attended every Infiltrate and has been a sponsor since 2015. The majority of the company will be in attendance this year (18 people!) and we’ll be swapping shirts and swag again. We’re looking forward to catching up with the latest research presented there and showcasing our own contributions.

Last year we spoke on Making a Scaleable Automated Hacking System and Swift Reversing. We’re speaking again this year: Sophia d’Antoine has teamed up with a pair of Binary Ninja developers to present Be a Binary Rockstar: Next-level static analyses for vulnerability research, which expands on her previous research bringing abstract interpretation to Binary Ninja.

This year we’re bringing Manticore, the iron heart of our CGC robot, and giving attendees early access to it as we prepare to open source it. Manticore is a binary symbolic and concolic execution engine with support for x86, x86-64, and ARM. Use it to solve a simple challenge and earn yourself a Trail of Bits mug.

We don’t just attend Infiltrate to boast about our work; Infiltrate is truly a top-notch conference. Infiltrate’s talks are a sneak peek at the best talks presented at other conferences — all in one place. The lobbycon is strong, giving everyone a chance to interact with the speakers and top researchers. The conference is all-inclusive and the included food, drinks, and events are fantastic — so don’t expect to show up without a ticket and try to steal some people away for dinner.

Last year also saw the return of the NOP certification. Windows 2000 and ImmunityDbg caused much frustration among our team but resulted in an exciting competition.


2016 NOP Certification: 30 minutes fighting ImmunityDbg, 7 minutes 33 seconds to pop calc

We’re particularly excited for several talks:

Of course, we’ve seen Be a Binary Rockstar and it’s great. Infiltrate tickets are still on sale — you can see it for yourself.

Vegas is over. The real show is in Miami. See you there!