Trail of Bits @ Devcon IV Recap

We wanted to make up for missing the first three Devcons, so we participated in this year’s event through a number of talks, a panel, and two trainings. For those of you who couldn’t join us, we’ve summarized our contributions below. We hope to see you there next year.

Using Manticore and Symbolic Execution to Find Smart Contract Bugs

In this workshop, Josselin Feist showed how to use Manticore, our open-source symbolic execution engine. Manticore enables developers not only to discover bugs in their code immediately, but also to prove that their code works correctly. Josselin led 120 attendees through a variety of exercises with Manticore. Everyone left with hands-on experience in formal methods that will help them ensure their smart contracts follow their specifications.

Get the workshop’s slides and exercises

Blockchain Autopsies

In this lightning talk, Jay Little recovered and analyzed 30,000 self-destructed contracts and identified possible attacks hidden among them. Although 2 million contracts have been created on Ethereum's mainnet, few that held any value have been destroyed. These high-signal transactions are difficult to find; many are not available to a fully synchronized Ethereum node. To achieve this feat, Jay created new tools that re-process blockchain ledger data, recreate contracts with state, and analyze suspect transactions using traces and heuristics.

Filtering deployment mistakes, DoS attacks, and spam to identify suspect self-destructs

Get Jay’s slides

Current State of Security

In this panel, Kevin Seagraves facilitated a discussion about Ethereum’s current security posture. What was the biggest change in Ethereum security in the last year? How is securing smart contracts different from traditional systems? How should we think about the utility of bug bounties? Hear what this panel of experts had to say:

Security Training

In this day-long training, JP shared how we conduct our security reviews: not just our tools and tricks, but our whole approach. In addition to that knowledge, we tried to impart our school of thought regarding assessments. Far too often, we encounter the belief that audits deliver a list of bugs and, consequently, the ability to say “Our code has been audited!” (and therefore “Our code is safe!”). That’s just part of the picture. Audits should also deliver an assessment of total project risk, guidance on architecture and the development lifecycle, and someone to talk to.

We’re running the training again on December 11th in New York. Reserve yourself a seat.

Devcon Surprise

Instead of going to Devcon, Evan Sultanik stayed home and wrote an Ethereum client fuzzer. Etheno automatically seeks divergences among the world’s Ethereum clients, like the one that surfaced on Ropsten in October. Etheno automatically identified that same bug in two minutes.

We’re glad that we attended Devcon4, and look forward to participating more in future events.

We crypto now

Building and using cryptographic libraries is notoriously difficult. Even when each component of the system has been implemented correctly (quite difficult to do), improperly combining these pieces can lead to disastrous results.

Cryptography, when rolled right, forms the bedrock of any secure application. By combining cutting-edge mathematics and disciplined software engineering, modern cryptosystems guarantee data and communication privacy. Navigating these subtleties requires experts in both cryptographic software engineering and the underlying mathematics. That’s where we can help.

How we can help

Trail of Bits has released tooling and services that demonstrate our talents in diverse areas including binary lifting, symbolic execution, static analysis, and architectural side channels. As our team has grown, we’ve expanded our expertise to include cryptography. (See our recent writings about elliptic curve implementation errors in Bluetooth, post-quantum algorithms, RSA fault analysis, and verifiable delay functions for a taste.) We’d like to share that expertise more effectively, so today we’re announcing a new cryptographic services practice to augment our existing offerings.

Our ambition is to improve the cryptography ecosystem for everyone. Misuse resistant constructions (both cryptographically and via API design), rigorously tested low-level implementations, and safer languages are all prerequisites for a secure future. We will be deeply involved in each of these efforts. We’ll be publishing a variety of tools, safe cryptographic constructions we are calling recipes, and a steady supply of blog posts to contribute to the field.

Who’s behind our cryptographic services practice

  • Paul Kehrer, a principal engineer at Trail of Bits, leads the cryptographic services practice and specializes in cryptographic engineering. He has spent his career writing cryptographic software, including a publicly trusted certification authority’s technical infrastructure, key management services for a cloud provider, and contributing to open source cryptographic libraries. Paul is one of the founding members of the Python Cryptographic Authority.
  • JP Smith, a security engineer at Trail of Bits, focuses on program analysis and cryptanalysis. He is the winner of the 2017 Underhanded Crypto Contest and works on a mix of research, engineering, and assurance on technologies ranging from compilers to blockchains. He received a degree in mathematics at UIUC, where he also led the security club/CTF team and researched symbolic execution and binary translation.
  • Ben Perez, a security engineer at Trail of Bits, specializes in blockchain security and cryptography. He received a master’s degree in computer science from UC San Diego, where he focused on post-quantum cryptography and machine learning. Prior to joining the team at Trail of Bits, he worked on binary analysis tools at Galois and on the Quorum blockchain at JP Morgan, and he has published research in pure mathematics.

Get in touch

Whether you’re just trying to confirm that you’re using elliptic curves correctly or developing a novel crypto-system from scratch, we want to work with you. We are especially suited to help design and implement novel cryptographic constructions, review proposed schemes for soundness, and build tools to detect implementation errors in your environment.

If your company needs our deep expertise, then get in touch today.

How contract migration works

Smart contracts can be compromised: they can have bugs, the owner’s wallet can be stolen, or they can be trapped due to an incorrect setting. If you develop a smart contract for your business, you must be prepared to react to events such as these. In many cases, the only available solution is to deploy a new instance of the contract and migrate your data to it.

Even if you plan to develop an upgradable contract, preparing a migration procedure will spare you the dangers of relying on an upgradability mechanism.

Read this blog post for a detailed description of how contract migration works.

You need a contract migration capability

Even a bug-free contract can be hijacked with stolen private keys. The recent Bancor and KICKICO hacks showed that attackers can compromise smart contract wallets. In attacks like these, it may be impossible to fix the deployed smart contract, even if the contract has an upgradability mechanism. A new instance of the contract will need to be deployed and properly initialized to restore functionality to your users.

Therefore, all smart contract developers must integrate a migration procedure during the contract design phase and companies must be prepared to run the migration in case of compromise.

A migration has two steps:

  1. Recovering the data to migrate
  2. Writing the data to the new contract

Let’s walk through the details, costs and operational consequences.

How to perform the migration

Step 1: Data recovery

You need to read the data from a particular block on the blockchain. To recover from an incident (hack or failure), you need to use the block before the incident or filter the attacker’s actions.

If possible, pause your contract. It is more transparent for your users, and prevents attackers from taking advantage of users who are not aware of the migration.

The recovery of data will depend on your data structure.

For public variables of simple types (such as uint or address), it is trivial to retrieve the value through their getters. For private variables, you can either rely on events, or you can compute the variable’s storage slot and then use getStorageAt to retrieve its value.
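For instance, here is a hedged sketch using web3.py (our illustration, not from the original post; the token address, ABI, storage slot, and block number are placeholders, and method names differ slightly between web3.py versions):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

TOKEN = Web3.to_checksum_address("0x4142434445464748494a4b4c4d4e4f5051525354")
SNAPSHOT_BLOCK = 6500000          # last block before the incident

# Public variable: call the getter Solidity generates, pinned to the snapshot block
abi = [{"name": "totalSupply", "type": "function", "stateMutability": "view",
        "inputs": [], "outputs": [{"name": "", "type": "uint256"}]}]
token = w3.eth.contract(address=TOKEN, abi=abi)
total = token.functions.totalSupply().call(block_identifier=SNAPSHOT_BLOCK)

# Private variable: read its storage slot directly (slot 0 here, as an example)
raw = w3.eth.get_storage_at(TOKEN, 0, block_identifier=SNAPSHOT_BLOCK)
print(total, int.from_bytes(raw, "big"))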

Arrays are easily recovered, too, since the number of elements is known. You can use the techniques described above.

The situation is a bit more complex for mappings. Keys of a mapping are not stored. You need to recover them to access the values. To simplify off-chain tracking, we recommend emitting events when a value is stored in a mapping.

For ERC20 token contracts, you can find the list of all the holders by tracking the addresses in Transfer events. This process is difficult. We have prepared two options to help: in the first, you can scan the blockchain and retrieve the holders yourself; in the second, you can rely on the publicly available Google BigQuery archive of the Ethereum blockchain.
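For the first option, here is a hedged web3.py sketch (our illustration; the token address and block number are placeholders, and large block ranges usually have to be scanned in chunks because nodes limit get_logs responses):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
TOKEN = Web3.to_checksum_address("0x4142434445464748494a4b4c4d4e4f5051525354")
SNAPSHOT_BLOCK = 6500000

# keccak256("Transfer(address,address,uint256)") identifies ERC20 Transfer events
transfer_topic = Web3.keccak(text="Transfer(address,address,uint256)")

logs = w3.eth.get_logs({
    "fromBlock": 0,
    "toBlock": SNAPSHOT_BLOCK,
    "address": TOKEN,
    "topics": [transfer_topic],
})

holders = set()
for log in logs:
    # `from` and `to` are indexed, so they appear left-padded in topics 1 and 2
    holders.add("0x" + log["topics"][1].hex()[-40:])
    holders.add("0x" + log["topics"][2].hex()[-40:])

print(f"{len(holders)} addresses appear in Transfer events")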

If you are not familiar with the web3 API to extract information from the blockchain, you can use ethereum-etl, which provides a set of scripts to simplify the data extraction.

If you don’t have a synchronized blockchain, you can use the Google BigQuery API. Figure 1 shows how to collect all the addresses of a given token through BigQuery:

SELECT from_address FROM `bigquery-public-data.ethereum_blockchain.token_transfers` AS token_transfers WHERE token_transfers.token_address = '0x41424344'
UNION DISTINCT
SELECT to_address FROM `bigquery-public-data.ethereum_blockchain.token_transfers` AS token_transfers WHERE token_transfers.token_address = '0x41424344'

Figure 1: Using Google BigQuery to recover the addresses present in all Transfer events of the token at address 0x41424344

BigQuery provides access to the block number, so you can adapt this query to return the transactions up to a particular block.
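A hedged sketch of that adaptation using the google-cloud-bigquery Python client (our illustration; the token address and block number are placeholders, and GCP credentials must already be configured):

from google.cloud import bigquery

client = bigquery.Client()

QUERY = """
SELECT DISTINCT address FROM (
  SELECT from_address AS address, block_number
  FROM `bigquery-public-data.ethereum_blockchain.token_transfers`
  WHERE token_address = @token
  UNION DISTINCT
  SELECT to_address AS address, block_number
  FROM `bigquery-public-data.ethereum_blockchain.token_transfers`
  WHERE token_address = @token
) AS transfers
WHERE block_number <= @block
"""

job_config = bigquery.QueryJobConfig(query_parameters=[
    bigquery.ScalarQueryParameter("token", "STRING", "0x41424344"),   # placeholder token
    bigquery.ScalarQueryParameter("block", "INT64", 6500000),         # snapshot block
])
holders = [row.address for row in client.query(QUERY, job_config=job_config)]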

Once you recover all the holders’ addresses, you can query the balanceOf function off-chain to recover the balance associated with each holder. Filter out accounts with an empty balance.
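Continuing the sketch, a hedged example of that off-chain balanceOf query (again with placeholder names; `holders` is the set recovered above):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
TOKEN = Web3.to_checksum_address("0x4142434445464748494a4b4c4d4e4f5051525354")
SNAPSHOT_BLOCK = 6500000

ERC20_ABI = [{"name": "balanceOf", "type": "function", "stateMutability": "view",
              "inputs": [{"name": "owner", "type": "address"}],
              "outputs": [{"name": "", "type": "uint256"}]}]
token = w3.eth.contract(address=TOKEN, abi=ERC20_ABI)

balances = {}
for holder in holders:                      # `holders` comes from the Transfer-event scan
    bal = token.functions.balanceOf(Web3.to_checksum_address(holder)).call(
        block_identifier=SNAPSHOT_BLOCK)
    if bal > 0:                             # filter accounts with an empty balance
        balances[holder] = bal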

Now that we know how to retrieve the data to be migrated, let’s write the data to the new contract.

Step 2: Data writing

Once you collect the data, you need to initialize your new contract.

For simple variables, you can set the values through the constructor of the contract.

The situation is slightly more complex and costly if your data cannot be held in a single transaction. Each transaction is included in a block, which limits the total amount of gas that can be used by its transactions (the so-called “GasLimit”). If the gas cost of a transaction approaches or exceeds this limit, miners won’t include it in a block. As a result, if you have a large amount of data to migrate, you must split the migration into several transactions.

The solution is to add an initialization state to your contract, where only the owner can change the state variables, and users can’t take any action.

For an ERC20 token, the process would take these steps:

  1. Deploy the contract in the initialization state,
  2. Migrate the balances,
  3. Move the contract’s state to production.

The initialization state can be implemented with a Pausable feature and a boolean indicating the initialization state.

To reduce the cost, the migration of the balances can be implemented with a batch transfer function that lets you set multiple accounts in a single transaction:

/**
 * @dev Initialize the account of destinations[i] with values[i]. The function must only be called
 * before any transfer of tokens (duringInitialization). The caller must check that destinations
 * are unique addresses. For a large number of destinations, split the balances initialization
 * into several calls to batchTransfer.
 * @param destinations List of addresses to set the values
 * @param values List of values to set
 */
function batchTransfer(address[] destinations, uint256[] values) external onlyOwner duringInitialization {
    require(destinations.length == values.length);

    uint256 length = destinations.length;

    for (uint256 i = 0; i < length; i++) {
        balances[destinations[i]] = values[i];
        emit Transfer(address(0), destinations[i], values[i]);
    }
}

Figure 2: An example of a batchTransfer function
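As a rough illustration of splitting the migration into several transactions, the following web3.py sketch (our own, not from the original post; `w3`, `new_token`, `OWNER`, and `balances` are placeholders) drives the batchTransfer function of Figure 2 in chunks of 200 accounts, which fits comfortably under the block gas limit (see the cost estimate below):

CHUNK = 200                                        # roughly 2.4M gas per call at this size

items = sorted(balances.items())
for i in range(0, len(items), CHUNK):
    chunk = items[i:i + CHUNK]
    destinations = [address for address, value in chunk]
    values = [value for address, value in chunk]
    tx_hash = new_token.functions.batchTransfer(destinations, values).transact({"from": OWNER})
    w3.eth.wait_for_transaction_receipt(tx_hash)   # wait before sending the next batch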

Migration concerns

When migrating a contract, two major concerns arise:

  1. How much will the migration cost?
  2. What is the impact on exchanges?

Migration cost

The recovery of data is done off-chain and therefore is free. Ethereum-etl can be used locally. Google's BigQuery API offers sufficient free credit to cover its usage.

However, each transaction sent to the network and each byte stored by the new contract has a cost.

Using the batchTransfer function of Figure 2, the transfer of 200 accounts costs around 2.4M gas, which is $5.04 with an average gas price (10 Gwei) at the time of this article (use ETH Gas Station to recalculate this figure for today’s prices). Roughly speaking, you need $0.025 to migrate one balance.

If we look at the number of holders for the top five ERC20 tokens ranked by their market cap, we have:

Token Holders Cost of Migration
BNB 300,000 $7,500
VEN 45,000 $1,200
MKR 5,000 $125
OMG 660,000 $16,500
ZRX 60,000 $1,500

If you migrate additional information (such as the allowance), the cost will be higher. Even so, these amounts are low in comparison to the amount of money that these tokens represent, and the potential cost of a failed upgrade.

Exchanges

The deployment of a new contract may have operational consequences. For token-based contracts, it is important to collaborate with exchanges during a migration to be sure that the new contract will be listed and the previous one will be discarded.

Fortunately, previous token migrations (such as Augur, VeChain, and Tron) showed that exchanges are likely to cooperate.

Contract Migration versus Upgradable Contracts

In our previous blog post, we discussed a trend in smart contract design: the addition of an upgradability mechanism to the contract.

We saw several drawbacks to upgradeable contracts:

  • Detailed low-level expertise in EVM and Solidity is required. Delegatecall-based proxies require the developer to master EVM and Solidity internals.
  • Increased complexity and code size. The contract is harder to review and is more likely to contain bugs and security issues.
  • Increased number of keys to handle. The contract will need multiple authorized users (owner, upgrader). The more authorized users, the larger the attack surface.
  • Increased gas cost of each transaction. The contract becomes less competitive than the same version without an upgrade mechanism.
  • They encourage solving problems after deployment. Developers tend to test and review contracts more thoroughly if they know that they can’t be updated easily.
  • They reduce users’ trust in the contract. Users need to trust the contract’s owner, which prevents a truly decentralized system.

A contract should have an upgradable mechanism only if there is a strong argument for it, such as:

  • The contract requires frequent updates. If the contract is meant to be modified on a regular basis, the cost of regular migration may be high enough to justify an upgradability mechanism.
  • The contract requires a fixed address. The migration of a contract necessitates the use of a new address, which may break interactions with third parties (such as with other contracts).

Contract migrations achieve the benefits of an upgrade with few of the downsides. The main advantage of an upgradability mechanism over a migration is its lower cost, but that saving does not justify all of the drawbacks.

Recommendations

Prepare a migration procedure prior to contract deployment.

Use events to facilitate data tracking.

If you go for an upgradable contract, you must also prepare a migration procedure, as your keys can be compromised, or your contract can suffer from incorrect and irreversible manipulation.

Smart contracts bring a new paradigm of development. Their immutable nature requires users to re-think the way they build applications and demands thorough design and development procedures.

Contact us if you need help in creating, verifying, or applying your migration procedure.

In the meantime, join our free Ethereum security office hours if you have any questions regarding the security of your contract.

The Good, the Bad, and the Weird

Let’s automatically identify weird machines in software.

Combating software exploitation has been a cat-and-mouse game ever since the Morris worm in 1988. Attackers use specific exploitation primitives to achieve unintended code execution. Major software vendors introduce exploit mitigation to break those primitives. Back and forth, back and forth. The mitigations have certainly raised the bar for successful exploitation, but there’s still opportunity to get closer to provable security gains.

I discussed the use of weird machines to either bypass these mitigation barriers or prove a program is unexploitable as part of the DARPA Risers session to an audience of PMs and other Defense officials earlier this year at the D60 conference. Describing this problem concisely was difficult, especially to non-practitioners.

Why weird machines matter

Attackers look for weird machines to defeat modern exploit mitigations. Weird machines are partially Turing-complete snippets of code that inherently exist in “loose contracts” around functions and groups of functions. A loose contract is a piece of code that not only implements the intended program, but does it in such a way that the program undergoes more state changes than it should (i.e. the set of preconditions of a program state is larger than necessary). We want “tight contracts” for better security, where the program only changes state on exactly the intended preconditions, and no “weird” or unintended state changes can arise.

A weird machine is a snippet of code that will process valid or invalid input in a way that the programmer did not intend.

Unfortunately, loose contracts exist in most software and are byproducts of functionality such as linked-list traversals, file parsing, small single-purpose functions, and any other features that emerge from complex systems. Modern attackers leverage these unintended computations and build weird machines that bypass exploit mitigations and security checks. Let’s take a look at an example of a weird machine.

struct ListItem
{
    ListItem* TrySetItem(ListItem* new_item)
    {
        if (!m_next)
            m_next = new_item;

        return m_next;
    }

    struct ListItem *m_next = nullptr;
};

The function ListItem::TrySetItem looks to have these preconditions:

  • You must pass this and new_item in, both as pointers
  • this and new_item must be allocated and constructed ListItem instances

However, once machine code is generated the preconditions are actually:

  • The this parameter must be a pointer to allocated memory of at least 8 bytes
  • You must pass a second parameter (new_item), but it can be of any type

This is an example of a loose contract which is inherent to the way we write code. An attacker who has overwritten the m_next pointer can leverage this function to check whether memory at an arbitrary address is set: if it is, the attacker may leak the memory; if not, the attacker may set it.

A vulnerability is used to alter either program execution or data state. Execution after this is either a weird state or a state operating on unintended data.

Tightening the contract

One type of loose contract is the “execution” contract, which is the set of possible addresses that are valid indirect branches in a program.

Windows NT in 1995 is an example of a loose execution contract, where all memory marked as ‘read’ also implied ‘execute.’ This included all data in the program, not just the code. Microsoft tightened the execution contract in 2004 when it introduced non-executable data (NX, DEP) in Windows XP SP2, and improved it further in 2006 when it introduced Address Space Layout Randomization (ASLR) in Windows Vista, which randomizes the location of executable code. Control Flow Guard (CFG) arrived with Windows 8.1 and Windows 10, validating that forward-edge indirect branches target a set of approved functions.

In the chart below, it’s clear that few valid indirect destinations remain. This tight “execution” contract makes exploitation much more difficult, dramatically increasing the value of weird machines. If we could tighten the program contract even further, weird machines would become that much harder to find.

The execution contract defines areas of the program which are executable (yellow). These have diminished over the years as the contract has tightened.

What “weird” looks like

Identifying weird machines is a hard problem. How do we identify loose contracts when we don’t even know what the contract is in the first place? Automatically identifying weird machines would let us properly triage whether a vulnerability is in fact exploitable, and whether it would be unexploitable in the absence of weird machines.

One way to programmatically describe a weird machine is through Hoare triples. A Hoare triple describes how the execution of a piece of code changes the state of the computation: the preconditions that must hold before the code runs, and the postconditions that hold once it completes. When we identify weird machines, we can tighten such contracts automatically by removing them or by constraining the preconditions to exactly what the state expects. This gets us one step closer to creating a program that’s provably secure.
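As a rough illustration (our notation, not from the talk), the intended contract of the earlier TrySetItem example and the contract its machine code actually enforces can be written as two Hoare triples; the gap between the two preconditions is exactly the room in which a weird machine lives:

{ this and new_item point to constructed ListItem objects }   TrySetItem(new_item)   { return value = m_next }   (intended, tight)
{ this points to at least 8 bytes of writable memory }        TrySetItem(new_item)   { return value = m_next }   (as compiled, loose)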

Revisiting our example, we can add dynamic_casts to enforce the contract preconditions. If we analyze the snippet of code as a Hoare triple we notice that the preconditions for the function’s execution are loose, such that any address can be passed to the function. Furthermore, the post conditions are nonexistent such that once executing, the function will set or return memory regardless of program state.

struct ListItem
{
    ListItem* TrySetItem(ListItem* new_item)
    {
        if (!dynamic_cast<ListItem*>(this) ||
            !dynamic_cast<ListItem*>(new_item)) {
                // This path should not be allowed
                abort();
        }

        if (!m_next)
            m_next = new_item;

        return m_next;
    }

    struct ListItem *m_next = nullptr;
};

The dynamic_casts are runtime guards which check to validate that the function is operating on the intended pointers. This new function is decidedly not as useful in exploitation as it once was.

A Hoare triple with imprecisely defined preconditions allows for a “weird” state change to occur. Tightening these preconditions by improving input checks would make these states unattainable.

So how do we find them?

There are numerous difficult problems on the road to a solution. The first is scale. We don’t care about simple test cases; we care about real code deployed to billions of people today: browsers, operating systems, mail clients, messaging applications. Automatically identifying weird machines on these platforms is a significant challenge.

Given a set of possible execution paths and their pattern of object creation and access, we must identify program slices with specific and controllable side effects. These slices must themselves be Turing complete. The behavior of these “Turing thunks” may be different outside of their normal placement in the execution paths or with different data states. To scale our analyses, we can break the problem into subcomponents.

We start by identifying Turing thunks, analyzing their side effects, and determining their reachability. We can use data flow analysis and shape analysis to identify these “Turing thunks” and measure their side effects. The side effects of the identified weird machines must be measured to determine how candidate weird machines compose, since alterations to the global state could alter the execution of subsequent weird machines. Data flow analysis provides paths that are transformable based on controlled input. Shape analysis aids in reconstructing heap objects, their layout, and the interactions between objects. This helps determine the input constraints necessary to generate a path of execution to a weird machine, as well as the heap state before and after its execution.

Once candidates have been identified, it is possible to prioritize based on specific functionality and side effects. We can use symbolic and concolic execution to validate these candidates and machine learning to group candidates by behaviors, execution constraints, and side effects to make later querying easier.

The future of exploitation

In the end, weird machines are a fundamental tool in exploitation. As programs get more complex and mitigations pile on, the importance of weird machines only increases. Finding these Turing snippets and enumerating their properties in real-world programs will assist the next generation of exploitation and defense.

Once we can automatically identify weird machines we will have the ability to remove these weird states, and determine the degree of exploitability of the program. We may also be able to prove a specific vulnerability is unexploitable.

Part of the solution is better terminology, which still needs to mature. The other part is further research into the problem space. While there was interest in the topic at D60, I hope DARPA invests in this area in the future.

The tooling and systems to identify and classify weird machines doesn’t yet exist. We still have a lot to do, but the building blocks are there. With them we’ll come closer to solving the problems of tomorrow.

If you want to learn more about this area of research, I suggest you start with these publications:

A Guide to Post-Quantum Cryptography

For many high-assurance applications such as TLS traffic, medical databases, and blockchains, forward secrecy is absolutely essential. It is not sufficient to prevent an attacker from immediately decrypting sensitive information. Here the threat model encompasses situations where the adversary may dedicate many years to the decryption of ciphertexts after their collection. One way forward secrecy might be broken is that a combination of increased computing power and number-theoretic breakthroughs could make attacking current cryptography tractable. However, unless someone finds a polynomial-time algorithm for factoring large integers, this risk is minimal for current best practices. We should be more concerned about the successful development of a quantum computer, since such a breakthrough would render most of the cryptography we use today insecure.

Quantum Computing Primer

Quantum computers are not just massively parallel classical computers. It is often thought that since a quantum bit can occupy both 0 and 1 at the same time, an n-bit quantum computer can be in 2^n states simultaneously and can therefore solve NP-complete problems extremely quickly. This is not the case, because measuring a quantum state destroys much of the original information. For example, a quantum state can encode information about both an object’s momentum and its location, but any measurement of the momentum destroys information about the location and vice versa; this is the Heisenberg uncertainty principle. Successful quantum algorithms therefore consist of a series of transformations of quantum bits such that, at the end of the computation, measuring the state of the system will not destroy the needed information. In fact, it has been shown that there cannot exist a quantum algorithm that simultaneously attempts all solutions to an NP-complete problem and outputs a correct answer. In other words, any quantum algorithm for solving hard classical problems must exploit the specific structure of the problem at hand. Today, there are two such algorithms that can be used in cryptanalysis.

The ability to quickly factor large numbers would break both RSA and discrete log-based cryptography. The fastest algorithm for integer factorization is the general number field sieve, which runs in sub-exponential time. However, in 1994 Peter Shor developed a quantum algorithm (Shor’s algorithm) for integer factorization that runs in polynomial time, and therefore would be able to break any RSA or discrete log-based cryptosystem (including those using elliptic curves). This implies that all widely used public key cryptography would be insecure if someone were to build a quantum computer.

The second is Grover’s algorithm, which is able to invert functions in O(√n) time. This algorithm would reduce the security of symmetric key cryptography by a root factor, so AES-256 would only offer 128-bits of security. Similarly, finding a pre-image of a 256-bit hash function would only take 2128 time. Since increasing the security of a hash function or AES by a factor of two is not very burdensome, Grover’s algorithm does not pose a serious threat to symmetric cryptography. Furthermore, none of the pseudorandom number generators suggested for cryptographic use would be affected by the invention of a quantum computer, other than perhaps the O(√n) factor incurred by Grover’s algorithm.

Types of Post-Quantum Algorithms

Post-quantum cryptography is the study of cryptosystems which can be run on a classical computer, but are secure even if an adversary possesses a quantum computer. Recently, NIST initiated a process for standardizing post-quantum cryptography and is currently reviewing first-round submissions. The most promising of these submissions included cryptosystems based on lattices, isogenies, hash functions, and codes.

Before diving more deeply into each class of submissions, we briefly summarize the tradeoffs inherent in each type of cryptosystem with comparisons to current (not post-quantum) elliptic-curve cryptography. Note that codes and isogenies are capable of producing digital signatures, but no such schemes were submitted to NIST.

                    Signatures    Key Exchange    Fast?
Elliptic Curves     64 bytes      32 bytes        yes
Lattices            2.7 kb        1 kb            yes
Isogenies           n/a           330 bytes       no
Codes               n/a           1 mb            yes
Hash functions      41 kb         n/a             yes

Table 1: Comparison of classical ECC vs post-quantum schemes submitted to NIST

In terms of security proofs, none of the above cryptosystems reduce to NP-hard (or NP-complete) problems. In the case of lattices and codes, these cryptosystems are based on slight modifications of NP-hard problems. Hash-based constructions rely on the existence of good hash functions and make no other cryptographic assumptions. Finally, isogeny-based cryptography is based on a problem that is conjectured to be hard, but is not similar to an NP-hard problem or a prior cryptographic assumption. It’s worth mentioning, however, that just as we cannot prove that any classical cryptosystem is unbreakable in polynomial time (since P could equal NP), it could be the case that problems thought to be difficult for quantum computers turn out not to be. Furthermore, a cryptosystem not reducing to some NP-hard or NP-complete problem shouldn’t be a mark against it, per se, since integer factorization and the discrete log problem are not believed to be NP-complete.

Lattices

Of all the approaches to post-quantum cryptography, lattices are the most actively studied and the most flexible. They have strong security reductions and are capable of key exchanges, digital signatures, and far more sophisticated constructions like fully homomorphic encryption. Despite the extremely complex math needed in both optimizations and security proofs for lattice cryptosystems, the foundational ideas only require basic linear algebra. Suppose you have a system of linear equations of the form

A·x = b

Solving for x is a classic linear algebra problem that can be solved quickly using Gaussian elimination. Another way to think about this is that we have a mystery function,

f(a) = a·x

where given a vector a, we see the result of a·x without knowing x. After querying this function enough times we can learn f in a short amount of time (by solving the system of equations above). This way we can reframe a linear algebra problem as a machine learning problem.

Now, suppose we introduce a small amount of noise to our function, so that after multiplying x and a, we add an error term e and reduce the whole thing modulo a (medium-sized) prime q. Then our noisy mystery function looks like

f(a) = a·x + e (mod q)

Learning this noisy mystery function has been mathematically proven to be extremely difficult. The intuition is that at each step in the Gaussian elimination procedure we used in the non-noisy case, the error term gets bigger and bigger until it eclipses all useful information about the function. In the cryptographic literature this is known as the Learning With Errors problem (LWE).

The reason cryptography based on LWE gets called lattice-based cryptography is because the proof that LWE is hard relies on the fact that finding the shortest vector in something called a lattice is known to be NP-Hard. We won’t go into the mathematics of lattices in much depth here, but one can think of lattices as a tiling of n-dimensional space

Lattices are represented by coordinate vectors. In the example above, any point in the lattice can be reached by combining e1, e2, and e3 (via normal vector addition). The shortest vector problem (SVP) says: given a lattice, find the element whose length as a vector is shortest. The intuitive reason this is difficult is because not all coordinate systems for a given lattice are equally easy to work with. In the above example, we could have instead represented the lattice with three coordinate vectors that were extremely long and close together, which makes finding vectors close to the origin more difficult. As a matter of fact, there is a canonical way to find the “worst possible” representation of a lattice. When using such a representation, the shortest vector problem is known to be NP-hard.

Before getting into how to use LWE to make quantum-resistant cryptography, we should point out that LWE itself is not NP-Hard. Instead of reducing directly to SVP, it reduces to an approximation of SVP that is actually conjectured to not be NP-Hard. Nonetheless, there is currently no polynomial (or subexponential) algorithm for solving LWE.

Now let’s use the LWE problem to create an actual cryptosystem. The simplest scheme was created by Oded Regev in his original paper proving the hardness of the LWE problem. Here, the secret key is an n-dimensional vector with integer entries mod q, i.e. the LWE secret mentioned above. The public key is the matrix A from the previous discussion, along with a vector of outputs from the LWE function

An important property of this public key is that when it’s multiplied by the vector (-sk,1), we get back the error term, which is roughly 0.

To encrypt a bit of information m, we take the sum of random columns of A and encode m in the last coordinate of the result by adding 0 if m is 0 and q/2 if m is 1. In other words, we pick a random vector x of 0s and 1s, and compute the ciphertext

c = (A·x, b·x + m·q/2) (mod q)

Intuitively, we’ve just evaluated the LWE function (which we know is hard to break) and encoded our bit in the output of this function.

Decryption works because knowing the LWE secret allows the recipient to get back the message, plus a small error term:

(-sk, 1)·c = m·q/2 + e·x (mod q)

When the error distribution is chosen correctly, it will never distort the message by more than q/4. The recipient can test whether the output is closer to 0 or to q/2 mod q and decode the bit accordingly.
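To make the scheme concrete, here is a toy sketch in Python (our own illustration, not from the original post): the parameters are far too small to be secure and the error distribution is simplified, but the key generation, encryption, and decryption steps mirror the description above.

# Toy sketch of Regev's LWE encryption scheme. Parameters are illustrative only.
import numpy as np

n, m, q = 8, 64, 3329                 # secret dimension, number of samples, modulus
rng = np.random.default_rng(0)

def keygen():
    sk = rng.integers(0, q, n)        # the LWE secret
    A = rng.integers(0, q, (n, m))    # columns of A are the LWE sample vectors
    e = rng.integers(-2, 3, m)        # small error terms
    b = (sk @ A + e) % q              # noisy outputs of the LWE function
    return sk, (A, b)

def encrypt(pk, bit):
    A, b = pk
    x = rng.integers(0, 2, m)                    # random 0/1 selection vector
    c1 = (A @ x) % q                             # sum of the selected columns of A
    c2 = (b @ x + bit * (q // 2)) % q            # encode the bit in the last coordinate
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    d = (c2 - sk @ c1) % q                       # = bit*(q/2) + small error (mod q)
    return int(min(d, q - d) > q // 4)           # closer to q/2 -> 1, closer to 0 -> 0

sk, pk = keygen()
assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1))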

A major problem with this system is that it has very large keys. To encrypt just one bit of information requires public keys of size n² in the security parameter. However, an appealing aspect of lattice cryptosystems is that they are extremely fast.

Since Regev’s original paper there has been a massive body of work around lattice-based cryptosystems. A key breakthrough for improving their practicality was the development of Ring-LWE, which is a variant of the LWE problem where keys are represented by certain polynomials. This has led to a quadratic decrease in key sizes and sped up encryption and decryption to use only n*log(n) operations (using Fast Fourier techniques).

Among the many lattice-based cryptosystems being considered for the NIST PQC standard, two that are especially worth mentioning are the Crystals constructions, Kyber and Dilithium.

Kyber is a key-encapsulation mechanism (KEM) which follows a similar structure to the system outlined above, but uses some fancy algebraic number theory to get even better performance than Ring-LWE. Key sizes are approximately 1 kb for reasonable security parameters (still big!), but encryption and decryption take on the order of 0.075 ms. Considering this speed was achieved in software, the Kyber KEM seems promising for post-quantum key exchange.

Dilithium is a digital signature scheme based on similar techniques to Kyber. Its details are beyond the scope of this blog post but it’s worth mentioning that it too achieves quite good performance. Public key sizes are around 1kb and signatures are 2kb. It is also quite performant. On Skylake processors the average number of cycles required to compute a signature was around 2 million. Verification took 390,000 cycles on average.

Codes

The study of error correcting codes has a long history in the computer science literature dating back to the ground-breaking work of Richard Hamming and Claude Shannon. While we cannot even begin to scratch the surface of this deep field in a short blog post, we give a quick overview.

When communicating binary messages, errors can occur in the form of bit flips. Error-correcting codes provide the ability to withstand a certain number of bit flips at the expense of message compactness. For example, we could protect against single bit flips by encoding 0 as 000 and 1 as 111. That way the receiver can determine that 101 was actually a 111, or that 001 was a 0 by taking a majority vote of the three bits. This code cannot correct errors where two bits are flipped, though, since 111 turning into 001 would be decoded as 0.

The most prominent type of error-correcting codes are called linear codes, and can be represented by k x n matrices, where k is the length of the original messages and n is the length of the encoded message. In general, it is computationally difficult to decode messages without knowing the underlying linear code. This hardness underpins the security of the McEliece public key cryptosystem.

At a high level, the secret key in the McEliece system is a random code (represented as a matrix G) from a class of codes called Goppa codes. The public key is the matrix SGP, where S is an invertible matrix with binary entries and P is a permutation matrix. To encrypt a message m, the sender computes c = m(SGP) + e, where e is a random error vector with precisely the number of errors the code is able to correct. To decrypt, we compute cP⁻¹ = mSG + eP⁻¹, so that mSG is a codeword of G and the permuted error term eP⁻¹ can be corrected by the code’s decoding algorithm. The message can then be easily recovered by computing (mS)S⁻¹.

Like lattices, code-based cryptography suffers from the fact that keys are large matrices. Using the recommended security parameters, McEliece public keys are around 1 mb and private keys are 11 kb. There is currently ongoing work trying to use a special class of codes called quasi-cyclic moderate density parity-check codes that can be represented more succinctly than Goppa codes, but the security of these codes is less well studied than Goppa codes.

Isogenies

The field of elliptic-curve cryptography is somewhat notorious for using quite a bit of arcane math. Isogenies take this to a whole new level. In elliptic-curve cryptography we use a Diffie-Hellman type protocol to acquire a shared secret, but instead of raising group elements to a certain power, we walk through points on an elliptic curve. In isogeny-based cryptography, we again use a Diffie-Hellman type protocol but instead of walking through points on elliptic curve, we walk through a sequence of elliptic curves themselves.

An isogeny is a function that transforms one elliptic curve into another in such a way that the group structure of the first curve is reflected in the second. For those familiar with group theory, it is a group homomorphism with some added structure dealing with the geometry of each curve. When we restrict our attention to supersingular elliptic curves (which we won’t define here), each curve is guaranteed to have a fixed number of isogenies from it to other supersingular curves.

Now, consider the graph created by examining all the isogenies of this form from our starting curve, then all the isogenies from those curves, and so on. This graph turns out to be highly structured in the sense that if we take a random walk starting at our first curve, the probability of hitting a specific other curve is negligibly small (unless we take exponentially many steps). In math jargon, we say that the graph generated by examining all these isogenies is an expander graph (and also Ramanujan). This property of expansion is precisely what makes isogeny-based cryptography secure.

For the Supersingular Isogeny Diffie-Hellman (SIDH) scheme, secret keys are a chain of isogenies and public keys are curves. When Alice and Bob combine this information, they acquire curves that are different, but have the same j-invariant. It’s not so important for the purposes of cryptography what a j-invariant is, but rather that it is a number that can easily be computed by both Alice and Bob once they’ve completed the key exchange.

Isogeny-based cryptography has extremely small key sizes compared to other post-quantum schemes, using only 330 bytes for public keys. Unfortunately, of all the techniques discussed in this post, they are the slowest, taking between 11-13 ms for both key generation and shared secret computation. They do, however, support perfect forward secrecy, which is not something other post-quantum cryptosystems possess.

Hash-Based Signatures

There are already many friendly introductions to hash-based signatures, so we keep our discussion of them fairly high-level. In short, hash signatures use inputs to a hash function as secret keys and outputs as public keys. These keys only work for one signature though, as the signature itself reveals parts of the secret key. This extreme inefficiency of hash-based signatures led to the use of Merkle trees to reduce space consumption (yes, the same Merkle trees used in Bitcoin).
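As a toy illustration of this idea (our own sketch, not one of the NIST submissions), here is a Lamport one-time signature in Python: the secret key is a list of hash preimages, the public key is their hashes, and signing reveals one preimage per message bit, which is exactly why each key must be used only once.

# Toy Lamport one-time signature; for illustration only.
import hashlib, os

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]   # preimages
    pk = [(H(a), H(b)) for a, b in sk]                             # their hashes
    return sk, pk

def sign(sk, msg):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][bit] for i, bit in enumerate(bits)]              # reveal one preimage per bit

def verify(pk, msg, sig):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][bit] for i, (s, bit) in enumerate(zip(sig, bits)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum")
assert verify(pk, b"post-quantum", sig)     # the key must never be reused after this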

Unfortunately, it is not possible to construct a KEM or a public key encryption scheme out of hashes. Therefore hash-based signatures are not a full post-quantum cryptography solution. Furthermore, they are not space efficient; one of the more promising signature schemes, SPHINCS, produces signatures which are 41kb and public/private keys that are 1kb. On the other hand, hash-based schemes are extremely fast since they only require the computation of hash functions. They also have extremely strong security proofs, based solely on the assumption that there exist hash functions that are collision-resistant and preimage resistant. Since nothing suggests current widely used hash functions like SHA3 or BLAKE2 are vulnerable to these attacks, hash-based signatures are secure.

Takeaways

Post-quantum cryptography is an incredibly exciting area of research that has seen an immense amount of growth over the last decade. While the four types of cryptosystems described in this post have received lots of academic attention, none have been approved by NIST and as a result are not recommended for general use yet. Many of the schemes are not performant in their original form, and have been subject to various optimizations that may or may not affect security. Indeed, several attempts to use more space-efficient codes for the McEliece system have been shown to be insecure. As it stands, getting the best security from post-quantum cryptosystems requires a sacrifice of some amount of either space or time. Ring lattice-based cryptography is the most promising avenue of work in terms of flexibility (both signatures and KEM, also fully homomorphic encryption), but the assumptions that it is based on have only been studied intensely for several years. Right now, the safest bet is to use McEliece with Goppa codes since it has withstood several decades of cryptanalysis.

However, each use case is unique. If you think you might need post-quantum cryptography, get in touch with your friendly neighborhood cryptographer. Everyone else ought to wait until NIST has finished its standardization process.

Slither – a Solidity static analysis framework

Slither is the first open-source static analysis framework for Solidity. Slither is fast and precise; it can find real vulnerabilities in a few seconds without user intervention. It is highly customizable and provides a set of APIs to inspect and analyze Solidity code easily. We use it in all of our security reviews. Now you can integrate it into your code-review process.

We are open sourcing the core analysis engine of Slither. This core provides advanced static-analysis features, including an intermediate representation (SlithIR) with taint tracking capabilities on top of which complex analyses (“detectors”) can be built. We have built many detectors, including ones that detect reentrancy and suicidal contracts. We are open sourcing some as examples.

If you are a smart-contract developer, a security expert, or an academic researcher, then you will find Slither invaluable.

Built for continuous integration

Slither has a simple command line interface. To run all of its detectors on a Solidity file, this is all you need:

$ slither contract.sol

You can integrate Slither into your development process without any configuration. Run it on each commit to check that you are not adding new bugs.

Helps automate security reviews

Slither provides an API to inspect Solidity code via custom scripts. We use this API to rapidly answer unique questions about the code we’re reviewing. We have used Slither to:

  • Identify code that can modify a variable’s value.
  • Isolate the conditional logic statements that are influenced by a particular variable’s value.
  • Find other functions that are transitively reachable as a result of a call to a particular function.

For example, the following script will show which function(s) in myContract write to the state variable myVar:

# function_writing.py
import sys
from slither.slither import Slither

if len(sys.argv) != 2:
    print('python function_writing.py file.sol')
    exit(-1)

# Init slither
slither = Slither(sys.argv[1])

# Get the contract
contract = slither.get_contract_from_name('myContract')

# Get the variable
myVar = contract.get_state_variable_from_name('myVar')

# Get the functions writing the variable
funcs_writing_myVar = contract.get_functions_writing_to_variable(myVar)

# Print the result
print('Functions that write to "myVar": {}'.format([f.name for f in funcs_writing_myVar]))

Figure 1: Slither API Example
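As another hedged example of the same API (our illustration, mirroring Figure 1; attribute names may vary slightly across Slither versions), the following script lists the conditional statements whose conditions read the state variable myVar:

# if_conditions.py
from slither.slither import Slither

slither = Slither('contract.sol')
contract = slither.get_contract_from_name('myContract')
myVar = contract.get_state_variable_from_name('myVar')

for function in contract.functions:
    for node in function.nodes:
        # node.contains_if() flags conditional nodes; state_variables_read lists
        # the state variables read by the node's expression
        if node.contains_if() and myVar in node.state_variables_read:
            print('{}: {}'.format(function.name, node.expression))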

Read the API documentation and the examples to start harnessing Slither.

Aids in understanding contracts

Slither comes with a set of predefined “printers” which show high-level information about the contract. We included four that work out-of-the-box to print essential security information: a contract summary, a function summary, a graph of inheritance, and an authorization overview.

1. Contract summary printer

Gives a quick summary of the contract, showing the functions and their visibility:

Figure 2: Contract Summary Printer

2. Function summary printer

Shows useful information for each function, such as the state variables read and written, or the functions called:

Figure 3: Function Summary Printer

3. Inheritance printer

Outputs a graph highlighting the inheritance dependencies of all the contracts:

Figure 4: Inheritance Printer

4. Authorization printer

Shows what a user with privileges can do on the contract:

Figure 5: Authorization Printer

See Slither’s documentation for information about adding your own printers.

A foundation for research

Slither uses its own intermediate representation, SlithIR, to build innovative vulnerability analyses on Solidity. It provides access to the CFG of the functions, the inheritance of the contracts, and lets you inspect Solidity expressions.

Many academic tools, such as Oyente or MAIAN, advanced the state of the art when they were released. However, each academic team had to invent their own framework, built for only the limited scope of their particular area of interest. Maintenance quickly became a challenge. In contrast, Slither is a generic framework. Because it’s capable of the widest possible range of security analyses, it is regularly maintained and used by our open source community.

If you are an academic researcher, don’t spend time and effort parsing and recovering information from smart contracts. Prototype your new innovations on top of Slither, complete your research sooner, and ensure it maintains its utility over time.

It’s easy to extend Slither’s capabilities with new detector plugins. Read the detector documentation to start writing your own.

Next steps

Slither can find real vulnerabilities in a few seconds with minimal or no user interaction. We use it on all of our Solidity security reviews. You should too!

Many of our ongoing projects will improve Slither, including:

  • API enhancements: Now that we have open sourced the core, we intend to provide the most effective static analysis framework possible.
  • More precise built-in analyses: We plan to make several new layers of information, such as value tracking, accessible to the API.
  • Toolchain integration: We plan to combine Slither with Manticore, Echidna, and Truffle to automate the triage of issues.

Questions about Slither’s API and its core framework? Join the Empire Hacking Slack. Need help integrating Slither into your development process? Want access to our full set of detectors? Contact us.

Introduction to Verifiable Delay Functions (VDFs)

Finding randomness on the blockchain is hard. A classic mistake developers make when trying to acquire a random value on-chain is to use quantities like future block hashes, block difficulty, or timestamps. The problem with these schemes is that they are vulnerable to manipulation by miners. For example, suppose we are trying to run an on-chain lottery where users guess whether the hash of the next block will be even or odd. A miner then could bet that the outcome is even, and if the next block they mine is odd, discard it. Here, tossing out the odd block slightly increases the miner’s probability of winning the lottery. There are many real-world examples of “randomness” being generated via block variables, but they all suffer from the unavoidable problem that it is computationally easy for observers to determine how choices they make will affect the randomness generated on-chain.

Another related problem is electing leaders and validators in proof of stake protocols. In this case it turns out that being able to influence or predict randomness allows a miner to affect when they will be chosen to mine a block. There are a wide variety of techniques for overcoming this issue, such as Ouroboros’s verifiable secret-sharing scheme. However, they all suffer from the same pitfall: a non-colluding honest majority must be present.

In both of the above scenarios it is easy for attackers to see how different inputs affect the result of a pseudorandom number generator. This led Boneh et al. to define verifiable delay functions (VDFs). VDFs are functions that require a moderate amount of sequential computation to evaluate, but once a solution is found, it is easy for anyone to verify that it is correct. Think of VDFs as a time delay imposed on the output of some pseudorandom generator. This delay prevents malicious actors from influencing the output of the pseudorandom generator, since all inputs will be finalized before anyone can finish computing the VDF.

When used for leader selection, VDFs offer a substantial improvement over verifiable random functions. Instead of requiring a non-colluding honest majority, VDF-based leader selection only requires the presence of any honest participant. This added robustness is due to the fact that no amount of parallelism will speed up the VDF, and any non-malicious actor can easily verify that anyone else’s claimed VDF output is accurate.

VDF Definitions

Given a delay time t, a verifiable delay function f must be both

  1. Sequential: anyone can compute f(x) in t sequential steps, but no adversary with a large number of processors can distinguish the output of f(x) from random in significantly fewer steps
  2. Efficiently verifiable: Given the output y, any observer can verify that y = f(x) in a short amount of time (specifically log(t)).

In other words, a VDF is a function which takes exponentially more time to compute (even on a highly parallel processor) than it does to verify on a single processor. Also, the probability of a verifier accepting a false VDF output must be extremely small (chosen by a security parameter λ during initialization). The condition that no one can distinguish the output of f(x) from random until the final result is reached is essential. Suppose we are running a lottery where users submit 16-bit integers and the winning number is determined by giving a seed to a VDF that takes 20 min to compute. If an adversary can learn 4 bits of the VDF output after only 1 min of VDF computation, then they might be able to alter their submission and boost their chance of success by a factor of 16!

Before jumping into VDF constructions, let’s examine why an “obvious” but incorrect approach to this problem fails. One such approach would be repeated hashing. If the computation of some hash function h takes t steps to compute, then using f = h(h(...h(x))) as a VDF would certainly satisfy the sequential requirement above. Indeed, it would be impossible to speed this computation up with parallelism since each application of the hash depends entirely on the output of the previous one. However, this does not satisfy the efficiently verifiable requirement of a VDF. Anyone trying to verify that f(x) = y would have to recompute the entire hash chain. We need the evaluation of our VDF to take exponentially more time to compute than to verify.

VDF Candidates

There are currently three candidate constructions that satisfy the VDF requirements. Each one has its own potential downsides. The first was outlined in the original VDF paper by Boneh, et al. and uses injective rational maps. However, evaluating this VDF requires a somewhat large amount of parallel processing, leading the authors to refer to it as a “weak VDF.” Later, Pietrzak and Wesolowski independently arrived at extremely similar constructions based on repeated squaring in groups of unknown order. At a high level, here’s how the Pietrzak scheme works.

  1. To set up the VDF, choose a time parameter T, a finite abelian group G of unknown order, and a hash function H from bytes to elements of G.
  2. Given an input x, let g = H(x); evaluate the VDF by computing y = g^(2^T)

The repeated squaring computation is not parallelizable and reveals nothing about the end result until the last squaring. These properties are both due to the fact that we do not know the order of G. That knowledge would allow attackers to use group theory based attacks to speed up the computation.

Now, suppose someone asserts that the output of the VDF is some number z (which may or may not be equal to y). This is equivalent to showing that z = v^(2^(T/2)) and v = g^(2^(T/2)). Since both of the previous equations have the same exponent, they can be verified simultaneously by checking a random linear combination, e.g., v^r·z = (g^r·v)^(2^(T/2)), for a random r in {1, …, 2^λ} (where λ is the security parameter). More formally, the prover and verifier perform the following interactive proof scheme:

  1. The prover computes v = g^(2^(T/2)) and sends v to the verifier
  2. The verifier sends a random r in {1, …, 2^λ} to the prover
  3. Both the prover and verifier compute g1 = g^r·v and z1 = v^r·z
  4. The prover and verifier recursively prove that z1 = g1^(2^(T/2))

The above scheme can be made non-interactive using a technique called the Fiat-Shamir heuristic. Here, the prover generates the challenge r at each level of the recursion by hashing (G, g, z, T, v) and appending v to the proof. In this scenario, the proof contains log2(T) group elements and requires approximately (1 + 2/√T) · T multiplications to generate.
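
A sketch of how the challenge could be derived non-interactively, hashing the tuple described above; the encoding and the choice to identify the group G by its modulus N are our own:

```python
import hashlib

def fiat_shamir_challenge(N: int, g: int, z: int, T: int, v: int, lam: int = 128) -> int:
    """Derive r by hashing (G, g, z, T, v), with G identified by its modulus N."""
    data = b"|".join(str(val).encode() for val in (N, g, z, T, v))
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") % (2 ** lam) + 1
```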

Security Analysis of Pietrzak Scheme

The security of Pietrzak’s scheme relies on the low order assumption: it is computationally infeasible for an adversary to find an element of low order in the group being used by the VDF. To see why finding an element of low order breaks the scheme, first assume that a malicious prover Eve found some element m of small order d. Then Eve sends z · m to the verifier (where z is the valid output). The invalid output will be accepted with probability 1/d, as the short simulation after the list below illustrates, since

  1. When computing the second step of the recursion, we will have the base element g1 = g^r · v, where v = g^(2^(T/2)) · m, and need to show that g1^(2^(T/2)) = v^r · (z · m)
  2. The m term on the left-hand side is m^(2^(T/2))
  3. The m term on the right-hand side is m^(r+1)
  4. Since m has order d, these two terms are equal exactly when r + 1 = 2^(T/2) mod d, which happens with probability 1/d
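
A quick numeric sanity check of that argument, with toy parameters of our own choosing: in an RSA group the element -1 always has order d = 2, and roughly half of the forged proofs below survive the first round of the protocol:

```python
import secrets

def forged_first_round_accepted(g: int, true_z: int, m: int, T: int, N: int, lam: int = 16) -> bool:
    """Eve claims z*m and sends v*m as the midpoint; did the fold stay consistent?"""
    half = T // 2
    z_claim = (true_z * m) % N
    v_forged = (pow(g, pow(2, half), N) * m) % N
    r = secrets.randbelow(2 ** lam) + 1
    g1 = (pow(g, r, N) * v_forged) % N
    z1 = (pow(v_forged, r, N) * z_claim) % N
    # If the folded claim is now true, an honest continuation of the
    # protocol will convince the verifier to accept the forged output.
    return z1 == pow(g1, pow(2, half), N)

N = 101 * 103          # toy RSA modulus (never use primes this small)
m = N - 1              # -1 mod N, an element of order d = 2
T = 8
g = 5
true_z = pow(g, pow(2, T), N)
trials = 1000
accepted = sum(forged_first_round_accepted(g, true_z, m, T, N) for _ in range(trials))
print(accepted / trials)  # ~0.5, i.e., 1/d
```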

To see a full proof of why the low order assumption is both necessary and sufficient to show Pietrzak’s scheme is sound, see Boneh’s survey of recent VDF constructions.

The security analysis assumes that one can easily generate a group of unknown order that satisfies the low order assumption. We will see below that no groups currently known to satisfy these constraints are amenable to a trustless setup, i.e., a setup in which no party can subvert the VDF protocol.

For example, let’s try to use everyone’s favorite family of groups: the integers modulo the product of two large primes (RSA groups). These groups have unknown order, since finding the order requires factoring a large integer. However, they do not satisfy the low order assumption. Indeed, the element -1 is always of order 2. This situation can be remedied by taking the quotient of an RSA group G by the subgroup {1, -1}. In fact, if the modulus of G is a product of strong primes (primes p such that (p-1)/2 is also prime), then after taking the aforementioned quotient there are no elements of low order other than 1.
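
One way to picture that quotient is to identify each x with -x and always work with a canonical representative; a tiny sketch of that convention (the representation choice is ours):

```python
def canon(x: int, N: int) -> int:
    """Canonical representative of the pair {x, -x} in (Z/N)^* / {1, -1}."""
    x %= N
    return min(x, N - x)

def quotient_mul(a: int, b: int, N: int) -> int:
    """Multiply two quotient-group elements and re-canonicalize."""
    return canon(a * b, N)

# -1 and 1 now map to the same element, so the order-2 element disappears.
```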

This analysis implies that RSA groups are secure for Pietrzak’s VDF, but there’s a problem. To generate an RSA group, someone has to know the factorization of the modulus N. Devising a trustless RSA group selection protocol, one in which no one knows the factorization of the modulus N, is therefore an interesting and important open problem in this area.

Another avenue of work towards instantiating Pietrzak’s scheme involves using the class group of an imaginary quadratic number field. This family of groups does not suffer from the above issue, in which group selection requires a trusted third party. Simply choosing a large negative prime as the discriminant (with several caveats) will generate a group whose order is computationally infeasible to determine, even for the person who chose the prime. However, unlike RSA groups, the difficulty of finding low-order elements in class groups of imaginary quadratic number fields is not well studied and would require more investigation before any such scheme could be used.

State of VDFs and Open Problems

As mentioned in the previous section, both the Pietrzak and Wesolowski schemes rely on generating a group of unknown order. Doing so without a trusted party is difficult in the case of RSA groups, but class groups seem to be a somewhat promising avenue of work. Furthermore, the Wesolowski scheme assumes the existence of groups that satisfy something called the adaptive root assumption, which is not well studied in the mathematical literature. There are many other open problems in this area, including constructing quantum resistant VDFs, and the potential for ASICs to ruin the security guarantees of VDF constructions in practice.

As for industry adoption of VDFs, several companies in the blockchain space are trying to use VDFs in consensus algorithms. Chia, for example, uses the repeated squaring technique outlined above, and is currently running a competition for the fastest implementation of this scheme. The Ethereum Foundation also appears to be developing a pseudorandom number generator that combines RANDAO with VDFs. While both are very exciting projects that will be hugely beneficial to the blockchain community, this remains a very young area of research. Take any claim of security with a grain of salt.

How to Spot Good Fuzzing Research

Of the nearly 200 papers on software fuzzing that have been published in the last three years, most of them—even some from high-impact conferences—are academic clamor. Fuzzing research suffers from inconsistent and subjective benchmarks, which keeps this potent field in a state of arrested development. We’d like to help explain why this has happened and offer some guidance for how to consume fuzzing publications.

Researchers play a high-stakes game in their pursuit of building the next generation of fuzzing tools. A major breakthrough can render obsolete what was once the state of the art. Nobody is eager to use the world’s second-greatest fuzzer. As a result, researchers must somehow demonstrate how their work surpasses the state of the art in finding bugs.

The problem is trying to objectively test the efficacy of fuzzers. There isn’t a set of universally accepted benchmarks that is statistically rigorous, reliable, and reproducible. Inconsistent fuzzing measurements persist throughout the literature and prevent meaningful meta-analysis. That was the motivation behind the paper “Evaluating Fuzz Testing,” to be presented by Andrew Ruef at the 2018 SIGSAC Conference on Computer and Communications Security in Toronto.

“Evaluating Fuzz Testing” offers a comprehensive set of best practices for constructing a dependable frame of reference for comparing fuzzing tools. Whether you’re submitting your fuzzing research for publication, peer-reviewing others’ submissions, or trying to decide which tool to use in practice, the recommendations from Ruef and his colleagues establish an objective lens for evaluating fuzzers. In case you don’t have time to read the whole paper, we’re summarizing the criteria we recommend you use when evaluating the performance claims in fuzzing research.

Quick Checklist for Benchmarking Fuzzers

  1. Compare new research against popular baseline tools like american fuzzy lop (AFL), Basic Fuzzing Framework (BFF), libfuzzer, Radamsa, and Zzuf. In lieu of a common benchmark, reviewing research about these well-accepted tools will prepare you to ascertain the quality of other fuzzing research. The authors note that “there is a real need for a solid, independently defined benchmark suite, e.g., a DaCapo or SPEC10 for fuzz testing.” We agree.
  2. Outputs should be easy to read and compare. Ultimately, this is about finding the fuzzer that delivers the best results. “Best” is subjective (at least until that common benchmark comes along), but evaluators’ work will be easier if they can interpret fuzzers’ results easily. As Ruef and his colleagues put it: “Clear knowledge of ground truth avoids overcounting inputs that correspond to the same bug, and allows for assessing a tool’s false positives and false negatives.”
  3. Account for differences in heuristics. Heuristics influence how fuzzers start and pursue their searches through code paths. If two fuzzers’ heuristics lead them to different targets, then the fuzzers will produce different results. Evaluators have to account for that influence in order to compare one fuzzer’s results against another’s.
  4. Target representative datasets with distinguishable bugs, like the Cyber Grand Challenge binaries, LAVA-M, and Google’s fuzzer test suite, as well as native programs like nm, objdump, cxxfilt, gif2png, and FFmpeg. Again, for lack of a common benchmark suite, fuzzer evaluators should look for research that used one of the above datasets (or, better yet, one native and one synthetic). Admittedly, relying on fixed datasets can encourage researchers to ‘fuzz to the test,’ which doesn’t do anyone any good; nevertheless, these datasets provide some basis for comparison. Related: as we put more effort into fuzzers, we should invest in refreshing the datasets for their evaluation, too.
  5. Fuzzers are configured to begin in similar and comparable initial states. If two fuzzers’ configuration parameters reflect different priorities, then the fuzzers will yield different results. It isn’t realistic to expect that all researchers will use the same configuration parameters, but it’s quite reasonable to expect that those parameters are specified in their research.
  6. Timeout values are at least 24 hours. Of the 32 papers the authors reviewed, 11 capped timeouts at “less than 5 or 6 hours.” The results of their own tests of AFL and AFLFast varied by the length of the timeout: “When using a non-empty seed set on nm, AFL outperformed AFLFast at 6 hours, with statistical significance, but after 24 hours the trend reversed.” If all fuzzing researchers allotted the same time period—24 hours—for their fuzz runs, then evaluators would have one less variable to account for.
  7. Consistent definitions of distinct crashes throughout the experiment. Since there’s some disagreement in the profession about how to categorize unique crashes and bugs (by the input or by the bug triggered), evaluators need to seek the researchers’ definition in order to make a comparison. That said, beware the authors’ conclusion: “experiments we carried out showed that the heuristics [for de-duplicating or triaging crashes] can dramatically over-count the number of bugs, and indeed may suppress bugs by wrongly grouping crashing inputs.”
  8. Consistent input seed files. The authors found that fuzzers’ “performance can vary substantially depending on what seeds are used. In particular, two different non-empty inputs need not produce similar performance, and the empty seed can work better than one might expect.” Somewhat surprisingly, many of the 32 papers evaluated did not carefully consider the impact of seed choices on algorithmic improvements.
  9. At least 30 runs per configuration, with variance measured. With that many runs, anomalies can be ignored. Don’t compare the results of single runs (“as nearly ⅔ of the examined papers seem to,” the authors report!). Instead, look for research that not only performed multiple runs, but also used statistical tests to account for variance in performance across those runs (see the sketch after this list).
  10. Prefer bugs discovered over code coverage metrics. We at Trail of Bits believe that our work should have a practical impact. Though code coverage is an important criterion in choosing a fuzzer, this is about finding and fixing bugs. Evaluators of fuzz research should measure performance in terms of known bugs, first and foremost.
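
To make the statistical items concrete, here’s a sketch of how an evaluator might compare two fuzzers across 30 runs each. The per-run bug counts are invented purely for illustration, and the choice of a Mann-Whitney U test is ours, not something the paper or this post prescribes:

```python
from statistics import mean, stdev
from scipy.stats import mannwhitneyu  # a common test for non-normal, per-run data

# Hypothetical unique-bug counts from 30 runs of each fuzzer on the same target.
fuzzer_a = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13,
            12, 17, 11, 14, 15, 13, 12, 16, 14, 13,
            15, 12, 14, 13, 16, 11, 15, 14, 12, 13]
fuzzer_b = [13, 16, 12, 15, 14, 17, 13, 16, 15, 14,
            13, 18, 12, 15, 16, 14, 13, 17, 15, 14,
            16, 13, 15, 14, 17, 12, 16, 15, 13, 14]

print(f"A: mean={mean(fuzzer_a):.1f}, stdev={stdev(fuzzer_a):.1f}")
print(f"B: mean={mean(fuzzer_b):.1f}, stdev={stdev(fuzzer_b):.1f}")

# Report a significance test alongside the variance instead of eyeballing single runs.
_, p_value = mannwhitneyu(fuzzer_b, fuzzer_a, alternative="greater")
print(f"Mann-Whitney U p-value (B > A): {p_value:.3f}")
```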

Despite how obvious or simple these recommendations may seem, the authors reviewed 32 high-quality publications on fuzzing and did not find a single paper that aligned with all 10. Then they demonstrated how conclusive the results from rigorous and objective experiments can be by using AFLFast and AFL as a case study. They determined that, “Ultimately, while AFLFast found many more ‘unique’ crashing inputs than AFL, it only had a slightly higher likelihood of finding more unique bugs in a given run.”

The authors’ results and conclusions showed decisively that in order to advance the science of software fuzzing, researchers must strive for disciplined statistical analysis and better empirical measurements. We believe this paper will begin a new chapter in fuzzing research by providing computer scientists with an excellent set of standards for designing, evaluating, and reporting software fuzzing experiments in the future.

In the meantime, if you’re evaluating a fuzzer for your work, approach with caution and this checklist.

Ethereum security guidance for all

We came away from ETH Berlin with two overarching impressions: first, many developers were hungry for any guidance on security; and second, too few security firms were accessible.

When we began taking on blockchain security engagements in 2016, there were no tools engineered for the work. Useful documentation was hard to find and hidden among many bad recommendations.

We’re working to change that by: offering standing office hours, sharing our aggregation of the best Ethereum security references on the internet, and maintaining a list of contact information for bug reporting.

We want to support the community to produce more secure smart contracts and decentralized apps.

Ethereum security office hours

Once every other week, our engineers will host a one-hour video chat where we’ll take all comers and answer Ethereum security questions at no cost. We’ll help guide participants through using our suite of Ethereum security tools and reference the essential knowledge and resources that people need to know.

Office hours will be noon Eastern Standard Time (GMT-5) on the first and third Tuesdays of the month. Subscribe to our Ethereum Security Events calendar for notifications about new sessions. We’ll also post a sign-up form on our Twitter and the Empire Hacking Slack one day ahead of time to solicit topics to cover.

Crowdsourced blockchain security contacts

It’s a little ironic, but most security researchers have struggled to report vulnerabilities. Sometimes, the reporting process itself puts an unnecessary burden on the reporter. The interface may not support the reporter’s language. Or, as Project Zero’s Natalie Silvanovich recently shared, it may come down to legalities:

“When software vendors start [bug bounties], they often remove existing mechanisms for reporting vulnerabilities…” and “…without providing an alternative for vulnerability reporters who don’t agree or don’t want to participate in [a rewards] program for whatever reason.”

We routinely identify previously unknown flaws in smart contracts, decentralized applications, and blockchain software clients. In many cases, it has been difficult or impossible to track down contact information for anyone responsible. When that happens, we have to leave the vulnerability unreported and simply hope that no one malicious discovers it.

This is not ideal, so we decided to do something about it. We are crowdsourcing a directory of security contacts for blockchain companies. This directory, Blockchain Security Contacts, identifies the best way to contact an organization’s security team so that you can report vulnerabilities directly to those who can resolve them.

If you work on a security team at a blockchain company, please add yourself to the directory!

Security contact guidance

The directory is just the first step. Even with the best of intentions, many companies rush into bug bounties without fully thinking through the legal and operational ramifications. They need guidance for engaging with security researchers most effectively.

At a minimum, we recommend:

Ethereum security references

Over the course of our work in blockchain security, we’ve curated the best community-maintained and open-source Ethereum security references on the internet. These are the references we rely on the most. They’re the most common resources that every team developing a decentralized application needs to know about, including:

  • Resources for secure development, CTFs & wargames, and even specific podcast episodes
  • Security tools for visualization, linting, bug finding, verification, and reversing
  • Pointers to related communities

This is a community resource we want to grow as the community does. We’re committed to keeping it up to date.

With that all said, please contact us if you’d like help securing your blockchain software.

Effortless security feature detection with Winchecksec

We’re proud to announce the release of Winchecksec, a new open-source tool that detects security features in Windows binaries. Developed to satisfy our analysis and research needs, Winchecksec aims to surpass current open-source security feature detection tools in depth, accuracy, and performance without sacrificing simplicity.

Feature detection, made simple

Winchecksec takes a Windows PE binary as input, and outputs a report of the security features baked into it at build time. Common features include:

  • Address-space layout randomization (ASLR) and 64-bit-aware high-entropy ASLR (HEASLR)
  • Authenticity/integrity protections (Authenticode, Forced Integrity)
  • Data Execution Prevention (DEP), better known as W^X or No eXecute (NX)
  • Manifest isolation
  • Structured Exception Handling (SEH) and SafeSEH
  • Control Flow Guard (CFG) and Return Flow Guard (RFG)
  • Guard Stack (GS), better known as stack cookies or canaries

Winchecksec’s two output modes are controlled by one flag (-j): the default plain-text tabular mode for humans, and a JSON mode for machine consumption. In action:

Winchecksec’s tabular output, showing Dynamic Base and ASLR reported separately

Did you notice that Winchecksec distinguishes between “Dynamic Base” and ASLR above? This is because setting /DYNAMICBASE at build-time does not guarantee address-space randomization. Windows cannot perform ASLR without a relocation table, so binaries that explicitly request ASLR but lack relocation entries (indicated by IMAGE_FILE_RELOCS_STRIPPED in the image header’s flags) are silently loaded without randomized address spaces. This edge case was directly responsible for turning an otherwise moderate use-after-free in VLC 2.2.8 into a gaping hole (CVE-2017-17670). The underlying toolchain error in mingw-w64 remains unfixed.
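
That specific edge case is easy to reproduce with the Python pefile library; this is a narrow sketch of just the two relevant flag bits, not a substitute for Winchecksec’s full analysis:

```python
import pefile

IMAGE_FILE_RELOCS_STRIPPED = 0x0001             # COFF header characteristic
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # /DYNAMICBASE was requested

def effective_aslr(path: str) -> bool:
    """Dynamic Base only yields ASLR if relocation info survived the build."""
    pe = pefile.PE(path, fast_load=True)
    wants_aslr = bool(pe.OPTIONAL_HEADER.DllCharacteristics
                      & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
    relocs_stripped = bool(pe.FILE_HEADER.Characteristics
                           & IMAGE_FILE_RELOCS_STRIPPED)
    return wants_aslr and not relocs_stripped
```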

Similarly, applications that run under the CLR are guaranteed to use ASLR and DEP, regardless of the state of the Dynamic Base/NX compatibility flags or the presence of a relocation table. As such, Winchecksec will report ASLR and DEP as enabled on any binary that indicates that it runs under the CLR. The CLR also provides safe exception handling but not via SafeSEH, so SafeSEH is not indicated unless enabled.

How do other tools compare?

Not well:

  • Microsoft released BinScope in 2014, only to let it wither on the vine. BinScope performs several security feature checks and provides XML and HTML output, but relies on .pdb files for its analysis of binaries. As such, it’s impractical for any use case outside of the Microsoft Secure Development Lifecycle. BinSkim appears to be the spiritual successor to BinScope and is actively maintained, but uses an obtuse, overengineered format for machine consumption. Like BinScope, it also appears to depend on the availability of debugging information.
  • The Visual Studio toolchain provides dumpbin.exe, which can be used to dump some of the security attributes present in the given binary. But dumpbin.exe doesn’t provide a machine-consumable output, so developers are forced to write ad-hoc parsers. To make matters worse, dumpbin.exe provides a dump, not an analysis, of the given file. It won’t, for example, explain that a program with stripped relocation entries and Dynamic Base enabled isn’t ASLR-compatible. It’s up to the user to put two and two together.
  • NetSPI maintains PESecurity, a PowerShell script for testing many common PE security features. While it provides a CSV output option for programmatic consumption, its performance lags well behind dumpbin.exe (and the other compiled tools mentioned here), to say nothing of Winchecksec.
  • There are a few small feature detectors floating around the world of plugins and gists, like this one, this one, and this one (for x64dbg!). These are generally incomplete (in terms of checks), difficult to interact with programmatically, sporadically maintained, and/or perform ad-hoc PE parsing. Winchecksec aims for completeness in the domain of static checks, is maintained, and uses official Windows APIs for PE parsing.

Try it!

Winchecksec was developed as part of Sienna Locomotive, our integrated fuzzing and triaging system. As one of several triaging components, Winchecksec informs our exploitability scoring system (reducing the exploitability of a buffer overflow, for example, if both DEP and ASLR are enabled) and allows us to give users immediate advice on improving the baseline security of their applications. We expect that others will develop additional use cases, such as:

  • CI/CD integration to make a base set of security features mandatory for all builds (a sketch follows this list).
  • Auditing entire production servers for deployed applications that lack key security features.
  • Evaluating the efficacy of security features in applications (e.g., whether stack cookies are effective in a C++ application with a large number of buffers in objects that contain vtables).
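
As a sketch of the CI/CD idea, a build step could shell out to Winchecksec’s JSON mode and fail when required mitigations are missing. The command name and the JSON field names (“aslr”, “dep”) below are placeholders we chose for illustration; check Winchecksec’s actual output schema before relying on them:

```python
import json
import subprocess
import sys

REQUIRED = ("aslr", "dep")  # placeholder keys -- confirm against Winchecksec's real schema

def build_artifact_ok(binary: str) -> bool:
    # -j switches Winchecksec into its machine-consumable JSON mode.
    result = subprocess.run(["winchecksec", "-j", binary],
                            capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    missing = [feature for feature in REQUIRED if not report.get(feature)]
    if missing:
        print(f"{binary}: missing {', '.join(missing)}")
    return not missing

if __name__ == "__main__":
    sys.exit(0 if all(build_artifact_ok(b) for b in sys.argv[1:]) else 1)
```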

Get Winchecksec on GitHub now. If you’re interested in helping us develop it, try out this crop of first issues.