10 Rules for the Secure Use of Cryptocurrency Hardware Wallets

Earlier this year, the Web3 Foundation (W3F) commissioned Trail of Bits for a security review and assessment of the risks in storing cryptocurrency. Everyone who owns cryptocurrency — from large institutions to individual enthusiasts — shares the W3F’s concerns. In service to the broader community, the W3F encouraged us to publish our recommendations for the secure use of hardware wallets: the small tamper-resistant peripherals that enable users to securely create and protect cryptocurrency accounts.

Whether your cryptocurrency holdings amount to a few Satoshis or a small fortune, you will find this post useful.


Today’s hardware wallets require diligent security procedures (Image credit: Gareth Halfacree)

The Advent of Hardware Wallets

In the early days of cryptocurrency, users simply generated their payment addresses (in cryptographic terms, their public/private key pairs) using the client software on a standard PC. Unfortunately, once cryptocurrency became a hot commodity, securely storing an account private key on a general-purpose computer — using a “software wallet” — became a liability. Software wallet files could be lost or deleted, and they were targeted for theft. Most users were unprepared for the hefty responsibility of securely and reliably storing their private keys, which partly drove the adoption of custodial storage services (such as those offered by cryptocurrency exchanges).

Years of massive, unpunished thefts from these services convinced many users that they could never trust a third party with their cryptocurrency holdings the same way that they might trust a regulated bank holding fiat currency. So, in the past couple of years, hardware wallets have gained popularity as useful tools for protecting cryptocurrency accounts without relying on a custodial service.

A Foolproof Solution?

Hardware wallets are a kind of consumer-grade Hardware Security Module (HSM), with a similar purpose: a device that embodies a tamper-resistant vault, inside of which the user’s cryptographic identity (in this case, a cryptocurrency account) can be created and used without the private key ever leaving the device. Fundamentally, a hardware wallet only needs to take a transaction created on a host computer, sign it to make it valid, and output the signed transaction for the host computer to publish to the blockchain.

In practice, it’s not so simple. Users must properly initialize their wallets. Sometimes the devices have firmware updates. Then there’s the matter of recovery codes (also known as the BIP39 recovery phrase or seed words). Hardware wallets are a huge improvement over storing private keys on a sheet of paper in a fire safe, or in a directory on a laptop, but hardware wallets still carry risks. Users need to take some safety precautions. In the words of Bruce Schneier, “Security is a process, not a product.”

10 Rules for the Secure Use of Cryptocurrency Hardware Wallets

1: Purchase the device from a trusted source, preferably direct from the vendor, new and unopened

Avoid any unnecessary supply-chain risk. Buying a device directly from the manufacturer (e.g., Ledger or Trezor) rather than from a reseller minimizes the risk of acquiring a counterfeit or a device tampered with by a middleman. At least one malicious eBay reseller has reportedly devised clever schemes to defraud buyers even while selling them genuine and unopened products (see rule #3).

2: Never use a pre-initialized hardware wallet

If a user accepts a pre-initialized hardware wallet, they are putting their cryptocurrency into a wallet that is potentially just a copy of a wallet controlled by an attacker. Ensure that you (and only you) properly initialize your hardware wallet before use. Follow the initialization instructions from your hardware wallet vendor’s website (for example, the instructions for a Ledger brand wallet; instructions for a Trezor wallet).


The kind of prompt you want to see, new out of the box (Ledger Nano S pictured)

3: Never use a pre-selected set of recovery words, only ones generated on-device

Never accept pre-selected recovery words. Always initialize a hardware wallet from a clean slate with on-device generation of new random recovery words. Anyone who knows the recovery words has complete control over the wallet, the ability to watch it for activity, and the ability to steal all of its coins. Effectively, the words are the secret key.

In December 2017, a hardware wallet reseller reportedly packaged a counterfeit scratch-off card in the box with each device delivered to their customers. The scratch-off card revealed a list of recovery words, and the card instructed the buyer to set up their device using a recovery step, rather than initializing it to securely generate a new set of words. This was a clever scam to trick users into using a pre-configured wallet (see rule #2).


Beware reseller frauds like this one: official-looking pre-selected recovery words (Image credit: Ledger, Reddit user ‘moodyrocket’)

4: Prefer a device that is able to provide an attestation of its integrity

While resetting or initializing a device ought to be sufficient, there is hypothetically still a risk of buying a counterfeit or tampered hardware wallet. Before you buy one, confirm that you’ll be able to verify the provenance, authenticity, or integrity of the new hardware wallet. Look for software provided by the device maker that can interrogate a Secure Element on the device and provide an attestation of the device’s integrity. Follow the verification instructions from the vendor of your wallet (for example, Ledger’s instructions to use secure element attestation to check device integrity). There are, however, still gaps in the attestation capabilities of today’s wallets. Users ought to continue to demand better and more complete attestation.

5: Test your recovery words

Data protection 101 is “always test your backup”: in this case, your backup is the set of recovery words. On a spare hardware wallet device, use the recorded recovery words to initialize a test wallet. Eliminate any doubt that the recorded words can successfully recover the original wallet’s state. After testing the correctness of the recovery words, reset/wipe this test device. Do not use a general-purpose computer or software wallet to verify the recovery words. Follow the instructions from your vendor for performing a recovery dry-run to test your seed words (the steps for Trezor wallet users and the steps for Ledger users).

6: Protect your recovery words separately from, and as carefully as, the hardware wallet. Do not take a picture of them. Do not type them into anything.

Write the recovery words by hand — do not type them into a computer or photograph them to be printed — and then laminate the paper (preferably archival-quality acid-free paper for long-term storage). Store it in an opaque tamper-evident sealed envelope (example) for assurance that it has not been viewed without authorization. Remember that the device’s PIN code is no protection against an attacker with physical access if the recovery words are stored alongside the device. Do not store them together.


Write the words down, but don’t take a photo like this one!

7: Verify the software you use to communicate with the hardware wallet; understand that a backdoored desktop UI is part of your threat model

Hardware wallets rely on desktop software for initiating transactions, updating the hardware wallet’s firmware, and other sensitive operations. Users of cryptocurrency software should demand reproducible builds and code-signed executables to prevent tampering by an attacker post-installation. The advantage of code-signing, relative to manual verification with a tool like GPG, is that code signatures are automatically verified by the operating system on every launch of the application, whereas manual verification is typically only performed once, if at all. Even verifiable software, though, can still be subverted at runtime. Recognize that general-purpose computing devices are exposed to potentially risky data from untrusted sources on a routine basis.

8: Consider using a high assurance workstation, even with a hardware wallet

By dedicating a workstation to the single task of operating the hardware wallet, it can be locked down to a greater degree because it is not used for day-to-day tasks, nor exposed to as many potential sources of compromise. Consider operating your hardware wallet only from an immutable host PC configuration. This workstation would be offline only, and dedicated to the task of transaction creation and signing using the hardware wallet. First, lock down the system’s firmware configuration (e.g., restrict boot devices, disable network boot, etc.) to ensure the integrity of the boot process. Then, the boot media can be protected either by Secure Boot using a TPM-backed encrypted SSD / hard drive, or — for true immutability — by burning and verifying a trusted OS image onto a write-once DVD-R media and storing the DVD-R in a tamper-evident bag alongside the hardware wallet.

9: Consider an M-of-N multi-signature wallet with independently stored devices

“Multi-signature” refers to requiring more than one key to authorize a transaction. This is a fantastic protection against a single point of failure. Consider creating a multi-signature wallet with keys generated and kept in hardware wallets stored in physically separate locations. Note that if the devices will be in the custody of different individuals, carefully consider how to coordinate and make decisions to spend from the wallet. For added paranoia, the hardware wallets could be of different device brands. Then, even in the unlikely case that an employee at one of the hardware wallet manufacturers were to have successfully backdoored their devices, they would still only control one of the keys in your multi-signature wallet.

10: Consider manually verifying the generation of a new multi-signature address

Related to rules #7 and #8, note that multi-signature wallets are created by “joining” several private key-holders into a single address defined by a script. In the case of Bitcoin, this is called a P2SH address (“pay-to-script-hash”). This part of the address creation is done in the desktop software UI using public keys, and not on the hardware wallet. If a compromised workstation provides the script basis during the generation of a new P2SH address, then the attacker may be able to join or control the multi-sig wallet. For example, attacker-controlled or -subverted desktop software could secretly turn a 2-of-3 wallet into a 2-of-5 wallet with two additional public keys inserted by the attacker. Remember, a hardware wallet does not entirely preclude the need to secure the host that interfaces with it.
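As an illustration, here is a minimal Python sketch of one way such a check could be done independently, assuming a standard 2-of-3 multisig redeem script on Bitcoin mainnet. The helper functions and input keys are our own and hypothetical, not part of any wallet vendor’s tooling; the resulting address can be compared against the one displayed by the desktop software and, ideally, on each hardware wallet’s screen.

import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version: bytes, payload: bytes) -> str:
    # Base58Check = version byte + payload + first 4 bytes of double-SHA256 checksum
    data = version + payload
    data += hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num = int.from_bytes(data, "big")
    encoded = ""
    while num > 0:
        num, rem = divmod(num, 58)
        encoded = B58_ALPHABET[rem] + encoded
    pad = len(data) - len(data.lstrip(b"\x00"))   # leading zero bytes become '1' characters
    return "1" * pad + encoded

def p2sh_multisig_address(pubkeys, m=2):
    # Standard redeem script: OP_m <pk1> ... <pkn> OP_n OP_CHECKMULTISIG
    script = bytes([0x50 + m])
    for pk in pubkeys:
        script += bytes([len(pk)]) + pk
    script += bytes([0x50 + len(pubkeys), 0xAE])
    # P2SH address = Base58Check(0x05 || HASH160(redeem script)) on Bitcoin mainnet;
    # ripemd160 requires an OpenSSL build that provides it
    h160 = hashlib.new("ripemd160", hashlib.sha256(script).digest()).digest()
    return base58check(b"\x05", h160)

# keys = [bytes.fromhex(pk) for pk in three_compressed_pubkeys]   # hypothetical input
# print(p2sh_multisig_address(keys))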

More Secure, More Usable Solutions Still Needed

This discussion of risks and recommendations for cryptocurrency hardware wallets illustrates the challenges the broader security industry faces in designing other kinds of fixed-function devices for private key protection, such as U2F tokens and Secure Enclaves.

For well over a decade, security researchers have promoted the goal of “usable security.” Usable security is simply the idea that secure computing should be easy to do right, and hard to do wrong. Compare the usability of a modern secure messaging client, for example, with the cumbersome and error-prone key management required to use GPG. Getting usability right is the difference between protecting a few thousand technologists and protecting tens of millions of regular users.

Avoid complacency. Demand safer, better designed devices that aren’t prone to traps and mistakes. The best hardware wallet should be a little bit boring! We hope that in the future, safe and usable hardware wallets will be a commodity device that we can take for granted.

Until then, we will continue doing our part to build security awareness independently and in collaboration with organizations like the W3F. If you work for a company that creates hardware wallets, we welcome you to contact us for help protecting your users.

Return of the Blockchain Security Empire Hacking

Remember last December’s Empire Hacking? The one where we dedicated the event to sharing the best information about blockchain and smart contract security? Let’s do that again, and let’s make it a tradition: a half-day mini conference focused exclusively on a single topic every December. On December 12, please join us at Buzzfeed’s NYC offices to hear 10 excellent speakers share their knowledge of blockchain security in an event that will assuredly expand your abilities.

Dinner will be served. We will congregate at The Headless Horseman afterwards to continue the conversation and for some holiday cheer.

Due to the nature of this event, we’ll be charging attendees $2.00 for entry. Only registered guests will be permitted to attend.

Reserve a spot while you can.

Talks will include:

Anatomy of an Unsafe Smart Contract Programming Language

This talk dissects Solidity: the most popular smart contract programming language. Various examples of its unsafe behavior are discussed, demonstrating that even an experienced, competent programmer can easily shoot themselves in the foot. These serve as a cautionary tale of how not to create a programming language and toolchain, particularly one that will be trusted with hundreds of millions of dollars in cryptocurrency. The talk concludes with a retrospective of how some of these issues could have been avoided, and what we can do to make smart contract development more secure moving forward.

Evan Sultanik is a security engineer from Trail of Bits.

Asset Insecurities: Evaluating Digital Asset Security Fundamentals

Spend a couple of minutes learning about the digital asset security problems faced at Coinbase scale. This will be a jaunt through insecure supply chains, the difference between a protocol white paper and the actual implementation, and a couple of other things that’ll bite you if you’re not paying attention.

Shamiq herds cryptokitties, security engineers and developers at Coinbase as Head of Application Security. In his spare time, he loves to eat cheese and chocolate.

Designing the Gemini dollar: a regulated, upgradeable, transparent stablecoin

A regulated stablecoin requires important design decisions. How can you make your contracts upgradeable when many rely on them? How can you manage keys that protect the underlying assets? And how can you do this all completely transparently? In this talk, we explain the design decisions that went into the Gemini dollar, and compare and contrast with other possible implementations.

Brandon Arvanaghi is a security engineer at Gemini Trust.

Property testing with Echidna and Manticore for secure smart contracts

Property-based testing is an incredibly simple and powerful tool for bug discovery, but despite its efficacy, it’s almost unheard of in the smart contract development community. This talk will introduce the concept of property-based testing, discuss strategies for picking good properties and testing them thoroughly, then go into how to apply these ideas to smart contracts specifically. We’ll discuss the use of both Manticore and Echidna for testing, and look at real bugs these tools can find in production code.

JP Smith is a security engineer from Trail of Bits.

Contract upgrade risks and remediations

A popular trend in smart contract design is to promote the development of upgradable contracts. Existing techniques to upgrade contracts have flaws, increase the complexity of the contract significantly, and ultimately introduce bugs. We will detail our analysis of existing smart contract upgrade strategies, describe the weaknesses we have observed in practice, and provide recommendations for contracts that require upgrades.

Josselin Feist is a security engineer at Trail of Bits.

Failures in On-Chain Privacy

Many, including Satoshi, believed cryptocurrencies provided privacy for payments. In reality, cryptocurrency is Twitter for your bank account. Worse, the current set of decoy transaction–based approaches commonly believed to provide privacy—including coinjoin and cryptonote/Monero—provide fundamentally flawed privacy protections. Where did we go wrong? This talk covers how to critically evaluate the privacy provided by any proposed protocol for payment privacy. Through a series of thought experiments, it outlines three plausible attacks on existing decoy-based schemes. These issues show the unintuitive nature of privacy protections, as well as the need to both evaluate protocols in the context of real world threats, and use approaches with formal and peer reviewed privacy guarantees such as Zcash.

Ian Miers is a post-doctoral associate at Cornell Tech.

Secure Micropayment Protocols

Sending cryptocurrency micropayment transactions that must be confirmed on a blockchain is impractical today due to transaction fees that can exceed the value being sent. Instead, we can use micropayment protocols that only rely on the blockchain for settlement and disputes to minimize on-chain fees. In this talk, we will describe and compare different approaches to constructing secure micropayment protocols on top of Ethereum including probabilistic micropayments and payment channels. Furthermore, we will highlight the difficulties and considerations in implementing these types of protocols given the increased reliance on correct and timely client behavior to prevent the loss of funds.

Yondon Fu is a software engineer and researcher at Livepeer.

How To Buidl an Enterprise-Grade Mainnet Ethereum Client

The byzantine environment of the Ethereum mainnet is fraught with challenges for aspiring hackers seeking to publish a compatible client. This talk highlights the trials and tribulations of implementing a client capable of handily dispatching the adversarial events and actors of the sprawling P2P ecosystem that comprises the Ethereum blockchain’s world-wide compute network. The uniquely modular nature of the Pantheon codebase and its suitability for enterprise application will be treated in detail. The session will conclude with a brief sketch of the road ahead for Pantheon with an eye towards the Ethereum Enterprise Alliance and the forthcoming updates that comprise the broad strokes of the Ethereum 2.0 specification.

S. Matthew English is a PegaSys protocol engineer and Pantheon core dev.

Simple is hard: Making your awesome security thing usable

If the security assumptions of blockchain systems fail even a little, they provide very little value. They also have a high barrier to entry and are hard to use. But wait, people already don’t use security tools — how isn’t this the worst of all possible worlds? We’ll talk about some precedents from infosec history and how we might be able to avoid “Your elections are fine as long as you use The New PGP on The Blockchain” in favor of helping people build cool things that really do solve longstanding problems in novel ways.

Patrick Nielsen and Amber Baldet are founders of Clovyr.

Like it or not, blockchain voting is here to stay

I’m going to talk about how blockchain voting apps received serious pushback from academics who study voting security, yet West Virginia used the Voatz app in some counties during its primaries, deployed it in almost half the state for the midterm election, and is pleased with how it went. Voatz is already in talks with other states and is hoping for up to 20 states to use it by 2020. And several other countries are testing different blockchain voting apps.

Kevin Collier is the cybersecurity correspondent at BuzzFeed News, where he covers cyberwar, hackers, election security, disinformation efforts, tech companies, and hacking laws. Prior to BuzzFeed, Kevin covered cybersecurity at Vocativ and the Daily Dot, and has written for Politico, Gizmodo, The Daily Beast, and NY Mag. A native of West Virginia, he lives in Brooklyn.

We look forward to seeing you there!

Workshop: Smart-Contract Security Analysis (December 11)

On December 11th, the day prior to Empire Hacking, we’ll be hosting a security training for Ethereum smart contract developers.

In this day-long training, JP Smith will share how we conduct our security reviews: not just our tools or tricks, but the whole approach. In addition to that knowledge, we’ll share our school of thought regarding assessments. Far too often, we encounter the belief that audits deliver a list of bugs and, consequently, the ability to say “Our code has been audited!” (and therefore “Our code is safe!”). That’s just part of the picture. Audits should also deliver an assessment of total project risk, guidance on architecture and the development lifecycle, and someone to talk to. That’s the framework attendees will come away with.

Register for the day-long training.

Trail of Bits @ Devcon IV Recap

We wanted to make up for missing the first three Devcons, so we participated in this year’s event through a number of talks, a panel, and two trainings. For those of you who couldn’t join us, we’ve summarized our contributions below. We hope to see you there next year.

Using Manticore and Symbolic Execution to Find Smart Contract Bugs

In this workshop, Josselin Feist showed how to use Manticore, our open-source symbolic execution engine. Manticore enables developers not only to discover bugs in their code immediately, but also to prove that their code works correctly. Josselin led 120 attendees through a variety of exercises with Manticore. Everyone left with hands-on formal methods that will help them ensure that their smart contracts follow their specifications.

Get the workshop’s slides and exercises

Blockchain Autopsies

In this lightning talk, Jay Little recovered and analyzed 30,000 self-destructed contracts and identified possible attacks hidden among them. 2 million contracts have been created on Ethereum’s mainnet, yet few holding any value have been destroyed. These high-signal transactions are difficult to find; many are not available to a fully synchronized Ethereum node. In order to achieve this feat, Jay created new tools that re-process blockchain ledger data, recreate contracts with state, and analyze suspect transactions using traces and heuristics.

Filtering deployment mistakes, DoS attacks, and spam to identify suspect self-destructs

Get Jay’s slides

Current State of Security

In this panel, Kevin Seagraves facilitated a discussion about Ethereum’s current security posture. What was the biggest change in Ethereum security in the last year? How is securing smart contracts different from traditional systems? How should we think about the utility of bug bounties? Hear what this panel of experts had to say.

Security Training

In this day-long training, JP shared how we conduct our security reviews: not just our tools or tricks, but the whole approach. In addition to that knowledge, we tried to impart our school of thought regarding assessments. Far too often, we encounter the belief that audits deliver a list of bugs and, consequently, the ability to say “Our code has been audited!” (and therefore “Our code is safe!”). That’s just part of the picture. Audits should also deliver an assessment of total project risk, guidance on architecture and the development lifecycle, and someone to talk to.

We’re running the training again on December 11th in New York. Reserve yourself a seat.

Devcon Surprise

Instead of going to Devcon, Evan Sultanik stayed home and wrote an Ethereum client fuzzer. Etheno automatically seeks divergences among the world’s Ethereum clients, like the one that surfaced on Ropsten in October. Etheno automatically identified that same bug in two minutes.

We’re glad that we attended Devcon4, and look forward to participating more in future events.

We crypto now

Building and using cryptographic libraries is notoriously difficult. Even when each component of the system has been implemented correctly (quite difficult to do), improperly combining these pieces can lead to disastrous results.

Cryptography, when rolled right, forms the bedrock of any secure application. By combining cutting-edge mathematics and disciplined software engineering, modern crypto-systems guarantee data and communication privacy. Navigating these subtleties requires experts in both cryptographic software engineering and the underlying mathematics. That’s where we can help.

How we can help

Trail of Bits has released tooling and services that demonstrate our talents in diverse areas including binary lifting, symbolic execution, static analysis, and architectural side channels. As our team has grown, we’ve expanded our expertise to include cryptography. (See our recent writings about elliptic curve implementation errors in Bluetooth, post-quantum algorithms, RSA fault analysis, and verifiable delay functions for a taste.) We’d like to share that expertise more effectively, so today we’re announcing a new cryptographic services practice to augment our existing offerings.

Our ambition is to improve the cryptography ecosystem for everyone. Misuse resistant constructions (both cryptographically and via API design), rigorously tested low-level implementations, and safer languages are all prerequisites for a secure future. We will be deeply involved in each of these efforts. We’ll be publishing a variety of tools, safe cryptographic constructions we are calling recipes, and a steady supply of blog posts to contribute to the field.

Who’s behind our cryptographic services practice

  • Paul Kehrer, a principal engineer at Trail of Bits, leads the cryptographic services practice and specializes in cryptographic engineering. He has spent his career writing cryptographic software, including a publicly trusted certification authority’s technical infrastructure, key management services for a cloud provider, and contributing to open source cryptographic libraries. Paul is one of the founding members of the Python Cryptographic Authority.
  • JP Smith, a security engineer at Trail of Bits, focuses on program analysis and cryptanalysis. He is the winner of the 2017 underhanded crypto contest and works on a mix of research, engineering, and assurance on technologies ranging from compilers to blockchains. He received a degree in mathematics at UIUC where he also led the security club/CTF team and researched symbolic execution and binary translation.
  • Ben Perez, a security engineer at Trail of Bits, specializes in blockchain security and cryptography. He received a masters degree in computer science from UC San Diego where he focused on post-quantum cryptography and machine learning. Prior to joining the team at Trail of Bits, he worked on binary analysis tools at Galois, the Quorum blockchain at JP Morgan, and published research in pure mathematics.

Get in touch

Whether you’re just trying to confirm that you’re using elliptic curves correctly or developing a novel crypto-system from scratch, we want to work with you. We are especially suited to help design and implement novel cryptographic constructions, review proposed schemes for soundness, and build tools to detect implementation errors in your environment.

If your company needs our deep expertise, then get in touch today.

How contract migration works

Smart contracts can be compromised: they can have bugs, the owner’s wallet can be stolen, or they can be trapped due to an incorrect setting. If you develop a smart contract for your business, you must be prepared to react to events such as these. In many cases, the only available solution is to deploy a new instance of the contract and migrate your data to it.

If you plan to develop an upgradable contract, a migration procedure will spare you the dangers of an upgradability mechanism.

Read this blog post for a detailed description of how contract migration works.

You need a contract migration capability

Even a bug-free contract can be hijacked with stolen private keys. The recent Bancor and KICKICO hacks showed that attackers can compromise smart contract wallets. In attacks like these, it may be impossible to fix the deployed smart contract, even if the contract has an upgradability mechanism. A new instance of the contract will need to be deployed and properly initialized to restore functionality to your users.

Therefore, all smart contract developers must integrate a migration procedure during the contract design phase and companies must be prepared to run the migration in case of compromise.

A migration has two steps:

  1. Recovering the data to migrate
  2. Writing the data to the new contract

Let’s walk through the details, costs and operational consequences.

How to perform the migration

Step 1: Data recovery

You need to read the data from a particular block on the blockchain. To recover from an incident (hack or failure), you need to use the block before the incident or filter the attacker’s actions.

If possible, pause your contract. It is more transparent for your users, and prevents attackers from taking advantage of users who are not aware of the migration.

The recovery of data will depend on your data structure.

For public variables of simple types (such as uint or address), it is trivial to retrieve the value through their getters. For private variables, you can either rely on events or you can compute the variable’s offset in storage and then use the getStorageAt function to retrieve its value.
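For instance, a minimal web3.py sketch of both techniques might look like the following. The node URL, contract address, ABI fragment, storage slot, and block number are all placeholder assumptions, and the method names assume a recent web3.py (older releases use camelCase names such as getStorageAt):

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))      # your own synchronized node
CONTRACT = "0x4142434400000000000000000000000000000000"    # hypothetical contract address
BLOCK = 6500000                                             # block just before the incident

# Public variable: call its auto-generated getter at the chosen block
OWNER_ABI = [{"name": "owner", "type": "function", "stateMutability": "view",
              "inputs": [], "outputs": [{"name": "", "type": "address"}]}]
contract = w3.eth.contract(address=CONTRACT, abi=OWNER_ABI)
owner = contract.functions.owner().call(block_identifier=BLOCK)

# Private variable: read its storage slot directly (slot 1 is a placeholder;
# compute the real slot from the contract's storage layout)
raw_value = w3.eth.get_storage_at(CONTRACT, 1, block_identifier=BLOCK)

print(owner, raw_value.hex())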

Arrays are easily recovered, too, since the number of elements is known. You can use the techniques described above.

The situation is a bit more complex for mappings. Keys of a mapping are not stored. You need to recover them to access the values. To simplify off-chain tracking, we recommend emitting events when a value is stored in a mapping.

For ERC20 token contracts, you can find the list of all the holders by tracking the addresses in the Transfer events. This process can be laborious. We have prepared two options to help: in the first, you scan the blockchain and retrieve the holders yourself; in the second, you rely on the publicly available Google BigQuery archive of the Ethereum blockchain.

If you are not familiar with the web3 API to extract information from the blockchain, you can use ethereum-etl, which provides a set of scripts to simplify the data extraction.

If you don’t have a synchronized blockchain, you can use the Google BigQuery API. Figure 1 shows how to collect all the addresses of a given token through BigQuery:

SELECT from_address FROM `bigquery-public-data.ethereum_blockchain.token_transfers` AS token_transfers WHERE token_transfers.token_address = '0x41424344'
UNION DISTINCT
SELECT to_address FROM `bigquery-public-data.ethereum_blockchain.token_transfers` AS token_transfers WHERE token_transfers.token_address = '0x41424344'

Figure 1: Using Google BigQuery to recover the addresses present in all Transfer events of the token at address 0x41424344

BigQuery provides access to the block number, so you can adapt this query to return the transactions up to a particular block.

Once you recover all the holders’ addresses, you can query the balanceOf function off-chain to recover the balance associated with each holder. Filter out accounts with an empty balance.
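A hedged web3.py sketch of this step, where the ABI fragment, token address, and block number are placeholders and the holders list is the one recovered with the query in Figure 1:

from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))
TOKEN = "0x4142434400000000000000000000000000000000"       # hypothetical token address
BLOCK = 6500000                                             # block just before the incident
BALANCE_OF_ABI = [{"name": "balanceOf", "type": "function", "stateMutability": "view",
                   "inputs": [{"name": "who", "type": "address"}],
                   "outputs": [{"name": "", "type": "uint256"}]}]
token = w3.eth.contract(address=TOKEN, abi=BALANCE_OF_ABI)

holders = []    # fill with the addresses recovered from the Transfer events (Figure 1)
balances = {}
for holder in holders:
    balance = token.functions.balanceOf(holder).call(block_identifier=BLOCK)
    if balance > 0:                                         # filter accounts with an empty balance
        balances[holder] = balance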

Now that we know how to retrieve the data to be migrated, let’s write the data to the new contract.

Step 2: Data writing

Once you collect the data, you need to initialize your new contract.

For simple variables, you can set the values through the constructor of the contract.

The situation is slightly more complex and costly if your data cannot be held in a single transaction. Each transaction is included in a block, and each block caps the total amount of gas its transactions can consume (the block “GasLimit”). If the gas cost of a transaction approaches or exceeds this limit, miners won’t include it in a block. As a result, if you have a large amount of data to migrate, you must split the migration into several transactions.

The solution is to add an initialization state to your contract, where only the owner can change the state variables, and users can’t take any action.

For an ERC20 token, the process would take these steps:

  1. Deploy the contract in the initialization state,
  2. Migrate the balances,
  3. Move the contract’s state to production.

The initialization state can be implemented with a Pausable feature and a boolean indicating the initialization state.

To reduce the cost, the migration of the balances can be implemented with a batch transfer function that lets you set multiple accounts in a single transaction:

/**
* @dev Initialize the account of destinations[i] with values[i]. The function must only be called before
* any transfer of tokens (duringInitialization). The caller must check that destinations are unique addresses.
* For a large number of destinations, split the balance initialization across several calls to batchTransfer.
* @param destinations List of addresses to set the values
* @param values List of values to set
*/
function batchTransfer(address[] destinations, uint256[] values) external onlyOwner duringInitialization {
    require(destinations.length == values.length);

    uint256 length = destinations.length;
    uint256 i;

    for (i = 0; i < length; i++) {
        balances[destinations[i]] = values[i];
        emit Transfer(0x0, destinations[i], values[i]);
    }
}

Figure 2: An example of a batchTransfer function
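Off-chain, the owner can then split the recovered balances into chunks and call batchTransfer once per chunk, keeping each transaction comfortably below the block gas limit. A minimal sketch, assuming the web3.py objects from the earlier snippets and a hypothetical chunk size, new token address, and ABI:

CHUNK_SIZE = 100                        # tune so each batch stays well below the block gas limit
OWNER = w3.eth.accounts[0]              # hypothetical owner/deployer account
new_token = w3.eth.contract(address=NEW_TOKEN_ADDRESS, abi=NEW_TOKEN_ABI)   # freshly deployed token

entries = sorted(balances.items())      # balances recovered during step 1
for start in range(0, len(entries), CHUNK_SIZE):
    chunk = entries[start:start + CHUNK_SIZE]
    destinations = [address for address, _ in chunk]
    values = [value for _, value in chunk]
    tx_hash = new_token.functions.batchTransfer(destinations, values).transact({"from": OWNER})
    w3.eth.wait_for_transaction_receipt(tx_hash)    # confirm each batch before sending the next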

Migration concerns

When migrating a contract, two major concerns arise:

  1. How much will the migration cost?
  2. What is the impact on exchanges?

Migration cost

The recovery of data is done off-chain and therefore is free. Ethereum-etl can be used locally. Google’s BigQuery API offers sufficient free credit to cover its usage.

However, each transaction sent to the network and each byte stored by the new contract has a cost.

Using the batchTransfer function of Figure 2, the transfer of 200 accounts costs around 2.4M gas, which is $5.04 with an average gas price (10 Gwei) at the time of this article (use ETH Gas Station to recalculate this figure for today’s prices). Roughly speaking, you need $0.025 to migrate one balance.
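The arithmetic behind those estimates, as a quick sketch (the gas figure and prices are the ones assumed above and will vary over time):

GAS_PER_BATCH = 2_400_000        # observed cost of one 200-account batchTransfer call
ACCOUNTS_PER_BATCH = 200
GAS_PRICE_GWEI = 10              # average gas price assumed above
ETH_PRICE_USD = 210              # ETH price implied by the $5.04 figure above

eth_per_batch = GAS_PER_BATCH * GAS_PRICE_GWEI * 1e-9     # 0.024 ETH
usd_per_batch = eth_per_batch * ETH_PRICE_USD              # ~$5.04
usd_per_account = usd_per_batch / ACCOUNTS_PER_BATCH       # ~$0.025

print(usd_per_batch, usd_per_account)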

If we look at the number of holders for the top five ERC20 tokens ranked by their market cap, we have:

Token Holders Cost of Migration
BNB 300,000 $7,500
VEN 45,000 $1,200
MKR 5,000 $125
OMG 660,000 $16,500
ZRX 60,000 $1,500

If you migrate additional information (such as the allowance), the cost will be higher. Even so, these amounts are low in comparison to the amount of money that these tokens represent, and the potential cost of a failed upgrade.

Exchanges

The deployment of a new contract may have operational consequences. For token-based contracts, it is important to collaborate with exchanges during a migration to be sure that the new contract will be listed and the previous one will be discarded.

Fortunately, previous token migration events (such as Augur, VeChain, and Tron) showed that exchanges are likely to cooperate.

Contract Migration versus Upgradable Contracts

In our previous blog post, we discussed a trend in smart contract design: the addition of an upgradability mechanism to the contract.

We saw several drawbacks to upgradeable contracts:

  • Detailed low-level expertise in EVM and Solidity is required. Delegatecall-based proxies require the developer to master EVM and Solidity internals.
  • Increased complexity and code size. The contract is harder to review and is more likely to contain bugs and security issues.
  • Increased number of keys to handle. The contract will need multiple authorized users (owner, upgrader). The more authorized users, the larger the attack surface.
  • Increased gas cost of each transaction. The contract becomes less competitive than the same version without an upgrade mechanism.
  • They encourage solving problems after deployment. Developers tend to test and review contracts more thoroughly if they know that they can’t be updated easily.
  • They reduce users’ trust in the contract. Users need to trust the contract’s owner, which prevents a truly decentralized system.

A contract should have an upgradable mechanism only if there is a strong argument for it, such as:

  • The contract requires frequent updates. If the contract is meant to be modified on a regular basis, the cost of regular migration may be high enough to justify an upgradability mechanism.
  • The contract requires a fixed address. The migration of a contract necessitates the use of a new address, which may break interactions with third parties (such as with other contracts).

Contract migrations achieve the benefits of an upgrade with few of the downsides. The main advantage of an upgrade mechanism over a migration is its lower cost. However, that saving does not justify all the drawbacks.

Recommendations

Prepare a migration procedure prior to contract deployment.

Use events to facilitate data tracking.

If you go for an upgradable contract, you must also prepare a migration procedure, as your keys can be compromised, or your contract can suffer from incorrect and irreversible manipulation.

Smart contracts bring a new paradigm of development. Their immutable nature requires users to re-think the way they build applications and demands thorough design and development procedures.

Contact us if you need help in creating, verifying, or applying your migration procedure.

In the meantime, join our free Ethereum security office hours if you have any questions regarding the security of your contract.

The Good, the Bad, and the Weird

Let’s automatically identify weird machines in software.

Combating software exploitation has been a cat-and-mouse game ever since the Morris worm in 1988. Attackers use specific exploitation primitives to achieve unintended code execution. Major software vendors introduce exploit mitigation to break those primitives. Back and forth, back and forth. The mitigations have certainly raised the bar for successful exploitation, but there’s still opportunity to get closer to provable security gains.

I discussed the use of weird machines to either bypass these mitigation barriers or prove a program is unexploitable as part of the DARPA Risers session to an audience of PMs and other Defense officials earlier this year at the D60 conference. Describing this problem concisely was difficult, especially to non-practitioners.

Why weird machines matter

Attackers look for weird machines to defeat modern exploit mitigations. Weird machines are partially Turing-complete snippets of code that inherently exist in “loose contracts” around functions and groups of functions. A loose contract is a piece of code that not only implements the intended program, but does it in such a way that the program undergoes more state changes than it should (i.e. the set of preconditions of a program state is larger than necessary). We want “tight contracts” for better security, where the program only changes state on exactly the intended preconditions, and no “weird” or unintended state changes can arise.

A weird machine is a snippet of code that will process valid or invalid input in a way that the programmer did not intend.

Unfortunately, loose contracts exist in most software and are byproducts of functionality such as linked-list traversals, file parsing, small single-purpose functions, and any other features that emerge from complex systems. Modern attackers leverage these unintended computations and build weird machines that bypass exploit mitigations and security checks. Let’s take a look at an example of a weird machine.

struct ListItem
{
    ListItem* TrySetItem(ListItem* new_item)
    {
        if (!m_next)
            m_next = new_item;

        return m_next;
    }

    struct ListItem *m_next = nullptr;
};

The function ListItem::TrySetItem appears to have these preconditions:

  • You must pass this and new_item in, both as pointers
  • this and new_item must point to allocated, constructed ListItem instances

However, once machine code is generated, the preconditions are actually:

  • The this parameter must be a pointer to allocated memory of at least 8 bytes
  • You must pass a second parameter (new_item), but it can be of any type

This is an example of a loose contract that is inherent to the way we write code. An attacker who has overwritten the m_next pointer can leverage this function to check whether memory at an arbitrary address is set: if it is, the attacker may leak that memory; if not, the attacker may write to it.

A vulnerability is used to alter either program execution or data state; execution after that point is either in a weird state or operating on unintended data.

Tightening the contract

One type of loose contract is the “execution” contract: the set of addresses that are valid indirect branch targets in a program.

Windows NT in 1995 is an example of a loose execution contract, where all memory marked as ‘read’ also implied ‘execute.’ This included all data in the program – not just the code. In 2003, Microsoft tightened the execution contract when it introduced non-executable data (NX, DEP) in Windows XP SP2. Microsoft further improved the contract in 2006 when it introduced Address Space Layout Randomization (ASLR) in Windows Vista, which randomizes the location of executable code. 2016 saw the introduction of Control Flow Guard (CFG) with Windows 8.1/10, which validates that the forward edges of indirect branches point to a set of approved functions.

In the chart below, it’s clear that few valid indirect destinations remain. This tight “execution” contract makes exploitation much more difficult and dramatically increases the value of weird machines to attackers. If we can tighten the program contract further, weird machines become that much more difficult to find and use.

The execution contract defines areas of the program which are executable (yellow). These have diminished over the years as the contract has tightened.

What “weird” looks like

Identifying weird machines is a hard problem. How do we identify loose contracts when we don’t even know what the contract is in the first place? Automatically identifying these weird machines would allow us to triage properly whether a vulnerability is in fact exploitable and whether it would be unexploitable in the absence of weird machines.

One way to programmatically describe a weird machine is through Hoare triples. A Hoare triple describes how the execution of a piece of code changes the state of the computation: the preconditions necessary to enter a new state and the postconditions that hold once the code has run. When we identify weird machines, we can tighten such contracts automatically by removing them or by constraining the preconditions to be exactly what the state expects. This will get us one step closer to creating a program that’s provably secure.
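As a rough sketch in our own notation (not taken from the talk), the intended contract for TrySetItem and the looser contract the compiled code actually enforces can be written as two Hoare triples:

\[
\{\ \texttt{this},\ \texttt{new\_item}\ \text{point to constructed}\ \texttt{ListItem}\ \text{objects}\ \}\quad
\texttt{TrySetItem(new\_item)}\quad
\{\ \texttt{ret} = \texttt{this->m\_next}\ \}
\]
\[
\{\ \texttt{this}\ \text{points to at least 8 mapped bytes}\ \}\quad
\texttt{TrySetItem(new\_item)}\quad
\{\ \texttt{ret} = \texttt{this->m\_next}\ \}
\]

The command and postcondition are the same; only the precondition differs, and that slack is exactly what an attacker can exploit.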

Revisiting our example, we can add dynamic_casts to enforce the contract preconditions. If we analyze the snippet of code as a Hoare triple, we notice that the preconditions for the function’s execution are loose: any address can be passed to the function. Furthermore, the postconditions are nonexistent: once executing, the function will set or return memory regardless of program state.

#include <cstdlib>

struct ListItem
{
    // dynamic_cast requires a polymorphic type, so give ListItem a virtual destructor
    virtual ~ListItem() = default;

    ListItem* TrySetItem(ListItem* new_item)
    {
        if (!dynamic_cast<ListItem*>(this) ||
            !dynamic_cast<ListItem*>(new_item)) {
                // This path should not be allowed
                abort();
        }

        if (!m_next)
            m_next = new_item;

        return m_next;
    }

    struct ListItem *m_next = nullptr;
};

The dynamic_casts are runtime guards that validate the function is operating on the intended pointers. This new function is decidedly not as useful in exploitation as it once was.

A Hoare triple with imprecisely defined preconditions allows for a “weird” state change to occur. Tightening these preconditions by improving input checks would make these states unattainable.

So how do we find them?

There are numerous difficult problems on the road to a solution. The first is scale. We don’t care about simple test cases; we care about real code deployed to billions of people today: browsers, operating systems, mail clients, messaging applications. Automatically identifying weird machines on these platforms is a significant challenge.

Given a set of possible execution paths and their pattern of object creation and access, we must identify program slices with specific and controllable side effects. These slices must themselves be Turing complete. The behavior of these “Turing thunks” may be different outside of their normal placement in the execution paths or with different data states. To scale our analyses, we can break the problem into subcomponents.

We start by identifying Turing thunks, analyzing their side effects, and determining their reachability. Data flow analysis and shape analysis can identify these “Turing thunks” and measure their side effects, which in turn tells us how candidate weird machines compose: alterations to the global state could alter the execution of subsequent weird machines. Data flow provides paths that are transformable based on controlled input. Shape analysis aids in reconstructing heap objects, layout, and the interactions between objects. This helps determine the input constraints necessary to generate a path of execution to a weird machine, as well as the heap state before and after execution of the weird machine.

Once candidates have been identified, it is possible to prioritize based on specific functionality and side effects. We can use symbolic and concolic execution to validate these candidates and machine learning to group candidates by behaviors, execution constraints, and side effects to make later querying easier.

The future of exploitation

In the end, weird machines are a fundamental tool in exploitation. As programs get more complex and mitigations pile on, the importance of weird machines only increases. Finding these Turing snippets and enumerating their properties in real-world programs will shape the next generation of exploitation and defense.

Once we can automatically identify weird machines we will have the ability to remove these weird states, and determine the degree of exploitability of the program. We may also be able to prove a specific vulnerability is unexploitable.

Part of the solution is better terminology, which still needs to mature. The other part is further research into the problem space. There was interest in the topic at D60, and I hope DARPA invests in this area in the future.

The tooling and systems to identify and classify weird machines don’t yet exist. We still have a lot to do, but the building blocks are there. With them we’ll come closer to solving the problems of tomorrow.

If you want to learn more about this area of research, I suggest you start with these publications:

A Guide to Post-Quantum Cryptography

For many high-assurance applications such as TLS traffic, medical databases, and blockchains, forward secrecy is absolutely essential. It is not sufficient to prevent an attacker from immediately decrypting sensitive information. Here the threat model encompasses situations where the adversary may dedicate many years to the decryption of ciphertexts after their collection. One potential way forward secrecy might be broken is that a combination of increased computing power and number-theoretic breakthroughs makes attacking current cryptography tractable. However, unless someone finds a polynomial time algorithm for factoring large integers, this risk is minimal for current best practices. We should be more concerned about the successful development of a quantum computer, since such a breakthrough would render most of the cryptography we use today insecure.

Quantum Computing Primer

Quantum computers are not just massively parallel classical computers. It is often thought that since a quantum bit can occupy both 0 and 1 at the same time, then an n-bit quantum computer can be in 2^n states simultaneously and therefore compute NP-complete problems extremely fast. This is not the case, since measuring a quantum state destroys much of the original information. For example, a quantum system has complete knowledge of both an object’s momentum and location, but any measurement of momentum will destroy information about location and vice versa. This is known as the Heisenberg uncertainty principle. Therefore, successful quantum algorithms consist of a series of transformations of quantum bits such that, at the end of the computation, measuring the state of the system will not destroy the needed information. As a matter of fact, it has been shown that there cannot exist a quantum algorithm that simultaneously attempts all solutions to some NP-complete problem and outputs a correct input. In other words, any quantum algorithm for solving hard classical problems must exploit the specific structure of the problem at hand. Today, there are two such algorithms that can be used in cryptanalysis.

The ability to quickly factor large numbers would break both RSA and discrete log-based cryptography. The fastest algorithm for integer factorization is the general number field sieve, which runs in sub-exponential time. However, in 1994 Peter Shor developed a quantum algorithm (Shor’s algorithm) for integer factorization that runs in polynomial time, and therefore would be able to break any RSA or discrete log-based cryptosystem (including those using elliptic curves). This implies that all widely used public key cryptography would be insecure if someone were to build a quantum computer.

The second is Grover’s algorithm, which is able to invert functions in O(√n) time. This algorithm would reduce the security of symmetric key cryptography by a square-root factor, so AES-256 would only offer 128 bits of security. Similarly, finding a pre-image of a 256-bit hash function would only take 2^128 time. Since increasing the security of a hash function or AES by a factor of two is not very burdensome, Grover’s algorithm does not pose a serious threat to symmetric cryptography. Furthermore, none of the pseudorandom number generators suggested for cryptographic use would be affected by the invention of a quantum computer, other than perhaps the O(√n) factor incurred by Grover’s algorithm.

Types of Post-Quantum Algorithms

Post-quantum cryptography is the study of cryptosystems which can be run on a classical computer, but are secure even if an adversary possesses a quantum computer. Recently, NIST initiated a process for standardizing post-quantum cryptography and is currently reviewing first-round submissions. The most promising of these submissions included cryptosystems based on lattices, isogenies, hash functions, and codes.

Before diving more deeply into each class of submissions, we briefly summarize the tradeoffs inherent in each type of cryptosystem with comparisons to current (not post-quantum) elliptic-curve cryptography. Note that codes and isogenies are capable of producing digital signatures, but no such schemes were submitted to NIST.

                 Signatures   Key Exchange   Fast?
Elliptic Curves  64 bytes     32 bytes
Lattices         2.7 kb       1 kb
Isogenies        n/a          330 bytes
Codes            n/a          1 mb
Hash functions   41 kb        n/a

Table 1: Comparison of classical ECC vs post-quantum schemes submitted to NIST

In terms of security proofs, none of the above cryptosystems reduce to NP-hard (or NP-complete) problems. In the case of lattices and codes, these cryptosystems are based on slight modifications of NP-hard problems. Hash-based constructions rely on the existence of good hash functions and make no other cryptographic assumptions. Finally, isogeny-based cryptography is based on a problem that is conjectured to be hard, but is not similar to an NP-hard problem or prior cryptographic assumption. It’s worth mentioning, however, that just as we cannot prove that any classical cryptosystem cannot be broken in polynomial time (since P could equal NP), it could be the case that problems thought to be difficult for quantum computers might not be. Furthermore, a cryptosystem not reducing to some NP-hard or NP-complete problem shouldn’t be a mark against it, per se, since integer factorization and the discrete log problem are not believed to be NP-complete.

Lattices

Of all the approaches to post-quantum cryptography, lattices are the most actively studied and the most flexible. They have strong security reductions and are capable of key exchanges, digital signatures, and far more sophisticated constructions like fully homomorphic encryption. Despite the extremely complex math needed in both optimizations and security proofs for lattice cryptosystems, the foundational ideas only require basic linear algebra. Suppose you have a system of linear equations of the form

A·x = y

Solving for x is a classic linear algebra problem that can be solved quickly using Gaussian elimination. Another way to think about this is that we have a mystery function

f(a) = a_1·x_1 + … + a_n·x_n

where, given a vector a, we see the result of a·x without knowing x. After querying this function enough times we can learn f in a short amount of time (by solving the system of equations above). This way we can reframe a linear algebra problem as a machine learning problem.

Now, suppose we introduce a small amount of noise to our function, so that after multiplying x and a, we add an error term e and reduce the whole thing modulo a (medium-sized) prime q. Then our noisy mystery function looks like

f(a) = a_1·x_1 + … + a_n·x_n + e  (mod q)

Learning this noisy mystery function has been mathematically proven to be extremely difficult. The intuition is that at each step in the Gaussian elimination procedure we used in the non-noisy case, the error term gets bigger and bigger until it eclipses all useful information about the function. In the cryptographic literature this is known as the Learning With Errors problem (LWE).

The reason cryptography based on LWE gets called lattice-based cryptography is because the proof that LWE is hard relies on the fact that finding the shortest vector in something called a lattice is known to be NP-hard. We won’t go into the mathematics of lattices in much depth here, but one can think of a lattice as a regularly spaced grid of points tiling n-dimensional space.

Lattices are represented by coordinate vectors. For example, in three dimensions, any point in the lattice can be reached by combining the basis vectors e1, e2, and e3 (via normal vector addition). The shortest vector problem (SVP) says: given a lattice, find the element whose length as a vector is shortest. The intuitive reason this is difficult is because not all coordinate systems for a given lattice are equally easy to work with. We could instead have represented the same lattice with three coordinate vectors that were extremely long and close together, which makes finding vectors close to the origin more difficult. As a matter of fact, there is a canonical way to find the “worst possible” representation of a lattice. When using such a representation, the shortest vector problem is known to be NP-hard.

Before getting into how to use LWE to make quantum-resistant cryptography, we should point out that LWE itself is not NP-Hard. Instead of reducing directly to SVP, it reduces to an approximation of SVP that is actually conjectured to not be NP-Hard. Nonetheless, there is currently no polynomial (or subexponential) algorithm for solving LWE.

Now let’s use the LWE problem to create an actual cryptosystem. The simplest scheme was created by Oded Regev in his original paper proving the hardness of the LWE problem. Here, the secret key sk is an n-dimensional vector with integer entries mod q, i.e. the LWE secret mentioned above. The public key is the matrix A from the previous discussion, along with a vector of outputs from the LWE function:

b_i = <a_i, sk> + e_i  (mod q)

An important property of this public key is that when each sample (a_i, b_i) is multiplied by the vector (-sk, 1), we get back the error term e_i, which is roughly 0.

To encrypt a bit of information m, we take the sum of random columns of A and encode m in the last coordinate of the result by adding 0 if m is 0 and q/2 if m is 1. In other words, we pick a random vector x of 0s or 1s, and compute

c = (c_1, c_2) = ( Σ_i x_i·a_i ,  Σ_i x_i·b_i + m·q/2 )  (mod q)

Intuitively, we’ve just evaluated the LWE function (which we know is hard to break) and encoded our bit in the output of this function.

Decryption works because knowing the LWE secret will allow the recipient to get back the message, plus a small error term:

c_2 − <c_1, sk> = m·q/2 + Σ_i x_i·e_i  (mod q)

When the error distribution is chosen correctly, it will never distort the message by more than q/4. The recipient can test whether the output is closer to 0 or q/2 mod q and decode the bit accordingly.
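
Putting the pieces together, here is a toy single-bit version of the scheme in Python (assuming numpy; the parameters are illustrative and far too small to be secure, and the column-oriented layout mirrors the description above):

import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 64, 3329              # toy parameters, not secure

# Key generation: the columns of A are the query vectors, b holds the LWE outputs.
sk = rng.integers(0, q, n)
A = rng.integers(0, q, (n, m))
e = rng.integers(-2, 3, m)
b = (sk @ A + e) % q                # public key is (A, b)

def encrypt(bit):
    x = rng.integers(0, 2, m)                  # random 0/1 choice of columns
    c1 = A @ x % q                             # sum of the chosen columns of A
    c2 = (b @ x + bit * (q // 2)) % q          # matching sum of b, plus the encoded bit
    return c1, c2

def decrypt(c1, c2):
    # c2 - <sk, c1> = bit*(q//2) + (a sum of a few small errors)  (mod q)
    noisy = (c2 - sk @ c1) % q
    return 1 if q // 4 < noisy < 3 * q // 4 else 0

assert all(decrypt(*encrypt(bit)) == bit for bit in (0, 1, 1, 0))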

A major problem with this system is that it has very large keys. Encrypting just one bit of information requires a public key whose size is quadratic (n²) in the security parameter. However, an appealing aspect of lattice cryptosystems is that they are extremely fast.

Since Regev’s original paper there has been a massive body of work around lattice-based cryptosystems. A key breakthrough for improving their practicality was the development of Ring-LWE, which is a variant of the LWE problem where keys are represented by certain polynomials. This has led to a quadratic decrease in key sizes and sped up encryption and decryption to use only n*log(n) operations (using Fast Fourier techniques).

Among the many lattice-based cryptosystems being considered for the NIST PQC standard, two that are especially worth mentioning are the CRYSTALS constructions, Kyber and Dilithium.

Kyber is a key-encapsulation mechanism (KEM) that follows a similar structure to the system outlined above, but uses some fancy algebraic number theory to get even better performance than Ring-LWE. Key sizes are approximately 1 kb for reasonable security parameters (still big!), but encryption and decryption take on the order of 0.075 ms. Considering this speed was achieved in software, the Kyber KEM seems promising for post-quantum key exchange.

Dilithium is a digital signature scheme based on similar techniques to Kyber. Its details are beyond the scope of this blog post, but it is worth mentioning that it also performs well: public keys are around 1 kb and signatures around 2 kb, and on Skylake processors computing a signature took roughly 2 million cycles on average, while verification took about 390,000 cycles.

Codes

The study of error correcting codes has a long history in the computer science literature dating back to the ground-breaking work of Richard Hamming and Claude Shannon. While we cannot even begin to scratch the surface of this deep field in a short blog post, we give a quick overview.

When communicating binary messages, errors can occur in the form of bit flips. Error-correcting codes provide the ability to withstand a certain number of bit flips at the expense of message compactness. For example, we could protect against single bit flips by encoding 0 as 000 and 1 as 111. That way the receiver can determine that 101 was actually a 111, or that 001 was actually a 000, by taking a majority vote of the three bits. This code cannot correct errors where two bits are flipped, though, since 111 turning into 001 would be decoded as 0.
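
The three-bit repetition code from the example is small enough to write out in a few lines of Python (a toy sketch, not a code anyone would deploy):

def encode(bits):
    # Repeat each bit three times: 0 -> 000, 1 -> 111
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote over each group of three received bits
    return [1 if sum(received[i:i + 3]) >= 2 else 0 for i in range(0, len(received), 3)]

assert decode(encode([1, 0])) == [1, 0]   # no errors
assert decode([1, 0, 1]) == [1]           # one flipped bit in 111 is corrected
assert decode([0, 0, 1]) == [0]           # ...but 111 with two bits flipped decodes to 0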

The most prominent error-correcting codes are linear codes, which can be represented by k x n matrices, where k is the length of the original message and n is the length of the encoded message. In general, it is computationally difficult to decode messages without knowing the underlying linear code. This hardness underpins the security of the McEliece public-key cryptosystem.

At a high level, the secret key in the McEliece system is a random code (represented as a matrix G) from a class of codes called Goppa codes. The public key is the matrix SGP, where S is an invertible matrix with binary entries and P is a permutation matrix. To encrypt a message m, the sender computes c = m(SGP) + e, where e is a random error vector with precisely the number of errors the code is able to correct. To decrypt, the receiver computes cP⁻¹ = mSG + eP⁻¹; since mSG is a codeword of G and the permuted error eP⁻¹ still has correctable weight, decoding recovers mS, and the message is then obtained by computing (mS)S⁻¹ = m.

Like lattice-based schemes, code-based cryptography suffers from large keys: using the recommended security parameters, McEliece public keys are around 1 MB and private keys around 11 kB. There is ongoing work on using a special class of codes, quasi-cyclic moderate-density parity-check codes, that can be represented more succinctly than Goppa codes, but their security is less well studied than that of Goppa codes.

Isogenies

The field of elliptic-curve cryptography is somewhat notorious for using quite a bit of arcane math. Isogenies take this to a whole new level. In elliptic-curve cryptography we use a Diffie-Hellman-type protocol to acquire a shared secret, but instead of raising group elements to a certain power, we walk through points on an elliptic curve. In isogeny-based cryptography, we again use a Diffie-Hellman-type protocol, but instead of walking through points on an elliptic curve, we walk through a sequence of elliptic curves themselves.

An isogeny is a function that transforms one elliptic curve into another in such a way that the group structure of the first curve is reflected in the second. For those familiar with group theory, it is a group homomorphism with some added structure dealing with the geometry of each curve. When we restrict our attention to supersingular elliptic curves (which we won’t define here), each curve is guaranteed to have a fixed number of isogenies from it to other supersingular curves.

Now, consider the graph created by examining all the isogenies of this form from our starting curve, then all the isogenies from those curves, and so on. This graph turns out to be highly structured in the sense that if we take a random walk starting at our first curve, the probability of hitting a specific other curve is negligibly small (unless we take exponentially many steps). In math jargon, we say that the graph generated by examining all these isogenies is an expander graph (and also Ramanujan). This property of expansion is precisely what makes isogeny-based cryptography secure.

For the Supersingular Isogeny Diffie-Hellman (SIDH) scheme, secret keys are a chain of isogenies and public keys are curves. When Alice and Bob combine this information, they acquire curves that are different but have the same j-invariant. For the purposes of cryptography, what a j-invariant is matters less than the fact that it is a number both Alice and Bob can easily compute once they have completed the key exchange.

Isogeny-based cryptography has extremely small key sizes compared to other post-quantum schemes, using only 330 bytes for public keys. Unfortunately, of all the techniques discussed in this post, it is the slowest, taking 11 to 13 ms for both key generation and shared-secret computation. It does, however, support perfect forward secrecy, which is not something the other post-quantum cryptosystems possess.

Hash-Based Signatures

There are already many friendly introductions to hash-based signatures, so we keep our discussion of them fairly high-level. In short, hash-based signatures use inputs to a hash function as secret keys and outputs as public keys. These keys only work for one signature, though, as the signature itself reveals parts of the secret key. This extreme inefficiency led to the use of Merkle trees to reduce space consumption (yes, the same Merkle trees used in Bitcoin).
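
The simplest instance of this idea is a Lamport one-time signature. Here is a minimal sketch in Python for signing an 8-bit message (the sizes and the choice of SHA-256 are arbitrary illustration choices): the secret key is a pair of random preimages per message bit, the public key is their hashes, and signing reveals one preimage per bit, which is exactly why each key can be used only once.

import hashlib, os

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=8):
    # Secret key: two random preimages per message bit; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg_bits):
    # Reveal one preimage per bit -- this is why the key is strictly one-time.
    return [sk[i][bit] for i, bit in enumerate(msg_bits)]

def verify(pk, msg_bits, sig):
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits))

sk, pk = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert verify(pk, msg, sign(sk, msg))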

Unfortunately, it is not possible to construct a KEM or a public-key encryption scheme out of hashes alone, so hash-based signatures are not a full post-quantum cryptography solution. Furthermore, they are not space efficient; one of the more promising signature schemes, SPHINCS, produces 41 kb signatures and 1 kb public/private keys. On the other hand, hash-based schemes are extremely fast since they only require computing hash functions. They also have extremely strong security proofs, based solely on the assumption that there exist hash functions that are collision-resistant and preimage-resistant. Since nothing suggests that current widely used hash functions like SHA-3 or BLAKE2 are vulnerable to collision or preimage attacks, hash-based signatures are secure.

Takeaways

Post-quantum cryptography is an incredibly exciting area of research that has seen an immense amount of growth over the last decade. While the four types of cryptosystems described in this post have received lots of academic attention, none have been approved by NIST and as a result are not recommended for general use yet. Many of the schemes are not performant in their original form, and have been subject to various optimizations that may or may not affect security. Indeed, several attempts to use more space-efficient codes for the McEliece system have been shown to be insecure. As it stands, getting the best security from post-quantum cryptosystems requires a sacrifice of some amount of either space or time. Ring lattice-based cryptography is the most promising avenue of work in terms of flexibility (both signatures and KEM, also fully homomorphic encryption), but the assumptions that it is based on have only been studied intensely for several years. Right now, the safest bet is to use McEliece with Goppa codes since it has withstood several decades of cryptanalysis.

However, each use case is unique. If you think you might need post-quantum cryptography, get in touch with your friendly neighborhood cryptographer. Everyone else ought to wait until NIST has finished its standardization process.

Slither – a Solidity static analysis framework

Slither is the first open-source static analysis framework for Solidity. Slither is fast and precise; it can find real vulnerabilities in a few seconds without user intervention. It is highly customizable and provides a set of APIs to inspect and analyze Solidity code easily. We use it in all of our security reviews. Now you can integrate it into your code-review process.

We are open sourcing the core analysis engine of Slither. This core provides advanced static-analysis features, including an intermediate representation (SlithIR) with taint tracking capabilities on top of which complex analyses (“detectors”) can be built. We have built many detectors, including ones that detect reentrancy and suicidal contracts. We are open sourcing some as examples.

If you are a smart-contract developer, a security expert, or an academic researcher, then you will find Slither invaluable.

Built for continuous integration

Slither has a simple command-line interface. To run all of its detectors on a Solidity file, this is all you need:

$ slither contract.sol

You can integrate Slither into your development process without any configuration. Run it on each commit to check that you are not adding new bugs.

Helps automate security reviews

Slither provides an API to inspect Solidity code via custom scripts. We use this API to rapidly answer unique questions about the code we’re reviewing. We have used Slither to:

  • Identify code that can modify a variable’s value.
  • Isolate the conditional logic statements that are influenced by a particular variable’s value.
  • Find other functions that are transitively reachable as a result of a call to a particular function.

For example, the following script will show which function(s) in myContract write to the state variable myVar:

# function_writing.py
import sys
from slither.slither import Slither

if len(sys.argv) != 2:
    print('Usage: python function_writing.py file.sol')
    exit(-1)

# Init slither
slither = Slither(sys.argv[1])

# Get the contract
contract = slither.get_contract_from_name('myContract')

# Get the variable
myVar = contract.get_state_variable_from_name('myVar')

# Get the functions writing the variable
funcs_writing_myVar = contract.get_functions_writing_to_variable(myVar)

# Print the result
print('Functions that write to "myVar": {}'.format([f.name for f in funcs_writing_myVar]))

Figure 1: Slither API Example

Read the API documentation and the examples to start harnessing Slither.

Aids in understanding contracts

Slither comes with a set of predefined “printers” which show high-level information about the contract. We included four that work out-of-the-box to print essential security information: a contract summary, a function summary, a graph of inheritance, and an authorization overview.

1. Contract summary printer

Gives a quick summary of the contract, showing the functions and their visibility:

Figure 2: Contract Summary Printer

2. Function summary printer

Shows useful information for each function, such as the state variables read and written, or the functions called:

Figure 3: Function Summary Printer

3. Inheritance printer

Outputs a graph highlighting the inheritance dependencies of all the contracts:

Figure 4: Inheritance Printer

4. Authorization printer

Shows what a user with privileges can do on the contract:

Figure 5: Authorization Printer

See Slither’s documentation for information about adding your own printers.

A foundation for research

Slither uses its own intermediate representation, SlithIR, to build innovative vulnerability analyses on Solidity. It provides access to the CFG of the functions, the inheritance of the contracts, and lets you inspect Solidity expressions.

Many academic tools, such as Oyente or MAIAN, advanced the state of the art when they were released. However, each academic team had to invent its own framework, built for only the limited scope of its particular area of interest. Maintenance quickly became a challenge. In contrast, Slither is a generic framework. Because it's capable of the widest possible range of security analyses, it is regularly maintained and used by our open-source community.

If you are an academic researcher, don’t spend time and effort parsing and recovering information from smart contracts. Prototype your new innovations on top of Slither, complete your research sooner, and ensure it maintains its utility over time.

It’s easy to extend Slither’s capabilities with new detector plugins. Read the detector documentation to start writing your own.

Next steps

Slither can find real vulnerabilities in a few seconds with minimal or no user interaction. We use it on all of our Solidity security reviews. You should too!

Many of our ongoing projects will improve Slither, including:

  • API enhancements: Now that we have open sourced the core, we intend to provide the most effective static analysis framework possible.
  • More precise built-in analyses: We plan to make several new layers of information, such as value tracking, accessible to the API.
  • Toolchain integration: We plan to combine Slither with Manticore, Echidna, and Truffle to automate the triage of issues.

Questions about Slither’s API and its core framework? Join the Empire Hacking Slack. Need help integrating Slither into your development process? Want access to our full set of detectors? Contact us.

Introduction to Verifiable Delay Functions (VDFs)

Finding randomness on the blockchain is hard. A classic mistake developers make when trying to acquire a random value on-chain is to use quantities like future block hashes, block difficulty, or timestamps. The problem with these schemes is that they are vulnerable to manipulation by miners. For example, suppose we are trying to run an on-chain lottery where users guess whether the hash of the next block will be even or odd. A miner then could bet that the outcome is even, and if the next block they mine is odd, discard it. Here, tossing out the odd block slightly increases the miner’s probability of winning the lottery. There are many real-world examples of “randomness” being generated via block variables, but they all suffer from the unavoidable problem that it is computationally easy for observers to determine how choices they make will affect the randomness generated on-chain.

Another related problem is electing leaders and validators in proof of stake protocols. In this case it turns out that being able to influence or predict randomness allows a miner to affect when they will be chosen to mine a block. There are a wide variety of techniques for overcoming this issue, such as Ouroboros’s verifiable secret-sharing scheme. However, they all suffer from the same pitfall: a non-colluding honest majority must be present.

In both of the above scenarios it is easy for attackers to see how different inputs affect the result of a pseudorandom number generator. This led Boneh et al. to define verifiable delay functions (VDFs). VDFs are functions that require a moderate amount of sequential computation to evaluate, but once a solution is found, it is easy for anyone to verify that it is correct. Think of VDFs as a time delay imposed on the output of some pseudorandom generator. This delay prevents malicious actors from influencing the output of the pseudorandom generator, since all inputs will be finalized before anyone can finish computing the VDF.

When used for leader selection, VDFs offer a substantial improvement over verifiable random functions. Instead of requiring a non-colluding honest majority, VDF-based leader selection only requires the presence of any honest participant. This added robustness is due to the fact that no amount of parallelism will speed up the VDF, and any non-malicious actor can easily verify that anyone else's claimed VDF output is accurate.

VDF Definitions

Given a delay time t, a verifiable delay function f must be both

  1. Sequential: anyone can compute f(x) in t sequential steps, but no adversary with a large number of processors can distinguish the output of f(x) from random in significantly fewer steps
  2. Efficiently verifiable: Given the output y, any observer can verify that y = f(x) in a short amount of time (specifically log(t)).

In other words, a VDF is a function which takes exponentially more time to compute (even on a highly parallel processor) than it does to verify on a single processor. Also, the probability of a verifier accepting a false VDF output must be extremely small (chosen by a security parameter λ during initialization). The condition that no one can distinguish the output of f(x) from random until the final result is reached is essential. Suppose we are running a lottery where users submit 16-bit integers and the winning number is determined by giving a seed to a VDF that takes 20 min to compute. If an adversary can learn 4 bits of the VDF output after only 1 min of VDF computation, then they might be able to alter their submission and boost their chance of success by a factor of 16!

Before jumping into VDF constructions, let’s examine why an “obvious” but incorrect approach to this problem fails. One such approach would be repeated hashing. If the computation of some hash function h takes t steps to compute, then using f = h(h(...h(x))) as a VDF would certainly satisfy the sequential requirement above. Indeed, it would be impossible to speed this computation up with parallelism since each application of the hash depends entirely on the output of the previous one. However, this does not satisfy the efficiently verifiable requirement of a VDF. Anyone trying to verify that f(x) = y would have to recompute the entire hash chain. We need the evaluation of our VDF to take exponentially more time to compute than to verify.
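
A few lines of Python make the lack of asymmetry obvious (the iteration count is an arbitrary stand-in for the delay parameter):

import hashlib

def hash_chain(seed, t):
    # t sequential hash applications; each step depends on the previous output,
    # so extra processors do not help the evaluator...
    out = seed
    for _ in range(t):
        out = hashlib.sha256(out).digest()
    return out

def verify(seed, t, claimed):
    # ...but the only way to check a claimed output is to redo all t steps,
    # which violates the "efficiently verifiable" requirement.
    return hash_chain(seed, t) == claimed

y = hash_chain(b"finalized lottery input", 1_000_000)
assert verify(b"finalized lottery input", 1_000_000, y)   # as slow as evaluating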

VDF Candidates

There are currently three candidate constructions that satisfy the VDF requirements. Each one has its own potential downsides. The first was outlined in the original VDF paper by Boneh, et al. and uses injective rational maps. However, evaluating this VDF requires a somewhat large amount of parallel processing, leading the authors to refer to it as a “weak VDF.” Later, Pietrzak and Wesolowski independently arrived at extremely similar constructions based on repeated squaring in groups of unknown order. At a high level, here’s how the Pietrzak scheme works.

  1. To set up the VDF, choose a time parameter T, a finite abelian group G of unknown order, and a hash function H from bytes to elements of G.
  2. Given an input x, let g = H(x), and evaluate the VDF by computing y = g^(2^T)

The repeated squaring computation is not parallelizable and reveals nothing about the end result until the last squaring. Both properties are due to the fact that we do not know the order of G; that knowledge would allow attackers to use group-theoretic attacks to speed up the computation.
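
Here is what evaluation looks like in Python with a toy RSA-style modulus (the primes are small, publicly known values, so this illustrates only the structure, not the security): T sequential squarings produce the output, and anyone who knows the group order can skip the loop entirely.

# Toy evaluation of y = g^(2^T) mod N by T sequential squarings.
N = 104729 * 1299709                 # small, publicly known primes: illustration only
T = 1 << 16                          # delay parameter: 65,536 squarings

def eval_vdf(x, T, N):
    y = x % N
    for _ in range(T):
        y = y * y % N                # each squaring depends on the previous result
    return y

g = 0xC0FFEE
y = eval_vdf(g, T, N)

# Anyone who knows the group order (here, via the factorization of N) has a shortcut:
# reduce the exponent 2^T modulo phi(N) and finish with one modular exponentiation.
# (This uses Euler's theorem, which applies because gcd(g, N) = 1 here.)
phi = (104729 - 1) * (1299709 - 1)
assert y == pow(g, pow(2, T, phi), N)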

Now, suppose someone asserts that the output of the VDF is some number z (which may or may not be equal to y). This is equivalent to showing that z = v^(2^(T/2)) and v = g^(2^(T/2)). Since both of the previous equations have the same exponent, they can be verified simultaneously by checking a random linear combination, e.g., v^r · z = (g^r · v)^(2^(T/2)), for a random r in {1, …, 2^λ} (where λ is the security parameter). More formally, the prover and verifier perform the following interactive proof scheme:

  1. The prover computes v = g^(2^(T/2)) and sends v to the verifier
  2. The verifier sends a random r in {1, …, 2^λ} to the prover
  3. Both the prover and verifier compute g1 = g^r · v and z1 = v^r · z
  4. The prover and verifier recursively prove that z1 = g1^(2^(T/2))

The above scheme can be made non-interactive using a technique called the Fiat-Shamir heuristic. Here, the prover generates a challenge r at each level of the recursion by hashing (G, g, z, T, v) and appending v to the proof. In this scenario the proof contains log₂(T) group elements, and producing it requires approximately (1 + 2/√T)·T group multiplications.
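
To make the recursion concrete, here is a compact Python sketch of the halving protocol in its Fiat-Shamir form (the modulus is a toy value whose factorization is public, the challenge derivation is simplified, and T must be a power of two, so this shows only the structure of the proof, not its security):

import hashlib

N = 104729 * 1299709                  # toy modulus built from known primes

def challenge(g, z, T, v):
    # Fiat-Shamir: derive the challenge r by hashing the transcript.
    data = "|".join(map(str, (g, z, T, v))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def prove(g, T):
    """Compute y = g^(2^T) mod N and the midpoint v for each level of the recursion."""
    y = g
    for _ in range(T):
        y = y * y % N
    proof, z = [], y
    while T > 1:
        v = g
        for _ in range(T // 2):
            v = v * v % N             # v = g^(2^(T/2))
        r = challenge(g, z, T, v)
        proof.append(v)
        g = pow(g, r, N) * v % N      # g1 = g^r * v
        z = pow(v, r, N) * z % N      # z1 = v^r * z
        T //= 2
    return y, proof

def verify(g, T, z, proof):
    for v in proof:
        r = challenge(g, z, T, v)
        g = pow(g, r, N) * v % N
        z = pow(v, r, N) * z % N
        T //= 2
    return T == 1 and z == g * g % N  # base case: z should equal g^2

y, proof = prove(0xC0FFEE, 1 << 10)
assert verify(0xC0FFEE, 1 << 10, y, proof)

The real scheme computes the midpoints more cleverly (hence the (1 + 2/√T)·T bound above) and works in a group whose order nobody knows; the sketch only shows how each halving step folds the claim in two.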

Security Analysis of Pietrzak Scheme

The security of Pietrzak's scheme relies on the low-order assumption: it is computationally infeasible for an adversary to find an element of low order in the group being used by the VDF. To see why finding an element of low order breaks the scheme, first assume that a malicious prover Eve found some element m of small order d. Eve then sends z·m to the verifier (where z is the valid output). The invalid output will be accepted with probability 1/d, since (the algebra is expanded below the list):

  1. When computing the second step of the recursion, we will have the base element g1 = g^r · v, where v = g^(2^(T/2)) · m, and need to show that g1^(2^(T/2)) = v^r · (z·m)
  2. The m term on the left-hand side is m^(2^(T/2))
  3. The m term on the right-hand side is m^(r+1)
  4. Since m has order d, the two sides are equal exactly when r + 1 = 2^(T/2) mod d, which happens with probability 1/d for a random r
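
Expanding both sides of the check makes the comparison of the m terms explicit (this just restates the list above in symbols):

g_1^{2^{T/2}} = \left(g^{r}\, g^{2^{T/2}} m\right)^{2^{T/2}} = g^{r \cdot 2^{T/2}}\, g^{2^{T}}\, m^{2^{T/2}},
\qquad
v^{r}(zm) = g^{r \cdot 2^{T/2}} m^{r}\, g^{2^{T}}\, m = g^{r \cdot 2^{T/2}}\, g^{2^{T}}\, m^{r+1},

so the two sides agree exactly when m^{2^{T/2}} = m^{r+1}, i.e., when r + 1 ≡ 2^{T/2} (mod d).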

To see a full proof of why the low order assumption is both necessary and sufficient to show Pietrzak’s scheme is sound, see Boneh’s survey of recent VDF constructions.

The security analysis assumes that one can easily generate a group of unknown order that satisfies the low-order assumption. We will see below that there are currently no known groups that satisfy these constraints and are amenable to a trustless setup, i.e., a setup in which no party can subvert the VDF protocol.

For example, let's try to use everyone's favorite family of groups: the integers modulo the product of two large primes (RSA groups). These groups have unknown order, since computing the order requires factoring a large integer. However, they do not satisfy the low-order assumption: the element -1 always has order 2. This can be remedied by taking the quotient of an RSA group G by the subgroup {1, -1}. In fact, if the modulus of G is a product of strong primes (primes p such that (p - 1)/2 is also prime), then after taking this quotient there are no elements of low order other than 1.

This analysis implies that RSA groups are secure for Pietrzak's VDF, but there's a problem. To generate an RSA group, someone has to know the factorization of the modulus N. Devising a trustless RSA group selection protocol, in which no one knows the factorization of the modulus N, is therefore an interesting and important open problem in this area.

Another avenue of work towards instantiating Pietrzak’s scheme involves using the class group of an imaginary quadratic number field. This family of groups does not suffer from the above issue where selection requires a trusted third party. Simply choosing a large negative prime (with several caveats) will generate a group whose order is computationally infeasible to determine even for the person who chose the prime. However, unlike RSA groups, the difficulty of finding low-order elements in class groups of quadratic number fields is not well studied and would require more investigation before any such scheme could be used.

State of VDFs and Open Problems

As mentioned in the previous section, both the Pietrzak and Wesolowski schemes rely on generating a group of unknown order. Doing so without a trusted party is difficult in the case of RSA groups, but class groups seem to be a somewhat promising avenue of work. Furthermore, the Wesolowski scheme assumes the existence of groups that satisfy something called the adaptive root assumption, which is not well studied in the mathematical literature. There are many other open problems in this area, including constructing quantum-resistant VDFs and understanding whether ASICs could undermine the security guarantees of VDF constructions in practice.

As for industry adoption of VDFs, several companies in the blockchain space are trying to use VDFs in consensus algorithms. Chia, for example, uses the repeated squaring technique outlined above and is currently running a competition for the fastest implementation of this scheme. The Ethereum Foundation also appears to be developing a pseudorandom number generator that combines RANDAO with VDFs. While both are very exciting projects that will be hugely beneficial to the blockchain community, this remains a very young area of research. Take any claim of security with a grain of salt.

How to Spot Good Fuzzing Research

Of the nearly 200 papers on software fuzzing that have been published in the last three years, most of them—even some from high-impact conferences—are academic clamor. Fuzzing research suffers from inconsistent and subjective benchmarks, which keeps this potent field in a state of arrested development. We’d like to help explain why this has happened and offer some guidance for how to consume fuzzing publications.

Researchers play a high-stakes game in their pursuit of building the next generation of fuzzing tools. A major breakthrough can render obsolete what was once the state of the art. Nobody is eager to use the world’s second-greatest fuzzer. As a result, researchers must somehow demonstrate how their work surpasses the state of the art in finding bugs.

The problem is trying to objectively test the efficacy of fuzzers. There isn’t a set of universally accepted benchmarks that is statistically rigorous, reliable, and reproducible. Inconsistent fuzzing measurements persist throughout the literature and prevent meaningful meta-analysis. That was the motivation behind the paper “Evaluating Fuzz Testing,” to be presented by Andrew Ruef at the 2018 SIGSAC Conference on Computer and Communications Security in Toronto.

“Evaluating Fuzz Testing” offers a comprehensive set of best practices for constructing a dependable frame of reference for comparing fuzzing tools. Whether you’re submitting your fuzzing research for publication, peer-reviewing others’ submissions, or trying to decide which tool to use in practice, the recommendations from Ruef and his colleagues establish an objective lens for evaluating fuzzers. In case you don’t have time to read the whole paper, we’re summarizing the criteria we recommend you use when evaluating the performance claims in fuzzing research.

Quick Checklist for Benchmarking Fuzzers

  1. Compare new research against popular baseline tools like american fuzzy lop (AFL), Basic Fuzzing Framework (BFF), libfuzzer, Radamsa, and Zzuf. In lieu of a common benchmark, reviewing research about these well-accepted tools will prepare you to ascertain the quality of other fuzzing research. The authors note that “there is a real need for a solid, independently defined benchmark suite, e.g., a DaCapo or SPEC10 for fuzz testing.” We agree.
  2. Outputs should be easy to read and compare. Ultimately, this is about finding the fuzzer that delivers the best results. “Best” is subjective (at least until that common benchmark comes along), but evaluators’ work will be easier if they can interpret fuzzers’ results easily. As Ruef and his colleagues put it: “Clear knowledge of ground truth avoids overcounting inputs that correspond to the same bug, and allows for assessing a tool’s false positives and false negatives.”
  3. Account for differences in heuristics. Heuristics influence how fuzzers start and pursue their searches through code paths. If two fuzzers’ heuristics lead them to different targets, then the fuzzers will produce different results. Evaluators have to account for that influence in order to compare one fuzzer’s results against another’s.
  4. Target representative datasets with distinguishable bugs, such as the Cyber Grand Challenge binaries, LAVA-M, and Google's fuzzer test suite, as well as native programs like nm, objdump, cxxfilt, gif2png, and FFmpeg. Again, for lack of a common benchmark suite, fuzzer evaluators should look for research that used one of the above datasets (or, better yet, one native and one synthetic). Relying on fixed datasets can encourage researchers to 'fuzz to the test,' which doesn't do anyone any good; nevertheless, these datasets provide some basis for comparison. Related: as we put more effort into fuzzers, we should invest in refreshing the datasets used to evaluate them, too.
  5. Fuzzers are configured to begin in similar and comparable initial states. If two fuzzers’ configuration parameters reflect different priorities, then the fuzzers will yield different results. It isn’t realistic to expect that all researchers will use the same configuration parameters, but it’s quite reasonable to expect that those parameters are specified in their research.
  6. Timeout values are at least 24 hours. Of the 32 papers the authors reviewed, 11 capped timeouts at “less than 5 or 6 hours.” The results of their own tests of AFL and AFLFast varied by the length of the timeout: “When using a non-empty seed set on nm, AFL outperformed AFLFast at 6 hours, with statistical significance, but after 24 hours the trend reversed.” If all fuzzing researchers allotted the same time period—24 hours—for their fuzz runs, then evaluators would have one less variable to account for.
  7. Consistent definitions of distinct crashes throughout the experiment. Since there’s some disagreement in the profession about how to categorize unique crashes and bugs (by the input or by the bug triggered), evaluators need to seek the researchers’ definition in order to make a comparison. That said, beware the authors’ conclusion: “experiments we carried out showed that the heuristics [for de-duplicating or triaging crashes] can dramatically over-count the number of bugs, and indeed may suppress bugs by wrongly grouping crashing inputs.”
  8. Consistent input seed files. The authors found that fuzzers’ “performance can vary substantially depending on what seeds are used. In particular, two different non-empty inputs need not produce similar performance, and the empty seed can work better than one might expect.” Somewhat surprisingly, many of the 32 papers evaluated did not carefully consider the impact of seed choices on algorithmic improvements.
  9. At least 30 runs per configuration with variance measured. With that many runs, anomalies can be ignored. Don’t compare the results of single runs (“as nearly ⅔ of the examined papers seem to,” the authors report!). Instead, look for research that not only performed multiple runs, but also used statistical tests to account for variances in those tests’ performance.
  10. Prefer bugs discovered over code coverage metrics. We at Trail of Bits believe that our work should have a practical impact. Though code coverage is an important criterion in choosing a fuzzer, this is about finding and fixing bugs. Evaluators of fuzz research should measure performance in terms of known bugs, first and foremost.

Despite how obvious or simple these recommendations may seem, the authors reviewed 32 high-quality publications on fuzzing and did not find a single paper that aligned with all 10. Then they demonstrated how conclusive the results from rigorous and objective experiments can be by using AFLFast and AFL as a case study. They determined that, “Ultimately, while AFLFast found many more ‘unique’ crashing inputs than AFL, it only had a slightly higher likelihood of finding more unique bugs in a given run.”

The authors' results and conclusions showed decisively that in order to advance the science of software fuzzing, researchers must strive for disciplined statistical analysis and more rigorous empirical measurement. We believe this paper will begin a new chapter in fuzzing research by providing computer scientists with an excellent set of standards for designing, evaluating, and reporting software fuzzing experiments in the future.

In the meantime, if you’re evaluating a fuzzer for your work, approach with caution and this checklist.