Finding randomness on the blockchain is hard. A classic mistake developers make when trying to acquire a random value on-chain is to use quantities like future block hashes, block difficulty, or timestamps. The problem with these schemes is that they are vulnerable to manipulation by miners. For example, suppose we are trying to run an on-chain lottery where users guess whether the hash of the next block will be even or odd. A miner could then bet that the outcome is even and, if the next block they mine is odd, discard it. Here, tossing out the odd block slightly increases the miner’s probability of winning the lottery. There are many real-world examples of “randomness” being generated via block variables, but they all suffer from the unavoidable problem that it is computationally easy for observers to determine how choices they make will affect the randomness generated on-chain.

Another related problem is electing leaders and validators in proof of stake protocols. In this case it turns out that being able to influence or predict randomness allows a miner to affect when they will be chosen to mine a block. There are a wide variety of techniques for overcoming this issue, such as Ouroboros’s verifiable secret-sharing scheme. However, they all suffer from the same pitfall: a non-colluding honest majority must be present.

In both of the above scenarios, it is easy for attackers to see how different inputs affect the result of a pseudorandom number generator. This led Boneh *et al.* to define **verifiable delay functions** (VDFs). VDFs are functions that require a moderate amount of sequential computation to evaluate, but once a solution is found, it is easy for anyone to verify that it is correct. Think of VDFs as a time delay imposed on the output of some pseudorandom generator. This delay prevents malicious actors from influencing the output of the pseudorandom generator, since all inputs will be finalized before anyone can finish computing the VDF.

When used for leader selection, VDFs offer a substantial improvement over verifiable random functions. Instead of requiring a non-colluding honest majority, VDF-based leader selection only requires the presence of any honest participant. This added robustness is due to the fact that no amount of parallelism will speed up the VDF, and any non-malicious actor can easily verify that anyone else’s claimed VDF output is accurate.

## VDF Definitions

Given a delay time `t`, a verifiable delay function `f` must be both

- **Sequential**: anyone can compute `f(x)` in `t` sequential steps, but no adversary with a large number of processors can distinguish the output of `f(x)` from random in significantly fewer steps
- **Efficiently verifiable**: given the output `y`, any observer can verify that `y = f(x)` in a short amount of time (specifically `log(t)`)

In other words, a VDF is a function which takes exponentially more time to compute (even on a highly parallel processor) than it does to verify on a single processor. Also, the probability of a verifier accepting a false VDF output must be extremely small (chosen by a security parameter λ during initialization). The condition that no one can distinguish the output of `f(x)` from random until the final result is reached is essential. Suppose we are running a lottery where users submit 16-bit integers and the winning number is determined by giving a seed to a VDF that takes 20 minutes to compute. If an adversary can learn 4 bits of the VDF output after only 1 minute of VDF computation, then they might be able to alter their submission and boost their chance of success by a factor of 16!

Before jumping into VDF constructions, let’s examine why an “obvious” but incorrect approach to this problem fails. One such approach would be repeated hashing. If the computation of some hash function `h` takes `t` steps to compute, then using `f = h(h(...h(x)))` as a VDF would certainly satisfy the sequential requirement above. Indeed, it would be impossible to speed this computation up with parallelism, since each application of the hash depends entirely on the output of the previous one. However, this does not satisfy the efficiently verifiable requirement of a VDF: anyone trying to verify that `f(x) = y` would have to recompute the entire hash chain. We need the evaluation of our VDF to take exponentially more time to compute than to verify.
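A quick sketch of this naive hash-chain approach makes the problem concrete (the chain length and seed below are arbitrary choices for illustration):

```python
import hashlib

# Naive "repeated hashing" delay function: each SHA-256 application
# depends on the previous digest, so evaluation is inherently sequential.
def hash_chain(x: bytes, t: int) -> bytes:
    for _ in range(t):
        x = hashlib.sha256(x).digest()
    return x

out = hash_chain(b"seed", 100_000)

# The catch: "verification" is just recomputation, the same t sequential
# steps the evaluator performed. There is no shortcut for the verifier.
assert hash_chain(b"seed", 100_000) == out
```

Because the verifier must redo all `t` hashes, evaluation and verification cost exactly the same, which is precisely what a VDF must avoid.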

## VDF Candidates

There are currently three candidate constructions that satisfy the VDF requirements. Each one has its own potential downsides. The first was outlined in the original VDF paper by Boneh *et al.* and uses injective rational maps. However, evaluating this VDF requires a somewhat large amount of parallel processing, leading the authors to refer to it as a “weak VDF.” Later, Pietrzak and Wesolowski independently arrived at extremely similar constructions based on repeated squaring in groups of unknown order. At a high level, here’s how the Pietrzak scheme works.

- To set up the VDF, choose a time parameter `T`, a finite abelian group `G` of unknown order, and a hash function `H` from bytes to elements of `G`.
- Given an input `x`, let `g = H(x)` and evaluate the VDF by computing `y = g^(2^T)`.
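A minimal sketch of setup and evaluation, assuming a toy modulus standing in for a group of unknown order (`N` and `T` here are illustrative values, not secure choices):

```python
import hashlib

# Toy parameters: N stands in for a group of unknown order. A real
# deployment would use an RSA modulus or class group, with T large
# enough to enforce the desired wall-clock delay.
N = 100_003 * 100_019
T = 1_000  # time parameter: number of sequential squarings

def H(x: bytes) -> int:
    """Hash input bytes to a group element (here, an integer mod N)."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def vdf_eval(x: bytes) -> int:
    """Evaluate the VDF: y = g^(2^T), via T sequential squarings."""
    y = H(x)
    for _ in range(T):
        y = y * y % N  # each squaring depends on the previous result
    return y

y = vdf_eval(b"lottery seed")
```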

The repeated squaring computation is not parallelizable and reveals nothing about the end result until the last squaring. Both properties are due to the fact that we do not know the order of `G`; that knowledge would allow attackers to use group theory based attacks to speed up the computation.

Now, suppose someone asserts that the output of the VDF is some number `z` (which may or may not be equal to `y`). This is equivalent to showing that `v = g^(2^(T/2))` and `z = v^(2^(T/2))`. Since both of the previous equations have the same exponent, they can be verified simultaneously by checking a random linear combination, e.g., `v^r z = (g^r v)^(2^(T/2))`, for a random `r` in `{1, …, 2^λ}` (where λ is the security parameter). More formally, the prover and verifier perform the following interactive proof scheme:

- The prover computes `v = g^(2^(T/2))` and sends `v` to the verifier
- The verifier sends a random `r` in `{1, …, 2^λ}` to the prover
- Both the prover and verifier compute `g_1 = g^r v` and `z_1 = v^r z`
- The prover and verifier recursively prove that `z_1 = g_1^(2^(T/2))`
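The halving protocol can be sketched as follows. This is a toy instantiation with prover and verifier merged into one function for brevity; the modulus and challenge space are illustrative, not secure parameters:

```python
import random

N = 1_000_003 * 1_000_033  # illustrative stand-in for a group of unknown order
LAMBDA = 16                # toy challenge space {1, ..., 2^LAMBDA}

def evaluate(g: int, T: int, n: int) -> int:
    """y = g^(2^T) mod n, computed by T sequential squarings."""
    y = g
    for _ in range(T):
        y = y * y % n
    return y

def interactive_verify(g: int, z: int, T: int, n: int) -> bool:
    """Run the halving protocol on the claim z = g^(2^T); T a power of two."""
    while T > 1:
        v = evaluate(g, T // 2, n)           # prover: midpoint v = g^(2^(T/2))
        r = random.randrange(1, 2**LAMBDA)   # verifier: random challenge r
        # both sides fold the two half-size claims into one: g_1 = g^r v, z_1 = v^r z
        g, z = pow(g, r, n) * v % n, pow(v, r, n) * z % n
        T //= 2
    return z == g * g % n  # base case: check z = g^2 directly

g, T = 3, 64
z = evaluate(g, T, N)
assert interactive_verify(g, z, T, N)          # honest output is accepted
assert not interactive_verify(g, z + 1, T, N)  # wrong output rejected (w.h.p.)
```

The fold preserves the invariant exactly for an honest prover, so completeness holds with certainty, while a false claim survives each round only with negligible probability over `r`.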

The above scheme can be made non-interactive using a technique called the Fiat-Shamir heuristic. Here, the prover generates a challenge `r` at each level of the recursion by hashing `(G, g, z, T, v)` and appending `v` to the proof. In this scenario the proof contains `log_2(T)` elements, and generating it requires approximately `(1 + 2/√T) T` group multiplications.
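Assuming the same kind of toy modulus as a stand-in group, a non-interactive sketch derives each challenge by hashing the current claim and ships the midpoints as the proof (the hash-to-challenge encoding below is an arbitrary illustrative choice):

```python
import hashlib

N = 1_000_003 * 1_000_033  # illustrative modulus, not a secure group

def evaluate(g: int, T: int, n: int) -> int:
    y = g
    for _ in range(T):
        y = y * y % n
    return y

def challenge(g: int, z: int, T: int, v: int) -> int:
    """Fiat-Shamir: derive r by hashing the current claim (g, z, T, v)."""
    digest = hashlib.sha256(f"{g}|{z}|{T}|{v}".encode()).digest()
    return int.from_bytes(digest, "big") % (2**16) + 1

def prove(g: int, z: int, T: int, n: int) -> list[int]:
    proof = []
    while T > 1:
        v = evaluate(g, T // 2, n)
        proof.append(v)  # append each midpoint v to the proof
        r = challenge(g, z, T, v)
        g, z = pow(g, r, n) * v % n, pow(v, r, n) * z % n
        T //= 2
    return proof  # log2(T) elements

def verify(g: int, z: int, T: int, n: int, proof: list[int]) -> bool:
    for v in proof:
        r = challenge(g, z, T, v)  # replay the same challenges
        g, z = pow(g, r, n) * v % n, pow(v, r, n) * z % n
        T //= 2
    return T == 1 and z == g * g % n

g, T = 3, 64
z = evaluate(g, T, N)
proof = prove(g, z, T, N)
assert len(proof) == 6  # log2(64) midpoints
assert verify(g, z, T, N, proof)
```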

## Security Analysis of Pietrzak Scheme

The security of Pietrzak’s scheme relies on the security of the **low order assumption**: it is computationally infeasible for an adversary to find an element of low order in the group being used by the VDF. To see why finding an element of low order breaks the scheme, first assume that a malicious prover Eve found some element `m` of small order `d`. Then Eve sends `zm` to the verifier (where `z` is the valid output). The invalid output will be accepted with probability `1/d`, since:

- When computing the second step of the recursion, we will have the base element `g_1 = g^r v`, where `v = g^(2^(T/2)) m`, and need to show that `g_1^(2^(T/2)) = v^r (zm)`
- The `m` term on the left-hand side is `m^(2^(T/2))`
- The `m` term on the right-hand side is `m^(r+1)`
- Since `m` has order `d`, these two will be equal when `r + 1 = 2^(T/2) mod d`, which happens with probability `1/d`
To see a full proof of why the low order assumption is both necessary and sufficient to show Pietrzak’s scheme is sound, see Boneh’s survey of recent VDF constructions.
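The `1/d` acceptance probability can be checked empirically. The sketch below reuses the toy interactive protocol with `m = -1`, which has order `d = 2` in any RSA-style group, and measures how often Eve’s forged output `zm` survives verification (all parameters are illustrative):

```python
import random

N = 3_233 * 7_919   # illustrative RSA-style modulus
T = 8               # small power-of-two delay parameter

def evaluate(g, T, n):
    y = g
    for _ in range(T):
        y = y * y % n
    return y

def eve_verify_once(g, z_forged, T, n, m):
    """One protocol run against Eve, who knows an element m of low order.
    Eve sends v = g^(2^(T/2)) * m in the first round, then acts honestly."""
    v = evaluate(g, T // 2, n) * m % n
    r = random.randrange(1, 2**16)
    g, z = pow(g, r, n) * v % n, pow(v, r, n) * z_forged % n
    T //= 2
    while T > 1:  # remaining rounds: honest recursion on the folded claim
        v = evaluate(g, T // 2, n)
        r = random.randrange(1, 2**16)
        g, z = pow(g, r, n) * v % n, pow(v, r, n) * z % n
        T //= 2
    return z == g * g % n

g = 5
m = N - 1  # -1 mod N always has order d = 2
z_forged = evaluate(g, T, N) * m % N

trials = 10_000
accepted = sum(eve_verify_once(g, z_forged, T, N, m) for _ in range(trials))
print(accepted / trials)  # close to 1/2 = 1/d
```

The forgery is accepted exactly when the first challenge `r` makes `r + 1 = 2^(T/2) mod 2`, i.e., for roughly half of the challenges, matching the `1/d` bound above.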

The security analysis assumes that one can easily generate a group of unknown order that satisfies the low order assumption. We will see below that no groups currently known to satisfy these constraints are amenable to a trustless setup, i.e., a setup where there is no party who can subvert the VDF protocol.

For example, let’s try to use everyone’s favorite family of groups: the integers modulo the product of two large primes (RSA groups). These groups have unknown order, since finding the order requires factoring a large integer. However, they do not satisfy the low order assumption. Indeed, the element `-1` is always of order 2. This situation can be remedied by taking the quotient of an RSA group `G` by the subgroup `{1, -1}`. In fact, if the modulus of `G` is a product of strong primes (primes `p` such that `(p-1)/2` is also prime), then after taking the aforementioned quotient there are no elements of low order other than 1.

This analysis implies that RSA groups are secure for Pietrzak’s VDF, but there’s a problem. To generate an RSA group, someone has to know the factorization of the modulus `N`. Devising a trustless RSA group selection protocol, where no one knows the factorization of the modulus `N`, is therefore an interesting and important open problem in this area.

Another avenue of work towards instantiating Pietrzak’s scheme involves using the class group of an imaginary quadratic number field. This family of groups does not suffer from the above issue, where group selection requires a trusted third party. Simply choosing a large negative prime discriminant (with several caveats) will generate a group whose order is computationally infeasible to determine, even for the person who chose the discriminant. However, unlike RSA groups, the difficulty of finding low-order elements in class groups of imaginary quadratic number fields is not well studied and would require more investigation before any such scheme could be used.

## State of VDFs and Open Problems

As mentioned in the previous section, both the Pietrzak and Wesolowski schemes rely on generating a group of unknown order. Doing so without a trusted party is difficult in the case of RSA groups, but class groups seem to be a somewhat promising avenue of work. Furthermore, the Wesolowski scheme assumes the existence of groups that satisfy something called the adaptive root assumption, which is not well studied in the mathematical literature. There are many other open problems in this area, including constructing quantum-resistant VDFs and assessing the potential for ASICs to ruin the security guarantees of VDF constructions in practice.

As for industry adoption of VDFs, several companies in the blockchain space are trying to use VDFs for consensus algorithms. Chia, for example, uses the repeated squaring technique outlined above and is currently running a competition for the fastest implementation of this scheme. The Ethereum Foundation also appears to be developing a pseudorandom number generator that combines RANDAO with VDFs. While both are very exciting projects that will be hugely beneficial to the blockchain community, this remains a very young area of research; take any claim of security with a grain of salt.
