Don’t overextend your Oblivious Transfer

By Joop van de Pol

We found a vulnerability in a threshold signature scheme that allows an attacker to recover the signing key of threshold ECDSA implementations that are based on Oblivious Transfer (OT). A malicious participant in the threshold signing protocol could perform selective abort attacks during the OT extension subprotocol, recover the secret values of other parties, and eventually recover the signing key. Using this key, the attacker could assume the identities of users, gain control over critical systems, or pilfer financial assets. While we cannot yet disclose the client software affected by this vulnerability, we believe it is instructive for other developers implementing MPC protocols.

Protecting against this vulnerability is straightforward: since the attack relies on causing selective aborts during several protocol rounds, punishing or excluding participants that cause selective aborts is sufficient. Still, it’s a good example of a common problem that often leads to severe or even critical issues: a disconnect between assumptions made by academics and the implementers trying to build these protocols efficiently in real systems.

Threshold signature schemes

Threshold signature schemes (TSS) are powerful cryptographic objects that allow decentralized control of the signing key for a digital signature scheme. They are a specific application of the more general multi-party computation (MPC), which aims to decentralize arbitrary computation. Each TSS protocol is typically defined for a specific digital signature scheme because different signature schemes require different computations to create signatures.

Current research aims to define efficient TSS protocols for various digital signature schemes. The target for efficiency includes both computation and communication between the different participants. Typically, TSS protocols rely on standard techniques used in MPC, such as secret sharing, zero-knowledge proofs, and multiplicative-to-additive conversion.

Threshold ECDSA

The ECDSA signature scheme is widely used. However, threshold schemes for ECDSA are generally more complicated than those for other signature schemes. This is because an ECDSA signature requires the computation of the modular inverse of a secret value.
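To see where the modular inverse appears, here is the ECDSA signing equation with toy numbers (ours, purely illustrative; a real implementation works modulo the actual curve group order and derives r from a curve point):

```python
# Toy numbers (ours) showing why ECDSA is awkward to thresholdize: the
# signature's s-component needs the modular inverse of the secret nonce k.
n = 2**127 - 1                   # a prime, standing in for the curve order
k, x, h, r = 123456789, 31337, 555555, 77777   # nonce, key, hash, r-value
s = pow(k, -1, n) * (h + r * x) % n
# Sanity check: multiplying back by k recovers the signing relation.
assert (s * k) % n == (h + r * x) % n
```

Computing `pow(k, -1, n)` is trivial when one party knows k; the difficulty in a threshold setting is computing it when k is secret-shared among parties who must never reconstruct it.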

Various MPC techniques can be used to distribute the computation of this modular inverse. Currently, one line of work in threshold signature schemes for ECDSA uses the homomorphic Paillier encryption scheme for this purpose, as shown in work by Lindell et al., Gennaro et al., and following works. This blog post will focus on schemes that rely on oblivious transfer (OT), such as the work by Doerner et al. and following works, or Cait-Sith.

Before explaining what OT is, it should be noted that the basic variant is relatively inefficient. To mitigate this issue, researchers proposed something called OT extension, where a small number of OTs can efficiently be turned into a larger number of OTs. This feature is eagerly used by the creators of threshold signature schemes, as you can run the setup of the small number of OTs once, and then extend arbitrarily many times.

Oblivious transfer

Oblivious transfer is like the setup of a magician’s card trick. A magician has a bunch of cards and wants you to choose one of them, so they can show off their magic prowess by divining which card you chose. To this end, it is important that the magician does not know which card you chose, but also that you choose exactly one card and cannot claim later that you actually chose another card.

In real life, the magician could let you write something on the card that you chose, forcing you to choose exactly one card. However, this would not be good enough for the cryptographic setting, because the magician could afterwards just look at all the cards (using their impressive sleight of hand to hide this fact) and pick out the card that has text on it. A better solution would be to have the magician write a random word on each card, such that you can choose a card by memorizing this word. Now, in real life the magician might allow you to look at multiple cards before choosing one, whereas in the cryptographic case you have to choose a card blindly such that you only learn the random word written on the card that you chose.

After choosing the card and giving it back to the magician (randomly shuffling the cards to ensure they cannot directly pick out the card that you returned), they can now try to figure out which card you chose. In real life, the magician will use all kinds of tricks to try and pick out your card, whereas in the cryptographic setting, they actually should not be able to.

So, in a nutshell, OT is about a sender (the magician) who wants to give a receiver (you, the mark) a choice between some values (cards). The sender should not learn the receiver’s choice, and the receiver should not learn any other values than the chosen one.
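To make the card-trick analogy concrete, here is a toy sketch (ours, purely illustrative) of the OT interface, written as an ideal functionality, that is, a trusted third party that sees both inputs. Real OT protocols achieve this same interface without any trusted party:

```python
# Illustrative sketch only (our toy model): 1-out-of-2 OT written as an
# ideal functionality, i.e., a trusted third party. Real OT protocols
# achieve the same interface without any trusted party.

def ideal_ot(sender_messages, choice_bit):
    """The receiver gets exactly one of the sender's two messages;
    the sender gets no output, so it learns nothing about the choice."""
    m0, m1 = sender_messages          # sender's input: two messages
    if choice_bit not in (0, 1):
        raise ValueError("choice must be 0 or 1")
    return m1 if choice_bit else m0   # receiver's output: one message

# The receiver learns only the chosen "card"; the sender's view is empty.
cards = (b"ace of spades", b"queen of hearts")
assert ideal_ot(cards, 1) == b"queen of hearts"
```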

It turns out that OT is a very powerful MPC primitive, as it can be used as a building block to construct protocols for any arbitrary multi-party computation. However, implementing OT without any special assumptions requires asymmetric cryptography, which is relatively expensive. Using expensive building blocks will lead to an inefficient protocol, so something more is needed to be able to use OT in practice.

OT extension

OT requires either asymmetric cryptography or “special assumptions.” This means that OT is possible when the two parties already have access to something called correlated randomness. Even better, this correlated randomness can be created from the output of an OT protocol.

As a result, it is possible to run the expensive OT protocol a number of times, and then to extend these “base” OTs into many more OTs. This extension is possible using only symmetric cryptography (such as hash functions and pseudo-random generators), which makes it more efficient than the expensive asymmetric variants.
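To give a flavor of how this works, here is a minimal sketch (our own simplification, with a toy security parameter) of the core correlation in IKNP-style OT extension: for each extended OT, the sender ends up holding a value q = t ⊕ (x · s), where s is its secret vector of base-OT choice bits, while t and the choice bit x belong to the receiver. Hashing turns this correlation into two one-time pads, of which the receiver can compute exactly one:

```python
# Simplified sketch (ours) of the IKNP correlation for one extended OT.
import hashlib, secrets

KAPPA = 16  # bytes of correlated randomness; real schemes use 128+ bits

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def h(data):
    # The hash breaks the correlation between the two derived pads.
    return hashlib.sha256(data).digest()

s = secrets.token_bytes(KAPPA)  # sender's secret base-OT choices
t = secrets.token_bytes(KAPPA)  # receiver's random row for this OT
x = 1                           # receiver's choice bit (x = 0 is symmetric)

q = xor(t, s) if x else t       # what the sender holds: q = t XOR (x * s)

pad0, pad1 = h(q), h(xor(q, s)) # sender masks its two messages with these
receiver_pad = h(t)             # receiver can compute only its chosen pad
assert receiver_pad == (pad1 if x else pad0)
```

The sender uses pad0 and pad1 to encrypt its two messages; since the receiver knows only t, it can strip exactly the pad corresponding to its choice bit.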

For this blog post, we will focus on a particular line of work in OT extension, starting with this paper by Ishai et al. It is a bit too complicated to explain in detail how this scheme works, but the following points are important:

  • It uses only symmetric primitives (pseudo-random generator and hash function).
  • The roles of sender and receiver are swapped (the sender in the base OTs becomes the receiver in the extended OTs and vice versa).
  • The protocol includes constructing randomness that is correlated with both the old choices (of the base OTs) and the new choices (of the OT extension).
  • The extended OT sender cannot cheat, but the protocol is not secure against a cheating extended OT receiver.
What does this last point mean? The extended OT receiver can cheat and learn the original choice bits belonging to the extended OT sender. Ishai et al. proposed a solution, but it is not very efficient. Therefore, follow-up works such as those by Asharov et al. and by Keller et al. add a kind of consistency check, where the extended OT receiver has to provide some additional information. The extended OT sender can then use this information to verify that the receiver did not cheat.

These consistency checks restrict how much the receiver can learn about the sender’s secret choices, but they are not perfect. The check that the sender performs to verify the receiver’s information depends on the sender’s own secret choices. Therefore, the receiver can still cheat in specific places such that they learn some bits of the sender’s secret choices based on whether or not the sender aborts. This is known as a selective abort attack: the receiver can selectively try to cause an abort and learns some information from the sender as a result.

The aforementioned papers acknowledge that this kind of leakage can happen with a cheating receiver. However, the authors choose the parameters of the scheme such that the receiver can never learn enough about the sender’s original choice bits when running the protocol once. Problem solved, right?
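A toy model (entirely ours, abstracting away the details of any real consistency check) of what a single selective abort leaks: the receiver corrupts its check information at one position i, and whether or not the sender aborts reveals the sender's secret bit at that position:

```python
# Toy model (entirely ours) of a selective abort. We abstract the real
# consistency check down to: cheating at position i trips the sender's
# check iff the sender's secret bit s[i] is 1.
import secrets

def sender_consistency_check(s_bits, corrupted_positions):
    # Returns True (pass) unless the cheat hits a position where s[i] == 1.
    return not any(s_bits[i] for i in corrupted_positions)

s_bits = [secrets.randbelow(2) for _ in range(8)]  # sender's secret bits
i = 3                                              # receiver cheats here
aborted = not sender_consistency_check(s_bits, [i])
# Whether the run aborted tells the receiver the value of s_bits[i]:
assert aborted == (s_bits[i] == 1)
```

One run of the protocol leaks at most a few bits this way, which is why parameters can be chosen so that a single execution stays safe.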

How the vulnerability works

Recall that in the context of threshold signature schemes based on OT, you want to perform the base OTs once during a set-up phase and reuse this set-up arbitrarily many times to perform OT extension. Since this improves efficiency, implementers will jump on it. What is not mentioned very explicitly, and what caused the vulnerability that we found, is that you can reuse the set-up arbitrarily many times only if the OT extension receiver does not cheat.

If the receiver cheats, then they can learn a few bits of the secret set-up value of the OT extension sender. This does become a problem if you allow the receiver to do this multiple times over different executions of the protocol. Eventually, the receiver learns all secret sender bits, and the security is completely compromised. Typically, depending on the specific TSS, the receiver can now use the secret sender bits to recover sender shares corresponding to the ECDSA nonce. In a scheme with a threshold of two, this means that the receiver recovers the nonce, and they can recover the ECDSA signing key given a valid signature with this nonce. (In schemes with more parties, the attacker may have to repeat this attack for multiple parties.)
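The final step is plain arithmetic. With toy numbers (ours, standing in for real curve parameters), knowing the nonce k behind one valid signature (r, s) lets the attacker invert the ECDSA signing equation and solve for the key:

```python
# Toy numbers (ours) for the last step: given one valid signature (r, s)
# and the recovered nonce k, invert s = k^-1 * (h + r*x) mod n to get x.
n = 2**127 - 1                      # a prime standing in for the curve order
x = 31337                           # the victim's signing key
k, h, r = 424242, 99999, 77777      # nonce, message hash, signature r-value
s = pow(k, -1, n) * (h + r * x) % n # an honestly produced signature
x_recovered = (pow(r, -1, n) * (s * k - h)) % n
assert x_recovered == x             # nonce + one signature => signing key
```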

So what’s the issue here exactly? Selective abort attacks are known and explicitly discussed in OT extension papers, but those papers are not very clear on whether you can reuse the base OTs. Implementers and TSS protocol designers want to reuse the base OTs arbitrarily many times, because that’s efficient. TSS protocol designers know that selective abort attacks are an issue, so they even specify checks and consider the case closed, but they are not very clear on what implementers are supposed to do when checks fail. This kind of vagueness in academic papers invariably leads to attacks on real-world systems.

In this case, a clear solution would be to throw away the setup for a participant that has attempted to cheat during the OT extension protocol. Looking at some public repositories out there, most OT extension libraries will report something along the lines of “correlation check failed,” which does not tell a user what to do next. In fact, only one library added a warning that a particular check’s failure may represent an attack and that you should not re-run the protocol.
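As a sketch of that mitigation (a hypothetical session object of our own design, not the API of any particular library): when the correlation check fails, destroy the base-OT setup and refuse to extend again with that counterparty, instead of retrying:

```python
# Hypothetical session object (ours): what to do when the OT extension
# consistency/correlation check fails. The key point: never retry with
# the same base-OT setup, since each retry can leak more secret bits.
class OTExtensionSession:
    def __init__(self, base_ot_setup):
        self.base_ot_setup = base_ot_setup  # long-lived correlated randomness
        self.banned = False                 # set after any failed check

    def run_extension(self, check_passed):
        if self.banned:
            raise RuntimeError("counterparty banned after a failed check")
        if not check_passed:
            self.base_ot_setup = None       # destroy the reusable setup
            self.banned = True              # and exclude the participant
            raise RuntimeError("correlation check failed: treat as an attack")
        return "extension ok"
```

Which concrete consistency check produced `check_passed` is immaterial; the point is that a failure must burn the setup rather than trigger a retry with the same correlated randomness.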

Bridging the gap between academia and implementations

Most academic MPC papers provide a general overview of the scheme and corresponding proof of security; however, they don’t have the detail required to constitute a clear specification and aren’t intended to be blueprints for implementations. Making wrong assumptions when interpreting academic papers to create real-world applications can lead to severe issues. We hope that the recent call from NIST for Multi-Party Threshold Cryptography will set a standard for specifications of MPC and TSS and prevent such issues in the future.

In the meantime, if you’re planning to implement threshold ECDSA, another TSS, or MPC in general, you can contact us to specify, implement, or review those implementations.