Artem Dinaburg
AI-enabled code assistants (like GitHub’s Copilot, Continue.dev, and Tabby) are making software development faster and more productive. Unfortunately, these tools are often bad at Solidity. So we decided to improve them! To make it easier to write, edit, and understand Solidity with AI-enabled tools, we have: Added support for Solidity into Tabby […]
(Would you get up and throw it away?) [sing to the tune of The Beatles – With A Little Help From My Friends] Here’s a riddle: when new GPUs are constantly being produced, product cycles are ~18-24 months long, and each cycle doubles GPU power (per Huang’s Law), what […]
Today we’re going to provision some cloud infrastructure the Max Power way: by combining automation with unchecked AI output. Unfortunately, this method produces cloud infrastructure code that 1) works and 2) has terrible security properties. In a nutshell, AI-based tools like Claude and ChatGPT readily provide extremely bad cloud infrastructure provisioning code, […]
Earlier this week, at Apple’s WWDC, we finally witnessed Apple’s AI strategy. The videos and live demos were accompanied by two long-form releases: Apple’s Private Cloud Compute and Apple’s On-Device and Server Foundation Models. This blog post is about the latter. So, what is Apple releasing, and how does it compare to […]
eBPF (extended Berkeley Packet Filter) has emerged as the de facto Linux standard for security monitoring and endpoint observability. It is used by technologies such as BPFTrace, Cilium, Pixie, Sysdig, and Falco due to its low overhead and its versatility. There is, however, a dark (but open) secret: eBPF was never intended […]
Trail of Bits has developed a suite of open-source libraries designed to streamline the creation and deployment of eBPF applications. These libraries facilitate efficient process and network event monitoring, function tracing, kernel debug symbol parsing, and eBPF code generation. Previously, deploying portable, dependency-free eBPF applications posed significant challenges due to Linux kernel […]
The National Telecommunications and Information Administration (NTIA) has circulated an Artificial Intelligence (AI) Accountability Policy Request for Comment on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems. Trail of Bits has submitted a response to the […]
Is artificial intelligence (AI) capable of powering software security audits? Over the last four months, we piloted a project called Toucan to find out. Toucan was intended to integrate OpenAI’s Codex into our Solidity auditing workflow. This experiment went far […]
How quickly can we use brute force to guess a 64-bit number? The short answer: it depends on the resources available. We'll examine this problem starting with the most naive approach, then expand to other techniques involving parallelization. We'll discuss parallelization at the CPU level with SIMD instructions, […]
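For intuition, here is a minimal, hypothetical sketch (not code from the post) of that naive starting point: a single thread checking one candidate at a time. At roughly a billion guesses per second, exhausting all 2^64 values this way would take on the order of 585 years, which is why the post moves on to SIMD and other forms of parallelization. The kSecret constant below is purely illustrative and kept small so the demo actually finishes.

```cpp
// Naive sequential brute force: test candidates one at a time.
// kSecret is an illustrative stand-in for whatever value is being recovered.
#include <cstdint>
#include <cstdio>

constexpr uint64_t kSecret = 0x00000000DEADBEEFULL; // small so the demo terminates quickly

int main() {
    for (uint64_t guess = 0; ; ++guess) {
        if (guess == kSecret) {
            std::printf("found %llu after %llu guesses\n",
                        (unsigned long long)guess,
                        (unsigned long long)(guess + 1));
            break;
        }
        if (guess == UINT64_MAX) break; // covered the entire 64-bit space
    }
    return 0;
}
```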
Have you ever tried using LLVM’s X-Ray profiling tools to make some flame graphs, but gotten obscure errors like: ==65892==Unable to determine CPU frequency for TSC accounting. ==65892==Unable to determine CPU frequency. Or worse, have you profiled every function in an application, only to find the sum of all function runtimes accounted for ~15 minutes […]
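For readers who have not used X-Ray before, here is a minimal, hypothetical setup (not taken from the post) showing how a trace is typically produced for llvm-xray, using the instrumentation flags and attribute documented for clang; the function name and workload are made up.

```cpp
// Hypothetical minimal XRay example. Build and run with something like:
//   clang++ -O2 -fxray-instrument -fxray-instruction-threshold=1 demo.cpp -o demo
//   XRAY_OPTIONS="patch_premain=true xray_mode=xray-basic" ./demo
// which emits an xray-log.demo.* file that llvm-xray can turn into function
// accounting or flame graph input.
#include <cstdio>

// Force instrumentation of this function even if it is small.
[[clang::xray_always_instrument]] long busy_work(long n) {
    long sum = 0;
    for (long i = 0; i < n; ++i) {
        sum += i * i;
    }
    return sum;
}

int main() {
    std::printf("%ld\n", busy_work(50000000L));
    return 0;
}
```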
It is time for the second installment of our efforts to reproduce original fuzzing research on modern systems. If you haven’t yet, please read the first part. This time we tackle fuzzing on Windows by reproducing the results of “An Empirical Study of the Robustness of Windows NT Applications Using Random Testing” (aka ‘the NT […]
With 2019 a day away, let’s reflect on the past to see how we can improve. Yes, let’s take a long look back 30 years at the original fuzzing paper, An Empirical Study of the Reliability of UNIX Utilities, and its 1995 follow-up, Fuzz Revisited, both by Barton P. Miller. In this blog post, […]
Today, we’re going to talk about a hard problem that we are working on as part of DARPA’s Cyber Fault-Tolerant Attack Recovery (CFAR) program: automatically protecting software from 0-day exploits, memory corruption, and many currently undiscovered bugs. You might be thinking: “Why bother? Can’t I just compile my code with exploit mitigations like stack guard, […]
This is the second half of our blog post on the Meltdown and Spectre vulnerabilities, describing Spectre Variant 1 (V1) and Spectre Variant 2 (V2). If you have not done so already, please review the first blog post for an accessible review of computer architecture fundamentals. This blog post will start by covering the technical […]
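For readers who want a concrete anchor before diving in, the canonical Spectre Variant 1 (bounds check bypass) gadget from the original Spectre paper by Kocher et al. has the shape sketched below; this illustration is not taken from the Trail of Bits post itself, and the names (array1, array2, temp) follow the paper’s example.

```cpp
// Canonical Spectre V1 pattern: a bounds check that the CPU may speculate past.
#include <cstddef>
#include <cstdint>

std::size_t array1_size = 16;
uint8_t array1[160];
uint8_t array2[256 * 512];
uint8_t temp = 0; // keeps the compiler from removing the load

void victim_function(std::size_t x) {
    if (x < array1_size) {
        // Under speculation with an out-of-bounds x, the secret byte at
        // array1[x] selects which cache line of array2 gets loaded,
        // leaking its value through a cache side channel.
        temp &= array2[array1[x] * 512];
    }
}
```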
In the past few weeks the details of two critical design flaws in modern processors were finally revealed to the public. Much has been written about the impact of Meltdown and Spectre, but there is scant detail about what these attacks are and how they work. We are going to try our best to fix […]