While Trail of Bits is known for developing security tools like Slither, Medusa, and Fickling, our engineering efforts extend far beyond our own projects. Throughout 2024, our team has been deeply engaged with the broader security ecosystem, tackling challenges in open-source tools and infrastructure that security engineers rely on every day. This year, our engineers […]
This is a joint post with the Hugging Face Gradio team; read their announcement here! You can find the full report with all of the detailed findings from our security audit of Gradio 5 here. Hugging Face hired Trail of Bits to audit Gradio 5, a popular open-source library that provides a web interface that […]
At DEF CON, Michael Brown, Principal Security Engineer at Trail of Bits, sat down with Michael Novinson from Information Security Media Group (ISMG) to discuss four critical areas where AI/ML is revolutionizing security. Here’s what they covered: AI/ML techniques surpass the limits of traditional software analysis. As Moore’s law slows down after 20 years of […]
Today we’re going to provision some cloud infrastructure the Max Power way: by combining automation with unchecked AI output. Unfortunately, this method produces cloud infrastructure code that 1) works and 2) has terrible security properties. In a nutshell, AI-based tools like Claude and ChatGPT readily provide extremely bad cloud infrastructure provisioning code, […]
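For a sense of what that looks like in practice, here is a minimal boto3 sketch (not taken from the post; the resource names are hypothetical) of the kind of code such tools readily produce: it provisions successfully, but it leaves SSH open to the entire internet.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a security group in the default VPC (hypothetical names).
sg = ec2.create_security_group(
    GroupName="example-app-sg",
    Description="Security group for example app",
)

# "Working" but terrible: SSH reachable from any address on the internet.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpProtocol="tcp",
    FromPort=22,
    ToPort=22,
    CidrIp="0.0.0.0/0",
)
```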
With DARPA’s AI Cyber Challenge (AIxCC) semifinal starting today at DEF CON 2024, we want to introduce Buttercup, our AIxCC submission. Buttercup is a Cyber Reasoning System (CRS) that combines conventional cybersecurity techniques like fuzzing and static analysis with AI and machine learning to find and fix software vulnerabilities. The system is designed to operate […]
Today, we present the second of our open-source AI security audits: a look at security issues we found in an open-source retrieval augmented generation (RAG) application that could lead to chatbot output poisoning, inaccurate document ingestion, and potential denial of service. This audit follows up on our previous work that identified 11 security vulnerabilities in […]
Earlier this week, at Apple’s WWDC, we finally witnessed Apple’s AI strategy. The videos and live demos were accompanied by two long-form releases: Apple’s Private Cloud Compute and Apple’s On-Device and Server Foundation Models. This blog post is about the latter. So, what is Apple releasing, and how does it compare to […]
Earlier this week, Apple announced Private Cloud Compute (or PCC for short). Without deep context on the state of the art of Artificial Intelligence (AI) and Machine Learning (ML) security, some sensible design choices may seem surprising. Conversely, some of the risks linked to this design are hidden in the fine print. […]
In part 1, we introduced Sleepy Pickle, an attack that uses malicious pickle files to stealthily compromise ML models and carry out sophisticated attacks against end users. Here we show how this technique can be adapted to enable long-lasting presence on compromised systems while remaining undetected. This variant technique, which we call […]
We’ve developed a new hybrid machine learning (ML) model exploitation technique called Sleepy Pickle that takes advantage of the pervasive and notoriously insecure Pickle file format used to package and distribute ML models. Sleepy Pickle goes beyond previous exploit techniques that target an organization’s systems when they deploy ML models to instead […]
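To see why that format is such a ripe target, here is a minimal, self-contained illustration of pickle’s core weakness (a toy stand-in, not the Sleepy Pickle payload itself): deserializing an untrusted pickle can execute arbitrary code.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to decide how to serialize the object; whatever
    # callable it returns is invoked during unpickling.
    def __reduce__(self):
        return (print, ("arbitrary code runs at load time",))

blob = pickle.dumps(Payload())

# Loading the "model file" executes the attacker-chosen callable.
pickle.loads(blob)
```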
We are offering AI/ML safety and security training this year! Recent advances in AI/ML technologies have opened up a new world of possibilities for businesses to run more efficiently and offer better services and products. However, incorporating AI/ML into computing systems brings new and unique complexities, risks, and attack surfaces. In our experience […]
We’ve added new features to Fickling to offer enhanced threat detection and analysis across a broad spectrum of machine learning (ML) workflows. Fickling is a decompiler, static analyzer, and bytecode rewriter for the Python pickle module that can help you detect, analyze, or create malicious pickle files. While the ML community […]
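As a quick sketch of what that detection looks like in practice, the snippet below uses Fickling’s safety-check hooks; the function names (fickling.load, fickling.always_check_safety) are assumed from the project’s documented hook API, so check the repository for the current interface.

```python
import fickling

# Check a single file's pickle program before materializing any objects;
# this is assumed to raise if the file looks malicious.
with open("model.pkl", "rb") as f:
    model = fickling.load(f)

# Alternatively, globally hook pickle so every pickle.load()/loads() call
# is vetted by Fickling first (assumed API from the README).
fickling.always_check_safety()
```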
The US Army’s Program Executive Office for Intelligence, Electronic Warfare and Sensors (PEO IEW&S) recently issued a request for information (RFI) on methods to implement and automate production of an artificial intelligence bill of materials (AIBOM) as part of Project Linchpin. The RFI describes the AIBOM as a detailed […]
At Trail of Bits, we pride ourselves on making our best tools open source, such as Slither, PolyTracker, and RPC Investigator. But while this post is about open source, it’s not about our tools… In 2023, our employees submitted over 450 pull requests (PRs) that were merged into non-Trail of Bits repositories. This demonstrates our […]
Late last month, DARPA officially opened registration for their AI Cyber Challenge (AIxCC). As part of the festivities, DARPA also released some highly anticipated information about the competition: a request for comments (RFC) that contained a sample challenge problem and the scoring methodology. Prior rules documents and FAQs released by DARPA painted […]