Publications
LeftoverLocals: Listening to LLM responses through leaked GPU local memory
We are disclosing LeftoverLocals: a vulnerability that allows recovery of data from GPU local memory created by another process on Apple, Qualcomm, AMD, and Imagination GPUs. LeftoverLocals impacts the security posture of GPU applications as a whole, with particular significance to LLMs and ML models run on impacted GPU […]
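For readers unfamiliar with this class of bug, the sketch below is an illustrative OpenCL C kernel, not the proof of concept released with the post; the kernel and argument names are hypothetical. It reads local memory that it never initialized and copies the stale contents to a host-visible buffer, which is the core observation behind LeftoverLocals: on affected GPUs, that stale memory can contain data written by a kernel from another process.

```c
/*
 * Illustrative sketch only (names are hypothetical, not from the post):
 * a kernel that dumps uninitialized local memory. On a GPU that clears
 * local memory between launches the dump is all zeros; on an affected
 * GPU it may contain leftover data from a previously run kernel,
 * possibly belonging to another process.
 */
__kernel void dump_local(__local uint *scratch,    /* uninitialized local memory */
                         __global uint *out,       /* host-readable dump buffer  */
                         uint words_per_group)     /* local words to copy        */
{
    uint lid = get_local_id(0);
    uint group = get_group_id(0);

    if (lid < words_per_group) {
        /* Copy one stale local word per work-item into a per-group slice. */
        out[group * words_per_group + lid] = scratch[lid];
    }
}
```

On the host side, the local-memory argument would be sized with a call such as clSetKernelArg(kernel, 0, local_size_bytes, NULL), and the output buffer inspected after each launch to see whether anything other than zeros came back.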
Assessing the security posture of a widely used vision model: YOLOv7
TL;DR: We identified 11 security vulnerabilities in YOLOv7, a popular computer vision framework, that could enable attacks including remote code execution (RCE), denial of service, and model differentials (where an attacker can trigger a model to perform differently in different contexts). Open-source software […]
Trail of Bits’s Response to OSTP National Priorities for AI RFI
The Office of Science and Technology Policy (OSTP) has circulated a request for information (RFI) on how best to develop policies that support the responsible development of AI while minimizing risk to rights, safety, and national security. In our response, we highlight the following points: To ensure that AI […]
Trail of Bits’s Response to NTIA AI Accountability RFC
The National Telecommunications and Information Administration (NTIA) has circulated an Artificial Intelligence (AI) Accountability Policy Request for Comment on what policies can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust in AI systems. Trail of Bits has submitted a response to the […]
