Mutation testing reveals blind spots in test suites by systematically introducing small bugs and checking whether the tests catch them. Blockchain developers should use mutation testing to measure the effectiveness of their test suites and to surface untested behaviors that coverage metrics alone can miss.
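To make the idea concrete, here's a minimal, tool-agnostic sketch in Python (hypothetical code, not from the post): a mutant is the original code with one deliberate bug, and a mutant that survives the test suite marks a blind spot.

```python
# Illustrative only: a "mutant" is the original code with one small deliberate
# bug. If the test suite still passes, the mutant "survives" and exposes a gap.

def fee(amount):              # original: 3% fee on amounts over 100
    return amount * 0.03 if amount > 100 else 0

def fee_mutant(amount):       # mutant: `>` flipped to `>=` at the boundary
    return amount * 0.03 if amount >= 100 else 0

def test_fee(fn):             # a weak test: it never exercises the boundary
    assert fn(200) == 6.0
    assert fn(50) == 0

test_fee(fee)          # passes
test_fee(fee_mutant)   # also passes, so the mutant "survives": no test pins
                       # down the behavior at amount == 100, exactly the kind
                       # of off-by-one a mutation testing tool would flag
```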
We’ve added a pickle file scanner to Fickling that uses an allowlist approach to protect AI/ML environments from malicious pickle files that could compromise models or infrastructure.
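Fickling's analysis is more thorough than this, but the allowlist idea itself can be sketched with the standard library alone (the allowlist below is hypothetical, imagined for a numpy-only workload): statically walk the pickle opcodes and flag any imported global that isn't approved, without ever unpickling the file.

```python
# Simplified illustration of allowlist-based pickle scanning, NOT Fickling's
# actual implementation: inspect opcodes statically instead of unpickling.
import pickletools

ALLOWED_GLOBALS = {           # hypothetical allowlist for a numpy-only workload
    ("numpy", "ndarray"),
    ("numpy.core.multiarray", "_reconstruct"),
}

def scan(data: bytes) -> list[str]:
    findings, strings = [], []    # `strings` loosely tracks pushed str constants
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif op.name == "GLOBAL":   # protocol <= 3: arg is "module name"
            module, _, name = arg.partition(" ")
            if (module, name) not in ALLOWED_GLOBALS:
                findings.append(f"disallowed global: {module}.{name}")
        elif op.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]   # simplified stack model
            if (module, name) not in ALLOWED_GLOBALS:
                findings.append(f"disallowed global: {module}.{name}")
    return findings

# Example: a pickle that imports os.system emits a global for ("os", "system"),
# which is not on the allowlist, so scan() reports it without executing anything.
```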
Sui’s Move language significantly improves flash loan security by replacing Solidity’s reliance on callbacks and runtime checks with a “hot potato” model that enforces repayment at the language level. This shift makes flash loan security a language guarantee rather than a developer responsibility.
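Roughly, the lender returns a "receipt" value that the borrower's code cannot store or discard, so the transaction only type-checks if the receipt is handed back to the repayment function. Move enforces this at compile time through its ability system; the Python sketch below (hypothetical names, runtime checks only) just shows the shape of the pattern.

```python
# A rough dynamic analogue of Move's "hot potato" pattern. Illustrative only:
# Move makes forgetting repay() a compile-time type error, while Python can
# only approximate the guarantee with runtime checks.

class Receipt:
    """Hot potato: must be consumed by repay() before the transaction ends."""
    def __init__(self, amount: int):
        self.amount = amount
        self.consumed = False

class Pool:
    def __init__(self, funds: int):
        self.funds = funds

    def flash_loan(self, amount: int) -> tuple[int, Receipt]:
        self.funds -= amount
        return amount, Receipt(amount)     # borrower gets funds plus the potato

    def repay(self, payment: int, receipt: Receipt) -> None:
        assert payment >= receipt.amount, "must repay in full"
        self.funds += payment
        receipt.consumed = True            # potato consumed; tx may now finish

def transaction(pool: Pool) -> None:
    amount, receipt = pool.flash_loan(1_000)
    # ... use the borrowed funds ...
    pool.repay(amount, receipt)
    assert receipt.consumed   # in Move, omitting repay() fails to type-check
```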
By building custody logic into smart contracts, exchanges can create wallet architectures that remain secure even when individual multisig keys are compromised.
A vulnerability in Electron applications allows attackers to bypass code integrity checks by tampering with V8 heap snapshot files, enabling local backdoors in applications like Signal, 1Password, and Slack.
Our business operations intern at Trail of Bits built two AI-powered tools that became permanent company resources—a podcast workflow that saves 1,250 hours annually and a Slack exporter that enables efficient knowledge retrieval across the organization.
EIP-7730 enables hardware wallets to decode transactions into human-readable formats, eliminating blind signing vulnerabilities with minimal implementation effort for dApp developers.
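Most of that effort is a static JSON descriptor that wallets consume. The sketch below, written as a Python literal, approximates its shape; the field names follow our reading of the ERC-7730 draft, and the spec and clear-signing registry remain the authoritative schema.

```python
# Approximate shape of an ERC-7730 descriptor (illustrative; consult the spec
# for the authoritative schema). It maps a contract's functions to labels a
# hardware wallet can render instead of raw calldata.
descriptor = {
    "context": {
        "contract": {
            "deployments": [{"chainId": 1, "address": "0x..."}],  # placeholder
        }
    },
    "metadata": {"owner": "ExampleDAO"},               # hypothetical dApp name
    "display": {
        "formats": {
            "transfer(address,uint256)": {
                "intent": "Send tokens",
                "fields": [
                    {"path": "#.to", "label": "Recipient", "format": "addressName"},
                    {"path": "#.amount", "label": "Amount", "format": "tokenAmount"},
                ],
            }
        }
    },
}
```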
We optimized the route for visiting every NYC subway station using algorithms from combinatorial optimization, creating a 20-hour tour that beats the existing world record by 45 minutes.
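As a toy version of the optimization step (made-up stations and travel times; the real attempt also has to model timetables, transfers, and one-way service patterns), you can treat stations as nodes in a weighted graph and ask a TSP-style approximation for a short route that covers every node:

```python
# Toy route optimization over a handful of stations with invented travel times.
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

G = nx.Graph()
G.add_weighted_edges_from([          # edge weights = minutes between stations
    ("96 St", "72 St", 3), ("72 St", "Times Sq", 4),
    ("Times Sq", "Union Sq", 5), ("Union Sq", "96 St", 11),
    ("Times Sq", "96 St", 7),
])

# cycle=False asks for an open tour: visit every station, no return required.
tour = traveling_salesman_problem(G, weight="weight", cycle=False)
minutes = sum(G[u][v]["weight"] for u, v in zip(tour, tour[1:]))
print(f"visit order: {tour} ({minutes} minutes of travel)")
```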
In this blog post, we’ll detail how attackers can exploit image scaling on Gemini CLI, Vertex AI Studio, Gemini’s web and API interfaces, Google Assistant, Genspark, and other production AI systems. We’ll also explain how to mitigate and defend against these attacks, and we’ll introduce Anamorpher, our open-source tool that lets you explore and generate these crafted images.
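These attacks work because the model ingests a downscaled image rather than the one the user submitted, so one simple defense is to show users exactly what the model will see. A minimal sketch of that idea follows; the target size and interpolation filter are assumptions, since real pipelines vary.

```python
# Reproduce the serving pipeline's downscale locally so the image the model
# actually ingests can be inspected before it is sent.
from PIL import Image

MODEL_INPUT_SIZE = (768, 768)   # assumption: the pipeline's target resolution

def preview_model_view(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")
    # Use the same interpolation the serving stack uses; bicubic is a common
    # default and one of the filters such crafted images target.
    return img.resize(MODEL_INPUT_SIZE, resample=Image.Resampling.BICUBIC)

# preview_model_view("upload.png").show()  # eyeball what the model will see
```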
This post traces the decade-long evolution of Ruby Marshal deserialization exploits, demonstrating how security researchers have repeatedly bypassed patches and why fundamental changes to the Ruby ecosystem are needed rather than continued patch-and-hope approaches.
Our team won the runner-up prize of $3M at DARPA’s AI Cyber Challenge, demonstrating Buttercup’s world-class automated vulnerability discovery and patching capabilities with remarkable cost efficiency.
Now that DARPA’s AI Cyber Challenge (AIxCC) has officially ended, we can finally make Buttercup, our cyber reasoning system (CRS), open source!
While the AIxCC winner has not yet been announced, differences in the finalists’ approaches show that there are multiple viable paths to using AI for vulnerability detection.
Prompt injection pervades discussions about security for LLMs and AI agents. But there is little public information on how to write powerful, discreet, and reliable prompt injection exploits. In this post, we will design and implement a prompt injection exploit targeting GitHub’s Copilot Agent, with a focus on maximizing reliability and minimizing the odds of detection.
In my first month at Trail of Bits as an AI/ML security engineer, I found two remotely accessible memory corruption bugs in NVIDIA’s Triton Inference Server during a routine onboarding exercise.