I recently had the privilege of giving a keynote at BSidesLisbon. I had a great time at the conference, and I’d like to thank Bruno Morisson for inviting me. If you’re into port, this is the conference for you! I recommend that anyone in the area consider attending next year.
I felt there was a need to put the recent advances in automated bug finding into context. Developments like the DARPA Cyber Grand Challenge, AFL, and libFuzzer were easy to miss if you weren’t paying attention, but their potential impact on our industry is dramatic.
After giving this talk a second time at IT Defense yesterday, I would now like to share it with the Internet. You can watch it below to get my take on where this research area has come from, where we are now, and where I expect we will go. Enjoy!
The last two years have seen greater advances in automated security testing than the ten that preceded them. AFL engineered known best practices into an easy-to-use tool, the DARPA Cyber Grand Challenge provided a reliable competitive benchmark and funding for new research, and Project Springfield (aka SAGE) is now available to the public. The widespread availability of these technologies has the potential for massive impact on our industry.
How do these tools work, and what sets them apart from past approaches? Where do they excel, and what are their limitations? How can I use these tools today? How will these technologies advance, and what further development is needed? And finally, how much longer do humans have as part of the secure development lifecycle?
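The core idea behind tools like AFL and libFuzzer is coverage-guided mutation: mutate inputs at random, but keep any input that reaches new code, so hard-to-hit branches are discovered one step at a time. The sketch below is a toy illustration of that feedback loop, not any real tool: the `target` function, its magic-byte checks, and the mutation strategy are all illustrative stand-ins for the compile-time instrumentation and mutators a real fuzzer uses.

```python
import random

def target(data: bytes) -> set:
    """Toy target: returns the set of branch IDs the input covers.
    A real fuzzer gets this feedback from instrumentation; this
    stand-in just checks for a magic prefix, byte by byte."""
    cov = set()
    if data[0:1] == b"F":
        cov.add(1)
        if data[1:2] == b"U":
            cov.add(2)
            if data[2:3] == b"Z":
                cov.add(3)
    return cov

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Minimal mutator: either grow the input or flip one byte."""
    buf = bytearray(data)
    if not buf or rng.random() < 0.5:
        buf.append(rng.randrange(256))                      # grow
    else:
        buf[rng.randrange(len(buf))] = rng.randrange(256)   # flip
    return bytes(buf)

def fuzz(iterations: int = 200_000, seed: int = 0):
    rng = random.Random(seed)
    corpus = [b""]   # seed corpus
    seen = set()     # cumulative coverage
    for _ in range(iterations):
        child = mutate(rng.choice(corpus), rng)
        cov = target(child)
        if not cov <= seen:        # new branch reached:
            seen |= cov            # record the coverage and
            corpus.append(child)   # keep the input for future rounds
    return seen, corpus
```

The feedback loop is what makes this tractable: guessing all three magic bytes blindly is a 1-in-16-million shot per try, but keeping each partial match in the corpus lets the fuzzer solve one byte at a time.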
Original fuzzing project assignment from UW-Madison (1988)
PROTOS – systematic approach to eliminate software vulnerabilities (2002)
The Advantages of Block-Based Protocol Analysis for Security Testing (2002)
DART: Directed Automated Random Testing (2005)
EXE: Automatically Generating Inputs of Death (2006)
EXE: 10 years later (2016)
Automated Whitebox Fuzz Testing (2008)
American Fuzzy Lop (AFL)
DARPA Cyber Grand Challenge Competitor Portal (2013)
Exploitation and state machines (2011)
Your tool works better than mine? Prove it. (2016)
Microsoft Springfield (2016)
Google OSS-Fuzz (2016)
GRR – High-throughput fuzzer and emulator of DECREE binaries
Manticore – A Python symbolic execution platform
McSema – x86 to LLVM bitcode translation framework
DARPA Challenge Sets for Linux, macOS, and Windows
Trail of Bits publications about the Cyber Grand Challenge
- The University of Oulu is in Finland.
- The University of Wisconsin assigned homework in fuzzing in 1988.
- SV-Comp is for software verification. ML competitions exist too.