The Smart Fuzzer Revolution

I recently had the privilege of giving a keynote at BSidesLisbon. I had a great time at the conference, and I’d like to thank Bruno Morisson for inviting me. If you’re into port, this is the conference for you! I recommend that anyone in the area consider attending next year.

I felt there was a need to put the recent advances in automated bug finding into context. Developments like the Cyber Grand Challenge, AFL, and libFuzzer were easy to miss if you weren’t paying attention. However, their potential impact on our industry is dramatic.

After giving this talk a second time at IT Defense yesterday, I would now like to share it with the Internet. You can watch it below to get my take on where this research area has come from, where we are now, and where I expect we will go. Enjoy!

You should go to BSidesLisbon

——–

The last two years have seen greater advances in automated security testing than the ten years before them. AFL engineered known best practices into an easy-to-use tool, the DARPA Cyber Grand Challenge provided a reliable competitive benchmark and funding for new research, and Project Springfield (aka SAGE) is now available to the public. The broad availability of these new technologies has the potential for massive impact on our industry.

How do these tools work, and what sets them apart from past approaches? Where do they excel, and what are their limitations? How can I use these tools today? How will these technologies advance, and what further development is needed? And finally, how much longer do humans have as part of the secure development lifecycle?
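To make the "how can I use these tools today" question concrete, here is a minimal sketch of a libFuzzer harness. The `parse` function and its deliberately planted crash are hypothetical, stand-ins for whatever code you actually want to test; the `LLVMFuzzerTestOneInput` entry point is the real libFuzzer interface.

```cpp
#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical function under test. It hides a bug: any input
// beginning with "FUZZ" aborts, simulating a crash for the
// fuzzer to discover.
static int parse(const uint8_t *data, size_t size) {
  if (size >= 4 && memcmp(data, "FUZZ", 4) == 0) {
    __builtin_trap();  // simulated bug
  }
  return 0;
}

// libFuzzer calls this entry point repeatedly with mutated inputs,
// using coverage feedback to steer mutation toward new code paths.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  return parse(data, size);
}
```

Built with a recent clang via `clang++ -g -fsanitize=fuzzer,address harness.cpp`, the resulting binary fuzzes itself and typically finds the planted crash in seconds; the same pattern scales to real parsers and protocol handlers.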

See the slides in full here.

References

Original fuzzing project assignment from UW-Madison (1988)
http://pages.cs.wisc.edu/~bart/fuzz/CS736-Projects-f1988.pdf

PROTOS – systematic approach to eliminate software vulnerabilities (2002)
https://www.ee.oulu.fi/roles/ouspg/PROTOS_MSR2002-protos

The Advantages of Block-Based Protocol Analysis for Security Testing (2002)
http://www.immunitysec.com/downloads/advantages_of_block_based_analysis.html

DART: Directed Automated Random Testing (2005)
https://wkr.io/public/ref/godefroid2005dart.pdf

EXE: Automatically Generating Inputs of Death (2006)
https://web.stanford.edu/~engler/exe-ccs-06.pdf

EXE: 10 years later (2016)
https://ccadar.blogspot.com/2016/11/exe-10-years-later.html

Automated Whitebox Fuzz Testing (2008)
https://patricegodefroid.github.io/public_psfiles/ndss2008.pdf

American Fuzzy Lop (AFL)
http://lcamtuf.coredump.cx/afl/

DARPA Cyber Grand Challenge Competitor Portal (2013)
http://archive.darpa.mil/CyberGrandChallenge_CompetitorSite/

Exploitation and state machines (2011)
http://archives.scovetta.com/pub/conferences/infiltrate_2011/Fundamentals_of_exploitation_revisited.pdf

Your tool works better than mine? Prove it. (2016)
https://blog.trailofbits.com/2016/08/01/your-tool-works-better-than-mine-prove-it/

Microsoft Springfield (2016)
https://www.microsoft.com/en-us/springfield/

Google OSS-Fuzz (2016)
https://github.com/google/oss-fuzz

LLVM libFuzzer
http://llvm.org/docs/LibFuzzer.html

GRR – High-throughput fuzzer and emulator of DECREE binaries
https://github.com/trailofbits/grr

Manticore – A Python symbolic execution platform
https://github.com/trailofbits/manticore

McSema – x86 to LLVM bitcode translation framework
https://github.com/trailofbits/mcsema

DARPA Challenge Sets for Linux, macOS, and Windows
https://github.com/trailofbits/cb-multios

Trail of Bits publications about the Cyber Grand Challenge
https://blog.trailofbits.com/category/cyber-grand-challenge/

Errata

  • The University of Oulu is in Finland.
  • The University of Wisconsin assigned homework in fuzzing in 1988.
  • SV-Comp is for software verification. ML competitions exist too.
