A few weeks ago, Charlie Miller, Alex Sotirov, and I arrived at a new meme: No More Free Bugs. We started talking about it publicly at CanSecWest, where Charlie notably announced it during his Lightning Talk and in his ZDNet interview. It is now making its way through Twitter and the rest of the tubes. Understandably, this may be a controversial position, so I’m going to give some more background on the argument here.
First, this is not an argument for or against disclosure of any kind; that decision is left to the discoverer of the vulnerability. I’m not even going to touch the anti/partial/full disclosure debate.
Second, this philosophy primarily concerns vulnerabilities in products sold for profit by for-profit companies, especially those that already employ security engineers as employees or consultants. Vulnerabilities discovered in open-source projects or Internet infrastructure deservedly require different handling.
The basic argument is as follows:
- Vulnerabilities place users and customers at risk; otherwise, vendors wouldn’t bother to fix them. Internet malware and worms spread via security vulnerabilities and place home users’ and enterprises’ sensitive data at risk.
- Vulnerabilities have legitimate value. Software vendors pay their own employees and consultants to find and fix vulnerabilities in their products during development. Third-party companies such as Verisign (iDefense) and ZDI pay researchers for exclusive rights to a vulnerability so that they may responsibly disclose it to the vendor while also sharing advance information about it with their customers (Verisign/iDefense) or building detection for it into their products (ZDI). Google is even offering a cash bounty for the best security vulnerability found in Native Client. Donald Knuth personally pays for bugs found in his software, and Dan Bernstein personally paid a $1000 bounty for a vulnerability in djbdns.
- Reporting vulnerabilities can be legally and professionally risky. When a researcher discloses a vulnerability to the vendor, there is no “whistleblower” protection, and independent security researchers may be unable to afford a legal defense. You may get threatened, sued, or even thrown in jail. A number of security researchers have had their employers pressured by the very vendors to whom they were responsibly disclosing security vulnerabilities. Vendors expect security researchers to follow responsible disclosure guidelines when they volunteer vulnerabilities, but they are under no reciprocal obligation to act responsibly toward security researchers. Where are the vendors’ security research amnesty agreements?
- It is unfair to paying customers. Professional bug hunting is a specialized and expensive business. Software vendors that “freeload” on the security research community place their customers at risk by not devoting resources to discovering and fixing vulnerabilities in their own products.
Therefore, reporting vulnerabilities for free, without any legal agreements in place, is risky volunteer work. There are a number of legitimate alternatives to this risky proposition, and I have already mentioned a few (I don’t want to turn this into an advertisement or a debate over the best or proper way to monetize security research). There simply need to be more legal and transparent options for monetizing security research. That would establish a fair market value for a researcher’s findings and incentivize more researchers to find and report vulnerabilities to these organizations, all of which would help make security research a more widespread and legitimate profession. In the meantime, I’m not complaining about its current cachet and allure.