No More Free Bugs

Alex and me holding the "No More Free Bugs" sign during Charlie's Lightning Talk at CanSecWest

A few weeks ago, Charlie Miller, Alex Sotirov, and I arrived at a new meme: No More Free Bugs.  We started talking about it publicly at CanSecWest, where Charlie notably announced it during his Lightning Talk and in his ZDNet interview.  It is now making its way through Twitter and the rest of the tubes.  It is understandable that this may be a controversial position, so I’m going to give some more background on the argument here.

First, this advocates neither non-disclosure nor any particular form of disclosure.  That decision is left to the discoverer of the vulnerability.  I’m not even going to touch the anti/partial/full disclosure argument.

Second, this philosophy primarily concerns vulnerabilities in products sold for profit by for-profit companies, especially those that already employ security engineers as employees or consultants.  Vulnerabilities discovered in open source projects or Internet infrastructure deservedly require different handling.

The basic argument is as follows:

  • Vulnerabilities place users and customers at risk.  Otherwise, vendors wouldn’t bother to fix them.  Internet malware and worms spread via security vulnerabilities and place home users’ and enterprises’ sensitive data at risk.
  • Vulnerabilities have legitimate value.  Software vendors pay their own employees and consultants to find vulnerabilities and help fix them during development.  Third-party companies such as Verisign (iDefense) and ZDI will pay researchers for exclusive rights to a vulnerability so that they may responsibly disclose it to the vendor while also sharing advance information about it with their customers (Verisign/iDefense) or building detection for it into their product (ZDI).  Google is even offering a cash bounty for the best security vulnerability in Native Client.  Donald Knuth personally pays for bugs found in his software, and Dan Bernstein personally paid a $1000 bounty for a vulnerability in djbdns.
  • Reporting vulnerabilities can be legally and professionally risky.  When a researcher discloses a vulnerability to the vendor, there is no “whistleblower” protection, and independent security researchers may be unable to legally defend themselves.  You may get threatened, sued, or even thrown in jail.  A number of security researchers have had their employers pressured by the very vendors to whom they were responsibly disclosing security vulnerabilities.  Vendors expect security researchers to follow responsible disclosure guidelines when they volunteer vulnerabilities, but vendors are under no such pressure to act responsibly toward security researchers.  Where are the vendors’ security research amnesty agreements?
  • It is unfair to paying customers.  Professional bug hunting is a specialized and expensive business.  Software vendors that “freeload” on the security research community place their customers at risk by not putting forth the resources to discover and fix vulnerabilities in their own products.

Therefore, reporting vulnerabilities for free without any legal agreements in place is risky volunteer work.  There are a number of legitimate alternatives to this risky proposition, and I have already mentioned a few (I don’t want to turn this into an advertisement or a discussion of the best/proper way to monetize security research).  There just need to be more legal and transparent options for monetizing security research.  This would establish a fair market value for a researcher’s findings and incentivize more researchers to find and report vulnerabilities to these organizations.  All of this would help make security research a more widespread and legitimate profession.  In the meantime, I’m not complaining about its current cachet and allure.

Comments

  1. Dino,

    there’s an interesting dilemma here with companies like AAPL and GOOG that use and contribute to open-source software. How do you deal with that? I.e. you hunt for bugs in their products, are successful – alas the component you hit is an OSS one, from a not-for-profit project. So, whatcha gonna do? Sell it to ZDI, iDefense or others which should get the bug fixed in the OSS product as well (for suitable choices of “others”). Alternatively, report directly to the OSS project and receive a nice pat on the back.

    I myself think the first route (selling) is the correct one here, but morally people of course are free to disagree with me. Of course, with some of the commercial vendors buying either key OSS devs or whole projects (see CUPS), the point might become moot soon enough anyway.

    Cheers,
    .:Ralf:.

    • Hi Ralf,

      This is my current opinion on vulnerabilities in open source projects. Personally, I would report it to the OSS project as a free contribution to the non-profit project. Commercial vendors that ship OSS code often contribute development work back upstream, and often security fixes as well. Tavis Ormandy, Chris Evans, and Will Drewry at Google notably find a lot of bugs in open source software and contribute the fixes to the upstream projects. I consider this a way that a vendor can use OSS projects responsibly. If the OSS project doesn’t like the commercialization of their work, they can just change their license. Some open source projects, however, are primarily vendor-sponsored (e.g. WebKit), so I would personally consider bugs in them “non-free” despite their volunteer origins (as KHTML, for example) because they have far more users via commercial products than open source ones. Either way, bugs in commercially shipped and supported open source projects are a big grey area, and I’m still reasoning it out.

      -Dino

  2. I understand that this is not a non-disclosure/disclosure matter. And as you have specified, this is just an issue regarding vulnerabilities in products sold for profit by for-profit companies. So it’s all about money. I don’t agree that this campaign should add, as extra motivation, that vulnerabilities place users and customers at risk, because then you have to deal with the non-disclosure dilemma that you said at the beginning you didn’t want to touch. Also, whether or not you get paid for a vulnerability does not change a thing regarding the risk for the researcher. The risk is the same, and 10000 dollars is not going to cover your legal expenses when going up against a ‘profit company’. So the argument of “don’t give it away for free if you are going to take a risk” is of no use at all, as again the risk is the same.
    “Software vendors that “freeload” on the security research community place their customers at risk” So again, is it about risk or is it about money? Because if it is about risk, then if everybody starts selling their 0day, I think the customers will be more at risk. Just having ZDI, iDefense, etc. (which are for-profit companies) getting all this for whatever purposes they have does not reduce the risk for customers. Security is already a legitimate profession, and it is a profession thanks to the many people before you guys who gave the community free bugs and the free knowledge to find and exploit them.

    • Hello nulio,

      Risk is a large part of it, and decreasing the risk changes the equation. When money changes hands, there is a contract and agreement. That lessens the risk for both parties. If no money changes hands, there is rarely a contract to protect either party. And vendors are extremely loath to sign a contract with an external researcher. Try asking for a legal agreement before revealing details of the vulnerability and the vendor will cry, “extortion!” Ideally, researchers should have an agreement in place before disclosing details of the vulnerability to any party.

      If the vendors were to pay researchers for confirmed vulnerabilities, how would that place users at risk? They will be getting more vulnerabilities in their shipped products fixed and thereby better protecting users. Otherwise, those latent vulnerabilities are just waiting for someone else to find them and potentially exploit maliciously.

      We all owe a lot to those whom we have learned from, and I’m not advocating a stop to the free flow of security knowledge. However, under the current norm of responsible disclosure, posting details of fresh vulnerabilities is still frowned upon. That is another debate. Personally, I believe in a one-year embargo on publicly revealing exploits for vulnerabilities that I have discovered. I think that shares knowledge without placing users at undue risk.

      -Dino

  3. I believe my primary problem with this line of reasoning is that it assumes there is a valid place to sell bugs that will be just as beneficial to the end user as full disclosure.

    When I look at things like the Zero Day Initiative’s (TippingPoint) list of upcoming bugs (http://www.zerodayinitiative.com/advisories/upcoming/), and I see that half of the bugs listed are over 200 days old, it makes me angry. I admittedly don’t have all of the information, but it reads as if TippingPoint is almost culpable in those issues not being fixed, because their lack of pressure on the vendor is not resulting in a fixed product.

    The bug developer has the information available to ensure that the bug is fixed before it harms the public, and many (ok, most) of them do get fixed when you let an iDefense or TippingPoint take the reins, but the end result is still worse for the public. (Slower turnaround time, or in some cases, no turnaround at all.)

    So, yeah, selling your bugs will net you cash and lower your risk, but it’s at the public’s expense. And if we’re arguing for a bit more risk to the public for a bit more cash to the bug developer, why not just sell it on the black market? Lowest risk, highest payout. But it’s awful for the consumer.

    • I think the original reporter should just publish the details anonymously after 200 days or so. That would force ZDI to do a better job.
      You are right, the oldest bug is 942 days old and this is not acceptable.

  4. Hello Dino,
    I think it’s fine if you want to sell your vulnerabilities to willing buyers, like ZDI or iDefense. They report the vulnerabilities to the affected vendors responsibly; you get paid – works out for everyone.
    However, demanding that all vendors pay you directly for work they did not budget to hire you to do is a little like expecting a person driving along to give a buck to each guy who volunteers to clean his windshield at every stop light. It may be doable if the person only passes a few stop lights (in the software industry, this would mean they don’t have many products), but what if there are hundreds of stop lights (aka hundreds of products)?
    Good vendors budget for security in their development lifecycles, and part of that includes hiring researchers to find security vulnerabilities that may have slipped through. I think you can use the market that’s out there if you want to get paid for vulnerabilities, or you can market your work to vendors so they hire you for specific projects. But I don’t think it’s necessarily the best thing to demand remuneration directly from vendors for uncommissioned security research.

    -k8e

    • Thanks for the great comments, I am glad to have multiple viewpoints here. Microsoft, as we have come to expect from them, leads in most things security-related. But that doesn’t mean that we can’t ask for more :).

      Microsoft’s documented policy for online service vulnerabilities is great, but it could spell out a little more what Microsoft means by “responsibly submit” and what sort of security testing they deem reasonable against their online web services. This would make sure that researchers do not inadvertently run afoul of it. The web service security researcher acknowledgments are a great way to recognize volunteer contributions, but they should give some description of what the issue was. For a bug finder, the qualities of the bug found matter greatly in order to demonstrate expertise and skill. If that is unspecified, they do not have an independent reputable source that they can point to in order to validate their work. I think that simply specifying “blind second-order SQL injection in XXX” or “persistent cross-site scripting in XXX” would be enough to give researchers public documentation of their issue.

      I hope my modest proposal doesn’t come across as advocating “unsolicited demands for payment.” Asking a vendor for payment for a vulnerability discovered through unsolicited security analysis of their product is pretty close to extortion (IANAL, obviously), so it is best not to go anywhere near that scenario. I’m trying to advocate a different response to software security vulnerabilities because it seems like the status quo could be improved. Someone (sorry, I forgot whom) at CSW likened this to gun buyback programs in high-risk neighborhoods that get guns off of the streets. Perhaps vendors could sponsor bug bounties in high-risk software (web browsers, anyone?) to “get bugs off of the streets.”

      In any industry except for software, product recalls are incredibly expensive. Software patches are much cheaper. I’m told that “real” engineers mock software engineers for the disparity in rigor between the disciplines. If vendors were held liable for damages caused by security faults in their products, they would ship products with far fewer security vulnerabilities. Commercial software would also be much more expensive. So far, consumers haven’t been given this choice.

      -Dino

  5. Jonathan Ness says:

    Responsibly submitting vulnerabilities to Microsoft (if you choose to do so) is not legally or professionally risky, by the way. Take a screenshot of this FAQ webpage and save it away if you have any concerns:

    http://technet.microsoft.com/en-us/security/cc261624.aspx

    Q: Will Microsoft take legal action against those who submit online services security vulnerabilities?
    Microsoft will not pursue legal action against security researchers that responsibly submit potential online services security vulnerabilities.

    -jness

    P.S. I think researchers submitting bugs to iDefense or ZDI is great. They validate the bugs and always provide high-quality reports, so it’s good for the vendor. They handle the back-and-forth logistics, so it’s good for the researcher. Seems like a win-win.

  6. Hello Dino
    How do you handle vulnerabilities that you cannot sell to companies like iDefense or ZDI because they are not interested (e.g. they don’t have customers with the vulnerable product)?

    Lofi

  7. Hey Dino,

    I agree with you 100%. It is becoming increasingly time-consuming, and difficult, to find vulnerabilities in modern software. Hard work should be rewarded. Unfortunately, in our world that comes in the form of cash. There is no other industry that I know of that has volunteers working for large corporations.

    One of the reasons I joined ZDI/TP was their program and attitude agreed with my own. I hope more people come around to the NO MORE FREE BUGS campaign :).

  8. Hard work should be rewarded. Unfortunately in our world that is in the form of cash.
    —–
    Which big companies give attribution to individuals who submit security-related holes?
    In fact, I can only remember one M$ patch that attributed the Samba team as the reporter.

  9. Hi Dino. I like your goals with this initiative. Free vulnerability research in commercial products has helped in the past by getting us to the place where every major software company has a security team on staff and/or hires external audits. But perhaps that period has come to an end for some companies and what we need now can be handled under formal agreements.

    This will mean that the “trailing edge” companies will have a harder time than their predecessors. Finding out about buffer overflows by having your entire userbase 0wned is a harsh wakeup call. But at least it will be clear what they should do to remedy it since the industry as a whole has realized the need for security audits. That wasn’t as true in the 90’s.

    However, you neglect one category — entire industries that have never been through the informal disclosure period. For example, industrial control systems have not had the scrutiny Internet client and server software has. Without informal review and reporting, the only notice such companies will get will be widespread exploitation by the bad guys.

    When I did the FasTrak research last year, I tried contacting the toll agency. Since I couldn’t get anyone to respond, I spoke with the media. They talked to the company several times and I got a followup email from a spokesperson. I talked to a technical project manager. But they never came through with setting up a meeting time to hear my presentation and answer any questions they had (no charge).

    This is an example of the widening awareness gap. If you write software, you almost certainly know some kind of security is required. But outside this niche, there are still billions of devices running code, supported by companies without that awareness. Since you can’t expect someone to be proactive about something they have never heard of, informal research and disclosure to such companies is the only way to educate them before widespread exploitation occurs.

  10. responsableundiclosure says:

    “There just need to be more legal and transparent options for monetizing security research. This would provide a fair market value for a researcher’s findings and incentivize more researchers to find and report vulnerabilities to these organizations.”

    this is very much needed! we don’t even know what fair market value for any vulnerability is, _LET THE MARKET TALK_, dammit!

  11. Dino,

    I agree with your point. In fact, the economic approach to bug hunting is the path you are defending. Bugs are externalities, and when you hunt for bugs for free, vendors “free-ride” on your work.

    Economically speaking, charging for bugs (and why not also for their criminal exploitation?) will motivate developers to improve their testing, design, and processes.

  12. 0xC0000022L says:

    Let me rephrase this: you want to be paid for a service you provide to people who didn’t ask for it. Right?!

    Don’t get me wrong! The idea to get paid is indeed legit, but it needs far more thought. You cannot “impose” a service on someone and then expect them to pay for it. That’s just as ridiculous as those people “cleaning” (i.e. smearing) the windshield of your car at the traffic lights even though you waved a “no” or even shouted it.

    The order you would have to do these things in is first ask them whether they want it, then ask to be paid after delivering. One model could be how you define “responsible disclosure” for vendors that agree to pay versus those who don’t. It still looks a little bit like blackmail in this case, but it would probably be acceptable for bugs in the products of non-paying vendors to be disclosed after 10 or 14 days, while paying vendors get up to 14+X days (with a defined maximum) before the bugs get disclosed. Once the lower limit is low enough that the vendors are concerned, you’ll have their attention.

    A good step would be to found some kind of umbrella organization that would speak on behalf of its members and could establish a code of conduct and rules about what is deemed responsible disclosure, what the compensation should look like in what cases and so on.

    Just like in RCE and other “art forms”, time might not be a good measure, but neither would be the size of the exploit or quantity of found/exploitable bugs.

    So in conclusion, I guess what I’m trying to say is that security researchers should join up and voice their demands with one voice. First of all, this would allow the practice to become accepted/acceptable throughout the software industry, and secondly, it means the arrangement(s) with vendors would be with the organization. Even though it is a bit beside the point, the companies collecting money on behalf of musicians (on a national basis) could serve as one example of how such a model might work.

    Sure, it should be the responsibility of the vendors to find and fix bugs, but without any kind of “legislation” demanding it and a “police” enforcing it, you’ll end up in a weak position. So found an organization that acts as your voice …

    • Just to be clear: This does not advocate telling vendors that you have bugs and want to be compensated for them. That is extortion. Nowhere in the post does it suggest that course of action.

      Also, I forgot to mention that Mozilla already runs a bug bounty program. That is the approach that more vendors should adopt, since it rewards researchers for their work. Researchers also know how much would be paid, so they know how much time they can devote to finding a vulnerability and can do their own cost/benefit analysis, roughly as sketched below.
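
      As a back-of-the-envelope sketch of that cost/benefit calculation (every number below is a hypothetical assumption, chosen purely to illustrate the arithmetic, not taken from any real bounty program):

          # Bounty cost/benefit sketch. All figures are illustrative
          # assumptions, not data from any actual program.
          bounty = 5000.0       # advertised payout for a qualifying bug (USD)
          hourly_rate = 125.0   # what the researcher's time is otherwise worth (USD/hr)
          p_success = 0.4       # researcher's own estimate of finding a qualifying bug

          expected_value = bounty * p_success              # $2,000 expected payout
          break_even_hours = expected_value / hourly_rate  # 16 hours

          print(f"Worth spending at most ~{break_even_hours:.0f} hours")

      A published bounty amount turns that judgment from guesswork into a quick calculation each researcher can run for themselves.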

  13. Drunken Economist says:

    As a retired SQA, let me chime in. Part of the problem here is that software QA is becoming more ‘automated’ and ‘commoditized’. You can see this with the push to whitebox QE and automation, and also with the offshoring of not only testing but active software development. Some companies now have MOST of their development staff, both dev and SQA, offshore. And it shows.

    So when you say “Professional bug hunting is a specialized and expensive business,” that is something that USED to be true. It’s what software execs are actively trying to cut down to, say, the level of customer support.

    Will they be successful? Good question. You can see the results in the newspaper from time to time, the most glaring one lately being ADBE, for dragging their feet for weeks with Acrobat in the face of an active exploit running in the wild and causing damage.

    The other thing is that these companies are relying more on beta testers to catch edge & security cases instead of investing in test environments and black box testers; in fact the ‘black box tester’ per se is an endangered if not extinct species. Beta testers are under NDA, and are treated as unpaid SQA in some cases.

    What really matters to the execs of AAPL, GOOG, MSFT and third-party companies is ‘the bottom line’ and staying out of the press. And as long as this mentality pervades, as long as MBAs drive the tech industry [and not engineers], you’ll have less of a sympathetic ear for these concerns.

    • John Terrill says:

      I have to disagree with you.

      Professional bug hunting is alive and well. The automation of the process has reduced the time spent finding the low-hanging fruit, but there are still massive problems with black- and white-box analysis tools.

      I have worked on a number of commercial products that audit using black- and white-box methods, and I can tell you that if we are only going to rely on those tools, then we might as well pack it in. Black-box scanners are weak for anything non-web, and the web application scanners have their own set of problems. White-box scanners have their own set of problems too, especially when you consider that even the most advanced static analysis is still formally undecidable, since it reduces to the halting problem; the sketch below makes this concrete.
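
      To see why, here is a minimal, purely hypothetical sketch (the function names are invented for illustration). An analyzer that could always answer “is the unsafe copy reachable?” for programs like this would, by construction, be deciding whether an arbitrary program halts:

          # Illustrative reduction: reachability of the unsafe sink below
          # is equivalent to the halting problem for `prog`.

          def run(prog, data):
              """Simulate an arbitrary program; may never return."""
              exec(prog, {"data": data})   # e.g. prog = "while True: pass"

          def unsafe_copy(user_input):
              buf = bytearray(16)
              buf[:len(user_input)] = user_input   # stand-in for an unchecked copy

          def gadget(prog, data, user_input):
              run(prog, data)          # returns only if `prog` halts on `data`
              unsafe_copy(user_input)  # the "bug" executes iff `prog` halts

          # A sound *and* complete analyzer asked "can unsafe_copy ever run?"
          # would have to decide whether an arbitrary `prog` halts, which is
          # undecidable, so real tools must over-approximate (false positives)
          # or under-approximate (missed bugs).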

      Without professional bug finders, we will never discover some of the truly critical vulnerabilities in commercial applications. For instance, when Mark Dowd went back in time and killed Hitler with his IE/Flash exploit, that was a critical bug that would never have been discovered with an automated tool. The same is true of Mike Lynn’s Cisco discovery a few years back. In fact, there is a giant list of these types of critical bugs that required a very skilled individual to identify and exploit.

      Considering how little security researchers get paid in the grand scheme of things, I’d say it’s a pretty good deal to pay for exploits.

      Then again, I’m not totally opposed to the extortion idea… ;)

  14. dissenting_opinion says:

    It seems to me that the people who are opposed to NO MORE FREE BUGS are the ones benefiting the most from the free work.

    It is cast as a moral issue, the right thing to do. Of course, major corporations promote the concept that ethically bugs should be reported for free. At the same time the same companies fight tooth and nail if someone uses their software (their hard work) without compensation.

    The argument has never been to extort vendors, but vendor-friendly posters concentrate on that issue, misdirecting the discussion.
    “However, demanding that all vendors pay you directly for work they did not budget to hire you to do is a little like expecting a person driving along to give a buck to each guy who volunteers to clean his windshield at every stop light.”

    To be fair, the Microsoft car has a big sign on it:
    “Clean my Windows for FREE! It is a great honor to clean my Windows. I will bestow you the honor of listing your name in the fine print. Kthx, keep up the good work!”

    The no more free bugs concept is not new. Although it was before my time, there was anti.security.is. However, it always had more of a blackhat slant. In either case, paraphrasing Halvar: the bugs belong to the finder, and the finder can choose to sell them, hoard them, or set them free.

  15. Drunken Economist says:

    Recently, I’ve been cajoled into being a beta tester for one of these ‘big corps’.

    I’ve been asked to regress and follow up; I’ve basically told them that that is their QE’s job, and if their QE can’t do it, then that’s not my problem. Take the bug as a data point and move on.

    If they’re not smart enough to see a bug as a data point in a data set, with some bugs more important than others, should I care?

    Again, it’s down to the bottom line, and perhaps getting shamed in the press. I hate to be cynical, but the only time these guys ‘move’ is when there’s a threat to their stock price.

    Potentially losing, or actually losing, 20% of your market cap is a powerful incentive to make good software.

  16. Finally, a great post, because giving away free bugs isn’t a good idea; it makes us look like idiots. Anyway, this is just my opinion.

  17. The stakeholder that is largely ignored by this proposal is the most important one: the user community. How does requiring commercial software vendors to pay for discovered bugs help or hurt them? Without a rational answer to that question, the proposal is little more than one more round of negotiation between a small set of vendors (the major ones) and a small group of part-time bug finders (“professional” bug finders are already employed and paid to find bugs).

  18. @ivan

    You don’t have to name names, but how many professional bugfinders are paid by vendors to find bugs in shipped products? How many users have been harmed by vulnerabilities in a pre-release product? Yes, I know that pre-release products grow up to become released products, but the point of my rhetorical question is that most software vendors do very little for the security of their products *after* they ship them, and that is when vulnerabilities affect users the most.

    I believe that vendor-sponsored bug bounty programs would incentivize researchers to find and responsibly disclose more vulnerabilities in those products. That would reduce the number of latent vulnerabilities in those software products, which would directly benefit users by making it harder for malware groups to find their own vulnerabilities to use maliciously against those users.

    -Dino

  19. I believe KatieM (k8em0) left a revealing tweet on this topic. While I can understand the motive here of receiving a reward for meaningful work, when the work is not asked for, is not budgeted for, and has a long precedent as volunteer work in many cases, that changes the dynamic even if the work is of value. I’d guess that there is a host of legal and liability issues that this raises within the commercial software industry.

    ZDI and iDefense are obvious players here, but I have heard multiple people criticize the amount they pay versus what someone might make in a grey or black market situation. If a researcher is going to a grey or black market, then their motive is only greed, in my opinion. I believe the social welfare is (and should be) a factor in the decision on how to handle information that is of value.

    I’m wondering what change “no more free bugs” has made so far. Are there any concrete examples? One possibility is that some bugs are probably going to be sat on much longer, leaving more time for criminals to find them and use them to spread crap such as Zeus, Torpig, Infostealer, and other crimeware. IANAL, but clearly the software vendors whose bugs are used to spread crimeware aren’t going to be held liable for their bugs being exploited, and if researchers are sitting on those bugs, then the general public loses. Where is the vendors’ responsibility here?

    In some ways, this could be seen as a pressure tactic. Clearly those who launched this initiative are skilled researchers, and their collective value to the security ecosystem should not be minimized. I suppose there is a big question of how the closed-source software vendors are going to respond to this. What is the long-term impact, I wonder?

  20. Alex Nicolaou says:

    Do you consider Chrome and Android ‘non-free’ uses of WebKit? I am having trouble distinguishing between WebKit’s status as a ‘not really OSS project’ and that of other projects like the Linux kernel, which arguably runs on many ‘commercial’ CPUs.

    Secondly, I don’t know what to think about your proposal of NO MORE FREE BUGS. There’s a long-standing tradition of reporting, monitoring bugtraq, and working together. I don’t know if you can really make money a part of that.

    If I’m a civil engineer and I see a bridge is in danger of collapse due to a century storm, do I block it off so that people don’t die, or do I investigate whether it was a community built bridge via taxation versus a commercial toll bridge to decide?

    Of course software vulnerabilities don’t usually kill lots of people. Yet they are costly and do impact lots of people. Is there a social responsibility implied by being involved in security research, or not?

    alex

    • Hi Alex,

      Thanks for the comment, you bring up some interesting points.

      Your comparison with civil engineering is a good one. In civil engineering, there are safety engineers who assess the safety of buildings during design as well as post-design for potentially dangerous faults. That is an almost ideal analogue to software engineering and security engineers. Continuing with that metaphor, say that a number of safety engineers pointed out flaws in a bridge or building. The engineering firm responsible for its construction would probably respond in the same ways that software vendors do now (address them, ignore them, etc.). At what point should the engineering firm engage a safety engineering firm for an actual third-party safety assessment? In the software world, vendors do not engage security engineers to assess their shipped products, only their in-development products. The answer to the question of when the engineering firm would engage a third-party safety assessment would be: never, or only when forced to by public opinion or government regulation.

      One final comparison with civil engineering is in order. I don’t actually know the answer to this, but I would love to see a comparison between the ratio of the number of civil engineers vs. civil safety engineers within an engineering firm compared to the number of software engineers to security engineers employed at a large software vendor. As you mentioned, security vulnerabilities don’t usually kill lots of people, so that probably justifies the difference.

      One of my larger points is that the public is not served by large vendors doing very little to secure the software products sitting on users’ systems. With identity theft and financial fraud ever on the rise, is “we actively listen to what people freely volunteer to tell us” a proactive security stance? Monitoring security mailing lists, patching freely reported vulnerabilities, and doing a code review and/or pen test before shipping are the current software security best practices. However, a two-week review at the end of a project that took two years to develop is hardly enough.

      I don’t want to comment on what are “free” or “non-free” uses of OSS software, that is a debate for the free/open software philosophers. For my opinion on whether bugs in them should be considered “free” or not, see what I wrote in the second comment.

      Finally, I consider most forms of security research to be socially responsible. Working to make the Internet a safer place to socialize, have fun, and do business is a noble pursuit. I just believe that solely relying on volunteers to address the security of commercial vendors’ shipped products is not a sustainable model and one that I personally consider to be somewhat socially irresponsible.

      -Dino

  21. bl0wf1sh says:

    Very interesting to read through this thread, but all I see is just talk about commerce & money.

    I’m shocked to see nerds thinking like that. What happened to the real intent of old-school hacking?

    Isn’t it all about being the smartest and sharing your knowledge for free with the community?

    So is this the end of having serious discussion in public? As for researcher liability issues, well, those will always happen in a free cyberspace and democracy!

    I hope there are still researchers out there following their passion for hacking and not following the money trail…

  22. .::Phrack #65::.

    =[ The Underground Myth ]=
    ….
    The hacker underground has been systematically dismantled, a victim of
    circumstance. There was no reason for this, no conspiracy, no winner. A
    conquered people, but with no conqueror, no enemy to fight. No chance
    of rebellion. Conquered by circumstance, if not fate.
    ….

    /to all others – keep your passion and curiosity, hacking is not just winning
    the next Mac on another Pwn2Own contest

    • Hi bl0wf1sh,

      There is nothing about NMFB that is against sharing knowledge with the community. There is a long history of that in the security community and I hope that it will continue. Nowhere above is public security research addressed, only the act of responsibly disclosing vulnerabilities to large commercial software vendors for free.

      Responsibly disclosing vulnerabilities to vendors for free is not “sharing your knowledge for free with the community” since it is still kept secret. And even after the vulnerability is patched, properly following “responsible disclosure” requires you to continue to withhold details on the exploitation of that vulnerability for another 6 months to prevent exploitation of customers who have not yet patched. At the same conference where Charlie announced NMFB, Charlie and I presented a number of tools and exploitation techniques for OS X. You can get the slides for that presentation as well as all of the source code and tools from our book for free right here (http://blog.trailofbits.com/the-mac-hackers-handbook/) and in Metasploit. I do not feel like I am being hypocritical in doing that, either.

      I hope that clarifies my position on public research, namely that I am not against it, nor is NMFB about putting a stop to it. A number of software vendors are even very supportive of the security research community and sponsor the popular security conferences, which I am very appreciative of since I have spoken at a number of those sponsored conferences. I also have a high degree of respect for the product security teams at many of these vendors, both personally and professionally. They employ a number of very talented researchers. I will, however, criticize some of the decisions regarding security made by the larger company. Namely, I believe that by not doing more to address the security of their shipped products, they are doing their users a disservice.

      -Dino

  23. Hey Dino,
    I think you’re right.

    But I think this whole thing about free bugs is an issue with big companies such as Google or Microsoft. If you try to report something to these companies, you’ll find that at most they’ll thank you for it. I tried to make a deal, because I found something more than an XSS (I don’t think that XSS is a little vuln, I think it’s a big one, but still, I found something I think is bigger). When I reported this to the Google security team, they told me “we will not pay, or do anything else for it. publish it, we don’t care”. There is the exact problem.

    I had an idea a long time ago to build a web project where anybody can anonymously post any vulnerability they have discovered. The people running the site would contact the vendor and try to make a deal. If the vendor is not interested, OK: their site goes on a public list of vulnerable sites/apps. After that, it’s on the vendor to do something about it.

    Maybe it’s time to make something like this.

  24. I can’t, for the life of me, figure out what this has to do with Indian MREs.

Trackbacks

  1. […] onto the terrain of paying for a researcher’s work. A theme often raised by Charlie Miller, Alex Sotirov, and Dino Dai Zovi. For the world of vulnerability discovery is a codified world, riddled with unwritten rules, […]

  2. […] ago, a triumvirate of security researchers – Charlie Miller, Alex Sotirov, and Dino Dai Zovi – announced what they hoped would become an internet meme: "No more free […]

  3. […] But not every company knows how to treat these people well: on one hand self-motivated security researchers, on the other hand hackers. In 2009, Patrick Webster alerted First State Super, the Australian investment company managing his retirement account, that their website had a serious security hole allowing any user to look up the other 770,000 retirement accounts. To prove the point, Webster wrote a script and downloaded 500 accounts’ records as evidence. Far from being grateful, First State Super immediately called the police and demanded that his computer be seized. This is why a group of security researchers led by Miller launched the “No More Free Bugs” protest, denouncing companies that take security reports without paying, or that even resort to legal action. […]

  4. […] Charlie Miller, a well-known bug finder and big proponent of researchers getting paid for finding flaws, said these numbers also reflect the reality that it is […]

  5. […] I don’t believe you should do it for free, and that’s why I won’t mention those companies that run BBPs but don’t pay in cash. The following post better shows you why you shouldn’t be doing it for free: No More Free Bugs. […]

  6. […] 2009, Miller and fellow security researchers Alex Sotirov and Dino Dai Zovi launched a “No More Free Bugs” campaign to protest freeloading vendors who weren’t willing to pay for the valuable service bug […]

  7. […] By now you’ll have probably heard that Dino Dai Zovi, Charlie Miller and Alex Sotirov have declared “No more free bugs” (Dai Zovi affirms his position and provides insight to his side of the argument over on his blog titled “No more free bugs“). […]
