Hello, Michael Barrett here again.
I’ve written before about the responsible disclosure of security research, and the need for the industry to align around that. Other members of my team have also previously highlighted PayPal’s own disclosure policy (https://www.paypal.com/cgi-bin/webscr?cmd=xpt/Marketing/securitycenter/general/ReportingSecurityIssues-outside), which attempts to lay out what we regard as acceptable vs. unacceptable behavior in disclosures of potential vulnerabilities within PayPal itself.
In the ensuing time, a couple of things have happened. First, my own team has done a certain amount of security research into vulnerabilities that we’ve run into, and it’s helped clarify our own thinking about how the research itself should be conducted. The experience of both conducting the research, and then working the disclosure process, gave us a great deal of insight. Second, PayPal and its customers have recently been uniquely put at risk, not just by irresponsible disclosure, but by what we believe was irresponsible conduct in the way that the research itself was carried out.
Over the last few years, there’s been a lot of debate within the security research community about whether the laws that apply to security breaches have the necessary “carve outs” for legitimate security research. The adequacy of today’s legal framework is a difficult topic. I personally believe that these laws have grown in a rather organic fashion, that as a result they are perhaps less well focused than they should be, and that they have often not kept up with the evolution of the Internet. Frankly, it has not helped that law-makers, no doubt with good intentions, have attempted to outlaw all “hacking” tools with apparently no awareness that these tools have legitimate dual uses within the security community.
However, the security research community has tended to focus on “what’s legal” and “what do I do if I inadvertently breach the law”, while giving relatively little attention to the question of “what’s the ethical way to conduct and disclose my research?” There is some thinking on this topic that is worth referencing, and these analyses represent a good starting point:
· Towards Community Standards for Ethical Behavior in Computer Security Research, by David Dittrich, Michael Bailey, and Sven Dietrich, Stevens CS Technical Report 2009-1, April 20, 2009
· EFF’s Coders Bill of Rights - http://www.eff.org/issues/coders
· Conducting Cybersecurity Research Legally and Ethically, by Aaron J. Burstein - http://www.usenix.org/event/leet08/tech/full_papers/burstein/burstein_html/
· Toward a Culture of Cybersecurity Research, by Aaron J. Burstein - http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1113014
We believe that the time is right to open a robust debate within the security research community on this question.
There are some who may argue that it’s still too early to develop such an ethical framework, either on the grounds that the field is still too immature, or on the potentially misguided notion that this is a First Amendment issue. I think both points are fairly easy to address. First, security research has in fact been going on for quite some time now: one could argue that the infamous Morris worm was the first example of rogue security research, and that was in 1988, more than twenty years ago. Second, I am personally very sympathetic to supporting First Amendment rights. But there are recognized limits to First Amendment rights in the real world, the famous shouting of “Fire!” in a crowded theater being one, and I’d argue that there should be some reasonable limitations to “speech” in the virtual world too.
To make this clearer, consider a hypothetical conference at which a researcher is presenting a design for an improved bomb. (Please note that this is a very hypothetical case; there would likely be all sorts of practical reasons why this would never happen!) It’s one thing to publish the details of how this improved bomb might work; it’s another to leave a pile of components and materials on the sidewalk outside the conference center. The problem is that in the virtual world, publishing attack source code and cryptographic material is tantamount to leaving that pile of components and materials on the sidewalk, and as such I believe it’s doubtful that it’s automatically covered by First Amendment rights.
Also, it’s quite clear that there are other areas where voluntary and non-binding codes of conduct have been very effective. Probably the best example is bioethics, where the overwhelming majority of researchers in biological and genetic engineering voluntarily keep their work within certain guidelines and accept formal oversight. Another example is virology research, where there are well-understood guidelines to ensure that pathogens don’t escape into the outside world, as well as controls over publication to ensure that raw research is not disclosed in ways that would imperil public safety. Why should information security research be any different?
While we’re not proposing any kind of formal oversight organization, we do believe that an ethical framework for responsible research and disclosure is quite feasible. If the guidelines are well developed, with good input from the community, they should also be practical.
Based on the above thinking, we intend to find out how many people within the security research community are interested in developing such guidelines. We believe it’s finally time.
Why don't you own up to and talk about the ethics of hanging Moxie out to dry? Maybe it was the right thing to do, but I think ethical behavior starts with forthright speech.
Posted by: John Doe | October 23, 2009 at 06:33 PM
Please see the 'Conventions' we attached to this blog for a discussion of the types of issues that we will not cover here.
http://www.thesecuritypractice.com/the_security_practice/conventions.html
Posted by: Andy Steingruebl | October 26, 2009 at 10:59 AM