Hello, Michael Barrett here.
Anyone who works in the information security business would have to have been in full Rip Van Winkle mode to have missed the debate about disclosure policies over the last several years. The agonizing that we have collectively done over what constitutes “responsible” disclosure would have been greatly admired by a mediaeval philosopher, more accustomed to debating how many angels can dance on the head of a pin.
2007 marked something of a watershed in that the practice of companies, such as iDefense, purchasing vulnerabilities – which had previously been kept pretty quiet – turned from a black market into at least a publicly visible grey market. While there was some debate about it, I was personally surprised by how little the ethics of this were discussed.
2007 also marked something of a change in that it was the first year wherein e-commerce websites, such as PayPal, started publishing clear policies as to what they consider to be responsible. We published our own in late 2007, and it can be found at:
https://www.paypal.com/us/cgi-bin/webscr?cmd=xpt/cps/securitycenter/general/ReportingSecurityIssues-outside
While I wouldn’t necessarily describe our own policy as perfect, a great deal of work went into it. Actually, considering how short the policy is, that statement may seem surprising, but nonetheless it is the case.
We’re very interested in hearing feedback from the community as to whether we’ve got this policy right, or close to it. We’re certainly prepared to tweak it, if we hear convincing arguments that there are good reasons for doing so. There are a couple of areas of our policy where we agonized for some time.
We frankly fudged the question of how much time we allow ourselves to ensure that a particular vulnerability is fixed. I was personally in favour of stating the maximum period that we’d commit to; however, the team more or less unanimously overrode me on that one. In the end, we settled on the weasel words “reasonable time” instead. What convinced me was that:
1) In order to give a publicly useful time commitment, we would have to explain our entire vulnerability classification methodology (which would be several times longer than the policy itself), because what constitutes “reasonable” isn’t absolute but is instead situational. For example, in the case of an extremely serious vulnerability (an SQL injection flaw, say), we’d almost certainly respond with extreme speed, taking the offending page down even if there was non-trivial business impact. However, an extremely difficult-to-exploit, scope-limited XSS we would probably fix in a more leisurely fashion – especially if it’s one where we can easily implement automated monitoring to detect if it ever gets exploited (a minimal sketch of what I mean by that follows this list). (By the way, this is one of the key differences between vulnerabilities in websites and vulnerabilities in distributed software – by definition, we have much more immediate control over how we respond.)
2) Having said that, we also don’t necessarily have direct control over our development and functionality rollout priorities. (Shocking, eh? Who knew that Information Security isn’t the Centre of the Universe?) Imagine a low-priority issue of the sort just described, where we were doing automated monitoring. Of course, we’d still have it in the priority queue for fixing, but we’d frankly not fight too hard if our dev teams de-prioritized the fix in favour of more important business enhancements – especially if there was no sign of any exploit on the horizon.
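Incidentally, to make the “automated monitoring” point above a bit more concrete, here is a minimal sketch of the kind of thing I have in mind: a small script that scans a web server access log for requests that look like attempts to exploit a known, low-severity XSS. The page name, parameter, and patterns here are purely hypothetical, and this is not our actual tooling (real monitoring is rather more sophisticated), but it illustrates why a hard-to-exploit flaw with this sort of tripwire around it can reasonably sit lower in the fix queue.

    # monitor_xss_attempts.py - illustrative sketch only; the log path, page,
    # parameter name, and patterns below are hypothetical, not real tooling.
    import re
    import sys

    # Hypothetical: the known, low-severity XSS lives in the "q" parameter of
    # /search, so requests that put markup into that parameter are suspicious.
    SUSPICIOUS = re.compile(
        r'GET /search\?[^" ]*\bq=[^" &]*(%3C|<|%3E|>|javascript%3A)',
        re.IGNORECASE,
    )

    def scan(log_path):
        """Print any access-log lines that look like attempts to exploit the flaw."""
        hits = 0
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for lineno, line in enumerate(log, 1):
                if SUSPICIOUS.search(line):
                    hits += 1
                    print("%s:%d: possible exploit attempt: %s"
                          % (log_path, lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "access.log"
        if scan(path) == 0:
            print("no suspicious requests found in", path)

Run something like that against the day’s log from a scheduled job, alert someone if it finds anything, and the “fix it later” decision becomes a lot easier to defend.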
While I grudgingly accepted all of these arguments, my concern was and is that “reasonable” is in the eye of the beholder. While we try to be responsible and prudent, I can certainly imagine other organizations whose functional definition of reasonable might translate to “when Hell freezes over”. I’d rather be able to craft a policy that, if adopted by one of those companies, would effectively change their behaviour. Alas, in that regard, I don’t believe we’ve succeeded yet. The obvious question is whether it’s actually possible to come up with a short pithy policy that also meets that particular litmus test. I’m not sure that it is.
More on disclosure in a few days, when I’ll talk about the really contentious aspects…