It’s an interesting question, I feel, and one that seems to split both the IT security business and, well, business in general, asunder. I guess some clarification is required before going any further, to save myself from needless big-stick grief. I am not talking about the kind of penetration testing that allows security consultants to employ talented white-hat hackers in an effort to expose holes in a security strategy and its implementation. That kind of thing is obviously a given, and while many large enterprises might not admit to it in public, it’s pretty common practice.
And there is the hook for my argument: the word public. The reason those enterprises do not go public with the holes the hackers find is simply that it is nothing to do with us. They identify a weakness, they resolve the situation and reinforce the security infrastructure, the bad guys are kept out, and the customers are not troubled by tales of the ‘we used to have a security problem but it’s all better now, honest’ variety. Now let’s take a look at a recent example where such an exercise went wrong. Disguise it as public vulnerability research or an open approach to security all you will; at the end of the day it is usually just a PR stunt.
A security company ran a high-profile hacking contest in which a prize of $10,000 was put on offer for anyone who could successfully hack the Mac. Great: lots of cool publicity for that security company (but not from me, here, you will note) before the event, during it, and afterwards, when one hacker managed to uncover a vulnerability in QuickTime for which no patch exists, and collected the cash. What’s the problem with that? Well, how about the resulting publicity leaving thousands of QuickTime users open to potential compromise, because Apple was not informed first and given the chance to block the hole before it was announced. The hacker would not do that, of course, because then he might not have got his money; for all anyone knew, Apple could have closed that particular door real quick.
There can be no doubt that any kind of public vulnerability research effort has the potential to turn sour, both for the company promoting it and for the users of whatever software or service finds itself exposed to attack without any chance to defend itself. Throw a financial reward into the mix and the lure of the hunt, the scent of blood, is going to be too much for all but the most responsible of hackers. There really is no incentive for them to report their findings to the vulnerable company first, and plenty not to.
Which is why, especially in the IT security business, there needs to be a code of conduct with regard to responsible disclosure. There, I have said it, and I have probably run the risk of being kicked out of the media reporters’ club as a direct result. After all, how are we meant to report on security vulnerabilities if we don’t know they exist in the first place? True, but along with the duty to report comes a duty to be responsible. So when I have uncovered such a story in the past, I have always contacted the company concerned before going to press, to give them a chance to respond. I gain because I get a quote to add to the story; they gain because they get a heads-up at least 24 hours before the story breaks.