If you follow info-security news, you might have heard that Google is considering a change to its vulnerability disclosure policy. Here is a link to their blog. Until now, when Google came across a new vulnerability in software from some vendor, it notified the vendor and posted the details only after 60 days. That is, the software vendor had a grace period of two months to investigate the issue and release a patch. Under the new policy, Google will disclose the details within 7 days if the issue is already "under active exploitation". While not yet finalized by the software behemoth, this signals a significant change. If Google has notified you of a vulnerability in a product you own, you have only one week, yes, only one week, to get it investigated and patched, or you'll be in a very unpleasant situation: the details of your weakness will be out in public while your customers have no patch to install and protect themselves. They are not likely to be very happy… You'd have to move fast, very fast indeed, to avoid this undesired situation.
Regardless of who you are and what your job is, whether a worker in the hi-tech industry, an IT security administrator, a journalist or one of many other positions, you are likely to think that this is a great move by Google. If the issue is being exploited out there, it means the bad guys already have the exact information on how to leverage it for their needs, and customers are at great risk of being targeted with limited ability to defend their systems. This rationale makes a lot of sense. But is it all that simple? Let's think about it a little further.
If you own a piece of software and you have just been notified of a vulnerability in it, there is a multi-step process you now have to follow. Regardless of what your application does, that process would include the following steps:
- Investigate the report and identify the bad code. Ideally you should be able to reproduce the vulnerability in your lab so that you can fully investigate it.
- Investigate which of your products and which supported product versions have the vulnerability. It may exist in shared code that has been used in multiple places (some refer to such snippets of code as giblets).
- Identify the team and the developer(s) who own the buggy code and immediately mobilize them to develop a fix.
- Investigate all the possible variations of the vulnerability and fix them all. You don't want to release a patch and shortly afterwards be embarrassed by a close variation of the issue that you did not address. That would mean a frustrating experience for customers, who would have to install not just one patch but a series of patches.
- As soon as the fix is ready, compile and deliver it to your QA people. If multiple products and versions are affected, test each of them in the relevant scenarios.
- Prepare the bulletin with information about the issue.
- Publish the fix and the bulletin.
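To make the ordering constraint concrete, the steps above can be sketched as a small state tracker. This is purely illustrative: the step names paraphrase the list above, and the `VulnResponse` class is hypothetical, not any vendor's actual tooling.

```python
from dataclasses import dataclass, field

# Paraphrased from the step list above; each step depends on the previous ones.
STEPS = [
    "reproduce_and_identify_bad_code",
    "map_affected_products_and_versions",
    "mobilize_owning_team",
    "fix_all_variations",
    "qa_each_affected_version",
    "prepare_bulletin",
    "publish_fix_and_bulletin",
]

@dataclass
class VulnResponse:
    """Hypothetical tracker for one vulnerability report."""
    vuln_id: str
    completed: list = field(default_factory=list)

    def complete(self, step: str) -> None:
        # Enforce the order: a step may only run after all earlier ones.
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"expected {expected!r}, got {step!r}")
        self.completed.append(step)

    @property
    def done(self) -> bool:
        return len(self.completed) == len(STEPS)
```

The point of the sketch is that you cannot, say, publish the bulletin before QA has signed off; every shortcut raises an error, which mirrors why a 7-day deadline squeezes the whole chain at once.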
Consider this list of steps. Each of them can be quite complex and involve many brains in your company to make sure it is done right. It is a lot of work, and often that is not the end of the process: your support teams will have to answer many questions from worried customers. Releasing a patch quickly is a necessity, but releasing a bad patch, hmmm, a patch that causes a regression, can be disastrous. Your customers are already unhappy; if you now release a bad patch, you have just made them furious! There is an Arabic saying that haste is from the devil, meaning that if you do things in a rushed manner, the likelihood of mistakes and failures increases tremendously. That's a rule of life, and it always holds.
Now, this is not to say I think software vendors should not move fast under such circumstances. Having been in the hi-tech industry for a long time, and in particular in the security research domain, I am very much aware of the potential damage cyber-attacks can cause to organizations, businesses and consumers. However, as you might already sense from the description so far, rushing to release a patch can be a bad thing as well. There is a balance to strike between the desire to help protect customers as quickly as possible and the responsibility not to break their systems with a buggy patch. So where is the right balance? How quickly should we push vendors to release a patch when some new vulnerability in their products turns out to be exploited out there?
As usual, life is not black and white. Almost every aspect of our life also includes gray, actually many shades of gray. Here are some possible situations for a public vulnerability:
- Exploits of the vulnerability are being used at large scale, for example by exploit kits or by other prevalent malware. The recent zero-days in Java are good examples.
- Exploits of the vulnerability were used in one or a few targeted attacks, and therefore most users/organizations out there have not been affected yet.
- The details of the vulnerability were posted publicly, but no actual exploitation has been observed yet. This case actually has several sub-cases:
- Hackers or researchers already figured out a way to run malicious code by exploiting this vulnerability
- Hackers and researchers are still trying to figure out a way to exploit the vulnerability in a way that would allow running malicious code
- Hackers and researchers are quite sure that there is no way to run malicious code using that vulnerability. Or maybe there is one, but it requires some uncommon configuration and is therefore not applicable to most customer deployments.
- Rumors about a certain vulnerability being actively exploited are circulating, but no details have been shared or identified yet.
You might identify even more shades of gray that do not show on this list. What's the level of risk for the customer in the latter cases? Well, there is certainly a risk in each of these cases and sub-cases, but you may agree that it is not very high and may not justify the other risk: putting huge pressure on the vendor, which may end up with a bad patch that breaks your systems. It's a risk-versus-risk game. You already realized that, right?
Bottom line, I think disclosure policies should be scenario-specific. It is 100% justified to move fast when something is being widely exploited out there and is used to install malicious code, but the policy has to be balanced in other, less urgent cases in order to avoid unnecessary risks of regression. The right approach is to identify the major scenarios and apply the right level of urgency to each of them.
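A scenario-specific policy of this kind could be sketched as a simple lookup from exploitation status to grace period. To be clear, the scenario names paraphrase the shades of gray listed earlier, and the day counts are illustrative placeholders I chose for the sketch, not Google's actual policy numbers (the blog only mentions 7 and 60 days).

```python
# Illustrative mapping only: scenario labels mirror the list above;
# the intermediate deadlines (14, 30) are invented for the sake of the sketch.
DISCLOSURE_DEADLINES = {
    "exploited_at_large_scale": 7,        # e.g. exploit kits, prevalent malware
    "exploited_in_targeted_attacks": 14,  # one or a few victims so far
    "details_public_exploit_confirmed": 14,
    "details_public_exploit_unconfirmed": 30,
    "details_public_not_exploitable": 60,
    "unconfirmed_rumors": 60,
}

def disclosure_deadline_days(scenario: str) -> int:
    """Return the grace period in days, falling back to the classic 60."""
    return DISCLOSURE_DEADLINES.get(scenario, 60)
```

For example, `disclosure_deadline_days("exploited_at_large_scale")` yields the aggressive 7-day deadline, while an unrecognized or low-risk scenario falls back to the traditional 60 days.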
In their blog, the security engineers from Google, Chris Evans and Drew Hintz, mentioned that the vendor might be able to provide customers with advice on mitigations that eliminate or reduce the risk, for example by modifying the configuration, disabling certain services, etc. In my experience, however, in most cases the mitigation is not very practical for customers. Unless the sky is falling, a customer will not disable functionality, say of a web server or of desktop software, that their business relies on. Even if the mitigation applies to unnecessary parts of the software, administrators are wary of making such changes, since it is very difficult for them to assess the full impact of such a change.
We live in a complex world which often requires complex solutions. Google's recent move is a good one in general, but it feels like it needs some more thinking in order to come up with a policy that optimally balances the various risks: the risk of being impacted by exploits versus the risk of breaking customers' environments. Let's keep this community discussion open and end up with the best tradeoff.