On Project Zero's 90+30 vulnerability disclosure policy changes

Publish soon enough to put pressure on the vendor, but far enough out to avoid creating additional risk for the users... It's a balance.

I was asked a few questions by Lindsay O'Donnell of the awesome Decipher Bureau regarding Google Project Zero's changes to their default disclosure policy. The responses, along with comment from Tim Willis of Project Zero and Brian Gorenc of Trend Micro's ZDI, made their way into Lindsay's writeup, which I highly recommend taking a moment to go and read:

Suffice it to say, between my upbringing in the hacker community, my experience with Bugcrowd, and my more recent involvement in The disclose.io Project, I had a fair bit to say on this. Below is a fuller-form version of my responses.

What are your thoughts on Google Project Zero's recent vulnerability disclosure policy changes? What are the biggest takeaways from these changes?

Disclosure windows are an interesting balancing act. Too short, and users of the software are placed at unnecessary risk from exploitation of unpatched vulnerabilities. Too long, and there's a substantial, and often demonstrated, risk that the vendor responsible for the fix won't act with the appropriate urgency to protect their users. The 30-day extension from Project Zero reflects a third dynamic: the lag that often exists between a patch being released and a meaningful percentage of the user population actually installing it. Google, acknowledging the increasing prevalence of n-day exploitation in the wild over the past 18 months in particular (e.g. the CISA/NSA memo), has taken its next step in refining how it strikes a balance between these forces.

Given that many researchers base their own disclosure policies on that of GPZ, do you see this change having a reverberating impact at large on how others will handle vulnerability disclosure?

Researchers looking to take a practice-leadership position and communicate vendor empathy might adopt these same types of policies, but I think it's likelier that researchers won't change the policies they state to a vendor in a vulnerability report. However, they might be more open to extending a disclosure deadline for reasons similar to those P0 has laid out, provided there are open communications and agreement.

What overall trends in patch management and vulnerability disclosure in the overall security industry do these changes point towards?

Patching and patch deployment are difficult, unless you architect well and plan ahead for the inevitable need.

Consider the two extremes:

  1. On one extreme, consider Google patching one of its own websites. The website is centralized, accessible, likely has a recent codebase, and gets fixed once in order to protect all users, so it's reasonable to assume the patches can be developed, tested, and deployed quickly - and that if this isn't happening, either a communication failure or vendor ambivalence may be in play.

  2. Now consider a company responsible for a fleet of satellites in the sky, or for medical devices implanted in humans. These products are decentralized, very difficult to access, and likely run a variety of technologies and aging codebases that are inherently more complicated to patch and regression test. Not only do multiple instances need fixing, there can be a safety-critical impact if they aren't. Sometimes organizations do plan ahead and make this almost as smooth as patching a website, but that is still very much the exception, not the rule.

Ultimately, the bulk of P0's research goes into browsers, consumer embedded devices, and server-side software - categories that fall closer to the second camp with respect to distribution and accessibility, but that are also, at this point, reasonably expected to have their house in order when it comes to security deployment, or at least to be in the process of figuring it out. The 30-day extension provides some grace here, but it also signals a very clear expectation that these types of products will be targeted, and that their vendors should be stepping up efforts to reduce both vulnerability remediation time and user-base dwell time.

The advantage is that P0 does not typically target systems which are safety-critical and aged by design, like the examples I used. Most of those vendors need much longer than 30 days to deploy, but they should also be starting to think through the same things.

Oftentimes research teams (like GPZ and Trend Micro's Zero Day Initiative) have their own disclosure policies. How do these types of policies typically work when researchers find vulnerabilities in a vendor that has its own policies?

It varies from researcher to researcher. Many respect the fact that a policy with a CVD timeline has been published in the first place (most organizations have yet to do this) and will either honor what is set out proactively, or attempt to compromise. Others will essentially proceed according to their own rules, based on the workflows they've established and what they've seen work best for getting a secure outcome in the past.

How has vulnerability disclosure changed over the past few years? Where do you see it going / how do you see it continuing to change in 2021?

Vulnerability Disclosure Programs have gained enormous traction with policymakers and legislators over the past several years, including the ongoing review of the Computer Fraud and Abuse Act in the Van Buren case before SCOTUS, the mandate for Federal Government VDPs in DHS/CISA BOD 20-01, and the inclusion and definition of VDP (as well as how it is distinctly different from bug bounty) in NIST SP 800-53 r5. In general, there's growing top-down pressure, which is being pincered by increasing bottom-up pressure both from researchers identifying issues and from consumers who are beginning to realize that having a Neighborhood Watch for the Internet is probably a good idea.

In 2021, BOD 20-01 will roll out, Van Buren will be decided, and several other important developments will land, including cases around the Digital Millennium Copyright Act (DMCA) and work via the Organisation for Economic Co-operation and Development and other government bodies outside of the United States. The snowball of VDP becoming a normal part of being on the Internet will continue to roll down the hill.

As always, and especially in the never-ending and highly nuanced vulnerability disclosure discussion/debate, comments are welcome below.