DEF CON 31 Policy - All Your Vulns Are Belong to Terms and Conditions

Abstract:
What happens when a vulnerability is submitted to a programme? Why do some disclosures take forever? What are governments doing about vulnerability disclosure and why are they so bothered about it? Why do people not understand what the words “vulnerability disclosure” mean and why can’t policy makers quite get their heads around 0-days? Why are companies in some sectors just not adopting CVD even though governments are passing it into law? Have we got RAS or can we think of any more TLAs to add to the CRA, NIS and VDP? What are countries and regions around the world doing and how do they differ? And yes, what the heck is an equities process?

David Rogers:
Welcome, everybody. And thank you for coming. My name is David Rogers. I guess, why am I here doing CVD? I created the CVD program at the mobile industry association, the GSMA, and I had to fight hard for it, so I know the pains and tribulations of it. And I'm going to be moderating this panel while we talk about everything from vulnerabilities, through to legal, through to how we can help hackers and what's going on around the world. So in flow order, I'd like our speakers to just introduce themselves and tell us a little bit about themselves.

Katie Trimble-Noble:
Hi, everyone. I hope you're having a good DEF CON. So I'm Katie Noble, Katie Trimble-Noble for some people because you might know me in my previous persona. So I am one of the four Bug Bounty Katies. There's me, there's me, there's Katie Nickels, and there's Katie Moussouris. So I'm half of the four Bug Bounty Katies in the world. Yeah, I know. It's great. So my background, I worked for the US government for about 15 years. Previously at CISA, I did coordinated vulnerability disclosure where I ran the vulnerabilities equities process, the MITRE CVE program, which I'm still a board member of, the NVD program, the Carnegie Mellon CERT/CC program, and the ICS-CERT Vulnerability Handling program. And our claim to fame was that we coordinated and disclosed over 20,000 cybersecurity vulnerabilities in a two-year time period. And I currently work for a Fortune 50 tech company where I am the director of product security incident response and bug bounty. And we won't name that company because I'm not really here.

Harley Geiger:
Hi, everybody. My name is Harley Geiger and I'm a cybersecurity lawyer. I'm currently with a law firm called Venable. I too am not representing Venable here and nothing that you hear is legal advice. You do need a lawyer, but I am not your lawyer. And prior to joining Venable, I was in-house at Rapid7 and I was also a Hill staffer, again working on cybersecurity and privacy for about 15 years. And now, I help people with compliance, I help people with incident management, and focus a lot on hacker law and vulnerability disclosure.

Casey Ellis:
[inaudible 00:03:37]. Hi, my name's Casey. I am the token non-American on the panel, I guess, in terms of [inaudible 00:03:44].

Katie Trimble-Noble:
That's not true. That's not true.

Casey Ellis:
[inaudible 00:03:44].

Katie Trimble-Noble:
This is a half [inaudible 00:03:46].

Casey Ellis:
That's right. Okay. So forget all of that.

David Rogers:
He's going to mention the Ashes in a minute. I'm just getting this one in.

Casey Ellis:
There you go. Let's get this party started. No. So I'm the founder and CTO of Bugcrowd. Bugcrowd, we didn't invent vulnerability disclosure or bug bounty programs. That was prior art. But we did pioneer the idea of putting a platform in between all of the researchers that are out there and all of the potential problems that exist on the defender side of things. So that informs my point of view on this subject because we've seen quite a bit in the course of the last 11 years of doing this stuff. Also, the co-founder of the disclose.io project, which...

Speaker 2:
They can't hear you in the back.

Casey Ellis:
Better? Also the co-founder of the disclose.io project, which is basically a vulnerability policy standardization exercise. That's been running now for the better part of eight years in this form. But I think the prior art there goes back to Rain Forest Puppy in 2001 and stuff that existed before that. So really, the idea of that is to make the adoption of sane VDP language and VDP terminology easy or at least as difficult to screw up as possible. And as a part of that, to actually put positive pressure on top-down legislative change. They've been involved in things like the charging rule changes out of DOJ, election security, administrative guidance out of CISA in 2020, BOD 20-01, so on and so forth. So yeah.

David Rogers:
Cool. Already, you mentioned Rain Forest Puppy, so maybe we need to rewind a bit in history. So just a show of hands, so first of all, who knows what vulnerability disclosure is? That's good, right? Who knows what coordinated vulnerability disclosure is? Okay, so a lot less, right? Who knows Rain Forest Puppy? Who's heard of Rain Forest Puppy?

Casey Ellis:
Oh, God. I'm old.

David Rogers:
Okay, so the history is slipping away.

Casey Ellis:
Wow.

David Rogers:
Actually, you should look them up because that whole brinksmanship process is really, really interesting. But I'm just going to ask, maybe we start with Casey actually. Just give us the very short potted history. We're over 20 years into all this stuff really, right?

Casey Ellis:
Yeah, that poll just then makes me feel super old. So thank you for all... Thank you, everyone, for that. Yeah, I mean, look, I think the history, there's a talk I give titled The Unlikely Romance, and this is why, basically. When you think about where we're up to at this point in time, we have reached a point where hackers and organizations can at least have a productive conversation. That's really only an advance of the last 10 years through my eyes, and it's partly through the work that Bugcrowd's done, it's partly through the work that a lot of other people have done to keep pressure on this. That effort goes back a long way.
And I think really what it comes down to, when you start looking into some of the stories that come out of the late '80s and '90s, is this idea of security not even being a consideration in the early creation of the internet, the early creation of software. And all of a sudden, people start to break things because they can, that hacker spirit starts to manifest in this domain, and everyone promptly freaks out. The best example of that's the CFAA, which, legend has it, was written after Ronald Reagan watched WarGames at Camp David and freaked out, basically, and went to the DOJ, and the CFAA was ultimately the product of that fear. The fact that that prompted the creation of regulation, I think that was important to do. But that baseline freak-out reaction, I think, kind of informed how that legislation was created. There's lots of different examples of that and we've had to basically claw back from that ever since.

David Rogers:
Just recall that Ronald Reagan also watched Close Encounters of the Third Kind, didn't he? That's a different story. So Katie, you used to work for the government, didn't you? So we've kind of come all this way. There's been a lot of pain along the way, particularly for hackers who've been sued and badly treated by companies and so on. So we got to the point where CVD is a thing and governments are advocating for it. So how do governments at the moment handle vulnerability disclosure?

Katie Trimble-Noble:
I do not represent the government, but I will represent my own interactions with this particular topic. So I will say that since probably 2015 or so, there was what I would like to refer to as a watershed moment. We really did start to see the US government and other foreign governments start shifting their mindset from vulnerability disclosure being a terrible thing to "Hey, if you see something, say something, this could actually be really good for us." And I think one of the big things that made that change was really the DOD's Hack the Pentagon effort. I think when Hack the Pentagon happened, it was something that was so groundbreaking because all of a sudden, the Department of Defense, the grudgey Department of Defense that's got all the guns and scary things, was saying, "Actually, maybe we should take these security researchers a little seriously and bring them in and see what they tell us and test ourselves. We are confident. So let's test that, let's not be afraid."
And that was a huge moment, I think, that sent a huge signal to private industry and to other governments and to other government agencies. So again, DOD and the federal executive agencies, I know a lot of times we think the US government or the government and we just kind of think big government, it's all the same. But they are different and the relationships are very different. So you have... Sorry, I don't mean to give you a civics lesson. So I'll make it really quick. But DOD has what we refer to as a parent-child relationship. So there's DOD, and then there's Air Force, Navy, you get the idea here. So they have a parent-child relationship, and then the federal executive agencies have really a sibling relationship. So you have Department of Homeland Security, Department of Commerce, Department of Treasury. They all work together but they're more siblings, they don't have the ability to force each other to do things except in certain situations; particularly, OMB has the power of the purse so they can force most people to do a lot of things.
But when DOD did that, it sent a very strong signal to the rest of the government that they should be open to this idea as well. And then we started seeing CISA really pick it up as well with the creation of CISA, which happened in 2018, and Homeland Security really picked it up from there. And so I think that that really did change it and opened the door. And now, we have things like Hack-A-Sat and Hack DHS. It really was groundbreaking and it's very exciting because now, instead of "don't go to the government because the government's scary and law enforcement's going to show up at your door," the relationship is: if you have a problem, if there's an issue, go to CISA because CISA will help you. And that's very different for the individual hacker than it was 10 years ago. So I like to show that as a really great evolution of the way things could be and should be and are trending.

Harley Geiger:
I would add on to this, to the question of how the government is treating vulnerability disclosure now. A couple of other developments just in the past couple of years, an important one is that now all federal civilian agencies are required to have a coordinated vulnerability disclosure program. So CISA worked with... So CISA has the power of creating binding operational directives. They're usually not really enforceable unless OMB attaches some teeth to it. And in this case, they did. So CISA and OMB worked together. Now, all civilian agencies should have a vulnerability disclosure process. If you look at, say, the Department of Education or HHS or something like that, you should be able to find their VDP page. And it is in fact a bit more permissive. It provides a measure of authorization that we don't see as often in the private sector. So that's one piece. There are several others. We are seeing CVD being brought into regulation as well, particularly for government contractors and for IoT devices, as well as into best practices in a variety of sectors.
An important one is the NIST Cybersecurity Framework. So the Cybersecurity Framework is used... It started out for just critical infrastructure, but now it's being used in a lot of other sectors as well. It's sort of the gold standard for risk management, and CVD is in there. The last thing I'll say about it relates to the treatment of hackers specifically. Starting in about 2018, we saw the Department of Justice, on more than one occasion, step up and advocate for specific legal protection for security researchers. We mentioned the DMCA earlier. On the DMCA, the Department of Justice, on two different occasions, issued letters to the Copyright Office saying we should not be charging hackers under the DMCA. And it was very influential in getting strong protection for security research under the DMCA.
About a year and a half ago, I think, the Department of Justice also changed the charging policy under the CFAA, saying that prosecutors should decline to prosecute good faith security researchers for good faith security research. They're not going to withhold charges if you're doing extortion or something. So as far as I'm concerned, the federal government has made tremendous strides on adopting CVD and protection of hackers. The places that I think we need to catch up now actually are the private sector and the states. These things that we've just discussed are not necessarily there in the private sector, not as much adoption of CVD, and in the states, all the things that we've complained about for the CFAA are still present in state law.

David Rogers:
But there are other countries around the world. So I just [inaudible 00:13:45]. I'm being flippant but I just wanted to touch... So my own experience. I can give you a kind of war story here about how successful coordinated vulnerability disclosure as a concept has really become, and you see this sort of snowball effect of governments understanding and adopting it. So back in 2017, I wrote a document to create some principles on IoT security, and that was in a government committee. And I put vulnerability disclosure policy as a key requirement on IoT manufacturers, and there was a guy in the room and he said, "We can't talk to the hackers, we shouldn't be talking to hackers." And it was that kind of age-old issue that all of us have fought for years and years and years. But in that moment, I was really, really confident because we already had two ISO specs. So two international standards for vulnerability disclosure, and it was already adopted as good practice by companies on the West Coast of the States.
So I had that moment, I was really confident to be able to say, "No, you're talking complete rubbish." And we got it through. But that sort of attitude still persisted, and it was great to have the reference points, and I always think that we're sort of standing on the shoulders of giants each time we move this subject forward. And you're seeing this thing. So that thing became an international standard and it was adopted across the world. So in Australia, Singapore, India, Turkey. And it's baked in, and it's also now law in the UK through the Product Security and Telecommunications Infrastructure Act, the PSTI Act. And that in turn, and I know that's not the subject of this panel, is starting to create new discussions about what we define as computer misuse. So it feels like that's the kind of next step in this story maybe.

Harley Geiger:
So I want to bring up a couple things internationally that cast a shadow over this largely positive picture that we're painting. So yes, it is true that CVD is being adopted, I think, not just in the United States but around the world. That's good and helpful. When Europe passed the NIS 2 Directive, we saw an infrastructure being set up for coordinated vulnerability disclosure that we thought was largely positive and applied mostly to critical infrastructure, like what we think of as critical infrastructure. And each member state was supposed to have a process for handling vulnerabilities.
Then China came out with a vulnerability disclosure law. I'm not sure how many of you are familiar with some of the details of that law, but essentially, when it comes to vulnerability disclosure, as an individual hacker, you have the ability to disclose your vulnerabilities to the Chinese government directly or to the vendor, who then must disclose to the Chinese government. And as a vendor, you're required to disclose vulnerabilities that you receive to the Chinese government. So you're encouraged to have a vulnerability disclosure policy, you're encouraged to have a bug bounty, but it's a giant sucking sound of vulnerabilities flowing to the Chinese government.

Casey Ellis:
[inaudible 00:16:51] 48 hours, right?

Harley Geiger:
That's right. That's right. So within 48 hours. Now, the other development I just want to highlight here as well is in Europe presently, the EU is currently considering something called the Cyber Resilience Act, the CRA. And the Cyber Resilience Act has flown under the radar quite a lot in the United States. We're only talking about the vulnerability disclosure part of it, but this is a huge law. It will have a GDPR-like effect on security and it is going to pass. It is going to pass. It's actually at an advanced stage. Okay, vulnerability disclosure: under the CRA, there are three different versions, but the nut of it is that if you have software in the EU and you discover you have an actively exploited vulnerability, you must notify either ENISA or CSIRTs, depending on how it shakes out, within 24 hours. Within 24 hours.
If you've had an actively exploited vulnerability, 24 hours later it is likely not patched, it is likely not mitigated, and you are telling government agencies about it. Now, again, there are multiple versions, they're going to consolidate them, but some of the versions would have that first agency that you talk to send it to other agencies, who then send it to other agencies, leading to about 55 government agencies in the EU if you have software that's deployed across the EU. So it is a rolling list of software packages with unmitigated vulnerabilities. The point is, so CVD, great, but we are also sort of seeing that process co-opted in a way, right? We're going further left of boom to the point where we are sort of forcing disclosure before people are ready, and doing it in situations where we're putting those vulnerabilities at risk of being used for surveillance or tipping off adversaries.

David Rogers:
So what we're seeing is governments potentially taking advantage. We've seen companies already take advantage of situations. And there are always constant issues of NDAs and private disclosures and almost unwritten threats against researchers. So it feels like they were kind of blindsided by it originally, and now they're kind of getting to grips with it. I mean, what do you think about that, Katie, would you agree?

Katie Trimble-Noble:
Yeah, I think that that's exactly the case. And I'm a little bit concerned because as I see the coordinated vulnerability disclosure principles be adopted across industry, there are still a lot of big companies that don't have coordinated vulnerability disclosure programs or that are trying to stand them up, and these are international companies. So as these regulations continue... Some are good regulations, some are bad regulations. And as they continue, you start seeing industry pull back. And so that progress that's being made positively is now being pulled back a little bit. Think about this from an industry perspective. If you have a product and it has a vulnerability in it, in 24 hours, have you even had an opportunity to triage that vulnerability yet? Do you know it's a vulnerability? And now, the whole idea of coordinated vulnerability disclosure, that was something David said. Do you know about vulnerability disclosure? Do you know about coordinated vulnerability disclosure?
Coordinated vulnerability disclosure means that all parties agree that they will treat this vulnerability under embargoed status until a point where all parties are able to accurately disclose that vulnerability, in a way that doesn't give an adversary an advantage, when a mitigation is developed and disclosed. So if you're disclosing vulnerabilities within 24 hours, what you are doing is enabling an adversary to exploit a customer or end user. And I don't think that's the goal. But you see right there, you have a law that's about to be passed and you have the technical details of that. And the impacts of that are so wide-reaching. You get to a point where you say, "Why are we even doing coordinated vulnerability disclosure anymore? Why don't we just put it all on Twitter or simply not have a CVD program?" And that's dangerous and it's undoing a lot of the progress that's been made over the last 10 years.

David Rogers:
So I hate to say it feels like a chill spreading across the world. Is that what's going to happen, Casey?

Casey Ellis:
Yeah, sure. Let's go with that. Look, I think in general, the way that I'd render out what's happening, as someone who lives in the US in San Francisco but also functionally has a home in Australia, I'm thinking about this through the Western lens. And what I see happening is that Western countries are trending towards transparency as a mode of resilience, right? Transparency is anti-fragile; that's kind of a design principle that's being used to push this along. And I think that's a lot of how it's been sold, frankly, within government. It's a lot of how it's been pushed downhill and how it's been adopted. And that's certainly something that we've taken advantage of.
At Bugcrowd, I've seen the folks working on the disclose.io project take advantage of it as well. It's just a security model that seems to make sense, right? On the flip side of it, non-Western countries do tend to be trending towards control or aggregation in general because ultimately, there's a motivation there to build up firepower. They want resilience, they want defense, they want to be able to fix the bugs that get found, but they also want to increase their capability from a sovereign capacity standpoint. And this is a pipeline for that. So I do see it kind of heading in that direction, and obviously, that's a 300,000-foot observation. I do expect that to continue because ultimately, I think transparency as a resilience strategy wins in the end because it's fundamentally anti-fragile. But if the other approach manages to land the first set of punches, that could change the outcome.

David Rogers:
So you touched on something there; there have been articles about one country hoarding vulnerabilities recently, actually. So is this maybe this kind of, I want to say arms race, but it's probably not an arms race, it's just a... I don't know quite what it is, what is it?

Harley Geiger:
[inaudible 00:23:04].

David Rogers:
[inaudible 00:23:04].

Casey Ellis:
Yeah, I think the international relations environment's gotten progressively more tense, and it's always been tense, but I think that tension's become more obvious post-COVID. And this is through the lens of an Australian: you look at what... This is all on public record, there was a budget of, I think, $97 billion in a project called REDSPICE, which is specifically around standoff deterrent capability in case there's something that goes hot in the region, for reasons that should be fairly obvious that the Aussies are nervous about. I think it's 2.7 billion of that's going into cyber, and that's offensive and defensive. So capability stockpiling is ultimately what it's about.

David Rogers:
That's what I've observed, is that there's a lot of endorsement of offensive work now that wasn't there before, right?

Casey Ellis:
Yeah, I think it used to be a dirty word. And I mean, you think about these conversations, I always go back to the FBI versus Apple in the San Bernardino case, I think it was 2016. And at the time, to create a zero-day and weaponize it for offensive use of any sort was kind of dirty. There was this sort of stench around it. But then there was a lead time between when that happened and when it came out who was actually responsible for that as an organization; it was like four or five years later. And it was an Australian company, and it was actually celebrated as an Australian startup story. So in that time, it went from a dirty word to a thing that was kind of accepted, which is pretty amazing.

David Rogers:
And this is, as I say, the dark secret of forensics tools. These companies try to play up how genius they are, but a lot of them are buying vulnerabilities off the gray market, and there's a healthy... You can go to the Zerodium website and see their little table of elements and the prices attached to them. I know, obviously, Project Zero has been trying to counter some of that stuff, but not a lot of people know about that either, do they? I mean, can any of us discuss that a little bit more, about the gray market and vulnerabilities and I guess how that operates?

Casey Ellis:
Who wants that one? Sure.

David Rogers:
Go to the Zerodium website. I mean, it exists, right?

Casey Ellis:
Yeah, it absolutely exists. I think vulnerability discovery, the way that I try to parse this out to explain it and even to think about it, is that there's defensive and offensive vulnerability procurement. The defensive procurement cycle, basically, you get the bug and you kill it, right? The offensive cycle is where you get the bug and you actually productize it. And as a part of that, you work to keep it secret so that your capability stays operational in the wild and it doesn't get burnt. So everything leading up to that decision of whether or not you burn it or keep it alive looks pretty much the same. It's basically vulnerability research, discovery, thinking through impact, thinking through usability, like where the attacker is going to be coming from and what kind of capability and access they need from that vulnerability, because not every vulnerability is eligible for this type of purpose.
But there's a lot of researchers out there that can do that stuff. And I think that the thing I find interesting at this point in time is even with some of these legislative changes that are happening, gray is the perfect way to describe it because I think it's becoming less illegal or less tightly regulated in a way that I think will actually dictate some of the outcomes that we'll see if you think forward 10 years.

Harley Geiger:
[inaudible 00:26:53]. Also now a factor in the gray market, a couple of things. One, and Casey alluded to this, but the profit, the price point for vulnerabilities being used offensively, is going to be higher than for defense. It puts defense at a disadvantage. And part of the reason, as Casey said, is that for defensive purposes, you want to plug the vulnerability. An offensive vulnerability can be used multiple times, or you sell it under NDA to a single buyer so that you get a higher price for it. It is an issue. But the other force also, though, is that we want to encourage this process being used for defensive purposes. The White House is currently looking at ways to deal with the offensive market. There was recently an executive order on commercial surveillance brokers. I think four of them went under sanction recently as well. And so it is difficult to find that nexus to human rights and be able to distinguish between the use of vulnerabilities for legitimate police operations versus oppressive purposes. But that space is being looked at now.

Casey Ellis:
If I could just really quickly, on the pricing piece. Harley's exactly right, and this is actually a phenomenon that we see demonstrated a lot in bug bounty, which is defensive vuln procurement. So on the one side, if you're selling a vulnerability that's going to be productized, that is inherently more valuable as a transaction. The thing that works in favor of the defender is that you've got the prisoner's dilemma. So if Harley and I have both found a bug and I'm trying to sell it off to the offensive buyer and he's trying to sell it off to the defensive buyer, if he lands the punch first, then I lose the opportunity for the bounty plus the opportunity to exploit the bug, because it gets burned at that point in time. So that economic kind of delta works in both directions.

Katie Trimble-Noble:
So I'm just going to comment on, and kind of what Harley was saying, when we think about vulnerabilities used offensively, I think we're in an age now where we have to seriously consider the impacts of the things that we do. And I'll use an extreme example here, but anything can be used for good and anything can be used for bad. And think about nuclear power, nuclear power for power plants, nuclear power for weapons. And the difference there can be things like regulations and informed decision-making and an informed community, because we have the ability to influence the decision-making and the legislative process and the regulations and standards that go around this market. And I think we're starting to see that with the White House, which is a wonderful thing and I'd like to see more of that because we're at a cusp now where it's becoming very common to understand cybersecurity.
There is no real world and online world anymore. It used to be, "Oh, my God. My Twitter went down. What am I going to do?" Now, you can't get a job without the internet. COVID really accelerated that for us. So it's becoming more in the forefront of humanity that we understand how cybersecurity works and that we have that informed process, because if we don't, it will pass us by, and that's something that I would encourage everyone to be part of. It's part of the reason the Policy village is here, a policy department is here at DEF CON, to help people who are very technical talk to policy makers so that we can have an informed decision-making process.

Harley Geiger:
And I think, to close the loop also on policy and what we were just discussing as far as offensive versus defensive use of vulnerabilities, we should be directing our policy developments towards streamlining the ability for people to disclose vulnerabilities. And so what we talked about at the outset about greater adoption of CVD, that's one way to do that. Greater legal protection for hackers for acts of good faith security research is another way to do that. There's still room to go, though. We talked about China's vulnerability disclosure law, we talked about the EU... If you are a hacker and you are disclosing a vulnerability to a company and you know that company then has to disclose that to a whole bunch of government agencies, you don't know what's going to happen; as a researcher, you may feel conflicted about making that disclosure. And then as a company, you may not want to have to kick off that process every time someone discloses to you.
So we are seeing policies that are kind of going in both directions, where we're making it more difficult, or at least a greater moral choice, a moral conflict, for vulnerability disclosure at the same time that we're easing it in some ways. Another great one that we are hopeful the US government will take on is sanctions. So there are questions about the extent to which vulnerability disclosures are a sanctioned event or sanctioned transaction. If you are a US company and you're receiving a vulnerability disclosure from an individual in a comprehensively sanctioned nation, or somebody that is two hops removed from the SDN list, and you are asking follow-up questions but not paying or anything like that, there's no money exchanging hands, that should not be a sanctioned event. As of now, this is in a gray area. This is something that we are hopeful can change, and it is for the purpose of greasing the skids for defensive use of vulnerabilities, taking away incentives, taking away that easy path towards offensive use.

David Rogers:
I guess for some researchers, especially some of the more naive ones, they might not even realize that they were talking to a nation state as well. So there was the infamous conversation, the George Hotz zero-day iPhone recording that happened and it could have been naivety, but was he trying to sell a vulnerability to China? That was the debate at the time. And I guess nation states might get a bit smart and just false flag the researcher, right?

Harley Geiger:
Or host a vuln contest.

David Rogers:
Oh, yeah, yeah. So frequently, when I talk about this with companies, they get mixed up between bug bounty and vulnerability disclosure. They also get mixed up about what vulnerability disclosure itself means. So I was giving a deposition to a government committee, and clearly, they didn't understand. They thought vulnerability disclosure was forcing companies to disclose vulnerabilities publicly. So let's start on the bug bounty thing. So I guess, Casey, first of all, just explain the difference, please.

Casey Ellis:
Yeah, so when I speak to this, I think there's an added piece there, that bug bounty is often used interchangeably with crowdsourcing at this point in time as well. So it's like words are hard, basically. But I think for the purpose of the regulatory renderings of this, and I think how most people coming into it think about it, the definition I go to is NIST 800-53r5, which basically says that a vulnerability disclosure program is an intake point, a policy, and a process that allows people from the outside world to submit a vulnerability into an organization and to have some sort of expectation of what happens next, effectively. And that's a butchered version but that's the cliff notes of it. But what it also does is distinguish it from a bug bounty program. So it basically says a bug bounty program is if you do everything that we just said, but what you also do is reward someone with cash if they're the first to find and report a unique issue.
Usually, the way it works is that's kind of the model. So the first to find is the one who gets paid. And the more impactful the issue, the more they get paid. So what you're ultimately doing on the proactive side is incentivizing the things that you want and actually mimicking some of the economic incentives that exist for the adversary. So that's the purpose of it and how it works. But the reason that confusion happened, because obviously, I've had a box seat for a lot of this story: bug bounties just got sexy really fast. The thing that happened with Hack the Pentagon, which was hugely valuable I think for reforming the public perception of what a hacker is and actually helping tell a story of their place in the safety of the internet, it's like, if the apex predator of the planet relies on 16-year-olds to give them security feedback, then maybe I should do that too. And that was a pretty massive shift in thinking that happened at that point.
But it was all talked about in terms of it being a bug bounty, and I think, like us as an organization, and some of the other platforms that joined after we started the category, hackers getting paid money and being able to celebrate that, that's a thing that's exciting to talk about and it's fun. So all of this led to most of the focus, in terms of terms of art, actually being around bug bounty. Even though frankly, it's actually the minority. From my own perspective, VDP is something that absolutely everyone should have. That is a cost of entry, a "must be this high to ride the internet" type of thing at this point in time. Bug bounty in the public sense is discretionary. Some organizations are ready to do that, others aren't because they're just not ready to remediate or listen to the internet in that way.
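A concrete way organizations advertise that intake point today is a security.txt file served at /.well-known/security.txt, as described in RFC 9116. Below is a minimal sketch in Python of generating one; the contact address and policy URL are hypothetical placeholders, not any particular company's program.

# Minimal sketch: write an RFC 9116 security.txt advertising a VDP intake point.
# The contact address and policy URL below are hypothetical placeholders.
from datetime import datetime, timedelta, timezone

fields = {
    "Contact": "mailto:security@example.com",  # where reports should be sent (required field)
    "Expires": (datetime.now(timezone.utc) + timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ"),  # required field
    "Policy": "https://example.com/vulnerability-disclosure-policy",  # the VDP terms, scope, and any safe harbor language
    "Preferred-Languages": "en",
}

with open("security.txt", "w") as f:  # serve the result at /.well-known/security.txt
    for name, value in fields.items():
        f.write(f"{name}: {value}\n")

The file itself is just those "Field: value" lines; the point is that a researcher who finds a bug has one well-known place to look for where to send it and under what terms.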

Harley Geiger:
One other distinction is authorization. So for a bug bounty, you are authorized to try to find vulnerabilities and use certain testing methods against a defined scope of assets. A vulnerability disclosure policy can do that. You can provide authorization. But many don't. Many are just a channel saying, "Look, if you find a vulnerability, disclose it to us, but we're not guaranteeing a get-out-of-jail-free card or anything." In fact, the US government's VDPs for civilian agencies do provide authorization. It is the more advanced form of VDP where there is some authorization. And the reason authorization is so important is because it provides legal cover for the security researchers that are trying to find the vulnerabilities. But to Casey's point, VDP should be a fundamental practice at this stage. You can have a very basic VDP that does not provide that authorization and you're still edging into that world. It is still helpful.

Katie Trimble-Noble:
I'm going to jump in real quick. So to simplify, a VDP is a see something, say something. A bug bounty is an invitation to hack. So there's a difference there. And bug bounty payments, or rewards: I don't like to think of them as rewards. I like to think of them as reimbursement for your time, because of how much time and effort was spent in researching that vulnerability, developing the proof of concept, submitting that vulnerability to a company, going through the friction of even filling out a submission because each company wants it a little different, going through that whole process, and then working with that company, maybe that company has follow-on questions, that back and forth.
So I see it as a reimbursement for your time rather than a reward. And I will tell you, I work at a Fortune 50 tech company and this is becoming a very common way of reframing the narrative of what's actually happening. It is a way to offset the dark market and give people an opportunity, those who want to do good in the world and those who have an intrinsic motivation to be positive and to help defend. It's an outlet for them, but also it's a mechanism of saying thank you, we appreciate your time, and recognizing your time.

David Rogers:
That's a really good point, and I'd forgotten to raise this topic, but the value of labor is a really big question for security researchers. Many hackers feel they're essentially being exploited by governments and companies for their time and that there's not a recognition of that. But is that a fair statement, or do you guys have different views on that?

Casey Ellis:
Sure. I think that the other piece, frankly, and this goes back to one of the reasons I founded Bugcrowd in the first place, is that it does distribute access on both sides to the answer to the question that they've got, which is: is this thing vulnerable or not? So you think about that, you switch that to the defender's side: previous to crowdsourcing, previous to bug bounty, you're basically held hostage to whatever hourly rate you're paying to get consulting. And ultimately, it's like one person being paid. So you've already got a supply-demand imbalance there, and then you've got the problem of one person being paid by the hour probably never being able to actually outsmart all of the potential adversaries that could figure the solution out before they can. So I think there's a counterbalance to that, what you just brought up, in that I think the security consulting industry has been incredibly overpriced in different areas with very much a caveat [inaudible 00:40:02] approach to quality. What this does is it actually brings quality to the fore.
So I think there's that part of it that's pretty important to call out. But also, on the payment side, it's an interesting one, and obviously, I'm going to have some bias in my answer here, but people don't have to participate. There's no arm being twisted in terms of their own participation. And I would add to that, I partly agree with Katie on how you were rendering what the purpose of payment is. I actually think about it more in terms of the value of the data. If the organization running a program has a question, whoever ends up giving the answer has that answer, and there's a value to that transaction that's more tied to the data in terms of the payer. On the researcher side, maybe it's got to do with the amount of time and effort they put into it, but ultimately, that doesn't really matter in terms of the marketplace dynamic there, if that makes sense.

David Rogers:
Yeah. And I guess it depends where you come from in the world and what the cost of living is, right? The value might be huge, it might be life changing. So I just really... We don't have much time left. I just really want to give the audience the opportunity to ask any questions if you'd like. I know Casey has to dash off to deliver a talk, so I don't want to hold him back. But you have a great opportunity to speak to some of the world's greatest minds in vulnerability disclosure. So David.

David:
Good afternoon, super interesting. What are folks' thoughts about I'll say US government, because we've got multiple governments represented here, US government coordination as it relates to very recent regulations put out by our good friends at the SEC that can actually end up driving victims to disclose really sensitive stuff in the middle of their trial?

Harley Geiger:
Totally agree. I think the SEC has made a very bad move. So for those of you that are not familiar, the Securities and Exchange Commission has actually historically been a very forward-leaning agency when it comes to cybersecurity and they actually have done some really great work in trying to push public companies towards being more transparent about their security posture and securing their systems of control for financial reporting. So I want to have that caveat. So they recently came out with a rule. The rule does a lot of things. And much of the rule, I think, is positive. There is one aspect of the rule that's getting a lot of attention and rightly so. And this is on incident reporting. And so what that does is it requires all publicly traded companies to report their cybersecurity incidents within four days of determining that those incidents are material.
And what that means, materiality, is that you determine it's significant enough that somebody ought to know about it. But as we discussed with regard to vulnerabilities, within that four days, you may or may not have contained or mitigated your incident. Now, that reporting through the SEC is public, it's public by default. So you report it through your 8-K form, and your 8-K becomes public as an investing document. And that is the Securities and Exchange Commission's purpose. It is to provide information to investors. It is not really, with this regulation, to strengthen cybersecurity. The SEC has heard that, and they heard it not just from private industry, but from consumer groups, as well as, I know, from other government agencies, US government agencies. I was surprised that they kept that timeline in. Arguably, companies were already supposed to disclose their cybersecurity incidents. What's new is this timeline, this four-day timeline after materiality.
I thought that they were going to have something about maybe if you haven't contained or mitigated your incident, perhaps you can delay disclosure and have some other things on top of it, no dice. Instead, what they said was, "Well, here's our exception." If the attorney general asks the SEC in writing to delay because there is a threat to national security or safety, then there can be a delay. It's like that is not going to happen. That is a very, very low number of incidents. So for those of you that work at public companies, you'll probably... Functionally, what's going to happen is if you have a cybersecurity incident, you're going to have to draw in more parts of your team. You're going to draw in your corporate attorneys who should be familiar with the concept of materiality as well as your corporate communications because you will be disclosing this incident publicly within four days.

David Rogers:
Go ahead.

Speaker 3:
Hi. I have a question about safe harbor. One of the things I've noticed recently is that it's pretty common for safe harbor language to exist in programs launching now. But I've noticed a trend of them including request-per-second limitations. And I would like to know where I'm wrong in my logic. If you, for instance, take Visa, they launched a program recently with a one request per second limit for testing. You can't load a web page without exceeding that automatically. Why is safe harbor not meaningless in that scenario?

Katie Trimble-Noble:
Sounds like poorly written rules by people who were uninformed.

Speaker 3:
Yeah, but you see...

Katie Trimble-Noble:
Casey, that's you.

Casey Ellis:
I mean, what Katie said. Basically, I think there's a bunch of things like that that you see. A lot of the recommendations and policy guidelines that exist in bug bounty programs in particular are basically copypasta that's 20 years old or more. So there might've been some organization or some program at some point in time that said, "Okay, here's a particular rate limit that we want to put on this program because of blah, blah, blah," and it made sense. But the idea of that being a useful way... Really, what they're saying is, "Please don't hammer our stuff." The idea of actually putting a specific number on that is a bit silly, but I kind of get the intent behind it.
I think with the safe harbor component of it, usually safe harbor clauses are written as an if-this-then-that. And this is: if you follow the rules, these conditions, then we will authorize you under the CFAA, exempt you from anti-circumvention claims under the DMCA, exempt you from ToS violations, and we'll just say that what you're doing is a good thing, right? That's generally how it works. And yeah, your point around this actually being pretty important alongside that, it's well understood. I think Harley would probably be better to jump in on the legal side. What I'm trying to do here is paint a picture of why that happens and why that exists. And oftentimes, it is a balancing act between creating comfort on the side of the organization putting these policies out there, and it being fully complete from that side.
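To make the rate-limit piece concrete: if a program's rules cap testing at, say, one request per second, the researcher's tooling has to throttle itself to stay inside the conditions the safe harbor is tied to. A minimal sketch in Python, assuming the requests library; the target URL, paths, and the one-request-per-second figure are illustrative placeholders, not any specific program's terms.

# Minimal sketch: keep automated testing under a program's stated request-per-second cap.
# The target, paths, and the 1 req/s limit are illustrative placeholders.
import time
import requests

MAX_REQUESTS_PER_SECOND = 1.0  # taken from the program's published rules
MIN_INTERVAL = 1.0 / MAX_REQUESTS_PER_SECOND

def crawl(base_url, paths):
    """Fetch each path, sleeping between requests so the stated cap is never exceeded."""
    last_sent = 0.0
    for path in paths:
        wait = MIN_INTERVAL - (time.monotonic() - last_sent)
        if wait > 0:
            time.sleep(wait)
        last_sent = time.monotonic()
        resp = requests.get(base_url + path, timeout=10)
        print(path, resp.status_code)

if __name__ == "__main__":
    crawl("https://target.example.com", ["/", "/login", "/api/health"])

The sketch only shows that such a limit is enforceable on the researcher's side; whether a number that low is reasonable to put in a policy is the question the panel is debating.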

David Rogers:
So before we take the next question or before Harley comments, I'm going to give you your fleeing rights to leave the room. So please thank Casey for joining us. And good luck with your talk. So sorry, you all don't have to leave. You can stay and talk to us. So next question.

Speaker 4:
Thank you. I'll be cognizant of time, but I think this is pretty relevant. Okay, so governments are starting to mature and enable their citizens to report vulnerabilities. We think that's naturally a good thing. Everyone on the security researcher front, you can think, "Hey, I have a little extra protection here that I didn't have before and can sleep a little easier when reporting something." But as governments start to kind of wisen up, all around the world, not just ours or any other particular ones, and encourage people to report vulnerabilities, what's to stop governments from starting to really emphasize citizens of other countries reporting vulnerabilities to them? Because we might be friendly with some folks in public, but behind the scenes, there's always an information war race going on. And I was curious if everyone had a take on that.

Harley Geiger:
I can't hear very well.

Speaker 4:
Oh, I'm so sorry.

Harley Geiger:
[inaudible 00:48:07].

Katie Trimble-Noble:
Yeah, yeah, yeah. [inaudible 00:48:09].

David Rogers:
So basically, as this becomes, I guess, sort of quite nationalized reporting to these schemes, does that then have sort of blowback on the individuals and their nationality? So for example, in the mobile industry, we have this pan-industry reporting scheme. We have people from all different countries, individuals and companies that are in different countries. Do you think that some countries may start to restrict people of certain nationalities just because of where they're from, basically?

Harley Geiger:
So I think the answer is yes, but I also think it differs by country. So China's vulnerability disclosure law that we touched on earlier does in fact have a restriction on reporting your vulnerability outside of China. I don't have the exact language in front of me, but that is in there. I'm actually very curious to know how companies are actualizing this, particularly multinational companies that are based outside of China but have operations in China. But yes, so if you're an individual there, like I said, you report to the vendor who reports to the government, or you report to the government. You are not otherwise authorized. In fact, there are criminal penalties if you make it public or if you report to some other government outside of China. But most other nations that I'm aware of, even the law that I would describe as negative on vulnerability reporting, the Cyber Resilience Act in the EU, included... Most other laws do not really require you to report directly to government as an individual.
So you report to the vendor and then the vendor has certain obligations, but you're not generally punished or restricted as an individual from reporting outside. I don't see that as a trend so much outside of China. And maybe there are other countries that are more authoritarian that have something like that. I'm sure there are some unspoken codes in certain countries, but I'm not really aware of them personally. Now, the other, sorry, the other restriction is on the export, or sorry, the sanctions side. I'll let Katie talk about that. But that is the other way that they can discriminate against individuals based on their geography. Lynn, can we close that-

Katie Trimble-Noble:
Yeah, just let me finish and we'll go ahead. So there is one other way, and Harley just said it. When we think about sanctions, US government sanctions specifically, and this happens across the world, there are a couple of ways sanctions have an impact, but one of them is if you are a researcher in an embargoed country, if you try to submit a vulnerability to a US company, that US company may not be able to receive it and may simply IP block you. So they can't actually even receive the information from a sanctioned country because they don't want to have the possibility of having a sanctioned event happen, so they simply do IP blocking.
And so that means that you can't prove a negative. I don't know how often that's happening because it's being screened. And so that's a perfect avenue where you're having individual researchers who in some cases may be taking great personal risk upon themselves for trying to do the right thing and report it to the vendor prior to reporting it to their highly oppressive government. So clarifying and updating sanctions, US sanctions and sanctions in other countries across the world, is something that should be taken very seriously. Those regulations should be updated to apply to reality as we understand it. And I think Lynn's going to kick us off now.

Speaker 5:
[inaudible 00:51:46].

David Rogers:
Okay.

Harley Geiger:
Thank you.

David Rogers:
Thank you very much.