Responsible Disclosure Programs with Katie Moussouris & Casey Ellis | 401 Access Denied Ep. 22


Rev.com transcript from podcast recording on 25 February, 2021

Katie Moussouris, Founder & CEO of Luta Security, and Casey Ellis, Founder & CTO of Bugcrowd, join Joe and Mike to talk all things responsible disclosure – the good, the bad, and the ugly. It can be difficult for organizations to know how to handle vulnerability reports safely and where to put their time and effort; developing a disclosure program can be so complex that many organizations don't create one at all. So we asked: what processes should companies put in place to be sure vulnerability information is shared safely and in a usable way? Our guests break down the best practices for formulating your responsible disclosure program and bringing security awareness to your organization.

Mike Gruen:
You're listening to the 401 Access Denied Podcast. I'm Mike Gruen, VP of engineering and CISO at Cybrary. Please join me and my co-host Joseph Carson, chief security scientist at Thycotic as we discuss the latest news and attempt to make cybersecurity accessible, usable, and fun. Be sure to check back every two weeks for new episodes.


Joseph Carson:
Hi, everyone. Welcome to another episode of 401 Access Denied. I'm your co-host today, Joe Carson, joining you from Tallinn, Estonia, where it's pretty dark and cold. I'm the chief security scientist at Thycotic and an advisory CISO for several companies out there. And I'm joined by some awesome guests here. I'm really excited about today's conversation because I think it's really important. I think this is something in the industry we really have to bring forward, and we all have to find a way to work together. I think this is a discussion where that will actually come to life. I'm joined again by my awesome co-host Mike Gruen. Mike, do you want to give us a little background on what we're trying to do?

Mike Gruen:
Mike Gruen, VP of engineering and CISO at Cybrary. And today we're going to be talking to Katie and Casey about vulnerability disclosure programs and bug bounty programs and all sorts of good stuff and why, if you're not doing it, you're doing it wrong. So yeah, I'll let Katie introduce herself and then Casey.

Katie Moussouris:
All right. Well, thanks for having me. I'm Katie Moussouris, founder and CEO of Luta Security, and we provide governments and large, complex organizations with assessments and a positive roadmap for how they can build sustainable vuln disclosure programs and bug bounty programs. I'm really happy to be here today, talking with you and my good old friend, Casey.

Casey Ellis:
Absolutely. So yeah, my name is Casey Ellis. I'm the founder, chairman and CTO of Bugcrowd. We didn't invent vuln disclosure or bug bounty programs, they predated Bugcrowd's existence, but we pioneered the idea of taking those concepts and actually delivering them over a platform back in 2011, 2012. Yeah, it's fantastic to be chatting, and obviously good to be catching up with you again, Katie. It feels like a recurring conversation that you and I have had over the past eight years. There's been progress, but there's still more work to do, so this will be a fun chat.

Mike Gruen:
Absolutely. I think 2020 has been a year that's had its goods and bads and lessons learned. We've seen a lot of different disclosures and different things happening, and I think 2021 is going to be pretty much a continuation of that. But I am hoping that we've learned a lot of lessons. Katie, you've been, let's say, one of the innovators and really at the forefront of vulnerability disclosure. It's been a long time since you first got involved in it. Can you give us a bit of background on what's been happening and how we got to where we are today?

Katie Moussouris:
Yeah. I mean, I can definitely give a quick history of vuln disclosure. What a lot of organizations fail to realize is that the concept of vulnerability disclosure originated with hackers. They think it originated with companies who started vuln disclosure programs early because they were forced to, Microsoft is a great example, but actually the history is something different. Hackers basically wanted to alert the public that they were vulnerable to things, and as a courtesy maybe give the vendor a few days to fix it. The original vuln disclosure policies were five day disclosure deadlines. We've come a long way from the earliest vuln disclosure policies to where we are today, where we have a couple of ISO standards that govern not just the external components of a vuln disclosure program, but more importantly the internal digestive system that has to process all potential vulnerabilities, whether you found them yourself or somebody outside your organization found them.

Katie Moussouris:
And as a hacker myself, it shocked me more than anyone when Microsoft, when I worked there, asked me to help with those ISO standards. And I got there saying, "Why would hackers ever want to be ISO compliant?" We managed to scope it down to what vendors should do when they're preparing to receive vulnerability reports and process them. And then we got a couple of ISO standards out of it, which were the basis of some of the biggest programs that we've seen out there. I launched Microsoft's first bug bounty program in 2013. That was really the first time a major vendor besides Google had offered a big bug bounty, and that was one of the big sea changes that made it possible for complex organizations to think this through.

Katie Moussouris:
Now, luckily my friend Casey was around and had just started this company, and so folks were catching on to this idea that hackers weren't necessarily your enemies. Hackers could be confrontational, but hackers are trying to do the right thing for the most part, because all humans are pretty much trying to do the right thing for the most part. Last time I checked, hackers are still human beings. So wrapping up the quick history lesson brings us fast forward to 2016, when I helped the Pentagon launch the very first DOD program embracing hackers, called Hack The Pentagon, the first bug bounty program of the US Department of Defense.

Katie Moussouris:
And the momentum has been growing, it's been great to see. I've seen a few folks and organizations kind of getting a little ahead of themselves in terms of trying to roll out these programs before they really have set their digestive system up appropriately. You don't want to get bug indigestion by any means. But I think overall the work of all of us in this community has been contributing to the acceptance of hackers and the idea that we can be helpful, and especially if you pay attention to what we're telling you, we don't necessarily need to get paid every time, although that's good and appropriate. But really it's about communication and getting those bugs fixed. And that seems to be the most important thing to all of us. There you go, short history of vuln disclosure, and you got it.

Mike Gruen:
And just picking up a little bit on the bug indigestion, I think that's a great segue into Casey and Bugcrowd. I mean, Cybrary is a customer of Bugcrowd and they definitely help us so that we're not choking on all of the constant reports and repeat reports of things that aren't actually problems. So yeah, Casey, do you want to maybe jump in and talk a little bit about that?

Casey Ellis:
Yeah, for sure. That was a fantastic synopsis of where we've come from and where we are today, so I'll speak to where we fit into that. Really there were two things that triggered me to start Bugcrowd. One was this awareness of the fact that cybersecurity is a human problem. We're talking about humans being good, and the hackers being humans too, but humans also make mistakes. And I think there's this idea that the internet collectively is becoming more aware of: vulnerabilities are going to happen in spite of all of your best efforts. Those efforts vary across organizations, but it's just a part of human nature as well.

Casey Ellis:
All right, so if you've got this helpful group of people at the table with the ability to identify risk as a by-product of that, who are trying to give you information for you to use, that seems to sort out some of the issues that the cybersecurity industry has with the lack of talent and the lack of ability to really get the right people to answer these questions. A big piece of what I wanted to solve with Bugcrowd was to be able to plug that latent potential together with the unmet demand, to try to make security roll forward in a better way.

Casey Ellis:
The other part of it is just keeping my friends out of jail, because that's the sort of origin story. Finding a vulnerability in someone's stuff and telling them about it is an inherently adversarial process, usually the first time around, especially if the recipient hasn't prepared for it, and if the person who's doing the talking is inexperienced and going in a little hot, that's a thing that happens a lot. So being able to standardize and normalize the conversation a little bit in order to make it smoother, that's really what we were trying to get sorted out with Bugcrowd.

Casey Ellis:
When it comes to bug indigestion and all those sorts of things, triage is one part of it. I think it's the part that gets talked about the most in the context of a public program, because you're trying to listen to the entire internet and it's a noisy place. So getting to the signal through the noise is very definitely a part of what needs to get done there. Where I've had a lot of respect for the work that Katie's done, ISO 30111 and those different pieces, and what she does with Luta as well, is that most organizations don't realize, for starters, how vulnerable they are, and secondly, how ill-equipped they are to actually deal with information that's coming in from the outside on a reactive basis and integrate that into their process of building their company. Because they didn't start the company to handle bug reports. They started it to do whatever they do.

Casey Ellis:
So finding a way to fit that in and have it all roll forward is a very important aspect of it, which is something that we help particularly smaller organizations with, and the larger ones that we've been working with for longer. But there's a lot of different moving pieces to the puzzle, I guess, is really the point here.

Mike Gruen:
Absolutely. For the audience, I just want to make sure we clarify something. I'm always beating that drum, because most hackers out there are misrepresented in the media. Most hackers are using their skills for good; they're there to help. We're good citizens out there trying to make things transparent, trying to get people to step up and be accountable. I myself am an ethical hacker. I'm always looking to make sure that I do things in order to make the world a safer place, and you're using your skills to help organizations and help identify those issues. So for the audience, when we talk about hackers, unless there's more context, we're talking about good citizens who are using their skills to help you improve and provide a better service to customers. I think that's what we're really talking about here. [crosstalk 00:10:51]. Go ahead.

Casey Ellis:
Sorry, just to jump in. And I'm fully mindful of the fact that I'm sitting in front of the usual suspects here. There is this element of offensive security where ... one of the things that got me into security in the first place is that I really enjoy thinking like a criminal. I've just got absolutely no desire to be one. And I think that's been one of the triggers or sources of some of the misunderstanding that's out there. But it's a really good point that you've raised, that hacker as a phrase became synonymous with the bad version. To me, hacking is actually amoral. It's a thing that you do. It's a mindset. It's a set of activities and interests that people have. It doesn't actually have any inherent moral loading. You can use it for good, or you can use it for bad. And we do the same thing: we try to use the word hacker purely in the good context. If we're talking about the bad version, it's a malicious attacker or a cyber attacker, something like that.

Joseph Carson:
We always have to make sure we put it in the right context: it's criminal, it's malicious [crosstalk 00:11:54].

Mike Gruen:
I was just going to say, just criminal is fine.

Joseph Carson:
We've had discussions. I work in a lot of incident response, I do a lot of penetration testing, and I always make sure that when we're talking about ... I've been working on a ransomware case for a number of weeks. They're digital thieves, they're criminals, and we have to make sure that we use the right context, because otherwise what we end up doing is putting them on a pedestal. We put the criminals up there as elite, as sophisticated, and they like that, they embrace it. What we're doing is encouraging them to do more. We want to make sure we actually call it what it is. It's a criminal activity, it's crime, it's digital crime.

Joseph Carson:
And we have to make sure that we get the media to pick that up and actually use that as the headline. It doesn't make it cool, it doesn't make as catchy a headline, but we have to get to reality and call it what it really is. I have a question for you, Casey. I'm definitely a very big promoter of security researchers and finding vulnerabilities. What's your standpoint when they weaponize it? There's a point of actually finding a bug, but then weaponizing it and making that available. Where are the ethics? Where are the boundaries, where it should stay within the legal side of things? What's your advice when you do find something, but then you make an exploit that actually will take advantage of it?

Casey Ellis:
Yeah. It's a challenging question to give a single answer to, because half the time weaponization in a good faith hacking context is really about explaining the nature of the problem to the recipient. Engineers don't necessarily see a simple proof of concept on a website and automatically understand the importance of it. Sometimes you've got to do a bit of extra work. I think when it comes to drawing the line around ethics as a finder and as a submitter, it really does come down to, firstly, understanding what the expectations are. That's this whole idea of standardizing vulnerability disclosure brief language and all of those different things: expectation setting for both sides.

Casey Ellis:
As a finder, if I submit to this program, this is what they expect me to do. I'm not forced into that, because I'm on the internet and I can do whatever I want. But if I'm engaging, it's going to be most productive if I engage in these sorts of ways. And probably more importantly, this is what I can expect in response from the recipient. If I'm going and making sure that I've weaponized, to use your word there, in order to explain it, I know that the recipient is not going to misinterpret that or take it the wrong way.

Casey Ellis:
It's all those sorts of things. I mean, we're talking about unintended consequences as a service here, so it's very difficult to give a one size fits all answer to that, but that's as close as I can get. I'd be interested to see if Katie has thoughts on that too, because she's obviously seen a few of these.

Katie Moussouris:
Well, yeah, I mean definitely creating a proof of concept that demonstrates the severity is one way to interpret your question, in terms of proving it to folks. Sometimes even with a very, very strong proof of concept exploit that you've developed, they still misunderstand the root cause, and they only understand that one vector that you showed them. So they'll fix that one vector. I think it's an important piece. And in fact, in the Microsoft bug bounties, which I created initially, and they still have this criteria, in order to qualify for the highest bounty amount, you have to produce working, reliable proof of concept code.

Katie Moussouris:
The point of doing that is that Microsoft's defenses have evolved over time and they're quite sophisticated in the latest operating systems, so you have to demonstrate that you could actually leverage that vulnerability to do harm. Part of the reward that they're paying for is that extra validation step, so that it's really fast for them to say, "Yep, that's definitely an issue," and get to work on fixing it, as opposed to offloading that work to the receiving team to understand it in the first place, get to the root cause, and address it comprehensively. I think it's an important piece.

Joseph Carson:
Yeah, I completely agree. For me, probably about six years ago I was doing a bit of research, and one of the things I was doing was basically taking data from previous breaches and correlating it together. In the EU that is actually considered creating a new data breach, because of GDPR and data protection. So from a legal perspective it meant a bit of a slap on the hand from Europol, or at least they could inform me not to be doing it. But when you get into those things, especially when you're working across borders in different countries, what would you recommend to security researchers?

Joseph Carson:
Because when I go from country to country, I have to choose which laptop to take, or which hard disk or drive to pop out of my laptop, so I'm not breaking laws. How do we deal with this when it comes to cross-border work, and especially companies that have different office locations across the world? How does that factor into this?

Katie Moussouris:
I can take the first facet of this one, because I helped to renegotiate some of the export controls around intrusion software and intrusion software technology as part of The Wassenaar Arrangement. The Wassenaar Arrangement, for those of you who don't know, and thank goodness you don't have to know, is basically an export control agreement between 42 countries. It was originally 41, but they added India in the last couple of years, so it's a total of 42 countries. The issue is that at the Wassenaar level, all of these countries decided that they would have people fill out export control forms in order to bring their tools across borders, et cetera. And in some cases, depending on the country, even if it was from the same company to a different corporate office in a different country, they might have to deal with some export controls.

Katie Moussouris:
One of the most important things for me to get accomplished as part of the official delegation to renegotiate that was to make sure that incident responders and people trying to do vuln disclosure would not have to bother with export control forms and waiting, and delays and whatnot. Now that being said, that doesn't mean that countries that are part of Wassenaar haven't also implemented their own more restrictive controls. Let's just talk about France and Germany for a second here. They have some of the most restrictive controls, especially around tools. There's a very famous moment when our good friend in the hacker community, Halvar Flake, real name Thomas Dullien, was trying to come and bring a training to Black Hat and was basically denied entry to the country because of what was on his laptop. And so he had to miss the training. Now, fun fact, everybody and their grandpa and grandma was trying to impersonate Halvar to get into the Microsoft party in Vegas that year, just as an aside, as if we'd never seen him before.

Casey Ellis:
That totally checks out.

Katie Moussouris:
Yeah, exactly. So those damn export controls almost got a bunch of gatecrashers into the Microsoft party in Vegas. But really, on a very serious level, I worked on this last year, back when travel was a reality, and I can't even get my head around that now.

Joseph Carson:
The old normal.

Katie Moussouris:
Right.

Mike Gruen:
The old normal, yeah.

Katie Moussouris:
But last year, we were doing an exploitation contest similar to [inaudible 00:19:44], and we were doing it in the United Arab Emirates. That was a whole thing where we had to make sure that essentially the exploits themselves were contained to just the researchers and the receiving party. We had to have no devices in the room. We had to basically set up an impromptu SCIF to try and contain these exploits, because I knew that we weren't export control protected in that place where we happened to be. The exemptions only work if you are the reporter, if you are the receiving vendor, or if you're the coordinator that is going to work that arrangement.

Katie Moussouris:
Now we were none of those things, we were judges who had set up a contest. So it was really, really tricky. And I'm just glad, to Casey's point, that we all stayed out of jail. Nobody got fired, nobody got arrested when they landed back in their home countries. And actually the State Department reached out to me, because I did a little talk at Summercon this year about how crazy it was. They reached out to me and they were like, "We saw your talk." And then they didn't say anything beyond that, so I guess it was fine. They were watching it, but I guess it was fine. But yeah, this is super tricky.

Katie Moussouris:
Last thing I'll say about it is that we're never going to get to a true safe harbor for researchers until we get normalization across the globe on hacking and cybercrime laws, anti-cybercrime laws, and export controls. I mean, we're going to be very old people, Casey, by the time this has settled. We're going to be radically old. [crosstalk 00:21:18].

Casey Ellis:
Well, I don't know about you, but I've aged 10 years in the last 12 months, so there is that. I completely agree with that. I don't know that we'd ever necessarily see this get solved until anti-hacking laws, and even things like the DMCA and anti-circumvention laws, all of the different things that get brought in either to legitimately prosecute or, more frequently, to chill security research, basically get made an afterthought or an addendum to a more traditional crime. That to me seems to be a rejigging of the legal construct that could work, but that's a ways off, because we're still in this place where things are very vulnerable, and there's not a consensus or unified understanding of the role of good faith hacking as a part of the overall security of the internet.

Casey Ellis:
There's definitely a lot of good intentions, and I feel like there's been a lot of progress that's directionally correct, particularly over the past 12 months, but really over the past 10 years. But yeah, to Katie's point, I think there's a long way to go with that. What I would add to it as well, just real quick: we've been talking about physical transit and mostly potential criminal or state level legal risk. On the civil side, this is part of what we've been really pushing forward on with the folks that are involved in the disclosure project: how do we standardize, as best as possible, the kind of things that need to be written in a policy to clearly indicate to the finder that if they don't basically do criminal stuff, then they're going to be okay, they're not going to be pursued from a civil standpoint?

Casey Ellis:
And that's not perfect, but the more consensus there is and the more adoption there is of those types of terms, the higher the tide rises on that. And I do think as well, through conversations with people like the EFF and ACLU and so on, that for organizations that go to the effort of actually enumerating the different things they're going to allow from an authorization or access or exception standpoint, the more of that they've done, the less likely they are to actually prosecute by mistake, if that makes sense. The likelihood of civil cases being brought under 1030, for example, where there has been a decently written vulnerability disclosure brief, is very low, because that organization has actually put the time into figuring out what that means. I think there is an element of that as well: people actually just stopping for a second and working out what the implications of all this are, and moving forward like that, kind of [crosstalk 00:24:15].

Joseph Carson:
I totally agree. For me, it's all about intent, whether your intention is for good or for bad, that's ultimately what it comes down to ... the definition is [crosstalk 00:24:23]. Sometimes we make mistakes.

Casey Ellis:
Ideally. I don't know if the law has quite figured out how to wrap its arms around that. And if you want evidence of that, just watch the Van Buren briefings to the Supreme Court-

Joseph Carson:
Yeah, I think I remember that.

Casey Ellis:
... a month ago.

Joseph Carson:
Yeah, the Starbucks double-spending one with the rewards cards, and even the guy who did the PlayStation mod. All of those are about thinking outside the box about how you can do something, and I think companies should look at that as feedback.

Casey Ellis:
Yes, ideally.

Joseph Carson:
For me, it's more like whistleblowing, in my view. And there should be some type of protection, like whistleblower protection.

Casey Ellis:
Yeah, let me touch on that for a second, because I agree with the dynamic. There's still this history of it being a very adversarial relationship. How I like to explain vulnerability disclosure in particular, and I've heard Katie use this as well, as well as others, is more like the idea of neighborhood watch. You've got stuff out on the internet, the internet is this gigantic neighborhood, and there are people that can identify risk and potentially want to tell you where you might have an issue. Really what you're doing is saying, "I'm open to that feedback and I will interpret that feedback as positive."

Casey Ellis:
And you can sort of see the difference, because neighborhood watch is a form of whistleblowing too, but framing it in friendlier ways really establishes the true nature of what's going on, in a way that helps ameliorate some of the sins of the past.

Mike Gruen:
It's a metaphor. It's simply your neighbor coming up to your door and saying, "You left a window open." That's the form of it. And ultimately, if the response is, "Oh, I'm going to sue you because you shouldn't have come to my door," I don't want-

Joseph Carson:
Suppose it's the neighbor who comes in through the window to say you left your window open, right?

Katie Moussouris:
Yeah. So that's where we get into some ambiguity, right? A lot of times the way the scoping of vuln disclosure programs and bug bounty programs is written confuses the hackers, and I don't blame them. It'll say things like, do your best to show the impact of this vulnerability. And they're like, "I've got some credentials, what can I do with them?" And then they start pivoting all through your network, and that's not what you meant, right? They think they're doing neighborhood watch stuff and it's all authorized, but actually you've just found them rooting around your underwear drawer and they're saying, "What? You told me to tell you what I could possibly do with this access." And I'm like, "Get out of my house." You know what I mean?

Katie Moussouris:
I think a lot of organizations are feeling invaded when they haven't really thought this thing through. Honestly, that's a lot of the work that my company does with organizations: they understand conceptually what they want in terms of scope, but they have never thought through some of the scenarios that are fully reasonable and fairly common, where a researcher will accidentally or deliberately go out of scope, but still not with bad intent. And they have to figure out how they're going to make those decisions and how they're going to behave.

Katie Moussouris:
A great example is the DOJ put out guidelines for how you're supposed to think these things through when something like that happens. Part of that was because of the work we had done with Hack The Pentagon, and helping to create the ongoing vuln disclosure program that was outside of any particular time limited bug bounty challenge, where if you see something, say something, you're supposed to come forward to the DOD. But we did run into people who were definitely out of scope, and we had to coach the DOD and DOJ, like, "No, this is not it. You're not going after these folks right now."

Katie Moussouris:
That's why DOJ put out those guidelines, to help organizations think that through. And a great example of where they always fall down on the first pass is they say, "Oh, we want no data exfiltration whatsoever." And I'm like, "Yeah, but accidents happen." So what are you going to do when somebody actually tells you, "I didn't mean to, but I did see data I wasn't supposed to." That's just a classic example.

Casey Ellis:
No data exfiltration whatsoever, please provide a clear POC.

Katie Moussouris:
Right.

Casey Ellis:
Where's the line between those two? Dumping all of the records versus creating enough information to demonstrate that you've actually found something, there's no real established standard on where to draw that line, which is part of why it becomes a case by case thing. On the flip side of that, you see a common clause: please don't use automated tools. Well, it's a computer, that's kind of the whole point. What exactly are we talking about here? What that clause is actually saying is don't do aggressive scanning where you've not inserted your own creativity, because A, we've already done that, and B, we don't want the traffic, right?

Joseph Carson:
And we don't want that traffic multiplied by all of you. [crosstalk 00:29:32].

Casey Ellis:
But it's such an ambiguously phrased term that you end up in a position where you've got folks for whom English is a second language, and folks that definitely don't have a legal background, trying to read through and interpret all this stuff to work out what's okay and what's not, and it's imperfect. I like that there's a lot of work going into standardizing this stuff: better preparing the companies on Katie's side with Luta, and on our side, sitting in the middle to be a translation layer and effectively a broker between these two groups of people that really need to talk but don't have a great history of being able to do that. It's all heading in the right direction, but there's still a long way to go, I think.

Joseph Carson:
Scoping is probably the most critical thing we're talking about in the entire process: making sure people understand what is okay. In everything I do, when I'm doing scoping, the top of my scope is do no harm, cause no harm; everything else can be accidental. Don't cause the business to lose money as a result of my activity. If I see data, that's one thing, but if I leak data, that's another thing. My goal is to not cause any harm to the business, and everything else is a by-product of what we're doing.

Joseph Carson:
One of the things that also comes up in this topic, when you're talking about automated tools and the ethics side of things, is that there's a lot of cross-border activity, a lot of people in India doing this as well. And one thing I loved last year, when we're talking about scoping, was listening to the talk about the guys who did the courthouse pen test and ended up getting arrested and so forth.

Katie Moussouris:
Although that was in scope though, that was totally in scope.

Casey Ellis:
As an example of it being in scope but all of that still happening, that was a really good example of that.

Katie Moussouris:
I mean, honestly, that's a good example of why I don't think scoping is the most important thing in this process at all. It's having the organizational maturity to, one, get behind doing something good with vulnerability reports. Two, in that particular case, that was just political infighting, where one branch said they had legal authority and another branch of law enforcement said, "No, you don't." And it was their infighting that caused those arrests to happen and everything. [crosstalk 00:32:10].

Katie Moussouris:
For me, yeah, we do a vulnerability maturity assessment of organizations, and it's five different capability areas and only one of them is engineering, right? Because it's this whole picture internally. I've seen organizations try and just skip to the scoping step. They're like, "Okay, first thing we need to do." And this is the big problem that I have with the existing guidance that came out of DHS CISA for the binding operational directive. I love that they have this as a major initiative for the US government saying, "Thou shalt have some way to get in touch with you to report a vulnerability." I love that. What I hated was they said, "Step one, decide what's in scope." And I'm like, "No, step one is decide how you are fixing vulnerabilities that you already know about. What is your capacity? You need to add capacity."

Joseph Carson:
Can I jump in there? Because I think the number one thing is to figure out how you want to actually receive that information. To me, that was the thing, right? As a security researcher, sometimes they're coming in through the window or through the other way, because they have no other way to get your attention. They don't know how to actually approach you and how to get you that information. To me, the most important thing is how do I as a security researcher provide this information to you so that you can then act on it. [crosstalk 00:33:21]. I'm happy to be wrong.
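One common, concrete way to publish that point of contact is a security.txt file served from the /.well-known/ path of your site, an approach that was later standardized as RFC 9116. This is a minimal sketch only, not something discussed in the episode, with placeholder addresses, URLs, and dates:

    # Served at https://example.com/.well-known/security.txt
    Contact: mailto:security@example.com
    Expires: 2026-12-31T23:59:59.000Z
    Policy: https://example.com/vulnerability-disclosure-policy
    Preferred-Languages: en
    Encryption: https://example.com/pgp-key.txt

Contact and Expires are the required fields; Policy is where researchers would find the scope and safe harbor language discussed throughout this episode.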

Katie Moussouris:
As a security researcher, sure, that would be the most important thing. But think about what security researchers actually want, which is responsiveness, adequate responsiveness, competent understanding of what they're trying to do, and action, right?

Joseph Carson:
Well, yes.

Katie Moussouris:
All that stuff that a researcher actually wants after they find the point of contact, that is the digestive system. Here's the thing: nobody actually gets from "I don't receive any vuln reports from the outside" to a functional vuln disclosure program without doing some of those steps. If they do, it is a trial by fire, a completely unnecessary, painful moment where they have to basically struggle, where they've basically put out a menu for a restaurant that they've got no kitchen staff for. You know what I mean? They can do it, they can pull off the shifts, they can maybe feed some people, they can maybe get this done, but it's excessively painful. And there's no reason for it, because we literally have two ISO standards that have been out since 2014 and numerous other examples and ways to get ready.

Katie Moussouris:
And one thing I love about Casey's approach with his company is that I don't get a lot of, let's just say, bug bounty refugees from Bugcrowd, if that makes sense, folks who just got in over their heads and were hurting and everything. I feel like Casey's team does a good job of making sure that there's no bug bounty Botox going on over there. And that's something that's really important. You can put up a front door, but if nobody's listening, it's a Hollywood ghost town set, and it's [crosstalk 00:35:02].

Casey Ellis:
I do appreciate that, Katie. It has been a focus for us. The way that I characterize why I started the company in the first place looks more like a crowdsourcing problem than it does a disclosure problem. And I think part of the challenge around where we're up to now is that a lot of those things are conflated together. It's all about bounty, or it's all about VDP, or VDP is all about engaging people to do work. It's not, because you're not paying them, so by definition it's not actually work they're doing. You're actually putting out a policy as a way to listen to the internet and let it tell you when it's got something to tell you. That's what a VDP is.

Casey Ellis:
But there's all of this term confusion around the different ways, coming back to the founding of Bugcrowd, of getting access to this pool of people that want to help out in a variety of different ways. What we focused on, because we observed this very quickly, is that people that build software and deploy enterprise networks are generally way more vulnerable than they think they are. And when you get the people with the right kind of talent into the mix and activate them, you tend to figure that out. Okay, but if too much of that happens too quickly for the organization, they're going to become overwhelmed. They're not going to be in a position to step back and think about what kind of frameworks they need to change. Do we need to implement a proper SDLC? What's our approach to risk-based vulnerability management within the organization, for example?

Casey Ellis:
If they haven't had the opportunity to do that, because they're so busy swatting bugs, then we've given them information that's valuable, but it could be way more helpful than that. That's been a big part of why we've always tried to take that crawl, walk, run approach. The other is to make sure the researcher gets looked after. Because if a bunch of people get a commitment made to them and all of a sudden the essence of what's happening on the other side changes because they've gotten overwhelmed, that's a bum steer for the research community. It becomes huge overhead for us to be able to keep everyone's expectations in line, and it's just a bad time. That's the context behind that.

Casey Ellis:
I will say with VDP as well, this whole distinction between scope versus not scope, I do put the root cause of that down to term confusion around what these things are. A VDP is different from a crowdsourced security assessment, or what we call next gen pen tests, crowdsourced pen testing, those sorts of things. The latter, crowdsourcing, is against targets. It's effectively a different way of engaging or encouraging information, and you scope that. A VDP is against you as an entity. You're basically saying, "I want to know about all of it." This is me wanting to hear reactively what the internet's discovering about my risk posture, so that I can do something about it. And that better reflects, I think, the fact that attackers don't read scope in the first place, right? [crosstalk 00:38:08]. This whole idea of, yeah.

Joseph Carson:
I've got one thing for you, Casey: I don't see it as crowdsourcing, I see it as skill sourcing, because I think it's about the skills. It's not about the volume of people, I think it's about getting the right people.

Casey Ellis:
That's difficult too, crowdsourcing triggers particular things [inaudible 00:38:27].

Joseph Carson:
Yeah. For me it's skill sourcing because we can't be the experts in everything. A developer who's writing code is not the expert in security; they're the expert in building a module that will actually serve the purpose of the business, so it's about going and getting the right skills. That kind of moves me on to two questions [inaudible 00:38:45]. One is, the world is very much cloud now, and for a lot of organizations it's shared services, shared resources, shared infrastructure and services.

Joseph Carson:
I remember working with a lot of companies where I did penetration testing in maritime. One of the things I did was going onto a ship and into a power station. The problem I had was that the company owns the engine, but they don't own the data. You have a car today, you're buying a car on a contract, you [crosstalk 00:39:19].

Casey Ellis:
Cars are where you see this play out.

Joseph Carson:
Yeah. You own a car, but the data in the car is not owned by you; you're actually there to provide that to the manufacturer. Same as TVs. Everything is moving to this multiple-contracts type of scenario, and cloud is definitely one of those areas. The company might be hosting it in whatever named cloud it might be. So how do you do that overall testing, especially if it's shared resources? And the second part-

Casey Ellis:
I'm sorry. Go ahead.

Joseph Carson:
[crosstalk 00:39:49], two part question. I'll let you answer the first one and then I'll move to the second one.

Casey Ellis:
Yeah. I think honestly cloud is almost easier. What you're talking about is one of the things that's a big feature in the ISO standards Katie mentioned before, and it's been this kind of tough nut to crack behind the scenes with a lot of what the platforms have done since we came on the scene: this idea of multi-vendor coordination, or multi-party coordination of response. A car is effectively, most of the time, a collection of OEMs that's been assembled into a unit and then sold to a customer. And then you've got the data, and then you've got all the other things that you were just talking about there. The average piece of home networking kit often has a lot of aspects like that: supply chains that make up physical things we interact with, which have had a cyber component to them the whole time.

Casey Ellis:
So trying to figure out how to coordinate vulnerability disclosure down that supply chain, obviously we're recording this at a point in time in 2020 where supply chain is now very much top of mind, so I swear I'm not doing the buzzword thing right now. But it's true, that's just sort of how it works. With cloud it's a little easier, because it tends to be a dynamic target that you're assessing, and it's in one place and it's either there or it's not. Who's hosting it and the models behind it still factor in heavily, [inaudible 00:41:20] I'm not saying they don't, but I do think it tends to be more obvious in terms of who owns what.

Casey Ellis:
And honestly, going back to the conversation around VDP and scope and all that stuff before, the sale of Expanse this year and the rise of attack surface management as a category is evidence of something that we've known in security for a really long time: people don't know where their stuff is. Ultimately that's really what it comes down to. This is not a surprise for anyone who's been in this space, because it's always been a difficult thing to solve. But then cloud's gone and made that happen at the speed of caffeine and the internet. You throw Docker and whatever else on top of that, and you've now suddenly got everything everywhere. That potentially comes back to me as an entity, because I am the owner of it, but I don't really know where it is. So when it comes to defining scope and who's responsible for what, that's actually usually a fair bit of work.

Mike Gruen:
You've got me laughing here. Part of the disclosure program should be telling me where my data really is, because I don't know where it is.

Casey Ellis:
That happens. I mean, S3: all of these breaches you hear about were databases being left in cloud provider storage buckets. That's essentially that. It's like, "Where's my data? I forgot." Someone put it there, probably well intentioned, as part of something they were being asked to do for work. But they didn't consider the security implications of that, nor did they integrate that particular thing they bought on a credit card into the ISMS of the organization, such that it could be managed going forward. I could see that problem actually getting more difficult over time, because 2020 is like the great zero trust experiment, and we're all cloud native now.
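As a small aside on that S3 point, and purely as an illustrative sketch rather than anything from the episode: assuming the boto3 library is installed and AWS credentials are already configured, a script like the following can flag buckets that have no public access block set at all, which is one rough way to surface the "someone put it there and forgot" cases Casey describes.

    # Minimal sketch: list S3 buckets with no public access block configured.
    # Assumes boto3 is installed and AWS credentials are available locally.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            # Raises if the bucket has never had a public access block applied.
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: no public access block configured")
            else:
                raise

Whether a flagged bucket should then have put_public_access_block applied is a per-bucket decision, which is really the point about integrating these resources into the organization's ISMS.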

Joseph Carson:
I'm not a big fan of Zero Trust. I think it's about the balance of building trust. Zero Trust is not effective for the business. It's always [inaudible 00:43:16]. Katie, I have a question for you.

Katie Moussouris:
Yeah. But I was going to jump in on this supply chain vulnerability coordination stuff. I started Microsoft Vulnerability Research in 2008, and part of that was to do research on third party software that affected Microsoft customers. A major driver of that was doing multi-party and supply chain vulnerability coordination, because it's essentially a vital cousin of first party vulnerability coordination and it needed dedicated attention. And this is at Microsoft, obviously, a major operating system company, the biggest software company in the world.

Katie Moussouris:
But the concept of organizations having that capability in-house is literally a dozen years old. I want people to gain some perspective when they're thinking, oh, we have to be able to handle all this stuff. We're still dealing with most of the internet not adequately handling first party vulnerability disclosure: one bug, one vendor. And I don't want to underestimate the importance of thinking those things through. That is literally what we help companies do: understand not just the tactical questions of where are my assets and where are my bugs, but how do we even deal with a situation where we've got a complex supply chain up and down and you're somewhere in the middle of it, because usually organizations are. They have dependencies up and downstream in the supply chain.

Casey Ellis:
Yeah, actually. Sorry, just to jump in there. The first real disclosure that we got through Bugcrowd was actually not against a first party thing; it was SaaS that we use for authentication. And I was like, "Oh, great. Now how do I communicate this to them?" We lucked out because they also happened to be a Bugcrowd customer, which made it super easy to get the security researcher connected to the right people over at the other company, and it went fairly well. But I do think that's going to be the big challenge moving forward: as I get more of those, how do I make sure that I can effectively get the right people communicating with the right people without being an inefficient middleman?

Katie Moussouris:
Right. Well, and that's exciting that you were able to do that with Casey's company's, with Bugcrowd's, help. I've definitely seen others, not Bugcrowd, failing miserably at that, to the point where, I'm not kidding you, I have been embedded at a shared customer with a different bug bounty platform provider, and two groups at the same company couldn't coordinate bugs even though they were both using that other platform. That just goes to show that this skill set in general, and even the concept of what's in scope for them to help you with, is still developing. And I like that we've got folks who've lived the consultant and pen tester life before, and Casey being a real hacker who started that company, Bugcrowd. I just feel like there's a lot of "your mileage may vary" in terms of getting the right help for this type of thing.

Casey Ellis:
It's very kind of you to say that, Katie, and I completely agree. Your mileage may vary in terms of one size fits all and the different approaches that third parties have taken. But also, every company is a snowflake, every system is a snowflake, every vulnerability is a snowflake. You've got so many snowflakes at this point that it really becomes more about the underlying approach the organization has to this, as opposed to what is the security thing that we're going to bolt on top of our existing security program. To me, that's the true value of this. It's not a better pen test or a vuln scan or a particular thing. It's really this idea of integrating builder and breaker feedback loops into your organization in a way that becomes a part of design, becomes a part of how you structure your organization itself.

Casey Ellis:
How do you negotiate third party supply chain contracts to accommodate the fact that that upstream provider is a part of your problem when it comes to risk as it relates to your customer? If you haven't seen this stuff as an organization, you're less inclined to even have that thought in the first place. I think that's where the true value of this starts to come out: this idea of builders and breakers not thinking the same, but if you can get them talking to each other and basically exchanging, Vulcan mind melding as best they can, then good things come out of that.

Joseph Carson:
Absolutely. This leads me into my next question: when are we going to see a "fix crowd"?

Katie Moussouris:
Oh, yeah. Well, my company does a lot of that stuff, where we help with internal staff augmentation. Because with the growing volume, it's not just the volume of bugs, it's the skill sets required to understand them, to guide the existing developers into a better understanding, to point them towards secure development life cycle practices that will reduce the overall number and severity of vulnerabilities over time and ideally increase the complexity of what's left. You want the low hanging fruit eliminated by you, right? That's what you want. You don't want it hanging out there for the bug bounty Botox cowboys out there. [crosstalk 00:48:50]. Yeah, exactly.

Katie Moussouris:
It's making sure that there's that kind of efficiency, and yeah, we've been doing that kind of work. We did that kind of work with Zoom, and we flattened their bug curve. They got a big spike of bugs because they got really popular, and we helped flatten their curve by like 37% in 10 weeks, which, trust me, if you knew the raw numbers, was significant. It really is about making sure that if the organization inside doesn't have the right people, the right tools, the right skills, we've got to prop them up. I wouldn't say a fix bounty per se, because that makes it too transactional.

Katie Moussouris:
Internally, you have to have organizational memory that carries why you made a certain security decision, why you punted something. That's knowledge of your product life cycle, your support life cycle, how long you're even going to keep that product under support, right? You may make a different prioritization decision. It's about getting embedded and understanding all of these internal company needs, and our company helps a lot with that. So yeah, thanks for asking.

Casey Ellis:
That's awesome.

Katie Moussouris:
I'm like, "We didn't even talk about this. We're in sync, that's great."

Casey Ellis:
What Katie's talking about there as well: if you're working on those things, you're addressing the problem closest to the root cause at that point. And you're in a position where you don't have to catch things later on down the timeline, which is less expensive because you're getting it done sooner. I completely agree with that. What we've done and what we've seen is that there are transactional fixes that are possible in terms of mitigating specific risks, and there are areas of that that Bugcrowd's already involved with and that we work with customers on.

Casey Ellis:
I think in general, in the middle, and this is almost the bridge between the part that Katie's talking about in terms of the institutional knowledge and the part that crowdsourcing and the crowd addresses, which is more where I play, is: where can trends be identified? Are there particular frameworks or particular types of software that, as an organization, you're more systemically having issues with, that indicate an underlying pattern that might be addressed by developer training-

Katie Moussouris:
Or eliminating PHP from your life.

Casey Ellis:
... or a shift in framework, or getting rid of PHP, yeeting it into the sun. There's all these sorts of things, and it's almost like the way that we've just framed that, and we didn't talk about this ahead of time: there's the discrete issue that gets found by an individual, all the way down to the root cause, the organizational or cultural decision that probably happened 10 years ago that led to that. We're really talking about addressing different parts of that chain, because it is ultimately a spectrum; those things are all tied together. If you just work on one piece, then you miss out on being able to work on the others. That's where I think being able to approach that mindfully, and not just do one part of it and say, "Job done," is really important. But those are some of the ways I see those things tying together.

Mike Gruen:
I really think it's important that we reward the right process. On rewarding: I remember a long time ago doing security awareness training, and we ended up having this incident response plan where we rewarded employees with money when they reported incidents. The reward and the motivation was to get money, so they reported everything. And what we realized then is that that wasn't the right motivation. It wasn't the right kind of thing. That's not what we were trying to achieve.

Katie Moussouris:
Right. That's the classic Dilbert cartoon, right? From 1995, where they're saying-

Mike Gruen:
Cover for me.

Katie Moussouris:
Yeah, exactly, cover for me. There's something important, though, about reversing the polarity of what you're paying for, of paying for fixes transactionally and stuff, so there's a good story here. The European Commission authorized a bug bounty program for all open source software that's commonly used in European government; this was a few years ago. And there's this idea that if you just find the bugs and throw them over the fence, that's just unequivocally a good thing. [crosstalk 00:53:22]. Open source is different. There are maintainers that are sole maintainers, or they're part-time on an open source project that might be quite popular. We saw this with OpenSSL right before it got the resuscitation investments from the Linux Foundation.

Katie Moussouris:
Here's the thing. I asked the Apache server core developers, "Hey, if we were to structure this bounty in such a way that would be most helpful to you, would it be helpful to ask for a solution along with the bug, since you're open source?" And they said, "Absolutely not. Please don't offer money for that." I was very confused and I was like, "Wouldn't this be helpful?" And they said, "Most of our work right now, in terms of accepting or not accepting security fixes, is arguing with people about breaking changes, and we're the core maintainers for a reason." Basically they saw this as a danger of overwhelming them with more people wanting to get their code committed, which would increase their workload in a bad way without increasing security, right?

Katie Moussouris:
But the other thing was that open source relies on volunteer maintainers. And if suddenly the people who are fixing issues for free are now going to be moved into this transactional model, it leaves less room for them to really identify the folks that would become the next generation of maintainers. And to give you one scary fact, for OpenBSD the average core maintainer age is 55 years old. Just take that in for a minute, that's the average age. We need new blood doing the core maintenance of some of these packages, and doing these sorts of transactional incentives, it turns out, is not going to work in terms of giving them better fixes and a better pipeline for who's going to take over these projects. I just wanted to put that out there: it's not a one-to-one "pay for the bugs, pay for the fixes" that's going to solve our problems here.

Casey Ellis:
Right. Actually, it's very reminiscent of my first software engineering job, where the idea was, "Oh, we're going to give bonuses to the QA team for finding bugs." But as an engineer I'm like, "Okay, cool. I'll just collude with them. I'll put some bugs in and we'll split the bonus fifty-fifty." It's unintended consequences. I understand what you're trying to do, but by offering money for that stuff you're going to create some perverse incentives and potentially disrupt things that were working just fine. [crosstalk 00:55:48].

Mike Gruen:
It's the unintended economic forces that you introduce into things. This whole idea, and it comes back to the crawl, walk, run we were talking about before, of starting this off in a smaller context and actually getting that feedback. But then, as an organization, making sure that you're actually taking that on board beyond just fixing the bug, so to speak, and thinking through, "Okay, if we were to scale this out, how would we do ingestion of offered code changes at scale?"

Mike Gruen:
If we had people offering us web application firewall rules to integrate in front of a dynamic platform, for example, how would we validate those? You can't just go and take someone else's code and slap it on top of your organization, that's unwise. All of these different things, and there's a million of them, and they're different for every single organization. This whole idea of saying, "Okay, how much of that are you going to do based on your baseline? How do you think about this as it scales out? What are the steps that you need to take to make sure that all of those foundational elements have been put in place?" And then that ultimately ends up closer to a balance of forces from an economic and an incentive standpoint.

Mike Gruen:
Ultimately, when you see bug bounty programs that are actually kind of awesome, they're at the end of a lot of that. It's this balancing of forces and setting these different things up in the organization that's progressed and scaled up over time, to the point where it's at a maturity level where you can just plug it into the internet and it works. That doesn't happen by mistake. I think that's the key thing.

Katie Mossouris:
Oh, by the way, that Scott Adams Dilbert cartoon with the perverse incentives got anonymously plastered on my office door at Microsoft right after I announced the bounty programs, just so you know. That really happened.

Mike Gruen:
"Code me a minivan."

Katie Mossouris:
Yeah, "I'm going to code me a new minivan" was right on it [crosstalk 00:57:50], and I left it there.

Mike Gruen:
I had it as my Twitter header for a while. Yeah, it's good.

Katie Mossouris:
Yeah, I know, I just left it there. I was like, "Yeah. Okay. We'll see how this goes." And it turns out it went great, because last year I think Microsoft spent $14 million on bug bounties. So that's 2013 to now. Yeah.

Mike Gruen:
On that piece, because this is a question I get asked, and you probably get asked it all the time as well: how do you avoid that? How do you avoid ... cobra farming is the economic reference for it, going back to something people can look up if they're interested. How do I avoid that? And the reality of it is, yeah, there is the risk of perverse incentives and things looping back around like that. Anecdotally, we've not seen any of that so far. And I think it's largely going back to some of the stuff we talked about earlier around people being good.

Katie Mossouris:
Oh, I've seen them.

Mike Gruen:
But then, okay, I'm sure it is a thing that has happened. I'm not saying that it's impossible or non-existent. But the reality is that getting caught doing that is actually pretty easy too. As someone who sits within an organization and potentially wants to code themselves a minivan, you're also aware of git blame and all of the different ways that you could end up going to jail for doing that, which becomes a [inaudible 00:59:11].

Katie Mossouris:
Yeah. And it's not even the case that insider collusion is the only potential collusion, or cheating of the system, the intent of the system really, that can happen. We've seen triage people who are under contract steal vuln reports from the bug bounty program or vuln disclosure program they're triaging, and then copy and paste that exact same bug report to another vendor who's vulnerable to the same thing and collect the reward. That's essentially them abusing their triage visibility into incoming bugs. And this I think is a huge problem, and this is literally just in the past few years where there have been bug bounty platforms, where you guys have the same hiring constraints that everybody else does while trying to scale appropriately, so you're going to bring on contractors for a period before they're full-timers and whatnot. We've actually seen this manifesting as another threat in this ecosystem.

Casey Ellis:
Yeah, definitely. The risk of that, or the potential for it, is another one of those things that we saw coming over the hill. So in terms of how Bugcrowd resources the triage team, it's primarily in-house; if we do contract, it's usually because they're full-time contractors in a part of the world where we're not necessarily headquartered yet. And there are really firm, very strict rules on what's okay and what's not okay, to maintain that Chinese wall within the organization. It's like, "All right, if you're able to see these sorts of things, here are all the other things that you're not allowed to do, as a condition of you retaining your employment, basically."

Casey Ellis:
And it's one of those things that feels draconian, because it's a lot of, "Oh, you're doing hacker stuff and it's heaps of fun and all of that," and then here are some really hardcore rules that you also have to follow. But it's important, because ultimately this whole dynamic relies on trust, and it relies on expectations being aligned and kept as the process plays out.

Katie Mossouris:
Popular Science magazine in 2008 had this job title, "Microsoft Security Grunt," which was kind of a conglomeration of job descriptions. But the way they described it was the people who have to answer your email when you email Microsoft about a security issue and deal with that. It was a broad-brush picture, but Popular Science literally named it one of the top 10 worst jobs in science, and it was literally between elephant vasectomist and whale feces researcher. You've got to take that in.

Mike Gruen:
I'm seeing a trend there.

Katie Mossouris:
Yeah, and everything. And by the way, I got to say that on an official NIST advisory board call just last month. And I was so psyched.

Casey Ellis:
That's awesome.

Katie Mossouris:
I'm certain I'm the first person to go on federal record.

Casey Ellis:
You should have the salaries [crosstalk 01:02:13].

Katie Mossouris:
Yeah, exactly. Elephant vasectomist and whale feces researcher. But here's the thing-

Casey Ellis:
The whale guy does really well but the other guy not so much.

Katie Mossouris:
No, that triage job sucks. And it does suck, and even when you're good at it, you're not going to want to do it for the rest of your life. The fact of the matter is, that's a very important piece of understanding this ecosystem where-

Casey Ellis:
Yeah, team management [crosstalk 01:02:35].

Katie Mossouris:
Yeah. And whereas bug hunters may be happy to hunt bugs for many years longer than that, it's those folks who become the shock absorbers on the inside where you start to see a degeneration, and shorter and shorter times where they're willing to even do that work. You're kind of in a little puppy mill of having to recruit, train, and make sure that they're very efficient. Yeah.

Mike Gruen:
And creating a career pathway and doing all those sorts of [inaudible 01:03:00].

Joseph Carson:
I was just going to say, I think that's the best part of that is that-

Mike Gruen:
That's the upside.

Joseph Carson:
Right. The upside is that there is this entry level into security where you can get this job, you can start understanding the problems, and you can start to see a career path develop from there. I think that's the upside, that this is one of the few ... People ask me all the time, "How do you get into cybersecurity?" And it's like, "Cybrary exists, we provide training and career development." But saying, "I want to get into cybersecurity," is like saying, "I want to be a doctor or a lawyer." There are so many jobs, and there aren't that many entry-level ones. And I think that's a really good, solid entry-level job: you get a good taste of what things are like, a solid understanding and foundation.

Casey Ellis:
A lot of people come from the support industry, because it's [crosstalk 01:03:49] listening to all the problems every single day and trying to fix them as well.

Mike Gruen:
And for us, I mean, there's no difference. It's partially because of the type of organization we run; we're very small, with a small engineering team. I'm VP of engineering and CISO, so there's not as much back and forth between the security team and the engineering team, we're all one team.

Casey Ellis:
QA.

Mike Gruen:
Right. QA is part of my ... But the idea is that a vulnerability or security problem is just another bug, and that's how they're treated. We look at it like any other bug. What's the risk? What's the business value? What happens if we don't fix it? And so it goes through the same sort of product triaging, and we have that benefit. I think at a lot of companies, and I'm curious, Katie, you probably see this, it's way further apart. You have the security team, you have whoever's deciding what's going to get done, and the people doing the work, and they're all so far apart that it's almost impossible to get these security problems fixed. I'm sure you've experienced this.

Katie Mossouris:
It is absolutely like that in a lot of organizations. And especially because time to market with whatever it is you're building is so essential, we all know this as entrepreneurs, you have to build the thing and focus on building the thing. So security, and those security teams, are often hired after the fact, when there's a well-established development culture that already exists. And then the security team comes in, and often they're seen as the no people, just Captain No over there telling us we can't do this thing, when before the security team got here we were free. There's often just this family counseling that we end up having to do, to be quite honest, where we're like, "Okay. Now we're going to get together and we're going to count your bugs." Okay, everybody got the same count? All right. Now we're going to figure out-

Joseph Carson:
These are the features that you didn't intend to put in the product. [crosstalk 01:05:48].

Katie Mossouris:
Right. I mean, literally getting into organizations where they want to fight Jira math wars, saying, "That shouldn't count as a security bug because it was this other thing." And, "Well, this one should count as a separate bug from this other one, even though they're the same root cause, because they're on different endpoints." You get into these religious wars literally over labeling in your bug databases.

Joseph Carson:
Oh, yeah. I mean-

Casey Ellis:
[inaudible 01:06:12] labeled by design.

Mike Gruen:
I mean, the same is true when you talk about features and bugs and so on and so forth. I've run into that sometimes as well. It's like, look, all I really care about is how much new work are we doing and how much rework are we doing? And I don't really care what's causing the rework until the amount of rework we're doing is so large that we're not getting actual new stuff done. Then it's like, well, is it because product isn't defining the requirements well enough? Is it because the engineers are trying to get too much done too quickly and they're introducing all these bugs unintentionally?
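To put a rough number on that new-work-versus-rework point, here is a minimal, hypothetical sketch; the issue labels, story points, and the 30% threshold are made-up assumptions for illustration, not anything Mike prescribes.

```python
# Hypothetical: given closed issues tagged as new work ("feature") or
# rework ("bug", "security"), compute the share of effort going to rework.
# Labels, points, and the 30% threshold are made-up examples.

closed_issues = [
    {"id": 101, "label": "feature", "points": 5},
    {"id": 102, "label": "bug", "points": 3},
    {"id": 103, "label": "security", "points": 2},
    {"id": 104, "label": "feature", "points": 8},
]

rework_points = sum(i["points"] for i in closed_issues if i["label"] in ("bug", "security"))
total_points = sum(i["points"] for i in closed_issues)
rework_ratio = rework_points / total_points

# Only dig into *why* rework is happening once it starts crowding out new work.
if rework_ratio > 0.30:
    print(f"Rework is {rework_ratio:.0%} of effort; time to look at root causes.")
else:
    print(f"Rework is {rework_ratio:.0%} of effort; within tolerance.")
```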

Casey Ellis:
I think with that, in my mind, based on observation, there's almost this pre-Facebook and post-Facebook kind of line in the sand in terms of an organization's natural tendency to even understand what we're talking about right now. For organizations that are older, there's this whole question of, "Oh, how are you going to fit it in?" The conversation we're having is more natively compatible with folks that are agile-first, cloud-first, CI/CD, all those sorts of things, or at least who think about the business that way, because you have to do that too.

Casey Ellis:
When you're retrofitting this stuff onto an organization that's been around, from a technology standpoint, for 30 or 40 years or more, it's a lot of work, because of the whole question of, "Well, how are we going to prioritize how we're pushing the pipeline forward to insert this work that's coming in off the wire?" They don't necessarily have a muscle group that does that yet.

Katie Mossouris:
I think a lot of organizations get pushed into it because they have an existential threat to their business bottom line. That's what pushed Microsoft into the Trustworthy Computing initiative, where they did a code freeze and said, "Every developer is getting trained now on writing more secure code," and then they started their Security Development Lifecycle. Same thing with Zoom, which went for a 90-day feature freeze on everything except security features, because they were experiencing that existential threat; they could not go on with business as usual without addressing it in a very serious way. We hope that most organizations don't have to have that sort of shock to their system to get them to start investing.

Katie Mossouris:
But I definitely have seen a pattern where, to your point, Casey, it's interesting, the companies that seem to understand that they're in over their heads are these older companies that come forward and say, "Don't tell anyone we're over a hundred years old and we only got into computers five years ago."

Casey Ellis:
I know.

Katie Mossouris:
Yeah. We're like, "It's very obvious. You don't need to be shy about that." Right. I think it's these companies that they already have huge sprawling infrastructures that need kind of the most like TLC in how to get to a place where they can be responsive, right. And that this is a new work item engine for them, right. That they need to hit that rhythm right.

Mike Gruen:
Ingestion, I mean, even the idea of being able to admit that there's likely to be a problem in the first place. To me, that's becoming tantamount to security maturity. The idea that, no, I know that somewhere, at some time over my past existence on the internet, one of my developers has made some sort of mistake that's created a security risk, that's just mathematically probably true. And the companies that are comfortable with that are the ones that end up being in a really strong position to integrate this feedback and just have it be part of how they operate.

Mike Gruen:
But also, I think they ultimately end up being the ones that are more trusted by the consumer base as well. It'll be interesting to see how that plays out over time, because that's something I see the older organizations struggling with, admitting there's something wrong, just because of 40 years of history.

Katie Mossouris:
Oh yeah.

Casey Ellis:
And companies that still have waterfall approaches, two or three year life cycles and year-long code freezes, are probably going to struggle, and they're probably going to take a long time to change.

Katie Mossouris:
I mean, yeah, in some ways, but in other ways there are certain things about those companies that make them a lot more deliberate, which, if you're talking about needing to hire resources and plan for them internally, those old-fashioned waterfall companies, that's kind of their bread and butter, how they plan releases and plan engagement. But I do think that the companies that are smaller, more agile, more compact, can get things done faster. The real danger is that those companies are usually moving so fast. And remember, the turnover rate in our industry is high, especially for jobs that touch this area, right, whale feces, elephant vasectomist, that kind of thing, right?

Katie Mossouris:
Where we see smaller companies falling down is in failing to capture some of that magic that was simply living in the heads of the people in place at a given point in time. What we'll see is a very responsive organization, and then key personnel leave and take with them the institutional knowledge of how to be that very responsive organization, and the org itself suffers the wound of having to relearn that operational capacity. It's not a fixed point in time where we can say you're mature or you're not; it's more like, this group in your organization is highly mature, but that person's about to leave the company, and you're going to be dropped down to the relatively immature levels of the rest of the company. We see all of these mixed modes going on.

Joseph Carson:
I think we might be able to solve the whole world's problems in this call today.

Mike Gruen:
I feel like we've fixed it.

Joseph Carson:
I think we've fixed it. For everyone who's listening to this, to be very clear, everything's going to be fixed. And the sun's starting to rise here in Estonia.

Katie Mossouris:
Oh God.

Joseph Carson:
One of the things I'd like is the two perspectives. For the companies that are thinking about this or want to really address it, anyone who's going to take this journey, anyone who's listening in, what's your recommendation? And the second part of it is, for security researchers who want to do the right thing, what do you recommend for them? So a two-part question: for companies thinking about this, what's your recommended path and where's a good place to start? And for security researchers, the same.

Casey Ellis:
Wow. So for security researchers, I think it's just getting to be a part of a community, plugging in as much as you possibly can. It's not just about learning to hack or learning how to do whatever the thing is that you're wanting to do. I actually think that we grow together. I've got this picture on the office wall in San Francisco, it's a swarm of birds, and I kind of think about the community in a similar way. If we're together, then the whole becomes far greater than the sum of the parts. For researchers to be able to do that, we've got Discourse, there's forums. At Bugcrowd we were doing virtual security conferences before it was cool, partly because we wanted to create opportunities for connection and to get educational information into the hands of people, to see if it matches their curiosity so they can move forward. I think that happens in community. I'm a huge believer in that.

Casey Ellis:
For organizations, really, I mean, the biased but very accurate recommendation is to give us a call, just in terms of being able to sit down and understand where you're up to. What are the things that you're trying to get done? If you're coming in saying, "We need to start a public bug bounty program so we can do a huge press release on Friday," and it's Wednesday and you've not done anything, we're likely to say, "No, don't do that," and then we can have a conversation around how you get your goals met in a way that's more sane and that fits in with what you're trying to get done as an organization. And I think, especially for the larger organizations that might be earlier on in this process, reaching out to Katie and the Luta crew as well, for all of this stuff around how you mature yourself as an organization to be in a position where you want to be in 2021.

Casey Ellis:
I don't mean being at the top of the pack as it relates to cybersecurity. There's a lot of work, and honestly every organization is in that boat together. So to be able to get assistance from organizations like hers, from a consulting standpoint, I think is really valuable as well.

Joseph Carson:
For both, don't go it alone.

Casey Ellis:
Don't go it alone, 100%. Yeah. Totally.

Katie Mossouris:
No, thanks, Casey, I appreciate that. We teamed up on the UK government work, so Luta was in there first, helping to get them ready and mature. This was a government-wide initiative where they wanted to assess their operational maturity and find out what are the people, processes, and tools we're missing, so that we don't get a bad case of bug indigestion when we start opening the front door. And then we coupled up with Bugcrowd, who provided that initial service of making sure it ran smoothly, that hacker expectations were met, and that they were getting good advice on how to fix some of these issues.

Katie Mossouris:
I think it's important to be able to do that organizational assessment and think about what your goals are. Definitely, neither Bugcrowd nor Luta Security is into you just getting a press release out of it, because ultimately it will come back and bite you, whether that's in the form of perverse incentives or even making it more difficult for you to hire internal folks. If you're focusing so much energy on your external bug bounty program when there are skill sets you could actually hire for internally, and you haven't gotten to that sophistication level where you couldn't possibly afford the pairs of eyes you would need internally, that's a problem. I think that's super important.

Katie Mossouris:
And then from the security researcher's standpoint, I would advise them, definitely, belonging to a community is super important. But ultimately, for how you choose to spend your time, whether it's hobbyist hacking time or professional hacking time, come to an understanding with yourself about what your goals are, right. Are your goals to learn? Then definitely you have a broad availability of targets. If your goal is to make money, though, the thing I advise people to do is go for a company that already offers a bug bounty program. You would not believe how many researchers come to me and say, "How can I make such-and-such company pay a bug bounty to me for this vuln disclosure report?" I'm like, "Do they have a bug bounty program?" And they say, "No, how can I make them have one?" It's like, how about you not spend any of your time trying to make people do something they're not ready for, and instead just go to the companies that have advertised that they're ready for it.

Katie Mossouris:
So think of it this way, for researchers: if your goal is to make money, choose your targets wisely. Calculate your hourly rate: if I get the highest bounty, how many hours is it worth it to me to spend, including arguing back and forth? Or am I ready to set it and forget it, and just send it off; if I get paid, great, if not, whatever. But really be ruthless about your time if you are trying to make a living using bug bounty programs as part of that living. That's my advice for researchers.
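To make that hourly-rate calculation concrete, here is a minimal back-of-the-envelope sketch; the bounty amount, odds, and hours are made-up assumptions for illustration, not figures from the conversation.

```python
# A purely hypothetical expected-value calculation for one bounty target.
# All numbers are made up.

def expected_hourly_rate(max_bounty, probability_of_payout, hours_hunting, hours_arguing):
    """Rough expected value per hour spent on a single bounty target,
    including the time spent going back and forth during triage."""
    expected_payout = max_bounty * probability_of_payout
    total_hours = hours_hunting + hours_arguing
    return expected_payout / total_hours

# Example: a $5,000 top bounty, a 1-in-10 chance of finding and getting paid
# for a qualifying bug, 30 hours of hunting, and 5 hours of triage back-and-forth
# works out to roughly $14/hour.
rate = expected_hourly_rate(5000, 0.10, 30, 5)
print(f"Expected rate: ${rate:.2f}/hour")
```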

Casey Ellis:
Well said for both of you.

Joseph Carson:
Mike, any closing thoughts or anything?

Mike Gruen:
Yeah. I mean, I think one of the important parts is also just understanding the mechanisms that you already have in place. For developers, rather than making it adversarial internally, figuring out how to make it just part of the developer's job, like any other bug fix, is one of those areas that gets overlooked a lot, and how to leverage that. If you're a newer company and you're doing continuous delivery or Agile, and I'm not a big Agile fan, but continuous delivery or whatever methodology you're using, how do you plug this into your CI/CD? How do you make these programs just work like any other part of the business, getting feature development or bug fixes in? I think if you approach it from that perspective, things go a lot smoother. And when you're one of the larger ones, that's where you need external help in bridging those gaps. But if you're ...

Joseph Carson:
In moving to that place. I think that's a really good place to land it, because ultimately this is what the future of development and business and being on the internet looks like. It's distributed, it's feature-rich, it's being constantly updated. Everyone's at various stages of maturity on that journey, but I believe that's ultimately what the end state looks like for basically everyone. Okay, so where are you with respect to that? And what steps do you take next, moving forward?

Mike Gruen:
And I think you have to be honest with yourself about where you are, right?

Katie Mossouris:
Yeah. We published a free guide called the Vulnerability Coordination Maturity Model, and it's on our website at lutasecurity.com/vcmm. You can just download the slides. We're not tracking you with cookies, we're not asking for your email address; it's literally as free as free comes on the internet. We're making it available for people to look at and get a sense of where they are maturity-wise. Obviously, when we do a maturity assessment we go much further in depth than what you see on the website, but it's a really good framework if people want to self-assess: "Am I ready for even a vuln disclosure program, let alone a bug bounty program?"

Katie Mossouris:
They can literally take a look at those slides and very easily tell themselves: is this realistic, or do we need to invest further in our internal processes before we take on this other work stream, which is very demanding and which we don't entirely control? I think that's the big transition, you're adding another work stream whose rhythm your company can't necessarily control. So you have to be prepared almost like a customer support organization, except it's customer support that deals with the people who can hack you out of business, really, right?
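As a concrete illustration of treating disclosure reports as just another work stream feeding the existing workflow, here is a minimal, hypothetical sketch of filing an incoming report into the same issue tracker the engineering team already uses; the tracker URL, token, and field names are placeholders invented for this example, not a real API.

```python
# Hypothetical sketch: turn a disclosure report into a normal bug ticket so it
# rides the existing triage, prioritization, and release process.
import requests

TRACKER_URL = "https://tracker.example.com/api/issues"   # placeholder
API_TOKEN = "REPLACE_ME"                                  # placeholder

def file_vuln_report(title, description, severity):
    """Create an ordinary bug ticket from an incoming vulnerability report."""
    issue = {
        "title": f"[security] {title}",
        "description": description,
        "labels": ["bug", "security", f"severity:{severity}"],
    }
    resp = requests.post(
        TRACKER_URL,
        json=issue,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Once a report lands in the same backlog as every other bug, it can be prioritized and shipped through the process the team already knows, rather than living in a separate, special-project queue.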

Joseph Carson:
If I have to try and summarize everything with closing statements: going to Mike, what you're really telling me is not to do it as a checkbox and not to do it as a special project; you want to actually build it into your existing processes. You want to make it something that is part of your job and part of basically the entire life cycle process, not special, not a checkbox, get it into the existing workflow. And from Katie, I think it's really about setting the goals, really understanding what your intentions are, and making sure you actually put your time and resources into the right places.

Joseph Carson:
And from Casey, it's really about, "Don't go it alone, get help, be part of the community." I think that really sums it up. For anyone in our audience looking to take on this path, I do highly recommend reaching out to Katie. Katie is one of the world's experts in this area, has been doing it for a long time, and really started it off; I think you worked on some of the first bug bounty payments going out. And if you're looking to become part of the community, definitely reach out to Casey, because we might come in with a specific set of skills, but being part of a community will help you round those skills off and become a much better skilled person. That's really what it comes down to: don't go it alone, get help, reach out to Katie and Casey. They'll definitely be there to help you and direct you on the right kind of path and journey to success.

Joseph Carson:
Again, it's been a real pleasure having you on the show, a really awesome conversation. I think it's probably one of the longest episodes we'll ever have, but that's not a bad thing; the more knowledge we share and the more we talk, the better it is for the world, and the more resources and knowledge people will gain. Many thanks, Katie, Casey, and Mike as always. I'm the first person you speak to in a day and the last person you speak to, so I'm not sure how that works. For the audience tuning in every two weeks to 401 Access Denied: subscribe, get in touch with us, share your feedback, and let us know what you'd like to hear. So stay safe out there, stay secure, and keep learning. Thank you.

Speaker 6:
Learn how your team can get a free trial of Cybrary for business by going to www.cybrary.it/business. This podcast is also brought to you by Thycotic, the leader in privileged access management. To learn more, visit www.thycotic.com.