4 min read

Crowdsourcing physics

Ok, time for some hard chats. I’m posting this following on from a series of conversations and reactions on Twitter and Slack/Discord, and a couple of articles that left a little to be desired in their differentiation between opinion and fact.

Let’s go…

Q: Does a platform have the right to ban a user?
A: Yes.
Q: Does a vendor have the right to say “I don’t want to invite you to work with us anymore?”
A: Yes.
Q: Can a finder then drop 0-day on a vendor?
A: Yes.
Q: Does the vendor then have the right to pursue the finder under violation of CFAA/DMCA/etc?
A: Yes… Most of the time.

I’m not endorsing or shouting down any of these things – Take a look at disclose.io if you want to see how we’re working to improve this situation – I’m stating them as facts of the existing physics of vulnerability disclosure.

Isn’t it all just bug bounty? No.

Some of the recent conversations around platform bans belie a fundamental misunderstanding of the difference between crowdsourcing and disclosure.

Crowdsourcing:

In crowdsourcing, the company initiates the discussion (in the form of an invitation or brief) and therefore sets the Rules of Engagement for the subsequent relationship. Payment is invariably part of the deal, and it forms the incentive for agreeing to the terms of the relationship in the first place.

Disclosure:

In disclosure, the hacker initiates the discussion, since they’ve usually found a vulnerability before (or completely independently of) any kind of agreement. The hacker, therefore, sets the Rules of Engagement for the subsequent relationship. Payment is not always part of the deal, and the incentive for the conversation can range from a desire to protect the Internet, to a desire to compete, to a desire to improve one’s resume, to possibly a payment – if the terms are set in advance, or the program retroactively decides to reward.

The role of platforms

The role of platforms is fundamentally different between these two use-cases.

For vulnerability disclosure, our role is to facilitate, and minimize the unintended consequences of, the progression of a standardized disclosure discussion between a program owner and a finder. This is a conversation that is offered by the vendor via the platform out to the entire Internet (not just the constituent “crowds” or “hackers” or “elite”). Because of this, the professionalism exhibited by finders and experienced by vendors varies significantly, and reducing the overhead of managing this is a task taken on (and a value provided) by the platforms.

This is where bans come in with respect to vulnerability disclosure. If someone is spamming a program or is behaving aggressively towards a program, limiting their ability to do so is a part of allowing the vendor to focus their attention on the other vulnerabilities being submitted via their program.

As service providers, the platforms offer this option as a means to reduce the cost of dealing with people who are:

  1. not valuable enough, or
  2. too expensive to continue trying to work with.

It’s far from perfect in the context of vulnerability disclosure in its pure, utopian, normal-part-of-Internetting form, because it re-establishes the default adversarial relationship that has existed between hackers and vendors for 30 years or more, and leads to things like Full Disclosure and lengthy one-sided blog posts.

Disclosure is messy. None of these bugs were meant to be there in the first place, so it should be little surprise that handling the infinite number of potential conversations is part-art, and part-science… QED.

Bugcrowd does it because we believe it’s a fundamental right of the user on the Internet – and because it’s something the Internet still needs a lot of help in figuring out how to do well right now.

Teaching the newcomers

The other, less understood, aspect of bans comes in around the rapidly emerging pool of folks who want to be a part of the vulnerability disclosure industry.

Seasoned security researchers understand the time, place, and power of Full Disclosure (or of invoking a lapsed CVD timeline), but rookies tend to view these tools the same way a 5-year-old views asking their Grandma for something after their Mother has said no. This is a completely understandable, if uninformed, assumption to make, and Bugcrowd goes to great lengths to educate about the history of disclosure and the correct and incorrect time and place for these tools.

Sometimes these lessons need to be communicated more quickly in order to protect vulnerability disclosure as a movement and the hacker community as a whole. This is where temporary bans come in. Permanent bans apply when there’s either an active, consistent demonstration of bad faith, or a hostile interaction between a finder and a program that the program never wishes to repeat again (again, Bugcrowd does a LOT of work behind the scenes to explain in both directions and defuse this type of situation, but it does happen from time to time).

The more common scenario is that we’ll issue a short platform ban to make a point; the finder will realize what the “next things they need to learn” are and correct course, usually without further incident.

Now… That’s all in a disclosure (i.e. Vulnerability Disclosure Program or Public Bug Bounty) context.

Teaching the “experts”

In the case of a private program, it’s a lot simpler: If a program doesn’t like a finder, or a platform doesn’t trust the finder with that program, they don’t have to invite them. Simple, and no different to the “right to refuse” arrangement that exists in any number of other industries.

Hopefully that clears up the why and the how somewhat.