A thought re vulnerability research clustering

I'll never forget Heartbleed...

It dropped while I was on a flight to speak about crowdsourcing, vulnerability economics, and early learnings from Bugcrowd at SOURCE Boston. It ended up forming the basis of my talk, which I pretty much completely rewrote on the flight.


In that deck, I talked about how Linus was wrong with his maxim, "With many eyes, all bugs are shallow".

Well... "wrong" is a little extreme. The talk went through how, for developers and folks working on code, the incentive is to make things work and do cool new stuff, not to make them "not do all the things they shouldn't". It proposed an amended version of the maxim that encompassed security and acknowledged the then-growing trend of defensive vulnerability acquisition:

With many eyes and the right incentive, all bugs are shallow.

- Linus' Amended Law, 2014

In the context of Bugcrowd and security research trends at scale, vulnerability research clustering is a phenomenon that is frequently observed but rarely discussed, and it ties in to Linus' Amended Law.

Following Heartbleed there was Shellshock, POODLE, GHOST, FREAK, and others. Aside from the then-newish trend of naming vulnerability discoveries and designing logos for them, I think there was another factor which came into play: hope for an outcome.

reminder: novel, broadly published offensive security research attracts attention to the fundamental anti-pattern that it reveals. this begets new findings. 2021: the year of pwning (and fixing) the software supply chain.

- cje (@caseyjohnellis), February 11, 2021

The idea that vulnerabilities exist in widespread, 20-year-old code wasn't a new one, and it's kind of QED when you think about it, but what Heartbleed did was validate that truth, and give security researchers fresh hope for an outcome: a reason to finish off incomplete research, or to pursue the attack modes and overall class of risk that Heartbleed represented.

Heartbleed triggered "hope for an outcome", activated the curiosity and persistence of the hacker mindset, and set a broader audience to work on similar problems - the subsequent discoveries were the outcome.

The same thing happened years later with Meltdown and Spectre. Like the idea of bugs persisting in old code, the notion that vulnerabilities exist in silicon has a "yeah, of course they probably do" feel to it.

ZombieLoad, Rowhammer, RIDL, Fallout, Foreshadow, and others aren't necessarily direct descendants, nor do their discoverers owe credit to the others (folks who attack silicon and succeed are of the highest order of tenacity and genius in our space, IMHO). That said, it's hard to deny that this type of vulnerability went from "not being a thing" to being a fairly frequent occurrence over a defined period of time.

Attack classes do more than expose the class itself: they inspire hope and attention towards adjacent classes which follow a similar mental model.

Why bring this up now?

Just yesterday, Alex Birsan (@alxbrsn) released a post outlining "Dependency Confusion" attacks, which hinge on the exploitability of insecure software supply chains. Both the paper and the attack itself are novel, and they're causing quite a stir amongst security researchers, bug bounty hunters, and defenders alike.

In the paper itself, Alex goes into the origin of the thinking behind his attack, calling out some of the typo-squatting research done over the past few years and referencing a collaboration with Justin Gardner (@Rhynorater) as a source of inspiration.

The idea of dependency confusion is so simple that it almost seems too good to be true... but it clearly is.

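For anyone who hasn't read the paper yet, the anti-pattern is easy to sketch. The snippet below is a minimal, hypothetical illustration - the package name, version numbers, and "registries" are made up, and real package managers differ in the details - but the core mistake is the same: a build that will accept a dependency from either an internal or a public index, and prefers whichever advertises the highest version, can end up pulling an attacker's package published under the internal name.

```python
# A minimal sketch of the resolver anti-pattern behind dependency confusion.
# Everything here is hypothetical: "acme-billing-utils", the versions, and the
# two in-memory "registries" are stand-ins, not any real package manager's API.

# Internal registry: the private package the build actually depends on.
INTERNAL_INDEX = {"acme-billing-utils": "1.4.2"}

# Public registry: an attacker has published a package with the same name
# and a deliberately inflated version number.
PUBLIC_INDEX = {"acme-billing-utils": "99.0.0"}


def parse(version: str) -> tuple:
    """Turn '1.4.2' into (1, 4, 2) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def resolve(package: str) -> str:
    """Naive resolution: consult both indexes and take the highest version."""
    candidates = []
    for source, index in (("internal", INTERNAL_INDEX), ("public", PUBLIC_INDEX)):
        if package in index:
            candidates.append((source, index[package]))
    source, version = max(candidates, key=lambda item: parse(item[1]))
    return f"{package}=={version} from the {source} registry"


if __name__ == "__main__":
    # The attacker's 99.0.0 build wins over the legitimate internal 1.4.2.
    print(resolve("acme-billing-utils"))
```

Conceptually, mitigations invert that rule - scoping internal names to the internal index, or reserving them on the public one, so that "highest version anywhere" never gets to decide.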

While we've seen attacks of this type go through Bugcrowd in the past, we are very tight about firewalling where vulnerability and attack information flows (for obvious reasons), so when we passed Alex's paper around the office today most of the organization was seeing it for the first time. The dominant reaction, particularly from folks outside of the offensive pipeline, was surprise that something so elegant and intuitive could have been overlooked all this time.

The fact that insecure software pipelines are exploitable feels a little like the idea that bugs exist in old F/OSS code, or that a chip design might not be 100% perfect. It's almost QED - but in the defensive realm, people weren't looking there.

My pretty confident prediction is that we'll see a lot of similar findings published over the coming months: firstly, attacks similar to Alex's, but then - and more importantly - attacks which are different but exploit the same fundamental anti-pattern.

With many eyes, the right incentive, and hope for an outcome, all bugs are shallow.

- Linus' Amended Law, 2021

Thanks and props to Austin Sturm (@austinsturm) for injecting the idea of Hope into this conversation and precipitating the post.