Bug Bounties in the Age of AI

As AI accelerates the offense-defense asymmetry, bug bounties and vulnerability disclosure remain essential. Casey Ellis on the future of bug bounties, the evolving threat landscape, and how disclose.io and the SRLDF protect the researchers keeping us safe.

I sat down with Todd from runZero at RSA Conference 2026 to talk about bug bounties, AI, and why security researchers are more important than ever. Here's the conversation, and some of the key threads worth pulling on.

The offense/defense asymmetry — now at AI speed

One of the things Todd and I dug into is what happens when both sides of the security equation get AI at the same time. My take: offense has always been inherently more agile than defense. You don't go to the expense and inconvenience of securing a door unless you realize someone nasty might walk through the hole in the wall. Defense is offense's child.

AI accelerates both sides, but here's the problem: we're still really bad at writing secure software, and we're really bad at patching things that should have been patched a long time ago. The internet is a lot more vulnerable than most people realize. The "you must be this tall to ride" bar has dropped dramatically, which means more people — good and bad — can jump in and do whatever they feel like. Combine what better models make possible with all that fragile opportunity out there, and we're at the start of a bumpy ride.

Bug bounties aren't going anywhere

There's a question floating around right now about whether AI makes bug bounties obsolete. I don't think so — and here's why.

The primitive underneath bug bounties is vulnerability research: people aren't perfect, so people outside an organization will keep finding vulnerabilities that people inside it wrote. That's not going to change.

A bounty is fundamentally about using an incentive to reduce information asymmetry between a disparate group of sources and a single destination. Every bug is worth something — it costs something to put there, costs something to find, costs something to fix, and has value to someone who wants to exploit it. That value spectrum ranges from zero (where all the AI slop is happening right now) all the way up to handset zero-days in the million-dollar range.

What will come under pressure is the places where security has been sold to people who don't actually care — checkbox pentesting driven by compliance rather than risk reduction. If the only reason you're buying a pentest is because someone told you to check a box, and you're shopping for the cheapest thing that qualifies, that's going to get disrupted. Hopefully what's left is the understanding that bugs are worth something, and that value needs to plug into how we think about risk.

What's new at disclose.io

We've been busy. The mission remains the same — make vulnerability disclosure suck less — but we've added some significant new pieces:

Programs Database: We've built a database that indexes every vulnerability disclosure policy we can find and classifies each by its attributes. If you're a security researcher looking for where to submit a finding or where to hunt, you can use it as a vendor-agnostic guide for what good looks like.
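For illustration, here's a minimal sketch of how a researcher might filter a local export of a programs database like this by safe-harbor status. The field names (`program_name`, `policy_url`, `safe_harbor`) and the sample records are assumptions made up for this example, not the actual schema:

```python
# Hypothetical records mimicking a disclosure-programs index.
# Field names and values are illustrative assumptions, not the real schema.
programs = [
    {"program_name": "ExampleCorp", "policy_url": "https://example.com/security", "safe_harbor": "full"},
    {"program_name": "AcmeWidgets", "policy_url": "https://acme.example/vdp", "safe_harbor": "partial"},
    {"program_name": "Initech", "policy_url": "https://initech.example/security.txt", "safe_harbor": "none"},
]

def with_safe_harbor(records, level="full"):
    """Return programs offering at least the requested safe-harbor level."""
    rank = {"none": 0, "partial": 1, "full": 2}
    return [p for p in records if rank.get(p["safe_harbor"], 0) >= rank[level]]

# List programs a researcher could engage with under full safe harbor.
for p in with_safe_harbor(programs):
    print(p["program_name"], p["policy_url"])
```

The point of a vendor-agnostic index is exactly this kind of query: let the researcher pick where to hunt based on how safe the program makes them, rather than which platform hosts it.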

Maturity Model: We're working to establish a maturity model for organizations. At the top: full safe harbor, proactive coordinated vulnerability disclosure, accountability to researchers, and making them feel safe. Very few organizations are there today, but that's the north star.

Research Threats Database: We've also built an open-source database tracking threats made against security researchers — cease and desists, legal action, the works. Think attrition.org for the vulnerability disclosure era.

For context: disclose.io started as an effort to standardize vulnerability disclosure with boilerplate legal language. Getting lawyers to agree on boilerplate is a minor miracle, but the real goal was getting that language adopted broadly — creating the precedent that helped lead the DOJ to update its CFAA charging policy in 2022. We've been at this since 2014, created the disclose.io brand in 2018, and the work continues.

Joining the SRLDF board

I'm excited to share that Jen Ellis and I recently joined the board of the Security Research Legal Defense Fund (SRLDF).

The SRLDF is a 501(c)(3) that accepts donations and funds legal defense grants for security researchers facing legal threats for good-faith research. The big use case: someone stumbles into something during research, does the right thing, and gets threatened for it — and they might not have the means or knowledge to respond.

One recent case: researchers in Malta who hacked a train system in good faith, disclosed responsibly, and got their door kicked in. The SRLDF funded their legal defense, they received a presidential pardon, and now Malta is writing legislation to carve out good-faith hacking protections.

This isn't just for Americans or the usual crowd — it's worldwide. If you're in Canada, Australia, India, Malta, wherever, and you need help finding and funding legal representation after good-faith research, the SRLDF is there.

If you're a researcher: know that this resource exists. Reach out if you need it.
If you're in a position to support this kind of work: we're always looking for donors.

The researcher ecosystem is the tip of the spear

All of these threads connect. Security researchers are going to continue to be the tip of the spear for how we secure the internet. That will be true regardless of what AI does to the landscape. We need to protect that ecosystem and make sure it functions — and with AI tipping everything sideways right now, that's all the more reason to lean in.
