Spicy Takes from my Aikido Security Podcast

Nine takes from my RSAC conversation with Mackenzie Jackson on Aikido's Secure Disclosure podcast — on bug bounty, AI slop, hack-back, vibe coding, and why the internet still working is a minor miracle.


At RSAC this year I jumped on Aikido Security's The Secure Disclosure podcast with Mackenzie Jackson. We covered a lot of ground — from how Bugcrowd got started, to AI slop in bug bounty, to why the internet still working is basically a miracle. Rather than a recap, here are the takes I think are worth putting front and center.

Not Every Organization Should Have a Bug Bounty

Yeah, the founder of Bugcrowd said that. And I've been saying it for years.

Every organization already has a vulnerability disclosure program whether they like it or not. If you're on the internet, you can assume you've screwed something up and someone outside your organization will find it. That's not fear-mongering — that's internet physics.

But adding a financial incentive on top of that? That's a different conversation entirely. A VDP is putting up a lightning rod. A bug bounty is shooting rockets with wire on them up into a thundercloud. If you're not ready for what comes back down, maybe don't do that yet.

The curl program is a perfect example. Volunteer maintainers running a public bounty? The math just doesn't work, especially when you add the dynamic of AI making it easier to find and submit vulnerabilities, both valid and invalid. Not every organization is ready for it. Not every organization needs it. And that's fine.

We're Doing Stupid Things Faster With More Energy

That's my one-liner on AI in security right now. We've seen this movie before: there's a natural economic asymmetry between offense (discovery and submission) and defense (triage and fixing). It played out in 2014-2015, when bug bounty went from niche to mainstream and programs suddenly got overwhelmed with volume they weren't ready for. AI "slop" is the same pattern, just accelerated.

The deeper issue is the OODA loop. Observe, orient, decide, act — there are cycle times baked into how security works. What AI is doing is tightening that loop until it's too small for humans to fit inside it and take action. That's a fundamental shift, not a feature request.

But here's what people get wrong: they look at the noise (AI-generated garbage reports) and miss the signal, which has also been increased and accelerated by all of this. AI is going to get really good at what I'd call "sparkling QA" — finding the stuff that scanners and automation should catch. Great. Reduce the cost of that to zero. But it's important not to confuse that with actual hacking. The creativity, the chaining, the weird lateral thinking that finds the stuff that matters? That's still human.

The Internet Still Working Is a Minor Miracle

I mean it. Look at XZ Utils. Look at what's happening with npm supply chain compromises right now. Look at the fact that BGP still makes the internet itself work. Go down that rabbit hole and tell me you're not terrified.

There are way more vulnerabilities and structural weaknesses out there than we care to talk about. If you're in security, you may already know that. People outside the industry have no idea how close we've come to things breaking at a fundamental level.

SQL Injection Was Solved in 1998 — And Still Makes Up Half of All Vulns

We solved it with prepared statements. That same year. It's technically a solved problem. It still makes up a massive percentage of real-world vulnerabilities.
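The contrast is worth seeing on one screen. Here's a minimal sketch of the vulnerable pattern next to the fix that has existed since the '90s — parameterized (prepared) statements. The table, column names, and input are hypothetical, and sqlite3 stands in for any SQL driver:

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = "SELECT role FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(unsafe).fetchall())  # the OR clause matches every row

# Fixed: the ? placeholder sends the input as data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # no such user: []
```

Every mainstream driver has had an equivalent of that `?` placeholder for decades, which is exactly why the continued prevalence of this bug class is so damning.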

We can solve things from a technical point of view. We can get better at finding them. And they will still exist. That's a rule of nature, not a bug in the process.

Meanwhile, half the hardware vulnerabilities exploited by groups like Flax Typhoon are format string bugs that were "solved" in the '90s.

Solved problems that aren't solved at all. Discovery and awareness are not the same as mitigation, fixing, or prevention.

China Is in the Bottom 20 Turtles

Most of the security conversation is focused on the top five turtles in a stack of 50. Nation-state threats — the really serious ones — live in infrastructure layers nobody's paying attention to. Hardware. Firmware. The stuff below the OS that makes everything else possible.

The oxygen is getting sucked out of the room by the AI hype cycle, and the stuff that actually needs attention is getting ignored. I actually don't think that's a bad thing if it causes hackers to go look at those lower layers — but it's happening whether we direct it or not.

I Don't Like Hack-Back, But I Don't See How We Avoid It

Letters of marque are being discussed in Congress right now. Countries are starting to say companies can hack back when threatened. Non-cooperative defense — breaking into compromised systems to fix them on behalf of the user — is on the table.

I'm not advocating anyone go do this right now. But when you look at the raw ingredients of what we're working with — nation-state offensive operations, vulnerability marketplaces getting spicier, the blurring lines between defense and offense — I can't see how some of these things don't become normal within 5 years.

The Pentest Market Is Inflated (*)

I thought that was true when I started Bugcrowd. I still think it's true. AI is putting downward pressure on pentest pricing right now, and I think there's a correction in progress.

Tying back to the point above about how much is really out there: the sense of value in security is going to realign itself. We'll get more focused on fixing things quickly, building securely in the first place, and actually understanding what we're defending against — not just checking a compliance box.

(*) This is a hill I've been dying on for a long time now, and I know it upsets pentesters — until I clarify that I'm talking about performative testing, not the kind of testing that actually informs a proper risk management program.

Learn COBOL, ASP.NET, and Java

I gave a keynote at a conference and told people to learn ASP.NET, Java, and COBOL. Boring. Not cool. Not the languages you see trending on Hacker News.

But if you go look at what's actually powering the internet, it's ASP.NET and Java. Under the hood? COBOL. It's everywhere. The people who know this stuff are retiring or literally passing away. That's a massive opportunity for anyone willing to go where others won't, and AI-assisted engineering for languages like these still requires an architect to wield the tools well.

Vibe Coding Is Not AI-Assisted Engineering

I have a love/hate relationship with the term vibe coding. I love the playfulness and accessibility of it, but there's a massive difference between putting a prompt into Lovable or Replit and actually using AI from an engineering standpoint — with thoughtful design, maintainability, and proper understanding of what it's producing.

AI-assisted engineering with good thought put into design is just a force multiplier for people who are already good at this. That's not the same thing as vibe coding. The frontier labs are using their own models to build their own platforms. That's probably roughly where we all end up — and I'm using that as a leading indicator.


This conversation was recorded on Aikido Security's The Secure Disclosure podcast. Mackenzie Jackson is the host — go check out the full episode for the complete discussion including a "would you rather" game where I chose no firewall over secrets in git (and I stand by it).