The idea of a bounty (or, more specifically, payment-for-success incentives designed to reduce information asymmetry) predates cybersecurity's modern use of the term by hundreds of years.

I'm a huge nerd for the history of information bounties as a model, and this post started off as simply sharing a fascinating chapter in that story. But the more I read and thought about it, the more it turned into a useful explanation of the nature of a bounty itself.


This 1950s Nuclear Bounty is for information that helps the US Government thwart the covert introduction of nuclear threats into US territory (clauses (a) and (b)), as well as prevent the unauthorized export of atomic weapons from the US (clause (c)).

A bounty reward is paid in exchange for, and scaled to, the value of the information provided; it is also agnostic of the source.

This is a difficult problem to solve:

  • There is a wide and diverse range of threat actors who could attempt the things this bounty is designed to catch, and
  • A plethora of approaches they could take, creating
  • An almost unlimited number of possible scenarios for those in intelligence and in control of nuclear material to stay ahead of.

A centralized approach to detection and prevention, while effective, has enough coverage gaps for substantial exceptions to slip through. Here, the consequence of an exception involves nuclear weapons (and potential escalation to nuclear war), so this is unacceptable.

A bounty uses economic incentives to reduce information asymmetry.

From this problem statement, a more distributed approach is a logical answer to the coverage problem. In July 1955, the US Government decided to offer enough of an incentive to:

  1. Activate crowdsourced intelligence gathering around the problem they were trying to solve, and
  2. Encourage useful information to transit from a place of knowledge to a place of actionability, regardless of its source.

$500,000, the maximum amount on offer, was a huge amount of money for an individual in the 1950s. It would have been enough to activate net-new help in searching for and discovering vital information, as well as to push those with pre-existing knowledge past their point of indifference, where they'd report what they already knew, even if they were on the other side at the time.
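That "point of indifference" can be framed as a simple expected-value comparison. A minimal sketch, where the function name, probability, and cost figures are all illustrative assumptions rather than anything from the statute:

```python
def reporting_is_rational(reward: float, p_payout: float, personal_risk_cost: float) -> bool:
    """A potential informant reports when the expected reward outweighs
    the expected personal cost of coming forward (legal exposure,
    retaliation, lost income from the 'other side').

    All inputs are hypothetical; the statute only fixes the reward ceiling.
    """
    return reward * p_payout > personal_risk_cost


# A large maximum reward clears the bar even for risk-heavy informants...
print(reporting_is_rational(reward=500_000, p_payout=0.5, personal_risk_cost=100_000))  # True

# ...while a low chance of actually being paid can keep them silent.
print(reporting_is_rational(reward=500_000, p_payout=0.1, personal_risk_cost=100_000))  # False
```

The point of the $500,000 ceiling, in this framing, is to make the left-hand side large enough that the inequality holds for as many holders of useful information as possible.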

If compared to modern public bug bounties:

  • The "bug" is any of the banned actions taking place (and their potential consequences),
  • The "scanners, SDLC and other existing controls" are all of the intelligence and security protocols already working to prevent these actions from playing out, as well as the laws themselves as a primary deterrent,
  • The "scope" is all potential contributors to, and scenarios for, these actions that the existing controls leave uncovered, and
  • The "finder" is anyone who identifies or has prior knowledge of the actions and decides to report.

Information concerning illegal introduction, manufacture, acquisition or export of special nuclear material or atomic weapons or conspiracies relating thereto; reward

Any person who furnishes original information to the United States — 

(a) leading to the finding or other acquisition by the United States of special nuclear material or an atomic weapon which has been introduced into the United States or manufactured or acquired therein contrary to the laws of the United States, or
(b) with respect to the introduction or attempted introduction into the United States or the manufacture or acquisition or attempted manufacture or acquisition of, or a conspiracy to introduce into the United States or to manufacture or acquire, special nuclear material or an atomic weapon contrary to the laws of the United States, or
(c) with respect to the export or attempted export, or a conspiracy to export, special nuclear material or an atomic weapon from the United States contrary to the laws of the United States, shall be rewarded by the payment of an amount not to exceed $500,000.

July 15, 1955

Interestingly, there's no mention of "safe harbor" for the finders, and providing the government with this information in the first place would have been a risky proposition. I quietly wonder if this chilled informants and potential turncoats in a manner similar to the chilling effect we see on cybersecurity research today.

Asymmetric problems are often best served by asymmetric solutions.