
To err is human - Kerckhoffs' Principle in Software Transparency

A question on Twitter today got me thinking about Kerckhoffs' principle, a favorite of mine when it comes to the intersection of security and design thinking, i.e. the point at which the attacker, the user, and the system interface.

A cryptosystem should be secure even if everything about the system, except the key, is public knowledge - Kerckhoffs’ Principle.
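
To make the principle concrete, here is a minimal sketch (my illustration, not part of the original post) using Python's third-party cryptography package. The construction, its spec, and the source code are all public; the only secret is the key.

```python
# Kerckhoffs' principle in miniature: the algorithm is public, only the key is secret.
# Assumes the third-party "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the ONLY secret in the system

cipher = Fernet(key)          # Fernet is an openly specified construction (AES + HMAC)
token = cipher.encrypt(b"the design of the system is public knowledge")

# Anyone can read the spec and this code; without the key the ciphertext stays opaque.
print(cipher.decrypt(token))
```

Everything except the key can be published without weakening the system, and if the key leaks, you rotate it. That asymmetry is the whole point.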

The question that sparked this was “What is one piece of advice you’d give to people in security?” to which this was my answer:

What does this have to do with Kerckhoffs? A lot actually.

Like cryptography, applications and systems run on top of mathematical systems that have very little true variability in their performance of the intended task. The humans who created the task, however, are a different story… I’ve been writing this bit for five minutes and already hit backspace more times than I care to admit, which is an illustration of the point.

Humans, while unparalleled in creativity, are also error-prone. When those errors are overlaid onto a system that is designed to do precisely what it’s told, they create bugs and sometimes vulnerabilities.

I am convinced that humans - the contributors themselves, management, the organization, and the market - are responsible for a sizable portion of the current state of security by refusing to accept, for whatever reason, that people are not machines. We acknowledge the praiseworthy and positive fingerprints of creativity, put up with the negative ones, and hold out hope that the ugly ones, if ignored, will eventually disappear.

This is where Kerckhoffs’ principle comes in, stated more strongly by Claude Shannon as Shannon’s Maxim:

The enemy knows the system.

Shannon and Kerckhoffs were pioneers of disclosure thinking; they understood the concept of “build it like it’s broken”. This was especially true in WWII cryptography, but its relevance to the “peacetime” software we use today is becoming increasingly clear.

Bruce Schneier summed it up well:

Kerckhoffs’s principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness — and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.

You'll be less brittle if your team realizes that mistakes are inevitable, and that grace is extended provided mistakes are acknowledged, dealt with, and the lessons learned are applied to prevent repeats of the same failure.

...The reductionist in me wants to apply the concept of secret-induced brittleness to all kinds of systems, including social and political ones, where failure would create a security risk where none had previously existed, but we can discuss that another time.