An oft-repeated guideline in securing technology is that we should not rely on obscurity to secure our systems. As we wind down 2023, this article takes a brief look at that guideline.
The guideline not to rely on obscurity as a security mechanism goes back at least to Auguste Kerckhoffs, a Dutch-born cryptographer of the 19th century. Kerckhoffs promoted the principle that the security of a cryptographic cipher should not rely on the mechanism of that cipher remaining a secret. Rather, the only things that should be kept secret are the keys that are input into that cipher. In modern, computer-based cryptographic algorithms, those keys are large numbers that are mathematically combined with the data being protected to produce what looks like gobbledygook. Applying Kerckhoffs's principle, we can publish the algorithm as long as we don't publish the keys we input. We keep the keys secret, not the steps we take to combine those keys with our data to create that gobbledygook. It's the same as publishing the design of the lock on the front door of your residence while never sharing the key. Let's look at why.
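To make the principle concrete, here is a minimal sketch in Python. The cipher is a toy repeating-key XOR (deliberately insecure, for illustration only, and not from any real product): the function itself can be published anywhere, and the secrecy rests entirely in the randomly generated key.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte with the key, repeating the key as needed.
    The algorithm is fully public; only the key is secret.
    (Illustration only -- repeating-key XOR is NOT secure for real use.)"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)            # the secret: a random 128-bit key
message = b"meet me at noon"
ciphertext = xor_cipher(message, key)    # looks like gobbledygook
plaintext = xor_cipher(ciphertext, key)  # the same operation reverses it
assert plaintext == message
```

Anyone can read and audit `xor_cipher`; without `key`, the ciphertext alone is all an eavesdropper gets. (In this toy's case auditing would quickly reveal that the cipher itself is weak, which is exactly the point of publishing it.)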
A thief who wants to pick the lock on our front door can hop on over to the DIY store, buy the same brand of lock, and take it apart. Exploring the lock, they can figure out its mechanism and the theory of how to pick it. But without our key and without the exact same lock as we have on our front door, they still need to go through the same steps of picking as if they had never taken one apart. That is, unless they find a flaw in the mechanism that they can exploit.
The possibility that there is a flaw in the mechanism is part of the reason we want the details of that lock to be made public. We want people with good intentions, like the Lockpicking Lawyer, to also get their hands on the lock, take it apart, and reveal its flaws to us before the thief gets to our home and exploits that flaw. Most important, the Lockpicking Lawyer doesn't need my specific lock with my specific key to examine the design. Any copy of the lock, keyed to any key, is sufficient to explore the capabilities of the design. With knowledge of the flaw, we can mitigate it or choose a different lock. And with public knowledge of the flaw, the manufacturer will feel more pressure to fix it.
If knowledge of the flaw isn't made public, we can still be sure the thieves of the world will find out about it and share that information among themselves. We won't know there's a flaw, but the thieves will. And without knowing about it, we won't mitigate the flaw or get a new lock. Consider the Kia/Hyundai "Challenge". If major media hadn't picked up the story, if someone hadn't "blabbed" about it on social media, how many cars would have been stolen anyway? As it is, we will probably see more stolen because this one was so easy and so widespread. But at least as a potential car buyer, I am forewarned about the issue and know to avoid those models and years.
In 1997, a computer with a capacity of one GigaFLOPS (while the real meaning is more complex, think of it as the ability of a computer to perform 1 billion compute operations a second) cost about $55,000. Today? That same capacity costs about a penny; we can't really buy one GFLOPS by itself anymore 1. That means attackers can automate their attempts to crack our algorithms and run many, many of those attempts for inconsequential cost. The low cost of attacks means that any flaw that is merely hard to find will be found almost immediately. By allowing many, many qualified people to examine an algorithm before it is adopted, those merely hard flaws can be identified before that algorithm is used to protect sensitive data. The only practical way to get many, many qualified people (and "qualified" pretty much means PhD-level specialized knowledge of mathematics and cryptography) to examine an algorithm is to make it public.
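To see how cheap compute changes the attacker's math, here is a hedged sketch: a toy cipher with only a 16-bit key (a made-up example, not a real algorithm), cracked by simply trying every key. A laptop exhausts all 65,536 possibilities in a blink; the same automation scales to any keyspace that is merely "hard" rather than astronomically large.

```python
import time

def toy_encrypt(msg: bytes, key: int) -> bytes:
    # Toy cipher: XOR every byte with a 16-bit key split into two bytes.
    # XOR is its own inverse, so the same function also decrypts.
    k = key.to_bytes(2, "big")
    return bytes(b ^ k[i % 2] for i, b in enumerate(msg))

secret_key = 0xBEEF
ciphertext = toy_encrypt(b"attack at dawn", secret_key)

# Exhaustive search: try every one of the 2**16 possible keys and keep
# the one whose decryption starts with the known plaintext prefix.
start = time.perf_counter()
found = next(k for k in range(2**16)
             if toy_encrypt(ciphertext, k).startswith(b"attack"))
elapsed = time.perf_counter() - start  # well under a second on any modern machine
assert found == secret_key
```

Each extra key bit doubles the attacker's work, which is why modern ciphers use 128-bit or 256-bit keys: the same brute-force loop would outlast the universe.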
Making it public also helps us adopt the algorithm. While we can use a non-public algorithm to encrypt data when we store it, it's a different story when it comes to using it to encrypt communications. If we want to send encrypted data to someone else to read, we need to tell the other person how to decrypt that data. Otherwise, we are sending them a bunch of gobbledygook that they won't be able to read! With algorithms made public, it becomes easier for software developers to include the algorithm and thus create interoperability: I can use a Windows machine and you can use a Mac, and we can still share data as long as we can share the keys. Further, when a flaw is found, we can be sure many people will move quickly to fix that flaw or create a new algorithm without it. The best encryption algorithms go a step further: I can use one key to encrypt the data and give you a different key to decrypt it. The added bonus of these asymmetric algorithms is that when I keep the first key private, you can trust that the data came from me and only me.
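That "two different keys" idea can be sketched with textbook RSA. This is a hedged toy with tiny primes, no padding, and no real-world security; genuine RSA uses enormous primes plus padding schemes such as OAEP or PSS. It shows the asymmetric trick from the paragraph above: a value produced with the private key can be checked by anyone holding the public key.

```python
# Textbook-RSA toy with tiny primes -- for illustration only.
p, q = 61, 53
n = p * q                 # 3233: the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent (published freely)
d = pow(e, -1, phi)       # 2753: private exponent (kept secret); Python 3.8+

message = 65              # a message encoded as a number smaller than n

# "Encrypt" with the private key -- this is how a signature works:
# only the holder of d could have produced this value...
signed = pow(message, d, n)

# ...but anyone with the public key (e, n) can reverse it and confirm
# the message came from the key's owner.
recovered = pow(signed, e, n)
assert recovered == message
```

Running the exponents in the other order (encrypt with `e`, decrypt with `d`) gives confidentiality instead of authenticity: anyone can send me gobbledygook that only my private key can unscramble.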
That brings us back full circle to Kerckhoffs's principle. We rely on keeping the keys secret to protect the data while we let other people examine the algorithm we use. This gives us assurance that no one else is snooping on our data and that the message came from the right person.
In a future article, I’ll explore more on the topic of security and obscurity, including what details we want to keep secret in addition to keys.