
Security through obscurity

A recurring dispute is whether the encryption method used in a given case should be made public or not. Companies often explicitly use “proprietary encryption” and similar formulations in advertisements and information sheets.

Specialists in cryptography, on the other hand, almost unanimously take the position that a non-public encryption method cannot be trusted. At least two issues are involved:

  1. Cryptography is complicated, and even apparently secure methods can contain large holes that make them easily breakable once discovered. A thorough review by the cryptographic community is necessary to reduce this risk.

  2. A company using “security through obscurity” may have neglected to use a strong mechanism. (Possibly because of a false sense of security; because it has prioritized ease of implementation, execution speed, or low license fees; or because it lacks the needed competence.) Other parties relying on the encryption have no way to convince themselves of the suitability, or lack thereof, of the chosen method.

Is the problem that companies deliberately mislead in their advertisements? Possibly; however, the naivete involved runs deep, and I tend to “never attribute to malice...”. Example: An early version of a requirements document at [E4] explicitly forbade the use of any public technology for the generation of certain “proof-of-ownership” numbers. This was intended to increase security: the product manager in charge actually believed that it would make the product more secure, although an internal and, in all likelihood, vastly inferior mechanism would have had to be developed. (I immediately had this statement removed.)

A major issue with “security through obscurity” is that important mechanisms are not that hard to find out, given sufficient resources: Company A pays 10,000 Euro to a dissatisfied employee at company B, who provides the corresponding information; alternatively, burglary, “social engineering”, inadvertent slips in public, etc., can all provide crucial information. Once the details are known, the weak encryption is soon cracked, and disaster strikes: Not only will the existing software have to be changed, possibly affecting millions of installations, but all previous messages can now be deciphered by third parties.

Remark: Note that “public” above has nothing to do with e.g. “public key” encryption/decryption.


Side-note:

To illustrate how easy it is to be fooled where encryption is concerned:

Consider a trivial encryption that just subjects the letters of the alphabet to a fixed permutation, e.g. A–D, B–Z, C–M, D–R, ... (which would translate “BAD” into “ZDR”, “CAB” into “MDZ”, and so on). A typical manager might now think “Let us encrypt twice, so we are twice as secure!”, leaving us with just another permutation, exactly as secure but twice as costly. In the special case where the permutation is its own inverse (as e.g. with ROT13), the disastrous result is that the “encoded” text is identical to the original...
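
To see this concretely, consider the following minimal Python sketch (my own illustration, not part of the original argument; the message and the seeded key are arbitrary): applying a fixed substitution twice is itself just one substitution with a composed key, and a self-inverse permutation like ROT13 cancels itself out entirely.

    import codecs
    import random
    import string

    alphabet = string.ascii_uppercase
    perm = list(alphabet)
    random.seed(1)                      # the fixed seed stands in for the secret key
    random.shuffle(perm)
    encrypt = str.maketrans(alphabet, "".join(perm))

    msg = "ATTACK AT DAWN"
    once = msg.translate(encrypt)
    twice = once.translate(encrypt)     # "encrypting twice"

    # The double encryption is just one substitution with a composed key,
    # i.e. exactly as secure as a single pass:
    composed = str.maketrans(alphabet, alphabet.translate(encrypt).translate(encrypt))
    assert twice == msg.translate(composed)

    # Worst case: a self-inverse permutation such as ROT13 cancels itself out.
    assert codecs.encode(codecs.encode(msg, "rot13"), "rot13") == msg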

A slightly more subtle example is posed by the (much more advanced) Enigma encryption machine: A key vulnerability was that it never encoded a letter to itself. At first glance, this might seem beneficial; in practice, however, nothing is gained and much is lost. (The number of combinations a cracker has to consider is reduced, and a systematic weakness is introduced.)
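
How such a seemingly harmless property is exploited can be sketched as follows (a simplified illustration of the known technique; the intercept string and function name are made up): because no letter ever encodes to itself, a codebreaker holding a suspected plaintext fragment, a “crib”, can immediately rule out every alignment where crib and ciphertext share a letter at the same position, drastically shrinking the search.

    def possible_crib_positions(ciphertext, crib):
        """Alignments at which a suspected plaintext fragment could sit.

        Since Enigma never encodes a letter to itself, any alignment where
        the crib and the ciphertext agree in some position is impossible
        and can be discarded without further work.
        """
        return [i for i in range(len(ciphertext) - len(crib) + 1)
                if all(c != p for c, p in zip(ciphertext[i:i + len(crib)], crib))]

    # Made-up intercept; "WETTER" (German for "weather") was a classic crib.
    intercept = "QFZWRWIVTYRESXBFOGKUHQBAISE"
    print(possible_crib_positions(intercept, "WETTER"))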

Modern-day examples use even more subtle weaknesses, like small systematic deviations from perfect randomness in larger amounts of encoded text.
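
As a rough illustration of the principle (my own sketch; the test and the sample sizes are arbitrary), even a simple chi-squared statistic over symbol frequencies exposes crude biases, given enough ciphertext:

    import os
    from collections import Counter

    def chi_squared_uniform(data, symbols=256):
        """Chi-squared statistic of byte frequencies against a uniform model.

        Good ciphertext should score near the degrees of freedom (symbols - 1);
        consistently larger values over large samples betray a systematic bias.
        """
        counts = Counter(data)
        expected = len(data) / symbols
        return sum((counts.get(b, 0) - expected) ** 2 / expected
                   for b in range(symbols))

    biased = bytes(b % 16 for b in os.urandom(100_000))   # only 16 of 256 values occur
    print(chi_squared_uniform(biased))                    # enormous: bias is obvious
    print(chi_squared_uniform(os.urandom(100_000)))       # roughly 255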