Michael Eriksson
A Swede in Germany

Security through obscurity

General issue

Security through obscurity, i.e. attempts to create security by keeping outsiders in the dark about protection mechanisms and the like, is a recurring annoyance and, more often than not, a security issue in its own right.

For instance, a regularly occurring dispute is whether the encryption method used in a certain case should be made public or kept secret:

Companies often explicitly use “proprietary encryption” and similar formulations in advertisements and information sheets—with the apparent implication that this would be a plus over non-proprietary encryption.

Experts on cryptography, on the other hand, almost unanimously take the position that a secret encryption method cannot be trusted. (And “secret” usually implies “homebrewed”.) At least two issues are involved:

  1. Cryptography is complicated, and even apparently secure methods can contain large holes—making them easily breakable, once the hole is discovered. A thorough review from the cryptographic community is necessary to reduce this risk.

  2. A company using “security through obscurity” might have neglected to use a strong mechanism. (Possibly, because of a false sense of security; because it has prioritized ease of implementation, execution speed, or low license fees; or because it lacks the competence needed. Worse, it might be that the company is aware of the weakness and uses “security through obscurity” as a cover.)

    Further, other parties relying on the encryption have no way of knowing whether this is the case, because they do not have access to the details.


In all fairness, “proprietary” does not automatically imply “secret”, nor does “secret” imply “proprietary”. However, (a) when it comes to encryption methods, this seems to be the normal case, and (b) the above example is just one piece of a bigger puzzle of obscurity (note e.g. the example below).

Is the problem that companies deliberately mislead in their advertisements? Possibly; however, the naivete involved often runs deep, and I tend to apply Hanlon’s Razor.

Example: An early version of a requirements document at [E4] explicitly forbade the use of any publicly known encryption mechanism for the generation of certain “proof-of-ownership” numbers. This was intended to increase security—the product manager in charge actually believed that it would make the product more secure—even though an internal and, in all likelihood, vastly inferior mechanism would have had to be developed instead. (I immediately had this statement removed.)

A major flaw of “security through obscurity” is that important mechanisms are not that hard to find out, given sufficient resources: Company A pays 10,000 Euro to a dissatisfied employee at company B, and he provides the corresponding information; alternatively, burglary, “social engineering”, inadvertent slips in public, etc., can all provide crucial information. Once the details are known, e.g. what weak encryption is used, security is soon bypassed, and disaster strikes. In the case of a consumer software product using weak encryption, the consequences include not just the need to change that software, possibly affecting millions of installations, but also that all previous messages that have reached third parties, e.g. because someone has snooped the communications, can now be deciphered.

Illustration of deceptive encryption

To illustrate how easy it is to be fooled where encryption is concerned:

Consider a trivial encryption that just gives the letters of the alphabet a fixed permutation, e.g. A–D, B–Z, C–M, D–R, ... (which would translate “BAD” into “ZDR”, “CAB” into “MDZ”, and so on). A typical manager might now think “Let us encrypt twice, then we will be twice as secure!”—leaving us with another exactly as secure, but twice as costly, permutation. In special cases where the permutation is its own inverse (as e.g. with ROT13) the disastrous result is that the “encoded” text is identical with the original...
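The point can be sketched in a few lines of Python. (The permutation key below is a hypothetical example, chosen so that its first entries match the A–D, B–Z, C–M, D–R fragment above; the rest is arbitrary.)

```python
import string

ALPHABET = string.ascii_uppercase
# A hypothetical fixed permutation of the alphabet; the first four entries
# realize the example mapping A-D, B-Z, C-M, D-R, the rest are arbitrary.
KEY = "DZMR" + "ABCEFGHIJKLNOPQSTUVWXY"

def substitute(text, key):
    """Encrypt text by replacing the i-th alphabet letter with key[i]."""
    return text.translate(str.maketrans(ALPHABET, key))

# "Encrypting twice" is just another single substitution: composing the
# permutation with itself yields one equally weak permutation cipher.
composed_key = substitute(KEY, KEY)
assert substitute(substitute("BAD", KEY), KEY) == substitute("BAD", composed_key)

# ROT13 is its own inverse: "encrypting" twice returns the plaintext unchanged.
ROT13 = ALPHABET[13:] + ALPHABET[:13]
assert substitute(substitute("BAD", ROT13), ROT13) == "BAD"
```

The second application adds cost but no security, and with a self-inverse key it actively undoes the first.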

A slightly more subtle example is posed by the (much more advanced) Enigma encryption machine: A key vulnerability was that it never encoded a letter to itself. At first glance, this might seem beneficial; however, in practice nothing is gained and much is lost: the number of combinations a cracker has to consider is reduced, and a systematic weakness is introduced.
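To see why this is a weakness, consider a sketch of the classic crib attack: because the ciphertext letter can never equal the plaintext letter, an attacker who guesses a plaintext fragment (a “crib”) can immediately rule out every alignment where crib and ciphertext agree in some position. (The intercept and crib below are made up for illustration; this is not an Enigma implementation.)

```python
def possible_positions(ciphertext, crib):
    """Offsets at which the crib could align with the ciphertext,
    given a cipher that never maps a letter to itself (as Enigma)."""
    n, m = len(ciphertext), len(crib)
    return [i for i in range(n - m + 1)
            if all(c != p for c, p in zip(ciphertext[i:i + m], crib))]

# Hypothetical intercept and guessed plaintext fragment: offset 3 is ruled
# out, because ciphertext and crib would agree there letter for letter.
print(possible_positions("ABCWETXYZ", "WET"))  # -> [0, 1, 2, 4, 5, 6]
```

Each excluded alignment shrinks the search space for free, which is exactly the opposite of what the “beneficial” property was supposed to achieve.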

Modern day examples use even more subtle weaknesses, like small systematic deviations from perfect randomness in larger amounts of encoded text.

Obscurity vs. secrecy

Security schemes almost invariably rely on secrecy of some sort, e.g. an encryption key or the password of a user account. The crucial difference is that there are some things intended, by design, to be kept secret and some that are not. (Here, I will refer to the former with variations of “secret” and the latter, when kept secret anyway, with variations of “obscure”.) If an encryption key used with the above fictional software becomes public knowledge, only one user is hurt and changing this one key allows that user to resume safe communications (but previous messages might still be problematic). Compare this with the original scenario.

Even with more user-specific obscurity, there are dangers. For instance, if a user has a combination of username and password, it is much easier to keep a portion of the whole (the password) secret than if he only has a username. Assume that we have no password and rely on an obscured username: The username is specific to him, and if it becomes public knowledge, only that one username needs to be changed; however, keeping the username obscure is much harder than keeping an additional password secret. Consider e.g. a phone call from a college computer lab to tech support and the question “What is your username?”, or how (today) basic functionality like the ability to see who is logged in to a public computer would have to be reduced, or how, on a Unix-like computer, access to /etc/passwd would have to be restricted, with negative consequences for tools drawing on the “GECOS” fields. Correspondingly, the solution is not to obscure the username but to combine a (potentially) publicly known username with a secret password.


Of course, obscuring the username in addition to a secret password can have some benefits, but these are likely to be minor and, for the purposes of security evaluations, it must be assumed that the usernames are known to an attacker. Indeed, the main benefit of keeping usernames obscure might relate to social engineering, e.g. in that a company could keep usernames known internally but obscured to third parties, which reduces the risk that a social engineer has an easy “in”. (But, again, relying on this obscurity is foolish. Consider e.g. something as trivial as a former employee remembering the usernames of some colleagues. Or take the shape of the usernames themselves: any name-based mnemonic, e.g. “MEriksson”, would considerably increase the risk of a bad guy gaining knowledge, while more random character sequences might lead to frustrated users or usernames written down next to the computer.)

Similar statements about a minor benefit from adding obscurity on top of security often apply when use is confined to a sufficiently trusted circle. (E.g. within a single company. Contrast this with the above complications of third-party users, who cannot know whether the obscured whatnot is trustworthy.) Is it worth the effort? Only very rarely, I suspect.


There are other means than passwords to keep security, including biometric data. By and large, a switch to some such means does not change much in the above. However, it is notable that some of them, including, again, biometric data, are unsuitable as a sole mechanism, for reasons similar to those applying to obscurity. For instance, gaining someone’s fingerprint without his knowledge and/or against his will is easier than gaining his password, and, unlike the password, the fingerprint cannot be changed by reasonable means. Correspondingly, for important access checks, biometric data should only be used as an addition, e.g. in that someone needs to know the right password and have the right fingerprint. (Fingerprints are weaker than passwords, but passwords are weaker than passwords + fingerprints.)
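The “addition, not replacement” principle can be sketched as follows. (Names and stored values are hypothetical; a real system would use a salted, deliberately slow password hash and a fuzzy biometric matcher rather than an exact byte comparison.)

```python
import hashlib
import hmac

# Hypothetical stored credentials, purely for illustration.
STORED_PASSWORD_HASH = hashlib.sha256(b"correct horse").digest()
STORED_FINGERPRINT_TEMPLATE = b"demo-fingerprint-template"

def check_access(password: str, fingerprint: bytes) -> bool:
    """Grant access only if BOTH factors match: something the user knows
    (the password) and something the user is (the fingerprint)."""
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).digest(), STORED_PASSWORD_HASH)
    fingerprint_ok = hmac.compare_digest(fingerprint, STORED_FINGERPRINT_TEMPLATE)
    return password_ok and fingerprint_ok
```

A stolen fingerprint alone then gains an attacker nothing, and a compromised password can still be changed; neither factor has to carry the whole burden.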