

I take some of the blame for helping to spread “no security through obscurity,” first with some talks on COPS (developed with Dan Farmer) in 1990, and then in the first edition of Practical Unix Security (with Simson Garfinkel) in 1991. None of us originated the term, but I know we helped popularize it with those items.

The origin of the phrase is arguably from one of Kerckhoffs’s principles for strong cryptography: there should be no need for the cryptographic algorithm to be secret, and it can be safely disclosed to your enemy. The point there is that the strength of a cryptographic mechanism that depends on the secrecy of the algorithm is poor; to use Schneier’s term, it is brittle. Once the algorithm is discovered, there is no (or minimal) protection left, and once broken it cannot be repaired. Worse, if an attacker manages to discover the algorithm without disclosing that discovery, she can exploit it over time before it can be fixed.
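To make that brittleness concrete, here is a minimal, hypothetical Python sketch; the names, the shift constant, and the choice of HMAC-SHA256 are my own illustrations, not anything from the text above. It contrasts a scheme whose only protection is a secret algorithm with one that follows Kerckhoffs’s principle by keeping only a key secret:

    # Illustrative sketch only, not real cryptography.
    import hmac
    import hashlib
    import os

    SECRET_SHIFT = 7  # an "obscure" design detail; the entire security of scheme A

    def obscure_encode(plaintext: bytes) -> bytes:
        """Scheme A: security rests solely on nobody knowing SECRET_SHIFT.
        Once the algorithm leaks, every past and future message is exposed,
        and the scheme cannot be repaired short of a redesign."""
        return bytes((b + SECRET_SHIFT) % 256 for b in plaintext)

    def keyed_tag(key: bytes, message: bytes) -> bytes:
        """Scheme B: the algorithm (HMAC-SHA256) is public; only the key is secret.
        A compromised key can be rotated without redesigning anything."""
        return hmac.new(key, message, hashlib.sha256).digest()

    if __name__ == "__main__":
        msg = b"attack at dawn"
        print(obscure_encode(msg))       # broken forever once SECRET_SHIFT is known
        key = os.urandom(32)             # per-deployment secret; easy to replace
        print(keyed_tag(key, msg).hex())

The only point of the sketch is the repair story: leaking SECRET_SHIFT breaks scheme A for good, while a leaked key in scheme B is handled by generating a new one.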

The mapping to OS vulnerabilities is somewhat analogous: if your security depends only (or primarily) on keeping a vulnerability secret, then that security is brittle; once the vulnerability is disclosed, the system becomes more vulnerable. And, analogously, if an attacker knows the vulnerability and hides that discovery, he can exploit it when desired.

However, the usual intent behind the current use of the phrase “security through obscurity” is not correct. One goal of securing a system is to increase the work factor for the opponent, with a secondary goal of increasing the likelihood of detecting when an attack is undertaken.

This was originally written for Dave Farber’s IP list.
