Foreword

Terms such as cryptovirology, malware, kleptogram, or kleptography may be unfamiliar to the reader, but the basic concepts associated with them certainly are familiar. Everyone knows—often from sad experience—about viruses, Trojan horses, and worms, and many have had a password "harvested" by a piece of software planted surreptitiously on their computer while browsing the Net. The realization that a public key could be placed in a virus, so that part of its payload would be to perform on the host computer a one-way operation that could be undone only with the private key held by the virus's author, was the discovery from which Malicious Cryptography sprang. Rather than describe these notions here, intriguing as they are, I'll only try to set the stage for the authors' lucid description of these and other related notions.

Superficially, information security, or information integrity, doesn't appear to be much different from other functions concerned with preserving the quality of information while in storage or during transmission. Error detecting and correcting codes, for example, are intended to ensure that the information that a receiver receives is the same as that sent by the transmitter. Authentication codes, or authentication in general, are also intended to ensure that information can neither be modified nor substituted without detection, thus allowing a receiver to be confident that what he receives is what was sent and that it came from the purported transmitter. These sound remarkably alike in function, but they are fundamentally different in ways that are at the heart of Malicious Cryptography. The greatest service this Foreword can render is to give the reader a crisp, clear understanding of the nature of this difference in order to set the stage for the book that follows.

Most system functions can be quantitatively specified and tested to verify that the specifications are met. If a piece of electronic equipment is supposed to operate within a specified range of a parameter (such as voltage, acceleration, temperature, shock, vibration, and so forth), then it is a straightforward matter to devise tests to verify that it does. Closer in spirit to information security and integrity than physical environmental specifications would be a specification of a communication system's immunity to noise or bit errors. One might specify the minimum data bandwidth for a given signal-to-noise (S/N) ratio, or the allowable bit error rate. Again it is a straightforward matter to devise tests that verify the data bandwidth or the bit error rate for a signal possessing the specified signal-to-noise ratio. Error detecting and correcting codes may be tailored to the expected statistical nature of the noise: Fire codes for burst errors, Gray codes for an angular position-reading device, and so on. But the verification that the system is meeting specifications remains straightforward and quantitative.

Security is fundamentally different from any other system parameter, however. One of the largest alarm and vault manufacturers in the U.S. discovered this in a costly example a few years ago. Vaults and safes are routinely certified for the time documents will survive undamaged in a fire—itself specified by temperature and type (oil, structural, electrical, etc.). The company had developed a new composite material that was very resistant to cutting, drilling, burning, and the like. Extensive tests had been conducted with cutting tools of all sorts, including oxyhydrogen burning bars, mechanical drills, hypervelocity air-abrasive drills, etc. Based on these results, they guaranteed that their safes and vaults made of the new material would withstand a specified minimum time to penetration. What they had overlooked was that linear cutting charges (shaped charges), widely used in the oil industry for cutting oil well casings and in the demolition business for slicing building supports, could be used to cut out a panel from the side of a safe or vault in milliseconds instead of hours. This long aside is very germane to this Foreword. The safe and vault company had measured the resistance of their product to the attacks they anticipated would be used against it. The robbers used an entirely unexpected means to open the vault—and the company paid dearly for the oversight. Malicious Cryptography is almost entirely about doing things in completely unexpected ways in information integrity protocols.

Going back to the example with which we started, the fundamental difference between error detecting and correcting codes and authentication, both of which function to ensure the integrity of information, is that the first is pitted against nature and the other against a human adversary. Nature may be hostile: the signal-to-noise ratio may be poor, the signal may drop out for extended periods of time, other signals may randomly mask the desired signal, but nature is neither intelligent nor adaptive. A human opponent is both. He may also be interactive, probing to gain information that allows him to refine and adapt his attacks. As those of us in the information security business like to say, there is no standard attacker and no standard attack. This is in contrast with all other specifications, where standard environments, no matter how hostile or unpredictable, are the norm.

What the authors of Malicious Cryptography have done very successfully is to capture the essence of how security can be subverted in this non-standard environment. On several occasions, they refer to game theory without actually invoking the formalism of game theory—emphasizing instead the game-like setting in which security is the value of the ongoing competition between a system's designer and its attackers.

There have been many books on hacking, software subversion, network security, etc., which consist mainly of descriptions of successful attacks—some exceedingly clever and many very devious in their execution. These are similar in style and feeling to Modern Chess Openings (MCO), which every chess player knows, studies, and depends on. There are of course many possible lines of play in chess, but the several hundred openings that have stood the test of time and repeated tournament play make up the MCO. Roughly the first twenty moves or so of these openings, with promising variations, have been so thoroughly analyzed and understood that it is rare indeed for an opening not in the MCO to succeed in match play. A similar situation holds for the end game—not that the endings are so cataloged and restricted, but rather that the game has simplified to the point where an almost counting-like analysis reveals the outcome to a knowledgeable player. Masters will resign a game as lost at a point where a less experienced player may not even be able to see who has the advantage. Just as most books on hacking recount one clever attack after another, MCO recounts one opening after another, with an ! or !! in the annotation to flag a particularly brilliant move. I almost expect to find an exclamation mark in the margin of most books on software subversion when the deception on which a particular protocol failure turns is revealed.

The middle game in chess, though, must be guided by general principles, since the number of lines of play—the attacks and counterattacks—between two masters is virtually unlimited. So it is with information security protocols and cryptosystems. The possibilities are virtually unlimited, so general principles must guide both the system designer and the counter-designer: the attacker seeking to exploit hidden weaknesses in the design, and the designer seeking to prevent such attacks or, failing that, to detect them when they occur. Malicious Cryptography pioneers in motivating and clearly enunciating some of these principles.

Cryptography, authentication, digital signatures, and indeed virtually every digital information security function depend for their security on pieces of information known only to a select company of authorized insiders and unknown to outsiders. Following the usual convention in cryptography, we will refer to this privileged information as the key, although in many situations the only thing it has in common with the usual notion of a cryptographic key is that it is secret from all but a designated select few. It may well be that no individual knows the key, but that a specified set of them have the joint capability either to recover it (shared secret schemes) or to jointly execute a function that in all probability no outsider or proper subset of them can (shared capability schemes). It is almost always the case that this secret piece of information is supposed to be chosen randomly—from a specified range of values and with a specified probability distribution, generally the uniform distribution. The assumption is that this ensures that an unauthorized user will have no better chance of discerning the secret key than the probability that the same key would be drawn in an independent drawing of a new value under the same conditions. It is also generally assumed that only the person choosing the random number knows it. In fact, he may share it with someone else at the time it is drawn, or they may have chosen the number in advance of the supposed drawing. In the most extreme case it may be dictated by some other participant and not chosen by the person supposed to be choosing it at all. Every one of these surreptitious variants has been the basis for serious subversions of information integrity and security protocols. One of the central themes in Malicious Cryptography is the mischief that is possible if these conditions are not met; in other words, if the "random" value is not random in the sense supposed.
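
A minimal sketch may make the shared-secret idea concrete. The fragment below uses the simplest XOR-based splitting; the 16-byte key and the three parties are illustrative assumptions of the sketch, not details of any protocol discussed in the book.

```python
# Minimal XOR-based secret sharing: no proper subset of shares reveals
# anything about the key, but all shares together recover it exactly.
import secrets

def xor_all(chunks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out

def split(key: bytes, parties: int) -> list[bytes]:
    """Give each party a uniformly random share; the last share is chosen
    so that the XOR of all shares equals the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(parties - 1)]
    last = bytes(k ^ x for k, x in zip(key, xor_all(shares)))
    return shares + [last]

key = secrets.token_bytes(16)          # illustrative 16-byte secret key
shares = split(key, 3)                 # illustrative three-party split
assert xor_all(shares) == key          # all three shares together recover it
# Any one or two of the shares are statistically independent of the key.
```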

Since security or integrity is directly measured by the probability that the secret key can be discovered (computed) by unauthorized cabals of attackers, the information content of the key (roughly speaking, the size of the random number) must be great enough that it is computationally infeasible to simply try all possible values—what is known as a brute-force key space search. But this means that it is then computationally infeasible for a monitor to tell whether the random values produced were actually chosen at random, as supposed, or not. This is at the heart of subliminal channels, for example. The subliminal transmitter and receiver secretly share information about the bias imposed on the selection of the session keys. This enables them to communicate covertly within the overt communications, while it remains computationally infeasible (impossible?) for a monitor to detect a bias in the session key selection process, and hence impossible for him to detect either the presence or the use of a subliminal channel.
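
To make the mechanism concrete, here is a minimal sketch of one way such a bias could be imposed: the transmitter rejection-samples genuinely random candidate keys until one encodes the desired subliminal bit under a secret shared with the receiver. The shared constant, hash, and key size are assumptions of the sketch, not details taken from the book.

```python
# Leaking one bit per "random" session key by biased selection.
import hmac, hashlib, secrets

SHARED = b"bias-secret-known-only-to-transmitter-and-receiver"  # hypothetical

def subliminal_bit(session_key: bytes) -> int:
    """Bit that the shared secret associates with a given session key."""
    return hmac.new(SHARED, session_key, hashlib.sha256).digest()[0] & 1

def choose_session_key(bit_to_leak: int) -> bytes:
    """Rejection-sample fresh uniform keys until one encodes the desired bit.
    The chosen key is uniform over the half of the key space encoding the bit,
    so without SHARED it is computationally indistinguishable from uniform."""
    while True:
        candidate = secrets.token_bytes(16)
        if subliminal_bit(candidate) == bit_to_leak:
            return candidate

# The transmitter leaks the bit 1 inside an apparently random session key...
key = choose_session_key(1)
# ...and the subliminal receiver, who knows SHARED, recovers it.
assert subliminal_bit(key) == 1
```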

The dilemma is that if the key is large enough to be secure, it is also large enough to make it impossible to detect a bias in the selection process. It therefore becomes possible to hide information in the keys, to communicate other keys subliminally, to make it computationally feasible for designated receivers to perform a key space search that remains computationally infeasible for outsiders, to subvert information integrity protocols from within, and so on. The list of possible deceptions is virtually unlimited, and the authors of Malicious Cryptography have exploited many of these in innovative ways.

In information integrity protocols nothing can be taken for granted, i.e., nothing can be assumed that cannot be enforced. If the protocol calls for a number to be chosen from a specified range using a particular probability distribution, then the assumption must be that it isn't, unless the other parties to the protocol can force it to be so through a secondary protocol. Otherwise you must assume it could be chosen from a restricted range or using a different probability distribution, or that it was chosen earlier and shared with persons assumed not to know it, or that it isn't being selected at random at all by the person supposed to be choosing it, or that it is dictated to him by another party not even considered in the protocol. Several of the subversions described in Malicious Cryptography depend on this ability to undetectably hide information in keys. The point germane to this Foreword, though, is that it is the general principle that is vital for both the designer and the counter-designer to keep in mind. There are interactive protocols to ensure that the objectives of randomness are met, as sketched below. Those protocols are not the subject of Malicious Cryptography, but they are made all the more important by the weaknesses it exposes.
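
The simplest such interactive protocol is commit-then-reveal coin flipping. The sketch below is illustrative only; the hash, nonce size, and share length are assumptions of the sketch, and the point is merely that neither party can bias the joint value so long as the other chooses honestly.

```python
# Commit-then-reveal joint randomness: the result is (close to) uniform
# as long as at least one of the two parties chooses its share honestly.
import hashlib, secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, nonce) hiding the value until it is revealed."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + value).digest(), nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    return hashlib.sha256(nonce + value).digest() == commitment

# Phase 1: both parties choose shares and exchange only the commitments.
a_share, b_share = secrets.token_bytes(16), secrets.token_bytes(16)
a_commit, a_nonce = commit(a_share)
b_commit, b_nonce = commit(b_share)

# Phase 2: shares are revealed and checked against the earlier commitments,
# so neither party can adapt its share after seeing the other's.
assert verify(a_commit, a_nonce, a_share)
assert verify(b_commit, b_nonce, b_share)

# The agreed "random" value: uniform if either share was chosen uniformly.
joint = bytes(x ^ y for x, y in zip(a_share, b_share))
```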

There are other examples, though, in which no means is known to enforce the desired outcome. Several protocols call for a public modulus to be the product of two secret primes chosen so as to make it computationally infeasible to factor the modulus—usually only a function of the size of the factors, although in some protocols the factors must also satisfy some number-theoretic side condition, such as belonging to a particular residue class. It is possible to work a variety of mischiefs if a modulus that is the product of more than two prime factors can be passed off as the product of only two. In particular, a subliminal channel becomes possible with the desirable feature that, while the subliminal receiver can receive subliminal messages sent by the transmitter, he cannot falsely attribute a forged message to the transmitter. It is only polynomially difficult to distinguish between primes and composite numbers. But so far as is known it is just as hard to tell whether a composite number has three or more prime factors as it is to factor the number itself! In the absence of an interactive protocol to ensure that a modulus has two and only two prime factors, deceptions that depend on the existence of three or more factors remain a possibility. Deceptions of this sort do not appear in Malicious Cryptography and are mentioned here only to illustrate that not all general principles for deception have solutions available to the designer at the moment.
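
A toy illustration of that asymmetry, with deliberately tiny primes that prove nothing about real moduli: a polynomial-time compositeness test such as Miller-Rabin readily certifies that both a two-factor and a three-factor modulus are composite, but it gives no handle on how many prime factors either one conceals.

```python
# Miller-Rabin tells us "composite," not "how many prime factors."
import random

def probably_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test (polynomial time)."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

p, q, r = 101, 103, 107                 # toy primes, for exposition only
n_two, n_three = p * q, p * q * r       # two-factor vs. three-factor modulus

# A monitor can certify in polynomial time that both moduli are composite...
assert not probably_prime(n_two) and not probably_prime(n_three)
# ...but at realistic sizes, deciding whether a modulus has exactly two prime
# factors is, so far as is known, as hard as factoring it.
```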

Malicious Cryptography is a remarkable book; remarkable for what it attempts and remarkable for what it achieves. The realization that cryptography can be exploited to achieve malicious ends as easily as it can to achieve beneficial ones is a novel and valuable insight—to both designers and counter-designers of information security and integrity protocols.

Gus Simmons

September, 2003
