Chapter 7. Cryptography

Cryptography may be a science unto itself, but it also plays a major role in the science of cybersecurity. Bruce Schneier described it this way: “Traditional cryptography is a science—applied mathematics—and applied cryptography is engineering.” Gauss famously called mathematics “the queen of the sciences.” As in other sciences, there is pure mathematics (pursued with no specific application in mind) and applied mathematics (the application of mathematical knowledge to other fields and practical problems).

Whether or not cryptography is a science, there is value in looking at how to use the scientific method to evaluate the design and application of cryptography. In this chapter, we will look at provably secure cryptography. However, those proofs have limitations because they deal with very specific attacks. And despite provable security, people break or find flaws in cryptographic systems all the time. Often they are broken because of flaws in implementation, a common and well-documented cause. Cryptographic systems also suffer from weaknesses outside the cryptography itself, such as cryptographic keys left unsecured in memory, careless operating system practices, and side-channel attacks (information leaked by the physical hardware running the cryptography).

Though there are open problems in the mathematical aspects of cryptography, you are more likely to be interested in ways to use cybersecurity science to evaluate and improve products and services. So, in this chapter we will set aside the fundamental mathematical construction of cryptographic algorithms and focus on their implementation and performance.

An Example Scientific Experiment in Cryptography

For an example of cybersecurity science in cryptography, look at the paper “SDDR: light-weight, secure mobile encounters” by Lentz et al. (2014). In the following abstract, you can see that an implied hypothesis is that SDDR, the authors’ new protocol for discovery of nearby devices, is provably correct and at least as energy-efficient as other proven cryptographic protocols. These developers took a two-pronged approach in their evaluation with both formal proof of security and experimental results of its energy efficiency using a research prototype. This combined approach appeals to a wider audience than, say, a formal proof alone. Note that the abstract highlights “four orders of magnitude more efficient” in energy-efficiency and “only ~10% of the battery,” though readers must draw their own conclusions about the impressiveness of those results.

Abstract from an experiment of cybersecurity science in cryptography

Emerging mobile social apps use short-range radios to discover nearby devices and users. The device discovery protocol used by these apps must be highly energy-efficient since it runs frequently in the background. Also, a good protocol must enable secure communication (both during and after a period of device co-location), preserve user privacy (users must not be tracked by unauthorized third parties), while providing selective linkability (users can recognize friends when strangers cannot) and efficient silent revocation (users can permanently or temporarily cloak themselves from certain friends, unilaterally and without re-keying their entire friend set).

We introduce SDDR (Secure Device Discovery and Recognition), a protocol that provides secure encounters and satisfies all of the privacy requirements while remaining highly energy-efficient. We formally prove the correctness of SDDR, present a prototype implementation over Bluetooth, and show how existing frameworks, such as Haggle, can directly use SDDR. Our results show that the SDDR implementation, run continuously over a day, uses only ∼10% of the battery capacity of a typical smartphone. This level of energy consumption is four orders of magnitude more efficient than prior cryptographic protocols with proven security, and one order of magnitude more efficient than prior (unproven) protocols designed specifically for energy-constrained devices.

As a practitioner, how would you apply these research results or ideas if you saw this paper online or heard about it from a colleague? If you develop smartphone applications, you might be interested in incorporating this protocol into your own product. Thankfully, all of the research prototype code for SDDR is available on GitHub. Or, you may already have a solution of your own and wish to compare its performance against SDDR. Or maybe you’re curious or skeptical and want to replicate or extend the experimental results from this paper.

Experimental Evaluation of Cryptographic Designs and Implementation

One of the most common experimental evaluations in cryptography is of the performance of cryptographic algorithms. Cryptographers and practitioners compare algorithms in order to understand the algorithms’ strengths, weaknesses, and features. Those results inform future cryptographic design and inform the choice of algorithm to use in a new cybersecurity solution. Figure 7-1 illustrates a comparison of throughput for six cryptographic algorithms. Other performance metrics in cryptography commonly include encryption time and power consumption. These results come from running the algorithm and measuring the relevant metric, perhaps with different input file sizes. It is critically important to report the type of hardware used for the experiment in these studies since hardware specifications, especially processors and memory, strongly influence cryptographic performance.

Figure 7-1. Throughput (megabytes/second) of six symmetric encryption algorithms from “Evaluating The Performance of Symmetric Encryption Algorithms” (2010)
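Such measurements are straightforward to reproduce. Below is a minimal sketch of how a throughput comparison might be scripted in Python; it assumes the third-party pyca/cryptography package, and the algorithm choices, buffer size, and iteration count are illustrative placeholders rather than the setup used in the cited study.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
data = os.urandom(16 * 1024 * 1024)   # 16 MB of random "plaintext"

# Factories so each timing run starts from a fresh cipher context.
candidates = {
    "AES-256-CTR": lambda: Cipher(algorithms.AES(key), modes.CTR(nonce)),
    "ChaCha20":    lambda: Cipher(algorithms.ChaCha20(key, nonce), mode=None),
}

def throughput_mb_per_sec(make_cipher, data, iterations=20):
    """Encrypt `data` repeatedly and report average throughput in MB/s."""
    start = time.perf_counter()
    for _ in range(iterations):
        enc = make_cipher().encryptor()
        enc.update(data)
        enc.finalize()
    elapsed = time.perf_counter() - start
    return (len(data) * iterations) / (1024 * 1024) / elapsed

for name, make_cipher in candidates.items():
    print(f"{name}: {throughput_mb_per_sec(make_cipher, data):.1f} MB/s")
```

As the chapter notes, any report of such numbers should also record the processor, memory, and hardware acceleration (e.g., AES-NI) of the test machine, since those details dominate the results.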

There is value in experimental evaluation beyond a comparison of the algorithms themselves. Implementations and cryptography in practice can also be evaluated experimentally. One could design an experiment to measure the lifetime of cryptographic keys in memory on different operating systems, or the usability of encryption features in email (see the case study in Chapter 11).

There are many other ways to evaluate cryptographic designs and implementations. Cryptanalysis attacks are used to evaluate the mathematical construction and practical implementation of cryptographic algorithms. Here are some common cryptographic attacks that can be used in experimentation:

Known-plaintext attack

The attacker has samples of plaintext and the corresponding ciphertext, without having chosen the plaintext.

Chosen-ciphertext attack

The attacker obtains the plaintexts of arbitrary ciphertexts of his own choosing.

Chosen-plaintext attack

The attacker obtains the ciphertexts of arbitrary plaintexts of her own choosing.

Brute-force attack

The attacker calculates every possible combination of input (e.g., passwords or keys) and tests to see if each is correct (a minimal sketch of this attack follows the list).

Man-in-the-middle attack

The attacker secretly relays and possibly alters the communication between two parties who believe they are communicating directly with each other.
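To make the brute-force attack concrete, here is a minimal sketch that recovers a single-byte XOR key by exhaustive search. The toy cipher and the plaintext-recognition heuristic are illustrative assumptions, not an attack on any real algorithm; real brute-force attacks work the same way, just over astronomically larger keyspaces.

```python
def xor_encrypt(data: bytes, key: int) -> bytes:
    """Toy 'cipher': XOR every byte with a single-byte key (XOR is its own inverse)."""
    return bytes(b ^ key for b in data)

def looks_like_english(text: bytes) -> bool:
    """Crude recognizer: printable ASCII with at least a couple of spaces."""
    return all(32 <= b < 127 for b in text) and text.count(b" ") >= 2

def brute_force(ciphertext: bytes):
    """Try all 256 possible keys and keep the plausible plaintexts."""
    hits = []
    for key in range(256):
        candidate = xor_encrypt(ciphertext, key)
        if looks_like_english(candidate):
            hits.append((key, candidate))
    return hits

ciphertext = xor_encrypt(b"attack at dawn on the east gate", key=0x5A)
for key, plaintext in brute_force(ciphertext):
    print(hex(key), plaintext)
```

The same structure, pointed at a modern 128-bit key instead of an 8-bit one, is why key length matters: the loop would need on the order of 2^128 iterations.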

Cryptography is an answer to the problem of data protection. If you were given a new cybersecurity solution, say software for full disk encryption, how would you evaluate its effectiveness at doing what it claims and how would you validate whether you were any more secure from using it? These questions start to blur the line between cryptography and software assurance, not to mention risk management.

If you implement or test cryptography, keep several things in mind. First, in cryptography, Kerckhoffs’s principle states that a cryptosystem should be secure even if everything about the system (except the key) is public knowledge. One implication of this principle is that cryptographic algorithms should be subject to peer review, not kept secret. Second, because cryptography implementers are often not cryptographers themselves, errors and shortcuts in implementation can weaken the cryptography.1 Third, you should also pay attention to all the details of the protocol specification and check the assumptions attached to the cryptographic and protocol designs. Security assumptions are discussed in the next section. Finally, be aware that we are rarely sure if cryptography is completely secure. Acceptance of cryptography generally comes from long periods of failed attacks, and experimentation can uncover such cryptographic weaknesses.

Provably Secure Cryptography and Security Assumptions

In 1949, mathematician and father of information theory Claude Shannon published “Communication Theory of Secrecy Systems” (building on his earlier classified memorandum “A Mathematical Theory of Cryptography”) and proved the perfect secrecy of the one-time pad. This notion of perfect secrecy means that the ciphertext leaks no information about the plaintext: the probability of any particular plaintext is the same before and after an attacker sees the ciphertext. The phrase perfect secrecy requires some explanation. Information theory is a collection of mathematical theories about the methods for coding, transmitting, storing, retrieving, and decoding information. Perfect secrecy is an information theoretic notion of security, which means that you can use mathematical theories to prove it.
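To make this concrete, here is a minimal one-time pad sketch in Python. The messages are illustrative; the scheme is only perfectly secret if the key is truly random, as long as the message, kept secret, and never reused.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte (the same operation encrypts and decrypts)."""
    assert len(key) == len(data), "the key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # truly random, used once, kept secret
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message    # decryption recovers the plaintext

# Perfect secrecy in action: for ANY other message of the same length there is
# some key that "explains" the same ciphertext, so the ciphertext by itself
# reveals nothing about which message was actually sent.
decoy = b"retreat sunset"                 # hypothetical alternative, same length
decoy_key = bytes(c ^ d for c, d in zip(ciphertext, decoy))
assert otp(ciphertext, decoy_key) == decoy
```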

Note

Information theory can even be used to describe the English language. Rules of grammar, for example, decrease the entropy (uncertainty) of English. For more, see Shannon’s paper “Prediction and Entropy of Printed English.”

As a practitioner, it is important to understand that provable security in information theory and cryptography is not an absolute statement of security. Security proofs are conditional: security is guaranteed only as long as the underlying assumptions hold. Provable security is nonetheless incredibly important because it brings a quantitative nature to security, enabling protocol designers to know precisely how much security they get with a protocol.

Take SSH as an example. In 2002, three researchers conducted the first formal security analysis of the SSH Binary Packet Protocol (BPP) using the provable security approach.2 Yet, other researchers later showed an attack on SSH BPP because the proven security model made some assumptions about the real-world system executing the decryption.3 A very good research question comes from this example: how do we know that “fixing” SSH actually improves security? TLS/SSL, too, has been studied, and by 2013 there were papers showing that most unaltered full TLS ciphersuites offer a secure channel. The important words are most, unaltered, and full. No security analysis has yet shown that TLS is secure in all situations.

A security model is the combination of a trust model and a threat model that together address the set of perceived risks. Every cybersecurity design needs a security model. You cannot talk about the security of a system in a vacuum without also talking about the threats, risks, and assumptions of trust. The work lies in determining what assumptions to include in a security model and how closely the theoretical model must match the practical implementation in order to capture the significant attack vectors. To get you started thinking of assumptions on your own, here are a few potential assumptions about threats or attackers’ technical abilities that could be made for a particular situation or environment:

  • The adversary can read and modify all communications.

  • The adversary has the ability to generate messages in a communication channel.

  • The adversary has no ability to tamper with communication between the honest parties.

  • The adversary has the ability to spoof its identity.

  • The adversary has the ability to leak from each key a few bits at a time.

  • The adversary does not have access to the master key.

  • The adversary has the ability to predict operations costs.

  • The adversary has unlimited computing power.

  • The adversary can mount login attempts from thousands of unique IP addresses.

  • The adversary cannot physically track the mobile users.

Whenever you make a security claim, also describe any and all assumptions you make about the threat. It is disingenuous to assume an unrealistically all-powerful adversary, and just as problematic to underestimate the capabilities of possible adversaries. In the next section we will talk about the Internet of Things (IoT); imagine that you are designing the security for smart clothing such as a shirt with movement sensors woven into the fabric. Here is one example adversarial model for that situation:

We assume that the adversary is interested in detecting a target’s movement at all times, thereby violating a user’s expected privacy. We assume that the adversary does not have physical access to his target’s shirt. We assume that the adversary can purchase any number of identical shirts to study. We assume that any other shirt may be corrupted and turned into a malicious item controlled by the adversary. We assume that the adversary has the ability to infer all of the IoT items that belong together or to the same user.

This collection of assumptions bounds what we explicitly believe the adversary can and cannot do. The model usually contains only those motivations, capabilities, or limitations of the adversary pertinent to the security offered by the proposed solution. You might go on to suggest that the shirt needs SSL security because you can construct an attack that would otherwise succeed given the stated adversarial model.

Not only do attackers target cryptographic implementations, they also target the written and unwritten assumptions. Note that security models are not limited to cryptography. It is quite common for cybersecurity papers to include an entire section on the threat model used in the paper. The threat model narrows the scope of the scenario and may also limit the applicability of the attack or defense being presented. These can be very specific statements, such as “we assume that the adversary does not have any privileged access on any of the key network entities such as servers and switches, so she is unable to place herself in the middle of the stream and conduct man-in-the-middle attacks.”

A well-defined security model benefits both the investigator conducting the assessment and the consumer of the final assessment. The model bounds the experiment and allows you to focus on a confined problem. It also establishes relevancy for the end user or consumer of your product.

Cryptographic Security and the Internet of Things

Internet-enabled devices have proliferated more quickly than security measures for them. RSA Conference is an annual information security event with a strong emphasis on cryptography. It is one of the largest events in the industry, with around 33,000 attendees and more than 490 sessions in 2015. Presentations follow industry trends, and there has been a clear rise recently in talks on the Internet of Things (IoT). Kaspersky Labs summarized the trend in 2015 as the “Internet of Crappy Things,” highlighting a string of new attacks on home automation and other consumer devices.

Small resource-constrained devices such as insulin pumps require algorithms that respect their computing power, memory requirements, and physical size. In a world of desktops, laptops, and even smartphones, we could implement RSA, AES, and other mainstream algorithms with an acceptable burden on the device. But what about a smartcard, smart meter, medical implant, or soil sensor? What you can fit on the device and what the device needs are very different. IoT devices need efficient, lightweight cryptographic implementations that are also trustworthy. There are at least 24 lightweight block ciphers today designed with these constraints in mind. The National Security Agency even proposed two families of lightweight block ciphers called Simon (optimized for hardware) and Speck (optimized for software).4 You can find a list of lightweight ciphers and their technical features (e.g., block size) on the CryptoLUX Wiki.
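To give a sense of how compact these designs are, here is a sketch of Speck64/128 encryption in Python, following the ARX round structure and parameters described in the NSA paper (32-bit words, rotations of 8 and 3, 27 rounds). The key and plaintext words below are taken from the paper’s Speck64/128 example, so the output can be checked against the published test vector; treat this as an illustrative sketch, not a vetted or side-channel-resistant implementation.

```python
WORD_BITS, MASK = 32, 0xFFFFFFFF
ALPHA, BETA, ROUNDS = 8, 3, 27            # Speck64/128 parameters

def ror(x, r): return ((x >> r) | (x << (WORD_BITS - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (WORD_BITS - r))) & MASK

def speck_round(x, y, k):
    """One ARX round: rotate/add/XOR-key on x, then rotate/XOR on y."""
    x = ((ror(x, ALPHA) + y) & MASK) ^ k
    y = rol(y, BETA) ^ x
    return x, y

def expand_key(k0, l_words):
    """Key schedule: reuse the round function with the round counter as the 'key'."""
    k, l = [k0], list(l_words)            # l_words = (l0, l1, l2)
    for i in range(ROUNDS - 1):
        new_l, new_k = speck_round(l[i], k[i], i)
        l.append(new_l)
        k.append(new_k)
    return k

def encrypt_block(x, y, round_keys):
    """Encrypt one 64-bit block given as two 32-bit words (x is the upper word)."""
    for k in round_keys:
        x, y = speck_round(x, y, k)
    return x, y

round_keys = expand_key(0x03020100, (0x0B0A0908, 0x13121110, 0x1B1A1918))
print([hex(w) for w in encrypt_block(0x3B726574, 0x7475432D, round_keys)])
```

The entire cipher fits in a few dozen lines and uses only additions, rotations, and XORs, which is precisely what makes this family attractive for constrained hardware and software.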

Some have proposed offloading cryptographic functions from resource-constrained devices. A dedicated microcontroller might increase performance and reduce the load on the main CPU. It might also be possible to outsource cryptography to a service on another device, as long as certain assurances were made.

Experimentation in cybersecurity science can be used to measure and evaluate design choices for a specific device or class of devices. Say you are investing in a home automation system and want to add tiny soil moisture sensors that alert you when to water your plants. How would you decide how much wireless security was necessary, and whether the sensors could handle it? Deciding on the amount of security takes us back to the discussion in the previous section about threat models and assumptions. You must think about the different security threats and their associated likelihood. For example, do you care about physical attacks like node capture, impersonation attacks, denial-of-service attacks, replay attacks, or spoofing attacks? If you do care, what is the likelihood of them, and what is the cost of damage if any occur? We’ve just outlined a risk analysis that isn’t very technical but is nonetheless critical. On the technical side, how could you also determine how well the sensors could even perform the desired level of cryptography? These technical considerations are reflected in the case study in the next section.

Another example of IoT devices to consider is smart utility sensors such as water and electric meters. These devices are being offered (sometimes mandated) for both consumers and businesses. Smart meters effectively increase the attack surface because the devices are networked together or back to the provider. Today, consumers have little choice but to treat such devices as black boxes without knowledge of how they work. Experimentation and evaluation with the scientific method will allow users to determine the cybersecurity assurances and risks associated with smart meters.

Case Study: Evaluating Composable Security

Background

The security of individual electronic devices has evolved significantly over time, followed by the growth of security for systems of devices such as corporate networks. A logical next question is “what is the impact to systemwide security given the assembly of individual components or subsystems?” This area of study is called composable security. Here’s one example. If I trust the security of my Fitbit fitness tracker on its own, and trust the security of my iPhone on its own, and trust the security of the WiFi in my home, are any of those individual devices or the group of them any less (or more) secure because of their interconnectedness? The unexpected properties that arise or emerge from the interaction between the components are sometimes called emergent properties. Emergent properties are value-neutral; they are not inherently positive or negative, but because of their unexpected nature we often think of them as harmful.

Tip

For more coverage on composable security, including a discussion on challenges of emergent phenomena to risk assessment, see Emergent Properties & Security: The Complexity of Security as a Science by Nathaniel Husted and Steven Myers.

Both cybersecurity offense and defense can have emergent properties. Emergent attacks are created because a group of individual agents form a system that achieves an attack made possible by the collaboration. Distributed denial-of-service (DDoS) is an example of this; a single bot does not have much effect, but the combined forces of many bots in a botnet produce devastating results. You could run scientific experiments to measure the spectrum of these emergent effects as the size of the attacking swarm grows. Emergent defenses also arise due to the composition of some property of a group. Anonymity is an emergent property that is not apparent in isolation. The anonymity of Tor is a property that manifests as the system grows; a single Tor node does not achieve the same level of defense as a large collection of nodes. This fact is easy to demonstrate scientifically by showing the ability to violate anonymity in a one-node Tor network and the difficulty in doing so in a ten-node Tor network.

There is a very special instantiation of composable security for cryptographic protocols, known as universal composability. Universally composable cryptographic protocols remain secure even if composed with other instances of the same or other protocols. In 2008, scholars presented a security analysis of the Transport Layer Security (TLS) protocol under universal composability.5 In contrast, there are multi-party cryptographic protocols that are provably secure in isolation but are not secure when executed concurrently in larger systems. Further, there are classes of functions that cannot be computed in the universally composable fashion.

A New Experiment

The evaluation of composable security and emergent properties remains an open problem, but let us consider a hypothetical experiment to test a particular use case. This problem deals with the establishment of secure communication paths in IoT networks.6 Rather than relying on theoretical analysis, we focus on practical feasibility and an experimental setup for the verification of runtime behavior. Here is a hypothesis:

Three ad hoc IoT devices can establish secure communication paths whose composed communication security is equivalent to the security of two.

The intuition here is that we want to show (a) that two ad hoc devices can establish secure communications and (b) that by adding a third, communications are no less secure.

Note

The probability of establishing communication between any pair of nodes in an ad hoc network is an emergent property of random graphs and has been studied since the 1960s.

As a practical experiment, it is acceptable to select three specific IoT devices you care about for the test as opposed to trying to prove a theoretical result that holds for any three devices. This approach does carry a limitation that the result may not be generalizable, and it is worth noting that when sharing your results. You should also consider using three devices of the same type, since mixing device types introduces complexity and additional variables into the experiment. For this study, let’s use three Pinoccio Scouts. These tiny and inexpensive devices are ideal because they natively support mesh networking and are built with open source software and hardware. Scouts use the Lightweight Mesh protocol, and that protocol supports two encryption algorithms: hardware accelerated AES-128 and software XTEA. However, the entire network uses the same shared encryption key by default.

An important result that you could demonstrate deals with key management. Obviously, having the same encryption key for all nodes leads to a rejection of the hypothesis because compromising the communication key in a two-node network decreases the composed security of a three-node network. You would have to implement a key exchange protocol that doesn’t rely on external public key infrastructure and respects the limited memory of the nodes and their inability to store keys for a large number of peers. Using your knowledge of cybersecurity, you also want to consider potential ways that secure communication might be compromised: physical layer vulnerabilities, link layer jamming, passive eavesdropping, spoofing attacks, replay attacks, routing attacks, flooding attacks, and authentication attacks. It is at your discretion which of these you think need to be addressed in the security demonstration. Furthermore, for each one, you must now think about the difference between two-node networks and three-node networks, and the security differences between those cases. Unlike the shared encryption key, perhaps you argue that jamming attacks are no more disruptive to the secure communication paths of two nodes than three.
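As one possible direction for the key management piece, here is a sketch of how two nodes could derive a pairwise 128-bit link key with an ephemeral Diffie-Hellman exchange (X25519) plus a key derivation function, requiring no public key infrastructure and only one stored key per active peer. It assumes the pyca/cryptography package, and it deliberately glosses over authentication: without an out-of-band pairing step to verify the exchanged public keys, the exchange is still open to a man-in-the-middle attack.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_link_key(my_private_key, peer_public_key) -> bytes:
    """Derive a 128-bit AES link key from an X25519 shared secret."""
    shared_secret = my_private_key.exchange(peer_public_key)
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"lwm-pairwise-link-key").derive(shared_secret)

# Each node generates an ephemeral key pair and sends its public key
# to the other over the (insecure) mesh network.
node_a = X25519PrivateKey.generate()
node_b = X25519PrivateKey.generate()

# Both sides independently arrive at the same pairwise key.
key_at_a = derive_link_key(node_a, node_b.public_key())
key_at_b = derive_link_key(node_b, node_a.public_key())
assert key_at_a == key_at_b
```

Whether the Scouts can afford the elliptic-curve computation at all is itself an experimental question, which is exactly the kind of measurement this hypothetical study is designed to produce.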

How to Find More Information

Research in applied cryptography is presented at a large number of mathematics and cybersecurity conferences, including the USENIX Security Symposium, the International Conference on Applied Cryptography and Network Security (ACNS), and the International Cryptology Conference (CRYPTO). Likewise, research and experimental results appear in an assortment of journals and magazines, notably the Journal of Cryptology and IEEE Transactions on Information Forensics and Security. The Cryptology ePrint Archive also provides an electronic archive of new results and recent research in cryptography.

Conclusion

In this chapter, we looked at how to use the scientific method to evaluate the design and application of cryptography. The key takeaways are:

  • One of the most common experimental evaluations in cryptography is the performance of cryptographic algorithms, including encryption time and power consumption.

  • Provably secure cryptography and security proofs are conditional and are not absolute guarantees of security. Security is guaranteed only as long as the underlying assumptions hold.

  • A security model is the combination of a trust model and a threat model that together address the set of perceived risks. Every cybersecurity design needs a security model.

  • Scientific evaluation of cryptographic algorithms is important in resource-constrained IoT devices.

  • The evaluation of composable security and emergent properties remains an open problem. We looked at a hypothetical experiment to test secure communications in IoT networks.

References

  • Ran Canetti. Universally Composable Security: A New Paradigm for Cryptographic Protocols, Cryptology ePrint Archive, Report 2000/067, (July 16, 2013)

  • Bruce Schneier and Niels Ferguson. Cryptography Engineering: Design Principles and Practical Applications (Indianapolis, IN: Wiley, 2010)

  • Al Sweigart. Hacking Secret Ciphers with Python (Charleston, SC: CreateSpace Independent Publishing, 2013)

1 As an example, an old version of GNU Privacy Guard (GPG) contained a flaw in the ElGamal crypto algorithm. The developer had this comment in the source code: “I don’t see a reason to have a x of about the same size as the p. It should be sufficient to have one about the size of q or the later used k plus a large safety margin. Decryption will be much faster with such an x.”

2 Mihir Bellare, Tadayoshi Kohno, and Chanathip Namprempre. 2002. “Authenticated encryption in SSH: provably fixing the SSH binary packet protocol.” In Proceedings of the 9th ACM conference on Computer and communications security (CCS ’02), Vijay Atluri (Ed.). ACM, New York, NY, USA, 1−11.

3 Martin R. Albrecht, Kenneth G. Paterson, and Gaven J. Watson. 2009. “Plaintext Recovery Attacks against SSH.” In Proceedings of the 2009 30th IEEE Symposium on Security and Privacy (SP ’09). IEEE Computer Society, Washington, DC, USA, 16−26.

4 Ray Beaulieu, Douglas Shors, Jason Smith, Stefan Treatman-Clark, Bryan Weeks, and Louis Wingers. The SIMON and SPECK Families of Lightweight Block Ciphers. Cryptology ePrint Archive, Report 2013/404, 2013.

5 Sebastian Gajek, Mark Manulis, Olivier Pereira, Ahmad-Reza Sadeghi, and Jorg Schwenk. “Universally Composable Security Analysis of TLS—Secure Sessions with Handshake and Record Layer Protocols.” In Proceedings of the 2nd International Conference on Provable Security (ProvSec ’08), Joonsang Baek, Feng Bao, Kefei Chen, and Xuejia Lai (Eds.). Springer-Verlag, Berlin, Heidelberg, 313-327, 2008.

6 This problem is based on one offered by Virgil D. Gligor in Security of Emergent Properties in Ad-Hoc Networks.
