
12. Final Summary

Stephen Haunts, Belper, Derbyshire, UK

The central reason that I wrote this book was to educate developers on why data breaches are inevitable. Although organizations protect their data centers and databases to the best of their abilities, the grim reality is that data breaches still happen, whether they are outside jobs or initiated from the inside by staff. In this final chapter, we review what we have covered throughout the book.

Accepting the inevitability of data breaches is a fundamental mindset change that developers and companies need to make. We need to focus our efforts on making sure that if a data breach does occur, it is a secure breach. By this, I mean that the sensitive data within the breach, such as personally identifiable information (PII), financial information, or anything else an attacker could use for their gain, is encrypted and the encryption keys are securely protected.

This book is aimed squarely at developers who build software for their organizations. The purpose of the book isn’t to try and turn you into a master cryptographer, but to allow you to make use of the tools that are available to you in the .NET Framework/.NET Core to add additional security to your systems.

As I was planning this book, I could have taken a few directions. I could have written a long and extensive reference manual documenting every property and method on all the relevant classes in .NET, but that would have been a very long book that replicates a lot of the documentation already available on the Internet. I don’t like books like that and instead wanted to develop a practical book that takes you through the cryptographic principles available and builds up a working sample application as the book evolves. I have purposefully tried to make this book as short, yet as information dense, as possible. As a software developer who has worked for many large enterprises, I fully understand that projects and products are developed under time and budget constraints, and it is quite common for security measures in our applications to be deprioritized in favor of more visible features. By reading this book, I hope I have given you the tools to incorporate more encryption and security into your applications quickly, and to avoid having complex security deprioritized.

Let’s go through a summary of what we have learned in this book.

Cryptography Summary

Throughout history, cryptography has played a crucial part in helping people, companies, and governments keep secrets. In the early days of cryptography, the encryption process worked on simple text-based messages, but as the digital revolution took hold, encryption came to operate on binary data. In the modern world, you cannot get through the day without cryptography playing some part in your life, whether it is securing your Wi-Fi network, protecting data on your mobile devices, or buying a product from an online retailer. With great innovation come new threats, and one of the most significant risks is data being stolen from organizations and used to impersonate people and potentially con them out of money. We explored the four main pillars of modern cryptography: confidentiality, integrity, authentication, and non-repudiation.

Confidentiality is what you associate with cryptography and encryption, where you take data and encrypt it so it is in a form that cannot be read by someone else.

Data integrity is about maintaining and proving the accuracy and the consistency of data sent between two parties. This means that if someone sends data to a third party, the individual should be able to detect if the data has been corrupted or tampered with.

Authentication is establishing the identity of a person or system sending a message. A good example is a TLS certificate on a web server, which proves the identity of the server that you wish to connect to. A cryptographic key is used to authenticate that identity; a less secure key means lower trust between the two parties. Authentication is also commonly used by everyone when they enter their username and password to gain access to a system. Your Facebook or Twitter account is an excellent example of this. To use those systems, you have to authenticate yourself with the Facebook or Twitter website to prove who you are.

Non-repudiation is proving that someone has carried out an action or signed a document. A signature on a paper contract is an excellent example of this. If a contract has been signed and witnessed, then that person cannot deny having signed the agreement.

Random Numbers

When we started our look at cryptography in .NET, we began with random numbers. Secure random numbers are essential to modern cryptography because we depend on them to create encryption keys for symmetric algorithms like AES, keys for HMACs, and salts for password hashing. The most common random number generator in .NET is the System.Random class, but while this is fine for generating a set of lottery numbers or simulating a dice roll, the results it produces are deterministic for a given seed value. The ideal random number generator to use in the .NET ecosystem is the RNGCryptoServiceProvider class, which offers a cryptographically secure means of generating non-deterministic random numbers for encryption keys.
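
As a quick reminder of what that looks like in code, here is a minimal sketch of generating secure random bytes with RNGCryptoServiceProvider (the helper class and method names are illustrative rather than taken from the book’s sample code):

using System;
using System.Security.Cryptography;

public static class RandomKeyGenerator
{
    // Generate a cryptographically secure, non-deterministic byte array,
    // suitable for use as an AES key, an HMAC key, or a password salt.
    public static byte[] GenerateRandomBytes(int length)
    {
        var randomBytes = new byte[length];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(randomBytes);
        }
        return randomBytes;
    }
}

// For example, RandomKeyGenerator.GenerateRandomBytes(32) gives a 256-bit AES key.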

Hashing and Authentication

The next primitives were hashing and authenticated hashing. Hashing is a one-way algorithm that generates a unique fingerprint (hash code) of a piece of input data. Once a hash code has been generated for a piece of data, it should be infeasible to reverse the hash to get back to the original value. To be considered a reliable and useful hash function, it must conform to four main properties:
  • The hash code must be easy to calculate for any input message.

  • You should not be able to create a message that has a specified hash code.

  • Any changes to the original message should completely change the hash code.

  • You should not be able to find two input messages that result in the same hash code.

Another way to frame the concept of a hash function is to think of it as the digital equivalent of a fingerprint for a piece of data. Once you have generated a hash code for that piece of data, the hash code is always the same if you calculate it again, unless the original data changes in any way, no matter how small that difference is.

The process of calculating a hash code (or message digest) of an item of data is straightforward in the .NET Framework or .NET Core. There are different algorithms you can use in .NET, such as MD5, SHA-1, SHA-256, and SHA-512. The properties of hashing, such as only being able to hash in one direction and the hash code being unique to a piece of data, make hashing the perfect mechanism for checking the integrity of data. This integrity checking means that when you send data across a network to someone else, you can use hashing to tell whether the original data has been tampered with or corrupted.

Before sending the data, you calculate a hash of the data to get its unique fingerprint. You then send that data and the hash to the recipient. They then compute the hash of the data they have received, and then compare it to the hash you sent. If the generated hash codes are identical, then the data has been successfully received without data loss or corruption. If the hash codes fail to match up correctly, then the data received is not the same as the data initially sent.

For the examples in this book, we used SHA-256, which generates a 256-bit (32-byte) hash code. If you need a higher strength hash, you can swap the hashing code to use SHA-512, because the coding interfaces are the same between them.
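
As a reminder, a minimal sketch of computing a hash in .NET follows; the helper class is illustrative, and it shows why swapping SHA-256 for SHA-512 is a one-line change:

using System.Security.Cryptography;

public static class HashUtil
{
    // Compute a 256-bit (32-byte) hash code for the supplied data.
    // Replacing SHA256.Create() with SHA512.Create() gives a 512-bit hash,
    // because both classes share the same HashAlgorithm interface.
    public static byte[] ComputeSha256(byte[] data)
    {
        using (var sha256 = SHA256.Create())
        {
            return sha256.ComputeHash(data);
        }
    }
}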

Authenticated Hashing

For hashing, we settled on the SHA (Secure Hash Algorithm) family of hashing functions. Their purpose is to provide integrity checking capabilities within our applications to help us detect whether data has been tampered with or corrupted over time. We then looked at satisfying another of our four pillars of cryptography by talking about authentication, which is a natural follow-on from integrity.

If you combine a one-way hash function with a secret cryptographic key, you get a hashed message authentication code (HMAC). Like a hash code, an HMAC is used to verify the integrity of a message. An HMAC also allows you to verify the authentication of that message, because only a person who knows the key can calculate the same hash of the message. The fundamental difference between a standard MD5 or SHA hash and an HMAC is that anyone can calculate an MD5 or SHA hash code and get the same result for a piece of data, whereas only an authorized individual can generate the same HMAC hash code, because they need the key that was used to create the original HMAC.
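
A minimal sketch of computing an HMAC with the HMACSHA256 class follows; the wrapper class is illustrative, and the key would be a securely generated random byte array shared only with authorized parties:

using System.Security.Cryptography;

public static class HmacUtil
{
    // Compute an HMAC-SHA256 code over the data. Only someone who holds
    // the same key can reproduce, and therefore verify, this hash code.
    public static byte[] ComputeHmacSha256(byte[] key, byte[] data)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return hmac.ComputeHash(data);
        }
    }
}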

Storing Passwords

Storing passwords in a database is a very common task for our systems to perform, but this task is so commonly done badly that it leaves the integrity of our systems exposed when passwords are leaked in a data breach. We started our exploration of securely storing passwords by examining some of the bad ways to do it. Let’s remind ourselves of these bad techniques.

The first and the worst solution is to store passwords as plaintext in a database. Doing this offers no protection at all should your database tables get leaked in a data breach. Sadly, there are a lot of old websites out there that do use this plaintext technique, but for a new system under your control, you should never do this.

The next technique that we looked at, which is better than plaintext passwords, is basic hashing of passwords. A hash is a one-way function: once a password has been hashed, you shouldn’t be able to go back the other way to recover the password. This principle works in theory, but plain hashed passwords are relatively easy to recover by performing either a brute-force attack or a rainbow table/dictionary attack. Many people use simple passwords, such as a spouse’s name or a pet’s name, which are very susceptible to attack. If you use a long and complex passphrase, you are afforded a little more protection, but not by much. Modern password cracking tools like Hashcat are designed to use the powerful graphics processing units (GPUs) built into modern graphics cards. These GPUs allow tools like Hashcat to perform billions of hashing operations per second, and as graphics cards get more powerful each year, standard hashed passwords get easier to crack.

To remedy the ease of cracking plain hashed passwords, the next technique builds on hashing by adding a salt to the password before hashing. The salt is a random piece of data that adds entropy to the password, making it much harder to brute force or attack with a dictionary. Salting passwords makes cracking them much harder to perform, but with GPU hardware power increasing every year, what is considered a safe password today could be cracked and compromised in a few short years.

The final option was a variation of the salted hash using a technique called a Password Based Key Derivation Function (PBKDF2). The main difference, compared to a SHA hash, is that you specify a “number of iterations” parameter, which is the number of times the hash is repeated. So, if that number is 10, then the password and salt combination is hashed ten times. The intention of this iteration parameter is to slow down the hashing process, which means that if someone does get a copy of your password table, instead of performing billions of password hash attempts per second, they can only attempt a drastically reduced number, making the cracking process much less desirable to a would-be hacker. The .NET class that we used to generate these PBKDF2 password hashes was Rfc2898DeriveBytes.
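
A minimal sketch of hashing a password with Rfc2898DeriveBytes follows; the helper class, the 32-byte output length, and the iteration count in the usage comment are illustrative choices rather than the book’s exact sample code:

using System.Security.Cryptography;

public static class PasswordHasher
{
    // Hash a password with PBKDF2. The salt should be freshly generated
    // per password, and the iteration count deliberately slows hashing down.
    public static byte[] HashPassword(byte[] password, byte[] salt, int iterations)
    {
        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            return pbkdf2.GetBytes(32); // a 32-byte derived hash
        }
    }
}

// For example: HashPassword(passwordBytes, salt, 100000) with a 16-byte random
// salt; store the salt and the iteration count alongside the resulting hash.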

Symmetric Encryption

We have discussed how hashing and hashed message authentication codes are one-way operations. Once you hash some data, you shouldn’t be able to reverse the hash to go back to the original data. Symmetric encryption algorithms, on the other hand, are a two-way operation where you use the same key for both encryption and decryption of your message; you can reverse the encryption process to recover the original data provided you use the same key, which is why it is referred to as symmetric.

Symmetric encryption has both advantages and disadvantages to its use.

Advantage: Very Secure

When using a secure algorithm, symmetric encryption is exceptionally secure. One of the most widely-used symmetric key encryption systems is the Advanced Encryption Standard (AES). As of the writing of this book, AES is unbroken, so it is one of the recommended algorithms.

Advantage: Fast

One of the problems with public key encryption systems like RSA is that they need complicated mathematics to work, making them very computationally intensive and slow. Encrypting and decrypting data with a symmetric key is much simpler, which provides excellent read and write performance. Many solid-state drives, which are very fast, use symmetric key encryption to store data, yet they are still a lot faster than unencrypted standard hard drives.

Disadvantage: Sharing Keys Is Hard

One of the most significant problems with symmetric key encryption algorithms is that you need a way to get the key to the person to whom you are sending the encrypted data. Encryption keys aren’t simple strings of text like passwords; they are byte arrays of randomly generated data, such as the random numbers we generated with RNGCryptoServiceProvider earlier in this book. As such, you need a safe way to get the key to the other person.

With this in mind, symmetric key encryption is particularly useful for encrypting your own information, as opposed to sharing encrypted information with someone else. There are ways to use the power of symmetric encryption with a suitable key sharing scheme, which we looked at earlier in the book when we talked about hybrid encryption schemes.

Disadvantage: Dangerous If Compromised

When someone gets hold of one of your symmetric keys, they can decrypt everything encrypted with that key. When you’re using symmetric encryption for two-way communications, this means that both sides of the conversation get compromised. With asymmetrical public-key cryptography like RSA, someone that gets your private key can decrypt messages sent to you but can’t decrypt what you send to the other party since it is encrypted with a different key pair.

In this book, we explored three symmetric encryption algorithms: DES, Triple DES, and AES. DES and Triple DES should be treated as legacy algorithms, but they were still worth discussing because there are a lot of old systems out there that still encrypt their data with DES or Triple DES. If you are working on a newer system that doesn’t have these legacy constraints, then you should use AES by default.

AES offers three key sizes: 128 bits (16 bytes), 192 bits (24 bytes), and 256 bits (32 bytes). The examples in this book used the 256-bit key size, and we used the AesCryptoServiceProvider class to perform our AES encryption.
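
As a reminder of the shape of the code, here is a minimal AES encryption sketch; the helper class is illustrative, and the key and IV are assumed to come from a secure random number generator such as RNGCryptoServiceProvider:

using System.IO;
using System.Security.Cryptography;

public static class AesEncryption
{
    // Encrypt data with AES in CBC mode using a 32-byte (256-bit) key
    // and a 16-byte initialization vector (IV).
    public static byte[] Encrypt(byte[] data, byte[] key, byte[] iv)
    {
        using (var aes = new AesCryptoServiceProvider())
        {
            aes.Mode = CipherMode.CBC;
            aes.Padding = PaddingMode.PKCS7;
            aes.Key = key;
            aes.IV = iv;

            using (var memoryStream = new MemoryStream())
            {
                using (var cryptoStream = new CryptoStream(
                    memoryStream, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    cryptoStream.Write(data, 0, data.Length);
                    cryptoStream.FlushFinalBlock();
                }
                return memoryStream.ToArray();
            }
        }
    }
}

Decryption follows the same pattern, substituting aes.CreateDecryptor() for aes.CreateEncryptor().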

Asymmetric Encryption

Symmetric encryption is a two-way encryption process that uses the same key for both encryption and decryption of your message. The main problem with symmetric encryption is that of securely sharing keys. For a recipient to decrypt a message, they need the same key as the sender, and this exchange of keys can be difficult to do securely. An excellent solution to this problem is to use asymmetric cryptography, which is also referred to as public key cryptography.

With public key cryptography, you have two keys: a public key, which anyone can know, and a private key, which only the recipient of a message knows. These keys are mathematically linked. The message sender uses the public key to encrypt a message, and the recipient uses their private key to decrypt the message.

The word asymmetric is used because this method uses two different linked keys that perform inverse operations from each other, whereas symmetric cryptography uses the same key to perform both operations.

It is quite straightforward to generate both the public and private key pair, but the power of asymmetric cryptography comes from the fact that it is computationally infeasible to determine a private key from its corresponding public key. It is only the private key that needs to be kept secret in the key pair.

The primary advantage of asymmetric encryption is that two parties don’t need to pre-share a secret key to communicate. The person encrypting a message only needs to know the recipient’s public key, which is available to anyone on request; then only the recipient can decrypt the message with their private key. The main disadvantage is that asymmetric algorithms are computationally more complex than symmetric encryption, which means that messages take longer to encrypt and decrypt.

When using RSA in .NET, we used the RSACryptoServiceProvider class. We looked at three ways of handling our keys: storing them with the Windows CSP, writing them out as XML (a really bad idea), and keeping them in memory.
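
As an illustration of the in-memory key approach, a minimal RSA encryption sketch might look like the following (the helper class is illustrative, not the book’s sample project):

using System.Security.Cryptography;

public static class RsaEncryption
{
    // Encrypt a small piece of data (such as a session key) with RSA
    // using the recipient's public key, held here as in-memory parameters.
    public static byte[] Encrypt(byte[] data, RSAParameters publicKey)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;      // keep the key out of the Windows CSP
            rsa.ImportParameters(publicKey);  // load the in-memory public key
            return rsa.Encrypt(data, true);   // true = OAEP padding
        }
    }
}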

Digital Signatures

Digital signatures are based on asymmetric cryptography. A digital signature gives the receiver of a message reason to believe that the correct sender sent it; it can be thought of as the digital equivalent of a signature on a letter, except that a digital signature is much harder to forge.

Digital signatures give you both authentication and non-repudiation: authentication because the signature has to be created by a user with a valid private key, and non-repudiation because the receiver can trust that a known sender signed the message, as only they know the private key. So, how do digital signatures do all this? Digital signatures in .NET and .NET Core are based on RSA, so some of the same rules for RSA apply to digital signatures. This is why you cannot sign data that is larger than the size of the key; that is, 1024 bits, 2048 bits, or 4096 bits. Because of this, it is common to first take a SHA-256 hash of the data that you want to sign digitally. You then use that hash to create the digital signature.

A digital signature consists of the following three algorithms:
  • Public and private key generation using RSA

  • A signing algorithm that uses the private key to create the signature

  • A signature verification algorithm that uses the public key to test if the message is authentic

When you use RSA to encrypt data, you use the recipient’s public key, and then the recipient uses their private key to decrypt the data. It is the other way around with digital signatures: the sender uses their private key to create the signature, and the recipient uses the sender’s public key to verify it.

The digital signature implementation we looked at in .NET was based on RSA, so to generate the keys, we use the same process as with RSA encryption, which is to use the RSACryptoServiceProvider class. The generation of the digital signature is handled by the RSAPKCS1SignatureFormatter class, and the verification of the digital signature is handled by RSAPKCS1SignatureDeformatter.
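
A minimal sketch of signing and verifying a SHA-256 hash with these classes follows; the helper class and the in-memory key handling are illustrative:

using System.Security.Cryptography;

public static class DigitalSignature
{
    // Sign a SHA-256 hash of the data with the sender's private key.
    public static byte[] SignHash(byte[] hashOfData, RSAParameters privateKey)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(privateKey);

            var formatter = new RSAPKCS1SignatureFormatter(rsa);
            formatter.SetHashAlgorithm("SHA256");
            return formatter.CreateSignature(hashOfData);
        }
    }

    // Verify the signature with the sender's public key.
    public static bool VerifyHash(byte[] hashOfData, byte[] signature, RSAParameters publicKey)
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(publicKey);

            var deformatter = new RSAPKCS1SignatureDeformatter(rsa);
            deformatter.SetHashAlgorithm("SHA256");
            return deformatter.VerifySignature(hashOfData, signature);
        }
    }
}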

Hybrid Encryption

Once we had finished covering the main cryptographic primitives to cover our four pillars of cryptography (confidentiality, integrity, authentication, and non-repudiation), we then looked at combining these primitives to create a more powerful set of tools called hybrid encryption.

First, we looked at combining RSA and AES. Sharing keys securely between two or more people is very hard to do.

For asymmetric encryption, the actual process of encryption is much slower due to the modular arithmetic at the heart of RSA, and there are limits to the amount of data that you can encrypt at once. A real benefit of RSA is how keys are managed. With RSA, you use a public and private key pair. The recipient of the message knows the private key, and they keep that key safe and secret; anyone can know the public key. If Alice wants to send a message to Bob, she first gets his public key, encrypts the message with that public key, and sends the message to Bob. Bob then uses his private key to read the message. This is a much better solution to key exchange than with symmetric encryption algorithms like AES.

Now we want the best of both worlds. We want the fast and efficient encryption properties of AES coupled with the more robust key sharing mechanism of RSA. We explored hybrid encryption, which is achieved using unique symmetric session keys along with asymmetric encryption.

The sender first uses the recipient’s public key to encrypt a freshly generated AES session key. The data you want to send to the recipient is encrypted with AES and that session key, and that encrypted message along with the RSA encrypted session key is sent to the recipient who then uses their private key to decrypt the session key. Once the session key is recovered, it is then used to decrypt the message.
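
A minimal sketch of the sender’s side of this flow is shown below. It reuses the illustrative RandomKeyGenerator and AesEncryption helpers sketched earlier in this chapter rather than the book’s full sample code:

using System.Security.Cryptography;

public static class HybridEncryptionSketch
{
    // Sender side: encrypt the data with a fresh AES session key, then
    // encrypt that session key with the recipient's RSA public key.
    public static (byte[] EncryptedData, byte[] EncryptedSessionKey, byte[] Iv)
        EncryptForRecipient(byte[] data, RSAParameters recipientPublicKey)
    {
        byte[] sessionKey = RandomKeyGenerator.GenerateRandomBytes(32);
        byte[] iv = RandomKeyGenerator.GenerateRandomBytes(16);

        // Encrypt the message itself with fast symmetric AES.
        byte[] encryptedData = AesEncryption.Encrypt(data, sessionKey, iv);

        // Protect the session key with the recipient's RSA public key.
        byte[] encryptedSessionKey;
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            rsa.PersistKeyInCsp = false;
            rsa.ImportParameters(recipientPublicKey);
            encryptedSessionKey = rsa.Encrypt(sessionKey, true); // OAEP padding
        }

        return (encryptedData, encryptedSessionKey, iv);
    }
}

The recipient reverses the process: they decrypt the session key with their RSA private key and then use that session key and the IV to decrypt the message with AES.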

Combining the two encryption methods has several advantages. A secure channel can be established between the two users’ systems, and they can then communicate using this hybrid encryption technique. A downside of asymmetric encryption is that it slows the encryption process down, but by using it alongside symmetric encryption, you get the best parts of both: the efficiency of symmetric encryption and the key sharing mechanism of asymmetric encryption. The result is added security to the message sending process along with improved overall system performance.

To further extend the hybrid encryption example, we added integrity checking to the code to ensure that the message sent between two people is not corrupted or tampered with in transit.

The simplest way to do this is by taking a hash of the encrypted data and the initialization vector, which could be done using any of the hashing operations, such as MD5, SHA-1, or SHA-2. The hash would be calculated after the message has been encrypted with AES and sent to the recipient inside the encrypted packet.

When the recipient wants to decrypt the message, they first recalculate the hash of the encrypted message and IV. If the hashes match, then the data is intact and hasn’t been corrupted or tampered with which means the recipient can safely decrypt the message. If the hash codes do not match, then there has been an issue during transmission of the message; it has either been corrupted or tampered with, and the recipient should discard the message entirely and not trust it.

As a solution, this worked quite well, but we went one better. With the solution of hashing the encrypted data and initialization vector, nothing stops an attacker from intercepting the message, corrupting the encrypted data or IV, and then recalculating the hash. It would be much better if the strength of our session key could also protect the hashing of the data; this is possible with a hashed message authentication code, or HMAC.

Like a standard hash code, an HMAC is used to verify the integrity of a message. An HMAC also allows you to verify the authentication of that message, because only the person who can recover the session key with their private key can calculate the same hash of the message. Without that session key, you cannot recalculate the same hash code of the encrypted data. An HMAC can be used with different hashing functions, like MD5 or the SHA family of algorithms. In the examples in this book, we used SHA-256. The cryptographic strength of an HMAC depends on the size of the key that is used.

Extending our example with HMACs for integrity provided a lot of benefits when sending data from the sender to the receiver, because we can detect whether the encrypted messages have been corrupted or tampered with. By using an HMAC, we ensure that the recipient can only recalculate the HMAC if they first recover the session key using their private key.

Next, we extended the example by incorporating non-repudiation with digital signatures. This means that before the sender sends the message to the recipient, the sender first creates a digital signature of the HMAC using their private key. When the recipient receives the packet of data, they verify the digital signature; if the verification succeeds, they can be confident that the original sender sent the message, and not someone else.

Combining all the cryptographic primitives this way gives you the ability to create systems that not only encrypt data but can also move that data securely between systems, verify its authenticity, and detect any tampering.

Azure Key Vault

Key management is an essential feature of any enterprise system and there are appliances like hardware security modules (HSM) that let you store keys securely. The problem with HSMs is that they are traditionally costly appliances for companies, which means they are typically used by larger organizations, such as banks or pharmaceutical companies. With the advent of cloud computing and products like Azure, we now have access to abstract HSM systems like Azure Key Vault, which brings the power of HSM to the masses.

Azure Key Vault is a service provided by Microsoft as part of its Azure cloud computing platform that makes the functionality of hardware security modules available to anyone for a fraction of the cost. Even though Microsoft provides a software abstraction over the service, underneath there is real HSM hardware. Microsoft has installed in each regional data center a series of nShield devices from Thales Security, which means that Microsoft has taken the financial hit on the cost of the hardware and rents it to software developers for a minimal price. The fundamental shift from paying for an HSM yourself to a pay-as-you-consume model has enormous implications, as it means you can now take advantage of the secure key management that banks have been enjoying for many years.

Azure Key Vault lets you store encryption keys and secrets, which are encrypted blobs of text where you can safely store secret information like database connection strings or API keys to third-party systems. As a developer, Azure Key Vault then lets you perform different operations, such as encryption and decryption, the storage and retrieval of secrets, and the generation and verification of digital signatures.

When discussing Azure Key Vault, we covered five usage patterns.
  • Multiple environments

  • Configuration as secrets

  • Local key wrapping

  • Password protection

  • Digital signing

Multiple Environments

The key message around environments is to make sure you do not use the same instance of Key Vault from production in your test environments. Sharing production keys anywhere except production is a terrible idea. Instead, you should either have one additional instance of Key Vault that you use for all your test environments or script a new instance for each test environment. This does mean, though, that you cannot just copy data from production to your test environments, as any data that is encrypted in production will not decrypt in your test environment, so you need to put a process in place to insert anonymized data in place of the encrypted production data.

Configuration as Secrets

Key Vault allows you to store small, named blobs of text, which is very useful for storing secret data like database connection strings, API keys, or anything that you wouldn’t want to expose in a config file. An excellent way to think about the storage of secrets is that it is like a key-value pair NoSQL data store. You store your secret, which is the value, under a name, which you then use to retrieve the value again in the future.
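
As an illustration only, here is roughly how storing and reading a secret looks with the SecretClient class from the current Azure SDK (Azure.Security.KeyVault.Secrets and Azure.Identity); the vault URL and secret name are placeholders, and the book’s own samples may use an earlier Key Vault client library:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

public static class SecretStoreSketch
{
    // The vault URL and secret name are placeholders for illustration.
    private static readonly SecretClient Client = new SecretClient(
        new Uri("https://my-example-vault.vault.azure.net/"),
        new DefaultAzureCredential());

    // Store a piece of secret configuration, such as a connection string.
    public static void StoreConnectionString(string connectionString)
    {
        Client.SetSecret("DatabaseConnectionString", connectionString);
    }

    // Retrieve the secret again by name at runtime.
    public static string GetConnectionString()
    {
        KeyVaultSecret secret = Client.GetSecret("DatabaseConnectionString");
        return secret.Value;
    }
}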

Local Key Wrapping

When we looked at hybrid encryption, we built up an example of using RSA to encrypt an AES session key. Using Azure Key Vault, we can extend this concept by using RSA and a key stored in the vault to encrypt our local session key, which means we can remove the use of RSACryptoServiceProvider and allow Key Vault to perform the encryption. Using this technique, you can also drastically reduce the cost of using Key Vault, because you are charged per 10,000 operations on the vault. If, for a session, you use Key Vault to decrypt a local AES key and then use that AES key to perform local encryption and decryption operations, you reduce the number of hops across to Key Vault, which reduces your cost and also reduces latency, as every call to Key Vault incurs a time penalty.
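
As an illustration of the idea, the sketch below wraps and unwraps a local AES session key with the CryptographyClient class from the current Azure SDK; the vault and key names are placeholders, and the book’s samples may use an earlier client library:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class KeyWrappingSketch
{
    // Wrap (encrypt) a locally generated AES session key with an RSA key
    // held in Key Vault, and unwrap it again when it is needed.
    private static readonly CryptographyClient CryptoClient = new CryptographyClient(
        new Uri("https://my-example-vault.vault.azure.net/keys/my-wrapping-key"),
        new DefaultAzureCredential());

    public static byte[] WrapSessionKey(byte[] aesSessionKey)
    {
        WrapResult wrapped = CryptoClient.WrapKey(KeyWrapAlgorithm.RsaOaep, aesSessionKey);
        return wrapped.EncryptedKey;
    }

    public static byte[] UnwrapSessionKey(byte[] wrappedKey)
    {
        UnwrapResult unwrapped = CryptoClient.UnwrapKey(KeyWrapAlgorithm.RsaOaep, wrappedKey);
        return unwrapped.Key;
    }
}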

Password Protection

When we discussed password protection, I said that using a password-based key derivation function is the best current method for hashing a password: as well as providing the password to the hashing function, you also provide a salt value and a number-of-iterations value, so that the hashing process is algorithmically slowed down.

Using a password-based key derivation function puts us in a good position, but where does Key Vault fit into this? Adding Key Vault into the equation gives a bit of extra protection. When using a Password Based Key Derivation Function (PBKDF2), we have to provide a salt and a “number of iterations” parameter, both of which need to be presented when hashing a password to compare against a stored password. Conventionally, the salt (a new one per password) would be stored in a database, and the number of iterations might be a configuration option. The salt is not a key used for confidentiality, but why make it easy for an attacker? Using Key Vault, we can encrypt the salt before we store it in the database. Then, when a user authenticates onto your system, the salt needs to be decrypted using Key Vault before the password hash is recalculated.

To successfully create a hash for a password that matches a hash in your passwords table, you also need to specify the same number of iterations. If your stored password was hashed with 1,000 iterations and a salt, and you try to hash the same password with the same salt but 1,001 iterations, the hash will not match. This means that the number of iterations is a configuration item, which is a perfect candidate to store as a secret in Key Vault. As with the salt, hiding the number of iterations isn’t part of the security of a PBKDF, but why make it easy for an attacker? By storing the number of iterations as an encrypted secret in the vault, you are making their lives a little harder if they get hold of your hashed password tables, as they not only have to decrypt the salt, but they also have to figure out the correct number of iterations to use, which is safely stored in the vault.
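
Pulling those pieces together, a rough sketch of recalculating a stored password hash might look like the following; the vault URL, key name, secret name, and output length are illustrative assumptions, and the sketch again uses the current Azure SDK rather than the book’s exact sample code:

using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;
using Azure.Security.KeyVault.Secrets;

public static class PasswordCheckSketch
{
    // Recalculate a stored PBKDF2 hash: the salt is decrypted with a Key Vault
    // key, and the iteration count is read from a Key Vault secret.
    public static byte[] RecalculateHash(byte[] password, byte[] encryptedSalt)
    {
        var credential = new DefaultAzureCredential();

        // Decrypt the per-user salt with an RSA key held in the vault.
        var cryptoClient = new CryptographyClient(
            new Uri("https://my-example-vault.vault.azure.net/keys/salt-protection-key"),
            credential);
        byte[] salt = cryptoClient.Decrypt(EncryptionAlgorithm.RsaOaep, encryptedSalt).Plaintext;

        // Read the iteration count from a secret in the vault.
        var secretClient = new SecretClient(
            new Uri("https://my-example-vault.vault.azure.net/"), credential);
        int iterations = int.Parse(secretClient.GetSecret("PasswordIterations").Value.Value);

        using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
        {
            return pbkdf2.GetBytes(32); // compare this against the stored hash
        }
    }
}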

Digital Signing

The final pattern we looked at with Azure Key Vault is creating and verifying digital signatures. We looked at digital signatures back in Chapter 8, where we performed the signing and verification tasks using the RSAPKCS1SignatureFormatter and RSAPKCS1SignatureDeformatter classes in the .NET Framework. Signing and verification of data is a feature that Key Vault offers as part of its standard functionality, and it is straightforward to do, which reduces the complexity compared to doing it yourself with the RSAPKCS1SignatureFormatter and deformatter classes.
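
As an illustration, signing and verifying data with a key held in Key Vault, using the CryptographyClient class from the current Azure SDK, looks something like this (the key identifier is a placeholder):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class KeyVaultSigningSketch
{
    private static readonly CryptographyClient CryptoClient = new CryptographyClient(
        new Uri("https://my-example-vault.vault.azure.net/keys/my-signing-key"),
        new DefaultAzureCredential());

    // Sign the data with the vault-held key; RS256 hashes with SHA-256 first.
    public static byte[] Sign(byte[] data)
    {
        SignResult result = CryptoClient.SignData(SignatureAlgorithm.RS256, data);
        return result.Signature;
    }

    // Verify a signature against the same vault-held key.
    public static bool Verify(byte[] data, byte[] signature)
    {
        VerifyResult result = CryptoClient.VerifyData(SignatureAlgorithm.RS256, data, signature);
        return result.IsValid;
    }
}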

Don’t Forget the Perimeter

The focus of this book has been on software security with cryptography in the .NET Framework and .NET Core, but it is important to remember the perimeter of your systems. In a lot of organizations, the network and perimeter protection of your systems is handled by a specific operations team, unless you are in a more DevOps-focused environment where you set up perimeter security yourself. In any case, making sure any websites or web APIs that you develop sit behind HTTPS is essential. Unfortunately, a lot of people across the Internet argue that HTTPS is not required for static sites like brochureware, but this is a dangerous mindset to adopt. It is essential to make sure you always protect sites and APIs with HTTPS, even if they do not specifically handle any sensitive data. Also, many modern web browsers display warnings if your site is not protected with HTTPS.

Using HTTPS means that malicious parties cannot perform a man-in-the-middle attack and deface the content on your site, whether as a form of vandalism or by subtly changing the information shown to unsuspecting visitors. This would typically be an attack where an attacker tries to disrupt the communication between your website and the user’s browser.

HTTPS also protects the communications from the browser back to the server and therefore protects the privacy of your users, who may be entering sensitive information into your site. If you are encrypting their essential data on the back end but you do not protect the communication channel with HTTPS, then an attacker can steal their data before you get the chance to encrypt it.

Next Steps

We have now come to the end of this book on applying cryptography with the .NET Framework and .NET Core. The next step is for you to start implementing some of these principles in your own systems. I recommend that you load up and experiment with the sample code from this book, which can be found on GitHub at https://github.com/Apress/applied-crypto-.net-azure . The best way to learn, apart from reading this book, is to take the code and experiment. Step through each of the examples in the debugger. Perhaps you could try writing some small console or terminal apps to experiment with the features. A good learning exercise is to write an application that can take any file you provide and encrypt or decrypt it. Or perhaps, using Key Vault and the hybrid encryption principles, develop a small instant messaging app with peer-to-peer encryption. The techniques we have discussed in this book are very similar to the types of protocols that many instant messaging systems use today.

This book was designed to be very practical for the everyday developer. It wasn’t the book’s intention to try and turn you into a master cryptographer, but to help you make use of some of the tools available to you in .NET. If this book has piqued your interest, then you may want to go on and buy some of the more theoretical cryptography books. The power is in your hands now to help your employers protect their critical data for their benefit, but more importantly, for their customers’ benefit, safety, and privacy. The power is in your hands. Use it wisely.

Thanks for reading.
