Chapter 9. Protecting Secret Data

Storing secret information—data such as encryption keys, signing keys, and passwords—in software in a completely secure fashion is impossible with current PC hardware. Someone with a sufficiently privileged account on your computer, or with physical access to the machine, can easily access the data. Storing secret information securely in software is hard, and it’s generally discouraged; sometimes, however, you must, so this chapter will aid you in doing so. The trick is to raise the security bar high enough that it is very difficult for anyone other than appropriate users to access the secret data. To that end, this chapter covers the following: attack methods; determining whether you need to store a secret; getting the secret from the user; storing secrets in various versions of Microsoft Windows; in-memory issues; storing secrets by using managed code; raising the security bar; and using devices to encrypt secret data.

Before I dive into the core subject, please realize that this chapter focuses on protecting persistent data. Protecting ephemeral data—network traffic, for example—is reasonably straightforward: you can use SSL/TLS, IPSec, or RPC and DCOM with packet privacy, among other protocols, to encrypt the data. The use of these protocols is discussed in other sections of this book.

Important

Keep secret data secret. As a colleague once said to me, the value of a secret is inversely proportional to its accessibility. Put another way: a secret shared by many people is no longer a secret.

Attacking Secret Data

Secret data is susceptible to two main threats: information disclosure and tampering. Other threats become apparent depending on the nature of the compromised data. For example, if Blake’s password is disclosed to a malicious user, the password could be replayed by the attacker to spoof Blake’s identity. Therefore, in this example, an information disclosure threat becomes a spoofing threat.

An attacker can access private information held in software in many ways, some obvious and others not so obvious, depending on how the data is stored and how it’s protected. One method is simply to read the unencrypted data from the source, such as the registry or a file. You can mitigate this method by using encryption, but where do you store the encryption key? In the registry? How do you store and protect that key? It’s a difficult problem to solve.

Let’s imagine you decide to store the data by using some new, previously undiscovered, revolutionary way. (Sounds like snake oil, doesn’t it?) For example, your application is well written and builds up a secret from multiple locations, hashing them together to yield the final secret. At some point, your application requires the private data. All an attacker need do is attach a debugger to the process that uses the secret, set a breakpoint at the location where your code gathers the information together, and then read the data in the debugger. Now the attacker has the data. One way to mitigate this threat on Microsoft Windows NT and later is to limit which accounts have the Debug privilege—referred to as SeDebugPrivilege or SE_DEBUG_NAME in the Microsoft Platform SDK—because this privilege is required to debug a process running under a different account. By default, only members of the local administrators group have this privilege.

Another danger is an asynchronous event, such as the memory holding the secret becoming paged to the page file. If an attacker has access to the Pagefile.sys file, he might be able to access secret data. Perhaps the computer is put into hibernation so that it can be started up rapidly, in which case all the contents of the computer’s memory are written to the Hiberfil.sys file. Another, perhaps less obvious, issue is your application faulting and a diagnostic application such as Dr. Watson writing a process’s memory to disk. If you have the secret data held in plaintext in the application’s memory, it too will be written to the disk.

Remember that the bad guys are always administrators on their own machines. They can install your software on those machines and crack it there.

Now that we’ve seen how a secret can be leaked out, let’s focus on ways to hide the data.

Sometimes You Don’t Need to Store a Secret

If you store a secret for the purpose of verifying that another entity also knows the secret, you probably don’t need to store the secret itself. Instead, you can store a verifier, which often takes the form of a cryptographic hash of the secret. For example, if an application needs to verify that a user knows a password, you can compare the hash of the secret entered by the user with the hash of the secret stored by the application. In this case, the secret is not stored by the application—only the hash is stored. This presents less risk because even if the system is compromised, the secret itself cannot be retrieved (other than by brute force) and only the hash can be accessed.

Creating a Salted Hash

To make things a little more difficult for an attacker, you can also salt the hash. A salt is a random number that is added to the hashed data to eliminate the use of precomputed dictionary attacks, making an attempt to recover the original secret extremely expensive. A dictionary attack is an attack in which the attacker tries likely secrets from a precomputed list—a dictionary—rather than trying every possible key. The salt is stored, unencrypted, with the hash. The salt should be cryptographically random and generated using good random number–generation techniques, such as those outlined in Chapter 8.
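
For example, the following fragment is a minimal sketch that uses CryptGenRandom to create a salt; the 16-byte length is simply illustrative:

//Generate a random salt with CryptoAPI.
HCRYPTPROV hProv = NULL;
BYTE bSalt[16];
if (!CryptAcquireContext(&hProv, NULL, NULL,
                         PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
    throw GetLastError();
if (!CryptGenRandom(hProv, sizeof(bSalt), bSalt))
    throw GetLastError();
CryptReleaseContext(hProv, 0);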

Creating a salted hash, or a simple verifier, is easy with CryptoAPI. The following C/C++ code fragment shows how to do this:

//Create the hash; hash the secret data and the salt.
if (!CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash))
    throw GetLastError();
if (!CryptHashData(hHash, (LPBYTE)bSecret, cbSecret, 0))
    throw GetLastError();   
if (!CryptHashData(hHash, (LPBYTE)bSalt, cbSalt, 0))
    throw GetLastError();

//Get the size of the resulting salted hash.
DWORD cbSaltedHash = 0;
DWORD cbSaltedHashLen = sizeof (DWORD);

if (!CryptGetHashParam(hHash, HP_HASHSIZE, (BYTE*)&cbSaltedHash, 
                       &cbSaltedHashLen, 0))
    throw GetLastError();
   
//Get the salted hash.
BYTE *pbSaltedHash = new BYTE[cbSaltedHash];
if (NULL == pbSaltedHash) throw ERROR_NOT_ENOUGH_MEMORY;

if(!CryptGetHashParam(hHash, HP_HASHVAL, pbSaltedHash,
    &cbSaltedHash, 0))
    throw GetLastError();

You can achieve the same goal in managed code using the following C# code:

using System;
using System.Security.Cryptography;
using System.IO;
using System.Text;
...
static byte[] HashPwd(byte[] pwd, byte[] salt) {
    SHA1 sha1 = SHA1.Create();
    UTF8Encoding utf8 = new UTF8Encoding();
    CryptoStream cs = 
        new CryptoStream(Stream.Null, sha1, CryptoStreamMode.Write);
    cs.Write(pwd,0,pwd.Length);
    cs.Write(salt,0,salt.Length);
    cs.FlushFinalBlock();
    return sha1.Hash;
}

The complete code listings are available with the book’s sample files in the folder Secureco2\Chapter09\SaltedHash. Determining whether the user knows the secret is easy. Take the user’s secret, add the salt to it, hash them together, and compare the value you stored with the newly computed value. (The Windows API CryptHashData lets you add the secret and then the salt to the same hash object, which accomplishes the same thing.) If the two match, the user knows the secret. The good news is that you never stored the secret; you stored only a verifier. If an attacker accessed the data, he wouldn’t have the secret data, only the verifier, and hence couldn’t access your system, which requires a verifier to be computed from the secret. The attacker would have to attack the system by using a dictionary or brute-force attack. If the data (passwords) is well chosen, this type of attack is computationally infeasible.
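
For illustration, the following fragment sketches that comparison with CryptoAPI; it assumes hProv is a valid provider handle, as in the earlier fragment, and that pbStoredHash and cbStoredHash hold the verifier you saved earlier:

//Recompute the salted hash from the user-supplied secret and the
//stored salt, and compare it with the stored verifier.
bool VerifySecret(HCRYPTPROV hProv,
                  BYTE *pbSecret, DWORD cbSecret,
                  BYTE *pbSalt,   DWORD cbSalt,
                  BYTE *pbStoredHash, DWORD cbStoredHash) {
    BYTE  bHash[20];                  //SHA-1 digest size
    DWORD cbHash = sizeof(bHash);
    HCRYPTHASH hHash = NULL;
    bool fMatch = false;

    if (CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash) &&
        CryptHashData(hHash, pbSecret, cbSecret, 0)     &&
        CryptHashData(hHash, pbSalt, cbSalt, 0)         &&
        CryptGetHashParam(hHash, HP_HASHVAL, bHash, &cbHash, 0)) {
        fMatch = (cbHash == cbStoredHash) &&
                 (0 == memcmp(bHash, pbStoredHash, cbHash));
    }

    if (hHash)
        CryptDestroyHash(hHash);
    return fMatch;
}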

Using PKCS #5 to Make the Attacker’s Job Harder

As I’ve demonstrated, many applications hash a password first and often apply a salt to the password before using the result as the encryption key or authenticator. However, there’s a more formal way to derive a key from a human-readable password, a method called PKCS #5. Public-Key Cryptography Standard (PKCS) #5 is one of about a dozen standards defined by RSA Data Security and other industry leaders, including Microsoft, Apple, and Sun Microsystems. PKCS #5 is also outlined in RFC2898 at http://www.ietf.org/rfc/rfc2898.txt.

PKCS #5 works by hashing a salted password a number of times; often, the iteration count is on the order of hundreds if not thousands of iterations. Or, more accurately, the most common mode of PKCS #5—named Password-Based Key Derivation Function #1 (PBKDF1)—works this way. The other mode, PBKDF2, is a little different and uses a pseudorandom function, such as an HMAC, rather than simple iterated hashing. For the purposes of this book, I mean PBKDF1 when referring to PKCS #5 generically.
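
To make the mechanics concrete, the following fragment is a simplified sketch of the PBKDF1 idea—hash the salted password once, then keep rehashing the result—written with CryptoAPI. It is illustrative only (a production implementation should follow RFC 2898 exactly) and assumes hProv is a valid provider handle, as in the earlier fragments:

//Iterated SHA-1 in the spirit of PBKDF1: T1 = Hash(pwd || salt),
//then Ti = Hash(Ti-1) for the remaining iterations.
BOOL IteratedHash(HCRYPTPROV hProv,
                  BYTE *pbPwd,  DWORD cbPwd,
                  BYTE *pbSalt, DWORD cbSalt,
                  DWORD cIter,
                  BYTE pbOut[20]) {
    BYTE  digest[20];                 //SHA-1 digest size
    DWORD cbDigest = sizeof(digest);
    HCRYPTHASH hHash = NULL;

    //T1 = Hash(password || salt)
    if (!CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash))
        return FALSE;
    if (!CryptHashData(hHash, pbPwd, cbPwd, 0)   ||
        !CryptHashData(hHash, pbSalt, cbSalt, 0) ||
        !CryptGetHashParam(hHash, HP_HASHVAL, digest, &cbDigest, 0)) {
        CryptDestroyHash(hHash);
        return FALSE;
    }
    CryptDestroyHash(hHash);

    //Ti = Hash(Ti-1) for the remaining iterations
    for (DWORD i = 1; i < cIter; i++) {
        cbDigest = sizeof(digest);
        if (!CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash))
            return FALSE;
        if (!CryptHashData(hHash, digest, sizeof(digest), 0) ||
            !CryptGetHashParam(hHash, HP_HASHVAL, digest, &cbDigest, 0)) {
            CryptDestroyHash(hHash);
            return FALSE;
        }
        CryptDestroyHash(hHash);
    }

    memcpy(pbOut, digest, sizeof(digest));
    return TRUE;
}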

The main threat PKCS #5 helps mitigate is dictionary attacks. It takes a great deal of CPU time and effort to perform a dictionary attack against a password when the password-cracking software must perform the millions of instructions required by PKCS #5 to determine whether a single password is what the attacker thinks it is. Many applications simply store a password by hashing it first and comparing the hash of the password entered by the user with the hash stored in the system. You can make the attacker’s work substantially harder by storing the PKCS #5 output instead.

To determine the password, the attacker would have to perform the following steps:

  1. Get a copy of the password file.

  2. Generate a password (p) to check.

  3. Choose a salt (s).

  4. Choose an iteration count (n).

  5. Perform n iterations of the hash function determined by PKCS #5, and compare the result with the value in the password file.

If the salt keyspace is large—say, at least 64 bits of random data—the attacker has to try potentially 2^64 (or 2^63, assuming she can determine the salt in 50 percent of the attempts) more keys to determine the password. And if the iteration count is high, the attacker has to perform a great deal of work to establish whether the password and salt combination are correct.

Using PKCS #5, you can store the iteration count, the salt, and the output from PKCS #5. When the user enters her password, you compute the PKCS #5 based on the iteration count, salt, and password. If the two results match, you can assume with confidence the user knows the password.

The following sample code written in C# shows how to generate a key from a passphrase:

static byte[] DeriveBytes(string pwd, byte[] salt, int iter) {
    PasswordDeriveBytes p = 
        new PasswordDeriveBytes(pwd,salt,"SHA1",iter);
    return p.GetBytes(16);
}

Note that the default CryptoAPI providers included with Windows do not support PKCS #5 directly; however, CryptDeriveKey offers similar levels of protection.

As you can see, you might be able to get away with not storing a secret, and this is always preferable to storing one.

Important

There’s a fly in the ointment: the salt value might be worthless! Imagine you decide to use PKCS #5 or a hash function to prove the user is who they say they are. To be highly secure, the application stores a large, random salt on behalf of the user in an authentication database. If the attacker can attempt to log on as a user, he need not attempt to guess the salt; he could simply guess the password. Why? Because the salt is applied by the application, it does not come from the user. The salt in this case protects against an attacker attacking the password database directly; it does not prevent an attack where the application performs some of the hashing on behalf of the user.

Getting the Secret from the User

The most secure way of storing and protecting secrets is to get the secret from a user each time the secret is used. In other words, if you need a password from the user, get it from the user, use it, and discard it. However, prompting for secret data this often quickly becomes impractical for most users. The more items of information you make a user remember, the greater the likelihood that the user will employ the same password over and over, which reduces the security of the system and makes it harder to use. Because of this fact, let’s turn our attention to the more complex issues of storing secret data without prompting the user for the secret.

Protecting Secrets in Windows 2000 and Later

When storing secret data for a user of Windows 2000 and later, you should use the Data Protection API (DPAPI) functions CryptProtectData and CryptUnprotectData. There are two ways to use DPAPI: you can protect data such that only the data owner can access it, or you can protect data such that any user on the computer can access it. To enable the latter case, you need to set the CRYPTPROTECT_LOCAL_MACHINE flag in the dwFlags field. However, if you decide to use this option, you should ACL the data produced by DPAPI accordingly when you store it in persistent storage, such as in a file or a registry key. For example, if you want all members of the Accounts group to read the protected data on the current computer, you should ACL it with an access control list like this:

  • Administrators (Full Control)

  • Accounts (Read)

In practice, when developers use DPAPI from a service, they often use a service account that is a domain account, with minimum privileges on the server. Interactive domain accounts work fine with CryptProtectData; however, if the service impersonates the calling user, the system does not load the user’s profile. Therefore, the service or application should load the user’s profile with LoadUserProfile. The catch is that LoadUserProfile requires that the process operate under an account that has backup and restore privileges.
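
The following is a minimal sketch of that call; hToken is a placeholder for the impersonated user’s access token, the account name is an example only, and userenv.h and userenv.lib are required:

//hToken is assumed to be the impersonated user's access token.
//Requires userenv.h and userenv.lib.
TCHAR szUser[] = TEXT("NORTHWIND\\Blake");   //example account name
PROFILEINFO pi = {0};
pi.dwSize     = sizeof(pi);
pi.dwFlags    = PI_NOUI;
pi.lpUserName = szUser;

if (LoadUserProfile(hToken, &pi)) {
    //CryptProtectData and CryptUnprotectData can now find this user's keys.
    //...
    UnloadUserProfile(hToken, pi.hProfile);
}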

A user can encrypt and decrypt his own data from any computer so long as he has a roaming profile and the data has not been protected using the CRYPTPROTECT_LOCAL_MACHINE flag.

CryptProtectData also adds an integrity check called a message authentication code (MAC) to the encrypted data to detect data tampering.

Important

Any data protected by DPAPI, and potentially by any protection mechanism, is accessible by any code you run. If you can read the data, any code that runs as you can read the data also. The moral of the story is, don’t run code you don’t trust.

Important

If you protect data by using the CRYPTPROTECT_LOCAL_MACHINE flag, it’s imperative that you back up the resulting ciphertext. Otherwise, if the computer fails and must be rebuilt, the key used to encrypt the data is lost and the data is lost.

Although it’s discouraged on Windows 2000 and Windows XP, you can also use the Local Security Authority (LSA) secrets APIs, LsaStorePrivateData and LsaRetrievePrivateData, if your process is running with high privileges or as SYSTEM. LSA secrets are discouraged on Windows 2000 and later because LSA will store only a total of 4096 secrets per system; 2048 are reserved by the operating system for its own use, leaving 2048 for nonsystem use. As you can see, secrets are a scarce resource. Use DPAPI instead. I’ll cover LSA secrets in detail later in this chapter in the "Protecting Secrets in Windows NT 4" section.

The following code sample shows how to store and retrieve data by using DPAPI functions. You can also find this example code with the book’s sample files in the folder Secureco2\Chapter09\DPAPI.

// Data to protect
DATA_BLOB blobIn;
blobIn.pbData = reinterpret_cast<BYTE *>("This is my secret data.");
blobIn.cbData = lstrlen(reinterpret_cast<char *>(blobIn.pbData))+1;

//Optional entropy via an external function call
DATA_BLOB blobEntropy;
blobEntropy.pbData = GetEntropyFromUser();
blobEntropy.cbData = lstrlen(
    reinterpret_cast<char *>(blobEntropy.pbData));

//Encrypt the data.
DATA_BLOB blobOut;
DWORD dwFlags = CRYPTPROTECT_AUDIT;
if(CryptProtectData(
    &blobIn,
    L"Writing Secure Code Example", 
    &blobEntropy,                         
    NULL,                         
    NULL,                     
    dwFlags,
    &blobOut))   {
    printf("Protection worked.
");
} else {
    printf("Error calling CryptProtectData() -> %x",
           GetLastError());
    exit(-1);
}

//Decrypt the data.
DATA_BLOB blobVerify;
if (CryptUnprotectData(
    &blobOut,
    NULL,
    &blobEntropy,
    NULL,                
    NULL,       
    0,
    &blobVerify)) {
    printf("The decrypted data is: %s
", blobVerify .pbData);
} else {
    printf("Error calling CryptUnprotectData() - > %x", 
           GetLastError());
    exit(-1);
}

LocalFree(blobOut.pbData);
LocalFree(blobVerify.pbData);

More Information

You can learn more about the inner workings of DPAPI at http://msdn.microsoft.com/library/en-us/dnsecure/html/windataprotection-dpapi.asp.

A Special Case: Client Credentials in Windows XP

Windows XP includes functionality named Stored User Names And Passwords to make handling users’ passwords and other credentials, such as private keys, easier, more consistent, and safer. If your application includes a client component that requires you to prompt for or store a user’s credentials, you should seriously consider using this feature for the following reasons:

  • Support for different types of credentials, such as passwords and keys, on smart cards.

  • Support for securely saving credentials by using DPAPI.

  • No need to define your own user interface. It’s provided, although you can add a custom image to the dialog box.

Stored User Names And Passwords can handle two types of credentials: Windows domain credentials and generic credentials. Domain credentials are used by portions of the operating system and can be retrieved only by an authentication package, such as Kerberos. If you write your own security support provider by using the Security Support Provider Interface (SSPI), you can use domain credentials also. Generic credentials are application-specific and apply to applications that maintain their own authentication and authorization mechanisms—for example, an accounting package that uses its own lookup SQL database for security data.

The following sample code shows how to prompt for generic credentials:

/*
   Cred.cpp
*/
#include <stdio.h>
#include <windows.h>
#include <wincred.h>

CREDUI_INFO cui;
cui.cbSize = sizeof CREDUI_INFO;
cui.hwndParent = NULL;
cui.pszMessageText = 
    TEXT("Please Enter your Northwind Traders Accounts password.");
cui.pszCaptionText = TEXT("Northwind Traders Accounts") ;
cui.hbmBanner = NULL;

PCTSTR pszTargetName = TEXT("NorthwindAccountsServer");  
DWORD  dwErrReason = 0;
TCHAR  pszName[CREDUI_MAX_USERNAME_LENGTH+1];
TCHAR  pszPwd[CREDUI_MAX_PASSWORD_LENGTH+1];
DWORD  dwName = CREDUI_MAX_USERNAME_LENGTH; 
DWORD  dwPwd = CREDUI_MAX_PASSWORD_LENGTH; 
BOOL   fSave = FALSE;
DWORD  dwFlags = 
         CREDUI_FLAGS_GENERIC_CREDENTIALS | 
         CREDUI_FLAGS_ALWAYS_SHOW_UI;

//Zero out username and password, as they are [in,out] parameters.
ZeroMemory(pszName, dwName);
ZeroMemory(pszPwd, dwPwd);
   
DWORD err = CredUIPromptForCredentials(
               &cui,
               pszTargetName,
               NULL,
               dwErrReason,
               pszName,dwName,
               pszPwd,dwPwd,
               &fSave,
               dwFlags);

if (err) 
    printf("CredUIPromptForCredentials() failed -> %d",
           GetLastError());
else {
    //Access the Northwind Traders Accounting package using
    //pszName and pszPwd over a secure channel.
}

You can also find this example code with the book’s sample files in the folder Secureco2\Chapter09\Cred. This code produces the dialog box in Figure 9-1. Note that the username and password are prepopulated if the credentials are already stored for the target—in this case, NorthwindAccountsServer—and that the credentials are cached in DPAPI.


Figure 9-1. A Credential Manager dialog box with a prepopulated username and password.

You can also use a command line–specific function that does not pop up a dialog box: CredUICmdLinePromptForCredentials.
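
A minimal sketch of the command-line variant follows; it reuses the buffers and target name from the dialog-based example above, and the flag choice is illustrative—check the Platform SDK for the full parameter details:

//Prompt at the command line instead of with a dialog box.
DWORD errCmd = CredUICmdLinePromptForCredentials(
                  pszTargetName,
                  NULL,                              //reserved
                  0,                                 //no previous auth error
                  pszName, dwName,
                  pszPwd,  dwPwd,
                  &fSave,
                  CREDUI_FLAGS_GENERIC_CREDENTIALS);

if (errCmd == NO_ERROR) {
    //Use pszName and pszPwd as before.
}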

Finally, if the credential user interface functions are not flexible enough for your application, there are a range of low-level functions documented in the Platform SDK that should meet your needs.

Important

Remember, rogue software that runs in your security context can read your data, and that includes credentials protected by the functionality explained in this section.

Protecting Secrets in Windows NT 4

Windows NT 4 does not include the DPAPI, but it includes CryptoAPI support and ACLs. You can protect data in Windows NT 4 by performing these steps:

  1. Create a random key by using CryptGenRandom.

  2. Store the key in the registry.

  3. ACL the registry key such that Creator/Owner and Administrators have full control.

  4. If you are really paranoid, place an audit ACE (SACL) on the resource so that you can see who is attempting to read the data.

Each time you want to encrypt or decrypt the data, only the user account that created the key (the object’s owner) or a local administrator can read the key and use it to carry out the task. This is not perfect, but at least the security bar has been raised such that only an administrator or the user in question can carry out the process. Of course, if you invite a Trojan horse application to run on your computer, it can read the key data from the registry, because it runs under your account, and then decrypt the data.
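
The following fragment sketches steps 1 and 2; the registry path and value name are only examples, and applying the ACL (step 3) is left out for brevity:

//Step 1: create a random 16-byte key.
HCRYPTPROV hProv = NULL;
BYTE bKey[16];
if (!CryptAcquireContext(&hProv, NULL, NULL,
                         PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
    throw GetLastError();
if (!CryptGenRandom(hProv, sizeof(bKey), bKey))
    throw GetLastError();
CryptReleaseContext(hProv, 0);

//Step 2: store the key in the registry.
//The key path and value name below are examples only.
HKEY hKey = NULL;
if (RegCreateKeyEx(HKEY_CURRENT_USER,
                   TEXT("Software\\NorthwindTraders\\Keys"),
                   0, NULL, REG_OPTION_NON_VOLATILE,
                   KEY_WRITE, NULL, &hKey, NULL) == ERROR_SUCCESS) {
    RegSetValueEx(hKey, TEXT("EncKey"), 0, REG_BINARY,
                  bKey, sizeof(bKey));
    RegCloseKey(hKey);
}

//Scrub the local copy once it is stored.
ZeroMemory(bKey, sizeof(bKey));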

You can also use LSA secrets (LsaStorePrivateData and LsaRetrievePrivateData) as discussed previously in the "Protecting Secrets in Windows 2000 and Later" section. Four types of LSA secrets exist: local data, global data, machine data, and private data. Local data LSA secrets can be read only locally from the machine storing the data. Attempting to read such data remotely results in an Access Denied error. Local data LSA secrets have key names that begin with the prefix L$. Global data LSA secrets are global such that if they are created on a domain controller (DC), they are automatically replicated to all other DCs in that domain. Global data LSA secrets have key names beginning with G$. Machine data LSA secrets can be accessed only by the operating system. These key names begin with M$. Private data LSA secrets, unlike the preceding specialized types, have key names that do not start with a prefix. Such data is not replicated and can be read locally or remotely. Note that service account passwords are not disclosed remotely and start with an SC_ prefix. Other prefixes exist, and you should refer to the LsaStorePrivateData MSDN documentation for further detail.

Before you can store or retrieve LSA secret data, your application must acquire a handle to the LSA policy object. Here’s a sample C++ function that will open the policy object:

//LSASecrets.cpp : Defines the entry point for the console application.
#include <windows.h>
#include <stdio.h>
#include "ntsecapi.h"
bool InitUnicodeString(LSA_UNICODE_STRING* pUs, const WCHAR* input){
    DWORD len = 0;
    if(!pUs)
        return false;
    if(input){
        len = wcslen(input);
        if(len > 0x7ffe) //32K - 1
            return false;
    }
    pUs->Buffer = (WCHAR*)input;
    pUs->Length = (USHORT)len * sizeof(WCHAR);
    pUs->MaximumLength = (USHORT)(len + 1) * sizeof(WCHAR);
    return true;
}

LSA_HANDLE GetLSAPolicyHandle(WCHAR *wszSystemName) {
    LSA_OBJECT_ATTRIBUTES ObjectAttributes;
    ZeroMemory(&ObjectAttributes, sizeof(ObjectAttributes));
    LSA_UNICODE_STRING lusSystemName;

    if(!InitUnicodeString(&lusSystemName, wszSystemName))return NULL;
    LSA_HANDLE hLSAPolicy = NULL;
    NTSTATUS ntsResult = LsaOpenPolicy(&lusSystemName,&ObjectAttributes, 
        POLICY_ALL_ACCESS, 
        &hLSAPolicy);
    DWORD dwStatus = LsaNtStatusToWinError(ntsResult);
    if (dwStatus != ERROR_SUCCESS) {
        wprintf(L"OpenPolicy returned %lu
",dwStatus);
        return NULL;
    }
    return hLSAPolicy;
}

The following code example shows how to use LSA secrets to encrypt and decrypt information:

DWORD WriteLsaSecret(LSA_HANDLE hLSA, 
                     WCHAR *wszSecret, WCHAR *wszName) 
{
    LSA_UNICODE_STRING lucName;
    if(!InitUnicodeString(&lucName, wszName))
        return ERROR_INVALID_PARAMETER;
    LSA_UNICODE_STRING lucSecret;
    if(!InitUnicodeString(&lucSecret, wszSecret))
        return ERROR_INVALID_PARAMETER;

    NTSTATUS ntsResult = LsaStorePrivateData(hLSA,&lucName, &lucSecret);
    DWORD dwStatus = LsaNtStatusToWinError(ntsResult);
    if (dwStatus != ERROR_SUCCESS) 
        wprintf(L"Store private object failed %lu
",dwStatus);
    return dwStatus;
}

DWORD ReadLsaSecret(LSA_HANDLE hLSA,DWORD dwBuffLen,
                    WCHAR *wszSecret, WCHAR *wszName) 
{
    LSA_UNICODE_STRING lucName;
    if(!InitUnicodeString(&lucName, wszName))
        return ERROR_INVALID_PARAMETER;

    PLSA_UNICODE_STRING plucSecret = NULL;
    NTSTATUS ntsResult = LsaRetrievePrivateData(hLSA, 
        &lucName, &plucSecret);
    DWORD dwStatus = LsaNtStatusToWinError(ntsResult);
    if (dwStatus != ERROR_SUCCESS) 
        wprintf(L"Store private object failed %lu
",dwStatus);
    else
        wcsncpy(wszSecret, plucSecret->Buffer, 
        min((plucSecret->Length)/sizeof WCHAR,dwBuffLen));
    if (plucSecret) 
        LsaFreeMemory(plucSecret);
    return dwStatus;
}

int main(int argc, char* argv[]) {
    LSA_HANDLE hLSA = GetLSAPolicyHandle(NULL);
    WCHAR *wszName = L"L$WritingSecureCode";
    WCHAR *wszSecret = L"My Secret Data!";
    if (WriteLsaSecret(hLSA, wszSecret, wszName) == ERROR_SUCCESS) {
        WCHAR wszSecretRead[128];
        if (ReadLsaSecret(hLSA,sizeof wszSecretRead / sizeof WCHAR,
            wszSecretRead,wszName) == ERROR_SUCCESS) 
            wprintf(L"LSA Secret '%s' is '%s'
",wszName,wszSecretRead);
    }

    if (hLSA) LsaClose(hLSA);
    return 0;
}

This example code is also available with the book’s sample files in the folder Secureco2\Chapter09\LSASecrets. You can delete an LSA secret by setting the last argument to LsaStorePrivateData to NULL.
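
For example, a short sketch that reuses InitUnicodeString from the sample above:

//Delete an LSA secret by passing NULL as the private data argument.
DWORD DeleteLsaSecret(LSA_HANDLE hLSA, WCHAR *wszName) {
    LSA_UNICODE_STRING lucName;
    if(!InitUnicodeString(&lucName, wszName))
        return ERROR_INVALID_PARAMETER;

    NTSTATUS ntsResult = LsaStorePrivateData(hLSA, &lucName, NULL);
    return LsaNtStatusToWinError(ntsResult);
}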

Note

Secrets protected by LSA can be viewed by local computer administrators using LSADUMP2.exe from BindView. The tool is available at http://razor.bindview.com/tools/desc/lsadump2_readme.html. Of course, an administrator can do anything!

Protecting Secrets in Windows 95, Windows 98, Windows Me, and Windows CE

Windows 95, Windows 98, Windows Me, and Windows CE (used in Pocket PCs) all have CryptoAPI support, but none have ACLs. Although it’s easy to save secret data in a resource such as the registry or a file, where do you store the key used to encrypt the data? In the registry too? How do you secure that, especially with no ACL support? This is a difficult problem. These platforms cannot be used in secure environments. You can hide secrets, but they will be much easier to find than on Windows NT 4, Windows 2000, or Windows XP. In short, if the data being secured is high-risk (such as medical data), use Windows 95, Windows 98, Windows Me, or Windows CE only if you get a key from a user or an external source to encrypt and decrypt the data.

When using these less-secure platforms, you could derive the key by calling CryptGenRandom, storing this key in the registry, and encrypting it with a key derived from something held on the device, such as a volume name, a device name, a video card name, and so on. (I bet you wish Intel had stuck with shipping their Pentium III serial numbers enabled, don’t you?) Your code can read the "device" to get the key to unlock the registry key. However, if an attacker can determine what you are using as key material, he can derive the key. Still, you’ve made the task more difficult for the attacker, as he has to go through more steps to get the plaintext. Also, if the user changes hardware, the key material might be lost also. This solution is hardly perfect, but it might be good enough for noncritical data.

The HKEY_LOCAL_MACHINE\HARDWARE portion of the registry in Windows 95, Windows 98, and Windows Me computers is full of hardware-specific data you can use to derive an encryption key. It’s not perfect, but again, the bar is raised somewhat. That said, let’s look at some ways to derive system information to help build key material.

Getting Device Details Using PnP

Plug and Play support in Windows 98 and later, and Windows 2000 and later, allows a developer to access system hardware information. This information is sufficiently convoluted that it can serve as the basis for key material to protect data that should not leave the computer. The following code outlines the process involved; it enumerates devices on the computer, gets the hardware description, and uses this data to build a SHA-1 that could be used as non-persistent key material. You can learn more about the device management functions at http://msdn.microsoft.com/library/en-us/devio/deviceman_7u9f.asp.

#include "windows.h"
#include "wincrypt.h"
#include "initguid.h"
#include "Setupapi.h"
#include "winioctl.h"
#include "strsafe.h"
//These are defined in the DDK, but not everyone has the DDK!
DEFINE_GUID( GUID_DEVCLASS_CDROM,     
            0x4d36e965L, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_NET,       
            0x4d36e972L, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_DISPLAY,   
            0x4d36e968L, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_KEYBOARD,  
            0x4d36e96bL, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_MOUSE,     
            0x4d36e96fL, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_SOUND,     
            0x4d36e97cL, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_USB,       
            0x36fc9e60L, 0xc465, 0x11cf, 0x80, 0x56, 
            0x44, 0x45, 0x53, 0x54, 0x00, 0x00 );
DEFINE_GUID( GUID_DEVCLASS_DISKDRIVE, 
            0x4d36e967L, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_PORTS,     
            0x4d36e978L, 0xe325, 0x11ce, 0xbf, 0xc1, 
            0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 );
DEFINE_GUID( GUID_DEVCLASS_PROCESSOR, 
            0x50127dc3L, 0x0f36, 0x415e, 0xa6, 0xcc, 
            0x4c, 0xb3, 0xbe, 0x91, 0x0B, 0x65 );

DWORD GetPnPStuff(LPGUID pGuid, LPTSTR szData, DWORD cData) {

    HDEVINFO hDevInfo = SetupDiGetClassDevs(NULL,
        NULL,            
        NULL,
        DIGCF_PRESENT | DIGCF_ALLCLASSES); 

    if (INVALID_HANDLE_VALUE == hDevInfo)
        return GetLastError();

    //Enumerate all devices in Set.   
    SP_DEVINFO_DATA did;
    did.cbSize = sizeof(SP_DEVINFO_DATA);

    for (int i = 0;
        SetupDiEnumDeviceInfo(hDevInfo,i,&did);
        i++) {

            //Is this a device we're interested in?
            if (*pGuid != did.ClassGuid)
                continue;

            const DWORD cBuff = 256;
            char  Buff[cBuff];  
            DWORD dwRegType = 0, cNeeded = 0;

            if (SetupDiGetDeviceRegistryProperty(hDevInfo,
                &did,
                SPDRP_HARDWAREID,
                &dwRegType,
                (PBYTE)Buff,
                cBuff,
                &cNeeded))
                //Potential for data loss, but that's ok.
                if (cData > cNeeded) {
                    StringCchCat(szData,cData,"
	");
                    StringCchCat(szData,cData,Buff);
                }
        }  

        return 0;
}

DWORD CreateHashFromPnPStuff(HCRYPTHASH hHash) {
    struct {
        LPGUID guid;
        _TCHAR *szDevice;
    } device [] = 
    {
        {(LPGUID)&GUID_DEVCLASS_CDROM,    "CD"},
        {(LPGUID)&GUID_DEVCLASS_DISPLAY,  "VDU"},
        {(LPGUID)&GUID_DEVCLASS_NET,      "NET"},
        {(LPGUID)&GUID_DEVCLASS_KEYBOARD, "KBD"},
        {(LPGUID)&GUID_DEVCLASS_MOUSE,    "MOU"},
        {(LPGUID)&GUID_DEVCLASS_USB,      "USB"},
        {(LPGUID)&GUID_DEVCLASS_PROCESSOR,"CPU"}
    };

    const DWORD cData = 4096;
    TCHAR *pData = new TCHAR[cData];
    if (!pData)
        return ERROR_NOT_ENOUGH_MEMORY;

    DWORD dwErr = 0;

    for (int i=0; i < sizeof(device)/sizeof(device[0]); i++) {

        ZeroMemory(pData,cData);

        if (GetPnPStuff(device[i].guid,pData,cData) == 0) {
#ifdef _DEBUG
            printf("%s: %s
",device[i].szDevice, pData);
#endif
            if (!CryptHashData(hHash, 
                (LPBYTE)pData, lstrlen(pData), 0)) {
                    dwErr = GetLastError();
                    break;
                }
        } else {
            dwErr = GetLastError();
        }
    }

    delete [] pData;

    return dwErr;
}

int _tmain(int argc, _TCHAR* argv[]) {
    HCRYPTPROV hProv = NULL;
    HCRYPTHASH hHash = NULL; 

    if (CryptAcquireContext
        (&hProv,NULL,NULL,PROV_RSA_FULL,CRYPT_VERIFYCONTEXT)) {
            if (CryptCreateHash(hProv, CALG_SHA1, 0, 0, &hHash)) {
                if (CreateHashFromPnPStuff(hHash) == 0) {

                    //get the hash
                    BYTE hash[20];
                    DWORD cbHash = 20;

                    if (CryptGetHashParam
                        (hHash,HP_HASHVAL,hash,&cbHash,0)) {
                            for (DWORD i=0; i < cbHash; i++) {
                                printf("%02X",hash[i]);
                            }
                        }
                }
            }
        }

        if (hHash)      
            CryptDestroyHash(hHash);

        if (hProv)
            CryptReleaseContext(hProv, 0);

}

Be careful if you use code like this to build long-lived encryption keys. If the hardware changes, so does the key. With this in mind, restrict the hardware you query to hardware that never changes. And be mindful of a laptop in the docked and undocked state!

It’s important to realize that none of this is truly secure—it just might be secure enough for the data you’re trying to protect. That last point again: it might be secure enough.

Note

It’s important to notify the user in Help files or documentation that the platform stores secrets on a best-effort basis.

Not Opting for a Least Common Denominator Solution

No doubt you’ve realized that different versions of Windows provide different data protection technologies. Generally speaking, the new versions of the operating system provide better data security by way of ACLs, cryptographic services, and high-level data protection capabilities. However, what if your application must run on Windows NT 4 and later, yet you want your application to provide the best possible security for client data on the newer operating systems? You could always use what’s available in Windows NT 4, but, as you’ve read, Windows 2000 offers more capability than Windows NT 4 through the data protection API. The best way to take advantage of what the operating system has to offer is to call the functions indirectly, using run-time dynamic linking rather than load-time dynamic linking, and to wrap the calls in wrapper functions to isolate the code from the operating system. For example, the following code snippet works in Windows NT and Windows 2000 and later, and it has the logic to use DPAPI on Windows 2000 and LSA secrets on Windows NT 4:

//signature for CryptProtectData
typedef BOOL (WINAPI CALLBACK* CPD)
(DATA_BLOB*,LPCWSTR,DATA_BLOB*,
 PVOID,CRYPTPROTECT_PROMPTSTRUCT*,DWORD,DATA_BLOB*);

//signature for CryptUnprotectData
typedef BOOL (WINAPI CALLBACK* CUD)
(DATA_BLOB*,LPWSTR,DATA_BLOB*,
 PVOID,CRYPTPROTECT_PROMPTSTRUCT*,DWORD,DATA_BLOB*);

HRESULT EncryptData(LPCTSTR szPlaintext) {
    HRESULT hr = S_OK;
    HMODULE hMod = LoadLibrary(_T("crypt32.dll"));
    if (!hMod)
        return HRESULT_FROM_WIN32(GetLastError());

    CPD cpd = (CPD)GetProcAddress(hMod,_T("CryptProtectData"));                                                

    if (cpd) {
        //call DPAPI using (cpd)(args);
        //store result in ACLd registry location
    } else {
        //call LSA Secrets API
    }

    FreeLibrary(hMod);

    return hr;
}

Managing Secrets in Memory

When maintaining secret data in memory, you should follow some simple guidelines:

  • Acquire the secret data.

  • Use the secret data.

  • Discard the secret data.

  • Scrub the memory.

The time between acquiring the secret data and scrubbing the memory holding the data should be as short as possible to reduce the chance that the secret data is paged to the paging file. Admittedly, the threat of someone accessing the secret data in the page file is slim. However, if the data is highly sensitive, such as long-lived signing keys and administrator passwords, you should take care to make sure the data is not leaked through what seems like innocuous means. In addition, if the application fails with an access violation, the ensuing crash dump file might contain the secret information.

Once you’ve used the secret in your code, overwrite the buffer with bogus data (or simply zeros) by using memset or ZeroMemory, which is a simple macro around memset:

#define ZeroMemory RtlZeroMemory
#define RtlZeroMemory(Destination,Length) \
    memset((Destination),0,(Length))

There’s a little trick you should know for cleaning out dynamic buffers if you lose track or do not store the buffer size in your code. (To many people, not keeping track of a dynamic buffer size is bad form, but that’s another discussion!) If you allocate dynamic memory by using malloc, you can use the _msize function to determine the size of the data block. If you use the Windows heap functions, such as HeapCreate and HeapAlloc, you can determine the block size later by calling the HeapSize function. Once you know the dynamic buffer size, you can safely zero it out. The following code snippet shows how to do this:

void *p = malloc(N);

...

size_t cb = _msize(p);
memset(p,0,cb);
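
The equivalent sketch with the Windows heap functions looks like this (N is whatever size you allocated):

HANDLE hHeap = GetProcessHeap();
void *p = HeapAlloc(hHeap, 0, N);

...

//HeapSize returns (SIZE_T)-1 on failure.
SIZE_T cb = HeapSize(hHeap, 0, p);
if (cb != (SIZE_T)-1)
    memset(p, 0, cb);
HeapFree(hHeap, 0, p);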

A Compiler Optimization Caveat

Today’s C and C++ compilers have incredible optimization capabilities. They can determine how best to use machine registers (register coloring), move code that manipulates or generates invariant data out of loops (code hoisting), and much more. One of the more interesting optimizations is dead code removal. When the compiler analyzes the code, it can determine whether some code is used based in part on whether the code is called by other code or whether the data the code operates on is used. Look at the following fictitious code—can you spot the security flaw?

void DatabaseConnect(char *szDB) {
    char szPwd[64];
    if (GetPasswordFromUser(szPwd,sizeof(szPwd))) {
        if (ConnectToDatabase(szDB, szPwd)) {
            // Cool, we’re connected
            // Now do database stuff
        }
    }
    ZeroMemory(szPwd,sizeof(szPwd));
}

Here’s the answer: there is no bug; this C code is fine! It’s the code generated by the compiler that exhibits the security flaw. If you look at the assembly language output, you’ll notice that the call to ZeroMemory has been removed by the compiler! The compiler removed the call to ZeroMemory because it realized the szPwd variable was no longer used by the DatabaseConnect function. Why spend CPU cycles scrubbing the memory of something that’s no longer used? Below is the slightly cleaned up assembly language output of the previous code created by Microsoft Visual C++ .NET. It contains the C source code, as well as the Intel x86 instructions. The C source code lines start with a semicolon (;) followed by the line number (starting at 30, in this case) and the C source. Below the C source lines are the assembly language instructions.

; 30   : void DatabaseConnect(char *szDB) {

        sub        esp, 68                        ; 00000044H
        mov        eax, DWORD PTR ___security_cookie
        xor        eax, DWORD PTR __$ReturnAddr$[esp+64]

; 31   :     char szPwd[64];
; 32   :     if (GetPasswordFromUser(szPwd,sizeof(szPwd))) {

        push        64                    ; 00000040H
        mov        DWORD PTR __$ArrayPad$[esp+72], eax
        lea        eax, DWORD PTR _szPwd$[esp+72]
        push        eax
        call        GetPasswordFromUser
        add        esp, 8
        test        al, al
        je    SHORT $L1344

; 33   :         if (ConnectToDatabase(szDB, szPwd)) {

        mov        edx, DWORD PTR _szDB$[esp+64]
        lea        ecx, DWORD PTR _szPwd$[esp+68]
        push        ecx
        push        edx
        call        ConnectToDatabase
        add        esp, 8
    $L1344:

; 34   :             //Cool, we’re connected
; 35   :             //Now do database stuff
; 36   :         }
; 37   :     }
; 38   : 
; 39   :     ZeroMemory(szPwd,sizeof(szPwd));
; 40   : }

        mov        ecx, DWORD PTR __$ArrayPad$[esp+68]
        xor        ecx, DWORD PTR __$ReturnAddr$[esp+64]
        add        esp, 68                        ; 00000044H
        jmp        @__security_check_cookie@4
DatabaseConnect ENDP

The assembly language code after line 30 is added by the compiler because of the -GS compiler "stack-based cookie" option. (Refer to Chapter 5 for more information about this option.) However, take a look at the code after lines 34 to 40. This code checks that the cookie created by the code after line 30 is valid. But where is the code to zero out the buffer? It’s not there! Normally, you would see a call to _memset. (Remember: ZeroMemory is a macro that calls memset.)

The problem is that the compiler should not remove this code, because we always want the memory scrubbed of the secret data. But because the compiler determined that szPwd was no longer used by the function, it removed the code. I’ve seen this behavior in Microsoft Visual C++ versions 6 and 7 and in the GNU C Compiler (GCC) version 3.x. No doubt other compilers have this issue also. During the Windows Security Push—see Chapter 2 for more information—we created an inline version of ZeroMemory named SecureZeroMemory that is not removed by the compiler and that is available in winbase.h. The code for this inline function is as follows:

#ifndef FORCEINLINE
#if (_MSC_VER >= 1200)
#define FORCEINLINE __forceinline
#else
#define FORCEINLINE __inline
#endif
#endif

...

FORCEINLINE PVOID SecureZeroMemory(
    void  *ptr, size_t cnt) {
    volatile char *vptr = (volatile char *)ptr;
    while (cnt) {
        *vptr = 0;
        vptr++;
        cnt--;
    }
    return ptr;
}

Feel free to use this code in your application if you do not have the updated Windows header files. Please be aware that this code is slow, relative to ZeroMemory or memset, and should be used only for small blocks of sensitive data. Do not use it as a general memory-wiping function, unless you want to invite the wrath of your performance people!

You can use other techniques to prevent the optimizer from removing the calls to memset. You can add a line of code after the scrubbing function to read the sensitive data in memory, but be wary of the optimizer again. You can fool the optimizer by casting the pointer to a volatile pointer; because a volatile pointer can be manipulated outside the scope of the application, it is not optimized by the compiler. Changing the code to include the following line after the call to ZeroMemory will keep the optimizer at bay:

*(volatile char*)szPwd = *(volatile char *)szPwd;

The problem with the previous two techniques is that they rely on the fact that volatile pointers are not optimized well by the C/C++ compilers—this only works today. Optimizer developers are always looking at ways to squeeze that last ounce of size and speed from your code, and who knows, three years from now, there might be a way to optimize volatile pointer code safely.

Another way to solve the issue that does not require compiler tricks is to turn off optimizations for the code that scrubs the data. You can do this by wrapping the function(s) in question with the #pragma optimize construct:

#pragma optimize("",off)
// Memory-scrubbing function(s) here.
#pragma optimize("",on)

This will turn off optimizations for the entire function. Global optimizations, -Og (implied by the -Ox, -O1 and -O2 compile-time flags), are what Visual C++ uses to remove dead stores. But remember, global optimizations are "a very good thing," so keep the code affected by the #pragma constructs to a minimum.

Encrypting Secret Data in Memory

If you must use long-lived secret data in memory, you should consider encrypting the memory while it is not being used. Once again, this helps mitigate the threat of the data being paged out. You can use any of the CryptoAPI samples shown previously to perform this task. While this works, you’ll have to manage keys.

In Windows .NET Server 2003, we added two new APIs along the same lines as DPAPI but for protecting in-memory data. The function calls are CryptProtectMemory and CryptUnprotectMemory. The base key used to protect the data is re-created each time the computer is booted, and other key material is used depending on flags passed to the functions. Your application need never see an encryption key when using these functions. The following code sample shows how to use the functions.

#include <wincrypt.h>

#define SECRET_LEN 15  //includes null

HRESULT hr = S_OK;
LPWSTR pSensitiveText = NULL;
DWORD cbSensitiveText = 0;
DWORD cbPlainText = SECRET_LEN * sizeof(WCHAR);
DWORD dwMod = 0;

//Memory to encrypt must be a multiple 
//of CRYPTPROTECTMEMORY_BLOCK_SIZE.
if (dwMod = cbPlainText % CRYPTPROTECTMEMORY_BLOCK_SIZE)
    cbSensitiveText = cbPlainText + (CRYPTPROTECTMEMORY_BLOCK_SIZE - dwMod);
else
    cbSensitiveText = cbPlainText;

pSensitiveText = (LPWSTR)LocalAlloc(LPTR, cbSensitiveText);
if (NULL == pSensitiveText)
        return E_OUTOFMEMORY;

//Place sensitive string to encrypt in pSensitiveText.
//Then encrypt in place
if (!CryptProtectMemory(pSensitiveText, 
        cbSensitiveText, 
        CRYPTPROTECTMEMORY_SAME_PROCESS)) {
     //on failure clean out the data
    SecureZeroMemory(pSensitiveText, cbSensitiveText);
    LocalFree(pSensitiveText);
    pSensitiveText = NULL;
    return GetLastError();
}

//Call CryptUnprotectMemory to decrypt and use the memory.
...
//Now clean up
SecureZeroMemory(pSensitiveText, cbSensitiveText);
LocalFree(pSensitiveText);
pSensitiveText = NULL;

return hr;

You can learn more about these new functions in the Platform SDK.

Locking Memory to Prevent Paging Sensitive Data

You can prevent data from being written to the page file by locking it in memory. However, doing so is actively discouraged because locking memory can prevent the operating system from performing some memory management tasks effectively. Therefore, you should lock memory (by using functions like AllocateUserPhysicalPages and VirtualLock) with caution and only do so when dealing with highly sensitive data. Be aware that locking memory does not prevent the memory from being written to a hibernate file or to a crash dump file, nor does it prevent an attacker from attaching a debugger to the process and reading data out of the application address space.
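
The following fragment is a minimal sketch of the VirtualLock approach; the buffer size is illustrative, and real code should check the return values more carefully:

//Commit a page, lock it so it cannot be paged to disk, use it, scrub it.
const SIZE_T cbSecret = 4096;
void *pSecret = VirtualAlloc(NULL, cbSecret,
                             MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
if (pSecret && VirtualLock(pSecret, cbSecret)) {
    //...place the sensitive data here and use it...

    SecureZeroMemory(pSecret, cbSecret);
    VirtualUnlock(pSecret, cbSecret);
}
if (pSecret)
    VirtualFree(pSecret, 0, MEM_RELEASE);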

Protecting Secret Data in Managed Code

Currently the .NET common language runtime and .NET Framework offer no service for storing secret information in a secure manner, and storing a password in plaintext in an XML file is not raising the bar very high! Part of the reason for not adding this support is the .NET philosophy of XCOPY deployment. In other words, any application can be written and then deployed using simple file-copying tools. There should be no need to register DLLs or controls or to set any settings in the registry. You copy the files, and the application is live. With that in mind, you might realize that storing secrets defeats this noble goal. You cannot store secret data without the aid of tools, because encryption uses complex algorithms and keys. However, there’s no reason why, as an application developer, you cannot deploy an application after using tools to configure secret data. Or your application could use secrets but not store them. What I mean is this: your application can use and cache secret data but not persist the data, in which case XCOPY deployment is still a valid option.

If you see code like the following "encryption" code, file a bug and have it fixed as soon as possible. It’s a great example of "encraption," not encryption:

public static char[] EncryptAndDecrypt(string data)  {
        //SSsshh!! Don’t tell anyone.
        string key = "yeKterceS";
        char[] text = data.ToCharArray();
        for (int i = 0; i < text.Length; i++)
            text[i] ^= key[i % key.Length];

        return text;
}

Today, the only way to protect secret data from managed code is to call unmanaged code, which means you can call LSA or DPAPI from a managed application.

The following sample code outlines how you can use C# to create a class that interfaces with DPAPI. Note that there’s another file that goes with this file, named NativeMethods.cs, that contains platform invoke (PInvoke) definitions, data structures, and constants necessary to call DPAPI. You can find all of these files with the book’s sample files in the folder Secureco2\Chapter09\DataProtection. The System.Runtime.InteropServices namespace provides a collection of classes useful for accessing COM objects and native APIs from .NET-based applications.

//DataProtection.cs
namespace Microsoft.Samples.DPAPI {

    using System;
    using System.Runtime.InteropServices; 
    using System.Text;

    public class DataProtection {
        // Protect string and return base64-encoded data.
        public static string ProtectData(string data, 
                                         string name, 
                                         int flags) {
            byte[] dataIn = Encoding.Unicode.GetBytes(data);
            byte[] dataOut = ProtectData(dataIn, name, flags);

            return (null != dataOut)
                ? Convert.ToBase64String(dataOut)
                : null;
        }

        // Unprotect base64-encoded data and return string.
        public static string UnprotectData(string data)  {
            byte[] dataIn = Convert.FromBase64String(data);
            byte[] dataOut = UnprotectData(dataIn,
                NativeMethods.UIForbidden | 
                NativeMethods.VerifyProtection);

            return (null != dataOut) 
                ? Encoding.Unicode.GetString(dataOut)
                : null;
        }

        ////////////////////////
        // Internal functions //
        ////////////////////////

        internal static byte[] ProtectData(byte[] data,  
                                           string name,  
                                           int dwFlags)  {
            byte[] cipherText = null;

            // Copy data into unmanaged memory.
            NativeMethods.DATA_BLOB din = 
                new NativeMethods.DATA_BLOB();
            din.cbData = data.Length;
            din.pbData = Marshal.AllocHGlobal(din.cbData);
            Marshal.Copy(data, 0, din.pbData, din.cbData);

            NativeMethods.DATA_BLOB dout = 
                new NativeMethods.DATA_BLOB();

            NativeMethods.CRYPTPROTECT_PROMPTSTRUCT ps  = 
                new NativeMethods.CRYPTPROTECT_PROMPTSTRUCT();
         
            //Fill the DPAPI prompt structure.
            InitPromptstruct(ref ps);

            try {
                bool ret = 
                    NativeMethods.CryptProtectData(
                        ref din, 
                        name, 
                        NativeMethods.NullPtr,
                        NativeMethods.NullPtr, 
                        ref ps, 
                        dwFlags, ref dout);

                if (ret) {
                    cipherText = new byte[dout.cbData];
                    Marshal.Copy(dout.pbData, 
                                 cipherText, 0, dout.cbData);
                    NativeMethods.LocalFree(dout.pbData);
                } else {
                    #if (DEBUG)
                    Console.WriteLine("Encryption failed: " + 
                        Marshal.GetLastWin32Error().ToString());
                    #endif
                }
            }
            finally {
                if ( din.pbData != IntPtr.Zero )
                    Marshal.FreeHGlobal(din.pbData);
            }

            return cipherText;
        }

        internal static byte[] UnprotectData(byte[] data, 
                                             int dwFlags) {
            byte[] clearText = null;

            //Copy data into unmanaged memory.
            NativeMethods.DATA_BLOB din = 
                new NativeMethods.DATA_BLOB();
            din.cbData = data.Length;
            din.pbData = Marshal.AllocHGlobal(din.cbData);
            Marshal.Copy(data, 0, din.pbData, din.cbData);

            NativeMethods.CRYPTPROTECT_PROMPTSTRUCT ps = 
                new NativeMethods.CRYPTPROTECT_PROMPTSTRUCT();
         
            InitPromptstruct(ref ps);

            NativeMethods.DATA_BLOB dout = 
                new NativeMethods.DATA_BLOB();

            try {
                bool ret = 
                    NativeMethods.CryptUnprotectData(
                        ref din, 
                        null, 
                        NativeMethods.NullPtr,
                        NativeMethods.NullPtr, 
                        ref ps, 
                        dwFlags, 
                        ref dout);

                if (ret) {
                    clearText = new byte[ dout.cbData ] ;
                    Marshal.Copy(dout.pbData, 
                                 clearText, 0, dout.cbData);
                    NativeMethods.LocalFree(dout.pbData);
                } else {
                    #if (DEBUG)
                    Console.WriteLine("Decryption failed: " + 
                        Marshal.GetLastWin32Error().ToString());
                    #endif
                }
            }

            finally {
                if ( din.pbData != IntPtr.Zero )
                    Marshal.FreeHGlobal(din.pbData);
            }

            return clearText;
        }

        static internal void InitPromptstruct(
            ref NativeMethods.CRYPTPROTECT_PROMPTSTRUCT ps) {
            ps.cbSize = Marshal.SizeOf(
                typeof(NativeMethods.CRYPTPROTECT_PROMPTSTRUCT));
            ps.dwPromptFlags = 0;
            ps.hwndApp = NativeMethods.NullPtr;
            ps.szPrompt = null;
        }
    }
}

The following C# driver code shows how to use the DataProtection class:

using Microsoft.Samples.DPAPI;
using System;
using System.Text;

class TestStub {
    public static void Main(string[] args) {
        string data = "Gandalf, beware of the Balrog in Moria.";
        string name="MySecret";
        Console.WriteLine("String is: " + data);
        string s = DataProtection.ProtectData(data, 
            name, 
            NativeMethods.UIForbidden);
        if (null == s) {
            Console.WriteLine("Failure to encrypt");
            return;
        }
        Console.WriteLine("Encrypted Data: " + s);
        s = DataProtection.UnprotectData(s);
        Console.WriteLine("Cleartext: " + s);
    }
}

You can also use COM+ construction strings. COM+ object construction enables you to specify an initialization string stored in the COM+ metadata, thereby eliminating the need to hard-code configuration information within a class. You can use functions in the System.EnterpriseServices namespace to access a construction string. You should use this option only for protecting data used in server-based applications. The following code shows how you can create a COM+ component in C# that manages the constructor string. This component performs no other task than to act as a conduit for the construct string. Note that you will need to create your own private/public key pair using the SN.exe tool when giving this assembly a strong name, and you will need to replace the reference to c:\keys\DemoSrv.snk with the reference to your key data. Refer to Chapter 18 for information about strong-named assemblies.

using System;
using System.Reflection;
using System.Security.Principal;
using System.EnterpriseServices;

[assembly: ApplicationName("ConstructDemo")]
[assembly: ApplicationActivation(ActivationOption.Library)]
[assembly: ApplicationAccessControl]
[assembly: AssemblyKeyFile(@"c:\keys\DemoSrv.snk")]

namespace DemoSrv {
    [ComponentAccessControl]
        [SecurityRole("DemoRole", SetEveryoneAccess = true)]

        // Enable object construct strings.
        [ConstructionEnabled(Default="Set new data.")]
        public class DemoComp : ServicedComponent {
                private string _construct;

                override protected void Construct(string s) {
                        _construct = s; 
                }

                public string GetConstructString() {
                        return _construct;  
                }
        }
} 

And the following Microsoft ASP.NET code shows how you can access the data in the constructor string:

Function SomeFunc() As String
    ' Create a new instance of the ServicedComponent class
    ' and access our method that exposes the construct string.
    Dim obj As DemoComp = New DemoComp

    SomeFunc = obj.GetConstructString()
End Function

Administration of the constructor string data is performed through the Component Services MMC tool, as shown in Figure 9-2. You can find out more about System.EnterpriseServices at http://msdn.microsoft.com/msdnmag/issues/01/10/complus/complus.asp.

Figure 9-2. Setting a new constructor string for a COM+ component.

Managing Secrets in Memory in Managed Code

Managing secret data in managed code is no different than doing so in unmanaged code: you should acquire the secret data, use it, and discard it. However, there’s one small caveat: .NET strings are immutable. If the secret data is held in a string, it cannot be overwritten. Therefore, it’s crucial that secret data be stored in byte arrays and not strings. The following simple C# class, ErasableData, could be used instead of strings to store passwords and keys. Included is a driver program that takes a command-line argument and encrypts it with a key obtained from the user. The key is then erased from memory when the work is done.

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Cryptography;
using System.Text;

class ErasableData : IDisposable {
    private byte[] _rbSecret;
    private GCHandle _ph;

    public ErasableData(int size) {
        _rbSecret = new byte[size];
    }

    public void Dispose() {
        // Scrub the secret, then release the pinning handle if one was allocated.
        Array.Clear(_rbSecret, 0, _rbSecret.Length);
        if (_ph.IsAllocated)
            _ph.Free();
    }

    // Accessors
    public byte[] Data {
        set {
            // Pin the array so the garbage collector cannot move it and
            // leave stray copies of the secret behind.
            _ph = GCHandle.Alloc(_rbSecret, GCHandleType.Pinned);

            // Copy the secret into the pinned array.
            Array.Copy(value, _rbSecret, value.Length);
        }

        get {
            return _rbSecret;
        }
    }
}

class DriverClass {
    static void Main(string[] args) {
        if (args.Length == 0) {
            // error!
            return;
        }

        //Get bytes from the argument.
        byte [] plaintext = 
            new UTF8Encoding().GetBytes(args[0]);

        //Encrypt data in memory.
        using (ErasableData key = new ErasableData(16)) {
            // GetSecretFromUser (not shown here) obtains the key bytes from the user.
            key.Data = GetSecretFromUser();
            Rijndael aes = Rijndael.Create();
            aes.Key = key.Data;

            MemoryStream cipherTextStream = new MemoryStream();
            CryptoStream cryptoStream = new CryptoStream(
                cipherTextStream,
                aes.CreateEncryptor(),
                CryptoStreamMode.Write);
            cryptoStream.Write(plaintext, 0, plaintext.Length);
            cryptoStream.FlushFinalBlock();
            cryptoStream.Close();

            //Get ciphertext and Initialization Vector (IV).
            byte [] ciphertext = cipherTextStream.ToArray();
            byte [] IV = aes.IV;

            //Scrub data maintained by the crypto class.
            aes.Clear();
            cryptoStream.Clear();
        }
    }
}

Notice that this code takes advantage of the IDisposable interface to automatically erase the object when it’s no longer needed. The C# using statement obtains one or more resources, executes the statements that use them, and then disposes of the resources by calling their Dispose methods. Also note the explicit calls to aes.Clear and cryptoStream.Clear; the Clear method clears all secret data maintained by the encryption and stream classes.

A more complete sample C# class, named Password, is available with the sample code for this book.

Raising the Security Bar

This section focuses on the different ways of storing secret data and describes the effort required by an attacker to read the data (information disclosure threat) or to modify the data (tampering with data threat). In all cases, a secret file, Secret.txt, is used to store secret data. In each scenario, the bar is raised further and the attacker has a more difficult time.

Storing the Data in a File on a FAT File System

In this example, if the file is stored on an unprotected disk drive—as an XML configuration file, for example—all the attacker needs to do is read the file, either through direct file access or possibly through a Web server. This is very weak security indeed: if the attacker can access the computer locally or remotely, she can probably read the file.

Using an Embedded Key and XOR to Encode the Data

The details in this case are the same as in the previous scenario, but a key embedded in the application that reads the file is used to XOR the data. If the attacker can read the file, he can break the XOR in a matter of minutes, especially if he knows the file contains text. It’s even worse if the attacker knows a portion of the text—for example, a header, such as the header in a Word file or a GIF file. All the attacker need do is XOR the known text with the encoded text, and he will determine the key or at least have enough information to determine the key.
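To make the attack concrete, here’s a small C# sketch. The one-byte key, the file contents, and the assumption that the file begins with an XML header are purely illustrative; a longer repeating key falls just as quickly, yielding one key byte for every byte of known plaintext.

using System;
using System.Text;

class XorKeyRecovery {
    // XOR "encoding" -- encoding and decoding are the same operation.
    static byte[] Encode(byte[] data, byte key) {
        byte[] result = new byte[data.Length];
        for (int i = 0; i < data.Length; i++)
            result[i] = (byte)(data[i] ^ key);
        return result;
    }

    static void Main() {
        // Hypothetical one-byte "embedded key" used by the application.
        byte key = 0x5A;
        byte[] cipherText = Encode(
            Encoding.ASCII.GetBytes("<?xml version=\"1.0\"?><secret>..."), key);

        // The attacker knows (or guesses) the first plaintext bytes:
        // a standard XML header.
        byte[] knownPlaintext = Encoding.ASCII.GetBytes("<?xml");

        // XORing known plaintext with ciphertext reveals the key directly.
        byte recoveredKey = (byte)(cipherText[0] ^ knownPlaintext[0]);
        Console.WriteLine("Recovered key: 0x" + recoveredKey.ToString("X2"));

        // Decode the rest of the file with the recovered key.
        Console.WriteLine(Encoding.ASCII.GetString(Encode(cipherText, recoveredKey)));
    }
}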

Using an Embedded Key and 3DES to Encrypt the Data

Same details as in the previous scenario, but a 3DES (Triple-DES) key is embedded in the application. This is also trivial to break. All the attacker need do is scan the application looking for something that looks like a key.
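Key material stands out because it looks random, whereas code, text, and resources generally do not. The following C# sketch shows one way an attacker might hunt for an embedded key by scanning a binary image for high-entropy regions. The window size and the threshold are arbitrary values chosen for illustration, not settings from any particular tool.

using System;
using System.IO;

class KeyHunter {
    static void Main(string[] args) {
        if (args.Length == 0) return;
        byte[] image = File.ReadAllBytes(args[0]);

        const int window = 256;        // bytes examined at a time
        const double threshold = 7.0;  // bits per byte; random data approaches 8.0

        for (int offset = 0; offset + window <= image.Length; offset += window) {
            double entropy = Entropy(image, offset, window);
            if (entropy > threshold)
                Console.WriteLine(
                    "Possible key material at offset 0x{0:X}: {1:F2} bits/byte",
                    offset, entropy);
        }
    }

    // Shannon entropy of a region of the image, in bits per byte.
    static double Entropy(byte[] data, int offset, int count) {
        int[] freq = new int[256];
        for (int i = 0; i < count; i++)
            freq[data[offset + i]]++;

        double entropy = 0.0;
        for (int b = 0; b < 256; b++) {
            if (freq[b] == 0) continue;
            double p = (double)freq[b] / count;
            entropy -= p * Math.Log(p, 2.0);
        }
        return entropy;
    }
}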

Using 3DES to Encrypt the Data and Storing a Password in the Registry

Same as in the previous scenario, but the key used to encrypt the data is held in the registry rather than embedded in the application. If the attacker can read the registry, she can read the encrypted data. Also note that if the attacker can read the file and you’re using a weak password as the key, the attacker can perform a password-guessing attack.
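The following C# sketch shows just the guessing loop of such an attack. The word list, the key-derivation step, and the assumption that the plaintext begins with an XML header are all hypothetical; a real attacker would simply mirror whatever derivation your application uses.

using System;
using System.Security.Cryptography;
using System.Text;

class PasswordGuesser {
    // Try every word in a dictionary as the password protecting the file.
    // Returns the matching password, or null if no candidate worked.
    static string GuessPassword(byte[] cipherText, byte[] iv, string[] wordList) {
        foreach (string candidate in wordList) {
            // Hypothetical derivation: hash the password and use 24 bytes as a 3DES key.
            byte[] hash = SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes(candidate));
            byte[] key = new byte[24];
            Array.Copy(hash, key, 24);

            try {
                TripleDES tdes = TripleDES.Create();
                ICryptoTransform decryptor = tdes.CreateDecryptor(key, iv);
                byte[] plain =
                    decryptor.TransformFinalBlock(cipherText, 0, cipherText.Length);

                // If the plaintext starts with a known marker, the guess is right.
                if (Encoding.UTF8.GetString(plain).StartsWith("<?xml"))
                    return candidate;
            } catch (CryptographicException) {
                // Bad padding or a weak key -- wrong guess; try the next word.
            }
        }
        return null;
    }
}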

Using 3DES to Encrypt the Data and Storing a Strong Key in the Registry

Same as the previous scenario, but now the attacker has a much harder time unless he can read the key from the registry. A brute-force attack is required, which might take a long time. However, if the attacker can read the registry, he can break the file.

Using 3DES to Encrypt the Data, Storing a Strong Key in the Registry, and ACLing the File and the Registry Key

In this case, if the ACLs are good—for example, the ACL contains only an Administrators (Read, Write) ACE—an attacker who doesn’t have administrator privileges cannot read the key or the file. However, if a vulnerability in the system gives the attacker administrator privileges, he can read the data. Some would say that all bets are off if the attacker is an administrator on the box. This is true, but there’s no harm in putting up a fight! The sketch that follows shows one way to apply such an ACL from managed code. And can you protect against a rogue administrator? Read on.
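The sketch assumes the .NET Framework 2.0 or later access control classes and a hypothetical file name; on earlier versions of the Framework you would set the ACL from unmanaged code with SetNamedSecurityInfo or with a tool such as cacls.exe.

using System.IO;
using System.Security.AccessControl;

class AclSecret {
    static void ProtectSecretFile(string path) {
        // Build an ACL containing only an Administrators (Read, Write) ACE, and
        // protect it so that no permissions are inherited from the parent folder.
        FileSecurity security = new FileSecurity();
        security.SetAccessRuleProtection(true, false);
        security.AddAccessRule(new FileSystemAccessRule(
            @"BUILTIN\Administrators",
            FileSystemRights.Read | FileSystemRights.Write,
            AccessControlType.Allow));

        File.SetAccessControl(path, security);
    }
}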

Using 3DES to Encrypt the Data, Storing a Strong Key in the Registry, Requiring the User to Enter a Password, and ACLing the File and the Registry Key

This is similar to the previous example. However, even an administrator cannot disclose the data because the key is derived from a key in the registry and a password known to the data owner. You could argue that the registry key is moot because of the user’s password. However, the registry entry is useful in the case of two users encrypting the same data if the users share the same registry encryption key. The addition of the user’s password, albeit inconvenient, creates different ciphertext for each user.
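A minimal sketch of that derivation follows. The registry location, the value name, and the use of PasswordDeriveBytes are illustrative assumptions; the point is simply that the registry bytes and the user’s password are folded together, so neither one alone is enough to re-create the key.

using System.Security.Cryptography;
using Microsoft.Win32;

class CombinedKey {
    // Derive a 3DES key from strong random bytes held in the registry
    // plus a password known only to the data owner.
    static byte[] DeriveKey(string password) {
        // Hypothetical registry location holding the shared key material.
        RegistryKey rk = Registry.LocalMachine.OpenSubKey(@"Software\MyApp");
        byte[] registryKey = (byte[])rk.GetValue("KeyMaterial");
        rk.Close();

        // The registry bytes act as the salt; the password supplies the
        // secret that only the data owner knows.
        PasswordDeriveBytes pdb = new PasswordDeriveBytes(password, registryKey);
        return pdb.GetBytes(24);   // 24 bytes = a 192-bit 3DES key
    }
}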

Ultimately, you have to consider using alternative ways of storing keys, preferably keys not held on the computer. You can do this in numerous ways, including using special hardware from companies such as nCipher (http://www.ncipher.com).

Trade-Offs When Protecting Secret Data

Like everything in the world of software development, building secure systems is all about making trade-offs. The most significant trade-offs you need to consider when building applications that store secrets are as follows:

  • Relative security

  • Effort required to develop such an application

  • Ease of deployment

Personally, I think that if you need to protect data, you need to protect data regardless of the development cost. A little extra time spent in development getting the solution right will save time and money in the future. The big trade-off is relative security versus ease of deployment, and the reason should be obvious: the more securely the data is stored, the harder the application tends to be to deploy. Table 9-1 offers a high-level view of the relative costs of the different data protection techniques; you should use it as a guideline.

Table 9-1. Trade-Offs to Consider When Protecting Secret Data

Option                                                      Relative Security  Development Effort  Deployment Ease
Configuration files (no encryption, for comparison only)   None               Low                 High
Embedded secrets in code—do not do this!                    None               Low                 Medium
COM+ construct strings                                      Medium             Medium              Medium
LSA secrets                                                 High               High                Low
DPAPI (local machine)                                       High               Medium              Low
DPAPI (user data)                                           High               Medium              Medium

Summary

Storing secret information securely in software is a difficult task to accomplish. In fact, it’s impossible to achieve perfection with today’s technology. To reduce the risk of compromising secret information, take advantage of the operating system’s security functionality, and store secret information only when you truly must. If you don’t store secrets, they cannot be compromised. Determine a "good enough" solution based on the threats to the data and the sensitivity of the data.
