CHAPTER 12
Identifying Windows Phone Implementation Issues

Having explored identification and vulnerability testing for various application-level weaknesses in Windows Phone applications in Chapter 11, we’ll now look at common implementation issues that can also be culprits for presenting security problems in apps.

You can think of implementation issues as being somewhat general issues that developers should be aware of to build suitably secure apps.

For example, storage of sensitive data may be considered an implementation issue. Failure to store personally identifiable information (PII) safely (that is, encrypted) could potentially have disastrous consequences for an individual or an organization if a lost or stolen device came into the wrong hands; hence, implementing such operations in a secure manner is important.

In this chapter we delve into more generic problems that are common to Windows Phone, rather than attacking specific pieces of an app’s functionality, as discussed in Chapter 11.

Identifying Insecure Application Settings Storage

Windows Phone provides a standard interface for persisting custom settings and data that the application developer deems appropriate to save for later use. This class is called IsolatedStorageSettings and can be viewed as Windows Phone's equivalent of iOS's NSUserDefaults and Android's SharedPreferences interfaces. You can find the MSDN documentation for IsolatedStorageSettings at http://msdn.microsoft.com/en-us/library/system.io.isolatedstorage.isolatedstoragesettings(v=vs.95).aspx.

IsolatedStorageSettings provides a convenient way for apps to store data as key-value pairs in a file in their Local folder. A typical use is to save settings relevant to the app, such as the number of images to display per page, the user's login name, page layout options, and other app-related settings. The IsolatedStorageSettings class essentially behaves as a thin wrapper around a dictionary object.

An application’s IsolatedStorageSettings instance is retrieved using the ApplicationSettings property, and if an instance doesn’t already exist, one is created accordingly.

Objects are stored to IsolatedStorageSettings using either the Add() method or array notation, and objects are retrieved using TryGetValue<T>() or, again, using array notation to dereference a value by its key.

For example, an application may store the hostname of a server it interacts with under a key named serverAddress, and the user's username, using code similar to the following:

IsolatedStorageSettings mySettings = IsolatedStorageSettings.ApplicationSettings;

mySettings.Add("serverAddress", "applicationServer.com");  // using Add() method
mySettings.Add("username", usernameToSave);                // using Add() method

mySettings.Save();

or:

IsolatedStorageSettings mySettings = 
           IsolatedStorageSettings.ApplicationSettings; 
 
mySettings["serverAddress"] = "applicationServer.com";
mySettings["username"] = usernameToSave;
 
 
mySettings.Save();

Note that changes to the settings instance are committed by calling the Save() method.

Conversely, the stored server address, saved in this case under the key serverAddress, may be retrieved from the application's settings storage like so:

IsolatedStorageSettings mySettings = 
           IsolatedStorageSettings.ApplicationSettings; 
 
string serverToConnectTo = (string)mySettings["serverAddress"];

or:

IsolatedStorageSettings mySettings = 
           IsolatedStorageSettings.ApplicationSettings; 
 
string serverToConnectTo = null; 
bool success = mySettings.TryGetValue("serverAddress", out serverToConnectTo);

Objects that are currently stored in the app’s IsolatedStorageSettings dictionary can also be removed using the Remove() method, in the expected way:

mySettings.Remove("serverAddress");

Note the mention of storing objects to IsolatedStorageSettings, as opposed to storing only strings and other simple data types. Although many apps use IsolatedStorageSettings only to store settings and configuration values as strings, integers, and Boolean values, the class is capable of storing more complicated objects. Objects that a developer wants to store must, of course, be serializable.
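As a brief illustrative sketch (the UserProfile type and its fields here are hypothetical, not taken from any real app), a serializable object might be stored like so:

```csharp
using System.IO.IsolatedStorage;

// Hypothetical serializable type; any object stored in
// IsolatedStorageSettings must be serializable.
public class UserProfile
{
    public string DisplayName { get; set; }
    public int ImagesPerPage { get; set; }
}

// Elsewhere in the app:
IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
settings["profile"] = new UserProfile
{
    DisplayName = "jsmith",
    ImagesPerPage = 20
};
settings.Save();  // the object graph is serialized to __ApplicationSettings
```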

After settings (or in general, objects) are committed to the app’s IsolatedStorageSettings, the class serializes key-value pairs to XML representations and saves the results to the filesystem, with any complex objects also being serialized to XML representations along the way.

For example, in keeping with the hypothetical situation just mentioned, where an app stored a hostname to IsolatedStorageSettings, the resulting file would include XML resembling the following:

<Key>serverAddress</Key>
<Value xmlns:d3p1="http://www.w3.org/2001/XMLSchema"
       i:type="d3p1:string">applicationServer.com</Value>

Although this is merely an implementation detail, the IsolatedStorageSettings object and the objects it stores are serialized and conversely deserialized under the hood by the DataContractSerializer class.

Each application's IsolatedStorageSettings file is stored in its Local directory and is named __ApplicationSettings. More specifically, an app's IsolatedStorageSettings file, if it has one, may be found at C:\Data\Users\DefApps\APPDATA\{GUID}\Local\__ApplicationSettings, where {GUID} is the app's GUID identifier.

When carrying out a security review of an application, extracting the __ApplicationSettings file from an app's local storage (using your full filesystem access; see Chapter 10) and reviewing its contents for interesting material is generally worthwhile, because Windows Phone developers use IsolatedStorageSettings frequently.

The IsolatedStorageSettings API does not encrypt key-value pair data in any way before storing it to the filesystem, so developers should be aware that any sensitive data stored using this interface is not safe from attackers who have access to an app’s local storage sandbox. As such, you should consider sensitive data storage via the IsolatedStorageSettings API to be a bug.

A good example of sensitive data that developers unwittingly store to IsolatedStorageSettings (without considering the consequences in the event that the device is compromised) is authentication credentials.

Although developers tend to store all manner of settings in their app’s IsolatedStorageSettings file, including sensitive information such as PII, finding sensitive credentials stored in __ApplicationSettings is also common.

For example, a developer who is perhaps less security-oriented may opt to store a set of login credentials that pertain to the user’s account on the app’s backend API. Such code could resemble this:

IsolatedStorageSettings mySettings = 
           IsolatedStorageSettings.ApplicationSettings; 
 
[ ... ] 
 
mySettings.Add("serverAddress", "applicationServer.com"); 
mySettings.Add("username", username); 
 
mySettings.Add("password", password); 
 
mySettings.Save();

The IsolatedStorageSettings API applies absolutely no encryption to these credentials, so they are prime and easy targets for theft by an attacker who manages to get access to the __ApplicationSettings file in the app's Local folder. Storing credentials and other sensitive settings in plaintext on the filesystem may be considered an even worse practice on Windows Phone than on other mobile OSes (that is, Android or iOS), because whole-device encryption is available only to enterprise-connected users with RequireDeviceEncryption enabled in their company's ActiveSync policy.

Figure 12.1 shows an __ApplicationSettings file being accessed from a Windows Phone device's filesystem, with what appear to be important login credentials residing in the serialized file in plaintext.


Figure 12.1 Accessing an __ApplicationSettings file on a device’s filesystem

During security reviews of Windows Phone apps, you should ensure that apps are not storing credentials and other pieces of sensitive information unencrypted. This is a fairly common problem, given the simplicity of the IsolatedStorageSettings API, in much the same way that iOS's NSUserDefaults and Android's SharedPreferences are misused for insecure settings storage.
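One partial mitigation is to encrypt sensitive values before they ever reach the settings dictionary. The following is a minimal sketch using Windows Phone's ProtectedData (DPAPI) class; note that DPAPI itself has weaknesses on Windows Phone, which are covered later in this chapter, and that passwordToSave is a hypothetical variable:

```csharp
using System;
using System.IO.IsolatedStorage;
using System.Security.Cryptography;
using System.Text;

// Sketch only: DPAPI-protect a secret before storing it in
// IsolatedStorageSettings, rather than storing it in plaintext.
byte[] plaintext = Encoding.UTF8.GetBytes(passwordToSave);
byte[] protectedBytes = ProtectedData.Protect(plaintext, null);

IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
settings["password"] = Convert.ToBase64String(protectedBytes);
settings.Save();
```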

Identifying Data Leaks

Some applications carry out actions that result in data being stored in ways not directly relevant to their functionality. For example, an app may use a WebBrowser control, which often leads to visited pages being cached to disk in the app’s sandboxed filesystem. In addition, visited pages may also store cookies. Both cookies and web cache can include data that is sensitive in nature, so their storage may understandably be considered undesirable.

Applications may also store logs at runtime, either for the purpose of error reporting (that is, telemetry to the vendor), to aid the vendor during the app's development process, or both. Some applications are guilty of logging sensitive or otherwise useful information, sometimes including login credentials.

You can think of these three cases generally as data leaks. Storage of cookies and web cache by WebBrowser and WebView controls is implicit and not directly intended by the developer. The use of application logging is also not directly relevant to the operation of an app, but all of these have the potential to result in the disclosure of sensitive data to attackers.

HTTP(S) Cookie Storage

Because WebBrowser and WebView controls provide a subset of full web browser functionality, it’s unsurprising that they store cookies much like a full browser does.

The majority of Windows Phone apps we reviewed that feature WebBrowser or WebView controls don’t automatically attempt to clear stored cookies after use.

Assuming you (or a would-be attacker) have filesystem access to a Windows Phone device, checking whether or not cookies are cleared is easy to do for any app. A WebBrowser or WebView control will automatically store cookies to the following location: C:\Data\Users\DefApps\APPDATA\{GUID}\INetCookies, where {GUID} is the application's GUID. The INetCookies directory is hidden by default, so you should type the full path into your file manager rather than expect INetCookies to show up in its GUI interface.

Figure 12.2 shows the inspection of stored cookies in the INetCookies directory. In applications where WebBrowser or WebView controls are hosting authenticated sessions, failure to deal with cookie deletion could represent a fairly serious security issue.


Figure 12.2 Browsing an app’s INetCookies directory on a device

Unless the device in question is enterprise-linked to an ActiveSync instance with RequireDeviceEncryption enabled, any cookies stored to the INetCookies directory are stored in the clear when the device is at rest.

Chapter 13 provides details on how to clear cookies in both the WebView and WebBrowser controls.
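As a brief preview, the Windows Phone 8 WebBrowser control exposes asynchronous methods for this purpose; a minimal sketch (assuming a Microsoft.Phone.Controls.WebBrowser instance named myBrowser, called from an async method) might look like:

```csharp
// Sketch: clear cookies and cached content left behind by a
// WebBrowser control once an authenticated session ends.
// Assumes a WebBrowser control named myBrowser.
await myBrowser.ClearCookiesAsync();
await myBrowser.ClearInternetCacheAsync();
```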

HTTP(S) Caching

When applications use WebBrowser or WebView controls to request remote web pages, it’s not uncommon for the control to store cached copies of the web content to the app’s sandboxed directory structure.

Some applications use WebView or WebBrowser controls to render important interfaces that offer a great deal of their functionality—sometimes in an authenticated context. Particularly in these cases, cached web content may well contain sensitive information that was present in rendered pages, including HTML files, JavaScript files, and images.

As mentioned, cached content will be stored in plaintext on the filesystem (when the device is at rest) unless the device is enterprise-linked to an ActiveSync server with the RequireDeviceEncryption setting enabled.

WebView and WebBrowser controls store their cache in the INetCache directory within the app's filesystem sandbox. More specifically, replacing {GUID} with the actual GUID of the application in question, you can find any cache files stored by the app at C:\Data\Users\DefApps\APPDATA\{GUID}\INetCache. Note that INetCache is a hidden directory, so you'll have to navigate to it by typing its name into your file manager's address bar or equivalent.

See Chapter 13 for details on how to prevent caching by WebBrowser and WebView controls, so that sensitive content that has been rendered is not inadvertently left around in the app’s filesystem sandbox.

Application Logging

Windows Phone 8.x includes the standard logging APIs, such as Debug.WriteLine(), but messages written using this and related APIs are not persisted to a log anywhere; there is no analogue of Android's logcat, for example. If the app is not being debugged (that is, via Visual Studio), the logging calls essentially have no effect.

Some apps, however, may log to their Local directory, either via hand-rolled logging code, or via a known framework.

A logging solution is available on MSDN at https://code.msdn.microsoft.com/windowsapps/A-logging-solution-for-c407d880.

Two other free application logging frameworks are WPClogger (http://wpclogger.codeplex.com/) and Splunk MINT Express (https://mint.splunk.com/).

When auditing applications, testers should examine calls to logging-style APIs and ensure that they are not logging anything potentially sensitive to the filesystem, such as passwords and other credentials.
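For example, a hand-rolled logger resembling the following hypothetical code (the file name and the username/password variables are illustrative only) is exactly the kind of pattern to flag when credentials or other secrets reach it:

```csharp
using System.IO;
using System.IO.IsolatedStorage;

// Hypothetical hand-rolled logging code of the kind to look for
// during an audit.
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
using (var stream = store.OpenFile("debug.log", FileMode.Append))
using (var writer = new StreamWriter(stream))
{
    // BAD: credentials written in plaintext to the app's Local folder
    writer.WriteLine("login attempt: user={0} pass={1}", username, password);
}
```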

Identifying Insecure Data Storage

Secure data storage on mobile devices is one of the most important aspects of mobile application security. A large number of applications for all mobile platforms need to store data, which is often sensitive, and should not be easily handed over to a would-be attacker. Even so, developers still store data in unencrypted forms in databases, flat files, and other file storage formats.

Such insecure data storage is particularly concerning in the context of a sensitive mobile application, such as one used for banking or one that deals with sensitive documents, and even more so given that data at rest on a Windows Phone device’s filesystem is by default unencrypted, unless the device is enterprise-linked to an ActiveSync server with the RequireDeviceEncryption setting enabled.

This section discusses how you can identify instances of data storage by an application where data is being stored in plaintext format and is not being protected using cryptographic methods.

The standard interface for encrypting arbitrary data blobs on the Windows platforms is DPAPI, the Data Protection API. Even this mechanism has its weaknesses, though, particularly in the context of Windows Phone devices; we cover them in “Insecure Cryptography and Password Use—Data Protection API Misuse on Windows Phone”.

Unencrypted File Storage

Many apps store data to files in their filesystem sandbox for later use. The reasons for storing data vary widely, because Windows Phone apps serve a multitude of purposes.

Some apps that need to store data for later consumption deal with sensitive information, such as personally identifiable information (PII). Naturally, such data needs to be protected from prying eyes to prevent information disclosure; for example, in the event of a lost or stolen device. This protection is particularly needed for Windows Phone 8.x devices, which have device encryption only when they are enterprise-joined (even if they have a screen unlock password).

Even so, it’s still a common occurrence for Windows Phone apps to store data, often sensitive, in plaintext on the filesystem.

Although many mobile applications don't actually deal with particularly sensitive information, many do; the range of applications now available for all the popular mobile computing platforms is quite large, including apps for banking, betting, social networking, human resources management, document processing, and email and other electronic communications, to name a few.

A sample scenario could involve an HR management application. HR software generally deals with quite sensitive information, spanning categories such as employee information, client information, payroll data, and even health-related information pertaining to particular people. This is all data that no Chief Information Security Officer (CISO) would like to see fall into the wrong hands.

Suppose that a hypothetical HR app downloads a CSV file. This file is essentially a people directory for a company. The file contains full names, job titles, contact details, dates of birth, and salary information for use by the app in carrying out its HR operative functions.

Every time the hypothetical application connects to the backend API and authenticates, it downloads the people directory CSV and saves it to the app’s Local folder. This is commonly done using HttpClient, WebClient, or another web-related API.

Using the HttpClient class, the application could download a file and save it to its local storage using the IsolatedStorageFile and IsolatedStorageFileStream APIs, via code such as the following:

try
{
    var httpClient = new HttpClient();
    var response = await httpClient.GetAsync(
        new Uri("https://mobile.mycompany.com"),
        HttpCompletionOption.ResponseHeadersRead);

    response.EnsureSuccessStatusCode();

    using (var isolatedStorageFile =
               IsolatedStorageFile.GetUserStoreForApplication())
    {
        long contentLength = response.Content.Headers.ContentLength ?? 0;
        bool checkQuotaIncrease = IncreaseIsolatedStorageSpace(contentLength);

        string csvFile = "employee_info.csv";
        using (var isolatedStorageFileStream =
                   new IsolatedStorageFileStream(csvFile,
                       FileMode.Create, isolatedStorageFile))
        {
            using (var stm = await response.Content.ReadAsStreamAsync())
            {
                stm.CopyTo(isolatedStorageFileStream);
            }
        }
    }
}
catch (Exception)
{
    // failed to download and store the file
}

At this point, assuming the download and file I/O operations went as expected, the CSV file in question would reside in the app’s Local folder under the name employee_info.csv. It would be ready for processing and use in the app’s normal functionality.

Notice that after the CSV data is downloaded, no cryptography is carried out on the file before it is saved to disk. Unfortunately, this is where many apps stop, leaving the file on the filesystem in its unencrypted form; many apps make no effort to apply any encryption to their sensitive files at all.

It may be that many unsuspecting mobile developers assume that because files are in the app’s sandbox, they are generally safe from theft in their unencrypted form. Furthermore, there seems to be the expectation that most devices are surely encrypted in some standard, secure way to provide privacy if a device is lost or stolen. Such assumptions may be correct in that, normally, third-party apps on a device are not supposed to be able to reach into other apps’ sandboxes and steal files.

However, as previously mentioned, Windows Phone devices that are not enterprise-enrolled do not have device encryption enabled, and all data on the eMMC (flash storage module) could be extracted without difficulty from a lost or stolen device.

Furthermore, even if a Windows Phone device is encrypted, when the device is powered on the filesystem is not “at rest”, and successful attacks on the device would enable files to be extracted from the filesystem of the switched-on device. It is therefore vital from a security perspective that sensitive data stored by an app be stored in encrypted form, with proper key management practices in place; data security should never rely on device encryption (BitLocker), which may or may not be enabled in the first place.

Using a capability-unlocked device with filesystem access, you can browse each app's directory structure in search of interesting files that have been stored in their plaintext form. Files are most likely to be found in the app's Local folder, or a subdirectory thereof, under C:\Data\Users\DefApps\APPDATA\{GUID}\Local, where {GUID} is the app's identifier.

If you review an application that stores sensitive data to the filesystem without applying strong, industry-standard cryptography (coupled with secure key management), it's fair to say that this kind of storage method represents a security risk, which should ultimately be considered a bug. The risk is particularly acute for devices that do not have device encryption enabled, which at the time of writing is all devices that are not enterprise enrolled. For an attacker with physical access to an unencrypted device, accessing the sensitive data would be as easy as removing the eMMC from the device, mounting it, and then browsing the filesystem.

Other attacks such as privilege escalation, sandbox breaches, and remote attacks (think drive-by browser attacks) essentially render device encryption irrelevant, because data is then not at rest; hence, sensitive data should always be encrypted by the app itself that stores it.
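By way of contrast with the earlier CSV example, the following is a minimal sketch of encrypting a buffer with AES before it is written to isolated storage. Key management is deliberately elided here: the aesKey parameter must itself come from somewhere secure (for example, derived from a user-supplied passphrase), never a hard-coded constant.

```csharp
using System.IO;
using System.Security.Cryptography;

// Sketch: AES-encrypt a buffer before it is written to the app's
// sandbox. The IV is generated per encryption and prepended to the
// ciphertext so that decryption can recover it later.
static byte[] EncryptBytes(byte[] plaintext, byte[] aesKey)
{
    using (var aes = new AesManaged())
    {
        aes.Key = aesKey;
        aes.GenerateIV();

        using (var ms = new MemoryStream())
        {
            ms.Write(aes.IV, 0, aes.IV.Length);
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(),
                                             CryptoStreamMode.Write))
            {
                cs.Write(plaintext, 0, plaintext.Length);
                cs.FlushFinalBlock();
            }
            return ms.ToArray();
        }
    }
}
```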

Insecure Database Storage

In regard to data that is best stored in a much more relational and structured way, a database is a common solution for all kinds of apps. Windows Phone apps are no exception.

Of course, at least in the context of Windows Phone, most databases are in reality stored to the device as files. We discuss this as an implementation issue on its own instead of in the previous section, because databases encompass a group of storage technologies in their own right.

Two families of databases find common usage in Windows Phone apps: local databases, which are Windows Phone’s standard native databases, and SQLite databases.

In apps that use either of these two database types (or both), sometimes encryption is applied to the database, and sometimes it is not. Even when cryptography is used in an effort to keep databases safe, developers make some common mistakes that only superficially protect data, leaving it only slightly more secure than if it were stored in plaintext—think insecure key management (including hard-coded keys).

We’ll discuss each of the two database families and how to spot when insecure database storage has been implemented, including some instances in which cryptography has been employed.

Local Databases

Windows Phone provides standard interfaces to create, manipulate, and access databases known as “local databases”. Developers do not drive these databases via SQL queries directly, but instead via Language Integrated Query (LINQ), a .NET component that adds data querying capabilities to the .NET languages.

Under the hood, local databases are still SQL based, but Windows Phone does not expose interfaces for talking to databases using raw queries. Instead, a LINQ-to-SQL layer converts LINQ queries on databases into SQL queries, and the database is driven in this way, with the LINQ-to-SQL layer acting as a translation interface or proxy. In fact, no public APIs exist for making SQL queries on databases.

The entire LINQ-to-SQL architecture is quite different from what developers brought up on SQL are used to, but the LINQ-to-SQL paradigm is object oriented and provides powerful methods for accessing and manipulating data when you understand some core concepts and patterns.
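To give a flavor of the paradigm, inserts and queries against a data context (such as the EmployeeDataContext defined later in this section) are expressed in LINQ rather than SQL; a brief sketch:

```csharp
// Sketch: driving a local database through LINQ-to-SQL.
// Assumes the EmployeeDataContext/Employee classes shown later
// in this section.
var db = new EmployeeDataContext("isostore:/EmployeeDB.sdf");

// Insert a row
db.Employees.InsertOnSubmit(new Employee { PersonName = "Alice Smith" });
db.SubmitChanges();

// Query rows; the LINQ-to-SQL layer translates this into SQL
var managers = from e in db.Employees
               where e.JobTitle == "Manager"
               select e;
```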

For a general introduction to Windows Phone local databases, LINQ-to-SQL, and its architecture, study the MSDN article located at http://msdn.microsoft.com/en-us/library/windows/apps/hh202860(v=vs.105).aspx#BKMK_UsingtheDatabase; a full introduction to local databases/LINQ-to-SQL is beyond the scope of this chapter. We do, however, cover some basics of Windows Phone local databases here so that you will be able to identify instances of insecure data storage when databases are being used.

Use of a local database in a Windows Phone app begins with the definition of a data context. You do this programmatically by defining a class that extends the DataContext class. You then define additional classes to specify the table and column structure of the database, using the [Table] and [Column] attributes appropriately. For example, an HR application could define a database to hold information on the company’s employees, using code such as the following:

public class EmployeeDataContext : DataContext
{
    public EmployeeDataContext(string connectionString)
        : base(connectionString)
    {
    }

    public Table<Employee> Employees;
}

[Table]
public class Employee
{
    [Column(IsPrimaryKey = true, CanBeNull = false)]
    public string PersonName { get; set; }

    [Column]
    public string JobTitle { get; set; }

    [Column]
    public string PhoneNumber { get; set; }

    [Column]
    public string EmailAddress { get; set; }

    [Column]
    public string HomeAddress { get; set; }

    [Column]
    public DateTime EmploymentStartDate { get; set; }
}

The preceding EmployeeDataContext class definition declares that the database should have one table, which is structurally defined by the Employee class, defined below it. The Employee class, marked as a table definition by the [Table] attribute, essentially defines a table that has columns for an employee's full name, job title, phone number, email address, home address, and employment start date. All of these are aptly marked using the [Column] attribute, and the full name is marked as being the primary key for insertions and queries.

Notice the EmployeeDataContext class’s constructor definition:

public EmployeeDataContext(string connectionString)
        : base(connectionString)
{
}

Whenever an instance of the EmployeeDataContext class is instantiated, this constructor immediately passes its string argument to the constructor of its base class, DataContext. This string is the database's connection string; it must be passed to the base class (DataContext) to successfully connect to the database (or to create the database, if it is being used for the first time).

So, for example, when developers wish to use their database, or create a database represented by EmployeeDataContext for the first time, they could use code similar to the following:

EmployeeDataContext db = new EmployeeDataContext("isostore:/EmployeeDB.sdf");

if (db.DatabaseExists() == false) {
    db.CreateDatabase();
}

The preceding code attempts to connect to the database named EmployeeDB.sdf (which will be in the app's Local folder), and if the database does not already exist, it will create it.

The string passed to EmployeeDataContext, that is, isostore:/EmployeeDB.sdf, is the database’s connection string, which the class will pass on to DataContext upon new EmployeeDataContext object instantiation.

However, note that in the preceding example code, where the connection string passed to the data context class was isostore:/EmployeeDB.sdf, no password is specified in the connection string. The created database will thus be completely unencrypted, unless the application itself manually encrypts data before its submission to the database. If sensitive data is being stored in a local database that is created without a password in its connection string, then this in itself constitutes a security issue.

The local database API supports passwords in connection strings. Use of a password in the connection string during database creation results in the database's contents being AES-128 encrypted, with the key generated by SHA-256 hashing the given password. An encrypted employee database could be created as follows, with the password being MySecretDbPassword:

EmployeeDataContext db = new EmployeeDataContext(
    "Data Source='isostore:/EmployeeDB.sdf';Password='MySecretDbPassword'");

if (db.DatabaseExists() == false) {
    db.CreateDatabase();
}

Although the database will indeed be AES-128 encrypted in the preceding case, the password being used is hard-coded into the connection string. This in itself also represents a security risk, because all users of the app will have a database encrypted with exactly the same key. This offers little more protection than having no cryptography applied to the database at all, because any attacker able to reverse-engineer the app will glean knowledge of the hard-coded password that is used in all cases. Unfortunately, hard-coded keys and passwords are quite common in mobile apps for all platforms, not just Windows Phone.

Even if a database password is not hard-coded, but is instead derived from system constants such as the DeviceUniqueId, you should again consider it a security issue if the stored data is sensitive, because the password may be easily derived by an attacker.

Database passwords should not be hard-coded, and should not be derivable from system data from the device (such as a MAC address or DeviceUniqueId). Instead, they should be derived from a secret phrase known only to the user, for example using PBKDF2 (Password-Based Key Derivation Function 2).
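A minimal sketch of deriving such a password with .NET's Rfc2898DeriveBytes (PBKDF2) follows; the salt handling and iteration count are illustrative assumptions, not a complete key management scheme:

```csharp
using System;
using System.Security.Cryptography;

// Sketch: derive a database password from a user-supplied passphrase
// using PBKDF2. The salt should be generated randomly once and stored
// alongside the database; 10,000 iterations is illustrative.
static string DeriveDbPassword(string passphrase, byte[] salt)
{
    using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 10000))
    {
        byte[] keyBytes = kdf.GetBytes(32);  // 256 bits of key material
        return Convert.ToBase64String(keyBytes);
    }
}
```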

Local databases are stored in an app’s Local folder, and often have the .sdf file extension, so checking for unencrypted databases manually is easy to do using full filesystem access that has been gleaned via capability unlocking.

SQLite-Based Databases

The standard SQLite distribution for Windows Phone does not support cryptography out of the box, so sensitive data being stored in a SQLite database created and managed by the standard package is likely to represent a security risk.

However, two fairly well-used SQLite packages do support cryptography; namely, SQLCipher and the SQLite Encryption Extension (SEE). Both of these packages require licenses to use and are not freeware. SEE supports AES-128, AES-256, and RC4, whereas SQLCipher solely uses AES-256.

To create a database (and subsequently use it thereafter) with encryption using SQLCipher, developers must use the SetPassword() method on their SQLiteConnection object, like so:

string connectionString =
    "Data Source=sqlcipher.db;Pooling=false;Synchronous=Full;";

string password = "password123";
using (var conn = new SQLiteConnection(connectionString)) {
    conn.SetPassword(password);
    conn.Open();

 [ ... ]

When using SEE (SQLite Encryption Extension), applications specify a key using the PRAGMA statement after instantiating their SQLiteConnection object, as in:

string connectionString =
    "Data Source=sqlcipher.db;Pooling=false;Synchronous=Full;";

string password = "password123";
using (var conn = new SQLiteConnection(connectionString)) {
    conn.Execute(String.Format("PRAGMA key='{0}';", password));

 [ ... ]

In both use cases (SEE and SQLCipher), if an application uses a static hard-coded password for a sensitive database, or the password is somehow derived from non-secret data (such as DeviceUniqueId), this should be considered a security issue. Of course, you should also consider sensitive data being stored without a password a bug.

SQLite databases are generally stored in the app's Local folder and tend to have the .db file extension. You can check databases extracted from a device for cryptography using the sqlite3 application, using a hex editor, or by analyzing the output of the strings utility (for example, strings mydatabase.db).
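Unencrypted SQLite databases begin with the 16-byte magic header “SQLite format 3” followed by a NUL byte; a quick sketch for checking a file pulled from a device might be:

```csharp
using System.IO;
using System.Text;

// Sketch: a plaintext SQLite database starts with the magic header
// "SQLite format 3\0"; an encrypted database will not.
static bool LooksUnencrypted(string path)
{
    byte[] header = new byte[16];
    using (var fs = File.OpenRead(path))
    {
        fs.Read(header, 0, header.Length);
    }
    return Encoding.ASCII.GetString(header, 0, 15) == "SQLite format 3";
}
```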

Insecure Random Number Generation

Using cryptographically random data is important in security-critical applications, so that data derived from the entropy source can be relied on in security-sensitive situations.

One particular situation in which secure generation of random data is important is the generation of cryptography keys. The reason is obvious: if a cryptography key is predictable to an attacker, the key may be discovered, and the data protected by the key may be decrypted.

Windows Phone exposes two main APIs that may be used for generating random data: System.Random and RNGCryptoServiceProvider. System.Random should not be used for generating cryptography keys, passwords, or other similar security-sensitive values that need to be cryptographically random. In short, consider the use of the System.Random API in these contexts (such as for cryptography key generation) a security vulnerability. We discuss why in the coming subsections.

System.Random’s Predictability

System.Random is provided by the .NET Framework to generate pseudo-random data that is admittedly not cryptographically random. The System.Random API suffices for some purposes, but should not be used to generate security-sensitive values such as cryptography keys.

To use the System.Random class, an application first instantiates a new Random object—either with a seed, or without specifying a seed. For instantiation, System.Random exposes the following two constructors:

  • Random()
  • Random(Int32 seed)

The default constructor, Random(), is parameterless, and hence doesn’t take a seed. When this constructor is used, the Random object is seeded with the current system uptime—Environment.TickCount—which has millisecond resolution. You can see this by analyzing the source code for System.Random, which is available on Microsoft’s Reference Source website (http://referencesource.microsoft.com/#mscorlib/system/random.cs):

   // 
  // Constructors 
  // 
 
      public Random() 
        : this(Environment.TickCount) { 
      } 
 
      public Random(int Seed) { 
        int ii; 
        int mj, mk; 
 
 [ ... ] 
 
 }

The other constructor, Random(Int32 seed), accepts a seed as its 32-bit integer parameter, and uses this to seed the Random object.

The developer can then call one of the class’s member methods to retrieve pseudo-random data from the object. System.Random exposes the following methods for pulling out random data:

  • Next()—Returns a non-negative pseudo-random integer
  • Next(Int32)—Returns a non-negative pseudo-random integer that is less than the specified maximum
  • Next(Int32, Int32)—Returns a non-negative pseudo-random integer that is within the specified range
  • NextBytes(byte[])—Fills the specified byte array with random bytes
  • NextDouble()—Returns a pseudo-random double
  • Sample()—Returns a pseudo-random floating-point number between 0.0 and 1.0 (a protected method used internally by the other methods)

So, for example, a less-than-perfect application may generate a 32-byte cryptography key by calling into the Random API using code such as the following:

Random rnd = new Random(1234);  // 1234 as the seed 
 
byte[] encKey = new byte[32]; 
rnd.NextBytes(encKey); 

Or, the developer may opt to use Random's default constructor and not specify a seed, such as:

Random rnd = new Random(); // uptime in milliseconds as seed 
 
byte[] encKey = new byte[32]; 
rnd.NextBytes(encKey);

To the untrained eye, both of these may look fine and appear to work as expected; they both generate data that seems, at a glance, to be random. In reality, however, each case is insecure; the problem with System.Random is that two Random objects seeded with identical seed values always produce the same sequence of “random” numbers as their output. In other words, if a Random object is seeded with 1234, its output will be exactly the same as that of any other Random object seeded with 1234.

Clearly, this is particularly bad for generating security-sensitive values like cryptography keys, because if you know the seed value, you can predict the output of a System.Random object.
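You can demonstrate this determinism trivially; the following sketch seeds two Random objects identically and shows that the “keys” they produce are byte-for-byte the same:

```csharp
// Two Random objects seeded with the same value emit identical sequences.
Random a = new Random(1234);
Random b = new Random(1234);

byte[] keyA = new byte[32];
byte[] keyB = new byte[32];
a.NextBytes(keyA);
b.NextBytes(keyB);

// keyA and keyB are byte-for-byte identical (SequenceEqual is in System.Linq)
bool identical = keyA.SequenceEqual(keyB);  // true
```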

Intuitively, this situation is at its worst if the app manually specifies a static or deterministic seed value, as in the following example:

Random rnd = new Random(STATIC_SEED_VALUE);

This is because the seed value can be determined by any attacker who reverse-engineers the application, or who has knowledge of the system values from which the seed is derived, such as the device’s MAC or IP address.

However, even if the default constructor is used as shown here,

Random rnd = new Random();

the system uptime in milliseconds is used as the seed. This is insecure, because Environment.TickCount is quite predictable.

As a matter of fact, there are only 86.4 million milliseconds in a 24-hour day. Therefore, simply knowing on which day a key (or other value) was generated by Random would enable you to determine the generated value by trying all 86.4 million possible seeds. Additionally, although Environment.TickCount has millisecond resolution, it doesn’t actually change every millisecond; changes every 15 milliseconds may be typical, for example (see http://blogs.msdn.com/b/pfxteam/archive/2009/02/19/9434171.aspx). This is likely to narrow down the seed search space even further.
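As a sketch of how practical such an attack is, the following hypothetical routine brute-forces every possible millisecond seed in a 24-hour window to recover a key generated by a TickCount-seeded Random object (the isCorrectKey delegate, which might attempt a test decryption, is an assumption for illustration):

```csharp
// Brute-force attack sketch: regenerate the candidate key for each possible
// Environment.TickCount seed in a 24-hour window, until the key validator
// (for example, a test decryption of known data) succeeds.
static byte[] RecoverKey(Func<byte[], bool> isCorrectKey)
{
    const int msPerDay = 86400000;  // only 86.4 million candidate seeds
    byte[] candidate = new byte[32];

    for (int seed = 0; seed < msPerDay; seed++)
    {
        new Random(seed).NextBytes(candidate);
        if (isCorrectKey(candidate))
            return candidate;       // seed found; key recovered
    }
    return null;                    // key was not generated that day
}
```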

The point here is that for a given seed value, the output of a System.Random object will always be the same; this predictability of output for each particular seed value is obviously insecure, and for this reason, System.Random should never be used for generating security-related values, such as cryptography keys.

The right API to use for cryptographic and other security purposes is RNGCryptoServiceProvider; we cover its use in detail in the next chapter’s section, “Generating Random Numbers Securely.”

Multiple Instances of System.Random

Suppose that a developer wants to generate a collection of random numbers. The unsuspecting developer may write code like the following:

int[] randData = new int[32]; 
 
// generate random ints 
for(int count = 0; count < 32; count++) { 
        Random rnd = new Random(); 
        randData[count] = rnd.Next(); 
}

In this piece of code, a new instance of Random is instantiated for each number generated, with Environment.TickCount being used as the seed. However, because Environment.TickCount has millisecond-magnitude resolution (though not necessarily 1 millisecond), it is very likely that the code will fill randData with 32 copies of the same integer. In fact, in a tight loop, the same integer may be generated thousands of times before Environment.TickCount eventually changes and new Random objects are seeded with a different value.

Misuse of Random in this way can clearly have some detrimental consequences if the data needs to be cryptographically secure.

Similarly, consider a developer who did something similar, but instead specified a seed when instantiating the Random objects, for example:

int[] randData = new int[32]; 
 
// generate random ints 
for(int count = 0; count < 32; count++) { 
        Random rnd = new Random(1234); 
        randData[count] = rnd.Next(); 
}

This code would actually fill the randData array with 32 identical integers, because System.Random returns the same sequence of numbers every time a given seed is used. Given that the preceding code instantiates a new Random object for every number, the first number in the sequence is output every time.

System.Random Thread Safety

System.Random is not thread safe, and a Random object should not be used by multiple threads without using a synchronization object for locking.

If a Random object is accessed by multiple threads in a thread-unsafe way, any of its methods (such as Next(), NextBytes(), and so on) may begin to return 0 every time they are called. If such an object were used by multiple threads simultaneously to generate a key, this could conceivably result in a cryptography key composed mostly or entirely of zero bytes, which would have obvious negative security side effects.

Code such as the following may result in 0s being emitted by the Random object, on multicore devices:

Random rand = new Random(); 
 
int[] randInts = new int[32]; 
Parallel.For(0, 32, (i, loop) => 
        { 
                randInts[i] = rand.Next(); 
        });
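If an app legitimately needs to share one Random instance between threads for non-security purposes, access should be serialized with a synchronization object, as in this minimal sketch; note that locking only prevents the zero-output failure mode and does not make the output cryptographically secure:

```csharp
Random rand = new Random();
object randLock = new object();     // synchronization object

int[] randInts = new int[32];
Parallel.For(0, 32, i =>
{
    lock (randLock)                 // serialize access to the shared Random
    {
        randInts[i] = rand.Next();
    }
});
```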

This lack of thread safety presents yet another reason to avoid System.Random altogether when cryptographically secure data is required. As mentioned before, the correct API to use for security purposes is the RNGCryptoServiceProvider class, whose use we cover in full in the Chapter 13 section, “Generating Random Numbers Securely.”

Insecure Cryptography and Password Use

Most people involved with security realize that sensitive data should be stored or transferred in encrypted form, instead of in its easily accessible plaintext form. However, simply encrypting data is just the tip of the iceberg; there are many ways to implement cryptographic storage (or transfer) that fall short in terms of security, and these can partially or completely undermine the protection that cryptography could otherwise have provided.

The general category of “insecure cryptography and password use” does not represent one class of bug, but several. For example, bad key management provides a number of ways to introduce vulnerabilities.

Proper key management is central to securely implementing cryptography in applications. The security of encrypted data relies heavily on cryptography keys being unknown to those who would illegitimately like access to the data. Thus, failure to generate keys securely and then protect them can result in the compromise of encrypted data. We cover some of the common ways in which developers mismanage cryptography keys (and passwords) and introduce security vulnerabilities when implementing cryptographic storage or transfer in their applications.

Hard-Coded Cryptography Keys

Even with security now being a widespread concern, it’s still quite common to see apps encrypting and storing (or transferring) data using cryptography keys that are simply hard-coded into the app.

When reviewing an app’s code (original or reversed) you may come across code that defines a static cryptography key used later for encrypting important and sensitive data. For example, consider the following code fragment in which the app defines a static 32-byte key, which it uses for encryption of some sensitive data, the resulting ciphertext for which is stored to a file in its Local directory:

byte[] cryptoKey = {  0x10, 0x20, 0x30, 0x40, 0x45, 0x78, 0x65, 
0x61, 0x62, 0x43, 0x69, 0x35, 0x32, 0x15, 0x20, 0x50, 0x10, 0x20, 
0x30, 0x40, 0x45, 0x78, 0x65, 0x61, 0x62, 0x43, 0x69, 0x35, 0x32, 
0x15, 0x20, 0x50 }; 
 
[ ... ] 
 
retval = EncryptData(secretData, cryptoKey, out encryptedData); 
 
retval = StoreEncryptedData(encryptedData, filePath);

Although the resulting data will indeed be encrypted, any attacker able to reverse-engineer the application becomes privy to the key. Because the key is hard-coded into the app, all users of the app will have their data encrypted with exactly the same key.

All the attacker needs to do after discovering the hard-coded key is to extract encrypted files from the target devices and proceed with decryption using that static key. It goes without saying that the use of hard-coded keys is essentially never acceptable for sensitive data.

Insecure Storage of Cryptography Keys

Another common security failure is when apps safely generate per-user cryptography keys, but then store them in their filesystem sandbox in cleartext format. Some apps attempt to hide the key(s) or obfuscate them to deter casual or unskilled attackers, but this rarely offers any genuine extra security.

Likewise, some apps that make use of public key cryptography store their private key to their filesystem sandbox—schematically:

string cryptoKey = GenerateCryptoKey(); 
 
StoreCryptoKeyToFile(cryptoKey);

In any case, any attacker able to access the device’s filesystem will be able to extract the key(s), which he can then use to recover encrypted data that is protected by the key.

When performing a review of an app’s cryptographic practices, pay close attention to whether keys are being stored, and keep in mind that cleartext storage of private keys and symmetric keys is a security issue, assuming the protected data is sensitive.

Of course, secure ways exist for storing cryptography keys. We discuss them in Chapter 13 in the section “Secure Key Management.”

Storing Keys and Passwords in Immutable String Objects

Although cryptography keys themselves are rarely stored in string objects due to their binary nature, password-based key derivation schemes (such as Password Based Key Derivation Function 2, or PBKDF2) commonly deal with the password in the form of a string.

For example, to generate a cryptography key, an app may accept a password from the user, read it into a string object, and then pass that object to its PBKDF2 method to go ahead and generate the key. In pseudo-code, this could be represented as:

string password = ReadPasswordFromPasswordBox(); 
 
[ ... ] 
 
PBKDF2_GenKey(password, iterations, out cryptoKey);

This works fine functionally but the problem from a security perspective is that after the password has been stored in the string object, this value cannot be overwritten at will. This poses a problem if an attacker is able to dump memory out of the process; ideally, you should clear the password from memory as soon as it is not needed anymore.

Clearing a string, however, is not easily done. String objects are immutable, meaning that after the object’s value is set, it cannot be changed. You would be forgiven for assuming that the following results in myStr’s value being changed to "overwritten":

string myStr = "value1"; 
myStr = "overwritten";

In actual fact, it does not; the preceding code simply changes which string object myStr references; the "value1" string object may still exist in memory until garbage collection.

The Common Language Runtime (CLR) is also likely to make new copies of string objects when they are passed into other methods, making memory disclosure and forensics attacks more likely to succeed.

Because you cannot easily wipe passwords stored in string objects, you should consider instances of password storage in strings to be a vulnerability, particularly in security-critical applications. Typical attack vectors include memory disclosure bugs and memory forensics investigation on a device.

To guard against memory disclosure and memory forensics attacks, store passwords not in immutable string objects (which cannot be overwritten), but in char[] or byte[] arrays that can be zeroed in a for() or while() loop when they are no longer needed. We discuss this topic in the following section.
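Schematically, and reusing the earlier PBKDF2_GenKey pseudo-method together with a hypothetical helper that reads the password into a mutable buffer, the pattern looks like this:

```csharp
// ReadPasswordIntoCharArray() is a hypothetical helper that avoids ever
// placing the password in an immutable string object.
char[] password = ReadPasswordIntoCharArray();
try
{
    PBKDF2_GenKey(password, iterations, out cryptoKey);
}
finally
{
    // Overwrite the password as soon as it is no longer needed.
    for (int i = 0; i < password.Length; i++)
        password[i] = '\0';
}
```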

Failure to Clear Cryptography Keys and Passwords from Memory

When apps use cryptography, the key needs to be in memory in the app’s address space at some point. For apps that require a high level of security, however, cryptography keys should be wiped from memory as soon as they are no longer needed, or when they are not needed again for some time; you should also apply this same principle for passwords. The purpose of clearing cryptography keys and passwords from the app’s address space is to help protect against successful memory disclosure and forensics attacks, and in fact, wiping cryptography keys is required to be compliant to certain security specifications, including some Federal Information Processing Standards (FIPS) specifications.

Practically, this means that after a key has been used and is not needed again in the near future, the app should overwrite it to (hopefully) erase it from the runtime’s memory.

If the app actively needs the key (for example, it is carrying on a conversation via a custom-encrypted protocol), then overwriting it is obviously not feasible. When an app needs the key only for a batch of operations, we recommend that the key be wiped promptly afterwards.

In apps where wiping is feasible from a usability and performance standpoint, cryptography keys and passwords should generally be stored in char[] or byte[] arrays and then wiped when no longer needed, as demonstrated here:

for(int i = 0; i < KEYLEN; i++) 
        cryptoKey[i] = 0;

In sensitive apps (for example, banking apps), failure to implement such a key- and password-clearing policy may be considered a security issue. Of course, usability and performance are also important in many applications, so if an app needs to persist a key or password because it uses it often, this requirement may ultimately need to overrule security.

Insecure Key Generation

Secure key generation is another critical part of implementing an acceptably secure system of cryptographic storage or communications within an app. Failure to securely generate keys can result in these keys being predictable or otherwise weak, so we’ll look at ways in which apps may insecurely generate keys, and how you can spot them in a Windows Phone app security review.

Insecure Random Key Generation

Some cryptography keys are generated using pseudo-random number generation APIs. However, you must use a secure pseudo-random number generator (PRNG) and use it properly.

In the context of Windows Phone apps, this means that the System.Random class should never be used to generate cryptography keys, because output data from System.Random is not cryptographically random. You should consider the use of System.Random to generate cryptography keys a security issue.

We covered this topic earlier in this chapter. (See “Insecure Random Number Generation” for more detail on the subject of auditing for insecurely generated random cryptography keys.)

Insecure Password-Based Key Generation and Password Policy

The other main way of generating cryptography keys, besides pseudo-random number generation sources, is via a password-based key generation scheme.

As the phrase suggests, password-based key generation schemes take a password, usually provided by the user, and generate a cryptography key from it. The implementation details of the popular schemes vary.

The simplest conceivable way of generating a cryptography key from a password is to simply convert the password to a byte array and use that as the cryptography key. There are, however, several problems with this idea. First, assuming 256-bit cryptography, the password would need to be 32 characters long, which would present problems for most users.

The second problem relates to the resulting keyspace of keys generated in this way. In general, passwords contain only printable characters: a–z, A–Z, 0–9, and some special characters (for example, !, #, $, and so on). This limits the usable values for each character to around 75, out of the 256 values that a one-byte character can take. Keys made up directly from passwords therefore allow much less entropy than could be achieved by allowing all 256 possible values per byte; at roughly 75 usable values per character, each character contributes about log2(75) ≈ 6.2 bits of entropy, so even a full 32-character password yields at most around 199 bits of entropy, not 256.

Moving a step further in sophistication, some developers may generate a 256-bit key by hashing the user’s password using SHA-256. The main problem with this is that SHA-256 is a very fast hashing algorithm; an attacker with a lot of computational power at his disposal (think Graphics Processing Units—GPUs) can potentially generate billions of hashes per second, which translates to billions of brute-force password guesses per second in an attempt to find your cryptography key. This naive scheme is also unsalted.

With that being said, it’s understandable that other methods of generating cryptography keys using a user-supplied password are sought.

Good password-based key generation APIs apply hash functions over (potentially) many iterations, or allow the developer to specify a “cost factor,” and they also involve salts and other deliberately time-consuming manipulation steps. In general, within usability constraints, the more iterations that are used, or the more costly it is to generate a key from a given password, the better.

The reason for this lies in making a password brute-force attack to find the correct cryptography key time-consuming for an attacker; if he can generate only a few thousand keys per second, he can attempt decryption with only a few thousand keys per second. His attack will therefore take significantly longer than if the key were generated by simply SHA-256 hashing the user’s password, which could allow billions of key outputs, and therefore decryption attempts on the victim’s data, per second.

Good algorithms for password-based key generation also use large random salts to ensure that the user’s password is hashed uniquely.
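For instance, .NET’s built-in PBKDF2 implementation, Rfc2898DeriveBytes, combines a random salt with a configurable iteration count. A minimal sketch of deriving a 256-bit key might look like the following, where password is assumed to be the user-supplied password and the 100,000-iteration count is an illustrative choice rather than a universal recommendation:

```csharp
// Derive a 256-bit key from a user's password using PBKDF2.
byte[] salt = new byte[16];
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(salt);             // large random salt; stored, not secret
}

using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
{
    byte[] cryptoKey = kdf.GetBytes(32);   // 32 bytes = 256 bits
    // persist the salt (and iteration count) alongside the ciphertext so
    // the key can be re-derived at decryption time
}
```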

A description and survey of password-based key generation APIs are beyond the scope of this chapter, but understanding which methods of key generation from passwords are secure, and which are not, is important so that you can spot the usage of insecure methods in code reviews.

When apps use password-based key derivation, the use of the following APIs, when used correctly as per the guidelines below, is considered acceptably secure from a cryptographic point of view:

  • PBKDF2
  • bcrypt
  • scrypt

All of these algorithms are purposefully slow, making an attacker much less likely to succeed in brute-forcing passwords to find your cryptography key.

Treat the use of any other algorithms for password-based key generation as a security issue; apps should not attempt to “roll their own” cryptography-related code, in general, and should avoid using other peoples’ attempts, no matter how complex or secure the algorithm may look.

In addition to simply using an industry-standard key generation algorithm in applications, you must consider another important factor to ensure secure applications: password policy. Even if the app uses PBKDF2 with a high iteration count, if the password is something like “aaaa”, a dictionary attack will usually succeed quite quickly. To prevent users from undermining the security of their own data, apps encrypting sensitive data should enforce a password policy. Reasonable complexity guidelines that strike a middle ground between security and usability include requiring that passwords:

  • Have at least eight characters
  • Use both uppercase and lowercase characters
  • Include at least one number
  • Include at least one special character

When an app is encrypting, storing, or transferring sensitive data, you should consider the failure to implement a password policy to be a security issue.

Chapter 13 provides a discussion on the implementation of secure password hashing.

Use of Weak Cryptography Algorithms, Modes, and Key Lengths

Even when keys are well generated and managed, encrypted data can be at risk due to the choice of cryptography algorithm; some algorithms have simply been proven to be insecure, or were not intended for encryption of sensitive data in the first place.

Many encryption algorithms are not actually fit for protecting sensitive information, but we’ll discuss a few that are used and should not be. These include the Data Encryption Standard (DES), RC4, AES in Electronic Codebook (ECB) mode, and obviously XOR encryption schemes.

Data Encryption Standard (DES)

DES uses a key length of 56 bits, giving a search space of 2^56 different keys. With modern computing power, brute-forcing a DES key is completely feasible. Known-plaintext and chosen-plaintext attacks have also been shown to be possible, which could further reduce the time necessary to crack a DES key when a very large number of plaintexts is available (http://en.wikipedia.org/wiki/Data_Encryption_Standard#Attacks_faster_than_brute-force). Further information is available online, such as at the DES Wikipedia page at http://en.wikipedia.org/wiki/Data_Encryption_Standard.

Simply put, for storing sensitive data, avoid DES. Consider its use for sensitive data to be a bug.

Spotting the use of DES in a code review is generally simple: Look for use of the DESCryptoServiceProvider, or its base class, System.Security.Cryptography.DES. Other third-party libraries, such as BouncyCastle, could potentially be used; spotting DES use should be simple in these cases, as well.

AES in ECB Mode

AES supports a number of different modes of operation, including Electronic Codebook (ECB), Cipher Block Chaining (CBC), and Counter (CTR) mode.

In short, ECB treats each block independently from all other blocks, so identical blocks of plaintext are encrypted into identical blocks of ciphertext every time. This makes pattern analysis attacks on encrypted data blobs possible.

The best demonstration of the dangers of using AES in ECB mode is via the classic “Tux the Penguin” case study. When a TIFF image of Tux the Penguin was encrypted using AES in ECB mode, pattern analysis attacks on the resulting ciphertext allowed the basic outline of the original image to be recovered. See the original image in Figure 12.3.


Figure 12.3 Original image of the Linux mascot, Tux the Penguin

Compare this to the recovered image in Figure 12.4, which shows the general outline and even some details possessed by the original Tux the Penguin image.


Figure 12.4 Recovered image of Tux the Penguin

It should be evident from these two images that AES in ECB mode should not be used for storing sensitive data.

Use of AES in ECB mode is easily spotted; look for the use of the System.Security.Cryptography.Aes class, or its two subclasses, System.Security.Cryptography.AesCryptoServiceProvider and System.Security.Cryptography.AesManaged.

All three of these classes have a property named Mode. If Mode is set to CipherMode.ECB, ECB mode will be used.
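By way of contrast, a non-ECB configuration that you would hope to see in a review might look like the following sketch, where plaintext and cryptoKey are assumed to exist and the key is assumed to be securely generated:

```csharp
using (Aes aes = new AesManaged())
{
    aes.KeySize = 256;
    aes.Mode = CipherMode.CBC;      // CBC, not CipherMode.ECB
    aes.Key = cryptoKey;
    aes.GenerateIV();               // fresh random IV for this encryption

    using (ICryptoTransform encryptor = aes.CreateEncryptor())
    {
        byte[] ciphertext =
            encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
        // store aes.IV alongside the ciphertext; the IV is not secret
    }
}
```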

Other Weak Algorithms

A number of other weak algorithms in fairly common usage should not be used for the protection of sensitive data, including:

  • XOR schemes
  • Tiny Encryption Algorithm (TEA)
  • RC4

Use of any other “homegrown” or otherwise little-known algorithm probably represents a security issue. Apps dealing with sensitive data should stick to industry-strength algorithms such as AES (in modes other than ECB).

Minimum Public-Private Key Length

At the time of this writing, the recommended RSA key length for public-private key (asymmetric) encryption is 2048 bits. You should consider the use of 1024-bit keys to be against security best practices, and be seriously concerned about the use of 512-bit keys.

Use of Static Initialization Vectors

Every block cipher mode besides ECB uses what is known as an Initialization Vector (IV). The high-level purpose of an IV is to ensure that encryption results vary every time; that is, when identical blocks of data are encrypted with the same key, use of a different IV means that the resulting ciphertext will be different in each case.

This means that apps using non-ECB modes for block encryption should never use hard-coded IVs, and IVs should be randomly generated to ensure their uniqueness. Using predictable or hard-coded IVs allows Chosen Plaintext attacks. To read more details on Chosen Plaintext attacks, the following URL may be of interest: http://cryptography.stackexchange.com/questions/1312/using-a-non-random-iv-with-modes-other-than-cbc/1314#1314.

IVs do not need to be secret. In fact, they cannot be, because they are needed to decrypt an encrypted blob. They simply need to be unique to prevent Chosen Plaintext attacks on encrypted data.

Use of a hard-coded IV constitutes a security vulnerability, as does generation of an IV using an insecure random number generator such as System.Random; for example:

byte[] iv = {  0x10, 0x20, 0x30, 0x40, 0x45, 0x78, 0x65, 0x61, 0x62, 
0x43, 0x69, 0x35, 0x32, 0x15, 0x20, 0x50 };

The preceding IV definition in cryptography code (an AES-256 implementation, for example) would be cause for concern because the IV is completely static, as would the following:

Random rnd = new Random(); // uptime in milliseconds as seed 
 
byte[] iv = new byte[16]; 
rnd.NextBytes(iv);

because iv may be predictable given the flawed nature of System.Random.

Both of the preceding examples are contrary to cryptography best practices.

You should generate IVs using a cryptographically secure random number generator. (See Chapter 13 for more information on the secure generation of IVs.)
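As a brief sketch of what that looks like in practice, an AES IV can be filled from RNGCryptoServiceProvider rather than System.Random:

```csharp
// Generate a unique, unpredictable 16-byte IV (one AES block) using the
// cryptographically secure RNG.
byte[] iv = new byte[16];
using (var rng = new RNGCryptoServiceProvider())
{
    rng.GetBytes(iv);
}
// The IV can be stored with the ciphertext; it must be unique, not secret.
```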

Data Protection API Misuse on Windows Phone

The Data Protection API, or DPAPI, is a cryptographic API provided by Windows for the purpose of encrypting arbitrary data blobs. DPAPI is used by a large number of third-party and Microsoft applications and frameworks. Microsoft uses DPAPI in the following pieces of software and use cases, to name a few examples:

  • Filesystem encryption
  • Internet Explorer autocomplete settings
  • Outlook credentials
  • Wireless passwords

DPAPI is also available on the Windows Phone 8.x platforms, in addition to standard Windows. DPAPI is recommended by Microsoft as a standard way of encrypting and decrypting data on the Windows platforms.

DPAPI exposes two native interfaces: one for encrypting data, and one for decrypting data. Namely, these APIs are CryptProtectData() and CryptUnprotectData(). These are native methods and have the following function prototypes,

BOOL WINAPI CryptProtectData( 
 _In_ DATA_BLOB *pDataIn, 
 _In_ LPCWSTR szDataDescr, 
 _In_ DATA_BLOB *pOptionalEntropy, 
 _In_ PVOID pvReserved, 
 _In_opt_ CRYPTPROTECT_PROMPTSTRUCT *pPromptStruct, 
 _In_ DWORD dwFlags, 
 _Out_ DATA_BLOB *pDataOut 
);

and:

BOOL WINAPI CryptUnprotectData( 
 _In_ DATA_BLOB *pDataIn, 
 _Out_opt_ LPWSTR *ppszDataDescr, 
 _In_opt_ DATA_BLOB *pOptionalEntropy, 
 _Reserved_ PVOID pvReserved, 
 _In_opt_ CRYPTPROTECT_PROMPTSTRUCT *pPromptStruct, 
 _In_ DWORD dwFlags, 
 _Out_ DATA_BLOB *pDataOut 
);

.NET exposes interfaces for calling into DPAPI from C#, VB, and F# via the ProtectedData class. The ProtectedData class exposes two methods: Protect() and Unprotect(). As expected, Protect() accepts plaintext data and returns ciphertext data, and Unprotect() accepts ciphertext and returns plaintext data. DPAPI itself does not actually store data; it just encrypts (or decrypts) it and returns the data back to the caller.

The Protect() and Unprotect() APIs have the following prototypes on Windows Phone,

public static byte[] Protect( 
        byte[] userData, 
        byte[] optionalEntropy 
)

and:

public static byte[] Unprotect( 
        byte[] encryptedData, 
        byte[] optionalEntropy 
)

In both cases, optionalEntropy is an optional parameter for specifying a secondary credential.

DPAPI on the Windows desktop and server versions creates per-user master cryptography keys, so that apps running under one user on the system cannot decrypt data protected by an app running under another user account.

However, on Windows Phone devices, because all apps are running under the same user (PROTOCOLS), one master cryptography key is used for all third-party apps calling into DPAPI for encryption and decryption. The keys are stored at the following path: C:\Data\Users\DefApps\APPDATA\ROAMING\MICROSOFT\Protect\<SID>.

The fact that all data protected by DPAPI on Windows Phone is encrypted using the same key for all apps presents a security problem. If an attacker on the device, or a malicious app, is able to gain access to a DPAPI-encrypted data blob, and the target app did not use the optionalEntropy parameter, he can recover the data simply by calling into ProtectedData.Unprotect().

For example, consider an app on a device that encrypted data using DPAPI with code such as the following. Note the absence of the optionalEntropy parameter, where null is simply passed in instead:

byte[] encryptedData = ProtectedData.Protect(secretData, null);

If a malicious app on the device gained access to the outputted data, the following line of code would allow decryption:

byte[] plaintextData = ProtectedData.Unprotect(encryptedData, null);

This scenario clearly presents a problem; an encrypted blob, once disclosed, could be decrypted by any other app on the device.

The solution to this problem is to use the optionalEntropy parameter when using ProtectedData.Protect(), so that the app can pass in a secondary credential:

byte[] encryptedData = ProtectedData.Protect(secretData, secondarySecret);

If a malicious app on the device then attempted to decrypt the stolen data using ProtectedData.Unprotect(), it would need to know secondarySecret to be successful.

As a result, you should always use the optionalEntropy parameter if you want to use DPAPI in your apps. Apps should not, however, hard-code this value or otherwise store it on the device, because this would allow attackers with filesystem access to attack the data somewhat easily. If you intend to use DPAPI in your apps, you should base the entropy value on a secret known only by the app user (for example, the output of PBKDF2 on a password only the user knows), and not on hard-coded or determinable values.
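Schematically, assuming a hypothetical DeriveEntropyFromUserPassword() helper (for example, a PBKDF2 derivation over a password only the user knows), such usage might look like this:

```csharp
// DeriveEntropyFromUserPassword() is a hypothetical helper; the entropy is
// derived from a user-known secret and never stored on the device.
byte[] entropy = DeriveEntropyFromUserPassword();

byte[] encryptedData = ProtectedData.Protect(secretData, entropy);

// Decryption later requires the same user-derived entropy:
byte[] plaintextData = ProtectedData.Unprotect(encryptedData, entropy);
```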

In general, though, implementing cryptography using the standard APIs may be advisable instead, using a secret key derivable from a user-known secret. (See Chapter 13 for our recommendations.) In addition to using standard CryptoAPI calls to safely encrypt sensitive data for storage, we also give an example of how to use DPAPI with the optionalEntropy parameter.

Identifying Native Code Vulnerabilities

Apps running on Windows Phone 8 and above are capable of using native code (that is, C and C++ code). The use of native code in Windows Phone apps is not especially common; nonetheless some apps call into native code, generally for one or more of the following reasons:

  • Code reuse/portability—If an app component (for example, a parser) has already been written in C++, reusing the codebase for a Windows Phone version of an app without having to rewrite it (for example, in C#) makes sense.
  • Graphics—Many Windows Phone games (and other apps) need more direct access to graphics rendering using Direct3D. This can only be done in native code (that is, C++), at the time of writing.
  • Performance—Some apps have performance-critical components, and so leverage native code to gain speed advantages.

The three main ways of using native code in Windows Phone apps are:

  • Writing a purely native app—For example, a C++ game for Windows Phone.
  • By writing a native Windows Phone Runtime Component (WinPRT) to call into your native library—Internally, this uses PInvoke.
  • By using the [DllImport] attribute—This only works on Windows Phone 8.1, not Windows Phone 8. Internally, [DllImport] uses PInvoke.

No matter how an app runs native code, the memory-safety guarantees offered by a managed language such as C# no longer protect it. For example, if managed C# code calls into unmanaged C++ code, the app becomes vulnerable to memory corruption bugs in the same way that an app written in pure C++ would be.

If the source code to the native module is not available to you, you can extract the binary from the app’s Install directory and then reverse engineer it using tools of your choice, although we recommend IDA Pro. The Hex-Rays decompiler plug-in for IDA Pro is relatively proficient at producing pseudo-code from a reversed native binary, so you may wish to have it in your toolbox as well; reading pseudo-code is often much more efficient than reviewing ARM assembly, especially in complex modules.

An introduction to reverse engineering native ARM binaries is beyond the scope of this book, so we assume that if you have to reverse engineer native modules, you are already familiar with the methodologies involved.

The rest of this section covers how to spot native code vulnerabilities, and we also explain briefly each bug classification and why it can be dangerous. This section is not an introduction to native code and its vulnerabilities. Instead, we assume you are already familiar with native code in general, and we mainly aim to point out API use and coding patterns that may lead to native code vulnerabilities in the context of Windows Phone apps.

Stack Buffer Overflows

Stack-based buffer overflows occur when an application attempts to copy data into a fixed-length stack buffer without carrying out boundary checks; that is, without first ensuring that the destination buffer is large enough to house all the data being copied.

Needless to say, if the data chunk being copied is larger than the destination stack buffer, excess data will overrun the end of the stack buffer, and unintended data on the stack will be overwritten. Overwritten data may include pointers and program metadata, including saved return addresses. The ability to overwrite unintended stack data has long made it possible to take control of program execution flow, in many cases allowing execution of attacker-controlled code. Exploit mitigation features have made exploitation of stack overflow conditions somewhat more difficult in recent years, but many stack corruption vulnerabilities are still exploitable, and all stack overflow bugs should be treated as such.

Quite a number of APIs have been responsible for stack overflow vulnerabilities in the past and in the present. Some of these are:

  • strcpy()
  • gets()
  • sprintf()
  • strcat()
  • vsprintf()
  • scanf()
  • sscanf()
  • memcpy()
  • bcopy()

This is not an exhaustive list of APIs that do not carry out bounds checking. When you are in doubt, a Google search of the API in question is likely to provide ample information about the safety of the function, how it can be abused, and how it can be used safely.

Spotting stack overflow vulnerabilities is often quite easy. In general, you’re looking for data copying operations that do not carry out boundary checks on the destination buffer, or copying operations that blindly trust an attacker-supplied length; in both cases, the developer has not ensured that the destination buffer is large enough to hold the data being copied.

For example, the following code fragment is obviously vulnerable to stack corruption in its use of strcpy() to copy into a buffer, destBuffer, that is declared on the program stack:

char destBuffer[32]; 
char attackerControlledData[200]; 
 
[ ... ] 
 
int ret = ReadDataFromWire(&attackerControlledData[0]); 
 
strcpy(destBuffer, attackerControlledData);

Because the strcpy() API does not carry out any boundary checks on the destination buffer, the API will continue copying from attackerControlledData until a NULL byte is encountered. Clearly, if the data in attackerControlledData is longer than 32 bytes, a stack overflow will occur as the bounds of destBuffer are breached.

The following code, which uses sprintf(), would also be vulnerable to a similar stack overflow vulnerability, because sprintf() doesn’t perform bounds checking (unless a maximum number of characters is supplied as a precision with the %s format specifier; that is, %.31s—note that a plain field width such as %32s specifies a minimum and does not limit the output):

char destBuffer[32]; 
char attackerControlledData[200]; 
 
[ ... ] 
 
int ret = ReadDataFromWire(&attackerControlledData[0]); 
 
sprintf(destBuffer, "%s", attackerControlledData);
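
By contrast, a precision supplied with %s does bound the conversion. The following standalone sketch (the function and buffer names are illustrative, not from any real app) shows a copy capped to fit a 32-byte buffer:

```c
#include <stdio.h>
#include <string.h>

/* Copies src into a 32-byte buffer, relying on the %s precision
 * (".31") to cap the copy at 31 characters plus the terminating NUL. */
size_t bounded_format_copy(const char *src, char dest[32]) {
    sprintf(dest, "%.31s", src);   /* at most 31 chars are written */
    return strlen(dest);
}
```

The precision (%.31s) limits how many characters the conversion may produce, leaving one byte for the NUL terminator; a plain field width (%32s) sets only a minimum and offers no protection at all.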

Some badly written code also accepts a user-supplied length and blindly trusts it as a copy length while parsing data:

char destBuffer[32]; 
 
[ ... ]
 
unsigned int len = ReadLengthFromBlob(attackerControlledData); 
unsigned char *ptr = ReadPayloadPosition(attackerControlledData); 
 
memcpy(destBuffer, ptr, len);

Stack buffer overflows may also occur in hand-rolled copying loops; for example:

char destBuffer[32]; 
unsigned char *ptr = &attackerControlledBuf[0]; 
 
for(int i = 0; *ptr; ptr++, i++) { 
       destBuffer[i] = *ptr; 
}

The previous code is similar to a strcpy(). Bytes are copied from attackerControlledBuf until a NULL byte is found. If the source buffer, attackerControlledBuf, does not contain any NULL bytes before 32 bytes have been copied, a stack buffer overflow will occur.
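
For comparison, a bounds-checked version of such a loop stops before the destination is breached. This is a sketch (the helper name is ours, not from any real codebase), assuming the caller passes the true destination size:

```c
#include <stddef.h>
#include <string.h>

/* Copies at most dest_size - 1 bytes from src into dest, always
 * NUL-terminating the result. Returns the number of bytes copied. */
size_t bounded_copy(char *dest, size_t dest_size, const unsigned char *src) {
    size_t i = 0;
    while (src[i] != '\0' && i < dest_size - 1) {  /* bound checked first */
        dest[i] = (char)src[i];
        i++;
    }
    dest[i] = '\0';
    return i;
}
```

The only change from the vulnerable loop is the `i < dest_size - 1` condition, which guarantees the write index can never exceed the buffer's bounds.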

We cover how to write native code securely in Chapter 13.

Heap Buffer Overflows

Standard heap overflow bugs are essentially analogous to stack-based overflows in their nature, except that they relate to heap memory corruption, as the name suggests. Exploitation of heap overflows varies quite significantly for different memory allocators, but many exploitation techniques in the past and present involve overwriting pointers and other important data past the end of the destination buffer.

As with stack overflows, many of the same APIs play a role in causing heap overflow bugs:

  • strcpy()
  • gets()
  • sprintf()
  • strcat()
  • vsprintf()
  • scanf()
  • sscanf()
  • memcpy()
  • bcopy()

Hand-rolled parsing and copying loops may also lead to heap corruption if the code does insufficient bounds checking (or none at all), as demonstrated here:

char *destBuffer = (char *)malloc(32); 
unsigned char *ptr = &attackerControlledBuf[0]; 
 
for(int i = 0; *ptr; ptr++, i++) { 
        destBuffer[i] = *ptr; 
}

You can recognize heap memory use by an app calling into the following APIs:

  • HeapAlloc()
  • HeapReAlloc()
  • malloc()
  • realloc()

Two causes for heap overflows are common: unbounded copy operations, and integer overflows in size calculations.

In the context of unbounded copies, here is a simple example of a heap overflow vulnerability:

unsigned char *ptr = (unsigned char *)malloc(32); 
 
if(!ptr) { 
        OutputError("memory allocation failed\n"); 
        return -1; 
} 
 
strcpy(ptr, attackerSuppliedData);

If attackerSuppliedData is data under the attacker’s control, and it may be larger than 32 bytes, then a heap corruption bug exists.

Or, consider code that blindly trusts a parsed-out length field without validating it, due to bad parser design:

unsigned char *buf = (unsigned char *)malloc(32); 
 
[ ... ]
 
unsigned int len = ReadLengthFromBlob(attackerControlledData); 
unsigned char *ptr = ReadPayloadPosition(attackerControlledData);
 
 
memcpy(buf, ptr, len); 

The second common case is when size calculations for a heap buffer are vulnerable to integer overflows. For example, consider the following code, which takes a data length from the user, and then adds 10 to it (for additional payload copying later), which may cause the resulting value to wrap back to 0, meaning only a very small heap buffer is actually allocated:

unsigned int len = ParseLenFromBlob(dataBlob); 
unsigned char *payload = GetPayloadPosition(dataBlob); 
 
unsigned char *ptr = malloc(len + 10);     // calculation can wrap to 0! 
 
memcpy(ptr, payload, len);

If len was within 10 of UINT_MAX (0xffffffff), the size used in the malloc() call would wrap back past zero to a very small number. Obviously, the memcpy() call will then use the original value, in this case overwriting well beyond the bounds of the allocated memory chunk at ptr.
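
The wrap itself is easy to demonstrate in isolation; unsigned overflow is well-defined in C (reduction modulo 2^32 for a 32-bit unsigned int), which is exactly why it passes silently. A minimal sketch of the vulnerable size calculation:

```c
#include <limits.h>

/* Models the allocation-size calculation from the vulnerable code:
 * the +10 header allowance can wrap the 32-bit result past zero. */
unsigned int alloc_size_for(unsigned int len) {
    return len + 10;   /* reduced modulo 2^32 */
}
```

For len = 0xfffffff8 (UINT_MAX - 7), len + 10 wraps to 2, so malloc() would return a 2-byte buffer while the subsequent memcpy() still copies nearly 4GB.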

We cover some basics on how to write native code securely in Chapter 13.

Other Integer-Handling Bugs

We already covered one common type of integer-handling bug: integer overflows that can lead to corruption of the heap or other memory regions. Succinctly, memory corruption bugs resulting from integer overflows usually occur when careless arithmetic increments an integer variable past its maximum value, so that it either becomes negative (for signed integers) or wraps back past zero (for unsigned integers).

For example, consider the following code fragment:

unsigned int len = ReadLengthFromBlob(blob); 
unsigned char *ptr = GetPayloadOffset(blob); 
 
unsigned char *buf = malloc(len + 10); 
memcpy(buf, ptr, len);

Such bugs are quite common in native code, so you should never trust lengths from attacker-controllable data before first validating them for being safe and sane values. Writing arithmetic operations (and sometimes loops, when variables of different sizes are used) that result in integer overflows is all too easy; always write such code cautiously to ensure integers do not overflow or wrap.

Other types of integer-handling bugs exist in addition to integer overflow of signed and unsigned integers (and the short types). Among these are integer underflows and signedness errors.

Integer Underflows

Integer underflows work in reverse to integer overflow bugs; they occur when an integer is decremented below zero.

Consider the following code, which takes a user-supplied integer and subtracts a value from it, and then uses the resulting integer for a boundary check. The subtraction, in this hypothetical case, is for subtracting a header length from a parsed-out size value.

#define HEADER_LEN 16 
 
[ ... ] 
 
unsigned char buf[512]; 
 
int len = GetLengthValueFromBlob(blob); 
unsigned char *ptr = GetDataPtrFromBlob(blob); 
 
if(len > sizeof(buf)) { 
        OutputError("len too large for buf!\n"); 
        return -1; 
} 
 
len -= HEADER_LEN; 
ptr += HEADER_LEN; 
memcpy(buf, ptr, len);

The code retrieves a length (as a signed integer) from an attacker-supplied data blob, validates that the length is no larger than 512, subtracts 16 from it, and then uses the length in a memcpy() call.

In the len -= HEADER_LEN operation, however, len may be decremented below 0, yielding a negative value in its signed representation. In the unsigned representation used by memcpy()’s length parameter, that value becomes very large, resulting in a stack buffer overflow beyond buf’s bounds as memcpy() copies a huge amount of data into buf. Again, as with overflows, you can avoid situations like these by validating integers for safe values.
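
The arithmetic can be seen in isolation. This sketch (the function name is ours, for illustration) models the len -= HEADER_LEN step for a length smaller than the 16-byte header, and shows what a 32-bit unsigned length parameter would receive:

```c
/* Models the header subtraction from the example above; returns the
 * value a 32-bit unsigned length parameter (e.g. memcpy's) would see. */
unsigned int effective_copy_len(int len) {
    len -= 16;                  /* e.g. 10 - 16 == -6 */
    return (unsigned int)len;   /* -6 reinterpreted: 4294967290 */
}
```

An attacker-supplied length of 10 passes the "no larger than 512" check, yet produces a copy length of over four billion bytes.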

Integer underflows affect unsigned integers as well, but instead of becoming large negative values, values decremented below the minimum (0) wrap backwards and become very large. For example, if an unsigned integer held the value 31 and an application subtracted 32 from it, the value would become the integer’s largest value. In the context of an unsigned 32-bit integer, 0 - 1 = 0xffffffff, or 4294967295, sometimes referred to as UINT_MAX, as per its ANSI macro name.
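
The unsigned wrap can be confirmed with a one-line sketch (the helper name is illustrative):

```c
#include <limits.h>

/* Subtraction on a 32-bit unsigned integer; the result is reduced
 * modulo 2^32, so going below zero wraps to a very large value. */
unsigned int unsigned_sub(unsigned int value, unsigned int amount) {
    return value - amount;
}
```

Subtracting 32 from 31 yields UINT_MAX, exactly as described above, with no trap or warning at runtime.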

Signedness Errors

Signedness bugs tend to occur when an integer is used in both signed and unsigned contexts, and confusion therefore results. For example, consider the following code:

char buffer[512]; 
int len = GetLenFromBlob(attackerControlledData); 
char *ptr = GetPayloadPositionFromBlob(attackerControlledData); 
 
if(len > (int)sizeof(buffer)) {   /* signed comparison */ 
        OutputError("len is larger than buffer\n"); 
        return -1; 
} 
 
memcpy(buffer, ptr, len);

The developer’s intentions are on point; len is checked for being larger than the size of buffer. However, if len is negative, say -1, the check passes. When -1 is then passed to memcpy(), it is interpreted as 0xffffffff (UINT_MAX), because memcpy()’s third parameter is an unsigned integer, inevitably resulting in memory corruption beyond buffer’s boundary. A memory corruption bug exists here because len is checked in a signed context and then used as an unsigned length.
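
The mechanics can be checked in isolation. In this sketch (function names are illustrative), the bound is written as a plain integer so the comparison stays signed, and a 32-bit unsigned type stands in for memcpy()'s length parameter:

```c
#include <stdint.h>

/* The flawed bounds check: a signed comparison lets negative
 * lengths through (-1 > 512 is false, so the check "passes"). */
int length_check_passes(int len) {
    return !(len > 512);
}

/* What a 32-bit unsigned length parameter would actually receive. */
uint32_t as_unsigned_length(int len) {
    return (uint32_t)len;       /* -1 becomes 0xffffffff */
}
```

A length of -1 sails through the signed check and then arrives at the copy routine as UINT_MAX; declaring len as unsigned from the start removes the mismatch.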

Representing length values as unsigned integers generally makes more sense, and would fix the bug in this hypothetical case. We discuss secure programming when dealing with integers in Chapter 13.

Format String Bugs

Format string functions accept a format string as a parameter, which describes to the API how the format parameters should be interpreted. For example, the following code simply prints the string in buf to the standard output:

char buf[] = "hello world"; 
printf("%s\n", buf);

The %s format specifier informs the printf() API that the corresponding parameter is a pointer to a string.

Besides printf(), other standard (and misusable) format string functions are:

  • wsprintf()
  • vsprintf()
  • sprintf()
  • snprintf()
  • fprintf()
  • asprintf()

Attacker-controlled data should not be passed into a format string function as the format string itself, because this may allow the attacker to manipulate and corrupt memory in the target app. So, for example, the following represents a bug,

printf(attackerControlledData); 

as does:

snprintf(buffer, sizeof(buffer)-1, attackerControlledData);

For exploitation, attackers may use the %n format specifier, which instructs (many) format string APIs to write the number of bytes output so far to a specified address. With careful use of other format specifiers to control the number of written bytes, %n can be used to write arbitrary bytes to arbitrary memory locations, therefore allowing for controlled memory corruption exploits. As a consequence, any passing of attacker-controlled data to a format string function as the format string itself should be considered a serious security vulnerability.
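
The misinterpretation is easy to observe without triggering memory corruption. In this sketch (the render helper and its sample data are made up for illustration), the same user data is formatted both ways; note that passing a variable as the format string typically also draws a -Wformat-security compiler warning:

```c
#include <stdio.h>

/* Renders user data two ways: wrongly (as the format string itself)
 * and correctly (as an argument to a fixed "%s" format). */
void render(const char *userdata, char *wrong, char *right, size_t n) {
    snprintf(wrong, n, userdata);        /* BUG: data parsed as a format */
    snprintf(right, n, "%s", userdata);  /* correct: data copied verbatim */
}
```

With userdata of "100%% legit", the unsafe call collapses the %% sequence to a single %, producing "100% legit", while the safe call preserves the data byte-for-byte. Swap in specifiers such as %n and the unsafe call goes from corrupted output to corrupted memory.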

Avoiding format string bugs is easily done. Always use code like this,

printf("%s", buf); 

. . .and never like this:

printf(buf);

We reiterate later that developers unfamiliar with classic native code bugs should review secure coding guidelines, and we provide links to resources to this end in the Chapter 13 section, “Avoiding Native Code Bugs”.

Array Indexing Errors

Array indexing errors occur when an attacker-supplied value is used as the index to an array, either on read or write operations. Such bugs are also sometimes called read access violations (AVs) and write AVs, because they have the potential to cause access violations if unmapped memory addresses are written to or read from.

For example, the following is an example of a read indexing error,

int someValue = buf[attackerControlledValue]; 

. . .and a write index error:

someBuffer[attackerControlledValue] = 0;

In general, write index errors tend to be more serious, because they often allow controlled memory corruption by writing to favorable locations beyond the bounds of the intended buffer. They could be considered a type of buffer overflow.

Read access violations have the potential to be used for memory disclosure in many cases. Both read and write bugs such as these can also be used to cause denial-of-service conditions via deliberate page faults by writing to or reading from unmapped memory addresses.

Before attacker-controlled values are used as indexes to arrays they should be strictly validated to ensure that the value lies within the length of the allocated memory chunk.

Also take negative values into account, because writes to an array using a negative index may be considered a type of buffer underflow. We reiterate this in Chapter 13.
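
A validation routine along these lines (a sketch; the name and signature are illustrative) rejects both negative and out-of-range indexes before any memory access takes place:

```c
#include <stddef.h>

/* Reads arr[idx] into *out only if idx is a valid index for an
 * array of `count` elements. Returns 0 on success, -1 otherwise. */
int safe_array_read(const int *arr, size_t count, long idx, int *out) {
    if (idx < 0 || (size_t)idx >= count)
        return -1;              /* rejects underflow and overflow indexes */
    *out = arr[idx];
    return 0;
}
```

The signed check runs before the cast to size_t; testing only `(size_t)idx >= count` after a cast would still catch negative values on most platforms, but checking `idx < 0` explicitly makes the intent unmistakable.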

Denial-of-Service Bugs

Denial-of-Service (DoS) bugs are less of a concern in mobile applications than in server apps, for example, but prevention of DoS bugs is good practice nonetheless.

Two general classes of DoS bugs are memory consumption bugs, and access violation bugs. We mentioned access violation bugs in the previous section, wherein crashes due to unmapped memory reads could crash the offending process.

Other access violation bugs are caused by NULL pointer dereferences. These bugs can happen in a number of failure cases, but a common one is when a memory allocation fails and the resulting NULL pointer is not checked and is dereferenced anyway. For example, consider a malloc() call that fails:

unsigned char *ptr = (unsigned char *)malloc(largeAttackerControlledValue); // can return NULL

If ptr is not checked before it is dereferenced, a NULL pointer AV will happen, and the process will (most likely) crash. In general, check returned pointers from APIs to ensure that NULL pointer dereferences don’t cause the app to crash.

When you’re allocating memory based on attacker-controlled values, we recommend carrying out sanity checks. Failure to do this may result in large chunks of memory being allocated, and application performance being degraded severely. For example, we would recommend against:

unsigned char *ptr = (unsigned char *) malloc(largeAttackerControlledValue);

Instead, code should check whether largeAttackerControlledValue is a sensible value before allowing the memory allocation to take place.
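
One defensive pattern is a small wrapper that enforces a sanity cap before allocating. This is a sketch; the 1MB limit is an arbitrary illustrative value, and the right cap depends entirely on what the app legitimately needs to allocate:

```c
#include <stdlib.h>

#define MAX_SANE_ALLOC (1024u * 1024u)   /* illustrative cap: 1MB */

/* Allocates n bytes only if n is non-zero and within the cap;
 * returns NULL otherwise, so callers must still check the result. */
void *checked_malloc(size_t n) {
    if (n == 0 || n > MAX_SANE_ALLOC)
        return NULL;
    return malloc(n);
}
```

Routing attacker-influenced sizes through such a wrapper addresses both failure modes at once: absurdly large requests are refused outright, and the caller is forced into the habit of checking for NULL.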

Unsafe C# Code

Though not strictly native code, C# allows code to be designated as unsafe using the unsafe and fixed keywords. In such code, pointers may be used, and security issues can arise in a fashion similar to many native software vulnerabilities. However, at the time of writing, Windows Phone 8 and 8.1 do not support the use of unsafe C# code, and use of it will result in your app being rejected during the store vetting process.

Summary

When working to identify implementation issues in Windows Phone applications, the following bullet points may be useful as a general checklist. The checklist is composed as a series of questions; answering “yes” to a question represents a potential security issue that should be further investigated to discover the real-world impact:

  • Are HTTP cache and cookies left undeleted when they’re no longer needed, thus representing a potential sensitive information leak (i.e., in the app’s INetCache and INetCookies directories)?
  • Does the app store sensitive data in files in cleartext (i.e., unencrypted)?
  • Does the app store sensitive data in any unencrypted databases?
  • Are any insecure sources of randomness being used to generate security-sensitive data such as cryptographic keys?
  • Does the app encrypt any sensitive data using bad cryptographic practices?
  • Is there any native code misuse that could lead to classic native code vulnerabilities, such as memory corruption?