Security and the Developer

What do we mean when we refer to “secure code?” In essence, the ultimate purpose of security is to allow “good” code to execute while denying access to “bad” code. Unfortunately, there are no algorithms that let us differentiate “good” code from “bad.” Suppose, for example, that a request is made to append data to a file. Is this an attempt to inject a virus, or is it merely new output being added to some log file?

No security system can judge the intent behind such an action; even humans sometimes have difficulty discerning the true purpose of a piece of code. Instead, the security system concentrates on evidence about the user or code that it knows is factual or can be validated: usernames validated by passwords, strong names or Authenticode signatures, code download URLs, and so on. These allow the security system to characterize the user or code based on identity—either specifically, as with usernames or digital signatures, or more broadly, as with user groups or Internet zones based on download URLs.

Together with policy information from the system administrator, this allows the security system to assign levels of trust to users and code. In this way, we manage to sidestep the problem of needing to understand intent; instead of asking “Is that operation bad?” we now ask “Is that operation risky?” and “Do we trust this user/code to perform such risky operations?”

To get back to our original question—What is secure code?—secure code is about responsibilities. The extent of those responsibilities depends on the level of trust your code will be granted. Highly trusted code will have the ability to perform very dangerous operations (such as formatting the user's hard disk) and may even be able to circumvent the security system itself. Such code needs to be written very carefully to avoid mistakes that could open up serious security holes. Code with low levels of trust, on the other hand, has fewer responsibilities: Such code is “sandboxed” into a low risk environment where access to dangerous facilities such as file access is denied. Code at the lowest levels of trust couldn't open up a security hole even if it tried.

The responsibilities of trusted code are twofold:

  • Protect access to operations that could compromise the system. These operations are typically focused on system resources: files, the registry, the desktop display. The code that implements access to such resources must validate the caller's level of trust. Typically, it does so by associating a permission with the resource and demanding that permission of any caller (the caller's level of trust is modeled as the set of permissions granted to it by system policy).

  • Ensure that the levels of trust granted aren't open to misuse by callers of lesser (or undetermined) trust. This is the hard one and constitutes the source of the majority of security holes. Breaches of this responsibility can range from the blatant—exporting a public method that writes to an arbitrary file on behalf of an untrusted caller—to the subtle—leaving a sensitive member field marked protected so that an untrusted caller can subclass the type and gain access. Failure to uphold this responsibility will allow code of lower trust to break out of its sandbox and enjoy a greater level of trust than the system administrator was willing to grant it.
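
To make the subtle case concrete, consider the following hypothetical sketch (the class and member names are invented for illustration, not taken from any real library). A trusted type leaves sensitive state marked protected rather than private, so any code permitted to derive from it can read that state without a single security check being performed:

public class AuditLogger
{
    // Should be private: protected hands the value to every subclass.
    protected string connectionString = "server=payroll;uid=auditor;pwd=secret";

    public virtual void Log(string message)
    {
        // ... append the message to the audit store ...
    }
}

// Code of lower trust elsewhere simply derives from the type to read the field.
public class LeakyLogger : AuditLogger
{
    public string StealConnectionString()
    {
        return connectionString;   // no demand, no stack walk, no sandbox
    }
}

Marking the field private, or placing an InheritanceDemand on the class so that only sufficiently trusted code may subclass it, closes this particular hole.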

Protecting access to resources will require direct interaction with the APIs of the security system. The managed libraries that come with the .NET Framework contain abstractions for most common system resources (along with the necessary access protection), so typically developers will only need to worry about new resources specific to their system. For instance, a banking application may define a new permission to control access to the bank's accounts database and demand it from callers of its data access methods:

public ResultSet QueryAccountsDatabase(String query)
{
    // Demand the custom permission: this walks the call stack and throws
    // a SecurityException if any caller has not been granted read access.
    new AccountAccessPermission("Accounts",
                                DBAccess.ReadOnly).Demand();
    ...
}
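
What might such a permission look like? The following is a minimal, hypothetical sketch only: the DBAccess enumeration and every implementation detail are assumptions made for illustration, and a production permission would additionally supply a matching attribute class, support an unrestricted state, and validate its XML input. The sketch shows the overrides every CodeAccessPermission subclass must provide so that the runtime can copy, compare, and serialize the permission's state during stack walks and policy evaluation.

using System;
using System.Security;

// Assumed access levels for the accounts database (not defined in the
// original snippet).
public enum DBAccess { None = 0, ReadOnly = 1, ReadWrite = 2 }

[Serializable]
public sealed class AccountAccessPermission : CodeAccessPermission
{
    private string database;
    private DBAccess access;

    public AccountAccessPermission(string database, DBAccess access)
    {
        this.database = database;
        this.access = access;
    }

    public override IPermission Copy()
    {
        return new AccountAccessPermission(database, access);
    }

    public override bool IsSubsetOf(IPermission target)
    {
        if (target == null)
            return access == DBAccess.None;
        AccountAccessPermission other = (AccountAccessPermission)target;
        return database == other.database && access <= other.access;
    }

    public override IPermission Intersect(IPermission target)
    {
        AccountAccessPermission other = target as AccountAccessPermission;
        if (other == null || database != other.database)
            return null;
        return new AccountAccessPermission(
            database, access <= other.access ? access : other.access);
    }

    // Round-trip the permission's state to and from the XML form used by
    // security policy files.
    public override SecurityElement ToXml()
    {
        SecurityElement element = new SecurityElement("IPermission");
        element.AddAttribute("class", GetType().AssemblyQualifiedName);
        element.AddAttribute("version", "1");
        element.AddAttribute("Database", database);
        element.AddAttribute("Access", access.ToString());
        return element;
    }

    public override void FromXml(SecurityElement element)
    {
        database = element.Attribute("Database");
        access = (DBAccess)Enum.Parse(typeof(DBAccess),
                                      element.Attribute("Access"));
    }
}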

Ensuring that trust is not leaked to untrusted code is a much more pervasive problem. Solutions to potential holes may or may not involve the security infrastructure directly. Secure coding practice can fundamentally impact the codebase (especially when performance is taken into consideration). For instance, access to some resource may depend on the acquisition of a token, allowing the relatively heavyweight security access checks to be performed at one choke point and leaving the high-frequency access paths free of overhead (similar to the model used for file access). Implementing such a scheme has the potential to affect the implementation, and possibly even the design, of all other dependent software components.
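
A hedged sketch of that token pattern follows, reusing the hypothetical AccountAccessPermission and DBAccess types from above (ResultSet is the placeholder type from the earlier snippet). The expensive Demand happens exactly once, when the connection object is created; the per-query path then carries no security overhead. The corollary is that the connection object itself must be guarded as carefully as the resource it stands for.

public sealed class AccountsConnection
{
    private string database;

    public AccountsConnection(string database, DBAccess access)
    {
        // Choke point: the heavyweight stack walk happens here, once.
        new AccountAccessPermission(database, access).Demand();
        this.database = database;
    }

    // High-frequency path: no per-call security check. Holding a valid
    // AccountsConnection is the "token" proving the check already passed,
    // so instances must never be handed to code of lower trust.
    public ResultSet Query(string query)
    {
        return ExecuteQuery(query);   // hypothetical plumbing
    }

    private ResultSet ExecuteQuery(string query)
    {
        // ... database-specific implementation against this.database ...
        return null;
    }
}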

From this, two important points should be apparent:

  • Security should be a part of the product life cycle from the very beginning. You should design the techniques and protocols that will ensure that your code is secure before moving into the implementation phase. The cost of retrofitting correct (and efficient) security is too high, both in terms of code destabilization and in terms of risk. A security change introduced at a late stage typically won't have had the breadth of exposure that a design-time change would have: Less thought will have gone into verifying its correctness, programmers will be less likely to understand its importance, and test coverage may well be compromised (code security is much harder to test than most features, since negative testing plays such an important role). While early planning and testing are important for any aspect of software design, they're particularly important for security because the problems can be so subtle and hard to find, and the consequences of not finding a problem before shipping are so dire. In fact, the cost of a security flaw being unearthed once the product has reached customers is usually many times greater than the cost of any other type of software defect.

  • Good, secure code design and implementation is the responsibility of everyone working on the product, not just a focused security group. While having specific individuals who focus on security is a good idea in most cases, it's important that all developers have an understanding of the basic rules and where their responsibilities lie. When just one subtle error can compromise the security of an entire system, it is unrealistic to expect that a subset of the developers will have the resources and expertise to locate all such errors in a reasonable time frame. It makes much more sense to teach every developer the basics of secure coding and have them evaluate and fix their own code than to try to teach a small number of security experts about every aspect of the system you're building.

The .NET Framework does provide one tool for limiting the impact that trusted but potentially flawed code can have on the system. Strong-named assemblies (those most likely to be shared among applications) that have not been explicitly marked with the System.Security.AllowPartiallyTrustedCallersAttribute custom attribute will not allow any partially trusted callers at all. This allows developers to restrict access to potentially buggy or poorly designed code until that code can be reviewed and tested thoroughly. At that point, the attribute can be added to the assembly and access opened to callers with a lower level of trust. For further discussion of the mechanics of AllowPartiallyTrustedCallersAttribute, see Chapter 25.
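
Once that review has happened, opting back in is a single assembly-level attribute, conventionally placed alongside the other assembly attributes (for example, in AssemblyInfo.cs):

using System.Security;

// Without this attribute, a strong-named assembly implicitly refuses
// partially trusted callers; adding it opens the assembly to them.
[assembly: AllowPartiallyTrustedCallers]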
