Chapter 12. A Taxonomy of Coding Errors[1]

 

A horse! A horse! My kingdom for a horse!

 
 --KING RICHARD THE THIRD (WILLIAM SHAKESPEARE)

The purpose of any taxonomy like this one is to help software developers and security practitioners concerned about software understand common coding mistakes that impact security. The goal is to help developers avoid making mistakes and to more readily identify security problems whenever possible. A taxonomy like this one is most usefully applied in an automated tool that can spot problems either in real time (as a developer types into an editor) or at compile time (see Chapter 4). When put to work in a tool, a set of security rules organized according to this taxonomy is a powerful teaching mechanism. Because developers today are by and large unaware of security problems that they can (unknowingly) introduce into code, publication of a taxonomy like this should provide real, tangible benefits to the software security community.

This approach represents a striking alternative to taxonomies of attack patterns (see Exploiting Software [Hoglund and McGraw 2004]) or simple-minded collections of specific vulnerabilities (e.g., Mitre’s CVE <http://www.cve.mitre.org/>). Attack-based approaches are based on knowing your enemy and assessing the possibility of similar attack. They represent the black hat side of the software security equation. A taxonomy of coding errors is, strangely, more positive in nature. This kind of thing is most useful to the white hat side of the software security world. In the end, both kinds of approaches are valid and necessary.

The goal of this taxonomy is to educate and inform software developers so that they better understand the way their work affects the security of the systems they build. Developers who know this stuff (or at least use a tool that knows this stuff) will be better prepared to build security in than those who don’t.

Though this taxonomy is incomplete and imperfect, it provides an important start. One of the problems of all categorization schemes like this is that they don’t leave room for new (often surprising) kinds of vulnerabilities. Nor do they take into account higher-level concerns such as the architectural flaws and associated risks described in Chapter 5.[2] Even when it comes to simple security-related coding issues themselves, this taxonomy is not perfect. Coding problems in embedded control software and common bugs in high-assurance software developed using formal methods are poorly represented here, for example.

The bulk of this taxonomy is influenced by the kinds of security coding problems often found in large enterprise software projects. Of course, only coding problems are represented since the purpose of this taxonomy is to feed a static analysis engine with knowledge. The taxonomy as it stands is neither comprehensive nor theoretically complete. Instead it is practical and based on real-world experience. The focus is on collecting common errors and explaining them in such a way that they make sense to programmers.

The taxonomy is expected to evolve and change as time goes by and coding issues (e.g., platform, language of choice, and so on) change. This version of the taxonomy emphasizes concrete and specific problems over abstract or theoretical ones. In some sense, it errs on the side of omitting “big-picture” errors in favor of covering specific and widespread ones.

The taxonomy is made up of two distinct kinds of sets (which we’re stealing from biology). What is called a phylum is a type or particular kind of coding error; for example, Illegal Pointer Value is a phylum. What is called a kingdom is a collection of phyla that share a common theme. That is, kingdoms are sets of phyla; for example, Input Validation and Representation is a kingdom. Both kingdoms and phyla naturally emerge from a soup of coding rules relevant to enterprise software. For this reason, the taxonomy is likely to be incomplete and may be missing certain coding errors.

In some cases, it is easier and more effective to talk about a category of errors than it is to talk about any particular attack. Though categories are certainly related to attacks, they are not the same as attack patterns.

On Simplicity: Seven Plus or Minus Two

I’ve seen lots of security taxonomies over the years, and they have all shared one unfortunate property—an overabundance of complexity. People are good at keeping track of seven things (plus or minus two).[3] I used this as a hard constraint and attempted to keep the number of kingdoms down to seven (plus one). I present these kingdoms in order of importance to software security.

Without further ado, here are the seven kingdoms (plus one):

  1. Input Validation and Representation

  2. API Abuse

  3. Security Features

  4. Time and State

  5. Error Handling

  6. Code Quality

  7. Encapsulation

  • Environment

A brief explanation of each follows.

Input Validation and Representation

Input validation and representation problems are caused by metacharacters, alternate encodings, and numeric representations. Of course, sometimes people just forget to do any input validation at all. If you do choose to do input validation, use a white list, not a black list [Hoglund and McGraw 2004].

Big problems result from trusting input (too much), including buffer overflows, cross-site scripting attacks, SQL injection, cache poisoning, and basically all of the low-hanging fruit that the script kiddies eat.
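To make the white-list idea concrete, here is a minimal sketch in C. The allowed character set, the length bound, and the function name are illustrative assumptions, not code from any particular system:

    #include <stdbool.h>
    #include <string.h>

    /* White-list check: accept only characters known to be safe and
       reject everything else. */
    static bool is_valid_username(const char *input)
    {
        static const char allowed[] =
            "abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789_";
        size_t len = strlen(input);

        if (len == 0 || len > 32)              /* enforce a length bound too */
            return false;
        return strspn(input, allowed) == len;  /* every char is on the list */
    }

The point is to enumerate what is allowed; a black list that tries to enumerate what is forbidden will always miss something.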

API Abuse

An API is a contract between a caller and a callee. The most common forms of API abuse are caused by the caller failing to honor its end of this contract. For example, if a program fails to call chdir() after calling chroot(), it violates the contract that specifies how to change the active root directory in a secure fashion. Another good example of library abuse is expecting the callee to return trustworthy DNS information to the caller. In this case, the caller abuses the callee API by making certain assumptions about its behavior (that the return value can be used for authentication purposes). Really bad people also violate the caller–callee contract from the other side. For example, if you subclass SecureRandom and return a not-so-random value, you’re not following the rules.
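In code, honoring the chroot() contract looks like the following bare-bones C sketch. The jail path is made up, and a real program would handle errors and privileges more carefully:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        if (chroot("/var/jail") != 0) {  /* requires root privileges */
            perror("chroot");
            return EXIT_FAILURE;
        }
        /* The step callers forget: chroot() does not change the current
           working directory, so without chdir("/") relative paths can
           still reach files outside the new root. */
        if (chdir("/") != 0) {
            perror("chdir");
            return EXIT_FAILURE;
        }
        /* ... drop privileges, then do the real work inside the jail ... */
        return EXIT_SUCCESS;
    }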

API abuse categories are very common. Check out Appendix B for a long, boring list of API problems that were built into ITS4 (an early code analysis tool).

Security Features

I’ve said this before, and I’ll say it again: Software security is not security software. All the magic crypto fairy dust in the world won’t make you secure. But it’s also true that you can drop the ball when it comes to essential security features. Let’s say you decide to use SSL to protect traffic across the network, but you really screw things up. Unfortunately, this happens all the time. When I chunk together security features, I’m concerned with such topics as authentication, access control, confidentiality, cryptography, privilege management, and all that other stuff on the CISSP exam. This stuff is hard to get right. You in the back, pay attention!
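Here is what dropping the ball can look like in C. The token scheme is hypothetical, but the underlying mistake (a predictable generator standing in for a cryptographic one) turns up constantly:

    #include <stdlib.h>
    #include <time.h>

    /* BAD: rand() seeded with the clock is predictable. An attacker who
       can guess the seed can regenerate every "secret" token. */
    unsigned int weak_session_token(void)
    {
        srand((unsigned int)time(NULL));
        return (unsigned int)rand();
    }

The fix is to read from a cryptographic source (such as /dev/urandom on UNIX) rather than a statistical pseudo-random number generator.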

Time and State

Distributed computation is about time and state. That is, in order for more than one component to communicate, state must be shared (somehow), and all that takes time. Playing with time and state is the biggest untapped natural attack resource on the planet right now.

Most programmers anthropomorphize (or, more accurately, only solipsistically ponder) their work. They think about themselves—the single omniscient thread of control manually plodding along, carrying out the entire program in the same way that they themselves would do it if forced to do the job manually. That’s really quaint. Modern computers switch between tasks very quickly, and in multi-core, multi-CPU, or distributed systems, two events may take place at exactly the same time.[4] Defects rush to fill the gap between the programmer’s model of how a program executes and what happens in reality. These defects are related to unexpected interactions between threads, processes, time, and information. These interactions happen through shared state: semaphores; variables; the filesystem; the universe; and, basically, anything that can store information.
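The canonical example is a time-of-check/time-of-use (TOCTOU) race. Here is a minimal C sketch; the filename is illustrative:

    #include <fcntl.h>
    #include <unistd.h>

    void vulnerable(void)
    {
        if (access("/tmp/status", W_OK) == 0) {      /* time of check */
            int fd = open("/tmp/status", O_WRONLY);  /* time of use   */
            if (fd >= 0) {
                /* Write as if the earlier check still held. Between
                   access() and open(), an attacker can swap the file
                   for a symlink to something protected. */
                close(fd);
            }
        }
    }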

One day soon, this kingdom will be number one.

Error Handling

Want to break software? Throw some junk at a program and see what errors you cause. Errors are not only a great source of “TMI” from a program, but they are also a source of inconsistent thinking that can be gamed. It gets worse, though. In modern object-oriented systems, the notion of exceptions has reintroduced the banned concept of goto right back on center stage. Alas.

Errors and error handlers represent a class of programming contract. So, in some sense, errors represent the two sides of a special form of API; but security defects related to error handling are so common that they deserve a special kingdom all of their own. As with API Abuse, there are two ways to blow it here. The first is forgetting to handle errors at all (or handling them so roughly that they get all bruised and bloody). The second is producing errors that either give out way too much information (to possible attackers) or are so radioactive that nobody wants to handle them.
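Both failure modes fit in a few lines of C. This is a sketch; the filenames and the error message are invented for illustration:

    #include <stdio.h>

    void save_record(const char *path, const char *data)
    {
        FILE *f = fopen(path, "w");
        fprintf(f, "%s\n", data);  /* failure mode 1: fopen() may have
                                      returned NULL, and nobody checked */
        fclose(f);
    }

    void report_failure(void)
    {
        /* failure mode 2: the message hands an attacker internal details
           (paths, versions, privileges) that belong in a private log */
        printf("ERROR: write to /srv/db-4.2.1/records failed as uid 0\n");
    }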

Code Quality

Security is a subset of reliability, just as all future TV shows are a subset of monkeys banging on zillions of keyboards. If you are able to completely specify your system and all of its positive and negative security possibilities, then security is a subset of reliability. In the real world, security deserves an entire budget of its own. If you’ve gotten this far into the book (lucky Chapter 12 plus or minus one), you probably agree that the current state of the art requires some special attention for security. Poor code quality leads to unpredictable behavior. From a user’s perspective that often manifests itself as poor usability. For an attacker, bad quality provides an opportunity to stress the system in unexpected ways.
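Here is a hypothetical C sketch of how a garden-variety quality bug becomes an attack opportunity (the request-handling details are made up):

    #include <stdlib.h>
    #include <string.h>

    /* Each malformed request leaks one buffer, so an attacker who sends
       malformed requests on purpose can exhaust memory at will. */
    int handle_request(const char *req)
    {
        char *copy = malloc(strlen(req) + 1);
        if (copy == NULL)
            return -1;
        strcpy(copy, req);
        if (strncmp(copy, "GET ", 4) != 0)
            return -1;                /* BUG: early return leaks copy */
        /* ... process the request ... */
        free(copy);
        return 0;
    }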

Encapsulation

Encapsulation is about drawing strong boundaries between things and setting up barriers between them. In a Web browser this might mean ensuring that mobile code can’t whack your hard drive arbitrarily (bad applet, kennel up). On a Web Services server that might mean differentiating between valid data that have been authenticated and run through the white-list and mystery data that were found sitting on the floor in the men’s room under the urinal. Boundaries are critical. Some of the most important boundaries today come between classes with various methods. Trust and trust models require careful and meticulous attention to boundaries. Keep your hands off my stuff!

Environment

Another one of those pesky extra things. Turns out that software runs on a machine with certain bindings and certain connections to the bad, mean universe. Getting outside the software is important (write that down, you heard me say it here). This kingdom is the kingdom of outside→in. It includes all of the stuff that is outside of your code but is still critical to the security of the software you create.

The Phyla

The big list in this section takes the following form:

Kingdom

  • Phylum

<explanatory sentence or two>

I now introduce the phyla that fit under the seven (plus one) kingdoms. To better understand the relationship between kingdoms and phyla, consider a recently found vulnerability in Adobe Reader 5.0.x for UNIX. The vulnerability is present in a function UnixAppOpenFilePerform() that copies user-supplied data into a fixed-size stack buffer using a call to sprintf(). If the size of the user-supplied data is greater than the size of the buffer it is being copied into, important information, including the stack pointer, is overwritten. By supplying a malicious PDF document, an attacker can execute arbitrary commands on the target system.

The attack is possible because of a simple coding error: the absence of a check that makes sure the size of the user-supplied data is no greater than the size of the destination buffer. Developers will associate this error with a failure to code defensively around the call to sprintf(). I classify this coding error according to the attack it enables: “Buffer Overflow.” I chose Input Validation and Representation as the kingdom for the Buffer Overflow phylum because the lack of proper input validation is the root cause that makes the attack possible.
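Reduced to its essentials, the flawed pattern (and a defensive alternative) looks like the following C sketch. The names and buffer size are illustrative assumptions, not taken from the actual product:

    #include <stdio.h>

    /* Flawed: no check that the user-supplied data fits the buffer. */
    void open_file_bad(const char *user_supplied)
    {
        char cmd[1024];
        sprintf(cmd, "viewer %s", user_supplied);  /* can smash the stack */
        /* ... */
    }

    /* Defensive: bound the write and refuse oversized input. */
    void open_file_better(const char *user_supplied)
    {
        char cmd[1024];
        if (snprintf(cmd, sizeof(cmd), "viewer %s", user_supplied)
                >= (int)sizeof(cmd))
            return;                                /* too long; reject it */
        /* ... */
    }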

The coding errors represented by phyla can all be detected by static source code analysis tools. Source code analysis offers developers an opportunity to get quick feedback about the code they write. I strongly advocate educating developers about coding errors by having them use a source code analysis tool (see Chapter 4).

  1. Input Validation and Representation

    • Buffer Overflow

      Writing outside the bounds of allocated memory can corrupt data, crash the program, or cause the execution of an attack payload.

    • Command Injection

      Executing commands from an untrusted source or in an untrusted environment can cause an application to execute malicious commands on behalf of an attacker.

    • Cross-Site Scripting

      Sending unvalidated data to a Web browser can result in the browser executing malicious code (usually scripts).

    • Format String

      Allowing an attacker to control a function’s format string may result in a buffer overflow.

    • HTTP Response Splitting

      Writing unvalidated data into an HTTP header allows an attacker to specify the entirety of the HTTP response rendered by the browser.

    • Illegal Pointer Value

      This function can return a pointer to memory outside of the buffer to be searched. Subsequent operations on the pointer may have unintended consequences.

    • Integer Overflow

      Not accounting for integer overflow can result in logic errors or buffer overflows.

    • Log Forging

      Writing unvalidated user input into log files can allow an attacker to forge log entries or inject malicious content into logs.

    • Path Traversal

      Allowing user input to control paths used by the application may enable an attacker to access otherwise protected files.

    • Process Control

      Executing commands or loading libraries from an untrusted source or in an untrusted environment can cause an application to execute malicious commands (and payloads) on behalf of an attacker.

    • Resource Injection

      Allowing user input to control resource identifiers may enable an attacker to access or modify otherwise protected system resources.

    • Setting Manipulation

      Allowing external control of system settings can disrupt service or cause an application to behave in unexpected ways.

    • SQL Injection

      Constructing a dynamic SQL statement with user input may allow an attacker to modify the statement’s meaning or to execute arbitrary SQL commands.

    • String Termination Error

      Relying on proper string termination may result in a buffer overflow.

    • Struts: Duplicate Validation Forms

      Multiple validation forms with the same name indicate that validation logic is not up to date.

    • Struts: Erroneous validate() Method

      The validator form defines a validate() method but fails to call super.validate().

    • Struts: Form Bean Does Not Extend Validation Class

      All Struts forms should extend a Validator class.

    • Struts: Form Field without Validator

      Every field in a form should be validated in the corresponding validation form.

    • Struts: Plug-in Framework Not in Use

      Use the Struts Validator to prevent vulnerabilities that result from unchecked input.

    • Struts: Unused Validation Form

      An unused validation form indicates that validation logic is not up to date.

    • Struts: Unvalidated Action Form

      Every action form must have a corresponding validation form.

    • Struts: Validator Turned Off

      This action form mapping disables the form’s validate() method.

    • Struts: Validator without Form Field

      Validation fields that do not appear in the forms they are associated with indicate that the validation logic is out of date.

    • Unsafe JNI

      Improper use of the Java Native Interface (JNI) can render Java applications vulnerable to security flaws in other languages. Language-based encapsulation is broken.

    • Unsafe Reflection

      An attacker may be able to create unexpected control flow paths through the application, potentially bypassing security checks.

    • XML Validation

      Failure to enable validation when parsing XML gives an attacker the opportunity to supply malicious input.

  2. API Abuse

    • Dangerous Function

      Functions that cannot be used safely should never be used.

    • Directory Restriction

      Improper use of the chroot() system call may allow attackers to escape a chroot jail.

    • Heap Inspection

      Do not use realloc() to resize buffers that store sensitive information.

    • J2EE Bad Practices: getConnection()

      The J2EE standard forbids the direct management of connections.

    • J2EE Bad Practices: Sockets

      Socket-based communication in Web applications is prone to error.

    • Often Misused: Authentication

      (See the complete entry later in this chapter.)

    • Often Misused: Exception Handling

      A dangerous function can throw an exception, potentially causing the program to crash.

    • Often Misused: Path Manipulation

      Passing an inadequately sized output buffer to a path manipulation function can result in a buffer overflow.

    • Often Misused: Privilege Management

      Failure to adhere to the principle of least privilege amplifies the risk posed by other vulnerabilities.

    • Often Misused: String Manipulation

      Functions that manipulate strings encourage buffer overflows.

    • Unchecked Return Value

      Ignoring a method’s return value can cause the program to overlook unexpected states and conditions.

  3. Security Features

    • Insecure Randomness

      Standard pseudo-random number generators cannot withstand cryptographic attacks.

    • Least Privilege Violation

      The elevated privilege level required to perform operations such as chroot() should be dropped immediately after the operation is performed.

    • Missing Access Control

      The program does not perform access control checks in a consistent manner across all potential execution paths.

    • Password Management

      Storing a password in plaintext may result in a system compromise.

    • Password Management: Empty Password in Configuration File

      Using an empty string as a password is insecure.

    • Password Management: Hard-Coded Password

      Hard-coded passwords may compromise system security in a way that cannot be easily remedied.

    • Password Management: Password in Configuration File

      Storing a password in a configuration file may result in system compromise.

    • Password Management: Weak Cryptography

      Obscuring a password with trivial encoding does not protect the password.

    • Privacy Violation

      Mishandling private information, such as customer passwords or social security numbers, can compromise user privacy and is often illegal.

  4. Time and State

    • Deadlock

      Inconsistent locking discipline can lead to deadlock.

    • Failure to Begin a New Session upon Authentication

      Using the same session identifier across an authentication boundary allows an attacker to hijack authenticated sessions.

    • File Access Race Condition: TOCTOU

      The window of time between when a file property is checked and when the file is used can be exploited to launch a privilege escalation attack.

    • Insecure Temporary File

      Creating and using insecure temporary files can leave application and system data vulnerable to attack.

    • J2EE Bad Practices: System.exit()

      A Web application should not attempt to shut down its container.

    • J2EE Bad Practices: Threads

      Thread management in a Web application is forbidden in some circumstances and is always highly error prone.

    • Signal Handling Race Conditions

      Signal handlers may change shared state relied on by other signal handlers or application code, causing unexpected behavior.

  5. Error Handling

    • Catch NullPointerException

      Catching NullPointerException should not be used as an alternative to programmatic checks to prevent dereferencing a null pointer.

    • Empty Catch Block

      Ignoring exceptions and other error conditions may allow an attacker to induce unexpected behavior unnoticed.

    • Overly Broad Catch Block

      Catching overly broad exceptions promotes complex error-handling code that is more likely to contain security vulnerabilities.

    • Overly Broad Throws Declaration

      Throwing overly broad exceptions promotes complex error-handling code that is more likely to contain security vulnerabilities.

    • Unchecked Return Value

      Ignoring a method’s return value can cause the program to overlook unexpected states and conditions.

  6. Code Quality

    • Double Free

      Calling free() twice on the same memory address can lead to a buffer overflow.

    • Inconsistent Implementations

      Functions with inconsistent implementations across operating systems and operating system versions cause portability problems.

    • Memory Leak

      Memory is allocated but never freed, leading to resource exhaustion.

    • Null Dereference

      The program can potentially dereference a null pointer, thereby raising a NullPointerException.

    • Obsolete

      The use of deprecated or obsolete functions may indicate neglected code.

    • Undefined Behavior

      The behavior of this function is undefined unless its control parameter is set to a specific value.

    • Uninitialized Variable

      The program can potentially use a variable before it has been initialized.

    • Unreleased Resource

      The program can potentially fail to release a system resource.

    • Use After Free

      Referencing memory after it has been freed can cause a program to crash.

  7. Encapsulation

    • Comparing Classes by Name

      Comparing classes by name can lead a program to treat two classes as the same when they actually differ.

    • Data Leaking Between Users

      Data can “bleed” from one session to another through member variables of singleton objects, such as servlets, and objects from a shared pool.

    • Leftover Debug Code

      Debug code can create unintended entry points in an application.

    • Mobile Code: Object Hijack

      Attackers can use cloneable objects to create new instances of an object without calling its constructor.

    • Mobile Code: Use of Inner Class

      Inner classes are translated into classes that are accessible at package scope and may expose code that the programmer intended to keep private to attackers.

    • Mobile Code: Non-Final Public Field

      Non-final public variables can be manipulated by an attacker to inject malicious values.

    • Private Array-Typed Field Returned from a Public Method

      The contents of a private array may be altered unexpectedly through a reference returned from a public method.

    • Public Data Assigned to Private Array-Typed Field

      Assigning public data to a private array is equivalent to giving public access to the array.

    • System Information Leak

      Revealing system data or debugging information helps an adversary learn about the system and form an attack plan.

    • Trust Boundary Violation

      Commingling trusted and untrusted data in the same data structure encourages programmers to mistakenly trust unvalidated data.

  • Environment

    • ASP.NET Misconfiguration: Creating Debug Binary

      Debugging messages help attackers learn about the system and plan a form of attack.

    • ASP.NET Misconfiguration: Missing Custom Error Handling

      An ASP.NET application must enable custom error pages in order to prevent attackers from mining information from the framework’s built-in responses.

    • ASP.NET Misconfiguration: Password in Configuration File

      Do not hardwire passwords into your software.

    • Insecure Compiler Optimization

      Improperly scrubbing sensitive data from memory can compromise security.

    • J2EE Misconfiguration: Insecure Transport

      The application configuration should ensure that SSL is used for all access-controlled pages.

    • J2EE Misconfiguration: Insufficient Session-ID Length

      Session identifiers should be at least 128 bits long to prevent brute-force session guessing.

    • J2EE Misconfiguration: Missing Error Handling

      A Web application must define a default error page for 404 errors and 500 errors and must catch java.lang.Throwable exceptions to prevent attackers from mining information from the application container’s built-in error response.

    • J2EE Misconfiguration: Unsafe Bean Declaration

      Entity beans should not be declared remote.

    • J2EE Misconfiguration: Weak Access Permissions

      Permission to invoke EJB methods should not be granted to the ANYONE role.

More Phyla Needed

This taxonomy includes coding errors that occur in a variety of programming languages. The most important among them are C and C++, Java, and the .NET family (including C# and ASP). Some of the phyla are language-specific because the types of errors they represent apply only to specific languages. One example is the Double Free phylum. This phylum identifies incorrect usage of low-level memory routines and is specific to C and C++ because neither Java nor the managed portions of the .NET languages expose low-level memory APIs.
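For the curious, a minimal C sketch of the pattern behind the Double Free phylum:

    #include <stdlib.h>

    void double_free_demo(void)
    {
        char *p = malloc(64);
        if (p == NULL)
            return;
        free(p);
        /* ... later, often on a separate error-handling path ... */
        free(p);  /* BUG: the second free() corrupts allocator metadata,
                     which attackers have used to hijack control flow */
    }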

In addition to being language-specific, some phyla are framework-specific. For example, the Struts phyla apply only to the Struts framework, and the J2EE phyla are only applicable in the context of the J2EE applications. Log Forging, on the other hand, is a more general phylum.

The phylum list as it exists is certainly incomplete, but it is adaptable to changes in trends and discoveries of new defects that are bound to happen over time. The current list reflects a focus on finding and classifying security-related defects rather than more general quality or reliability issues. The Code Quality kingdom could potentially contain many more phyla, but the ones that are currently included are the most likely to affect software security directly. Finally, the most important goal of this taxonomy is classifying the errors that matter most to real-world enterprise developers: most of the information here is derived from the literature, various colleagues, and hundreds of customers.

A Complete Example

Each phylum in the taxonomy is associated with a number of clear, fleshed-out examples similar in nature to the rules described in Chapter 4. An example from the kingdom API Abuse, phylum Often Misused: Authentication, is included here to give you some idea of the form that a complete entry takes. For more, see <http://vulncat.fortifysoftware.com>.

Lists, Piles, and Collections

The idea of collecting and organizing information about computer security vulnerabilities has a long history (see the box Academic Literature). More recently, a number of practitioners have developed “top ten” lists and other related collections based on experience in the field. The taxonomy introduced here negotiates a middle ground between rigorous academic studies and ad hoc collections based on experience.

Two of the most popular and useful lists are the “19 Sins” and the “OWASP top ten.” The first list, at one month old as I write this, is carefully described in the new book 19 Deadly Sins of Software Security [Howard, LeBlanc, and Viega 2005]. The second is the “OWASP Top Ten Most Critical Web Application Security Vulnerabilities,” available on the Web at <http://www.owasp.org/documentation/topten.html>. Both of these collections, though extremely useful and applicable, share one unfortunate property: an overabundance of complexity. My hard constraint to stick to seven things helps cut through the complexity.

By discussing the 19 Sins and OWASP top ten lists with respect to the taxonomy here, I hope to illustrate and emphasize why simplicity is essential to any taxonomy. The main limitation of both lists is that they mix specific types of errors and vulnerability classes and talk about them all at the same level of abstraction. The 19 Sins include both “Buffer Overflows” and “Failing to Protect Network Traffic” at the same level, even though the first is a very specific coding error, while the second is a class composed of various kinds of errors. Similarly, OWASP’s top ten includes “Cross Site Scripting (XSS) Flaws” and “Insecure Configuration Management” at the same level. This is a serious problem that leads to confusion among practitioners.

My classification scheme consists of two hierarchical levels: kingdoms and phyla. Kingdoms represent classes of errors, while the phyla that comprise the kingdoms represent collections of specific errors. Even though the structure of my classification scheme is different from the structure of the 19 Sins and OWASP top ten lists, the categories that comprise these lists can be easily mapped to the kingdoms (as I show next).

Nineteen Sins Meet Seven Kingdoms

  1. Input Validation and Representation

    Sin: Buffer Overflows

    Sin: Command Injection

    Sin: Cross-Site Scripting

    Sin: Format String Problems

    Sin: Integer Range Errors

    Sin: SQL Injection

  2. API Abuse

    Sin: Trusting Network Address Information

  3. Security Features

    Sin: Failing to Protect Network Traffic

    Sin: Failing to Store and Protect Data

    Sin: Failing to Use Cryptographically Strong Random Numbers

    Sin: Improper File Access

    Sin: Improper Use of SSL

    Sin: Use of Weak Password-Based Systems

    Sin: Unauthenticated Key Exchange

  4. Time and State

    Sin: Signal Race Conditions

    Sin: Use of “Magic” URLs and Hidden Forms

  5. Error Handling

    Sin: Failure to Handle Errors

  6. Code Quality

    Sin: Poor Usability

  7. Encapsulation

    Sin: Information Leakage

  • Environment

The 19 Sins are an extremely important collection of software security problems at many different levels. By fitting them into the seven kingdoms, a cleaner organization begins to emerge.

Seven Kingdoms and the OWASP Ten

Top ten lists are appealing, especially since the cultural phenomenon that is David Letterman. The OWASP top ten list garners much attention because it is short and also useful. Once again, a level-blending problem is apparent in the OWASP list, but this is easily resolved by appealing to the seven kingdoms.

  1. Input Validation and Representation

    OWASP A1: Unvalidated Input

    OWASP A4: Cross-Site Scripting (XSS) Flaws

    OWASP A5: Buffer Overflows

    OWASP A6: Injection Flaws

  2. API Abuse

  3. Security Features

    OWASP A2: Broken Access Control

    OWASP A8: Insecure Storage

  4. Time and State

    OWASP A3: Broken Authentication and Session Management

  5. Error Handling

    OWASP A7: Improper Error Handling

  6. Code Quality

    OWASP A9: Denial of Service

  7. Encapsulation

  • Environment

    OWASP A10: Insecure Configuration Management

Go Forth (with the Taxonomy) and Prosper

The seven pernicious kingdoms are a simple, effective organizing tool for software security coding errors. With over 60 clearly defined phyla, the taxonomy here is both powerful and useful. Descriptions of the phyla can be found on the Web at <http://vulncat.fortifysoftware.com>.

The classification scheme here is designed to organize security rules and thus be of help to software developers who are concerned with writing secure code and being able to automate detection of security defects. These goals make the taxonomy:

  • Simple

  • Intuitive to a developer

  • Practical (rather than theoretical and comprehensive)

  • Amenable to automatic identification of errors with static analysis tools

  • Adaptable with respect to changes in trends that happen over time

Taxonomy work is ongoing. Your help is requested.



[1] Parts of this chapter appeared in original form in Proceedings of the NIST Workshop on Software Security Assurance Tools, Techniques, and Metrics coauthored with Katrina Tsipenyuk and Brian Chess [Tsipenyuk, Chess, and McGraw 2005].

[2] This should really come as no surprise. Static analysis for architectural flaws would require a formal architectural description so that pattern matching could occur. No such architectural description exists. (And before you object, UML doesn’t cut it.)

[3] The magic number seven plus or minus two comes from George Miller’s classic paper “The Magical Number Seven, Plus or Minus Two,” The Psychological Review, vol. 63, pp. 81–97, 1956; see <http://www.well.com/user/smalin/miller.html>.

[4] Looks like the Police were on to something with that Synchronicity album after all.
