Chapter 11. Software Development Security


Terms you’ll need to understand:

Image Acceptance testing

Image Cohesion and coupling

Image Tuple

Image Polyinstantiation

Image Inference

Image Fuzzing

Image Bytecode

Image Database

Image Buffer overflow

Topics you’ll need to master:

Image Identifying security in the software development lifecycle and system development life-cycle

Image Understanding database design

Image Knowing the capability maturity model

Image Stating the steps of the development life-cycle

Image Determining software security effectiveness

Image Recognizing acquired software security impact

Image Describing different types of application design techniques

Image Understanding the role of change management

Image Recognizing the primary types of databases


Introduction

Software plays a key role in the productivity of most organizations, yet our acceptance of it is different from everything else we tend to deal with. For example, if you were to buy a defective car that exploded in minor accidents, the manufacturer would be forced to recall the car. However, if a user buys a buggy piece of software, the buyer has little recourse. The buyer could wait for a patch, buy an upgrade, or maybe just buy another vendor’s product. Well-written applications are essential for good security. As such, this chapter focuses on information the CISSP must know to apply security in the context of the CIA triad to the software development lifecycle, including programming languages, application design methodologies, and change management. A CISSP must not only understand the software development lifecycle, but also how databases are designed.

Databases contain some of the most critical assets of an organization, and are a favorite target of hackers. The CISSP must understand design, security issues, control mechanisms, and common vulnerabilities of databases. In addition to protecting the corporation’s database from attacks, the security professional must be sensitive to the interconnectivity of databases and the rise of large online cloud databases.

Software Development

A CISSP is not expected to be an expert programmer or understand the inner workings of a Java program. What the CISSP must know is the overall environment in which software and systems are developed. The CISSP must also understand the development process, and be able to recognize whether adequate controls have been developed and implemented. Know that it’s always cheaper to build in security up-front than it is to add it later. Organizations accomplish this by using a structured approach, so that:

Image Risk is minimized.

Image Return on investment from using said software is maximized.

Image Security controls are established so that the risk associated with using software is mitigated.

New systems are created when new opportunities are discovered; organizations take advantage of these technologies to solve existing problems, accelerate business processes, and improve productivity. Although it’s easy to see the need to incorporate security from the beginning of the process, the historical reality of design and development has been deficient in this regard. Most organizations are understaffed, and duties are not properly separated. Too often, inadequate consideration is given to the implementation of access-limiting controls within a program’s code.

As a result, defective code has shipped with excessive exposure points, leading to a parade of vulnerabilities following its release. New technologies and developments such as cloud computing and the Internet of Things (IoT) have made a structured, secure development process even more imperative. It is critical that development teams enforce a structured software development life-cycle with checks and balances, where security is thought through from start to finish.

Avoiding System Failure

No matter how hard we plan, systems will still fail. Organizations must prepare for these events with the proper placement of compensating controls, which help limit the damage. Examples of compensating controls include checks, application controls, and fail-safe procedures.

Checks and Application Controls

The easiest way to minimize problems in the processing of data is to ensure that only accurate, complete, and timely inputs can occur. Even poorly written applications can be made more robust by adding controls that check limits, data formats, and data lengths; this is all referred to as data input validation. Controls that verify data is only processed through authorized routines should be in place. These application controls should be designed to detect any problems and to initiate corrective action. If there are mechanisms in place that permit the override of these security controls, their use should be logged and reviewed. Table 11.1 shows some common types of controls.
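The input checks described above can be sketched as a small validation routine. This is a minimal illustration, not code from the chapter; the field names, limits, and account-ID pattern are assumptions chosen for the example.

```python
# A minimal sketch of data input validation (limit, format, and length
# checks); field names and bounds are illustrative assumptions.
import re

def validate_payment(record):
    """Return a list of validation errors; an empty list means the input passed."""
    errors = []
    # Limit check: amount must fall within an authorized range
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or not (0 < amount <= 10_000):
        errors.append("amount outside allowed limits")
    # Format check: account IDs must match an expected pattern
    account = record.get("account", "")
    if not re.fullmatch(r"[A-Z]{2}\d{8}", account):
        errors.append("account format invalid")
    # Length check: free-text fields are bounded, never allowed to overflow
    memo = record.get("memo", "")
    if len(memo) > 140:
        errors.append("memo exceeds maximum length")
    return errors
```

The application control idea is that validation runs before any processing routine sees the data, and rejected inputs can be logged for corrective action.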


TABLE 11.1 Checks and Controls

Failure States

Knowing that all applications can fail, it is important that developers create mechanisms for a safe failure, thereby containing damage. Well-coded applications have built-in recovery procedures that are triggered if a failure is detected; the system is protected from compromise by terminating the service or disabling the system until the cause of failure can be investigated.
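A safe failure state can be as simple as denying access whenever a security check cannot complete. The following sketch assumes a hypothetical authorization callback; it shows the fail-closed pattern rather than any particular product's behavior.

```python
# A sketch of a fail-secure (fail-closed) check: if the authorization
# service raises an unexpected error, access is denied rather than granted.
def is_authorized(user, check_permission):
    try:
        return check_permission(user)
    except Exception:
        # Failure state: deny by default and leave the cause to be investigated
        return False
```

A fail-open version would return True in the except branch, which is the risky recovery state the tip below warns about.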


Tip

Systems that recover into a fail-open state can allow an attacker to easily compromise the system. Systems that fail open are typically undesirable because of the security risk. However, some IDSs/IPSs (intrusion detection systems/intrusion prevention systems) will go into a fail-open state to prevent the disruption of traffic.


The System Development Lifecycle

Using a framework for system development can facilitate and structure the development process. As an example, the National Institute of Standards and Technology (NIST) defines the System Development Lifecycle (SDLC) in NIST SP 800-34 as “the scope of activities associated with a system, encompassing the system’s initiation, development and acquisition, implementation, operation and maintenance, and ultimately its disposal that instigates another system initiation.” Many other framework models exist, such as Microsoft’s Security Development Lifecycle (SDL), which consists of Training, Requirements, Design, Implementation, Verification, Release, and Response. Regardless of the model, the overall goal is the same: to control the development process and add security at each level or stage within the process. The System Development Lifecycle we will review has been separated into seven distinct steps:

1. Project initiation

2. Functional requirements and planning

3. Software design specifications

4. Software development and build

5. Acceptance testing and implementation

6. Operational/maintenance

7. Disposal


ExamAlert

Read all test questions carefully to make sure you understand the context in which SDL, SDLC, or other terms are being used.


Regardless of the titles that a given framework might assign to each step, the SDLC’s purpose is to provide security in the software development lifecycle. The failure to adopt a structured development model increases a product’s risk of failure because it is likely that the final product will not meet the customer’s needs. Table 11.2 describes each step of development and the corresponding activities of that phase.


TABLE 11.2 SDLC Stages and Activities

Project Initiation

This initial step usually includes meeting with everyone involved with the project to answer the big questions: What are we doing? Why are we doing it? Who is our customer? At this meeting, the feasibility of the project is considered. The cost of the project must be discussed, as well as the potential benefits that the product is expected to bring to the system’s users. A payback analysis should be performed to determine how long the project would take to pay for itself; in other words, how much time will lapse before accrued benefits overtake accrued and continuing costs.

Should it be determined that the project will move forward, the team will want to develop a preliminary timeline. Discussions should be held to determine the level of risk involved with handling data, and to establish the ramifications of accidental exposure. This activity clarifies the precise type and nature of information that will be processed, and its level of sensitivity. This is the first look at security. This analysis must be completed before the functional requirements and planning stage begins.


ExamAlert

For the exam you should understand that users should be brought into the process as early as possible. You are building something for them and must make sure that the designed system/product meets their needs.


Functional Requirements and Planning

This phase is responsible for fully defining the need for the solution, and mapping how the proposed solution meets the need. This stage requires the participation of management as well as users. Users need to identify requirements and desires they have regarding the design of the application. Security representatives must verify the identified security requirements, and determine whether adequate security controls are being defined.

An entity relationship diagram (ERD) is often used to help map the identified and verified requirements to the needs being met. ERDs define the relationships between the many elements of a project. An ERD is a type of data model that groups together like data elements. Each entity is drawn as a rectangular box containing an identifying name and has a unique identifying attribute called the primary key. Relationships, drawn as diamonds, describe how the various entities are related to each other.

ERDs can be used to help define a data dictionary. After the data dictionary is designed, the database schema is developed. This schema further defines tables and fields, and the relationships between them. Figure 11.1 shows the basic design of an ERD. The completed ERD becomes the blueprint for the design, and is referred to during the design phase.
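The step from ERD to schema can be illustrated with two hypothetical entities related by a foreign key. The entity and column names below are invented for the example, not taken from the chapter.

```python
# A hypothetical schema derived from a simple ERD: two entities
# (customer, order) related by a foreign key; names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.execute("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,  -- the entity's primary key attribute
        name        TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE "order" (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
        total       REAL NOT NULL
    )""")
conn.execute("INSERT INTO customer VALUES (1, 'Acme Co')")
conn.execute('INSERT INTO "order" VALUES (100, 1, 49.95)')
# The relationship in the diagram becomes a JOIN in the schema
row = conn.execute(
    'SELECT c.name, o.total FROM customer c JOIN "order" o '
    "ON c.customer_id = o.customer_id").fetchone()
```

The foreign key is the concrete form of the diamond-shaped relationship in the diagram: it constrains which rows in one table may refer to rows in the other.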


FIGURE 11.1 Entity Relationship Diagram.

Software Design Specifications

Detailed design specifications are generated during this stage, either for a program that will be created or in support of the acquisition of an existing program. All functions and operations are described. Programmers design screen layouts and chart process diagrams. Supporting documentation will also be generated. The output of the software design specification stage is a set of specifications that delineate the new system as a collection of modules and subsystems.

Scope creep most often occurs here, and is simply the expansion of the scope of the project. Small changes in the design can add up over time. Although little changes might not appear to have a big cost or impact on the schedule of a project, these changes have a cumulative effect and increase both length and cost of a project.

Proper detail at this stage plays a large role in the overall security of the final product. Security should be the focus here, as controls are developed for input handling, output handling, audit mechanisms, and file protection. Sample input controls include dollar counts, transaction counts, and error detection and correction. Sample output controls include validity checking and authorization controls.

Software Development and Build

During the software development and build phase, programmers work to develop the application code specified in the previous stage, as illustrated in Figure 11.2.


FIGURE 11.2 Development and Build Activities.

Programmers should strive to develop modules that have high cohesion and low coupling. Cohesion measures how well a module performs a single, well-defined task with little input from other modules. Coupling measures the number of interconnections or dependencies between modules. With low coupling, a change to one module should not affect another; modules with low coupling also tend to have high cohesion.
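Cohesion and coupling can be made concrete with a small sketch. The function names are invented for illustration; the point is the narrow, explicit interface between the two modules.

```python
# A sketch contrasting coupling styles; module and function names are
# illustrative. High cohesion: each function does one task. Low coupling:
# modules interact through values passed explicitly, not shared state.

def compute_tax(amount, rate):
    """Cohesive: a single task, with all inputs passed explicitly."""
    return round(amount * rate, 2)

def format_invoice_line(description, total):
    """Cohesive: formatting only; knows nothing about how totals are computed."""
    return f"{description}: ${total:.2f}"

# Loose coupling: the caller wires the modules together, so changing
# compute_tax's internals cannot break format_invoice_line.
line = format_invoice_line("Widget", compute_tax(100.0, 0.07))
```

A tightly coupled version would have the formatter reach into the tax module's internal state, so any change to one would risk breaking the other.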


Tip

Sometimes you may not build the software at all, finding it easier to purchase a previously developed product. In these situations security cannot be forgotten: you will need to fully test the acquired software to verify its security impact. Because the source code may not always be available, you may need to perform other types of testing.


This stage includes testing of the individual modules developed, and accurate results have a direct impact on the next stage: integrated testing with the main program. Maintenance hooks are sometimes used at this point in the process to allow programmers to test modules separately without using normal access control procedures. It is important that these maintenance hooks, also referred to as backdoors, be removed before the software code goes to production. Programmers might use online programming facilities to access the code directly from their workstations. Although this typically increases productivity, it also increases the risk that someone will gain unauthorized access to the program library.


ExamAlert

For the exam you should understand that separation of duties is of critical importance during the SDLC process. Activities such as development, testing, and production should be properly separated and their duties should not overlap. As an example, programmers should not have direct access to production (or released) code or have the ability to change production or released code.



Caution

Maintenance hooks or trapdoors are software mechanisms that are installed to bypass the system’s security protections during the development and build stage. To prevent a potential security breach, these hooks must be removed before the product is released into production. You can find an example of a maintenance hook at www.securityfocus.com/bid/7673/discuss. This alert discusses a weakness in the TextPortal application and covers how an attacker can obtain unauthorized access. The issue exists due to an undocumented password of “god2” that can be used for the default administrative user account.


Controls are built into the program during this stage. These controls should include preventive, detective, and corrective mechanisms. Preventive controls include user authentication and data encryption. Detective controls provide audit trails and logging mechanisms. Corrective controls add fault tolerance and data integrity mechanisms. Unit testing occurs here, but acceptance testing takes place in the next stage. Test classifications are broken down into general categories:

Image Unit testing—Examines an individual program or module.

Image Interface testing—Examines hardware or software to evaluate how well data can be passed from one entity to another.

Image System testing—A series of tests that starts in this phase and continues into the acceptance testing phase, including recovery testing, security testing, stress testing, volume testing, and performance testing.
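Unit testing, the first category above, is commonly automated. The sketch below uses Python's standard unittest module; the length-limit check under test is a stand-in invented for the example, not code from the chapter.

```python
# A minimal unit test sketch using Python's unittest; the module under
# test (a length-limit check) is a contrived stand-in.
import unittest

def within_length(value, maximum=64):
    """The unit under test: accept only strings up to a maximum length."""
    return isinstance(value, str) and len(value) <= maximum

class TestWithinLength(unittest.TestCase):
    def test_accepts_short_string(self):
        self.assertTrue(within_length("abc"))

    def test_rejects_overlong_string(self):
        self.assertFalse(within_length("x" * 65))

    def test_rejects_non_string(self):
        self.assertFalse(within_length(12345))

# unittest.main() would run the suite when this file is executed as a script.
```

Each test method exercises the module in isolation, which is exactly the scope of unit testing; interface and system testing then combine the verified modules.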


Caution

Reverse engineering can be used to reduce development time. This is somewhat controversial because reverse engineering can be used to bypass normal access control mechanisms or disassemble another company’s program illegally. Most software licenses make it illegal to reverse engineer the associated code. Laws such as the Digital Millennium Copyright Act (DMCA) can also prohibit the reverse engineering of code.


Acceptance Testing and Implementation

This stage occurs when the application coding is complete, and should not be performed by the programmers. Instead, testing should be performed by test experts or quality assurance engineers. The important concept here is separation of duties. If the code is built and verified by the same individuals, errors can be overlooked and security functions can be bypassed. Models vary greatly on specifically what tests should be completed and how much if any iteration is necessary within that testing. With that said, Table 11.3 lists some common types of acceptance and verification tests of which you should be aware.


TABLE 11.3 Test Types

When all pertinent issues and concerns have been worked out between the QA engineers, the security professionals, and the programmers, the application is ready for deployment.


ExamAlert

For the exam you should understand that fuzzing is a black-box testing technique that feeds malformed data inputs to an application and monitors the application’s response. This is commonly referred to as “garbage in, garbage out” testing because it throws “garbage” at the application to see what it can handle.
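A toy fuzzer makes the idea concrete. The deliberately fragile parser below is contrived for the example; real fuzzers (e.g., coverage-guided tools) are far more sophisticated, but the feed-random-input-and-watch-for-crashes loop is the same.

```python
# A toy fuzzing sketch: feed random malformed byte strings to a parser
# and record the inputs that crash it; the fragile parser is contrived.
import random

def fragile_parse(data: bytes):
    # Contrived bug: assumes at least a 4-byte header is always present
    header = data[:4]
    return header[3]  # IndexError on inputs shorter than 4 bytes

def fuzz(target, runs=200, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducible runs
    crashes = []
    for _ in range(runs):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(blob)
        except Exception as exc:
            crashes.append((blob, type(exc).__name__))
    return crashes
```

Each crashing input is saved so a developer can reproduce the failure, which is how fuzzing turns "garbage" into actionable bug reports.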


Operations and Maintenance

The application is prepared for release into its intended environment during the implementation phase. This is the stage where final user acceptance is performed, and any required certification and/or accreditation is achieved. This stage is the final step, wherein management accepts the application and agrees that it is ready for use.

Certification requires a technical review of the system or application to ensure that it does what it is supposed to do. Certification testing often includes an audit of security controls, a risk assessment, and/or a security evaluation. Accreditation is management’s formal acceptance of the system or application. Typically, the results of the certification testing are compiled into a report that becomes the basis for the acceptance from management referred to as accreditation. Management might request additional testing, ask questions about the certification report, or simply accept the results. When the system or application is accepted, a formal acceptance statement is usually issued.


Tip

Certification is a technical evaluation and analysis of the security features and safeguards of a system to establish the extent to which the security requirements are satisfied and vendor claims are verified.

Accreditation is the formal process of management’s official approval of the certification.


Operations management begins when the application is rolled out. Maintenance, support, and technical response must be addressed. Data conversion might also need to be considered. If an existing application is being replaced, data from the old application might need to be migrated to the new one. The rollout of the application might occur all at once or in a phased process over time. Changeover techniques include:

Image Parallel operation—Both the old and new applications are run simultaneously with the same inputs, and the results from the two applications are compared. Fine-tuning can also be performed on the new application as needed. As confidence in the new application improves, the old application can be shut down. The primary disadvantage of this method is that both applications must be maintained for a period of time.

Image Phased changeover—If the application is large, a phased changeover might be possible. With this method, applications are upgraded one piece at a time.

Image Hard changeover—This method establishes a date at which users are forced to change over. The advantage of the hard changeover is that it forces all users to change at once. However, it does introduce a level of risk into the environment because things can go wrong.

Disposal

This step of the process is reached when the application is no longer needed. Those involved in this step of the process must consider how to dispose of the application, archive any information or data that might be needed in the future, perform disk sanitization (to ensure confidentiality), and dispose of equipment. This is an important step that is sometimes overlooked.

Development Methods

So, what is the most important concept of system development? Finding a good framework and adhering to the process it entails. The sections that follow explain several proven software-development processes. These models share a common element in that they all have a predictive life-cycle. Each has strengths and weaknesses. Some work well when a time-sensitive or high-quality product is needed, whereas others offer greater quality control and can scale to very large projects.

The Waterfall Model

Probably the most well known software development process is the waterfall model. This model was developed by Winston Royce in 1970, and operates as the name suggests, progressing from one level down to the next. The original model prevented developers from returning to stages once they were complete; therefore, the process flowed logically from one stage to the next. Modified versions of the model add a feedback loop so that the process can move in both directions. An advantage of the waterfall method is that it provides a sense of order and is easily documented. The primary disadvantage is that it does not work for large and complex projects because it does not allow for much revision.

The Spiral Model

This model was developed in 1988 by Barry Boehm. Each phase of the spiral model starts with a design goal and ends with the client review. The client can be either internal or external, and is responsible for reviewing progress. Analysis and engineering efforts are applied at each phase of the project. An advantage of the spiral model is that it takes risk much more seriously. Each phase of the project contains its own risk assessment. Each time a risk assessment is performed, the schedules and estimated cost to complete are reviewed and a decision is made to continue or cancel the project. The spiral model works well for large projects. The disadvantage of this method is that it is much slower and takes longer to complete. Figure 11.3 illustrates an example of this model.


FIGURE 11.3 The Spiral Model.

Joint Application Development

Joint Application Development (JAD) is a process developed at IBM in 1977. Its purpose is to accelerate the design of information technology solutions. An advantage of JAD is that it helps developers work effectively with the users who will be using the applications developed. A disadvantage is that it requires users, expert developers, and technical experts to work closely together throughout the entire process. Projects that are good candidates for JAD have some of the following characteristics:

Image Involve a group of users whose responsibilities cross department or division boundaries

Image Considered critical to the future success of the organization

Image Involve users who are willing to participate

Image Developed in a workshop environment

Image Use a facilitator who has no vested interest in the outcome

Rapid Application Development

Rapid Application Development (RAD) is a fast application development process, created to deliver speedy results. RAD is not suitable for all projects, but it works well for projects that are on strict time limits. However, this can also be a disadvantage if the quick decisions lead to poor design and product. This is why you won’t see RAD used for critical applications, such as shuttle launches. Two of the most popular RAD tools for Microsoft Windows are Delphi and Visual Basic.

Incremental Development

Incremental development defines an approach for a staged development of systems. Work is defined so that development is completed one step at a time. A minimal working application might be deployed while subsequent releases enhance functionality and/or scope.

Prototyping

Prototyping frameworks aim to reduce the time required to deploy applications. These frameworks use high-level code to quickly turn design requirements into application screens and reports that the users can review. User feedback is used to fine-tune the application and improve it. Top-down testing works best with this development construct. Although prototyping clarifies user requirements, it also leads to a quick skeleton of a product with no guts surrounding it. Seeing complete forms and menus can confuse users and clients and lead to overly optimistic project timelines. Also, because change happens quickly, changes might not be properly documented and scope creep might occur. Prototyping is often used where the product is being designed for a specific customer and is proprietary in nature.


ExamAlert

Prototyping is the process of building a proof-of-concept model that can be used to test various aspects of a design and verify its marketability. Prototyping is widely used during the development process.


Modified Prototype Model (MPM)

MPM was designed to be used for web development. MPM focuses on quickly deploying basic functionality and then using user feedback to expand that functionality. MPM is especially useful when the final nature of the product is unknown.

Computer-Aided Software Engineering

Computer-Aided Software Engineering (CASE) enhances the software development life-cycle by using software tools and automation to perform systematic analysis, design, development, and implementation of software products. The tools are useful for large, complex projects that involve multiple software components and lots of people. Its disadvantages are that it requires building and maintaining software tools, and training developers to understand how to use the tools effectively. CASE can be used for

Image Modeling real-world processes and data flows through applications

Image Developing data models to better understand process

Image Developing process and functional descriptions of the model

Image Producing databases and database management procedures

Image Debugging and testing the code

Agile Development Methods

Agile software development allows teams of programmers and business experts to work closely together.

According to the agile manifesto at agilemanifesto.org/, “We are uncovering better ways of developing software by doing it and helping others do it. Through this work, we have come to value:

Image Individuals and interactions over processes and tools.

Image Working software over comprehensive documentation.

Image Customer collaboration over contract negotiation.

Image Responding to change over following a plan.”

Agile project requirements are developed using an iterative approach, and the project is mission-driven and component-based. The project manager becomes much more of a facilitator in these situations. Popular agile development models include the following:

Image Extreme programming (XP)—The XP development model requires that teams include business managers, programmers, and end users. These teams are responsible for developing usable applications in short periods. Issues with XP are that teams are responsible not only for coding but also for writing the tests used to verify the code. There is minimal focus on structured documentation, which can be a concern. XP does not scale well for large projects.

Image Scrum—Scrum is an iterative development method in which repetitions are referred to as sprints and typically last thirty days. Scrum is typically used with object-oriented technology, and requires strong leadership and a team that can meet at least briefly each day. The planning and direction of tasks passes from the project manager to the team. The project manager’s main task is to work on removing any obstacles from the team’s path. The scrum development method owes its name to the team dynamic structure of rugby.

Capability Maturity Model

The Capability Maturity Model (CMM) was designed as a framework for software developers to improve the software development process. It allows software developers to progress from an anything-goes type of development to a highly structured, repeatable process. As software developers grow and mature, their productivity will increase and the quality of their software products will become more robust. There are five maturity levels to the CMM, as shown in Table 11.4.


TABLE 11.4 Capability Maturity Model

Although there might be questions on the exam about the CMM, it is important to note that the model was replaced in December 2007 with the Capability Maturity Model Integration (CMMI), in part due to the standardization activities of ISO 15504.


ExamAlert

Read any questions regarding CMM or CMMI carefully to make sure you understand which model the question is referencing.


The maturity levels for CMMI are: (1) Initial, (2) Managed, (3) Defined, (4) Quantitatively Managed, and (5) Optimizing. These levels are shown in Figure 11.4. CMMI has similarities with agile development methods, such as XP and Scrum. The CMMI model contains process areas and goals, and each goal comprises practices.


FIGURE 11.4 The CMMI Model.

Scheduling

Scheduling involves linking individual tasks. The link relationships are based on earliest start date or latest expected finish date. Gantt charts provide a way to display these relationships.

The Gantt chart was developed in the early 1900s as a tool for scheduling and monitoring activities and progress. Gantt charts show the start and finish dates of each element of a project, and display the relationships between activities in a calendar-like format. They have become one of the primary tools used to communicate project schedule information. Compared against a baseline, they illustrate what will happen if a task is finished early or late.

Program Evaluation and Review Technique (PERT) is the preferred tool for estimating time when a degree of uncertainty exists. PERT uses a critical path method that applies a weighted average duration estimate.

Probabilistic time estimates are used by PERT to create a three-point time estimate of best, worst, and most likely time evolution of activities. The PERT weighted average is calculated as follows:

PERT Weighted Average = (Optimistic Time + 4 × Most Likely Time + Pessimistic Time) / 6

Every task branches out to three estimates:

Image One—The most optimistic time in which the task can be completed.

Image Two—The most likely time in which the task will be completed.

Image Three—The worst-case scenario or longest time in which the task might be completed.
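The three-point estimate above reduces to a one-line calculation. The task durations in the example are illustrative values, not from the chapter.

```python
# The PERT three-point weighted average as code; durations are illustrative.
def pert_estimate(optimistic, most_likely, pessimistic):
    """Weighted average: (O + 4*M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example: a task estimated at best 4 days, most likely 6, worst 14
estimate = pert_estimate(4, 6, 14)  # (4 + 24 + 14) / 6 = 7.0 days
```

Because the most likely time is weighted four times as heavily as the extremes, one pessimistic outlier shifts the estimate without dominating it.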

Change Management

Change management is a formalized process for controlling modifications made to systems and programs: each request is analyzed, its feasibility and impact are examined, and a timeline is developed for implementing approved changes. The change management process provides all concerned parties with an opportunity to voice their opinions and concerns before changes are made. Although types of changes vary, change control follows a predictable process. The steps for a change control process are as follows:

1. Request the change.

2. Approve the change request.

3. Document the change request.

4. Test the proposed change.

5. Present the results to the change control board.

6. Implement the change if approved.

7. Document the new configuration.

8. Report the final status to management.
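The steps above can be sketched as a state machine that keeps an audit trail. The state names, the condensed flow, and the separation-of-duties rule are illustrative assumptions for the example, not a prescribed implementation.

```python
# A sketch of change-control record keeping mirroring the steps above;
# state names and the approval rule are illustrative assumptions.
class ChangeRequest:
    STATES = ["requested", "approved", "tested", "implemented", "documented"]

    def __init__(self, description, requester):
        self.description = description
        self.requester = requester
        self.state = "requested"
        self.history = [("requested", requester)]  # audit trail of each step

    def advance(self, actor):
        """Move to the next state in order, logging who performed the step."""
        nxt = self.STATES[self.STATES.index(self.state) + 1]
        # Separation of duties: the requester cannot approve their own change
        if nxt == "approved" and actor == self.requester:
            raise PermissionError("requester cannot approve own change")
        self.state = nxt
        self.history.append((nxt, actor))

cr = ChangeRequest("Patch login module", requester="dev1")
cr.advance("change-board")   # approved by the change control board
cr.advance("qa-team")        # tested by someone other than the developer
```

Because every transition is appended to `history`, the record doubles as the documentation that the process steps call for.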


Tip

One important piece of change management that is sometimes overlooked is a way to back out of the change. Sometimes things can go wrong and the change needs to be undone.


Documentation is the key to a good change control process. All system documents should be updated to indicate any changes that have been made to the system or environment. The system maintenance staff of the department requesting the change should keep a copy of that change’s approval. Without a change control process in place, there is a significant potential for security breaches. Indicators of poor change control include

Image No formal change control process is in place.

Image Changes are implemented directly by the software vendors or others without internal control, which can indicate a lack of separation of duties.

Image Programmers place code in an application that is not tested or validated.

Image The change review board did not authorize the change.

Image The programmer has access to both the object code and the production library; this situation presents a threat because the programmer might be able to make unauthorized changes to production code.

Image No version control.

Finally, this does not mean that a change will never occur without going through the change control process, because situations might arise in which emergency changes must be made. These emergencies typically are in response to situations that endanger production or could halt a critical process. If programmers are to be given special access or provided with an increased level of control, the security professional with oversight should make sure that checks are in place to track those programmers’ access and record any changes made.

Programming Languages

Programming languages permit the creation of instructions that a computer can understand. The types of tasks that get programmed, and the instructions or code used to create the programs, depend on the nature of the organization. If, for example, the company has used FORTRAN for engineering projects for the last 25 years, it might make sense to use it again for the current project. Programming has evolved through five generations of languages (GLs), as illustrated in Figure 11.5 and described in the list that follows.

Image

FIGURE 11.5 Programming Languages.

1. Generation 1—Machine language, the native language of a computer consisting of binary ones and zeros.

2. Generation 2—Assembly language, human-readable notation that translates easily into machine language.

3. Generation 3—High-level programming languages. The 1960s through the 1980s saw an emergence and growth of many third-generation programming languages, such as FORTRAN, COBOL, C, and Pascal.

4. Generation 4—Very high-level language. This generation of languages grew from the 1970s through the early 1990s. 4GLs are typically those used to access databases. SQL is an example of a fourth-generation language.

5. Generation 5—Natural language. These languages gained prominence in the 1990s and were considered the wave of the future. 5GLs are characterized by their use of inference engines and natural language processing. Mercury and Prolog are two examples of fifth-generation languages.

After the code is written, it must be translated into a format that the computer will understand. These are the three most common methods:

Image Assembler—A program translates assembly language into machine language.

Image Compiler—A compiler translates a high-level language into machine language.

Image Interpreter—Instead of compiling the entire program, an interpreter translates the program line by line. Interpreters have a fetch-and-execute cycle. An interpreted language is much slower to execute than a compiled or assembled program, but does not need a separate compilation or assembly step.

Hundreds of different programming languages exist. Many have been written to fill a specific niche or market demand. Examples of common programming languages include the following:

Image ActiveX—This technology forms a foundation for higher-level software services, such as transferring and sharing information among applications. ActiveX controls are a Component Object Model (COM) technology. COM is designed to hide the details of an individual object and focus on the object’s capabilities. An extension to COM is COM+.

Image COBOL—Common Business Oriented Language is a third-generation programming language used for business finance and administration.

Image C, C++, C#—The C programming language replaced B and was designed by Dennis Ritchie. C was originally designed for UNIX and is very popular and widely used. From a security perspective, some C functions are known for issues related to buffer overflows.

Image FORTRAN—This language features an optimized compiler that is widely used by scientists for writing numerically intensive programs.

Image HTML—Hypertext Markup Language is a markup language that is used to create web pages.

Image Java—This is a general-purpose computer programming language, developed in 1995 by Sun Microsystems.

Image Visual Basic—This programming language was designed to be used by anyone, and enables rapid development of practical programs.

Image Ruby—An object-oriented programming language that was developed in the 1990s and designed for general-purpose usage. It has been used in the development of such projects as Metasploit.

Image Scripting languages—A form of programming language that is usually interpreted rather than compiled and allows some control over a software application. Perl, Python, and JavaScript are examples of scripting languages.

Image XML—Extensible Markup Language (XML) is a markup language that specifies rules for encoding documents. XML is widely used on the Internet.

Object-Oriented Programming

Multiple development frameworks have been created to assist in defining, grouping, and reusing both code and data. Methods include data-oriented system programming, component-based programming, web-based applications, and object-oriented programming. Of these, the most commonly deployed is object-oriented programming (OOP), an object technology resulting from modular programming. OOP allows code to be reused and interchanged between programs in modular fashion without starting over from scratch. It has been widely embraced because it is more efficient and results in lower programming costs. And because it makes use of modules, a programmer can easily modify an existing program.

In OOP, objects are grouped into classes; all objects in a given group share a particular structure and behavior. Characteristics from one class can be passed down to another through the process of inheritance. Java and C++ are two examples of OOP languages.

Some of the attributes of OOP include:

Image Encapsulation—This is the act of hiding the functionality of an object inside that object or, for a process, hiding the functionality inside that process’s class. Encapsulation permits a developer to keep information disjointed; that is, to separate distinct elements so that there is no direct unnecessary sharing or interaction between the various parts.

Image Polymorphism—Technically this means that one thing has the capability to take on many appearances. In OOP, it is used to invoke a method on a class without needing to care about how the invocation is accomplished. Likewise, the specific results of the invocation can vary because objects will have different variables that will respond differently.

Image Polyinstantiation—Technically, this means that multiple instances of information are being generated. This is a tool used in many settings. For example, polyinstantiation is used to display different results to different individuals who pose identical queries on identical databases, because those individuals possess different security levels. This deployment is widely used by the government and military to unify information bases while protecting sensitive or classified information. Without polyinstantiation, an attacker might be able to aggregate information from various sources to perform what is referred to as an inference attack and determine secret information. Initially, a piece of information by itself appears useless, like a single puzzle piece, but when put together with several other pieces, an accurate picture emerges.
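These attributes are easiest to see in code. The following minimal Python sketch (the class names are hypothetical) shows encapsulation, inheritance, and polymorphism together:

```python
class Account:
    """Encapsulation: the balance is hidden behind methods."""

    def __init__(self, balance):
        self._balance = balance  # internal state, not accessed directly

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        return self._balance


class SavingsAccount(Account):
    """Inheritance: SavingsAccount reuses Account's structure and behavior."""

    def withdraw(self, amount):
        # Polymorphism: the same method name behaves differently here,
        # adding a (purely illustrative) withdrawal fee.
        fee = 1
        return super().withdraw(amount + fee)


accounts = [Account(100), SavingsAccount(100)]
# The caller invokes withdraw() without caring which class handles it.
balances = [a.withdraw(10) for a in accounts]  # [90, 89]
```

The caller never touches `_balance` directly, and the same `withdraw()` call produces class-specific behavior, which is the essence of polymorphism.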

During programming, object-oriented design (OOD) is used to bridge the gap between a real-world problem and the software solution. OOD modularizes data and procedures. This provides for a detailed description as to how a system is to be built. Object-oriented analysis (OOA) and OOD are sometimes combined as object-oriented analysis and design (OOAD).

CORBA

Functionality that exists in a different environment from your code can be accessed and shared using vendor-independent middleware known as Common Object Request Broker Architecture (CORBA). CORBA’s purpose is to allow different vendor products, such as computer languages, to work seamlessly across distributed networks of diversified computers. The heart of the CORBA system is the Object Request Broker (ORB). The ORB simplifies the client’s process of requesting server objects. The ORB locates the requested object, transparently activates it as necessary, and then delivers the requested object back to the client.

Database Management

Databases are important to business, government, and individuals because they provide a way to catalog, index, and retrieve related pieces of information and facts. These repositories of data are widely used. If you have booked a reservation on a plane, looked up the history of a used car you were thinking about buying, or researched the ancestry of your family, you have most likely used a database during your quest. The database itself can be centralized or distributed, depending on the database management system (DBMS) that has been implemented. The DBMS allows the database administrator to control all aspects of the database, including design, functionality, and security. There are several popular types of databases, although the majority of modern databases are relational. Database types include:

Image Hierarchical database management system—This form of database links records into a tree structure, in which each record can have only one owner (parent). Because of this, a hierarchical database often cannot model the many-to-many relationships found in the real world.

Image Network database management system—This type of database system was developed to be more flexible than the hierarchical database. The network database model is referred to as a lattice structure because each record can have multiple parent and child records.

Image Relational database management system—This database consists of a collection of tables linked to each other by their primary keys. Many organizations use this model. Most relational databases use SQL as their query language. The RDBMS is based on set theory and relational calculus. This type of database organizes data into relations (tables) of rows and columns; each row, which represents a set of related values, is known as a tuple.

Image Object-relational database system—This type of database system is similar to a relational database but is written in an object-oriented programming language. This allows it to support extensions to the data model and to be a middle ground between relational databases and object-oriented databases.

Database Terms

If you are not familiar with the world of databases, you might benefit from a review of some of the other common terms. Figure 11.6 illustrates several of the following terms, which security professionals should be familiar with:

Image Aggregation—The process of combining several low-sensitivity items, and drawing medium- or high-sensitivity conclusions.

Image Inference—The process of deducing privileged information from available unprivileged sources.

Image Attribute—A characteristic about a piece of information. Where a row in a database table represents a database object, each column in that row represents an attribute of that object.

Image Field—The smallest unit of data within a database.

Image Foreign key—An attribute in one table that cross-references to an existing value that is the primary key in another table.

Image Granularity—Refers to the level of control the program has over the view of the data that someone can access. Highly granular databases have the capability to restrict views, according to the user’s clearance, at the field or row level.

Image Relation—Defined interrelationship between the data elements in a collection of tables.

Image Tuple—Used to represent a relationship among a set of values. In an RDBMS, a tuple is a row in a table.

Image Schema—The totality of the defined tables and interrelationships for an entire database. It defines how the database is structured.

Image Primary key—Uniquely identifies each row and assists with indexing the table.

Image View—The database construct that an end user can see or access.

Image

FIGURE 11.6 Primary and Foreign Keys.

Integrity

The integrity of data refers to its accuracy. To protect the semantic and referential integrity of the data within a database, specialized controls are used, including rollbacks, checkpoints, commits, and savepoints.

Image Semantic integrity—Assures that the data in any field is of the appropriate type. Controls that check for the logic of data and operations affect semantic integrity.

Image Referential integrity—Assures the accuracy of cross references between tables. Controls that ensure that foreign keys only reference existing primary keys affect referential integrity.
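Referential integrity enforcement can be demonstrated with Python's built-in sqlite3 module. The table and column names are illustrative, and note that SQLite only enforces foreign keys when the pragma is enabled:

```python
import sqlite3

# In-memory database; schema is purely illustrative.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforcement is opt-in
con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES dept(id))""")

con.execute("INSERT INTO dept VALUES (1, 'Security')")
con.execute("INSERT INTO employee VALUES (10, 1)")  # valid foreign key

try:
    # Referential integrity: 99 is not an existing primary key in dept,
    # so the DBMS refuses the dangling reference.
    con.execute("INSERT INTO employee VALUES (11, 99)")
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The control here is exactly the one described above: a foreign key may only reference an existing primary key, and the DBMS rejects anything else.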

Transaction Processing

Transaction management is critical in assuring integrity. Without proper locking mechanisms, multiple users could be altering the same record simultaneously, and there would be no way to ensure that transactions were valid and complete. This is especially important with online systems that respond in real time. These systems, known as online transaction processing (OLTP) systems, are used in many industries, including banking, airlines, mail order, supermarkets, and manufacturing. Programmers involved in database management use the ACID test when discussing whether a database management system has been properly designed to handle OLTP:

Image Atomicity—Results of a transaction are either all or nothing.

Image Consistency—Transactions are processed only if they meet system-defined integrity constraints.

Image Isolation—The results of a transaction are invisible to all other transactions until the original transaction is complete.

Image Durability—Once complete, the results of the transaction are permanent.
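Atomicity in particular is easy to demonstrate with sqlite3: if a transaction fails partway through, the database rolls back to its prior state. The account names and amounts below are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
con.commit()

try:
    with con:  # connection as context manager: commit on success,
               # rollback on any exception
        con.execute(
            "UPDATE account SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    pass

# Atomicity: the partial debit was rolled back, not half-applied.
balance = con.execute(
    "SELECT balance FROM account WHERE name = 'alice'").fetchone()[0]
# balance is 100 again
```

Either the whole transfer happens or none of it does; the debit without the matching credit never becomes visible, which is the "all or nothing" property.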

Artificial Intelligence and Expert Systems

An expert system is a computer program that contains a knowledge base, a set of rules, and an inference engine. At the heart of these systems is the knowledge base—the repository of information that the rules are applied against.

Expert systems are typically designed for a specific purpose and have the capability to infer. For example, a hospital might have a knowledge base that contains various types of medical information; if a doctor enters the symptoms of weight loss, emotional disturbances, impaired sensory perception, pain in the limbs, and periods of irregular heart rate, the expert system can scan the knowledge base and diagnose the patient as suffering from beriberi.


Tip

How advanced are these expert systems? A computer named Watson, created by IBM, famously beat human champions at Jeopardy! by looking for answers in unstructured data using natural-language query software. www-03.ibm.com/ibm/history/ibm100/us/en/icons/watson/.


The challenge in the creation of knowledge bases is to ensure that their data is accurate, that access controls are in place, that the proper level of expertise was used in developing the system, and that the knowledge base is secured. Neural networks are systems capable of learning new information. An example is shown in Figure 11.7. Artificial intelligence (AI) = expert systems + neural networks. Neural networks make use of nodes; typically there are multiple levels of nodes that filter data and apply weights. Eventually, an output is triggered and a fuzzy solution is provided. It is called a fuzzy solution because it can lack exactness.

Image

FIGURE 11.7 Artificial Neural Network.

Security of the Software Environment

The security of software is a critical concern. Protection of the confidentiality, integrity, and availability of data and program variables is one of the CISSP’s top concerns. During the design phase you should consider what type of data the application will be processing. As an example, if the application will deal with order quantities, these numbers should be positive—you would not order negative 17 of an item. Sanitizing inputs and outputs to allow only qualified values reduces the attack surface. Think of the attack surface as all of the potential ways in which an attacker can attack the application.
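A minimal sketch of this kind of input sanitization, using the negative-quantity example above (the function name and the upper limit are illustrative):

```python
def parse_order_quantity(raw):
    """Accept only positive whole numbers, rejecting everything else.

    Allowing only qualified values shrinks the attack surface: negative
    quantities, huge numbers, and non-numeric input never reach the
    order-processing logic.
    """
    if not raw.isdigit():          # rejects '-17', '1e9', '17; DROP TABLE'
        raise ValueError("quantity must be a whole number")
    qty = int(raw)
    if not 1 <= qty <= 1000:       # illustrative business limit
        raise ValueError("quantity out of range")
    return qty


qty = parse_order_quantity("17")   # accepted
```

Checking for an allowed set of values (an allow list) is generally safer than trying to enumerate every bad input, because the qualified set is small and known.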

Threat modeling is one technique that can be used to reduce the attack surface of an application. Threat modeling details the potential attacks, targets, and any vulnerabilities of an application. It can also help determine the types of controls that are needed to prevent an attack; as an example, when you enter an incorrect username or password, do you get a generic response, or does the application respond with too much data, as shown in Figure 11.8? Just keep in mind that the best practice from a security standpoint is not to identify which entry was invalid but to return a generic answer.

Image

FIGURE 11.8 Non-generic Response that Should be Flagged by Threat Modeling.

Security doesn’t stop after the software development process. The longer a program has been in use, the more vulnerable it becomes as attackers have had more time to probe and explore methods to exploit the application. Attackers might even analyze patches to see what they are trying to fix and how such vulnerabilities might be exploited.

This means that CISSPs need to take security into account by including proper planning for timely patch and update deployment. A patch is a fix to a particular problem in software or operating system code that creates a security risk or causes problems with the application. A hot fix is quick but lacks full integration and testing, and addresses a specific issue. A service pack is a collection of all patches to date; it is considered critical and should be installed as soon as possible.

The CISSP will also want to consider:

Image What is the software environment?—Where will the software be used? Is it on a mainframe, or maybe a publicly available website? Is the software run on a server, or is it downloaded and executed on the client (mobile code)?

Image What programming language and toolset was used?—While programming languages have evolved, some languages, such as C, are known to be vulnerable to buffer overflows.


Note

Java is estimated to be installed on more than 850 million computers, 3 billion phones, and millions of TVs, but it was not until August of 2014 that Oracle changed its update software to remove older, vulnerable versions of Java during the installation process.


Image What are the security issues in the source code?—Depending on how the code is processed it may or may not be easy to identify problems. As an example, a compiler translates a high-level language into machine language, whereas an interpreter translates the program line by line. Also, can the attacker change input, process, or output data? Will the program flag on these errors?

Image How do you identify malware and defend against it?—At a minimum, malware protection (antivirus) software needs to be deployed and methods to detect unauthorized changes need to be implemented.

Mobile Code

Mobile code refers to software that will be downloaded from a remote system and run on the computer performing the download; it is widely used on the web. The security issue with mobile code is that it is executed locally. Many times the user might not even know that the code is executing. Examples of mobile code include scripts, VBScript, applets, Flash, Java, and ActiveX controls. The downloaded program will run with the access rights of the logged-in user.

Java is the dominant programming language of mobile code. Java:

Image Is a compiled high-level language

Image Can be used on any type of computer

Image Employs a sandbox security scheme (virtual machine)

Java is extremely portable, since the output of the Java compiler is not executable code but bytecode. Bytecode is an instruction set designed for efficient execution by a software interpreter; it is executed by the Java run-time system, which is called the Java Virtual Machine (JVM).
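Python follows the same compile-to-bytecode model as Java, which makes the idea easy to inspect with the standard library's `dis` module:

```python
import dis


def add(a, b):
    return a + b


# Like Java, Python compiles source code to bytecode that runs on a
# virtual machine; dis lists the bytecode instructions for a function.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)  # exact opcode names vary by Python version
```

The listing shows instructions for a stack-based virtual machine rather than native CPU instructions, which is exactly what makes bytecode portable across hardware.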

A Java applet is a specific type of Java program that is designed to be transmitted over the Internet and automatically executed by a Java-compatible web browser, such as Edge, Internet Explorer, Firefox, Chrome, or Safari. The security issue with applets is that they are downloaded on demand, without further interaction with the user.

Buffer Overflow

A buffer is a temporary data storage area whose length and type are defined in the program code that creates it, or by the operating system. Buffer overflows occur when programmers use unsecured functions or don’t enforce limits on buffers—basically, when programmers do not practice good coding techniques. For example, a program should check for and prevent any attempt to stuff 32 letters into a buffer intended for 24 digits.

However, this type of error checking does not always occur, and buffer overflows are commonly used by attackers to gain access to systems and/or for privilege escalation. Attackers can use unprotected buffers to attempt to inject and run malicious code. Worse, if the original code executed has administrator or root level rights, those privileges are granted to the attacker as well. The result is that, many times, the attacker gains access to a privileged command shell on the system under attack. When this occurs, the attacker has complete control.

Because buffer overflows are such a huge problem, you can see that any hacker, ethical or not, is going to search for them. The best way to prevent them is to have perfect programs. Because that is not possible, there are compensating controls:

Image Audit the code—Nothing works better than a good manual audit. The individuals that write the code should not be the ones auditing the code. Audits should be performed by a different group of individuals trained to look for poorly written code and potential security problems. Although effective, this can be an expensive and time-consuming process with large, complex programs.

Image Use safer functions—There are programming languages that offer more support against buffer overflows than C. If C is going to be used, ensure that safer C library support is used.

Image Improved compiler techniques—Languages such as Java automatically check whether a memory array index is working within the proper bounds.

Image Harden the stack—Buffer overflows lead to overwrites of code and pointers in the program’s stack space, which holds the code and predefined variables. This overwriting is called “smashing the stack.” A good paper on this topic is at insecure.org/stf/smashstack.html. However, products such as StackGuard and Microsoft’s Visual Studio compiler have evolved special guard values called canaries that are compiled into code. A canary is a protected value placed between chunks of stack data; the code’s execution is immediately halted if a canary is altered in a stack smashing attempt. Such techniques are not 100% effective and might still be vulnerable to heap overflows.
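Python is not itself subject to C-style buffer overflows, but the missing length check at the heart of the problem can be sketched in a few lines. The buffer size and function name below are illustrative:

```python
BUFFER_SIZE = 24  # fixed-size destination, as declared in the program code


def copy_into_buffer(buffer, data):
    """Reject input that exceeds the declared buffer length.

    This is the length check that unsafe C functions such as strcpy()
    skip; enforcing it prevents the overflow rather than detecting it
    after the fact.
    """
    if len(data) > BUFFER_SIZE:
        raise OverflowError("input of %d bytes exceeds %d-byte buffer"
                            % (len(data), BUFFER_SIZE))
    buffer[:len(data)] = data
    return buffer


buf = copy_into_buffer(bytearray(BUFFER_SIZE), b"24 bytes or fewer")  # fits
```

Attempting to copy 32 bytes into the 24-byte buffer raises an error instead of silently writing past the end, which is the behavior safer C library functions provide.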

Financial Attacks

A large number of the attacks that occur today are for financial reasons. One example is a rounding-down attack. Rounding down skims off small amounts of money by rounding down the last few digits. Let’s say a bank account has $8,239,128.45 and the amount is rounded down to $8,239,128.40. A salami attack is similar; it involves slicing off small amounts of money so that the last few digits are truncated. As an example, $8,239,128.45 becomes $8,239,128.00. Both rounding and salami attacks work under the concept that small amounts will not be missed and that over time the pennies will add up to big profits for the attacker. When you take a break from studying, check out the movie “Office Space” to see a good example of an attempted salami attack.

The attacker might even plant code with the thought of waiting until a later date to have it execute. This is called a logic bomb, and while it is not just for financial attacks, it can cause a great deal of damage. The logic bomb can be designed to detonate on some predetermined action or trigger. Because they are buried so deep in the code, logic bombs are difficult to discover or detect before they become active. Fired employees might do this to strike back at their former employer.

Change Detection

Hashing is one of the ways in which malicious code can be detected. Hash-based application verification ensures that an application has not been modified or corrupted, by comparing the file’s hash value to a previously calculated value. If these values match, the file is presumed to be unmodified.

Change detection is another useful technique. Tripwire is an example of this type of software. Change detection software detects changes to system and configuration files. Most of these programs work by storing a hashed value of the original file in a database. Periodically, the file is rechecked and the hashed values are compared. If the two values do not match, the program can trigger an alert to signal that there might have been a compromise.

Hashed values are the most widely used mechanisms for detecting changes in files. Most software vendors provide a web-accessible summary that lists the fingerprints of all files included in their products. This gives users a way to ensure they have the authentic file.
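A minimal sketch of hash-based change detection using Python's hashlib (the file contents shown are illustrative):

```python
import hashlib


def fingerprint(data):
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(data).hexdigest()


# Baseline: store the hash of the original file in a database.
original = b"server_port=443\n"
baseline = fingerprint(original)

# Later recheck: any modification yields a different hash value.
tampered = b"server_port=4444\n"
unchanged = fingerprint(original) == baseline   # True: no change detected
modified = fingerprint(tampered) == baseline    # False: triggers an alert
```

This is the same mechanism change detection tools such as Tripwire rely on: store the hash, recompute it periodically, and alert when the values no longer match.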

Viruses

Computer viruses are nothing new; they have been around since the dawn of the computer era. What has changed through the years is the way in which viruses infect systems. Some of the ways in which viruses can spread include the following:

Image Master boot record infection—This is the oldest form of malicious attack. The technique involves the placement of malicious code on the master boot record. This attack was very effective in the days when everyone passed around floppy disks.

Image File infection—A slightly newer form of virus, file infectors rely on the user to execute the infected file to cause the damage. These viruses attach to files based on their extensions, such as .com and .exe. Some form of social engineering is normally used to get the user to execute the program.

Image Macro infection—This modern style of virus began appearing in the 1990s. Macro viruses exploit scripting services installed on your computer. Most of you probably remember the I Love You virus, a prime example of a macro infector. Macro infections are tied to Microsoft Office scripting capabilities.

Image Polymorphic—This style of virus has the capability to adapt and change so that it can attempt to evade signature-based antivirus tools (see below). Such viruses might even use encryption to avoid detection.

Image Multipartite—This style of virus can use more than one propagation method and targets both the boot sector and program files. One example is the NATAS (“Satan” spelled backwards) virus.

Image Meme—While not a true virus, it spreads like one and is basically a chain letter or email message that is continually forwarded. It is sometimes referred to as a cultural virus.

Viruses can use many techniques to avoid detection. Some viruses might spread fast, whereas others spread slowly. Fast infection viruses infect any file that they are capable of infecting. Others, known as sparse infectors, limit their rates of spread. Some viruses forego a life of living exclusively in files and load themselves into RAM. These viruses are known as RAM-resident.

Antivirus software is the best defense against these types of malware. Most detection software contains a library of signatures that it uses to detect viruses. A signature identifies a pattern of bytes found in the virus code. Here is an example of a virus signature:

X5O!P%@AP[4PZX54(P^)7CC)7$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

If you were to copy this code into a text file and rename the extension so that the file is recognized as an executable, your antivirus software should flag the file as a virus. This file is actually harmless, but contains a signature found within a classic virus. This particular sequence was developed by the European Institute for Computer Antivirus Research (EICAR) as a means of testing the functionality of antivirus software.
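At its simplest, signature matching is a byte-pattern search. The following sketch illustrates the idea; the signature database is hypothetical, and the EICAR pattern is split across two string literals so that this text itself does not trigger antivirus software:

```python
def scan_for_signatures(data, signatures):
    """Return the names of signatures whose byte pattern appears in data.

    Real antivirus engines use far more sophisticated matching, but the
    core idea is locating known byte patterns within files.
    """
    return [name for name, pattern in signatures.items() if pattern in data]


# Illustrative signature database.
SIGNATURES = {"eicar-test": b"EICAR-STANDARD-ANTI" + b"VIRUS-TEST-FILE"}

clean = scan_for_signatures(b"hello world", SIGNATURES)  # no match: []
```

A file containing the full pattern would be reported under the name "eicar-test"; this also shows why signature lists must be updated regularly, since only patterns present in the database can ever be detected.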


Tip

Many antivirus programs work by means of signatures. Signature programs examine boot sectors, files, and sections of program code known to be vulnerable to viral programs. Although the programs are efficient, they are only as good as their latest signature lists. They must be updated regularly to detect the most recent type of computer viruses. Even then there are a variety of tools that attackers can use to try to make virus payloads harder to detect. One such tool is Tejon Cryptor.


Worms

Worms, unlike viruses, require no interaction on the user’s part to replicate and spread. One of the first worms to be released on the Internet was the RTM worm. It was developed by Robert Morris Jr. back in 1988 and was meant only to be a proof of concept. Its accidental release brought home the fact that this type of code can do massive damage to the Internet.

Today, the biggest changes to worms are:

Image The mechanism by which worms spread.

Image The new methods by which they attack.

Image The new types of payloads: some might do nothing more than display a message on your screen at a certain date and time, whereas others could encrypt your hard drive until you pay a ransom.

Image The goals of worms and even malware in general now tend to be much more specific. As an example, Stuxnet was developed to target programmable logic controllers (PLCs), which control the automation of centrifuges used for separating nuclear material.

Exam Prep Questions

1. A CISSP must understand the different types of application updates. All updates should be obtained from the manufacturer only, and deployed into production only once tested on non-production systems. As such, what is the best answer that describes updates, patches, hot fixes, and service packs?

Image A. A hot fix has undergone full integration testing, has been released to address vulnerability, and addresses a specific issue; in most cases a hot fix is not appropriate for all systems. A security patch lacks full integration and testing, has been released to address a vulnerability, and is mandatory. A service pack is a collection of patches that are critical, and should be installed quickly.

Image B. A hot fix is quick, lacks full integration and testing, and addresses a specific issue; in most cases a hot fix is not appropriate for all systems. A security patch is a collection of patches that are critical and has been released to address a vulnerability. A service pack is a collection of patches and is considered critical.

Image C. A hot fix is a quick collection of critical, install ASAP patches that address a specific issue; in most cases a hot fix is not appropriate for all systems. A security patch has undergone full integration testing, and has been released to address a vulnerability. A service pack is a collection of patches.

Image D. A hot fix is slow and has full integration and testing, and addresses a broad set of problems; in most cases a hot fix is appropriate for all systems. A security patch is a collection of patches that are not critical. A service pack is a collection of patches and is not considered critical.

2. Which is the correct solution that describes the CIA triad when applied to software security?

Image A. Confidentiality prevents unauthorized access, integrity prevents unauthorized modification, and availability deals with countermeasures to prevent denial of service to authorized users.

Image B. Confidentiality prevents unauthorized modification, integrity prevents unauthorized access, and availability deals with countermeasures to prevent denial of service to authorized users.

Image C. Confidentiality prevents unauthorized access, integrity prevents unauthorized modification, and availability deals with countermeasures to prevent unauthorized access.

Image D. Confidentiality deals with countermeasures to prevent denial of service to authorized users, integrity prevents unauthorized modification, and availability prevents unauthorized access.

3. Which of the following tools can be used for change detection?

Image A. DES

Image B. Checksums

Image C. MD5sum

Image D. Parity bits

4. Bob has noticed that when he inputs too much data into his new Internet application, it momentarily locks up the computer and then halts the program. Which of the following best describes this situation?

Image A. Fail-safe

Image B. Buffer overflow

Image C. Fail-open

Image D. Fail-soft

5. Which of the following types of database is considered a lattice structure, with each record having multiple parent and child records?

Image A. Hierarchical database management system

Image B. Network database management system

Image C. Object-oriented database management system

Image D. Relational database management system

6. Which database term refers to the capability to restrict certain fields or rows from unauthorized individuals?

Image A. Low granularity

Image B. High resolution

Image C. High granularity

Image D. Low resolution

7. Which of the following types of testing involves entering malformed, random data?

Image A. XSS

Image B. Buffer overflow

Image C. Fuzzing

Image D. Whitebox testing

8. OmniTec’s new programmer has left several entry points in the company’s new e-commerce shopping cart program for testing and development. Which of the following terms best describes what the programmer has done?

Image A. Back door

Image B. Security flaw

Image C. SQL injection

Image D. Trapdoor

9. Generation 2 programming languages are considered which of the following?

Image A. Assembly

Image B. Machine

Image C. High-level

Image D. Natural

10. Which of the following is considered middleware?

Image A. Atomicity

Image B. OLE

Image C. CORBA

Image D. Object-oriented programming

11. After Debbie became the programmer for the new payroll application, she placed some extra code in the application that would cause it to halt if she was fired and her name removed from payroll. What type of attack has she launched?

Image A. Rounding down

Image B. Logic bomb

Image C. Salami

Image D. Buffer overflow

12. While working on a penetration test assignment, you just discovered that the company’s database-driven e-commerce site will let you place a negative quantity into an order field so that the system will credit you money. What best describes this failure?

Image A. Referential integrity error

Image B. Buffer overflow

Image C. Semantic integrity error

Image D. Rounding down

13. Which of the following best describes bytecode?

Image A. Is processor-specific

Image B. Is used with ActiveX

Image C. Is not processor-specific

Image D. Is used with COM and DCOM

14. One of the best approaches to deal with attacks like SQL, LDAP, and XML injection is what?

Image A. Using type-safe languages

Image B. Manual review of code

Image C. Using emanations

Image D. Adequate parameter validation

15. Which of the following programming techniques is most closely associated with this phrase? Two objects need not know how the other works; each object’s inner workings are hidden from the other.

Image A. Data modeling

Image B. Network database management system

Image C. Object-oriented programming

Image D. Relational database management system

Answers to Exam Prep Questions

1. B. A hot fix is quick, lacks full integration and testing, and addresses a specific issue; in most cases a hot fix is not appropriate for all systems. A security patch is a collection of patches that are critical and has been released to address a vulnerability. A service pack is a collection of patches and is considered critical. Answers A, C, and D are all incorrect because the definitions are swapped and attached to the wrong terms. Software updates are optional and usually relate to functionality rather than security. Firmware releases are produced to address security issues with hardware.

2. A. Answers B, C, and D are incorrect and are just scrambled definitions that look similar to confuse the test-taker. Confidentiality prevents unauthorized access, integrity prevents unauthorized modification, and availability deals with countermeasures to prevent denial of service to authorized users.

3. C. One of the ways in which malicious code can be detected is through the use of change-detection software, which can detect changes to system and configuration files. Popular programs that perform this function include Tripwire and MD5sum. Answer A is incorrect because DES is a symmetric encryption algorithm, not a change-detection tool. Answers B and D are incorrect because checksums and parity bits can easily be recomputed after a change and, therefore, do not protect software from tampering.
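The change-detection approach this answer describes can be sketched with Python's standard hashlib module. MD5 is shown only to mirror MD5sum; a stronger hash such as SHA-256 is preferred today, and the monitored file path is hypothetical:

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the MD5 hex digest of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large files don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record a baseline digest at deployment time, then compare later:
# baseline = file_digest("/etc/app/config.ini")
# ...
# if file_digest("/etc/app/config.ini") != baseline:
#     raise RuntimeError("configuration file changed")
```

A tool like Tripwire does essentially this at scale: it stores baseline digests for many files and periodically re-hashes them, alerting on any mismatch.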

4. D. A fail-soft occurs when a detected failure terminates the application while the system continues to function. Answers A and C are incorrect because a fail-safe terminates the program and disables the system, while a fail-open is the worst of events because it allows attackers to bypass security controls and easily compromise the system. Answer B is incorrect because although a buffer overflow could be the root cause of the problem, the question asks why the application is halting in the manner described.

5. B. Network database management systems are designed for flexibility. The network database model is considered a lattice structure because each record can have multiple parent and child records. Answer A is incorrect because hierarchical database management systems are structured like a tree: each record can have only one owner, and because of this restriction, hierarchical databases often can’t be used to relate to structures in the real world. Answer C is incorrect because object-oriented database management systems are not lattice-based and don’t use a high-level language like SQL. Answer D is incorrect because relational database management systems are considered collections of tables that are linked by their primary keys.

6. C. Granularity refers to control over the view someone has of the database. Highly granular databases have the capability to restrict certain fields or rows from unauthorized individuals. Answer A is incorrect because low granularity gives the database manager little control. Answers B and D are incorrect because high resolution and low resolution do not apply to the question.
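High granularity is commonly implemented with database views that expose only permitted rows and columns. A minimal sketch using Python's built-in sqlite3 module, with a hypothetical employees table whose sensitive salary column is hidden from ordinary users:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.execute("INSERT INTO employees VALUES ('Alice', 'HR', 90000), ('Bob', 'IT', 85000)")

# A view provides highly granular access control: it hides the
# salary column entirely and restricts rows to a single department.
conn.execute("""CREATE VIEW it_staff AS
                SELECT name, dept FROM employees WHERE dept = 'IT'""")

rows = conn.execute("SELECT * FROM it_staff").fetchall()
print(rows)  # [('Bob', 'IT')] -- salary and other departments are invisible
```

In a production DBMS, the unauthorized users would be granted SELECT on the view only, never on the underlying table.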

7. C. Fuzzing is a form of black-box testing that feeds malformed, random input to an application and monitors for flaws or a crash. The idea is to uncover problems in the application before attackers do. Answers A, B, and D are incorrect: this is not an example of XSS, a buffer overflow, or white-box testing.
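A minimal fuzzer can be sketched in a few lines of Python. Here parse_record is a hypothetical stand-in for the code under test; the point is simply that random malformed input is generated and any unhandled exceptions are collected as findings:

```python
import random

def parse_record(data: bytes) -> dict:
    """Toy parser standing in for the code under test (hypothetical)."""
    name, _, age = data.partition(b",")
    return {"name": name.decode(), "age": int(age)}

def fuzz(target, runs=200):
    """Feed random malformed input and record which exceptions escape."""
    crashes = []
    for _ in range(runs):
        blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 32)))
        try:
            target(blob)
        except Exception as exc:   # any unhandled exception is a finding
            crashes.append((blob, type(exc).__name__))
    return crashes

findings = fuzz(parse_record)
```

Real fuzzers such as AFL add coverage feedback and input mutation, but the core loop is the same: generate input, run the target, watch for crashes.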

8. D. A trapdoor is a technique used by programmers as a secret entry point into a program. Programmers find these useful during application development; however, they should be removed before the code is finalized. All other answers are incorrect: answer B is also a security flaw, but is not as specific as trapdoor; back doors (answer A) are malicious in nature; SQL injection (answer C) is targeted against databases.

9. A. Programming languages are categorized as follows: Generation 1 is machine language, Generation 2 is assembly language, Generation 3 is high-level language, Generation 4 is very high-level language, and Generation 5 is natural language.

10. C. Common Object Request Broker Architecture (CORBA) is vendor-independent middleware. Its purpose is to tie together different vendor products so that they can seamlessly work together over distributed networks. Answer B is incorrect because Object Linking and Embedding (OLE) is a proprietary system developed by Microsoft to allow applications to transfer and share information. Answer A is incorrect because atomicity deals with the validity of database transactions. Answer D is incorrect because object-oriented programming is a modular form of programming.

11. B. A logic bomb is designed to detonate sometime later when the perpetrator leaves; it is usually buried deep in the code. Answers A, C, and D are incorrect: rounding down skims off small amounts of money by rounding down the last few digits; a salami attack involves slicing off small amounts of money so that the last few digits are truncated; a buffer overflow is storing more information in a buffer than it is intended to hold.

12. C. Semantic integrity controls logical values, data, and operations that could affect them, such as placing a negative number in an order quantity field. Answers A, B, and D are incorrect: referential integrity ensures that foreign keys only reference existing primary keys; buffer overflow is storing more information in a buffer than it is intended to hold; rounding down skims off small amounts of money by rounding down the last few digits.
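A semantic integrity control of the kind this answer describes is, at its simplest, a rule that rejects logically invalid values before they ever reach the database. A sketch in Python (the function name and rules are hypothetical):

```python
def validate_order_quantity(qty):
    """Semantic integrity check: quantity must be a positive whole number.

    A negative quantity like the one in the question would be rejected
    here, before the order can generate a credit.
    """
    if not isinstance(qty, int) or isinstance(qty, bool):
        raise TypeError("quantity must be an integer")
    if qty < 1:
        raise ValueError("quantity must be at least 1")
    return qty
```

The same rule can and should also be enforced in the database itself, for example with a CHECK constraint such as CHECK (quantity > 0).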

13. C. Bytecode is not processor-specific; it is a form of intermediate code used by Java that can run on any system with a Java Virtual Machine. Answers A, B, and D are incorrect: bytecode is not processor-specific, it is not associated with ActiveX, and COM and DCOM are technologies associated with ActiveX, not bytecode.

14. D. Adequate parameter validation is seen as the best approach to dealing with input problems: all data must be checked for validity on input, during processing, and on output. Answers A, B, and C are incorrect: moving to a type-safe language will not by itself prevent injection attacks; manual review of code may find some problems but is not always feasible; emanations refer to unintentional electromagnetic signal leakage and have nothing to do with input validation.
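Parameterized queries are one widely used form of parameter validation against SQL injection. A sketch using Python's built-in sqlite3 module (the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(username):
    # The ? placeholder binds the value as data, never as SQL text,
    # so classic payloads like ' OR '1'='1 cannot alter the query.
    return conn.execute(
        "SELECT username, role FROM users WHERE username = ?", (username,)
    ).fetchall()

print(find_user("alice"))        # [('alice', 'admin')]
print(find_user("' OR '1'='1"))  # [] -- the payload is treated as a literal
```

Had the query been built by string concatenation instead, the second call would have returned every row in the table.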

15. C. Object-oriented development allows an object to hide the way an object works from other objects. Answers A, B, and D are incorrect: data modeling considers data independently; network database management systems are designed for flexibility; relational database management systems are considered collections of tables that are linked by their primary keys.
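The information hiding this answer describes can be sketched in Python: each object exposes only its methods, and neither object depends on how the other works internally (the class names and methods are hypothetical):

```python
class Account:
    """Encapsulation: callers use deposit()/balance(), never the ledger."""
    def __init__(self):
        self._ledger = []          # internal detail, hidden by convention

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._ledger.append(amount)

    def balance(self):
        return sum(self._ledger)

class Auditor:
    """A second object: it asks the Account for a total without knowing
    how balances are stored -- the two objects are hidden from each other."""
    def approve(self, account):
        return account.balance() >= 0
```

If Account later switched from a list of transactions to a single running total, Auditor would not need to change at all; that independence is the security and maintainability benefit of encapsulation.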

Need to Know More?

Building secure software: www.owasp.org/index.php/OWASP_Guide_Project

Microsoft SDL: www.microsoft.com/en-us/sdl/

Six steps to change management: www.techrepublic.com/article/implement-change-management-with-these-six-steps/5074869

Object-oriented programming: encyclopedia2.thefreedictionary.com/Object-oriented+programming

Meme virus: asocial.narod.ru/en/articles/memes.htm

Java exploits: heimdalsecurity.com/blog/java-biggest-security-hole-your-computer/

Buffer overflows: insecure.org/stf/smashstack.html

How Trojan horse programs work: computer.howstuffworks.com/trojan-horse.htm

The history of SQL injection attacks: motherboard.vice.com/read/the-history-of-sql-injection-the-hack-that-will-never-go-away

SQL injection and database manipulation: www.securiteam.com/securityreviews/5DP0N1P76E.html

CORBA FAQ: www.omg.org/gettingstarted/corbafaq.htm
