Chapter 5. Life Cycle Management

THE OBJECTIVE OF THIS CHAPTER IS TO ACQUAINT THE READER WITH THE FOLLOWING CONCEPTS:

  • Implementing all seven phases of the System Development Life Cycle

  • Understanding how to evaluate the business case and feasibility of a proposed system to ensure alignment with business strategy

  • Impact of international standards relating to software development

  • Understanding the process of conducting the system design analysis

  • Understanding the process for developing system and infrastructure requirements, and for acquisition, development, and testing, to ensure the result meets business objectives

  • Understanding how to evaluate the readiness of systems for production use

  • Knowledge of management's responsibility for accepting and maintaining the system

  • Understanding the purpose of postimplementation system reviews relating to anticipated return on investment and proper implementation of controls

  • Evaluating system retirement and disposal methods to ensure compliance with legal policies and procedures

  • Basic introduction to the different methodologies used in software development.

Life Cycle Management

When auditing software development, you will assess whether the prescribed project management, System Development Life Cycle, and change-management processes were followed. You are expected to evaluate the processes used in developing or acquiring software to ensure that the program deliverables meet organizational objectives.

We will discuss software design concepts and terminology that every CISA is expected to know for the CISA exam.

Governance in Software Development

Every organization strives to balance expenditures against revenue. The objective is to increase revenue and reduce operating costs. One of the most effective methods for reducing operating costs is to improve software automation.

Computer programs may be custom-built or purchased in an effort to improve automation. All business applications undergo a common process of needs analysis, functional design, software development, implementation, production use, and ongoing maintenance. In the end, every program will be replaced by a newer version, and the old version will be retired. This is what is referred to as the life cycle.

It is said that 80 percent of a business's functions are related to common clerical office administration tasks. The clerical functions are usually automated with commercial off-the-shelf software. An organization does not need to custom-write word processing or spreadsheet software, for example. These basic functions can be addressed through traditional software that is easily purchased on the open market. This type of commodity software requires little customization. The overall financial advantages will be small but useful. When purchasing prewritten software, an organization follows a slightly different model that focuses on selection rather than software design and development. Prewritten software follows a life cycle of needs analysis and selection, followed by implementation, which leads to production use and ongoing maintenance.

The remaining 20 percent of the business functions may be unique or require highly customized computer programs. This is the area of need addressed by custom-written computer software. The challenge is to ensure that the software actually fulfills the organization's strategic objectives.

Let's make sure that you understand the difference between a strategic system and a traditional system.

A strategic system fundamentally changes the way the organization conducts business or competes in the marketplace. A strategic system significantly improves overall business performance with results that can be measured by multiple indicators. These multiple indicators include measured performance increases and noticeable improvement on the organization's financial statement. An organization might, for example, successfully attain a dramatic increase in sales volume as a direct result of implementing a strategic system. The strategic system may create an entirely new sales channel to reach customers.

Auction software implemented and marketed by eBay is an example of a strategic system. The strategic software fundamentally changes the way an organization will be run. For a strategic system to be successfully implemented, management and users must be fully involved. Anything less than significant fundamental change with dramatic, measurable results would indicate that the software is a traditional system.

Note

You should be aware that some software vendors will use claims of strategic value with obscure results to try to sell lesser products at higher profit margins. Your job is to determine whether the organizational objectives have been properly identified and met. Claims of improvement should be verifiable.

Traditional systems provide support functions aligned to fulfill the needs of an individual or department. Examples of traditional systems include general office productivity and departmental databases. The traditional system might provide 18 percent return on investment, whereas a strategic system might have a return of more than 10 times the investment.

Managing Software Quality

Controlling quality is an ongoing process of improving the yield of any effort. True quality is designed into a system from the beginning. In contrast, inspected quality is no more than a test after the fact. This section covers models designed to promote software quality:

  • Capability Maturity Model (CMM)

  • ISO Software Process Improvement and Capability dEtermination (SPICE)

Let's review the Capability Maturity Model and introduce the related international standards.

Capability Maturity Model

As you may recall from Chapter 3, "IT Governance," the Software Engineering Institute's Capability Maturity Model (CMM) was developed to provide a strategy for determining the maturity of current processes and to identify the next steps required for improvement. The CMM's roots are based in lessons learned from assembly-line automation during the U.S. industrial age of the early 1900s. Several analogies exist between CMM and the manufacturing process quality concepts of Walter Shewhart, W.W. Royce, W. Edwards Deming, Joseph Juran, and Philip Crosby. Most of the people who understood the analogy relating manufacturing processes to business processes have long since retired. This promotes a false impression that CMM is new, when it's just new to them. Let's take a quick overview of the levels contained in the CMM model:

Level 0

This level is implied but not always recognized. Zero indicates that nothing is getting done.

Level 1 = Initial

Processes at this level are ad hoc and performed by individuals. Typical characteristics are ad hoc activities, firefighting, unpredictable results, and management activities that vary without consistency.

Level 2 = Repeatable

These processes are documented and can be repeated. Characteristics include semiformal methods, tension between project managers and line managers, quality that is inspected rather than built in, and no formal priority system.

Level 3 = Defined

Lessons learned are integrated into institutional processes. Standardization begins to take place between departments, with qualitative measurement (opinions of quality). Formal criteria are developed for use in selection processes.

Level 4 = Managed

This level equates to quantitative measurement (numeric measurement of quality). Portfolio management is ingrained into all decisions. A formal project priority system is practiced, with a Project Management Office (PMO) governing projects.

Level 5 = Optimized

This is the highest level of control, with continuous improvement using statistical process control. A culture of constant improvement is pervasive, with a desire to fine-tune the last available percentages to squeeze out every remaining penny of profit.

The Software Engineering Institute (SEI) estimates that it may take 13 to 25 months to move up to each successive level. It's just not possible to leapfrog over a level, because of the magnitude of change required to convert the organization's attitude, experience, and culture. SEI was one of the first organizations to define the evolution of software maturity. The CMM model has been expanded to cover all types of processes used to run a business or a government office.

International Organization for Standardization

A significant number of the best practices for quality in American manufacturing have been adopted by the International Organization for Standardization (ISO). ISO is a worldwide federation of national standards bodies operating under a charter to create international standards in order to promote commerce and reduce misrepresentation. One of the functions of the ISO is to identify regional best practices and promote acceptance worldwide. The original work of Shewhart and derivative works of Crosby, Deming, and Juran focused on reducing manufacturing defects. Their original concepts have been expanded over the last 50 years to include almost all business processes. The CMM represents the best-practice method of measuring process maturity. It makes no difference whether the process is administrative, manufacturing, or software development. ISO has modified the descriptive words used in the levels of the CMM for international acceptance.

As a CISA, you should be interested in three of the ISO standards relating to development and maturity.

ISO 15504: Variation of CMM

The ISO 15504 standard is a modified version of the CMM. These changes were intended to clarify the different maturity levels across different languages and cultures. Notice that level 0 is relabeled as incomplete. Level 1 is renamed to indicate that the process has been successfully performed. Level 2 indicates that the process is managed. Level 3 shows that the process is well established in the organization. Level 4 indicates that the process output will be very predictable. Level 5 shows that the process is under a continuous improvement program using statistical process control. Table 5.1 illustrates the minor variations between the CMM and the ISO 15504 standard, also known as SPICE.

Table 5.1. CMM Compared to ISO 15504 (SPICE)

CMM Levels                                 ISO 15504 Levels
CMM level 0 = Process did not occur yet    ISO level 0 = Incomplete
CMM level 1 = Initial                      ISO level 1 = Performed
CMM level 2 = Repeatable                   ISO level 2 = Managed
CMM level 3 = Defined                      ISO level 3 = Established
CMM level 4 = Managed                      ISO level 4 = Predictable
CMM level 5 = Optimized                    ISO level 5 = Optimized

The purpose of ISO 15504 is identical to that of the CMM. Variations in language forced the ISO version to use slightly different terminology to express the same objectives. Let's move on to a quick overview of two ISO quality-management standards.

ISO 9001: Quality Management

The ISO has promoted a series of quality practices that were previously known as ISO 9001, 9002, and 9003 for design, production, and final inspection, respectively. These have now been combined into the single ISO 9001 reference. Many organizations have adopted this ISO standard to facilitate worldwide acceptance of their products in the marketplace. ISO compliance also brings the benefit of a better perception by investors. Compliance does not guarantee a better product, but it does provide additional assurance that an organization should be able to deliver a better product.

Within the ISO 9001 reference, you will find that a formally adopted quality manual is required by the ISO 9001:2000. The ISO 9001:2000 quality manual specifies detailed procedures for quality management by an organization. The same quality manual provides procedures for strong internal controls when working with vendors, including a formal vendor evaluation and selection process. To ensure quality, the ISO 9001:2000 mandates that personnel performing work shall be properly trained and managed to improve competency. Because an organization claiming ISO compliance is required to have a thoroughly written quality manual in place, an IS auditor may request evidence demonstrating that the quality processes are actively used.

Note

It's important to understand the naming convention of ISO standards. Names of ISO standards begin with the letters ISO, followed by the standard's number, a colon (:), and the year of implementation. You would read ISO 9001:2000 as ISO standard 9001 adopted in year 2000 (or updated in year 2000).

ISO 9126: Software Quality

ISO 9126 is a variation of ISO 9001. The ISO standard 9126-2:2003 explains how to apply international software-quality metrics. This standard also defines requirements for evaluating software products and measuring specific quality aspects.

The six quality attributes are as follows:

  • Functionality of the software processes

  • Ease of use

  • Reliability with consistent performance

  • Efficiency of resources

  • Portability between environments

  • Maintainability with regard to making modifications

Note

You need to know the six major attributes contained in the ISO 9126 standard.

Once again, organizations claiming ISO compliance should be able to demonstrate active use of software metrics and supporting evidence for ISO 9126-2 compliance. You need to remember that no evidence equals no credit.

As a CISA, you should be prepared to identify the terminology used by the CMM and various ISO quality standards. Now that we've reviewed these maturity standards, it is time to mention the matching ISO document-control requirements.

ISO 15489: Records Management

ISO records retention standard 15489:2001 was designed to ensure that adequate records are created, captured, and managed. This standard applies to managing all forms of records and record-keeping policies. It does not matter whether the record format is electronic, printed, or voice. It makes no difference whether the records are used by a public or private organization. The 15489 standard provides the guidance necessary for minimum compliance with ISO 9001 quality standards and records management under ISO 14001. Therefore, an organization must be 15489 compliant to be ISO 9001 or ISO 14001 compliant.

Does it apply to anyone else? The answer is definitely yes. Records management governs the record-keeping practices of any person who creates or uses records in the course of their business activities. It also applies to those activities in which a record is expected to exist. Examples include the following:

  • Financial bookkeeping records

  • Contracts and business transactions

  • Government filings

  • Setting policies and operating standards

  • Payroll and HR records

  • Establishing procedures and guidelines

  • Keeping records in the normal course of business

All organizations need to identify the regulations that have bearing on their activities. Record keeping is necessary to document their actions in order to provide adequate evidence of compliance. Remember that no evidence equals no proof, which demonstrates noncompliance. Business activities are defined broadly by ISO to include public administration, nonprofit activities, commercial use, and other activities expected to keep records. All fundraising campaigns fall under the ISO 15489 standard.

A record is expected to reflect the truth of the communications between the parties, the action taken, and the evidence of the event. Records are expected to be authentic with reliable information of high integrity. Auditors need to be aware of the legal challenges whenever records are introduced as evidence in a court of law. Every good defense lawyer will attempt to dispute the authenticity or integrity of each record by allegations of tampering, mishandling, incompetence, or computer system compromise. Without excellent record keeping, the value of the record as evidence may be diminished or completely lost. This is why the chain of custody actually starts with how records are created in the first place.

ISO 15489 is used by court judges and lawyers as the international standard for determining liability in addition to sentencing during prosecution. All organizations, including yours, should have already adopted a records classification scheme (data classification). The purpose is to convey to the staff how to properly protect assorted records. Consider the different requirements for each of the following types of records:

  • Trade secrets

  • Unfiled patent applications

  • Personal information and privacy data such as HIPAA or bank account numbers

  • Intellectual property rights

  • Commercial contracts (possibly a confidential record) versus government contracts (a public record)

  • Financial data

  • Internal operating reports

  • Privileged information, including consultation with lawyers

  • Customer lists and transaction records (including professional certification)

  • Retirement and destruction of obsolete information

Record retention systems should be regularly reassessed to ensure compliance. The corresponding reports also need to be protected because they serve as evidence in support of compliance activities. Most of the fraud mentioned in the beginning of this book was discovered and prosecuted under ISO 15489 standards.

Note

CISA certification is recognized under the ISO 17024 requirements for all bodies issuing professional certification. ISACA complies with the ISO 15489 records management and ISO 17024 professional requirements governing all professional certifications. CISA certification simply meets the minimum standard under ISO 15489 and ISO 17024 for compliance. CISA is not an ISO standard, just ISO standards compliant.

Let's move on now. It is time to discuss the leadership role of management. We will begin with an overview of the steering committee.

Overview of the Steering Committee

The steering committee should be involved in software decisions to provide guidance toward fulfillment of the organizational objectives. We have already discussed the basic design of a steering committee in Chapter 3.

As you may recall, the steering committee comprises executives and business unit managers. Their goal is to provide direction for aligning IT functions with current business objectives. Steering committees provide the advantage of increasing the attention of top management on IT. The most effective committees hold regular meetings focusing on agenda items from the business objectives rather than IT objectives. Most effective decisions are obtained by mutual agreement of the committee rather than by directive. The steering committee increases awareness of IT functions, while providing an avenue for users to become more involved. In this chapter, we are focusing on the identification of business requirements as they relate to the choices made for computer software.

As a CISA, you should understand how the steering committee has developed the vision for software to fulfill the organization's business objectives. What was the thought process that led the steering committee to its decision? Two common methods are the use of critical success factors (CSFs) and a scenario approach to planning.

Identifying Critical Success Factors

A critical success factor (CSF) is something that must go right every time. To fail a CSF would be a showstopper. The process for identifying CSFs begins with each manager focusing on their current information needs. This thought process by the managers will help develop a current list of CSFs.

Some of the factors may be found in the specific industry or chosen business market. External influences—such as customer perception, current economy, pressure on profit margin, and posturing of competitors—could be another source of factors. The organization's internal challenges can provide yet another useful source. These can include internal activities that require attention or are currently unacceptable.

As an IS auditor, you should remain aware that critical success factors are highly dependent on timing. Each CSF should be reexamined on a regular basis to determine whether it is still applicable or has changed.

Using the Scenario Approach

The scenario approach is driven by a series of "what if" questions. This technique challenges the planning assumptions by creating scenarios that combine events, trends, operational relationships, and environmental factors. A series of scenarios are created and discussed by the steering committee. The most likely scenario is selected for a planning exercise.

The major benefit of this approach is the discovery of assumptions that are no longer relevant. Rules based on old assumptions and past situations may no longer apply. The scenario approach also provides an opportunity to uncover the mindset of key decision-makers.

The role of the scenario is to identify the most important business objectives and CSFs. After some discussion, the scenario should reveal valuable information to be used in long-term plans. Remember, the goal is to align computer software with the strategic objectives of the organization, which we will look at next.

Aligning Software to Business Needs

As a CISA, you should understand the alignment of computer software to business needs. Information systems provide benefits by alignment and by impact. Alignment is the support of ongoing business operations. Changes created in the work methods and cost structure are referred to as impact.

Each organizational project will undergo a justification planning exercise. Management will need to determine whether the project will generate a measurable return on investment. The purpose of this exercise is to ensure that the time, money, and resources are well spent. The basic business justification entails the following four items:

Establish the need

Business needs can be determined from internal and external sources. Internal needs can be developed by the steering committee and by interviewing division managers. Internal performance metrics are an excellent source of information. External sources include regulations, business contracts, and competitors.

Identify the work effort

The next step is to identify the people who can provide the desired results. Management's needs are explained to the different levels of personnel who perform the work. The end-to-end work process is diagrammed in a flowchart. Critical success factors are identified in the process flow. A project plan is created that estimates the scope of the work. This may use traditional project management techniques in combination with the System Development Life Cycle.

Summarize the impact

The anticipated business impact can be presented by using quantitative and qualitative methods. It is more effective to convert qualitative statements into semiquantitative measurements. Semiquantitative measurements can be converted into a range scale of increased revenue by implementing the system or by cost savings. The CISA candidate should recall the discussion in Chapter 2, "Audit Process," regarding the use of semiquantitative measurement with a range scale similar to A, B, C, and F school report card grades. (A brief sketch of such a conversion follows these four items.)

Present the benefits

Management will need to be sold on the value of the system. The benefits will typically entail promises of eliminating an existing problem, improving competitive position, reducing turnaround time, or improving customer relations.
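
To make the range-scale conversion more concrete, the following is a minimal sketch (in Python) of turning qualitative stakeholder ratings into a semiquantitative letter grade. The rating labels, point values, and grade boundaries are illustrative assumptions chosen for this example only, not values prescribed by ISACA.

```python
# Illustrative sketch: converting qualitative ratings into a semiquantitative
# range scale (letter grades). Labels, scores, and grade bands are assumptions.

QUALITATIVE_SCORES = {
    "strongly agree": 5,
    "agree": 4,
    "neutral": 3,
    "disagree": 2,
    "strongly disagree": 1,
}

GRADE_BANDS = [          # (minimum average score, letter grade)
    (4.5, "A"),
    (3.5, "B"),
    (2.5, "C"),
    (0.0, "F"),
]

def semiquantitative_grade(ratings):
    """Convert a list of qualitative ratings into an average score and grade."""
    scores = [QUALITATIVE_SCORES[r.lower()] for r in ratings]
    average = sum(scores) / len(scores)
    for minimum, grade in GRADE_BANDS:
        if average >= minimum:
            return grade, average
    return "F", average  # defensive fallback; the 0.0 band already catches this

# Example: stakeholder opinions about the anticipated cost savings
grade, avg = semiquantitative_grade(["agree", "strongly agree", "neutral"])
print(f"Average score {avg:.1f} maps to grade {grade}")
```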

In this chapter, we are focusing our discussions on computer software. The steering committee should be involved in decisions concerning software priorities and necessary functions. Each software objective should be tied to a specific business goal. The combined input will help facilitate a buy versus build decision about computer software. Should the organization buy commercial software or have a custom program written? Let's consider the questions to ask in regard to making this decision. The list presented here is for illustration purposes; however, it is similar to the standard line of questions an auditor will ask:

  • What are the specific business objectives to be attained by the software? Does a printed report exist?

  • Is there a defined list of objectives?

  • What are the quantitative and qualitative measurements to prove that the software actually fulfills the stated objectives?

  • What internal controls will be necessary in the software?

  • Is commercial software available to perform the desired function?

  • What level of customization would be required?

  • What mechanisms will be used to ensure the accuracy, confidentiality, timely processing, and proper authorization for transactions?

  • What is the time frame for implementation?

  • Should building the software be considered because of a high level of customization needed or the lack of available software?

  • Are the resources available to build custom software?

  • How will funding be obtained to pay for the proposed cost?

The steering committee should be prepared to answer each of these questions and use the information to select the best available option. Effective committees will participate in brainstorming workshops with representation from their respective functional areas. The goal is to solicit enough information to reach an intelligent decision. The final decision may be to buy software, build software, or create a hybrid of both.

Organizations may use the answers from the questions asked in conjunction with a written request to solicit offers from vendors. The process of inviting offers incorporates a statement of the current situation with a request for proposal (RFP). The term RFP is also related to an invitation to tender (ITT) or request for information (RFI).

RFI/RFP Process

The steering committee charters a project team to perform the administrative tasks necessary for an information request (RFI) or proposal request (RFP). The request is sent to a small number of prospective vendors or posted to the public, depending on the client's administrative operating procedure. An internal software development staff may provide their own proposal in accordance with the RFP or participate on the review team. A typical RFP will contain at least the following elements:

  • Cover letter explaining the specific interest and instructions for responding to the RFP

  • Overview of the objectives and time line for the review process

  • Background information about the organization

  • Detailed list of requirements, including the organization's desired service level

  • Questions to the vendor about their organization, expertise of specific individuals documented in a skills matrix, support services, implementation services, training, and current clients

  • Request of a cost estimate for the proposed configuration with details about the initial cost and all ongoing costs

  • Request for a schedule of demonstrations and visit to the installation site of existing customers

Note

All government agencies and many commercial organizations require separation of duties during the bid review process. A professional purchasing manager will become the vendor's contact point to prevent the vendor from having any direct contact with the buyer. The intention is to eliminate any claims of bias or inappropriate influence over the final decision to purchase.

The RFP project team works with the steering committee to formulate a fair and objective review process. The organization may consult ISO 9001:2000 and ISO 9126-2:2003 standards for guidance. The proposed software could be evaluated by using the CMM. In addition, ISACA's Control Objectives for Information and related Technology (CObIT) provides valuable information to be considered when reviewing a vendor and their products.
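
One common technique for keeping the review fair and objective is a weighted scoring matrix: each requirement from the RFP receives a weight reflecting its importance, each vendor response is scored against every requirement, and the weighted totals are compared. The sketch below uses hypothetical criteria, weights, and scores; a real review would derive these directly from the RFP requirements list.

```python
# Illustrative weighted scoring matrix for comparing vendor proposals.
# Criteria, weights, and scores are example values only.

criteria_weights = {            # weights reflect relative importance (sum to 1.0)
    "meets business requirements": 0.35,
    "internal controls provided": 0.20,
    "total cost of ownership": 0.20,
    "vendor support and training": 0.15,
    "vendor financial stability": 0.10,
}

vendor_scores = {               # score each criterion from 1 (poor) to 5 (excellent)
    "Vendor A": {"meets business requirements": 4, "internal controls provided": 3,
                 "total cost of ownership": 5, "vendor support and training": 4,
                 "vendor financial stability": 3},
    "Vendor B": {"meets business requirements": 5, "internal controls provided": 4,
                 "total cost of ownership": 3, "vendor support and training": 3,
                 "vendor financial stability": 5},
}

def weighted_total(scores, weights):
    """Sum of score multiplied by weight across all criteria."""
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

for vendor, scores in vendor_scores.items():
    print(f"{vendor}: {weighted_total(scores, criteria_weights):.2f}")
```

The highest weighted total does not automatically win; the result simply documents how each proposal compares against the agreed criteria and provides evidence that the decision was objective.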

As an IS auditor, you should remember that your goal is to be thorough, fair, and objective. Care should be given to ensure that the requirements and review do not grant favor toward a particular vendor. The reviews are actually a form of audit and should include the services of an internal or external IS auditor. It is essential that vendor claims are investigated to ensure that the software will fulfill the desired business objectives.

Reviewing Vendor Proposals

The systematic process of reviewing vendor proposals is a project unto itself. Each proposal has to be scrutinized to ensure compliance with the requirements identified in the original RFP documents provided to the vendor. You need to ask the following questions:

  • Does the proposed system meet the organization's defined business requirements?

  • Does the proposed system provide an advantage that our competitors will not have, or does the proposed system provide a commodity function similar to that of our competitors?

  • What is the estimated implementation cost measured in total time and total resources?

  • How can the proposed benefits be financially calculated? The cost of the system and the revenue it generates should be noticeable in the organization's financial statement. To calculate return on investment, the cost savings (or revenue generated) is divided by the total cost of the system, including manpower, and the result is identified as a line item in the profit and loss statement. (A brief calculation sketch follows this list.)

  • What enhancements are required to meet the organization's objectives? Will major modifications be required?

  • What is the level of support available from the vendor? Support includes implementation assistance, training, software update, system upgrade, emergency support, and maintenance support.

  • Has a risk analysis been performed with consideration of the ability of the organization and/or vendor to achieve the intended goal?

  • Can the vendor provide evidence of financial stability?

  • Will the organization be able to obtain rights to the program source code if the vendor goes out of business? Software Escrow refers to placing original software programs and design documentation into the trust of a third party (similar to financial escrow). The original software is expected to remain in confidential storage. If the vendor ceases operation, the client may obtain full rights to the software and receive it from the escrow agent. A small number of vendors may agree to escrow the source code, whereas most would regard the original programs as an intellectual asset that can be resold to another vendor.
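
The following is a minimal sketch of the return on investment arithmetic referenced in the list above. ROI conventions vary between organizations; this example shows both a simple benefit-to-cost ratio and a net ROI percentage, and all of the figures are made-up assumptions.

```python
# Illustrative ROI sketch. All figures are assumptions for the example.
total_cost = 250_000        # total cost of the system, including manpower
annual_benefit = 400_000    # cost savings or revenue generated per year

benefit_to_cost = annual_benefit / total_cost            # benefit per dollar spent
net_roi = (annual_benefit - total_cost) / total_cost     # 0.60, i.e., 60 percent
payback_years = total_cost / annual_benefit              # time to recover the cost

print(f"Benefit-to-cost ratio: {benefit_to_cost:.2f}")
print(f"Net ROI: {net_roi:.0%}")
print(f"Payback period: {payback_years:.2f} years")
```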

Tip

Modern software licenses provide only for the right to benefit from the software's use, not software ownership.

One of the major problems in reviewing a vendor is the inability to get a firm commitment in writing for all issues that have been raised. There are major vendors that will respond to the RFP with a lowball offer that undercuts the minimum requirements. Their motive is to win by low bid and then overcharge the customer with expensive change orders to bring the implementation up to the customer's stated objective.

Note

A CISA reviews the documentation of business needs and that of the proposed system. The objective is to ensure that the system is properly aligned to business requirements and contains the necessary internal controls.

Change Management

The accepted method of controlling changes to software is to use a change control board (CCB). Members of the change control board include IT managers, quality control, user liaisons from the business units, and internal auditors. A vice president, director, or senior manager presides as the chairperson. The purpose of the board is to review all change requests before determining whether authorization should be granted. This fulfills the desired separation of duties. Change control review must include input from business users. Every request should be weighed to determine business need, required scope, level of risk, and preparations necessary to prevent failure.

You can refer to the client organization's policies concerning change control. You should be able to determine whether separation of duties is properly enforced. Every meeting should include complete tracking of current activities, and minutes should be recorded. Approval should be a formal process. The ultimate goal is to prevent business interruption. This is performed by following the principles of version control, configuration management, and testing. We discuss separation of duties in additional detail in Chapter 6, "IT Service Delivery."

Managing the Software Project

Let's move on to a discussion of the challenges in managing a software development project. In this section, you'll learn about the two main viewpoints for managing software development. You'll then take a closer look at the role of traditional project management in software development.

Choosing an Approach

There are two opposing viewpoints on managing software development: evolutionary and revolutionary.

The traditional viewpoint promotes evolutionary development. The evolutionary view is that the effort for writing software code and creating prototypes is only a small portion of software development. The most significant work effort occurs during the planning and design phase. The evolutionary approach works on the premise that the number one source of failures is a result of errors in planning and design. Evolutionary software may be released in incremental stages beginning with a selected module used in the architecture of the first release. Subsequent modules will be added to expand features and improve functionality. The program is not finished until all the increments are completed and assembled. The evolutionary development approach is designed to be integrated into traditional software life cycle management.

The opposing view is that a revolution is required for software development. The invention of advanced fourth-generation programming languages (4GL) empowers business users to develop their own software without the aid of a trained programmer. This approach is in stark contrast to the traditional view of developing specific requirements with detailed specifications before writing software. The revolutionary development approach is based on the premise that business users should be allowed to experiment in an effort to generate software programs for their specific needs. The end user holds all the power of success or failure under this approach. The right person might produce useful results; however, the level of risk is substantially greater. The revolutionary approach is difficult to manage because it does not fit into traditional management techniques. Lack of internal controls and failure to obtain objectives are major concerns in the revolutionary development approach.

Note

The analogy to revolutionary development would be to tell a person to go write their own software. A tiny number of individuals would have the competence necessary to be successful.

Using Traditional Project Management

Evolutionary software development is managed through a combination of the System Development Life Cycle (SDLC) and traditional project management. We covered the basics of project management using the Project Management Institute (PMI) methodology in Chapter 1, "Secrets of a Successful IS Auditor." The SDLC methodology—which is discussed in detail in the following section—addresses the specific needs of software development, but still requires project management for the nondevelopment business management functions.

When using traditional project management, the advantages include the Program Evaluation and Review Technique (PERT) with the Critical Path Method (CPM). You will need to be aware of the two most common models used to illustrate a software development life cycle: the waterfall model and the spiral model.
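
PERT relies on a three-point estimate for each task: an optimistic (O), a most likely (M), and a pessimistic (P) duration, combined as (O + 4M + P) / 6. The sketch below applies that formula to a few hypothetical tasks; the task names and durations are illustrative assumptions, and the critical path in a real plan would be determined from the task dependencies.

```python
# Illustrative PERT three-point estimating sketch. Task names and durations
# (in days) are example assumptions.

tasks = {
    # task: (optimistic, most likely, pessimistic)
    "requirements definition": (10, 15, 25),
    "system design": (15, 20, 35),
    "development": (30, 45, 70),
}

def pert_expected(optimistic, most_likely, pessimistic):
    """Classic PERT weighted average: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

total = 0.0
for name, (o, m, p) in tasks.items():
    estimate = pert_expected(o, m, p)
    total += estimate
    print(f"{name}: {estimate:.1f} days")

print(f"Expected duration if these tasks are sequential on the critical path: {total:.1f} days")
```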

Waterfall Model

Evolutionary software development is an iterative process of requirements, prototypes, and improvement. In the 1970s, Barry Boehm used W.W. Royce's famous waterfall diagram to illustrate the software development life cycle. A simplified version of the waterfall model used by ISACA is shown in Figure 5.1.

Simplified waterfall model (W.W. Royce)

Figure 5.1. Simplified waterfall model (W.W. Royce)

Based on the SDLC phases, this simplified model assumes that development in each phase will be completed before moving into the next phase. That assumption is not very realistic in the real world. Changes are discovered that regularly require portions of software to undergo redevelopment.

Boehm's version of the software life cycle model contained seven phases of development. Each of the original phases included validation testing with a backward loop returning to the previous phase. The backward loop provides for changes in requirements during development. Changes are cycled back to the appropriate phase and then regression-tested to ensure that the changes do not produce a negative consequence. Figure 5.2 shows Boehm's model as it appeared in 1975 from the Institute of Electrical and Electronics Engineers (IEEE).

Spiral Model

About 12 years later, Boehm developed the spiral model to demonstrate the software life cycle including evolutionary versions of software. The original waterfall model implied management of one version of software from start to finish. This new spiral model provided a simple illustration of the life cycle that software will take in the development of subsequent versions. Each version of software will repeat the cycle of the previous version while adding enhancements. Figure 5.3 shows the cycle of software versions in the spiral model.

Boehm's modified waterfall model

Figure 5.2. Boehm's modified waterfall model

Spiral model for software life cycle

Figure 5.3. Spiral model for software life cycle

Notice how the first version starts in the planning quadrant of the lower left and proceeds through requirements into risk analysis and then to software development. After the software is written, we have our first version of the program. The planning cycle then commences for the second version, following the same path through requirements, risk analysis, and development. The circular process will continue for as long as the program is maintained.

Overview of the System Development Life Cycle

All computer software programs undergo a life cycle of transformation during the journey from inception to retirement. The System Development Life Cycle (SDLC) used by ISACA is designed as a general blueprint for the entire life cycle. A client organization may insert additional steps in their methodology. This international SDLC model comprises seven unique phases with a formal review between each phase (see Figure 5.4).

Tip

Auditors will encounter SDLC models with only five or six phases. Upon investigation, it becomes obvious that someone took an inappropriate shortcut, skipping one of the seven phases. A smart auditor will pick up on this lack of understanding to investigate the organization further and discover any additional weakness created by this mistake. Failure to implement all seven phases indicates that a major control failure is present.

Let's start with a simple overview of SDLC:

Phase 1: Feasibility Study

This phase focuses on determining the strategic benefits that the new system would generate. Benefits can be financial or operational. A cost estimate is compared to the anticipated payback schedule. Maturity of the business process and personnel capabilities should be factored into the decision. Three primary goals in phase 1 are as follows:

  • Define the objectives with supporting evidence. New policies might be created to demonstrate support for the stated objectives.

  • Perform preliminary risk assessment.

  • Agree upon an initial budget and expected return on investment (ROI).

Phase 2: Requirements Definition

The steering committee creates a detailed definition of needs. We discussed this topic a few pages ago. The objective is to define inputs, outputs, current environment, and proposed interaction. The system user should participate in the discussion of requirements. In phase 2 the goals include the following:

  • Collect specifications and supporting evidence.

  • Identify which standards will be implemented in the specifications.

  • Create a quality control plan to ensure that the design remains compliant to the specifications.

Seven phases of SDLC

Figure 5.4. Seven phases of SDLC

Phase 3: System Design or Selection

In phase 3, the objective is to plan a solution (strategy) using the objectives from phase 1 and specifications from phase 2. The decision to buy available software or build custom software is based on management's determination regarding fitness of use. The client moves in one of two possible directions based on whether the decision is to build or to buy:

Build (Design)

It was decided that the best option is to build a custom software program. This decision is usually reached when a high degree of customization is required. Efforts focus on creating detailed specifications of internal system design. Program interfaces are identified. Database specifications are created by using entity-relationship diagrams (ERDs). Flowcharts are developed to document the business logic portion of design.

Buy (Selection)

The decision is to buy a commercial software program. The RFP process is used to select the best vendor and product available based on the specification created in phase 2.

Phase 4: Development or Configuration

The client continues down one of two possible directions based on the earlier decision of build versus buy:

Build (Development)

The design specifications, ERD, and flowcharts from phase 3 will become the master plan for writing the software. Programmers are busy writing the individual lines of program code. Prototypes are built for functional testing. Software undergoes certification testing to ensure that everything will work as intended without any surprises or material defects. Component modules of software will be written, tested, and submitted for user approval. The first stages of user acceptance testing occur during this phase.

Buy (Configuration)

Customization is typically limited to program configuration settings with a limited number of customized reports. The selection process for customization choices should be a formal project.

Phase 5: Implementation

This phase is common to both buy and build decisions. The new software is installed using the proposed production configuration. Everyone from the support staff to the user is trained in the new system. Final user acceptance testing begins. The system undergoes a process of final certification and management accreditation prior to approval for production use:

  • Certification is a technical process of testing the finished design and the integrity of the chosen configuration.

  • Accreditation represents management's formal acceptance of the complete system as implemented.

Accreditation includes the environment, personnel, support documentation, configuration, and technology. With formal management accreditation, the approved implementation may now begin production use (go live).

Phase 6: Postimplementation

After the system has been in production use, it is reviewed for its effectiveness to fulfill the original objectives. The implementation of internal controls is also reviewed. System deficiencies are identified. Goals in phase 6 include the following:

  • Compare performance metrics to the original objectives.

  • Analyze lessons learned.

  • Re-review the specifications and requirements annually.

  • Implement requests for new requirements, updates, or disposal.

The last step in phase 6 is to perform an ROI calculation comparing cost to the actual benefits received. Over time, the operating requirements will always change.

Phase 7: Disposal

The final phase is the proper disposal of equipment and purging of data. Assets must undergo a formal review process to determine when the system can be shut down for dismantling. Legal requirements may prohibit the system from being completely shut down. In phase 7, the goals include the following:

  • Archive old data.

  • Mark retention requirements and specify destruction date (if any). Be aware that certain types of records may need to be retained forever.

  • Management signs a formal authorization for the disposal and formally accepts any resulting liability.

If approved for disposal, the system data must be archived, remnants purged from the hardware, and equipment assets disposed of in an acceptable manner. Nobody within the organization should profit from the system disposal.

Note

Be careful not to confuse the SDLC with the Capability Maturity Model (CMM). A system life cycle covers the aspects of selecting requirements, designing software, installation, operation, maintenance, and disposal. The CMM focuses on metrics of maturity. CMM can be used to describe the maturity of IT governance controls.

Now that you have a general understanding of the SDLC model, we will discuss the specific methods used in each phase. These methods are designed to accomplish the stated SDLC objectives.

Phase 1: Feasibility Study

The Feasibility Study phase begins with the initial concept of engineering. In this phase, an attempt is made to determine a clearly defined need and the strategic benefits of the proposed system. A business case is developed based on initial estimates of time, cost, and resources. To be successful, the feasibility study will combine traditional project management with software development cost estimates.

Let's start with the business side of feasibility. The following points should be discussed and debated, and the outcome agreed upon with appropriate documentation:

  • Perception of need. Describe the present situation while defining a specific need to be met.

  • Link the need to a specific mission objective within the long-term strategy.

  • State the desired outcome.

  • Identify specific indicators of success and indicators of failure.

  • Perform a preliminary risk assessment. The outcome should include a statement of the security classification necessary if the decision is to proceed. Will it be common knowledge, or will it involve business secrets, classified data, or the need for other special handling?

  • Make an assessment of alternatives (AoA). Determine formal and informal criteria in support of the decision for whichever option is selected as the best choice. Document all the answers.

  • Prepare a preliminary budget for investment review. Traditional techniques need to be combined with an expert estimation of software development costs.

The most common model for estimating software development cost is the Constructive Cost Model, which bases its forecast on an estimated count of lines of program code or on Function Point Analysis. Let's begin with the Constructive Cost Model.

Software Cost Estimation

The Constructive Cost Model (COCOMO) was developed by Boehm in 1981. This forecasting model provides a method for estimating the effort, schedule, and cost of developing a new software application. The original version is obsolete because of evolutionary changes in software development. COCOMO was replaced with COCOMO II in 1995.

The COCOMO II model provides a solid method for performing "what if" calculations that will show the effect of changes on the resources, schedule, staffing, and predicted cost. The COCOMO II model deals specifically with software programming activities but does not provide a definition of requirements. You must compile your requirements before you can use either COCOMO model. COCOMO II templates are available on the Internet to run in Microsoft Office Excel.

The COCOMO II model permits the use of three internal submodels for the estimations: Application Composition, Early Design, or Post Architecture. Within the three internal submodels, the estimator can base their forecast on a count of source lines of code or Function Point Analysis.

A Source Lines of Code (SLOC) forecast estimates effort by counting the individual lines of program source code, regardless of the embedded design quality. This method has been widely used for more than 40 years and is still used despite advances in 4GL programming tools. It is important to understand that counting lines of code does not measure efficiency. The most efficient program could have fewer lines of code, and less-efficient software could have more. A program with fewer lines of code typically runs faster, and smaller programs have the added advantage of being easier to debug.
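
The general shape of a COCOMO-style calculation is Effort = A x (KSLOC)^E x (product of effort multipliers), where the size exponent and cost-driver multipliers raise or lower the estimate. The sketch below uses nominal, illustrative coefficients; the calibrated scale factors and cost drivers published for COCOMO II differ, so treat this only as a demonstration of the arithmetic.

```python
# Illustrative COCOMO-style effort estimate. The coefficient, exponent, and
# multipliers below are nominal example values, not the calibrated COCOMO II constants.

def estimate_effort(ksloc, coefficient=2.94, scale_exponent=1.10, effort_multipliers=None):
    """Effort in person-months = A * (KSLOC ** E) * product of effort multipliers."""
    product = 1.0
    for multiplier in (effort_multipliers or []):
        product *= multiplier
    return coefficient * (ksloc ** scale_exponent) * product

# Example: 50,000 estimated source lines of code (50 KSLOC) with two
# hypothetical cost drivers (high required reliability, experienced team).
effort_pm = estimate_effort(50, effort_multipliers=[1.10, 0.85])
print(f"Estimated effort: {effort_pm:.0f} person-months")
```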

Function Point Analysis (FPA) is a structured method for classifying the required components of a software program. FPA was designed to overcome shortfalls in the SLOC method of counting lines in programs. The FPA method (see Figure 5.5) divides all program functions into five classes:

  • External input data from users and other applications

  • External output to users, reports, and other applications

  • External inquiries from users and other applications

  • Internal file structure defining where data is stored inside the database

  • External interface files defining how and where data can be logically accessed

The components in each of the five classes are assigned a complexity ranking of low, average, or high. Each ranking is multiplied by a numerical weighting factor, and the results are tallied to produce an estimate of the work required (see Figure 5.6).

Concept overview of Function Point Analysis

Figure 5.5. Concept overview of Function Point Analysis

Calculating function points

Figure 5.6. Calculating function points
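
The following is a minimal sketch of the tally described above. The complexity weights follow the commonly published IFPUG-style table for the five classes, but the component counts are hypothetical, and a defensible count in practice requires formal FPA training.

```python
# Illustrative unadjusted function point (UFP) tally. Weights follow the
# commonly cited IFPUG-style table; the component counts are example assumptions.

WEIGHTS = {
    # class: (low, average, high)
    "external inputs": (3, 4, 6),
    "external outputs": (4, 5, 7),
    "external inquiries": (3, 4, 6),
    "internal files": (7, 10, 15),
    "external interface files": (5, 7, 10),
}
COMPLEXITY_INDEX = {"low": 0, "average": 1, "high": 2}

# Hypothetical counts for the proposed system: {class: {complexity: count}}
counts = {
    "external inputs": {"low": 6, "average": 4, "high": 2},
    "external outputs": {"average": 5, "high": 1},
    "external inquiries": {"low": 3},
    "internal files": {"average": 2},
    "external interface files": {"low": 1, "high": 1},
}

unadjusted_fp = 0
for component_class, by_complexity in counts.items():
    for complexity, count in by_complexity.items():
        weight = WEIGHTS[component_class][COMPLEXITY_INDEX[complexity]]
        unadjusted_fp += count * weight

print(f"Unadjusted function points: {unadjusted_fp}")
```

The running total can then be compared against the original estimate over time, which is one way to monitor the scope creep discussed below.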

FPA is designed for an experienced and well-educated person who possesses a strong understanding of functional perspectives. Typically this is a senior-level programmer. An inexperienced person will get a false estimate. This model is intended for counting features that are specified in the early design. It will not create the initial definition of requirements. Progress can be monitored against the function point estimate to assess the level of completion. Changes can be recorded to monitor scope creep. Scope creep refers to the constant changes and additions that can occur during the project. Scope creep may indicate a lack of focus, poor communication, lack of discipline, or an attempt to distract the user from the project team's inability to deliver to the original project requirements.

Note

You should acquire formal training and consult a Function Point Analysis training manual if you are ever asked to perform FPA.

The overall cost budget should include an analysis of the estimated personnel hours by function. The functions include clerical duties, administrative processes, analysis time, software development, equipment, testing, data conversion, training, implementation, and ongoing support.

Phase 1 Review and Approval

Best practices in software development require a review meeting at the end of each phase to determine whether the project should continue to the next phase. The review is attended by an executive chairperson, project sponsor, project manager, and the suppliers of key deliverables.

The meeting is opened by the chairperson. The project manager provides an overview of the business case and presents the initial assessment reports. Presentations are made to convey the results of risk management analysis for the project. Project plans and the initial budget are presented for approval. Meeting attendees review the phase 1 plans to ensure that the skills and resource requirements are clearly understood.

At the end of the phase review meeting, the chairperson determines whether the review has passed or failed based on the evidence presented. In the real world, a third option may exist: deciding that the project should be placed on temporary hold and reassessed at a future date. All outstanding issues must be resolved before granting approval to pass the phase review.

Formal approval is evidenced by a signed project charter accompanied by a preliminary statement of work (SOW). The project manager is responsible for preparing the project plan documentation. The sponsor grants formal authority by physically signing the documents. Without either of these documents, chances are a dispute will evolve into a conflict that compromises the project. A signed charter and SOW are frequently used to force cooperation by other departments or to prevent interruptions by politically motivated outsiders.

Auditor Interests in the Feasibility Study Phase

In the Feasibility Study phase, you should review the documentation related to the initial needs analysis. As an auditor, you review the risk mitigation strategy. You ask whether an existing system could have provided an alternative solution. The organization's business case and cost justifications are verified to determine whether their chosen solution was a reasonable decision. You also verify that the project received formal management approval before proceeding into the next phase.

Phase 2: Requirements Definition

The Requirements Definition phase is a documentation process focused on discovering the proposed system's business requirements. Defining the requirements requires a broader approach than the initial feasibility study. It is necessary to develop a list of specific conditions in which the system is expected to operate. Criteria need to be developed to specify the input and output requirements along with the system boundaries. Let's review the basic steps that can help define the requirements:

  • Functional statement of need as described in phase 1.

  • Competitive market research. Has the auditee defined what the customer wants? What does the competition offer?

  • Identification of legal requirements for data security. Somebody needs to download each regulation and create a list of the specific "shall" statements, referenced by page, paragraph, and line number. This will eliminate scope creep and quell attempts to subvert the project scope. Compliance with the Payment Card Industry (PCI) data security requirements is one small example.

  • Identification of the type of reports required for legal filings, both government and customer.

  • Formal selection of security controls. Ignorance of the law is a wonderful way to assure a speedy conviction. The same concept applies to apathy.

  • Software conversion study. How will the data be migrated to the new system? When will the switch to production occur?

  • Cost benefit analysis to justify selection of features or functionality. It's doubtful that the first version will have all the features that everyone imagined. However, security and controls should never be compromised.

  • Risk management plan. A trade-off always occurs in relation to cost, time, scope, and features. An example is to limit internal use to a physical area rather than to violate security by allowing remote access. Later versions may include the additional security controls necessary for safe remote access. The number of users may be initially restricted. Technical risks must be managed.

  • Analysis of impact with business cycles. How could the software be developed, tested, and later deployed without conflicting with the business cycle? Traditional project management plans are created to control the tasks.

In this phase of gathering detailed requirements, the entity-relationship diagram (ERD) technique is often used. The ERD helps define high-level relationships corresponding to a person, data element, or concept that the organization is interested in implementing. ERDs contain two basic components: the entity and the relationship between entities.

An entity can be visualized as a database comprising reports, index cards, or anything that contains the data to be used in the design. Each entity has specific attributes that relate to another entity. Figure 5.7 shows the basic design of an ERD.

Entity-relationship diagram

Figure 5.7. Entity-relationship diagram

It is a common practice to focus first on defining the data that will be used in the program. This is because the data requirement is relatively stable. The purpose of the ERD exercise is to design the data dictionary. The data dictionary provides a standardized term of reference for each piece of data in the database. After the data dictionary is developed, it will be possible to design a database schema. The database schema represents an orderly structure of all data stored in the database.
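
To make these terms concrete, the following sketch shows a tiny data dictionary and two related entities expressed as Python data classes. The entity names, attributes, and one-to-many relationship are hypothetical examples used only to illustrate how a data dictionary and an ERD translate into a database schema.

```python
# Illustrative sketch: a tiny data dictionary and two related entities.
# Entity names, attributes, and the relationship are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

# Data dictionary: a standardized definition for each piece of data.
DATA_DICTIONARY = {
    "customer_id": "Unique numeric identifier assigned to each customer",
    "order_id":    "Unique numeric identifier assigned to each order",
    "order_total": "Monetary value of an order in local currency",
}

@dataclass
class Order:                      # entity: Order
    order_id: int
    order_total: float

@dataclass
class Customer:                   # entity: Customer
    customer_id: int
    name: str
    orders: List[Order] = field(default_factory=list)  # relationship: one customer places many orders

# Example usage
alice = Customer(customer_id=1, name="Alice")
alice.orders.append(Order(order_id=100, order_total=59.95))
print(DATA_DICTIONARY["order_total"], "->", alice.orders[0].order_total)
```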

After the ERD is complete, it is time to begin construction of transformation procedures used to manipulate the data. The transformation procedures detail how data will be acquired and logically transformed by the application into usable information. Transformation procedures exceed the capability of fourth-generation (4GL) programming tools. It takes old-fashioned knowledge of the business process and the aid of a skilled software engineer (programmer) to refine an idea into usable logic. Business objectives should always win over the programmer's desire to show off the latest tools, or worse, to subvert a good idea that requires more effort.

High-level flowcharts define portions of the required business logic. A low-level flowchart illustrates the details of the transformation process from beginning to end. The flowchart concept will map each program process, decision choice, and handling of the desired result. The flowchart is a true blueprint of the business logic used in the program. Figure 5.8 shows a simple program flowchart.

Figure 5.8. Program flowchart
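
As a small illustration of how one path through such a flowchart becomes a transformation procedure, the following Python sketch applies a made-up business rule to a raw order record. The field names, threshold, and rule are hypothetical and exist only to show the shape of the logic.

    def process_order(order):
        """Hypothetical transformation procedure mirroring a simple flowchart."""
        # Decision 1: reject incomplete input (an input validation control)
        if order.get("quantity") is None or order.get("unit_price") is None:
            return {"status": "rejected", "reason": "missing data"}

        total = order["quantity"] * order["unit_price"]

        # Decision 2: orders above an assumed threshold require manual approval
        if total > 10_000:
            return {"status": "pending approval", "total": total}

        return {"status": "accepted", "total": total}

    print(process_order({"quantity": 5, "unit_price": 19.99}))
    print(process_order({"quantity": 200, "unit_price": 75.00}))

Each if statement corresponds to a decision diamond on the flowchart; the returned dictionary is the handled result.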

The ERD and flowcharts from phase 2 provide the foundation for the system design in SDLC phase 3. Security controls are added into the design requirement during phase 2. You should understand that internal controls are necessary in all software designs.

Internal Controls

The internal controls for user account management functions are included in this phase to provide for separation of duties:

  • Preventative controls such as data encryption and unique user logins are specified.

  • Detective controls for audit trails and embedded audit modules are added.

  • Corrective controls for data integrity are included. Features that are not listed in the requirements phase will most likely be left out of the design.

It is important that the requirements are properly verified and supported by a genuine need. Each requirement should be traced back to a source document detailing the actions necessary for performance of work or legal compliance.

A gap analysis is used to determine the difference between the current environment and the proposed system. Plans need to be created to address the deficiencies that are identified in the gap analysis. The deficiencies may include personnel, resources, equipment, or training.

Phase 2 Review and Approval

At the end of phase 2, a phase 2 review meeting is held. This meeting is similar in purpose to the previous phase 1 review. This time, the review focuses on success criteria in the definition of software deliverables and includes a timeline forecast with date commitments. The proposed system users need to submit their final feedback assessment and comments before approval is granted to proceed into phase 3. The purpose of the phase 2 review meeting is to gain the authority to proceed with preliminary software design (phase 3). Once again, all outstanding issues need to be resolved before approval can be granted to proceed to the next phase.

Auditor Interests in the Requirements Definition Phase

You should obtain a list of detailed requirements. The accuracy of the requirements can be verified by a combination of desktop review of documentation and interviews with appropriate personnel. Conceptual ERD and flowchart diagrams should be reviewed to ensure that they address the needs of the user.

The Requirements Definition phase creates an output of detailed success factors to be incorporated into the acceptance test specifications. As an auditor, you will verify that the project plans and estimated costs have received proper management approval.

Phase 3: System Design

The System Design phase expands on the ERD and initial concept flowcharts. Users of the system provided a great deal of input during phase 2, which is then used in this phase for in-depth flowcharting of the logic for the entire system. The general system blueprint is decomposed into smaller program modules.

Internal software controls are included in the design to ensure a separation of duties within the application. The work breakdown structure is created for effective allocation of resources during development. Design and resource planning may be one of the longest phases in the planning cycle. Quality is designed into a system rather than inspected after the fact.

The 1-10-100 rule provides an excellent illustration of the costs of quality-related problems. Figure 5.9 shows that for every dollar spent preventing a design flaw in planning, design, and testing, the organization can avoid the escalating cost of nonconformance failures:

  • $100 to correct a problem reaching the customer

  • $10 to correct a problem or mistake during production

  • $1 to prevent a problem

Figure 5.9. 1-10-100 rule of quality

According to quality guru Philip Crosby, there are two primary components of quality—the extra expenses known as the price of nonconformance, and the savings in the price of conformance:

Price of nonconformance (PONC)

This represents the added costs of not doing it right the first time. Think of this as the extra time and cost of rework or uncompensated warranty repair. It's not uncommon for the overall cost of the rework to exceed your original profits.

Price of conformance (POC)

Avoiding the headache by doing it right the first time is known as the price of conformance (POC). Employee training and user training are POC expenses that conserve time and money by avoiding the added cost of nonconformance (PONC).

Quality failures will occur because of variation. Poor planning, flawed design, and poor management are the most frequent sources of failure. We can categorize quality failures as common or specific in nature:

Common quality failures

Common failures are the result of inherent variations inside the process, which are difficult to control. Consider writing the software programs for a robotic assembly line that paints products. Extreme heat affects the finish or adhesion of drying paint. New people may have been hired during production without enough training or experience. These common failures lie inside the production process and can affect the quality of a paint job. Management would be held responsible for fixing the problem because it was inside the process. It's management's responsibility to design a solution to prevent the problem.

Special quality failures

Special failures occur when something changes outside the normal process. What if the weather was fine, but the paint finish came out wrong? Upon investigation, it was discovered that the problem resulted from using the wrong type of paint or an unapproved substitution. This special failure is something the workers should be able to fix by working with their purchasing agent and the vendor. It's more of a supply issue than a process problem. Improvements in employee discipline to follow change control for substitutions would prevent the defect.

Customer Satisfaction

The best way to create loyal customers is to exceed their expectations. It's important to deliver within the original scope to satisfy customer needs. Failing project managers may make the dangerous mistake of dressing up the deliverables with shiny extras instead of what was promised, a bait-and-switch technique referred to as gold plating. If you gold-plate doggy poop, it's nice and shiny, but still just fancy poop.

We discussed Deming's planning cycle in Chapter 2 as it related to audit planning. Figure 5.10 shows that the Plan-Do-Check-Act cycle also applies to software design.

Figure 5.10. Planning for quality during design (Plan, Do, Check, Act)

Phase 3 is the best time for the software developer to work directly with the user. Most professional programmers encourage the series of meetings needed to refine the design before a single line of program code is written. These meetings help convert user ideas and whims into a structured set of deliverables. Time should be spent on creating screen layouts, designing report formats, and matching the users' desired workflow. Initial plans for developing a prototype in phase 4 are created during this System Design phase.

A significant output of the design phase is identifying how each software function can be tested. Data derived during design provides the base criteria for behavior testing and inspection during phase 4 development testing. Data from user meetings provides a solid basis for user acceptance testing. The documentation created during the design phase initially serves as the road map for programmers during development. Later, the phase 3 design documentation will provide a foundation for support manuals and training.

Reverse Engineering and Reengineering

In certain situations, reverse engineering may be used to accelerate the creation of a working system design.

Note

The 2003 movie Paycheck starring Ben Affleck was themed around reverse engineering a competitor's product to jumpstart product development for Affleck's employer.

Reverse engineering is a touchy subject. A software decompiler will convert programs from machine language to a human-readable format. The majority of software license agreements prohibit the decompiling of software in an effort to protect the vendor's intellectual design secrets.

An existing system may loop back into phase 3 for the purpose of reengineering. The intention would be to update the software by reusing as many of the components as is feasible. Depending on the situation, reengineering may support major changes to upgrade the software for newer requirements.

Software Design Baseline

At the end of the System Design phase, a software baseline is created from the design documents. The baseline incorporates all the agreed-upon features that will be implemented in the initial version of software (or next version in the case of reengineering). This baseline is used to gain approval for a design freeze. The design freeze is intended to lock out any additional changes that could lead to scope creep.

Phase 3 Review and Approval

The phase 3 review meeting starts with a review of the detailed design for the proposed system. Engineering plans and project management plans are reviewed. Cost estimates are compared to the assumptions made in the business case. A comparison is made between the intended features and final design. Final system specifications, user interface, operational support plan, and test and verification plans are checked for completeness. Data from the risk analysis undergoes a review based on evidence. Approval is requested to proceed to the next phase. Once again, all outstanding issues must be resolved before proceeding to the next phase. Each of the stakeholders and sponsors should physically sign a formal approval of the design before allowing it to proceed into development. This administrative control enforces accountability for the final outcome.

Auditor Interests in the System Design Phase

You need to review the software baseline and design flowcharts. The design integrity of each data transaction should be verified. During the design review, you verify that processing and output controls are incorporated into the system. Input from the system's intended power users may provide insight into the effectiveness of the design.

It is important that the needs of the power users are implemented during the design phase. This may include special functions, screen layout, and report layout. You should have a particular interest in the logging of system transactions for traceability to a particular user. You look for evidence that a quality control process is in use during the software design activities. It is important to verify that formal management approval was granted to proceed to the next phase.

Warning

A smart auditor is wary of systems being allowed to proceed into development without formal approval. The purpose of IT governance is to enforce accountability and responsibility. Even the smallest, most insignificant system represents an investment of time, resources, and capital. None of these should be wasted, squandered, or misused.

Phase 4: Development

Now the time has come to start writing actual software in the Development phase. This process is commonly referred to as coding a program. Design planning from previous phases serves as the blueprint for software coding. The systems analysts support programmers with ideas and observations. The bulk of the work is the responsibility of the programmer who is tasked with writing software code.

Implementing Programming Standards and Quality Control

Standards and quality control are extremely important during the Development phase. A talented programmer can resolve minor discrepancies in the naming conventions, data dictionary, and program logic. Computer software programs will become highly convoluted unless the programmer imposes a well-organized structure during code writing. Unstructured software coding is referred to as spaghetti bowl programming, making reference to a disorganized tangle of instructions.

The preferred method of organizing software is to implement a top-down structure. Top-down structured programming divides the software design into distinct modules. If a top-down program structure were diagrammed, the result would look like an inverted tree. Within the tree, individual program modules (or subroutines) perform a unique function. Modules are logically chained together to form the finished software program. The modular design dramatically improves maintainability of the finished program. Individual modules can be updated and replaced with relative ease. By comparison, an unstructured spaghetti bowl program would be a nightmare to modify. Modular design also permits the delegation of modules to different teams of programmers. Each module can be individually tested prior to final assembly of the finished program.
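
The following Python sketch shows the flavor of a top-down structure, assuming a hypothetical payroll program: the main routine only coordinates, and each subordinate module performs one well-defined function that can be tested or replaced on its own. All names and values are invented for illustration.

    # Hypothetical top-down structure: main() delegates to single-purpose modules.

    def read_timecards():
        # A real module would read from a file or database.
        return [{"employee": "A100", "hours": 40}, {"employee": "B200", "hours": 35}]

    def calculate_pay(timecard, hourly_rate=20.0):
        return {"employee": timecard["employee"], "pay": timecard["hours"] * hourly_rate}

    def print_paychecks(paychecks):
        for check in paychecks:
            print(f"{check['employee']}: ${check['pay']:.2f}")

    def main():
        timecards = read_timecards()                       # module 1
        paychecks = [calculate_pay(t) for t in timecards]  # module 2
        print_paychecks(paychecks)                         # module 3

    if __name__ == "__main__":
        main()

Drawn as a diagram, main() sits at the root of the inverted tree, and the three functions hang beneath it as separate branches.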

Adhering to the Development Schedule

The software project needs to be managed to ensure adherence to the planned schedule. Scope creep with unforeseen changes can have a devastating impact on any project. It is common practice to allow up to a 10 percent variance in project cost and time estimates. In government projects, the variance is only 8 percent.

The development project will be required to undergo management oversight review if major changes occur in assumptions, requirements, or methodology. Management oversight review would also be warranted if the total program benefits or cost are anticipated to deviate by more than 8 percent for government or 10 percent in industry. The project schedule needs to be tightly managed to be successful. The change control process should be implemented to ensure that necessary changes are properly incorporated into the software development phase.

A version control system is required to track progress with all of the minor changes that naturally occur daily during development.

Writing Program Code

The effort to write program code depends on the programming language and development tool selected. Examples of languages include Common Business Oriented Language (COBOL), C language, Java, the Beginner's All-purpose Symbolic Instruction Code (BASIC), and Visual Basic. The choice of programming languages is often predetermined by the organization. If the last 20 years' worth of software was developed using COBOL, it might make sense to continue using COBOL.

Understanding Generations of Programming Languages

Computer programming languages have evolved dramatically over the past 50 years. The early programming languages were cryptic and cumbersome to write. This is where the term software coding originated. Each generation of software became easier for a human being to use. Let's walk through a quick overview of the five generations of computer programming languages:

First-generation programming language

The first-generation computer programming language is machine language. Machine language is written as hardware instructions that are easily read by a computer but illegible to most human beings. First-generation programming is very time-consuming but was useful enough to give the computer industry a starting point. The first generation is also known as 1GL. In the early 1950s, 1GL programming was the standard.

Second-generation programming language

The second generation of computer programming is known as assembly language, or 2GL. Programming in assembly language can be tedious but is a dramatic improvement over 1GL programming. In the late 1950s, 2GL programming was the standard.

Third-generation programming language

During the 1960s, the third generation (3GL) of programming languages began to make an impact. The third generation uses English-like statements as commands within the program, for example, if-then and goto. Examples of third-generation program languages include COBOL, Fortran, BASIC, and Visual Basic. Another example is the C programming language written by Dennis Ritchie, building on Ken Thompson's earlier B language. Most 3GL programs were used with manually written databases.

Fourth-generation programming language

During the late 1970s, the fourth-generation programming languages (4GL) began to emerge. These include prewritten database utilities. This advancement allowed for rapid development due to an embedded database or database interface. The fourth-generation design is a true revolution in computer programming. The programmer creates a template of the software desired by selecting program actions within the development tool. This is referred to as pseudocoding or bytecoding. The development tool then converts this pseudocode into actual program code. An untrained user could write a program that merely formats reports on a screen and allows a software-generation utility to write the software automatically. Figure 5.11 illustrates the general concept of pseudocoding inside a 4GL development tool.

Figure 5.11. Pseudocoding inside a 4GL development tool

A 4GL is designed to automate reports and the storage of data in a database. Unfortunately, it will not create the necessary business logic without the aid of a skilled programmer. An amateur using a 4GL can generate nice-looking form screens and databases. But the amateur's program will be no more than a series of buckets holding data files. The skilled programmer will be required to write transformation procedures (program logic) that turn those buckets of data into useful information. Examples of commercial 4GL development tools include Sybase's PowerBuilder, computer-aided software engineering (CASE) tools, and YesSoftware's CodeCharge Studio. 4GL is the current standard for software development.

Fifth-generation programming language

The fifth-generation programming languages (5GLs) are designed for artificial-intelligence applications. The 5GL is characterized as a learning system that uses fuzzy logic or neural weighting algorithms to render a decision based on likelihood. Google searches on the Internet use a similar design to assess the relevance of search results.

Figure 5.12 shows the hierarchy of the different generations of programming languages.

Figure 5.12. Generation levels of programming languages

Using Integrated Development Environment Tools

After the programming language has been selected, the next step is to choose the development tool. There are still some programmers able to sit down and write code manually by using the knowledge contained in their heads. This old-school approach usually creates very efficient programs with the smallest number of program lines.

The majority of programmers use an advanced fourth-generation development tool to write the actual program instructions. This software enables the programmer to focus on drawing higher-level logic while the tool generates the lower-level instructions, similar to what a manual programmer would have written. Simply put, a computer program writes the computer program.

The better development tools provide an integrated environment of design, code creation, and debugging. This type of development tool is referred to as an integrated development environment (IDE).

One of the best examples of an IDE is the commercial CASE tool software. You need to understand the basic principles behind CASE tools. CASE tools are divided into three functional categories that support SDLC phases 2, 3, and 4, respectively:

Upper CASE tools

Business and application requirements can be documented by using upper CASE tools. This provides support for the SDLC phase 2 requirements definition. Upper CASE tools permit the creation of ERD relationships and logical flowcharts.

Middle CASE tools

The middle CASE tools support detailed design from the SDLC phase 3. These tools aid the programmer in designing data objects, logical process flows, database structure, and screen and report layouts.

Lower CASE tools

The lower CASE tools are software code generators that use information from upper and middle CASE to write the actual program code.

You can see the relationship of CASE tools to the SDLC phases in the following diagram.

(Diagram: upper, middle, and lower CASE tools aligned with SDLC phases 2, 3, and 4)

Using Alternative Development Techniques

As a CISA, you should be aware of two alternative software development methods: Agile and Rapid Application Development (RAD). Each offers the opportunity to accelerate software creation during the Development phase. The client may want to use either of these methods in place of more-traditional development. Both offer distinct advantages for particular situations. Both also contain drawbacks that should be considered.

Agile Development Method

Agile uses a fourth-generation development environment to quickly develop prototypes within a specific time window. The Agile method uses time-box management techniques to force individual iterations of a prototype within a very short time span. Agile allows the programmer to just start writing a program without spending much time on preplanning documentation. The drawback of Agile is that it does not promote management of the requirements baseline. Agile does not enforce preplanning. Some programmers prefer Agile simply because they do not want to be involved in tedious planning exercises.

When properly combined with traditional planning techniques, Agile development can accelerate software creation. Agile is designed exclusively for use by small teams of talented programmers. Larger groups of programmers can be broken into smaller teams dedicated to individual program modules.

Note

The primary concept in Agile programming is to place greater reliance on the undocumented knowledge contained in a person's head. This is in direct opposition to capturing knowledge through project documentation.

Rapid Application Development Method

A newer integrated software development methodology is Rapid Application Development (RAD), which uses a fourth-generation programming language. RAD has been in existence for almost 20 years. It automates major portions of the software programmer's responsibilities within the SDLC.

RAD supports the analysis portion of SDLC phase 2, phase 3, phase 4, and phase 5. Unfortunately, RAD does not support aspects of phase 1 or phase 2 that are necessary for the needs of a major enterprise business application. RAD is a powerful development tool when coupled with traditional project management in the SDLC.

Building Prototypes

During the Development phase, it is customary to create system prototypes. A prototype is a small-scale working system used to test assumptions. These assumptions may be about user requirements, program design, or the internal logic used in critical functions. Prototypes usually are inexpensive to build and are created over a few days or weeks. The principal advantage of a prototype is that it permits change to occur before the major development effort begins.

Prototypes seldom have any internal control mechanisms. Each prototype is created as an iterative process, and the lessons learned are used for the next version. A successful prototype will fulfill its mission objective and validate the program logic. All development efforts will focus on the production version of the program after the prototype has proven successful.

Warning

There is always a serious concern that a working prototype may be rushed into production before it is ready for a production environment. Internal controls are typically absent from prototypes or insufficient for production use.

Compiling Software Programs

A computer program can be written as either a program script or a compiled program. Program scripts are written like movie scripts and contain instructions for the computer to follow. The programmer uses a scripting language such as Perl, JavaScript, or Visual Basic. The advantage of scripts is that they are easy to maintain. The program script is stored in human-readable form. The disadvantage is that program scripts must be run through a script interpreter, which makes execution slower. A script interpreter compiles a temporary version of the scripted program as it is running on the computer. The scripted program is considered a crystal box, or white box, because a trained human being could read the program script and decipher the structural design of the program.

Compiling programs is a process of converting human-readable instructions into machine-language instructions for execution. The human-readable version of software is referred to as source code. A computer programmer will compile programs to increase the execution speed of the software. A simple way to remember the definition is that source code is what the compiler started with. The compiled program is unreadable to humans. This unreadable version of the program is referred to as the object code.

Think of object code as the output object created by the compiler. Compiling software provides rudimentary protection of the program's internal logic from inquisitive people. The disadvantage of compiled programs is that reviewing the internal structural design would be practically impossible. The compiled program is essentially a black box. Figure 5.13 shows the different creation paths for compiled programs and program scripts.
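
The distinction can be demonstrated with Python's standard library. The snippet below compiles a human-readable statement into a code object, which serves here only as a rough analogy to object code; a native compiler such as a C compiler would go further and produce true machine-language instructions. The statement itself is an arbitrary example.

    import dis

    # Human-readable source code: what the programmer writes and maintains.
    source = "total = price * quantity + tax"

    # Compile the source into a lower-level code object.
    code_object = compile(source, "<example>", "exec")

    print(code_object.co_code)  # raw bytecode: meaningless to most readers
    dis.dis(code_object)        # a disassembler is needed just to interpret it

The readable line at the top is the source; the bytes printed at the bottom play the role of the object code.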

Computer programmers will usually compile multiple versions of a program during development and debug testing. Without proper management, this scenario could become a nightmare. How do you ensure that the latest copy is in use? It is the job of configuration and version management to provide a traceable history of multiple versions of computer software.

Implementing Configuration and Version Management

Managing a changing environment is a significant challenge. Constant changes make it difficult to remain organized and coordinated, no matter what you're trying to accomplish. But suppose, for example, that a company wants to release a new software product. During software development, multiple programmers may be working on different modules of the same program. For this example, let's name the program Report Whiz. The programming used in the individual modules for Report Whiz may have different levels of maturity to consider. For example, the screen-printing utility might be in version 1.1, while the report-writer module may be in version 6.0. By combining these two modules into Report Whiz, the result will become our finished configuration for Report Whiz version 1.0. Does it sound like this could get confusing?

Figure 5.13. Compiled programs versus scripts

Well, it can. That is the challenge. How will the company manage and track all these different components with the correct versions?

Version control is the tracking of all the tiny details inside both major and minor version changes. By tracking these tiny details, we can understand the internal construction of our finished software configuration. Detailed version control is the foundation of configuration management. With version control, we have a detailed configuration that is ready to be managed.

Configuration management is focused on management exercising control over the finished software version. The primary elements of configuration management are control, accounting, and reporting.

Configuration Control

Configuration control covers all design documentation, design changes, specifications, parts and assemblies, and manufacturing processes. Specifications include the informal notes, observations, and advice written on the documentation.

Configuration Accounting

Configuration accounting is the timely reporting of any modification to the originally agreed design documentation that occurs after the initial design release review. It's possible for engineering to design a product the organization is unable to build. CM accounting tracks the compromises and modifications necessary to produce a working product.

Configuration Reporting

Configuration reporting encompasses all the elements of control and accounting, going further to report the configuration as built and delivered to the customer, including any change or maintenance to the product after delivery. It becomes the life history of the product from conception through its useful service life until disposal.

The auditor needs to investigate how the client manages and records changes to a configuration. After obtaining this understanding, you need to ask who authorizes the changes. Finally, you need to ask how the changes are tested and accepted for production use.

Fortunately, there are software tools to assist software developers in managing version control. One of the most common commercial applications for tracking version changes is the Polytron Version Control System (PVCS). PVCS software contains a database that manages the tracking of programming changes and revisions of software code. In industry slang, we may refer to the PVCS function as a Top Copy or Latest Copy system.

The purpose of PVCS is to ensure that each programmer is working with the latest version of the software program code. During the day, the programmer checks out the latest copy of the program code from the PVCS database. The checkout process is similar to the checkout of books from your local library. The PVCS database is designed to synchronize the work of every programmer. A programmer checks out the latest software version and then uses that version to start writing program code. Each day the programmer returns their finished work to the PVCS-controlled library by using a check-in process. During check-in, the PVCS database presents the programmer with a list of any related changes made by the other programmers. This provides coordination for the team of programmers.
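
The check-out/check-in cycle can be pictured with a small conceptual sketch in Python. This is not the PVCS product or its interface, only an invented model of a controlled library that hands out the latest version and reports other programmers' changes at check-in.

    class SourceLibrary:
        """Conceptual sketch of a version-controlled program library."""

        def __init__(self):
            self.version = 1
            self.history = []  # list of (version, programmer, note)

        def check_out(self, programmer):
            # The programmer always starts from the latest version.
            print(f"{programmer} checked out version {self.version}")
            return self.version

        def check_in(self, programmer, based_on, note):
            # Report related changes made by others since this copy was taken.
            others = [h for h in self.history if h[0] > based_on and h[1] != programmer]
            if others:
                print(f"Notice for {programmer}: related changes {others}")
            self.version += 1
            self.history.append((self.version, programmer, note))
            print(f"{programmer} checked in version {self.version}: {note}")

    library = SourceLibrary()
    v = library.check_out("Alice")
    library.check_in("Bob", based_on=v, note="fixed report module")
    library.check_in("Alice", based_on=v, note="new screen layout")

When Alice checks her work back in, she is warned that Bob changed the report module while she was working, which is exactly the coordination the controlled library provides.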

Debugging Software

A vast assortment of errors occur naturally during the development process. These errors may include syntax errors, inconsistent naming structures, logic errors, and other common mistakes. Most online development tools will assist the programmer by debugging some of the errors. Using top-down structured programming techniques makes it easier to troubleshoot problems.

Testing the Software

During the Development phase, it is imperative that tests and verification plans are created to debug the software programs. Tests should be performed to validate processing accuracy. Test plans are created to uncover program flaws, manage defects, and search for unintended results. A logic path monitor can be used to provide programmers with information about errors in program logic.

During development, software testing occurs at multiple levels. Any deficiencies or errors need to be discovered before the finished program is implemented. There are four basic types of test methods:

White-box testing (for uncompiled programs)

Also known as crystal-box testing because it allows the programmer to view and to test the logic of procedures and data calculations. The intention is to verify each transformation process as data passes through the system. This can be an expensive and time-consuming process. This testing is commonly used for unit and integrity testing of self-developed software. Legal obstacles concerning ownership and proprietary rights may be encountered when attempting to use this type of testing on commercial software. Script-based software is human readable and therefore can be crystal-box tested.

Black-box testing (for compiled programs)

Intended to test the basic integrity of system processing. This is the most common type of test. The process is to put data through the system to see whether the results come out as expected. You do not get to see the internal logic structures; all you get is the output. Commercial software is compiled into a form that is nonreadable by humans. Black-box testing is the standard test process to run when you buy commercial software. Black-box testing is often used for user acceptance tests.

Functional, or validation, testing (for all programs)

Compares the system against the desired functional requirements. We want to see whether the product has met our objectives for its intended use.

Regression testing

Tests changes against all the existing software models to detect any conflicts. The purpose of regression testing is to ensure that modifications do not damage existing processes. During regression testing, internal controls are retested for integrity.
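
For example, a black-box style test examines only inputs and expected outputs, with no knowledge of the internal logic. The short Python unittest sketch below exercises a hypothetical discount function invented purely for illustration.

    import unittest

    def calculate_discount(order_total):
        """Hypothetical function under test: 10% discount on orders of $100 or more."""
        return round(order_total * 0.10, 2) if order_total >= 100 else 0.0

    class BlackBoxDiscountTest(unittest.TestCase):
        # Only the observable behavior is checked; internals are never inspected.
        def test_small_order_gets_no_discount(self):
            self.assertEqual(calculate_discount(99.99), 0.0)

        def test_large_order_gets_ten_percent(self):
            self.assertEqual(calculate_discount(250.00), 25.00)

    if __name__ == "__main__":
        unittest.main()

A white-box version of the same test would additionally trace the calculation inside calculate_discount, and a regression suite would rerun these cases after every later modification.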

All tests should follow a formal procedure in a separate testing environment. The following types of structured technical tests occur during the Development phase:

  • Program module tests (unit test)

  • Program interface tests (integration test)

  • Internal security control tests

  • Processing volume tests (stress test of maximum workload)

  • Performance tests

  • Integrity tests (processing accuracy)

  • Recovery tests (to verify data integrity after failures)

  • Sociability tests (to determine whether the program will have conflicts with another program on the system)

  • Preliminary user acceptance testing (to approve the system functionality as delivered)

The test plan and results of each test need to be carefully documented. In environments where strong controls are desired, archiving test records for future reference is necessary. After all the technical tests have been completed to satisfaction, it is time for the most important test of all. The last test in the Development phase is user acceptance testing. This is when the project sponsor determines whether to accept the system. If accepted, the system moves into the Implementation phase.

Note

Software certification testing in phase 4 development measures the coded software against phase 2 specifications, phase 3 design, effective implementation of internal controls, and fitness of use for production.

Phase 4 Review and Approval

Once again, a phase review meeting is held. The phase 4 review focuses on the software being delivered by the programmers for the users. The Development phase has now concluded. The finished software is compared for compliance against the original objectives, requirements list, and design specifications. Evidence is presented from test results, which should indicate that the software is performing as expected. Plans for ongoing operation are compared to the previous gap analysis to uncover any remaining deficiencies. After all outstanding issues have been resolved, the plan is put before the chairperson for approval to proceed to the Implementation phase.

Auditor Interests in the Development Phase

As an auditor, your prime interest in the Development phase is to verify that a quality control process was used to develop an effective computer program. All internal control mechanisms should be present in the finished program. The programs should have undergone debugging and formal testing, with evidence from test results providing assurance of system integrity. Support documentation should have been created in conjunction with an operational support plan for production use. The finished software capabilities must be verified for compliance with the original objectives. The user must have accepted the finished computer program. And finally, management must have granted formal approval for the software to be implemented.

Phase 5: Implementation

The computer program is fully functional by the time it reaches phase 5. This phase focuses on final preparations for actual production use. Version control is a formal requirement to ensure that the right version of software is running for production.

Software Release and Patch Management

Computer software is authorized for distribution via a release process. Software is released from development and authorized to be installed for production use. Each vendor has their own release schedule.

Computer software releases fit one of the following profiles:

Major release

A significant change in the design or generation of software is known as a major release. Major releases tend to occur in the interval of 12 to 24 months.

Minor release or update

Updates are also known as minor releases. Their purpose is to correct small problems after the major release has been issued.

Emergency software fixes

These are known as program patches, or hot fixes. Emergency fixes should be tested prior to implementation. Every fix should undergo a pretest, even if the test is informal. Emergency software fixes may introduce new problems that are unexpected. Every emergency fix must undergo change control review to determine the following:

  • What to remediate

  • Whether the change should remain in use

The computer program is now a finished version ready for final acceptance testing and user training. The next step for implementation is to load the client's current data.

Data Conversion

A data conversion plan is developed to migrate existing data into the new system. Great care needs to be taken to prevent loading garbage data into the new system. A successful technique to prevent loading garbage is to reload selected portions of shared data directly from the latest source file. An example is reloading a manufacturing kit list directly from the latest engineering design. This would eliminate the migration of outdated information into the new system.

A list of data files eligible for migration is developed. Each file is verified against the system design requirements. If the file is required, procedures would be created to scrub (remove) outdated entries from each file. It is a common practice to hire a data entry service to assist in data conversion. Sometimes it is easier to re-create a file with minimal data, as opposed to the tedious job of grooming existing files. The programmers may write a data conversion utility to reformat existing files, such as a customer list, into the new system. A comprehensive data conversion plan is always required.
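
As a minimal sketch of the scrubbing idea, the Python fragment below filters a hypothetical legacy customer file: records failing simple validation rules are held for review instead of being loaded into the new system. The records and rules are invented for illustration.

    # Hypothetical legacy customer records awaiting migration.
    legacy_customers = [
        {"id": 101, "name": "Acme Ltd", "status": "active"},
        {"id": 102, "name": "",         "status": "active"},  # missing name
        {"id": 103, "name": "Old Corp", "status": "closed"},  # outdated entry
    ]

    def is_migratable(record):
        # Validation rules assumed for illustration only.
        return bool(record["name"]) and record["status"] == "active"

    to_migrate = [r for r in legacy_customers if is_migratable(r)]
    rejected = [r for r in legacy_customers if not is_migratable(r)]

    print("Migrating:", to_migrate)
    print("Held for review:", rejected)

Only the first record survives the filter; the other two are reported rather than silently carried into production.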

System Certification

Certification is a technical process of testing against a known reference. The system is tested to ensure that all internal controls are present and functioning correctly. The system certification is based on measuring compliance with a particular requirement. Systems used in the government are required to undergo a certification process before being placed in production use.

Common Criteria (ISO 15408)

The original U.S. evaluation criteria (Trusted Computer System Evaluation Criteria, or TCSEC) have been merged with the European criteria (Information Technology Security Evaluation Criteria, or ITSEC) to form an international set of common criteria for evaluating computer security. These common criteria have been adopted by ISO as Common Criteria standard 15408. CC is the nickname for the complete set of common criteria. Several countries have adopted the CC, including Canada, France, Germany, the Netherlands, the United Kingdom, and the United States. All the ISO member countries are expected to use 15408.

The CC brings the benefits of accumulated wisdom with a flexible approach to standardization and evaluation assurance. Flexibility is provided in this specification of secure products by using seven standardized evaluation assurance levels (EALs). Official testing is provided by an independent lab certified under the ISO 17025 standard for laboratories and testing facilities.

Within the CC is a well-defined set of IT security requirements for prospective products and systems. Here's how it works: A system to be evaluated is referred to as the target of evaluation (TOE). Each TOE has security threats, objectives, requirements, and a summary of functions to be measured. Every TOE contains security functions (TSF) to be relied upon in the enforcement of the TOE's desired security policy (TSP).

The grouping of evaluation test objectives is defined as the protection profile (PP). A variety of protection profiles have already been created for systems used as workstations, firewalls, network servers, secure databases, and so forth. A PP is intended to be reusable and effective in defining the security requirements for the system.

The party requesting evaluation simply picks a PP (protection desired), identifies the TOE (system to test), and pays for the tests to be performed to the appropriate EAL (assurance level 1 through 7). The goal is to make it easier for a vendor to advertise systems appropriate to the client's needs. Let's take a brief look at the elements to be tested.

TOE security functionality

The following is a sample list of the components used for security functionality. Each component represents a family of subcomponents required to obtain the EAL:

  • Security management features

  • Identification and authentication

  • User data protection

  • Communications with nonrepudiation

  • Cryptographic support

  • Audit

  • Privacy

  • Resource utilization

  • TOE access (sessions and access parameters)

  • Trusted paths/channels

  • Protection of TOE security functions

Evaluation of protection profiles and security targets

All PPs and their associated security target (ST) evaluations contain the following criteria, each with underlying subcomponents of security that must be evaluated. The following list is a quick summary:

  • Evaluation assurance

  • Configuration management to verify the TOE's current configuration at the time of testing. Changes would require retesting to maintain the EAL.

  • Secure system delivery installation and setup measures to ensure that the system is not compromised during these events.

  • Assurance maintenance

  • PP evaluation to demonstrate that requirements are consistent and technically sound.

  • Development of the target's security functionality (TSF).

  • Guidance documents for use by the users and system administrators.

  • Life cycle support for the remediation of flaws found by TOE users.

  • Security target evaluation to demonstrate that requirements are consistent and technically sound. This includes the TOE description, security environment, security objectives, and PP claims, the TOE security requirements, and the TOE summary security specification.

  • Formal vulnerability assessment to identify vulnerabilities through covert channel analysis, configuration analysis, and examination of the strength of security mechanisms with the identification of flaws introduced during the development of the TOE.

  • Tests demonstrating the coverage and depth of developer testing with requirements for independent testing.

Internal control standards require business systems to undergo a certification process. It may be an internal review or a formal review such as the Common Criteria. Every computer system and application should undergo a certification process prior to use in a production environment.

Note

You can find more information on system certification procedures in the U.S. Federal Information Security Management Act (FISMA) guide available through http://csrc.nist.gov and in the ISACA CObIT. Also visit www.commoncriteriaportal.org for information on system certification under the ISO 15408 Common Criteria. System certification is required by most regulations.

As a CISA, you will be required to undergo update and renewal training to keep your certification current. Existing information systems should also go through a recertification process to remain up-to-date. You should be concerned about systems that the customer has not certified for production, or systems for which the certification was not maintained and is now out-of-date.

System Accreditation

The next step after certification is accreditation. After passing the certification test, management determines how or where the system may be used. Accreditation is an administrative process based on management's comfort level with demonstrated performance or fitness of use (management acceptance). Management is responsible for accreditation of systems during the system's useful life cycle. The designated accreditation authority is a senior executive who will accept full responsibility for the consequences of operating the overall system (often the CIO or agency head). Accreditation is by site, type of use, or system.

Accreditation may be in the form of approval to operate in limited use for 90–180 days or (full) annual accreditation. The approved implementation may begin production use. Systems must be recertified and reaccredited annually.

User Training

Now it is time to train the users and system operators. Hopefully, the organization had some of its power users actively involved in prior phases, usually beginning with the phase 2 requirements work. If so, these power users can serve as instructors and mentors to the new system users. A user training plan is necessary to ensure that everyone receives appropriate training for their role. During the training process, each user should receive specific instructions on the new functions of the system. Care should be taken to explain which of the old procedures will no longer be used. The training plan needs to provide for ongoing training of new users.

Special training is required for the system custodians (system administrator, database administrator, and computer console operator). The custodians need to be trained for normal operations and emergency procedures unique to the system. After the people are trained, it is time to move the system into production use.

Go Live and Changeover

The new system has been running separately from production up to this point. A plan is necessary for switching production processing from the old system to the new system. This process is commonly described by the term changeover, cutover, or go live. The changeover can be a substantial challenge depending on the complexity of the environment. A comprehensive migration plan is required in order to be successful. It is imperative that risk management is used to select and sequence changeover plans.

You need to be aware of the following changeover techniques:

Parallel operation

The old and new systems are run in parallel, usually for an extended period of time. Dual operation allows time to compare the operational differences between the two systems. During parallel operation, software developers can fine-tune any software discrepancies. The primary advantage of parallel operation is the ability to validate the results obtained from the new system against the accuracy of the old system. With parallel operation comes the added burden of simultaneously supporting two major systems. At a future date, the old system will be brought to an idle state while the new system takes over all production processing. Depending on data retention requirements, the old system may still need to be operational for a number of years. The switch from parallel operation to single operation may be performed by using a phased changeover or hard changeover.
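
A common way to exploit parallel operation is a simple reconciliation report. The Python sketch below compares hypothetical period-end balances from the old and new systems and flags any account where they disagree; the account names and amounts are invented for illustration.

    # Hypothetical period-end balances produced by each system.
    old_system = {"ACCT-1": 1500.00, "ACCT-2": 320.50, "ACCT-3": 78.25}
    new_system = {"ACCT-1": 1500.00, "ACCT-2": 318.00, "ACCT-3": 78.25}

    # Flag every account where the two systems disagree.
    discrepancies = {
        account: (old_system[account], new_system.get(account))
        for account in old_system
        if old_system[account] != new_system.get(account)
    }

    print("Accounts to investigate:", discrepancies)  # {'ACCT-2': (320.5, 318.0)}

Developers work through the flagged accounts until the new system consistently matches the proven results of the old one.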

Note

Overall, parallel operation is an excellent technique with the lowest level of risk. Making changes in small doses is always advisable. Major failures during changeover can be a real career killer.

Phased changeover

In larger systems, converting to the new system in small steps or phases may be possible. This may take an extended period of time. The concept is best suited to either an upgrade of an existing system, or to the conversion of one department at a time. The phased approach creates a support burden similar to that of parallel operation. A well-managed phased changeover presents a moderate level of risk.

Hard changeover

In certain environments, executing an abrupt change to the new system may be necessary. This is known as a hard changeover, a full change occurring at a particular cutoff date and time. The purpose is to force migration of all the users at once. A hard changeover may be used after successful parallel operation or in times of emergency. One of the biggest concerns about a hard changeover is that it can cause major disruption of normal operations. For this reason, the hard changeover presents the highest level of risk. Risk mitigation activities are of the highest priority whenever the hard changeover technique is chosen.

Phase 5 Review and Approval

This is the last review meeting, and it is concerned with the implementation of a new system. The chairperson opens the meeting with the project sponsor present. The project manager makes a presentation of project updates and achievements. Progress is reported against the plan objectives. Attention then focuses on a review of outstanding engineering issues, system performance as realized in production use, and ongoing service and support plans. The final risk analysis is presented for management approval. After approval is obtained, the system is authorized for production use.

Note

The 2006 movie Man of the Year, starring Robin Williams with Christopher Walken and Laura Linney, is based on a fictional electronic election. During testing of a new electronic voting machine, a software flaw is discovered in the tally of votes.

The development manager ignores the programmer's warning and allows the system to be used in full production. A hidden system flaw results in an unlikely candidate winning the popular vote in error. Ultimately, the truth is discovered, and the voting machine company is ruined and publicly disgraced. Proper certification testing was not performed before the system was placed into production. This fictional story bears striking resemblance to news stories about actual flaws detected in electronic voting machines.

Auditor Interests in the Implementation Phase

The system should be installed and fully operational by the Implementation phase. Support documentation must be in place prior to the system entering production use. All of the appropriate personnel will have been trained to fulfill their roles. The system has completed a final user acceptance test. A production operating schedule should now be in use. The completed system will have undergone a technical certification process. Management then reviews the system's fitness of use for a particular task or environment and accredits the system for a specified use, by task or by site location.

You need to verify that appropriate quality control procedures have been executed in support of these objectives. You also need to verify that formal management approval was obtained before the system entered production use. Any deficiencies in management approval should be reported to the audit committee or project oversight.

Phase 6: Postimplementation

The sixth SDLC phase deals with project closure and the administrative process of verifying that the system meets the organizational objectives. A complete project management review is performed. Evidence is checked to verify that the system was implemented as originally designed, with all necessary internal controls present. The results of actual use are compared to the anticipated benefits originally cited in phase 1. The objective is to ensure that these benefits were actually realized by the finished system implementation. Performance measurements are reviewed. A celebration may be in order if the performance exceeded original expectations. Otherwise, a remediation plan may be created to improve current performance.

Additional phase 6 activities will include the following:

  • Continuous monitoring to ensure that the controls are still effective. Periodic testing and reporting are necessary.

  • Annual review of new requirements. This includes changes in legal regulations, system connections, and patterns of use. Consider the impact of HIPAA requirements mandating increased confidentiality by protecting data from access by unauthorized internal users. The PCI regulations force truncation of account numbers and mandate use of encryption. It's interesting how Amazon.com changed its software to be compliant, yet most hotels violate PCI regulations by retaining card numbers on file and improperly handling paper records by retaining the full account number plus card identification code (three-digit CID number). The associated hotel operating procedures instruct staff to continue violating PCI in spite of the enormous consequences. Just because it used to be done that way does not mean it should continue to be done that way.

  • Application system review. This includes investigating risks related to system availability (uptime, downtime) and to integrity issues such as incomplete or unauthorized transactions. This is a major area of interest for the auditor. Integrity and security are temporary because of the constant changes made by the IT staff and by vendor updates. Whether small or large, a change will always introduce another set of issues. This is referred to as the law of unintended consequences.

  • System update. Will the newer version of software be installed? Changing versions of the operating system or the application may be a significant project. The updates need to undergo a full certification (recertification) and accreditation process prior to production implementation. Smart CIOs and IT managers have already implemented separate systems for testing and production. The costs were easily justified by comparing the cost of downtime against doing it right the first time.

  • Environment changes. Changes in physical controls and personnel can have a major impact on overall control. Administrative policies may need to be added or refined to accommodate changes to the physical area. This could include overtaxing the generator capacity, testing aging batteries on the UPS, or other repairs that have not been performed. More training may be needed to keep the staff up-to-date. Staff rotation is just one manner in which special skills may become stale or lost.

  • Replacement or migration to new systems.

Phase 6 Review Meetings

Periodic reviews are necessary to verify that the system is maintained in a manner that supports the original objectives and controls. The review should occur at least annually or following a significant change in the business, regulatory climate, or application itself. You may need to utilize the services of a professional expert to conduct the postimplementation review.

Note

You need to remain aware of the conditions necessary to safely rely on using the work of others. The client will frequently request the auditor to use reports from internal staff in order to reduce audit costs. We discussed this issue in Chapters 1 and 2.

Auditor Interests in the Postimplementation Phase

As an auditor, you review evidence indicating that the system objective and requirements were achieved. You should pay attention to users' overall satisfaction with the system. You should review evidence indicating that a diligent process of support and maintenance is in use. In this phase, you review system audit logs and compare them to operational reports.

Auditors want to know whether support personnel are actively monitoring for error conditions. A process of incident response and change control should be in use. Management must demonstrate that they are aware of system limitations with regard to the changing requirements of the organization. Management needs to be cognizant of any deficiencies requiring remediation.

In addition, management and the audit committee should remain aware of any external issues that may dictate system modification or removing the application from service. Examples include changes in regulatory law governing minimum acceptable internal controls. A perfect example is the current trend for strong data encryption to be implemented to protect the privacy of individuals. Previously the concerns were focused on using encryption during external data transmission. The latest requirement is for data in databases and on backup tapes to be stored in encrypted form. The loss of unencrypted data will soon carry harsh penalties.

Phase 7: Disposal

This is the last phase of the SDLC. After the system has been designated to be removed from service, the security manager needs to perform an audit of the system components and remaining data. The goal of this final stage is to prevent accidental loss. Objectives include the following points:

  • Information preservation. All data and programs need to be archived for long-term storage.

  • Media sanitization. After the standing data has been removed, the storage media are sanitized and the system is decommissioned for shutdown.

  • Hardware and software disposal. Policies and procedures need to exist to ensure that every disposal is properly managed. No one should profit from the disposal of assets. It's important that the system shutdown does not violate document retention requirements.

A formal authorization from the system owner is required before initiating the disposal phase. Accounting will need to transfer the assets out of inventory. The system owner, accreditation manager, and custodian are to sign the official order removing the system from service. This is the evidence that each has performed their appropriate duties.

Auditor Interests in the Disposal Phase

Evidence should exist documenting the disposal process, along with records of prior disposals. The objective is to determine whether the process was correctly followed. A quick check of the asset tags and financial records will help determine the truth. Look to see whether the disposed asset is still shown to hold value in the accounting records. Is it still on the books? Check the auditee's plans to preserve the old data. Did the auditee do a good job? Last of all, the auditor looks for evidence that the media was properly sanitized before disposal.

Overview of Data Architecture

A chapter on software development would not be complete without a discussion of the different types of data architecture. The selection of data architecture depends on multiple influences, often including the desires and objectives of the system designer. This section focuses on the fundamentals of data architecture.

Databases

A database is simply an organized method for storing information. Early databases were composed of index cards. Some of you may recall using the manual card catalog at the local library to look up the location of a particular book. Later, the library's manual card catalog system was automated with a computer database. Data may be organized into a table of rows and columns, similar to an Excel spreadsheet.

Databases are designed by using one of two common architectures:

DODB

A data-oriented database (DODB) contains data entries of a fixed length and format. The information entered into a data-oriented database is predictable.

OODB

An object-oriented database (OODB) does not require a fixed length, nor a fixed format. In fact, the object-oriented database was designed for data of an unpredictable nature.

Note

You may find that some people refer to the two common database architectures as a data-oriented structured database (DOSD) and an object-oriented structured database (OOSD).

Let's start the discussion with an overview of the data-oriented database.

Data-Oriented Database

The first type of database is designed around data in a predefined format, that is, numbers or characters of a particular length. A perfect example is the typical web form or Excel spreadsheet. The DODB is the simplest type of database to create.

For this example, we would like to start with a simple database for client entertainment. Say that you have a few key clients to entertain. Your firm wants to ensure that you build rapport by inviting the client to join you in their favorite activities whenever possible.

The first step is to define the data to be recorded in the database. In the SDLC model, this would be part of phase 2, the Requirements Definition phase. Follow along by using Figure 5.14 as we explain the key points.

Let's start by defining a database table of rows and columns to hold the clients' contact information. The first table is named client_table. This will hold the name, address, phone number, and email address of every client.

Next, build a table for each location where you may take the client to be entertained. This is called locations_table. We have added a space to record the average price for this location and a space to record the specialty of the house.

A third table is created to keep track of all of the favorites: favorites_table could be used to record favorite food, a game such as billiards, sporting events, and so forth. One of the objectives in the DODB is to divide information into multiple tables that are relatively static. This allows the system to perform a basic search very fast and not have to process all the data at once. The standardization and removal of duplicates is referred to as database normalization.

Now you have your tables ready to store information. The next step is to link tables together with a referential link, or relation. This is where the term relational enters into the description of the database: an item of data in one table relates to data contained in a separate table. Every entry in the database must have at least one required item to show that the entry actually exists. For example, an account number or a person's name would be required for each entry in the database, even if you don't have all the information. This single required entry is referred to as the primary key. Data items used to link two tables are referred to as foreign keys; the idea is that the other data is foreign to the first table. A data item that can be used to search for (look up) an entry is called a candidate key. The purpose of using the term key is to illustrate that it would be impossible to unlock the information unless we know what to use as the key.

To be usable, a database must also have referential integrity. This means that data is valid across the linked entries (keys) in two tables. Take a look at Figure 5.15, and you will notice the reference lines drawn between client ID and location ID. This diagram is a primitive entity-relationship diagram (ERD).

Figure 5.14. Example of client entertainment database

Figure 5.15. Example database showing data relationships
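
The table-and-key structure just described can be sketched in a few lines of code. The following example uses SQLite from the Python standard library; the table and column names follow the client entertainment example but are illustrative assumptions, not a prescribed schema. The final insert shows referential integrity being enforced when a foreign key points to a client that does not exist.

    # A minimal sketch of the client entertainment database, assuming SQLite.
    import sqlite3

    conn = sqlite3.connect(":memory:")          # throwaway, in-memory database
    conn.execute("PRAGMA foreign_keys = ON")    # ask SQLite to enforce referential integrity

    # Each table has a primary key: the single required, unique entry per row.
    conn.execute("""CREATE TABLE client_table (
        client_id   INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        address     TEXT,
        phone       TEXT,
        email       TEXT)""")

    conn.execute("""CREATE TABLE locations_table (
        location_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL,
        avg_price   REAL,
        specialty   TEXT)""")

    # favorites_table links the other two tables by using foreign keys.
    conn.execute("""CREATE TABLE favorites_table (
        favorite_id INTEGER PRIMARY KEY,
        client_id   INTEGER NOT NULL REFERENCES client_table(client_id),
        location_id INTEGER NOT NULL REFERENCES locations_table(location_id),
        activity    TEXT)""")

    conn.execute("INSERT INTO client_table VALUES (1, 'Samantha', '1109 Main Ave', '555-0100', 's@example.com')")
    conn.execute("INSERT INTO locations_table VALUES (10, 'Harbor Grill', 45.00, 'Seafood')")
    conn.execute("INSERT INTO favorites_table VALUES (100, 1, 10, 'Billiards')")

    # Referential integrity: a favorite pointing at a nonexistent client is rejected.
    try:
        conn.execute("INSERT INTO favorites_table VALUES (101, 99, 10, 'Golf')")
    except sqlite3.IntegrityError as err:
        print("Rejected:", err)                 # FOREIGN KEY constraint failed

Splitting the clients, locations, and favorites into separate tables with no duplicated data is the normalization described earlier.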

Another way to view the database is to consider a box of index cards. Each entry is the equivalent of a separate index card. The box of index cards is referred to as the table. A table is made up of rows and columns, like an Excel spreadsheet. Computer programmers may use the term tuple in place of the word row. Figure 5.16 shows the database row, or tuple, as it would appear on index cards.

Figure 5.16. Database row, also known as a tuple

The actual database displays its contents as rows and columns. It is also common to hear the term attribute as a synonym for a database column. Figure 5.17 shows the columns, or attributes, as they would appear on the computer screen.

Figure 5.17. Database columns, also known as attributes

In the illustration, you can see that the ID number is used as a unique identifier (primary key) for each entry. Using a unique ID number allows duplicate names to appear within the database. This is valuable if you have the same company listed with multiple shipping addresses. The unique ID number also permits a name to be updated without any headaches. A common example is to change a maiden name to a married name, or vice versa as the case may be.

In summary, the DODB is designed to be used when the structure and format of your data is well known and predictable. What about data whose structure and format is unpredictable? What about a database that stores documents, graphics, and music files simultaneously? Well, that is the very challenge that led programmers to develop the object-oriented database.

Object-Oriented Database

In a data-oriented database, the program procedures and data are separate. An object-oriented database (OODB) is the opposite. In an OODB, the data and program method are combined into an object. Think of programmed objects as tiny little people or animals with their own way of doing things. Each programmed object has its own data for reference and its own method of accomplishing a required task. Figure 5.18 shows the basic internal design of program objects.

Figure 5.18. Concept overview of program objects

The number one advantage of using programmed objects is that you can delegate work to another object without having to know the specific procedure or characteristics in advance. An example is the computer display settings in the Microsoft operating system. Microsoft Windows XP and Office are examples of object-oriented programs. When Microsoft Word was written, for example, the program did not need to know the details of the display screen. The Word program would simply delegate screen output to an object specified by the screen display setting. A configuration file would exist that contains the setting SET DISPLAY=vendors_device_driver. The hardware manufacturer for the display would write an object, or driver, to paint the image on the screen. The whole object-oriented design lends a great deal of flexibility for modular change.

Object-oriented programming is extremely powerful, and the functional design can be confusing to a novice. Objects are grouped together in an object class. An object class is quite similar to a particular class of economy automobiles or class of luxury automobiles, for example. The reference to class indicates the object's position in the hierarchy of the universe. Figure 5.19 shows an example of object classes.

Figure 5.19. Example of object classes
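
The following short sketch, written in Python purely for illustration, captures both ideas: an object bundles its own data with its own methods, and it can delegate work (such as painting the screen) to another object drawn from a class hierarchy without knowing that object's internal procedure. The class and method names are invented for the example and do not represent any actual vendor interface.

    # A minimal sketch of objects, classes, and delegation (illustrative names only).

    class DisplayDriver:                      # the object class (parent)
        def paint(self, text):
            raise NotImplementedError

    class VendorLCDDriver(DisplayDriver):     # one member of the class hierarchy
        def paint(self, text):
            print(f"[LCD] {text}")

    class VendorCRTDriver(DisplayDriver):     # another member of the same class
        def paint(self, text):
            print(f"[CRT] {text}")

    class WordProcessor:
        """An object holding its own data (the document) and its own methods."""
        def __init__(self, display: DisplayDriver):
            self.document = []                # data kept inside the object
            self.display = display            # the object that work is delegated to

        def type_line(self, line):            # a method acting on the object's data
            self.document.append(line)

        def show(self):
            # Delegation: the word processor never needs to know how the screen
            # is painted; it simply hands the work to the display object.
            for line in self.document:
                self.display.paint(line)

    app = WordProcessor(display=VendorLCDDriver())
    app.type_line("Quarterly audit findings")
    app.show()                                # prints: [LCD] Quarterly audit findings

Swapping VendorLCDDriver for VendorCRTDriver changes the screen behavior without touching the word processor, which is the modular flexibility described above.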

Database Transaction Integrity

Transaction management refers to the computer program's capability to deal with any failure in the logical data update operations used for a particular transaction. Integrity could be damaged if an incomplete transaction were permanently recorded into the database. The properties that protect against this are commonly referred to as the ACID model for database integrity. ACID stands for atomicity, consistency, isolation, and durability:

  • Atomicity refers to the transaction being "all or nothing." On the failure of a transaction, the change is backed out of the database, and the data is restored to its original state of consistency.

  • Consistency means that every completed transaction leaves the database in a valid state, with all defined rules and relationships (such as referential integrity) still intact.

  • Isolation means that each transaction operates independently of all others. A transaction must finish before another transaction can modify the same data.

  • After a transaction is completed, the data must remain. This is referred to as durability.

This capability is based on a transaction log used with a before-image journal and an after-image journal. The journals act as a temporary record of work in progress. The version of the database entry before the update is recorded in the before-image journal; the changes made are held in the after-image journal. The transaction can be reversed (undone) until it is actually committed (written) to the master file. Once committed, the transaction is deleted from the journals. A real-world example can be found in the redo and back-out capabilities of the MySQL Max database. Many databases use a transaction processing monitor (TP monitor) to ensure that database activity does not overload the processing capacity of the available hardware.
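
The all-or-nothing behavior of atomicity, together with the ability to undo work before it is committed, can be sketched with a few lines of Python using the standard library's SQLite module. The account table and amounts are invented for the example; an actual production database would add its own journaling and perhaps a TP monitor behind the scenes.

    # A minimal sketch of an atomic (all-or-nothing) transaction, assuming SQLite.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
    conn.execute("INSERT INTO account VALUES (1, 500.0), (2, 0.0)")
    conn.commit()                              # durable starting state

    try:
        # Both updates belong to one logical transaction: a transfer of funds.
        conn.execute("UPDATE account SET balance = balance - 200 WHERE id = 1")
        raise RuntimeError("simulated failure between the two halves of the transfer")
        conn.execute("UPDATE account SET balance = balance + 200 WHERE id = 2")
        conn.commit()                          # only a commit makes the change permanent
    except Exception:
        conn.rollback()                        # back out the work in progress

    # The database is returned to its original, consistent state.
    print(conn.execute("SELECT id, balance FROM account").fetchall())
    # [(1, 500.0), (2, 0.0)]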

Decision Support Systems

Advancements in computer programming technology and databases have led to the creation of decision support systems. A decision support system (DSS) is a database that can render timely information to aid the user in making a decision. There are three basic types of decision support systems:

Reference by context

This type of primitive decision support system supplies the user with answers based on an estimated level of relevance. The overall value is low to moderate.

Colleague, or associate, level

The colleague level provides support for the more tedious calculations but leaves the real decisions to the user.

Expert level

It has been reported in graduate studies that the mind of an average expert contains more than 50,000 points of data. By comparison, a colleague or associate might possess only 10,000 points of data. The expert system is usually written by capturing specialized data from a person who has been performing the desired work for 20 or 30 years. This type of information would take a human a significant amount of time to acquire. It is also possible that the events are so far apart that it would be difficult to obtain proficiency without the aid of a computer.

Every decision support system is built on a database. The data in the database is retrieved for use by the program rules, also known as heuristics, to sort through the knowledge base in search of possible answers. The heuristic program rules may be based on fuzzy logic, using estimation, means, and averages to calculate a likely outcome. Programmers refer to the process as fuzzification or defuzzification, depending on whether the average is being sharpened with a stratified mean or derated. The meaning of information in the knowledge base can be recorded into a linkage of objects and symbols known as a semantic network. Another technique is to use weighted averages in program logic designed to simulate the path of synapses in the human brain.
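
As a purely illustrative sketch of fuzzification and defuzzification, the following Python fragment assigns fuzzy membership values to a reading and then derives a single crisp answer as a weighted average of the rule outputs. The membership ranges and rule values are invented for the example and do not represent any particular expert system.

    # A minimal sketch of fuzzy logic: fuzzify a reading, then defuzzify to one answer.

    def triangular(x, low, peak, high):
        """Degree of membership (0.0 to 1.0) in a triangular fuzzy set."""
        if x <= low or x >= high:
            return 0.0
        if x <= peak:
            return (x - low) / (peak - low)
        return (high - x) / (high - peak)

    temperature = 22.0

    # Fuzzification: how strongly does the reading belong to each label?
    memberships = {
        "cold": triangular(temperature, -10, 0, 20),
        "warm": triangular(temperature, 10, 22, 35),
        "hot":  triangular(temperature, 25, 40, 60),
    }

    # Each heuristic rule recommends a fan speed (percent) for its label.
    rule_outputs = {"cold": 0, "warm": 50, "hot": 100}

    # Defuzzification: weighted average of the rule outputs by membership degree.
    total = sum(memberships.values())
    fan_speed = sum(memberships[k] * rule_outputs[k] for k in memberships) / total

    print(memberships)                # {'cold': 0.0, 'warm': 1.0, 'hot': 0.0}
    print(f"{fan_speed:.1f}% fan")    # 50.0% fan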

Let's look at the common terminology used with decision support systems:

Data mining

After the database and rules are created, the next step in the operation of a decision support system is to drill down through the data for correlations that may represent answers. The drilling for correlations is referred to as data mining. To be successful, it would be necessary to mine data from multiple areas of the organization.

Data warehouse

It is the job of the data warehouse to accomplish the feat of combining data from different systems. Data is captured from multiple databases by using image snapshots triggered by a timer. The timer may be set to capture data daily, weekly, or monthly depending on the needs of the system architect.

Data mart

The data mart is a repository of the results from data mining the warehouse. You can consider a data mart the equivalent of a convenience store. All of the most common requests are ready for the user to grab. A decision support system retrieves prepackaged results of data mining and displays them for the user in a presentation program, typically a graphical user interface (GUI).

Figure 5.20 shows the basic hierarchy of the databases loading the data warehouse, which is mined to create a data mart.

Figure 5.20. Design of data warehouse and data mart
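
The flow from operational snapshots into a warehouse, through data mining, and into a data mart can also be sketched briefly in Python. The field names and figures here are invented for illustration; a real warehouse would use far larger stores and scheduled snapshot jobs.

    # A minimal, illustrative sketch of warehouse -> mining -> mart (invented data).
    from collections import defaultdict

    # Snapshots captured from two operational databases.
    sales_db   = [{"region": "East", "amount": 1200}, {"region": "West", "amount": 800}]
    returns_db = [{"region": "East", "amount": 150},  {"region": "West", "amount": 40}]

    # Data warehouse: combine the snapshots into a single historical store.
    warehouse = ([dict(row, source="sales") for row in sales_db]
                 + [dict(row, source="returns") for row in returns_db])

    # Data mining: drill through the warehouse for a correlation of interest,
    # in this case net revenue per region.
    net_by_region = defaultdict(float)
    for row in warehouse:
        sign = 1 if row["source"] == "sales" else -1
        net_by_region[row["region"]] += sign * row["amount"]

    # Data mart: prepackaged results, ready for the DSS presentation layer to grab.
    data_mart = {"net_revenue_by_region": dict(net_by_region)}
    print(data_mart)   # {'net_revenue_by_region': {'East': 1050.0, 'West': 760.0}}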

Presenting Decision Support Data

The information presented from the data mart could indicate correlations of significance for the system user. Senior executives may find this information extremely useful in detecting upcoming trends or areas of concern throughout the organization. Keep in mind, the primary purpose of the decision support system is to give the senior-level manager timely information that will aid in making effective decisions.

The next step up from decision support systems is artificial intelligence.

Using Artificial Intelligence

Artificial intelligence (AI) is the subject of many technology dreams and some horror movies. The concept is that the computer has evolved to the level of being able to render its own decisions. Depending on your point of view, this may be good or bad. Artificial intelligence is useful for machines in a hostile environment. The Mars planetary rover requires a degree of artificial intelligence to ensure that it can respond to a hazard without waiting for a human to issue instructions.

Now that the database has been developed, the next concern is to ensure that the transactions are processed correctly. Let's move along into a discussion of program architecture.

Program Architecture

Computer programs may be written with an open architecture or a proprietary (also known as closed) design. The software architect makes this decision.

The open system architecture is founded on well-known standards and definitions. The primary advantage of open architecture is flexibility: computer software can be updated and modified by using components from multiple sources, and the design promotes the use of best-of-breed programs. The disadvantage is the potential for a hodgepodge of unstructured programs. For a client, the open system architecture reduces dependence on a particular vendor.

A closed system architecture contains methods and proprietary programming that remain the property of the software creator. Most of the program logic is hidden from view or stored in encrypted format to prevent the user from deciphering internal mechanisms. Most commercial software products are a closed, proprietary system with industry standardized program interfaces for data sharing with other programs—in essence, closed architecture with open architecture interfaces. The advantage is that the user can still share data between programs. Another advantage is that the vendor can lock in the customer to their product. The disadvantage is that the customer may be locked in to the vendor's product.

Centralization versus Decentralization

Every organization will face the challenge of determining whether to use a centralized database or a distributed database application. The centralized database is easier to manage than a distributed system. However, the distributed system offers more flexibility and redundancy, and that additional flexibility and redundancy carry higher implementation and support costs.

The decision of centralization versus decentralization would have been addressed by the steering committee and captured in the requirements gathered during the SDLC Requirements Definition phase (phase 2). Let's consider the requirements for electronic commerce.

Electronic Commerce

Electronic commerce, known worldwide as e-commerce, is the conduct of business and financial transactions electronically across the globe. This concept introduces the challenge of maintaining confidentiality, integrity, and availability every second of the entire year. An additional challenge is ensuring regulatory compliance for each type of transaction that may occur over the e-commerce system.

Let's look at a few transactions, for example:

Business-to-business (B-to-B)

Regular transactions between a business and its vendors. This could include purchasing, accounts payable, payroll, and outsourcing services. This type of transaction is governed by business contracts in accordance with federal law.

Business-to-government (B-to-G)

The online filing of legal documents and reports. In addition, this includes purchasing and vendor management for the products and services used by the government. This type of transaction is governed by a variety of government regulations. An example is the U.S. Central Contractor Registration system (CCR). Vendors doing business with the U.S. government are required to maintain their company profiles in the CCR database.

Business-to-consumer (B-to-C)

Direct sales of products and services to a consumer. B-to-C also includes providing customer support and product information to the consumer. The payment transaction in this type of environment may be governed by banking, privacy, and credit authorization laws. Business-to-consumer applications require additional logging because the normal paper trail does not exist. The auditor will be interested in how the transactions are monitored and reviewed. Authorizations for processing online payments will require special security measures.

Business-to-employee (B-to-E)

The online administration of employee services, including payroll and job benefits. This type of transaction is governed by federal employment regulations and privacy regulations.

Figure 5.21 shows the common e-commerce avenues in use today. Each of these should be the subject of an IS audit.

E-commerce poses a number of challenges to security. Because of the level of risk, security should weigh heavily in any considerations of reducing protection for convenience. Strong internal controls are mandatory for e-commerce systems.

Note

We discuss data security in Chapter 7, "Information Asset Protection."

Figure 5.21. E-commerce programs

Summary

This chapter covered IT governance in the System Development Life Cycle. The primary objective of this governance is to ensure that systems are developed via a methodical process that aligns business requirements to business objectives. During this chapter, we touched on standards used in the development of computer software. This chapter included an introduction to the design of databases, program architecture, and e-commerce.

Throughout the entire System Development Life Cycle are a series of processes to ensure control and promote quality. It is the IS auditor's job to determine whether the organization has fulfilled its duties of leadership and control. The purpose of this chapter is to provide you with a basic understanding of the concepts and terminology used in software development.

Exam Essentials

Evaluate the business case for new systems.

You need to evaluate the requirements for a new system to ensure that it will meet the organization's business goals. You should understand how critical success factors are developed and risks are identified.

Evaluate risk management and project management practices.

You need to review the evidence of the organization's project management practices and risk mitigation practices. The objective is to determine whether the solution was cost-effective and achieved the stated business objectives. A formal selection process should be in use and clearly documented.

Conduct regular performance reviews.

Each project should undergo a regular performance review to verify that it is conforming to planned expectations. The review process should be supported by formal documentation and accurate status reporting. Management oversight review should be in use when plans deviate, assumptions change, or the scope of the project substantially changes.

Understand the practices used to gather and verify requirements.

The organization may use a steering committee with the assistance of various managers to identify critical success factors. Scenario exercises can be used to assist in developing requirements for planning. Additional requirements may be obtained from the business's internal operations, the specific business market, customer commitments, and other sources of information.

Know the system development methodology being used.

You need to review the thoroughness and maturity of processes by which all systems (including infrastructure) are developed or acquired.

Know the system development tools, including their strengths and weaknesses.

You need to understand the advantages and disadvantages of traditional programming, Agile, and RAD methodologies. You are expected to understand that 4GL programming languages do not build the necessary business logic without the involvement of a skilled programmer. You are expected to have a basic understanding of the differences between data-oriented programming and object-oriented programming.

Understand quality control and the development of a test plan.

A quality control process should be in use throughout the entire project and system life cycle. Formal testing should occur in accordance with a structured test plan designed to verify software logic, defects, transaction integrity, efficiency, controls, and validation against requirements.

Be familiar with the internal control mechanisms in place and working.

All systems are required to have functioning internal control mechanisms. You need to evaluate the effectiveness of the selected safeguards. Evidence should exist that each control was planned during the system specification phase and that the controls were implemented during development and tested for effectiveness.

Understand the difference between certification and accreditation.

Every system should undergo acceptance testing, followed by formal technical certification testing for production use. After completing technical certification, the system should be reviewed by management for accreditation based on fitness of use. Systems should be recertified on a regular basis to ensure that they meet new demands of evolving requirements.

Be familiar with ongoing maintenance and support plans in use.

You need to evaluate the process of ongoing support and maintenance plans. The intention is to ensure that the plans fulfill the organizational objectives. You verify that the internal control process is in use for authorizing and implementing changes. System changes should undergo a regression test to ensure that no negative effects were created as a result of the change.

Know how to conduct postimplementation reviews.

Every system should undergo a postimplementation review. The purpose is to compare actual deliverables against the original objectives, and to compare performance to the project plan. Regular reviews should occur throughout the system's usable life cycle, preferably on an annual basis.

Know the various programming terms and concepts.

You need to have a working knowledge of the terminology and concepts used in the development of computer software.

Review Questions

  1. The advantages of using 4GL software applications include which of the following?

    1. Automatically generates the application screens and business logic

    2. Includes artificial intelligence using fuzzy logic

    3. Reduces application planning time and coding effort

    4. Reduces development effort for primitive functions but does not provide business logic

  2. The best definition of database normalization is to

    1. Increase system performance by creating duplicate copies of the most accessed data, allowing faster caching

    2. Increase the amount (capacity) of valuable data

    3. Minimize duplication of data and reduce the size of data tables

    4. Minimize response time through faster processing of information

  3. Which of the following statements is true concerning the inference engine used in expert systems?

    1. Makes decisions using heuristics

    2. Contains nodes linked via an arc

    3. Used when a knowledge base is unavailable

    4. Records objects in a climactic network

  4. An IT steering committee would most likely perform which of the following functions?

    1. Explain to the users how IT is steering the business objectives

    2. Issue directives for regulatory compliance and provide authorization for ongoing IT audits

    3. Facilitate cooperation between the users and IT to ensure that business objectives are met

    4. Ensure that the business is aligned to fulfill the IT objectives

  5. The Software Engineering Institute's Capability Maturity Model (CMM) would best relate to which of the following statements?

    1. Measurement of resources necessary to ensure a reduction in coding defects

    2. Documentation of accomplishments achieved during program development

    3. Relationship of application performance to the user's stated requirement

    4. Baseline of the current progress or regression

  6. Which of the following best describes a data mart?

    1. Contains raw data to be processed

    2. Used in place of a data warehouse

    3. Provides a graphical GUI presentation

    4. Stores results from data mining

  7. Object-oriented databases (OODBs) are designed for data that is ________.

    1. Predictable

    2. Consistent in structure

    3. Variable

    4. Fixed-length

  8. What does the term referential integrity mean?

    1. Transactions are recorded in before-images and after-images.

    2. It's a valid link between a data entry contained in two tables.

    3. It's a completed tuple in the database.

    4. Candidate keys are used to perform a search.

  9. Which of the following statements best explains a program object in object-oriented programming?

    1. It contains methods and data.

    2. Methods are stored separate from data.

    3. It contains 100 percent of all methods necessary for every task.

    4. It does not provide methods.

  10. What is the primary objective of postimplementation review?

    1. Recognition for forcing an installation to be successful

    2. Authorize vendor's final payment from escrow

    3. Conduct remedial actions

    4. Determine that its organizational objectives have been fulfilled

  11. What is the most important concern regarding the RFP process?

    1. Vendor proposals undergo an objective review to determine alignment with organizational objectives.

    2. The vendor must agree to escrow the program code to protect the buyer in case the vendor organization ceases operation.

    3. The RFP process requires a substantial commitment as opposed to a request for information (RFI).

    4. The RFP planning process is not necessary for organizations with internal programming capability.

  12. Which SDLC phase uses Function Point Analysis (FPA)?

    1. SDLC phase 3: System Design

    2. SDLC phase 4: Development

    3. SDLC phase 1: Feasibility Study

    4. SDLC phase 5: Implementation

  13. Which of the following statements is true concerning regression testing?

    1. Used to observe internal program logic

    2. Verifies that a change did not create a new problem

    3. Provides testing of black-box functions

    4. Compares test results against a knowledge base

  14. Which of the following migration methods provides the lowest risk to the organization?

    1. Phased

    2. Hard

    3. Parallel

    4. Date specified

  15. When is management oversight of a project required?

    1. If time, scope, or cost vary more than 5 percent from the estimate

    2. When the feasibility study is inconclusive

    3. To verify that total program benefits met anticipated projection

    4. When major changes occur in assumptions, requirements, or methodology

  16. What are the advantages of the integrated development environment (IDE)?

    1. Generates and debugs program code

    2. Eliminates the majority of processes in SDLC phase 2

    3. Prevents design errors in SDLC phase 3

    4. Eliminates the testing requirement in SDLC phase 4

  17. What is the difference between certification and accreditation?

    1. Certification is a management process, and accreditation is a technical process.

    2. No difference; both include technical testing.

    3. Certification is a technical test, and accreditation is management's view of fitness for use.

    4. Certification is about fitness of use, and accreditation is a technical testing process.

  18. Which of the following development methodologies is based on knowledge in someone's head, as opposed to traditional documentation of requirements?

    1. System Development Life Cycle (SDLC)

    2. Program Evaluation Review Technique (PERT)

    3. Rapid Application Development (RAD)

    4. Agile

  19. What is the IS auditor's primary purpose in regard to life cycle management?

    1. To verify that evidence supports the organizational objective and that each decision is properly authorized by management

    2. To verify that all business contracts are properly signed and executed by management

    3. To verify that internal controls are tested prior to implementation by a third-party review laboratory

    4. To verify that a sufficient budget was allocated to pay for software development within the allotted time period

  20. Which of the following design techniques will document internal logic functions used for data transformation?

    1. Entity-relationship diagram

    2. Flowchart

    3. Database schema

    4. Function Point Analysis

  21. Which of the following principles includes the concept of all or nothing?

    1. Transaction processing monitor

    2. Atomicity, consistency, isolation, and durability

    3. Runtime processing

    4. Referential integrity

  22. Software development uses several types of testing to ensure proper functionality. Which of the following types of testing is used to test functionality on commercially compiled software?

    1. White-box

    2. Code review

    3. Black-box

    4. Crystal-box

  23. Programming software modules by using a time-box style of management is also referred to as the ________ method. The purpose is to force rapid iterations of software prototypes by small teams of talented programmers.

    1. Agile

    2. Lower CASE

    3. Rapid Application Development (RAD)

    4. Fourth-generation (4GL)

  24. How long does full system accreditation last?

    1. Six months

    2. One year

    3. Nine months

    4. As long as the system is used

  25. During the SDLC, several risks can become real problems. Which of the following is the greatest concern to the auditor?

    1. User acceptance testing lasted only 1 hour.

    2. The depth and breadth of user operation manuals is not sufficient.

    3. The project exceeded a 14 percent cost overrun from the original budget.

    4. User requirements and objectives were not met.

  26. What is the terminology that describes the coding of a program by using a template inside of an integrated software development environment?

    1. Pseudocoding

    2. Macro-coding

    3. Compiled coding

    4. Object coding

  27. What is the real issue regarding software escrow?

    1. The vendor must use a subcontractor for safe storage of the original development software.

    2. The software contains intellectual value that is conveyed to the client.

    3. The client is entitled to the benefit of only using the software and not owning it, unless they pay more money.

    4. Commercial software is kept in escrow in case the vendor sells the rights to another vendor.

  28. How can the price of designing and managing a quality program be justified?

    1. Price of failure

    2. Product profit margin

    3. Preventing regulatory changes and fines

    4. Using the 100-point rule

  29. Which of the following is the best method of reviewing the logic used in software written in programming script?

    1. Black-box test

    2. Regression test

    3. Crystal-box test

    4. User acceptance test

  30. Which of the software development methods includes planning activities in phase 1 of the SDLC model?

    1. Agile

    2. Rapid Application Development (RAD)

    3. Upper CASE tools

    4. Project management

  31. What is the primary purpose of the reviews at the end of each phase in the SDLC?

    1. Approval for the funding to continue development

    2. Approval by management to proceed to the next phase or possibly kill the project

    3. Approval of the final design

    4. Provide the auditor with information about management's decision for regulatory compliance

  32. What is the principal issue concerning users programming with a fourth-generation language (4GL)?

    1. Creates nice-looking screens and data buckets without logical data transformation procedures needed in business applications

    2. Provides an advantage with embedded database hooks to minimize the cost of hiring a professional software developer

    3. Provides the user with drag-and-drop functionality to build their own programs

    4. Uses the revolutionary development approach to reduce the cost and time of traditional development

  33. What should be the basis for management's decision to buy available software or to build a custom software application?

    1. Cost savings by switching to a recognized best-in-class application used by others in the industry

    2. Converting from internal custom processes to how the new software operates in order to save money by avoiding the cost of customization

    3. Competitive advantage of using the same software as everyone else

    4. Data from the feasibility study and business specifications

  34. When does software certification testing actually occur in the SDLC model?

    1. Phase 3 (System Design)

    2. Phase 3 (System Design) and phase 4 (Development)

    3. Phase 4 (Development) and phase 5 (Implementation)

    4. Phase 5 (Implementation)

  35. What is the purpose of using international standards such as ISO 15489 and ISO 9126:2003 with the SDLC?

    1. Input as starting specifications for phase 2 requirements

    2. International reference for starting a quality assurance program

    3. Provides guidance for use in phase 4 development

    4. Lowers the initial cost of software development

Answers to Review Questions

  1. D. The 4GL provides screen-authoring and report-writing utilities that automate database access. The 4GL tools do not create the business logic necessary for data transformation.

  2. C. Database normalization minimizes duplication of data through standardization of the database table layout. Increased speed is obtained by reducing the size of individual tables to allow a faster search.

  3. A. The inference engine uses rules, also known as heuristics, to sort through the knowledge base in search of possible answers. The meaning of information in the knowledge base can be recorded in objects and symbols known as semantic networks.

  4. C. The IT steering committee provides open communication of business objectives for IT to support. The steering committee builds awareness and facilitates user cooperation. Focus is placed on fulfillment of the business objectives.

  5. D. The Capability Maturity Model creates a baseline reference to chart current progress or regression. It provides a guideline for developing the maturity of systems and management procedures.

  6. D. Data mining uses rules to drill down through the data in the data warehouse for correlations. The results of data mining are stored in the data mart. The DSS presentation program may display data from the data mart in a graphical format.

  7. C. Data-oriented databases (DODBs) are designed for predictable data that has a consistent structure and a known or fixed length. Object-oriented databases (OODBs) are designed for data that has a variety of possible data formats.

  8. B. Referential integrity means a valid link exists between data in different tables. When you follow the link from one table, the linked field should contain the data you expect to find in the next table, such as "Samantha" in a first_name field rather than "1109 Main Ave." An error indicates a lack of integrity.

  9. A. Objects contain both methods and data to perform a desired task. The object can delegate to another object.

  10. D. Postimplementation review collects evidence to determine whether the organizational objectives have been fulfilled. The review would include verification that internal controls are present and in use.

  11. A. Each proposal must undergo an objective review to determine whether the offer is properly aligned with organizational objectives. RFP review is a formal process that should be managed as a project.

  12. C. Function Point Analysis (FPA) is used to estimate the effort required to develop software. FPA is used during SDLC phase 1, the Feasibility Study phase, to create estimates by multiplying the number of inputs and outputs against a mathematical factor.

  13. B. The purpose of regression testing is to ensure that a change does not create a new problem with other functions in the program. After a change is made, all of the validation tests are run from beginning to end to discover any conflicts or failures. Regression testing is part of the quality control process.

  14. C. Parallel migration increases support requirements but lowers the overall risk. The old and new systems are run in parallel to verify integrity while building user familiarity with the new system.

  15. D. Management oversight review is necessary when it is anticipated that the estimates are incorrect by more than 10 percent. Management oversight is also necessary if major changes occur in assumptions, requirements, or methodology used.

  16. A. The integrated development environment automates program code generation and provides online debugging for certain types of errors. It does not replace the traditional planning process. IDE does not alter the testing requirements in SDLC phase 4. Full testing must still occur.

  17. C. Certification is a technical testing process. Accreditation is a management process of granting approval based on fitness of use.

  18. D. The Agile method places greater reliance on the undocumented knowledge contained in a person's head. Agile is the direct opposite of capturing knowledge through project documentation.

  19. A. Evidence must support the stated objectives of the organization. Software that is built or purchased should be carefully researched to ensure that it fulfills the organization's objectives. Each phase of the life cycle should be reviewed and approved by management before progressing to the next phase.

  20. B. A flowchart is used to document internal program logic. An entity-relationship diagram (ERD) is used to help define the database schema. Function Point Analysis is used for estimation of work during the feasibility study.

  21. B. The ACID principle of database transaction refers to atomicity (all or nothing), consistency, isolation (transactions operate independently), and durability (data is maintained).

  22. C. Compiled software is unreadable by humans. Black-box testing is used to run a sample transaction through the system. The output is then compared against the original input and expected results to verify that it is correct and represents what the customer wanted from the system.

  23. A. Agile uses time-box management for rapid iterations of software prototypes by small teams of talented programmers. Agile does not force preplanning of requirements and relies on undocumented knowledge contained in someone's head, without complete documentation.

  24. B. Full accreditation lasts up to one year, after which management must reaccredit the system. Temporary or restricted accreditation is granted for only 90 or 180 days.

  25. D. The biggest concern would be a failure to meet the user requirements or user objectives. Cost overruns can occur. By comparison, the auditor's interest in why the overrun occurred would be less important.

  26. A. Software developers use pseudocoding to write programs into a project template within the integrated development environment (IDE). The IDE tool converts the template's pseudocode information into actual program code for almost any language, including C#, Java, Perl, and the Microsoft .NET framework.

  27. C. The client is entitled to the benefit of only using the software, not the right of ownership. Software escrow may be requested by the client to gain full rights to the software if the vendor goes out of business. This would damage the vendor's right to resell intellectual property rights to another vendor. Clients may gain ownership rights to software by paying the vendor for the total cost of development, not just the right to use it. Clients usually decline to pay development costs and will accept the risk of using someone else's software. For example, what would Microsoft charge for the full rights of ownership for Windows Vista? No client would pay it; it's cheaper to accept the risk.

  28. A. Quality is measured as conformance to specifications. Added costs for failing to meet the specification are known as the price of nonconformance, or the cost of failure. Costs of failure provide an excellent tool to justify funding of preventative controls.

  29. C. Crystal-box, also known as white-box testing, is used to review the logic in software written using programming script. The script is still readable by humans until the script is compiled. Compiled programs would be tested using a black-box method.

  30. D. Traditional project management is the only methodology that covers all seven phases of the SDLC. Agile is for phase 4 development. Rapid Application Development (RAD) and CASE tools apply only to portions of phase 2 requirements, phase 3 design, and phase 4 development. Everything else requires good old-fashioned project management.

  31. B. The review at the end of every SDLC phase is intended to prevent the project from proceeding unless it receives management's approval. The project can be approved, forced to fix existing problems, or killed. In each review, the decision is whether all specifications and objectives are being met or if the project should be cancelled.

  32. A. Fourth-generation (4GL) development tools create nice-looking screens and data buckets without the logical data transformation procedures needed in business applications. Skilled software developers are still needed to write the business logic into the program. Nearly 100 percent of the user-developed applications lack the necessary internal controls. In addition, the concept of users developing their own software creates excessive risk. It also violates separation of duties. Maybe the user wanting to use a 4GL should switch professions to become a professional programmer instead of the job they were hired to perform.

  33. D. All the decisions regarding purchasing existing software or building a custom application should be made by using data from the feasibility study and business specifications. More customization or the desire for competitive advantage indicate the need to build a custom application. Using the same software as your competitor converts your organization into a commodity pricing war and damages the business advantage of being different.

  34. C. Software certification testing begins during phase 4 Development and continues into phase 5 Implementation testing. Initial certification tests are run against the individual modules and internal controls as the program is developed. Phase 5 certification testing is expanded to cover the operation of the entire software application prior to entering production use.

  35. A. International standards such as ISO 15489 (records management), ISO 15504 (CMM/SPICE), and ISO 9126:2003 (software product quality) are best used as inputs for starting specifications in phase 2 requirements. These standards aid in planning the secondary software specifications. Primary specifications are obtained by collecting information from the user to define their main objectives for the software, detailing the steps in its intended mission.
