After completing this chapter, you will be able to:
Software is developed, maintained, and used by people in a wide variety of situations. Students create software in their classes, enthusiasts join open-source development teams, and professionals develop software for business fields as diverse as finance and aerospace. All of these groups must address quality problems that arise in the software they work with. This chapter provides definitions of key terminology, discusses the sources of software errors, and examines how the choice of software engineering practices depends on an organization's business sector.
Every profession has a body of knowledge made up of generally accepted principles. To obtain more specific knowledge of a profession, one must either (a) complete a recognized curriculum or (b) gain experience in the domain. For most software engineers, software quality knowledge and expertise are acquired hands-on in various organizations. The Guide to the Software Engineering Body of Knowledge (SWEBOK) [SWE 14] constitutes the first international consensus on the fundamental knowledge required of all software engineers. Chapter 10 of SWEBOK is dedicated to software quality (see Figure 1.1). Its first topic, labeled “fundamentals,” introduces the concepts and terminology that form the underlying basis for understanding the role and scope of software quality activities. The second topic covers the management processes and highlights the importance of software quality across the life cycle of a software project. The third topic presents practical considerations, discussing the various factors that influence the planning, management, and selection of software quality activities and techniques. Lastly, software quality-related tools are presented.
Before explaining the components of software quality assurance (SQA), it is important to consider the basic concepts of software quality. Once you have completed this section, you will be able to:
Intuitively, we see software simply as a set of instructions that make up a program. These instructions are also called the software's source code. A set of programs forms an application, or a software component of a system that also includes hardware components. An information system is the interaction between the software application and the information technology (IT) infrastructure of the organization. It is the information system, or the system itself (e.g., a digital camera), that clients use.
Is ensuring the quality of the source code sufficient for the client to be able to obtain a quality system? Of course not; a system is far more complex than a single program. Therefore, we must identify all components and their interactions to ensure that the information system is one of quality. An initial response to the challenge regarding software quality can be found in the following definition of the term “software.”
When we consider this definition, it is clear that the programs are only one part of a set of other products (also called intermediary products or software deliverables) and activities that are part of the software life cycle.
Let us look at each part of this definition of the term “software” in more detail:
Software found in embedded systems is sometimes called microcode or firmware. Firmware is present in commercial mass-market products and controls machines and devices used in our daily lives.
If you listen closely during various meetings with your colleagues, you will notice that there are many terms that are used to describe problems with a software-driven system. For example:
Do all of these terms refer to the same concept or to different concepts? It is important to use clear and precise terminology if we want to provide a specific meaning to each of these terms. Figure 1.2 describes how to use these terms correctly.
A failure (synonymous with a crash or breakdown) is the execution (or manifestation) of a fault in the operating environment. A failure is defined as the termination of the ability of a component to fully or partially perform a function it was designed to carry out. The origin of a failure is a hidden defect, that is, one not detected by tests or reviews, in the system currently in operation. As long as the system in production does not execute a faulty instruction or process faulty data, it will run normally. It is therefore possible for a system to contain defects that have not yet been executed. Defects (synonymous with faults) result from human errors that were not detected during software development, quality assurance (QA), or testing. An error can be found in the documentation, the software source code instructions, the logical execution of the code, or anywhere else in the life cycle of the system.
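The error–defect–failure chain can be illustrated with a small, entirely hypothetical sketch (the billing rule, function name, and values are invented for illustration): a developer's mistake leaves a faulty instruction in the code, and the defect stays dormant until the faulty branch is executed with the triggering input.

```python
# Hypothetical billing routine illustrating the error -> defect -> failure chain.
# The developer's mistake (an error) left a faulty instruction (a defect) in the
# code: the discount test uses ">" instead of ">=", so an order of exactly
# 100 units is overcharged. The defect stays dormant until that input occurs.

def invoice_total(units, unit_price=2.0):
    """Return the amount to bill; orders of 100 units or more get 10% off."""
    total = units * unit_price
    if units > 100:          # defect: the requirement says `units >= 100`
        total *= 0.90        # apply the volume discount
    return total

# Normal operation: these inputs happen not to expose the defect, so the
# system appears to run normally even though the defect is present.
print(invoice_total(50))     # 100.0 -- correct
print(invoice_total(150))    # discount correctly applied

# Failure: processing the boundary input finally manifests the defect.
print(invoice_total(100))    # 200.0, but the client expected 180.0
```

Note how the defect existed from the moment the code was written, yet no failure occurred until the boundary input was processed, which is exactly the distinction the definitions above draw.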
Figure 1.3 shows the relationship between errors, defects, and failures in the software life cycle. Errors may appear during the initial feasibility and planning stages of new software. These errors become defects when documents have been approved and the errors have gone unnoticed. Defects can be found in both intermediary products (such as requirements specifications and design) and the source code itself. Failures occur when an intermediary product or faulty software is used.
The three cases above correctly use the terms to describe software quality problems. They also identify issues that are investigated by researchers in the field of software quality in order to discover means to help eliminate these problems:
During software development, defects are constantly and involuntarily introduced, and they must be located and corrected as soon as possible. It is therefore useful to collect and analyze data on the defects found, as well as on the estimated number of defects left in the software. By doing so, we can improve the software engineering processes and, in turn, minimize the number of defects introduced in new versions of software products in the future.
Methods for classifying defects have been created for this purpose, one of which is explained in the chapter on verification and validation.
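As a minimal sketch of the kind of data collection described above (the phase names, defect identifiers, and counts are invented, not taken from any published classification scheme), defects can be logged with the life-cycle phase in which they were injected and then tallied:

```python
from collections import Counter

# Hypothetical defect log: each record notes the life-cycle phase in which
# the defect was injected. Phase names and ids are illustrative only.
defect_log = [
    {"id": 101, "injected_in": "requirements"},
    {"id": 102, "injected_in": "design"},
    {"id": 103, "injected_in": "coding"},
    {"id": 104, "injected_in": "requirements"},
    {"id": 105, "injected_in": "coding"},
    {"id": 106, "injected_in": "requirements"},
]

# Tally defects per injection phase; such a profile points at the process
# steps that inject the most defects and are therefore worth improving first.
by_phase = Counter(record["injected_in"] for record in defect_log)
for phase, count in by_phase.most_common():
    print(f"{phase:>12}: {count}")
```

In this invented sample, the tally would show requirements as the dominant injection phase, suggesting that review effort be concentrated there.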
Depending on the business model of your organization, you will have to allow for varying degrees of effort in identifying and correcting defects. Unfortunately, there exists today a certain culture of tolerance for software defects. However, there is no question that we all want Airbus, Boeing, Bombardier, and Embraer to have identified and corrected all the defects in the software for their airplanes before we board them!
Many researchers have studied the source of software errors and have published studies classifying software errors by type in order to evaluate the frequency of each type. Beizer (1990) [BEI 90] combined the results of several other studies to provide an indication of the origin of errors. The following is a summarized list of that study's results [BEI 90].
Researchers also try to determine how many errors can be expected in typical software. McConnell (2004) [MCC 04] suggested that this number varies with the quality and maturity of the software engineering processes as well as the training and competency of the developers. The more mature the processes, the fewer errors are introduced into the development life cycle of the software. Humphrey (2008) [HUM 08] also collected data from many developers. He found that a developer involuntarily creates about 100 defects for every 1000 lines of source code written. In addition, he noted large variations within a group of 800 experienced developers: from fewer than 50 to more than 250 defects injected per 1000 lines of code. At Rolls-Royce, the manufacturer of airplane engines, the published variation is from 0.5 to 18 defects per 1000 lines of source code [NOL 15]. The use of proven processes, competent and well-trained developers, and the reuse of already proven software components can considerably reduce the number of errors in a software product.
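The figures above are defect densities, that is, defects per 1000 lines of code (KLOC). As a small worked sketch (the project size and defect count below are invented for illustration):

```python
# Defect density expressed as defects per 1000 lines of code (KLOC).
def defects_per_kloc(defects, lines_of_code):
    return defects / (lines_of_code / 1000)

# Humphrey's average: a developer injects about 100 defects per 1000 lines.
print(defects_per_kloc(100, 1000))      # 100.0

# A hypothetical 50,000-line system with 250 recorded defects falls within
# the 0.5-18 defects/KLOC range published for Rolls-Royce [NOL 15].
print(defects_per_kloc(250, 50_000))    # 5.0
```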
McConnell also referenced other studies that have come to the following conclusions:
Errors are therefore the main cause of poor software quality. It is important to look for the causes of errors and identify ways to prevent them in the future. As shown in Figure 1.3, errors can be introduced at each step of the software life cycle: errors in the requirements, code, documentation, data, tests, etc. The causes are almost always human mistakes made by clients, analysts, designers, software engineers, testers, or users. SQA will need to develop a classification of the causes of software errors by category that can be used by everyone involved in the software engineering process.
For example, here are eight popular error-cause categories:
Each of the eight categories of error causes listed above is described in more detail in the following sections.
Defining software requirements is now considered a specialty, practiced by a business analyst or a software engineer who specializes in requirements. Requirements definition is the topic of interest groups as well as the subject of professional certification programs (see http://www.iiba.org).
A number of problems relate to writing requirements clearly, correctly, and concisely so that they can be converted into specifications that can be used directly by colleagues such as architects, designers, programmers, and testers.
It must also be understood that there are a certain number of activities that must be mastered when eliciting requirements:
It is clear that errors can arise when eliciting requirements. It can be difficult to cater to the wishes, expectations, and needs of many different user groups at the same time (see Figure 1.4). Therefore, it is important to pay particular attention to erroneous requirement definitions, the lack of definitions for critical obligations and software characteristics, the addition of unnecessary requirements (e.g., those not requested by the customer), the lack of attention to business priorities, and fuzzy requirements descriptions.
A requirement is said to be of good quality when it meets the following characteristics:
We will present techniques to help detect defects in requirements documentation in a later chapter concerning reviews.
We must also ensure that we are not looking for the Holy Grail of the perfect specification, since we do not always have the time, the means, or the budget to achieve this level of perfection.
The article by Ambler [AMB 04], entitled “Examining the Big Requirements Up Front Approach,” suggests that it is sometimes ineffective to write detailed requirements early in the life cycle of a software project. He claims that this traditional approach increases the risk of project failure. He notes that a large percentage of these specifications are never integrated into the final version of the software and that the corresponding documentation is rarely updated during the project. He thus asserts that this way of working is outdated. In his article, he recommends more recent agile techniques, such as Test-Driven Development, in order to produce a minimal amount of paper documentation.
We have observed that software analysts and designers also often use prototyping, which helps to partially eliminate the traditional requirements document and replace it with a set of user interfaces and test cases that describe the requirements, architecture, and software design to be developed. Prototypes prove useful for pinpointing what the client is envisioning and getting valuable feedback early in the project. In the next section, the development practices adopted by different business sectors will be discussed.
Errors can also occur in intermediary products due to involuntary misunderstandings between software personnel and clients and users from the outset of the software project. Software developers and software engineers must use simple, non-technical language and try to take into account the user's reality. They must be aware of all signs of lack of communication, on both sides. Examples of these situations are:
To minimize errors:
This situation occurs when the developer incorrectly interprets a requirement and develops the software based on his own understanding. This situation creates errors that unfortunately may only be caught later in the development cycle or during the use of the software.
Other types of deviations are:
Errors can be inserted in the software when designers (system and data architects) translate user requirements into technical specifications. The typical design errors are:
Many errors can occur in the construction of software. McConnell (2004) [MCC 04] devotes a substantial part of his book “Code Complete” to describing effective techniques for creating quality source code. He describes common programming errors and inefficiencies. According to McConnell, the typical programming errors are:
Some organizations have their own internal methodology and internal standards for developing/acquiring software. This internal methodology describes processes, procedures, steps, deliverables, templates, and standards (e.g., coding standard) that must be considered for software acquisition, development, maintenance, and operations. Of course, in a less mature organization, these processes/procedures will not be clearly defined.
We can therefore ask ourselves the following question: how can failing to follow the requirements of an internal methodology lead to defects in software? We must think in terms of the total life cycle of the software (e.g., many decades for subways and commercial airplanes), and not just its initial development. Clearly, someone who only writes code appears far more productive than someone who also develops intermediary products, such as requirements, test plans, and user documentation, as prescribed by the internal methodology of an organization. However, this immediate productivity is disadvantageous in the long run.
Undocumented software will give rise to the following problems sooner or later:
The purpose of software reviews and tests is to identify errors and defects and to verify that they have been eliminated from the software. If these activities are not effective, the software delivered to the client will likely be prone to failure.
All kinds of issues can crop up when reviewing and testing software:
It has been recognized that obsolete or incomplete documentation for software being used in an organization is a common problem. Few development teams enjoy spending time preparing and reviewing documentation.
We would be inclined to answer no to the question “does software wear out?” Indeed, the 0s and 1s found in memory do not wear out from use as hardware does. In addition to classifying types of errors, it is important to understand the typical reliability curve for software. Figure 1.5 describes the reliability curve for computer hardware as a function of time. This curve is called a U-shaped or bathtub curve. It represents the reliability of a piece of equipment, such as a car, throughout its life cycle.
With regard to software, the reliability curve resembles more of what is shown in Figure 1.6. This means that software deterioration occurs over time due to, among other things, numerous changes in requirements.
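The shape of this software reliability curve can be sketched numerically (all rates and constants below are illustrative, not measured data): between changes, the failure rate falls as defects are found and fixed, but each requirements change injects new defects and pushes the rate back up, so the software deteriorates over time instead of following hardware's bathtub curve.

```python
def failure_rate_series(initial_rate, months, change_months,
                        bump=0.4, decay=0.85):
    """Month-by-month failure rate (arbitrary units) of a software product.

    Between changes, defect fixing lowers the rate by a constant factor;
    each change injects new defects and raises the rate by `bump`.
    """
    rates, rate = [], initial_rate
    for month in range(months):
        if month in change_months:   # a requirements change injects defects
            rate += bump
        rates.append(round(rate, 3))
        rate *= decay                # fixes lower the rate between changes
    return rates

series = failure_rate_series(1.0, months=12, change_months={4, 8})
print(series)   # rate falls, jumps at months 4 and 8, then falls again
```

The printed series shows the sawtooth pattern of Figure 1.6: each change produces a spike in the failure rate, and the baseline never returns to its pre-change minimum before the next change arrives.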
In conclusion, we see that there are many sources of potential errors, and that without SQA, these defects may result in failures if not discovered.
The previous section, which presented the issues with identifying defects, has laid the groundwork for our next discussion, namely software quality. How do we define software quality? The standards groups suggest the following definition.
The second definition in the text box is very different, despite appearances. The first part of the definition comes from the perspective of Crosby, which reassures the software engineer with its strictness: “If I deliver all that is specified in the requirements document, then I will have delivered quality software.” However, the second part of the definition reflects the quality perspective of Juran, which specifies that one must satisfy the client's needs, wants, and expectations, which are not necessarily described in the requirements documentation!
These two points of view force the software engineer to establish an agreement that describes the client's requirements and attempts to faithfully reflect his needs, wants, and expectations. Of course, there are the functional characteristics that need to be described, but also implicit characteristics, which are expected of any professionally developed piece of software.
In this context, the software engineer can draw on the standards of his field, just as his colleagues in construction engineering or other engineering specialties do, in order to identify his obligations. Process conformance can be achieved and measured. As an example, Professor April published a measurement example in Ouanouki and April (2007) [OUA 07], in which the software testing process of the largest Canadian hardware retailer was assessed for Sarbanes-Oxley conformance.
Software quality is recognized differently depending on each perspective, including that of the clients, maintainers, and users. Sometimes, it is necessary to differentiate between the client, who is responsible for acquiring the software, and the users, who will ultimately use it.
Users seek, among other things, functionalities, performance, efficiency, accurate results, reliability, and usability. Clients typically focus more on costs and deadlines, with a view to the best solution at the best price. This can be considered an external point of view with regard to quality. To draw a parallel with the automobile industry, the user (driver) will go to the garage that provides him with fast service, quality, and a good price. He has a non-technical point of view.
As for software specialists, they focus more on meeting their obligations within the allocated budget. They therefore see their obligations from the point of view of meeting requirements and the terms and conditions of the agreement. The choice of the right tools and modern techniques is often at the heart of their concerns; this is an internal point of view, like that of a mechanic who is interested in engine technology and knows it in detail. To him or her, quality is equally a matter of the choice and assembly of components. We will consider these two points of view (external versus internal) when discussing software product quality models.
Therefore, quality software is software that meets the true needs of the stakeholders while respecting any predefined cost and time constraints.
The client's need for software (or more generally any kind of system) may be defined at four levels:
The ability of software to meet (or fail to meet) the needs of the client can be described in terms of the differences between these four levels. Throughout the development of a project, there will be factors that affect the final quality.
For each level, Table 1.1 describes the typical factors that can affect the satisfaction of the client requirements.
Table 1.1 Factors that can Affect Meeting the True Requirements of the Client [CEG 90] (© 1990 - ALSTOM Transport SA)
Type of requirement | Origin of the expression | Main causes of difference |
True | Mind of the stakeholders | |
Expressed | User requirements | |
Specified | Software Specification Document | |
Achieved | Documents and Product Code | |
This section presents a definition of SQA. This section also aims to describe the objectives of SQA. In order to put these definitions into perspective, here is a reminder of the general definition of software engineering:
To be a recognized profession, software engineering must have its own body of knowledge for which there is consensus. As with most other engineering fields, recognized knowledge, methods, and standards must be used for the development, maintenance/evolution, and infrastructure/operation of software. The body of knowledge for software engineering is published in the SWEBOK guide (www.swebok.org). An entire chapter is dedicated to SQA.
The term “software quality assurance” can be a bit misleading. The implementation of software engineering practices can only “assure”, not guarantee, the quality of a project, since the term “assurance” refers to “grounds for justified confidence that a claim has been or will be achieved.” In fact, QA is implemented to reduce the risk of developing software that does not meet the wants, needs, and expectations of stakeholders within budget and schedule.
This perspective of QA, in terms of software development, involves the following elements:
In addition to software development, SQA can also focus on the maintenance/evolution and infrastructure/operations of software. A typical quality system should include all software processes from the most general (such as governance) to the most technical (e.g., data replication). QA is described in standards such as ISO 12207 [ISO 17], IEEE 730 [IEE 14], ISO 9001 [ISO 15], and exemplary practices models, such as CobiT [COB 12] and the Capability Maturity Model Integration (CMMI) models that will be presented in a later chapter.
In this section, Iberle (2002) [IBE 02], a senior test engineer at Hewlett-Packard, describes her experience in two business sectors of the same company: cardiology products and printers. Different business models are then described to help us understand the risks and the respective needs of each business sector with regards to software practices. These business models will be used in the following chapters to help choose or adapt software practices according to the context of a specific project or application domain.
Knowledge of the business models and organizational culture will help the reader to [IBE 02]:
This section concludes with a brief discussion of exemplary software practices.
Medical products belong to a field known for its very high quality standards. During a mandate in the cardiology products sector, Iberle (2002) [IBE 02] used a large number of traditional practices described in software engineering manuals, for example: detailed written specifications, intensive use of inspections and reviews throughout the life cycle, and exhaustive tests for requirements. Exit criteria were created at the beginning of the project and a product could not be shipped as long as the exit criteria were not met.
In this field, a project end date can be missed by weeks or even months. These delays are acceptable in order to fix any last-minute problems found using a long checklist. It was far from painless: Iberle (2002) [IBE 02] explains that she worked many extra hours to try to stay on schedule (and not exceed the deadline by too much). There were heated debates as to whether a specific defect should be qualified as severe (level 1 severity) or average (levels 2–5 severity). However, in the end, quality always won out over the schedule.
After 8 years of working on medical products, Iberle (2002) [IBE 02] was assigned to the business sector that produced printers and served small businesses and consumers. Practices in this sector of the company were very different. For example, specifications were far shorter and project exit criteria significantly less formal, but making the delivery date was very important. While working in testing, Iberle noticed differences in test practices. The main test effort was not focused on tests derived from specifications. Testers were not trying to cover all possible input combinations, and there was far less test documentation; in fact, some testers had no test procedures at all. This was a huge culture shock. At first, Iberle would walk around shaking her head and grumbling, “These people don't care about quality!” After a while, she began to see that her definition of quality was different and was based on her experience in another field. It was time for her to revisit her beliefs about software quality.
When Iberle (2003) [IBE 03] worked on defibrillators and cardiographs, missing a delivery date was not the worst thing that could happen. What really scared the team was the possibility that a device could kill a patient or technician through electrical shock, lead someone to a wrong diagnosis, or be unusable in an emergency situation. If the team raised the possibility of such a failure, the delivery date was postponed, without any discussion whatsoever. Lengthy and costly efforts to find and definitively eliminate the cause of a defect were systematically approved. Obviously, for an organization in this business sector, the fear of legal liability or of being blamed by the American Food and Drug Administration contributed to these decisions. Delivery dates could be changed and production completed with overtime.
In the consumer products division, the reality was quite different. The potential for injury was very low, even under the worst conditions imaginable. The real concerns were missing schedules and exceeding costs. When software has to be packaged in hundreds of thousands of boxes, and these boxes must reach resellers in time for the day of a major sale, there is not much room to “play catch up.” Another fear was having thousands of users unable to install their new printer and calling customer support lines the day after Christmas. Incompatibility with the most popular software and hardware was another source of concern.
So these two business divisions had different definitions of “quality.” Clients valued different things: clients from the medical sector favored accuracy and reliability above all, whereas printer customers looked for user-friendliness and compatibility far more than reliability. Of course, everyone wants reliability. However, whether they are aware of it or not, people value reliability as a function of the pain that certain problems may cause them. People are not happy when they have to restart their computer from time to time, but their misfortune is nothing in comparison with the anguish of a patient faced with a functional problem with a heart defibrillator. When someone goes into fibrillation, there is a 5–6 minute window for saving the patient. So there is no time to lose with equipment problems.
The definition of “reliability” is therefore also very different in these two business sectors. Once it was understood that no one would die from a printer software error, the team examined the software practices of the medical products division to determine whether they were also useful in the printer sector [IBE 03]. It took Iberle several months to realize that what seemed shoddy in the printer sector was in fact a way of dealing with different priorities, which did not carry the same weight as they did for medical products.
As expected, people from both business sectors chose software engineering practices that would lower the probability of their worst fears. Since their apprehensions are different, their practices are also different. In fact, in light of their fears, the choice of practices starts to make sense. The fear of a false diagnosis leads to many detailed reviews and various types of tests. However, the fear of confusing printer users results in more usability tests.
It is not surprising to see that people who work in the same business sectors have similar concerns and use similar practices. Certain concerns can also be found in other organizations. For example, the aerospace sector and medical sector are very closely related. It is also possible for the same organization to have different fears and values in different business sectors, as Iberle (2003) [IBE 03] described above of her employment at Hewlett-Packard.
Software organizations or software specialists are divided into groups that appreciate similar things or share the same concerns, based on similarities in client and business community expectations. These cultures are called “practice groups,” that is, software development groups, which share common definitions of quality and tend to use similar practices.
The following models were developed by Iberle to better understand the need for QA in different business sectors, given that the way in which money flows through an organization (e.g., contract income, cost of products delivered, and losses) and how profits are generated affect the choice of the software practices used to develop products for an organization. The five main business models in the software industry are [IBE 03]:
Each business model has a set of attributes or factors that are specific to it. Here is a list of situational factors that seem to influence the choice of software engineering practices in general [IBE 03]:
- Concurrent developer–developer communication: Communication with other people on the same project is affected by the way in which the work is distributed. In certain organizations, senior engineers design the software and junior staff carries out the coding and unit tests (instead of having the same person carrying out the design, coding, and unit tests for a given component). This practice increases the quantity of communications between developers.
- Developer–maintainer communication: Maintenance and enhancements require communication with the developers. Communication with developers is greatly facilitated when they work in the same area.
- Communication between managers and developers: Progress reports must be sent to upper management. However, the quantity of information and form of communication that managers believe they need may vary substantially.
- Control culture: Control cultures, such as those of IBM and GE, are motivated by the need for power and security.
- Skill culture: A culture of skill is defined by the need to make full use of one's skills: Microsoft is a good example.
- Collaborative culture: A collaborative culture, as illustrated by Hewlett-Packard, is motivated by a need to belong.
- Thriving culture: A thriving culture is motivated by self-actualization, and can be seen in start-up organizations.
This section goes into more detail about each of the five main business models. A single business model, contract-based development for made-to-measure systems, is described as an in-depth case study. For this business model, we describe the following four perspectives:
For the other four business models, we will only consider the context and concerns.
In a fixed-price contract, Iberle (2003) [IBE 03] indicated that the client specifies exactly what he wants and promises the supplier a given sum of money. The profits made by the supplier depend on his ability to remain within budget and to deliver, on schedule, a product that performs as defined in the contract. Large-scale applications and military software are often written under contract. The software produced in this business culture is often critical software. The cost of distributing fixes after delivery is manageable because the corrections are provided to a known and accessible environment, and to a reasonable number of sites.
Following is the list of dominating factors in this business model [IBE 03]:
The concerns of the developers of these systems are often:
These situational factors lead to certain assumptions regarding this business model:
In the text above, we presented three perspectives: context; situational factors; and concerns about the first business model.
In the next few paragraphs, we present the prevailing practices used with the business model of this case study.
These practices are taken from [IBE 03]:
Documentation is a valuable way of communicating when the project is large and external suppliers are involved. Written documentation is often far more effective than discussions around the water cooler when the communication channels are complex, which occurs when people are geographically remote and belong to different organizations. In addition, certain documents are often necessary to prove that we are doing what was set out in the contract. Lastly, for the requirements to be known in detail at the start of the project, documentation and many reviews of the requirements are necessary before responding to the call for tenders.
Lists of exemplary practices, such as CobiT [COB 12] and the CMMI model developed by the Software Engineering Institute, are used to develop contractual clauses. For example, in this business model, the focus is on project estimation and management in order to be on schedule and within budget as stipulated in the contract, and regular progress reports are necessary.
The waterfall development life cycle was introduced in the 1950s to give large IT projects enough structure to plan for on-time delivery. The newer iterative and agile life cycles plan development in smaller increments, which preserves the ability to plan while offering more flexibility as to delivery. However, as has been observed, waterfall development is often still the preferred method in this business model.
Audits are often specified in the contract for this business model. The audit is used to prove to the client, or during legal proceedings, that the contractual clauses, such as respecting schedules, quality, and functions, have been fulfilled.
We have now described the four perspectives for this business model: the context; situational factors; concerns; and predominant practices. In the next section, we present, as described by Iberle (2003) [IBE 03], only the context and concerns regarding the other four business models.
When an organization uses its own employees to develop software, the economics differ from those of contract-based development. The value of the work lies in improving the efficacy or efficiency of operations within the organization. Less focus is put on meeting schedules, since projects are often suspended or postponed depending on the budget. The systems can be critical for the organization or of an experimental nature. Fixes are distributed to a limited number of sites.
Developers of these systems often are concerned with the following:
Commercial software is software sold to other organizations rather than to an individual consumer. Profits depend on the familiar economic model, which involves selling many copies of the same piece of software for more than the cost of developing and making the copies. Instead of meeting the specific needs of a single client, the developer aims to satisfy many clients. The software is often critical for the organization or at least very important for the client's organizational operations. Since the software is in the hands of many clients in many places, the distribution of corrections can be very costly. These clients also tend to instigate legal proceedings if the software is deficient, which increases the cost of errors.
Business system vendors are generally fearful of:
This software is sold to individual consumers, often in very high volume. Profits are made by selling products at higher than development cost, often in a niche market or at certain times of the year, such as at Christmas. The potential effects of software failures on the client are generally less serious than in the previous models, and clients are less likely to demand reparation for any damage incurred. The failure of certain software may considerably affect the user's well-being, as in the case of tax preparation software. For most software, however, a failure is simply a source of frustration.
The typical concerns in this culture are:
For mass-market product manufacturers, the cost of fixing errors can be significantly reduced when owners can update the products themselves. Unfortunately, customers are then left to find and install these updates on their own.
Given that profits depend on selling the product for more than its manufacturing cost, the cost of distributing fixes is extremely high, since electronic circuits must often be changed on site; corrections cannot simply be sent to the client. The impact of downtime with mass-market embedded software is potentially more serious than that of other software failures, since the software controls a device. Although the destructive potential of small objects, such as digital watches, is low, in certain cases software failures can have fatal consequences.
The typical concerns of this culture are:
Implementing practices to improve software quality can be facilitated or hindered by factors inherent to the organization. The following text boxes list some of these factors.
Describe the difference between a defect, an error, and a failure.
According to the studies of Boris Beizer, when in the software development life cycle do the greatest number of software errors occur?
Describe the difference between the software and hardware reliability curves.
List the eight categories of causes of errors that describe the development and maintenance environment as experienced in organizations.
Describe the different perspectives of software quality from the point of view of the client, the user, and the software engineer.
Describe the types of needs, their origin, and the causes for differences that may be due to a discrepancy between the needs expressed by the client and those carried out by the software engineer.
Describe the concept of business models and how different business models create different perspectives on SQA requirements.
Describe the main differences between QA and quality control.